2 The Open Data Network: Achieving the Vision of an Integrated National Information Infrastructure

The committee's vision for a national information infrastructure (NII) is that of an Open Data Network, the ODN. Having a vision of future networking, however, is not at all the same thing as bringing it to fruition. As a contribution to the ongoing debate concerning the objectives and characteristics of the NII, the committee details in this chapter how its ODN architecture can enable the realization of an NII with broad utility and societal benefit, and it discusses the key actions that must be taken to realize these benefits.

An open network is one that is capable of carrying information services of all kinds, from suppliers of all kinds, to customers of all kinds, across network service providers of all kinds, in a seamless, accessible fashion. The telephone system is an example of an open network, and it is clear to most people that this kind of system is vastly more useful than one in which the users are partitioned into closed groups based, for example, on the service provider or the user's employer. The implications of an open network1 are that a certain minimum level of physical infrastructure with certain capabilities must be provided, an agreement must be forged on a set of objectives for services and interoperation, relevant standards must be set, research on enabling technology must be continued, and oversight and management must be supplied. The role government should take here is critical.

The committee's advocacy of an Open Data Network is based in part on its experience with the enormously successful Internet experiment, whose basic structure enables many of the capabilities needed in a truly national information infrastructure. The Internet's defining protocols, TCP and IP, are not proprietary; they are open standards that can be
implemented by anyone. Further, these protocols are not targeted to the support of one particular application, but are designed instead to support as broad a range of services as possible. Moreover, the Internet attempts to provide open access to users. In the long term, users and networks connected to such a universal network benefit from its openness. But in data networking today, this vision of open networking is not yet universally accepted. Most corporate data networking currently uses closed networks, and most information and entertainment networks targeted to consumers are closed, by this committee's definition.

THE OPEN DATA NETWORK

Criteria for an Open Data Network

The Open Data Network envisioned by the committee meets a number of criteria:

• Open to users: It does not force users into closed groups or deny access to any sectors of society, but permits universal connectivity, as does the telephone system.

• Open to service providers: It provides an open and accessible environment for competing commercial or intellectual interests. For example, it does not preclude competitive access for information providers.

• Open to network providers: It makes it possible for any network provider to meet the necessary requirements to attach and become a part of the aggregate of interconnected networks.

• Open to change: It permits the introduction of new applications and services over time. It is not limited to only one application, such as TV distribution. It also permits the introduction of new transmission, switching, and control technologies as these become available in the future.

Technical, Operational, and Organizational Objectives

The criteria for an Open Data Network imply the following set of very challenging technical and organizational objectives:

• Technology independence.
The definition of the ODN must not bind the implementors to any particular choice of network technology. The ODN must be defined in terms of the services that it offers, not the way in which these services are realized. This more abstract definition will permit the ODN to survive the development of new technology options, as will certainly happen during its lifetime if it is successful. The ODN
should be defined in such a way that it can be realized over the technology of both the telephone and cable industries, over wire and wireless media, and over local and long-distance network technology.

• Scalability. If the ODN is to be universal, it must scale to global proportions. This objective has implications for basic features, such as addressing and switching, and for operational issues such as network management. New or emerging modalities, such as mobile computers and wireless networking today, must be accommodated by the network. If the ODN is to provide attachment to all users, it must physically reach homes as well as businesses. This capability implies an upgrade to the "last mile" of the network, the part that actually enters the home (or business). Further, the number of computers (or, more generally, networked devices) per person can be expected to increase radically. The network must be able to expand in scale to accommodate all these trends.

• Decentralized operation. If the network is composed of many different regions operated by different providers, then control, management, operation, monitoring, measurement, maintenance, and so on must necessarily be very decentralized. This decentralization implies a need for a framework for interaction among the parts, a framework that is robust and that supports cooperation among mutually suspicious providers. Decentralization can be seen as an aspect of large scale, and indeed a large system must be decentralized to some extent. But the implications of highly decentralized operation are important enough to be noted separately, as decentralization affects a number of points in this chapter.

• Appropriate architecture and supporting standards.
Since parts of the network will be built by different, perhaps competing, organizations, there must be carefully crafted interface definitions for the parts of the network, to ensure that the parts actually interwork when they are installed. Since the network must evolve to support new services over time, there is an additional requirement that implementors engineer to accommodate change and evolution. These features may add network costs that are inconsistent with short-term profitability goals.

• Security. Poor security is the enemy of open networking. Without adequate protection from malicious attackers and the ability to screen out annoying services, users will not take the risk of attaching to an open network, but will instead opt, if they network at all, for attachment to a restricted community of users connected for a specific purpose, i.e., a closed user group. This version of closed networking sacrifices a broad range of capabilities in exchange for a more reliable, secure, and available environment.

• Flexibility in providing network services. If a network's low-level technology is designed to support only one application, such as broadcast TV or telephony, it will render inefficient or even prohibit the use of new services that have different requirements, although it may support the target application very efficiently. Having a flexible low-level service is key to providing open services and to ensuring the capacity to evolve. For example, the emerging broadband integrated services digital network (B-ISDN) standards, based on asynchronous transfer mode (ATM), attempt to provide a more general capability for the telephone system than is provided by the current generation of technology, which was designed specifically for transport of voice. The Internet protocol, IP, is another example of a protocol that provides a flexible basis for building many higher-level services, in this case without binding choices to a particular network technology; the importance of this independence is noted above.

• Accommodation of heterogeneity. If the ODN is to be universal, it must interwork with a large variety of network and end-node devices. There is a wide range of network infrastructure: local area and wide area technology, wireline and wireless, fast and slow. Perhaps more importantly, there will also be a range of end-node devices, from powerful computers to personal digital assistants to intelligent devices such as thermostats and televisions that do not resemble computers at all. The ODN must interwork with all of them, an objective that requires adaptability in the protocols and interface definitions and has implications for the way information is represented in the network.2

• Facilitation of accounting and cost recovery. The committee envisions the ODN as a framework in which competitive providers contribute various component services. Thus it must be possible to account for and cover the costs of operating each component.
Because the resulting pricing will be determined by market forces, fulfilling the objective of universal access may require subsidy policies, a point discussed in Chapter 5.

Benefits of an Open Data Network

Comparing the success of the open Internet to the limited impact of various closed, proprietary network architectures that have emerged in the past 20 years (systems that eventually either disappeared or had to be adjusted to allow open access) suggests that the wisdom of seeking open networks is irrefutable.3 Many of the proprietary networks that have played to captive, vendor-specific audiences for years are now rapidly losing ground as users demand and achieve the ability to interoperate in a world of heterogeneous equipment, services, and network operating systems. On the other hand, the Internet, and those networks that have "opened up," are enjoying phenomenal growth in membership.
It is important to note that achieving an open network does not preclude the existence of closed networks and user groups. First, there will always be providers (such as current cable TV providers) that choose to develop closed networks for a variety of reasons, such as control of revenues, support of closed sets of users, and mission-critical applications. It is unrealistic to believe that such an approach either can or should be controlled. For this reason, it will be necessary to provide some level of interoperation with proprietary protocols, with new versions of protocols, and with networks that do emerge to deal with special contingencies or with special services. Second, closed user groups will always exist, for reasons of convenience and security. The Open Data Network can be configured to allow closed groups to use its facilities to construct a private network on top of the ODN resources. (See, for example, the discussion below under "Security," which presents approaches, such as the use of security firewalls, to providing a restricted, secure environment.)

OPEN DATA NETWORK ARCHITECTURE

To realize the vision of an integrated NII, it is necessary to create an appropriate network architecture, that is, a set of specifications or a framework that will guide the detailed design of the infrastructure. Without such a framework, the pieces of the emerging communications infrastructure may not fit together to meet any larger vision, and may in fact not fit together at all. The architecture the committee proposes is inspired in part by examining the Internet and identifying those of its attributes that have led to its success. However, some important departures from the Internet architecture must be included to enable an evolution to the much larger vision of an open NII.
An Architectural Proposal in Four Layers

Described below is a four-layer architecture for the Open Data Network.4 The four layers provide a conceptual model for facilitating the discussion of the various categories of services and capabilities encompassed by the ODN.5 The layers are the bearer service, transport, middleware, and the applications.6

1. At the lowest level of the ODN architecture is an abstract bit-level transport service that the committee calls the bearer service of the ODN. Its essence is that it implements a specified range of qualities of service (QOS) to support the higher-level services envisioned for the ODN. At this level, bits are bits, and nothing more; that is, their role in exchanging information between applications is not visible. However, it should be
stressed that there can be more than one quality of service; the differences among these are manifested in the reliability, timeliness, correctness, and bandwidth of the delivery. Having multiple QOS will permit an application with a particular service requirement to make a suitable selection from among the QOS provided by the bearer service.7

The bearer service of the ODN sits on top of the network technology substrate, a term used to indicate the range of technologies that realize the raw bit-carrying fabric of the infrastructure. Included in this set are the communication links (copper, microwave, fiber, wireless, and so on) and the communication switches (packet switches, ATM switches, circuit switches, store-and-forward switches, and optical wavelength-division multiplexers, among others). This set also includes the functions of switching, routing, network management and monitoring, and possibly other mechanisms needed to ensure that bits are delivered with the desired QOS. The Open Data Network must be seen not as a single, monolithic technology, but rather as a set of interconnected technologies, perhaps with very different characteristics, that nonetheless permit interchange of information and services across this set.

2. At the next level, the transport layer, are the enhancements that transform the basic bearer service into the range of end-to-end delivery services needed by the applications. Service features typically found at the transport layer include reliable, sequenced delivery; flow control; and end-point connection establishment.8 In this organization of the levels, the transport layer also includes the conventions for the format of data being transported across the network.9 The bit streams are differentiated into identifiable traffic types such as voice, video, text, fax, graphics, and images.
The common element among these different types of traffic is that they are all digital streams and are therefore capable of being carried on digital networks. Currently, much of the work in the commercial sector is aimed at defining these sorts of format standards, mostly driven by workstation and PC applications.

The distinction between the bearer service and the transport layer above it is that the bearer service defines those features that must be implemented inside the network, in the switches and routers, while the transport layer defines services that can be realized either in the network or in the end node. For example, bounds on delay must be realized inside the network by controls on queues; delay, once introduced, cannot be removed at the destination. On the other hand, reliable delivery is normally viewed as a transport-layer feature, since the loss of a packet inside the network can be detected and corrected by cooperating end nodes. Since transport services can be implemented in the end node if they are not provided inside the network, they do not have to be mandated as a core part of the bearer service. This suggests that they should be separated into a distinct transport layer, since it is valuable, as the committee discusses, to minimize the number of functions defined in the bearer service. While the service enhancements provided by the transport layer are very important, this report does not elaborate further on this layer (as it does below for the other three layers), since these services are a well-understood and mature aspect of networking today.

3. The third layer, middleware, is composed of higher-level functions that are used in common among a set of applications. These functions, which form a toolkit for application implementors, permit applications to be constructed more as sets of building blocks than as vertically integrated monoliths. These middleware functions distinguish an information infrastructure from a network providing bit-level transport. Examples of these functions include file system support, privacy protection, authentication and other security functions, tools for coordinating multisite applications, remote computer access services, storage repositories, name servers, network directory services, and directory services of other types.

A subset of these functions, such as naming, will best be implemented in a single, uniform manner across all parts of the ODN. There is a need for one or more global naming schemes in the ODN. For example, there may have to be a low-level name space for naming network end nodes and a higher-level name space for naming users and services. These name spaces, since they are global, cannot be tied to a particular network technology choice but must be part of the technology-independent layers of the architecture. These somewhat more general service issues will benefit from a broad architectural perspective, which governmental involvement could sustain.

4.
The uppermost layer is where the applications recognized by typical users reside, for example, electronic mail (e-mail), airline reservation systems, systems for processing credit card authorizations, or interactive education. It is at this level that all the user applications that will run on the ODN must be developed. The benefit of the common services and interfaces of the middleware layer is that applications can be constructed in a more modular manner, which should permit additional applications to be composed from these modules. Such modularity should provide the benefit of greater flexibility for the user, and lower development cost and risk for the application implementors. The complexity of application software development is a major issue in advancing the ODN, and any approach to reducing the magnitude and risk of application development is important for the NII.

As a wider range of services is provided over the network, it will be important that users see a uniform interface, to reduce the learning necessary to use a new service. A set of common principles is needed for the
construction of user interfaces, a framework within which a new network service can present a familiar appearance, just as the Macintosh or the X Window interface attempts to give the user a uniform context for applications within the host. A generation of computer-literate users should be able to explore a new network application as confidently as they use a new television remote control today. Much effort is under way in the commercial sector to identify and develop approaches in this area, and it will be necessary to wait and see whether market forces can produce a successful set of options in this case.

A critical feature of the ODN architecture is openness to change. Since the committee sees a continuous evolution in network technology, in end-node function, and, most importantly, in user-visible services, the network standards at all the levels must be evolvable. This requires an overall architectural view that fosters incremental evolution and permits staged migration of users to new paradigms. There must be an agreed-upon expectation about evolution and change that links the makers of standards and the developers of products. This expectation defines the level of effort that will be required to track changing requirements, and it permits the maintainers to allocate the needed resources.

The need for responsiveness to change can represent a formidable barrier, since the status quo becomes embedded in user equipment as well as in the network. If standards are not devised to permit graceful and incremental upgrade, as well as backward compatibility, the network is likely either to freeze in some state of evolution or to proceed in a series of potentially disruptive upheavals. The Internet is currently planning for a major change, the replacement of its central protocol, IP.
This change will require replacing software in every packet switch in the Internet and in the millions of attached hosts. To avoid the disruption that might otherwise occur, the transition will probably be accomplished over several years. In a similar spirit, the telephone industry has made major improvements to its infrastructure and standards (such as changes to its numbering plan) in a very incremental and coordinated manner. Part of what has been learned in these processes is the wisdom of planning for change.

The committee thus concludes that the definition of an ODN architecture must proceed at many levels. At each of these levels technical decisions will have to be made, and in some cases making the correct decision will be critical to the ODN's success in enabling a true NII. The committee recognizes that if the government attempts to set standards and conventions directly, there is a risk of misguided decision making.10 There is also concern that if the decisions are left wholly to the marketplace, certain key decisions may not be made in a sufficiently timely and coherent manner. Some overarching decisions, such as the specification
of the bearer service discussed in the next section, must of necessity be made early in the development process, since they will shape the deployment of so much of the network.11 Therefore, both the nature of the architecture's layers and the decision process guiding their implementation are important. In the interests of flexibility, the committee emphasizes the architecture, services, and access interfaces composing an ODN. It describes the characteristics the infrastructure technology should have, leaving to the engineers and providers how those characteristics should be realized.

The Centrality of the Bearer Service

The nature of the bearer service plays a key role in defining the ODN architecture. Its existence as a separate layer (the abstract bit-level network service) provides a critical separation between the actual network technology and the higher-level services that actually serve the user. One way of visualizing the layer modularity is to see the layer stack as an hourglass, with the bearer service at the narrow waist (Figure 2.1). Above the waist, the glass broadens out to include a range of options for transport, middleware, and applications. Below the waist, the glass broadens out to include the range of network technology substrate options. Imposing this narrow point in the protocol stack isolates the application builder from the range of underlying network facilities, and the technology builder from the range of applications. In the Internet protocols, the IP protocol itself sits at this waist in the hourglass. Above IP are options for transport (TCP, UDP, or other specialized protocols); below are all the technologies over which IP can run. The benefit of this architecture is that it forces a distinction between the low-level bearer service and the higher-level services and applications.
The network provider that implements the basic bearer service is thus not concerned with the standards in use at the higher levels. This separation of the basic bearer service from the higher-level conventions is one of the tools that ensures an open network; it precludes, for example, a network provider from insisting that only a controlled set of higher-level standards be used on the network, a requirement that would inhibit the development and use of new services and might be used as a tool to limit competition. This partitioning of function is not meant to imply that one entity cannot be a provider of both low- and higher-level services. What is critical is that open interfaces exist to permit fair and open competition at the various layers. The committee notes that along with this open competitive environment comes the implication that the low-level and high-level services should be unbundled.
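The hourglass discipline sketched above can be illustrated in a few lines of code. The sketch below is purely illustrative and not part of any ODN or Internet standard; the class names (Substrate, BearerService, and the toy substrate implementations) are inventions for this example. The point it demonstrates is only that everything above the waist is written against one narrow, best-effort interface, while the technologies below the waist can vary freely.

```python
from abc import ABC, abstractmethod

class Substrate(ABC):
    """A network technology below the hourglass waist (e.g., Ethernet, ATM)."""
    @abstractmethod
    def transmit(self, payload: bytes) -> bytes:
        ...

class EthernetLike(Substrate):
    """A reliable wire: delivers the payload intact in this toy model."""
    def transmit(self, payload: bytes) -> bytes:
        return payload

class LossyWirelessLike(Substrate):
    """A lossy link: truncates the payload to model loss or corruption."""
    def transmit(self, payload: bytes) -> bytes:
        return payload[:-1]

class BearerService:
    """The waist of the hourglass: best-effort delivery of addressed bits.
    It neither knows nor cares what the bits mean ("bits are bits")."""
    def __init__(self, substrate: Substrate):
        self.substrate = substrate

    def deliver(self, dest_address: str, payload: bytes) -> bytes:
        # Best effort only: no retransmission and no ordering guarantees.
        # Recovering from loss is left to transport-layer code in end nodes.
        return self.substrate.transmit(payload)

# Anything written against BearerService runs unchanged over either
# substrate; swapping technologies requires no change above the waist.
for substrate in (EthernetLike(), LossyWirelessLike()):
    bearer = BearerService(substrate)
    bearer.deliver("node-17", b"bits are bits")
```

Note that the bearer service here deliberately exposes only two things, an address and a best-effort delivery of bits, mirroring the minimal service features of IP discussed later in this chapter.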
Even if the providers of the bearer service are indifferent to the higher-level standards, those standards should be specified and promulgated. If there are accepted agreements among software developers as to the standards and services at the middleware and application levels, users can benefit greatly by obtaining software that permits them to participate in the services of the network.

Characterizing the Bearer Service

If the ODN is indeed to provide an open and accessible environment for higher-level service providers, then there must be some detailed understanding of the characteristics of this underlying bearer service, so that higher-level service providers can build on that foundation. A precedent can be seen in the telephone system's well-developed service model, which was initially developed to carry a voice conversation. Other applications, such as data modems and fax, have been developed over this service precisely because the basic "bearer service," the service at the waist of the hourglass, was well defined.

In the case of data networks, defining the base characteristics of the underlying service is not easy. Applications vary widely in their basic requirements (e.g., real-time video contrasted with e-mail text). Current data network technologies vary widely in their basic performance, from modem links (at 19,200, 14,400, 9,600, or fewer bits per second) to local area networks such as 10-Mbps Ethernet and 100-Mbps FDDI. A number of conclusions can be drawn about the definition of the bearer service:

• The bearer services are not part of the ODN unless they can be priced separately from the higher-level services, since the goal of an open and accessible network environment implies that (at least in principle) higher-level services can be implemented by providers different from the provider of the bearer service.
As long as two different providers are involved in the complete service offering, the price of each part must be distinguished. The definition of the bearer services must not preclude effective pricing for the service. The committee recognizes that such pricing elements have been bundled together and that this history will complicate a shift to the regime advanced here in the interest of a free market for entry at various levels.12

• We must resist the temptation to define the bearer service using simplistic measures such as raw bandwidth alone. We must instead look for measures that directly relate to the ability of the facilities to support the higher-level services, measures that specify QOS parameters such as bandwidth, delay, and loss characteristics.
FIGURE 2.1 A four-layer model for the Open Data Network.
• The bearer service must be defined in an abstract way that decouples the service characteristics from any specific technology choice. Network technology will continue to evolve, and the bearer service must provide the stable service interface that permits new technology to be incorporated into the ODN.

This last point deserves further emphasis. It attempts to capture for the ODN one of the key strengths of the Internet, which is that the protocol suite was defined in such a way that it could be realized across a wide range of network technologies with different speeds, capabilities, and characteristics. The analog of the bearer service in the Internet suite is the IP protocol itself; this protocol is deliberately defined in a manner that is as independent from any specific technology as could be accomplished. This technology independence is achieved by including in the IP protocol only those features that are critical to defining the service as needed by the higher levels, and leaving all other details for definition by whatever lower-level technology is used to realize IP in a particular situation.13 IP is thus a very minimal protocol, with essentially two service features:14

• The source and destination end-node addresses carried in the packet, and
• The behavior that the user may assume of the packet delivery service, namely, the "best-effort" delivery service discussed below in "Quality of Service."

The IP protocol's decoupling from specific technologies is one of the keys to the success of the Internet, and this lesson should not be lost in designing the ODN. There are several benefits to this independence of the bearer service from the technology. First, competition at the technology level will be greater the less the service definition constrains innovation. This competition can be expected to lead to reduced cost and increased function.
Second, the standards of the ODN will be widely deployed and will remain in effect for a long time; if successful, they must outlive any particular technology over which they are implemented. It is thus critical that every effort be expended to ensure that the bearer service is not technology dependent, but is defined instead in an abstract way that permits technology evolution and service evolution to proceed in parallel. The ODN can thereby be enabled by managing change at a certain level in the protocol hierarchy; it is possible to create a platform (like IP in the Internet) that allows application vendors to operate without regard to changes in the implementation of transmission lines, switches, and so on, although the platform defines general requirements that designers
of transmission lines, switches, and so on should meet. Box 2.1 discusses a specific issue in this context: the relationship between the bearer service and a very important emerging technology standard, asynchronous transfer mode.

Middleware: A New Set of Network Services

Above the bearer service and the transport level (layers 1 and 2 of the ODN architecture), but below the application level (layer 4) itself, a new set of service definitions is needed to support the development of next-generation applications. At this level, it is necessary to develop network services for the naming and locating of information objects. Also needed here are tools for buying and selling products (goods and services), including a definition of electronic money and an architecture for dealing with intellectual property rights. Another necessary service is support for coordinating group activities. Services of this sort have been called "middleware," since they sit above the traditional network transport services but below the applications themselves.

A transition to an environment characterized by an abundance of information of all kinds, forms, and sources implies a need for services to locate, access, retrieve, filter, and organize information. The foundation has already been laid for substantial personalization of information resources (e.g., in the development of personal libraries and the conduct of personalized services). Tools for users are especially important because of the expectation that the provider community is not likely to organize information resources sufficiently, if at all.15 See Appendix C.

Development of middleware services is not as mature as development of the basic transport services or of the specific application services such as those found on the Internet.
This area, layer 3, of the architecture is discussed in "Research on the NII" at the end of this chapter. Electronic money, for example, is a middleware service that illustrates the kinds of issues that must be resolved if the NII infrastructure is to become not just a means for moving bits but also a general environment for on-line services and electronic commerce. One key to the controlled buying and selling of information is an easy means for payment. Today, anyone interested in selling information on the network must either establish a financial relationship with each individual customer or use an information reseller. Information resellers may be an effective mechanism but may also inhibit the freedom to package and structure information in ways that provide product differentiation. A related issue in electronic commerce is that transactions that deliver information, such as directory enquiry, will very often be sold for small sums of money. To bill for each transaction separately would add considerably to the cost of
BOX 2.1 THE ROLE OF ASYNCHRONOUS TRANSFER MODE IN THE ODN BEARER SERVICE

For the last several years, a proposal has been emerging for a new sort of network technology, which is called asynchronous transfer mode, or ATM. Technically, ATM is a variant of packet switching, with the specific characteristic that all the packets are the same small size, 48 payload bytes. This small size is in contrast to the larger and variable-size packets found in today's packet switched networks (Ethernet, for example, has a maximum packet size of 1,500 payload bytes). These small packets, which are called "cells" in the ATM standard, provide technical advantages in some contexts. They reduce certain forms of variation in delivery delay, and they make more straightforward the design of highly parallel packet switches, the so-called space division switches. ATM was originally developed in the context of the telephone industry, as a successor to the current generation of digital telephone standards. In 1988 it was selected by the CCITT (now the ITU Telecommunications Sector) as the agreed-upon approach to achieving a broadband integrated services digital network (B-ISDN).* The perceived utility of ATM subsequently caused it to be standardized in other contexts, such as local area networking. The main goal of ATM is flexibility. In the current telephone system, a static connection is established and bandwidth reserved from the time a call is placed until it is terminated. Even if a speaker is silent, a flow of (null) bits continues. With ATM (as with packet switching in general), transmission is asynchronous, which means that cells need to be sent only when there is information to transfer. This has the obvious advantage of improved utilization of bandwidth.
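The fixed cell size can be made concrete with a short segmentation sketch. This is an illustration only: it splits a message into 48-byte payloads and zero-pads the final cell, which is an assumption made for the example rather than the behavior of any particular ATM adaptation layer.

```python
CELL_PAYLOAD = 48  # bytes of payload in every ATM cell

def segment(message: bytes) -> list[bytes]:
    """Split a message into fixed-size cell payloads, padding the last one."""
    cells = []
    for i in range(0, len(message), CELL_PAYLOAD):
        chunk = message[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))  # illustrative padding
    return cells

cells = segment(b"x" * 100)                # a 100-byte message
assert len(cells) == 3                     # 48 + 48 + 4 (padded to 48)
assert all(len(c) == CELL_PAYLOAD for c in cells)
```

By contrast, an Ethernet-style network would carry the same 100 bytes in a single variable-size packet.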
An equally important point, however, is that this more flexible framework can be used to realize a broad range of services, such as connections of any desired capacity. Today's telephone system is implemented using a fixed hierarchy of link speeds: voice circuits (rated at 64 kbps each) are combined into a circuit at about 1.5 Mbps (called T1), T1 circuits are combined into a circuit at about 45 Mbps (called T3), and so on. In order to sell other speeds (fractional T1 or T3), it is necessary to add additional equipment to the system. With ATM, a circuit of any speed can be configured, by varying the rate at which cells are being sent. This flexibility permits the providers to sell a wider range of transmission services and also provides a basic mechanism for building the internal control and management communications that support the network itself: communications for signaling, for operations and maintenance, and for accounting and billing. Building these facilities with the same technology (multiplexers and switches) that implements the user services has the potential to reduce cost. This situation contrasts with today's telephone network, which has a quite separate data network for signaling. The other advantage of ATM over traditional telephone circuits is that the physical link from the customer to the network can be used at one time for
many simultaneous conversations on which information is flowing in bursts. During the idle period between one burst and the next a conversation consumes no transmission bandwidth and so leaves the transmission line free to carry bursts of traffic belonging to other conversations. By conveying data in small cells, ATM can rapidly interleave bursts of traffic from different conversations and so provide a quality of service that is particularly appropriate for multimedia and computer applications. As was noted above, ATM has more in common with packet switching than with the traditional circuit switching of the telephone system. However, in contrast with packet switching as seen in the Internet or on current network technology such as Ethernet local area networks, ATM permits resources (in particular, bandwidth) to be allocated to support specific flows of cells. This feature permits ATM to provide connections with different quality of service (QOS), which refers to distinguishable traffic characteristics such as continuous steady flow, low delay for short bursts, low priority and possibly long delay for bulk-rate traffic, and so on. This explicit control over QOS is in contrast to the single QOS offered by the Internet protocol, the "best-effort" transfer service. What is most important about ATM is not its technical details, but rather the fact that it has been successful as a common starting point for discussion and cooperation among the previously somewhat disjoint computer, data network, and telecommunications industries. There are now widespread plans for ATM deployment, both in the wide area and local area context. The joint effort put into ATM by these communities represents a force for unification that materially enhances the opportunities for a future integrated Open Data Network.
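The cell-interleaving behavior described above can be sketched in a few lines: each conversation contributes cells only when it has a burst pending, so a silent conversation consumes no link capacity. The round-robin service discipline here is an illustrative assumption, not part of the ATM standard.

```python
def interleave(conversations: dict[str, list[bytes]]) -> list[tuple[str, bytes]]:
    """Round-robin pending cells from several conversations onto one link.
    A conversation with no pending cells contributes nothing."""
    link = []
    while any(conversations.values()):
        for name, queue in conversations.items():
            if queue:                       # idle conversations cost nothing
                link.append((name, queue.pop(0)))
    return link

stream = interleave({
    "voice": [b"v1", b"v2"],
    "file":  [b"f1", b"f2", b"f3"],
    "idle":  [],                            # a silent conversation
})
# Cells from active conversations alternate; "idle" never appears on the link.
assert [name for name, _ in stream] == ["voice", "file", "voice", "file", "file"]
```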
But while ATM may be a way of realizing the bearer service in the late 1990s, the ATM standards, because they contain many details related to the specifics of ATM technology, do not directly play the role of the bearer service for the ODN described by this committee. They are specific to ATM and cannot describe how ODN traffic should be carried over other technologies such as Ethernet or dial-up telephone lines. As discussions in the text affirm, the committee believes that the ODN bearer service should be defined independently from a specific technology, which will permit both ATM and other sorts of technology to be mingled in the ODN, just as a range of technologies are mingled in the Internet today. To build a network out of heterogeneous network technologies, such as ATM combined with Ethernet, FDDI, and so on, requires a unifying definition of the overall offered service that is independent of the technology details. In today's Internet, this is the objective of the IP protocol; for the ODN this is the function of the bearer service. In the short run, how IP will evolve and how it will interwork with ATM are issues for the Internet and ATM standards bodies. In the longer run, the same issues will be discussed in relating ATM to the broader NII objectives.
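The unifying, technology-independent service definition called for above can be sketched as an abstract interface: applications are written once against the bearer service, while each underlying technology supplies its own implementation. All class and host names here are illustrative.

```python
from abc import ABC, abstractmethod

class BearerService(ABC):
    """Technology-independent delivery contract (hypothetical sketch)."""
    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...

class EthernetBearer(BearerService):
    def __init__(self) -> None:
        self.delivered: list[tuple[str, bytes]] = []
    def send(self, destination: str, payload: bytes) -> None:
        # framing, MTU, and media details would be hidden down here
        self.delivered.append((destination, payload))

class ATMBearer(BearerService):
    def __init__(self) -> None:
        self.delivered: list[tuple[str, bytes]] = []
    def send(self, destination: str, payload: bytes) -> None:
        # cell segmentation and virtual circuits would be hidden down here
        self.delivered.append((destination, payload))

def application(net: BearerService) -> None:
    """Written once, against the bearer service only."""
    net.send("server.example", b"request")

for net in (EthernetBearer(), ATMBearer()):
    application(net)        # the same application runs over either technology
    assert net.delivered == [("server.example", b"request")]
```

This is the sense in which IP already serves the Internet: application code above the platform is untouched when the transmission technology below it changes.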
* Indeed, the terms "asynchronous transfer mode" (ATM) and "broadband integrated services digital network" (B-ISDN) are often used synonymously. Properly, ATM is a technology approach, and B-ISDN is a standard, but this distinction has been lost, especially since the most active of the current relevant standards bodies calls itself the ATM Forum. Regarding standards setting, "In 1988 there was only a very reduced recommendation with respect to broadband ISDN. It was already agreed then that ATM (Asynchronous Transfer Mode) would be the transfer mode for the future broadband ISDN. Two years later, in 1990, CCITT SGXIII prepared 13 recommendations, using the accelerated procedure. These recommendations defined the basics of ATM and determined most parameters of ATM." See de Prycker, Martin. 1991. Asynchronous Transfer Mode: Solutions for Broadband ISDN. Prentice-Hall, Englewood Cliffs, N.J., p. 9.

the transaction. Mechanization by means of electronic money can reduce the cost of billing, making small transactions more affordable and the network more useful. To address these issues, the middleware layer should provide, as a basic service, the means to effect payment transfer. Were this capability in place, an information seller could deal with any purchaser on a casual basis, trusting in the infrastructure to protect the underlying financial transaction. The two proposed models for payment are the credit-debit card model and the money model, which differ in functionality and complexity. A credit-debit card paradigm permits a user to present his identity to a purchaser, with a third party (who plays the role of the authorizing agent in a merchant transaction) assuring the transaction. Although this style of payment is not complex to specify, one of its key aspects is that the identity of the participants is known.
This prevents anonymous purchase and allows building a profile of users based on a pattern of purchases. Concerns about privacy are increasing as more and more transactions are carried out on-line, and as requirements grow for users to identify themselves in a robust manner for these transactions. A more complex alternative to the credit-debit scheme is "electronic money," a term connoting a means to transfer value across the network between a buyer and a seller who are mutually anonymous. Such a scheme provides an important degree of privacy and may facilitate wider acceptance of network payment, but it is substantially more complex and more computationally demanding, and it carries the risk of running afoul of existing banking and export regulations. The problems of network payment are sufficiently complex that the considerable private development efforts under way may not be sufficient. Additional basic research in this area may be needed.
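The credit-debit model described above can be sketched as follows; the class and account names are invented for illustration. Note how the authorizing third party necessarily records the identities of both parties, which is exactly the property that precludes anonymous purchase.

```python
class Authorizer:
    """Trusted third party in the credit-debit model (hypothetical sketch)."""
    def __init__(self, accounts: dict[str, int]):
        self.accounts = accounts            # balances in cents
        self.log = []                       # identities are recorded here

    def transfer(self, buyer: str, seller: str, cents: int) -> bool:
        """Assure the transaction: debit the buyer, credit the seller."""
        if self.accounts.get(buyer, 0) < cents:
            return False                    # transaction refused
        self.accounts[buyer] -= cents
        self.accounts[seller] = self.accounts.get(seller, 0) + cents
        self.log.append((buyer, seller, cents))  # a purchase profile accrues
        return True

bank = Authorizer({"alice": 500})
assert bank.transfer("alice", "infoseller", 25)          # a 25-cent information sale
assert bank.accounts == {"alice": 475, "infoseller": 25}
assert bank.log == [("alice", "infoseller", 25)]         # both identities visible
```

An electronic-money scheme would replace the logged, identified transfer with unforgeable anonymous tokens, at considerably greater cryptographic and regulatory cost.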
Defining the Higher-level Services

Defining the ODN bearer service and the middleware services is only part of specifying the ODN architecture. There must be broad agreement, derived from the vision of an integrated NII and spanning the range of processes from standards setting to implementation of products and services, on what the core higher-level services will be, and there must be standards that define how these services are to be realized. These agreements and standards are critical to ensuring that users with end-node equipment (computers, "smart" TVs and other entertainment units, and so on) can interwork, even though they are from different vendors and run software from different vendors. The alternative might be that a particular hardware choice becomes coupled to a particular proprietary or immutable set of higher-level services. Again, the objective of the layered ODN architecture is to decouple the low-level technology options (in this case both network and end-node choices) from the higher-level services that can be realized on the technology.

Basic Higher-level Services

Based on its observations of current and emerging uses of the Internet, the committee suggests below a sample list of minimal higher-level services for the future NII. These services should be taken as examples, which may change over time, given the importance of change as a component of openness. The services most relevant to the eventual network may emerge only in the next decade. At present, the committee has identified the following six services as constituting a minimal set:

• Electronic mail. The importance of electronic mail, the largest Internet application, is expected to continue. E-mail is mostly textual now but will presumably be multimedia in the near future, as the conversion is already under way.

• Fax.
Today, fax is transmitted using the existing telephone network and is one of the largest uses of this network. A digital version of fax would be a very low cost service with wide utility. Advanced features of the bearer service, such as real-time delivery, are not needed for fax delivery, as has been illustrated by the emerging transport of fax over the Internet.

• Remote login: network access to remote computer systems. The access speed for remote login will increase with time, starting with voice-grade telephone lines or ISDN. Any open network should be required to supply a bearer service adequate to support this minimal level of connectivity, since it is a basic building block for a range of yet higher level services that involve using a remote computer. The standards that define the
remote access should support at least the modes in use today, including emulation of a "dumb" terminal and support for standard window packages such as the X Window system. These services can be supported today over voice-grade lines with compression.

• Database browsing. Access to a variety of database services for the purpose of browsing should be permitted. Browsing through digital libraries is one example; another is accessing one's own health or credit records, a service that implies the need for a high level of security assurance. The standards that define the access should provide for at least simple forms of database query operations, which can be packaged by the end-node device in some user-friendly way. Given the exploding success on the Internet of services such as the World-Wide Web, Gopher, archie, and WAIS, all of which provide means to browse through and retrieve information (Box 2.2; Table 2.1), it is clear that this area will be of great importance.

• Digital object storage. There should be a basic framework for a service that permits users to store digital objects of any kind inside the network and also make them available to others. The term "digital object" denotes an object that is more complex than a file in a file system and that combines contents, type information, and attributes; an example is a video clip. This capability is a first step toward allowing any user of the network to be an information provider, as well as an information consumer.

• Financial transaction services. Certain financial transaction services will soon be pervasive. For example, electronic rendering and payment of bills will be a popular service that will enable bills to be directed to an authorized party on behalf of individuals or businesses, and paid electronically. Banks and other financial institutions would handle this function.
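The "digital object" notion from the list above, contents plus type information plus attributes, can be sketched as a simple record; the field names are illustrative, not a proposed standard.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """Contents plus type information plus attributes, per the text
    (field names are invented for illustration)."""
    contents: bytes
    media_type: str
    attributes: dict = field(default_factory=dict)

clip = DigitalObject(
    contents=b"...compressed frames...",
    media_type="video/clip",
    attributes={"title": "Committee briefing", "duration_s": 30},
)
assert clip.media_type == "video/clip"
assert clip.attributes["duration_s"] == 30
```

Unlike a bare file, the type and attribute fields travel with the contents, so the network can store, locate, and present the object without out-of-band knowledge.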
These services, which are similar to services now being used or at least explored on current networks, have the characteristic that they are not strongly dependent on high bandwidth or sophisticated QOS support. There are other higher-level services that are more demanding. A partial list follows.

More Demanding Higher-level Services

• Audio servers. Audio today is an important component of teleconferencing, multimedia objects, and other more advanced applications. Multicast real-time audio, although now only being experimented with, promises to be a powerful and compelling service on today's Internet and represents an example of the sort of new service that can be expected on the NII.
BOX 2.2 EMERGING INFORMATION SERVICES ON THE INTERNET

The driving application that first defined the Internet was electronic mail. While the Internet also supported file transfer and remote login, e-mail was the service that most people used and valued. It is supported by various directory services such as WHOIS and NetFind. Recently, we have seen a second generation of applications that are very different from e-mail. Rather than facilitating communication among people, they provide access to information. They offer a means for providers of information to place that information on the network, and they provide a means for users of information to explore the information space and to retrieve desired information elements. These applications, which include archie, Gopher, the World-Wide Web (WWW) and its Mosaic interface, and the Wide Area Information Service (WAIS), are redefining the future of the Internet and providing a whole new vision of networking.

• Archie is an attempt to make the original file transfer protocol of the Internet, FTP, more useful by organizing the available files and making it possible to search for desired objects. There are, at many places on the Internet, hosts that provide files that can be retrieved, by a mechanism known as anonymous FTP, by any person on the Internet without the necessity of having an account or being known to that machine. Although anonymous FTP does permit others to retrieve a file, it is not a very effective means to disseminate information, because there is no way for a potential user to search for the file. There is no way to find all the anonymous FTP sites on the Internet, or to find out if a file is located in any particular spot. Archie attempts to find all the anonymous FTP sites on the Internet, extract from each site a list of all the accessible files, and build a global index to this information.
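The index-building step archie performs can be sketched as a toy: given the list of accessible files harvested from each site, invert it into a global map from file name to the sites that offer it. The site names are invented for the example.

```python
def build_index(sites: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each file name to every anonymous-FTP site that offers it
    (a toy version of archie's harvesting and indexing)."""
    index: dict[str, list[str]] = {}
    for site, files in sites.items():
        for name in files:
            index.setdefault(name, []).append(site)
    return index

index = build_index({
    "ftp.alpha.example": ["readme.txt", "tool.tar"],
    "ftp.beta.example":  ["tool.tar"],
})
assert index["tool.tar"] == ["ftp.alpha.example", "ftp.beta.example"]
assert index["readme.txt"] == ["ftp.alpha.example"]
```

A user can now search the single global index instead of probing every site on the Internet.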
• Gopher is an attempt to replace the original FTP with a file access protocol that is easier to use, in other words, to make it easier to "go fer" the information. In contrast to archie, its focus is less on cataloging and more on easy storage and retrieval. A software package that anyone can obtain and install on a computer, Gopher creates on that machine a "Gopher server," a location in which files can be stored and retrieved. The structure of Gopher makes it substantially easier to list and retrieve files than did previous tools such as anonymous FTP.

• The World-Wide Web (WWW) is a more ambitious project for the representation and cataloging of on-line information. WWW is based on the paradigm of hypertext, in which a document, instead of being organized linearly, is organized as a collection of multimedia objects (typically the size of a few screens of content), each of which has pointers to other relevant objects. The user, by following this web of pointers, can browse in hyperspace without having to be concerned about where the objects are actually stored. In fact, the objects can be stored in any WWW server on the Internet, so that
the user, by following the web pointers, can in fact cross the Internet, perhaps going from country to country as successive objects are explored. The web of pointers, which can be translated automatically by the computer into the desired object itself, makes WWW an on-line wonderland for browsing. The most popular user interface to the WWW is a tool called Mosaic, which displays the web objects for the user to read. The pointer is represented to the user as a highlighted region of the screen; clicking on that region with the mouse causes the object named by that pointer to appear. Exploration of the top level of the WWW information structure reveals an extraordinary range of information. A recent visit showed, for example, a number of on-line journals in fields as diverse as physics and the classics; collections of pictures, including the Library of Congress Vatican Exhibit (which over half a million people have visited over the network); and weather maps, guides to restaurants, and personal profiles of researchers. All of these have been contributed by individuals and organizations interested in making their particular knowledge available to others. This last point is the key to the success of the WWW. The Web is a skeleton, a framework, into which anyone can attach an information object. The Web provides the tools to let anyone become an information producer as well as a consumer. Thus it exemplifies an open marketplace of ideas, in contrast to commercial services that select and control the content of material provided by their systems to users. WWW has been wildly successful. While the size of the Internet has been growing at a rapid, if not alarming, rate, the use of the WWW has grown at an even greater rate (Figure 2.2). The key to the Web is in the pointers, the linkages that tie one piece of information to another.
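Following the web of pointers amounts to a graph traversal; the sketch below uses invented object names to show how a browser can reach every object linked, directly or indirectly, from a starting point without knowing where any object is stored.

```python
web = {   # hypothetical web: object id -> (content, outgoing pointers)
    "home":    ("welcome page", ["journal", "exhibit"]),
    "journal": ("physics preprints", ["exhibit"]),
    "exhibit": ("Vatican exhibit pictures", []),
}

def browse(start: str) -> list[str]:
    """Follow every pointer reachable from `start`, visiting each object once."""
    seen, stack, visited = set(), [start], []
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.add(obj)
        visited.append(obj)
        _, pointers = web[obj]
        stack.extend(pointers)      # the pointers lead the browser onward
    return visited

assert set(browse("home")) == {"home", "journal", "exhibit"}
```

In the real Web each pointer may name an object on a different server, so the same traversal can hop from machine to machine, and from country to country.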
The pointers from a Web object must be installed by the person creating that Web element, based on that person's expectation of where a user of that object might want to go next. Although there are tools for creating Web objects, the intellectual decisions about what pointers to install belong to the object creator, who thus has great flexibility in how pointers are used. This can lead to very creative and expressive information structures, but also to structures that are rather idiosyncratic. One consequence of this ability to put pointers into objects is that anyone can build an index to other objects. That is, one can become an information producer not only by adding an object with original content, but also by creating a different way of indexing and organizing the objects already there. The Web thus becomes a testbed for experiments in novel ways to organize and search information.

• WAIS represents a different approach to information searching. In WWW, a human user searches by following Web pointers. In contrast, the WAIS server has used a Connection Machine, a highly parallel supercomputer, to permit very powerful query requests to be made of the server. One of the most interesting requests is to ask the server to develop a word usage
profile of a document, and then search all the other documents in the server to find others with a similar profile. If WWW is a means for people to browse in cyberspace, WAIS is a way to let the computer do the browsing.

Although some of these tools are proving very successful, they represent only a first step in the discovery of paradigms for the realization of on-line information. The Web pointers provide a way to navigate in cyberspace but do not provide an effective way to filter a set of objects based on selection criteria. In contrast, WAIS provides a way to filter through all the objects in a server but does not provide an easy way to link objects in different servers. As Paul Evan Peters of the Coalition for Networked Information observed in briefing the committee, we are in a paleoelectronic information environment, with crude tools, hunters and gatherers, and incipient civilization, but more advanced civilization is coming.* The expectation of far greater quantities and varieties of information combined with far greater ease and diversity in communication implies a need for more effort to develop the necessary related services.

* Committee members observe, however, that the Internet experience, with its abundance of information and a rising tide of e-mail traffic, raises the question of whether people will be hunters or prey, and whether people will want to be camouflaged or colorful.

• Video servers. Standards are now being defined for the coding and transmission of moving images, such as traditional or high-definition television pictures. These standards will permit services similar to today's cable TV services, as well as advanced offerings such as movies on demand, and the playback of video components of multimedia information services and advertisements. Video will make a range of demands for network services.
Delivery of a movie, for example, represents a long-term requirement for bandwidth, while exploring a database of short video fragments represents a very "bursty" load on the network.

• Interactive video. While entertainment video can be coded in advance and stored until needed, interactive or real-time video is coded, transmitted, and played back with minimum latency. This form of video is required for teleconferencing, remote monitoring, distributed interactive education, and so on. Coding in real time for interactive video imposes time limits on the coding process that may limit the quality or compression of the signal. A separate standard may therefore be needed for the coding and transmission of interactive video, to support applications such as multimedia teleconferencing.
FIGURE 2.2 Growth of Internet browsing services, December 1991 to March 1994. Graph courtesy of the Internet Society, Reston, Va.
TABLE 2.1 Resource Discovery Services

Type        Machines          People           Files               Documents
            X.500             X.500            X.500               X.500
Resource    /etc/hosts;       WHOIS; finger;   archie; Prospero;   WAIS; Gopher;
discovery   NIS; DNS          KIS; netfind     Alex; netdig        WWW; Z39.50
Retrieval                                      FTP; NFS; AFS;      WAIS; Gopher;
                                               Prospero            Z39.50
Selection                                                          WAIS; Gopher;
                                                                   Z39.50

SOURCE: Quarterman, John S., and Carl-Mitchell, Smoot. 1994. The Internet Connection: System Connectivity and Configuration. Addison-Wesley, Reading, Mass.

QUALITY OF SERVICE: OPTIONS FOR THE ODN BEARER SERVICE

Best-Effort and Reserved Bandwidth Service

An objective of the Internet has been to enable two computers to agree privately to implement some new service, and then implement it by exchanging packets across the network. The only conformance requirements for these packets are low-level matters such as addressing. This flexibility is very important and is captured in the ODN's architectural distinction between the low-level services of the network infrastructure and the higher-level conventions that define how applications are constructed. The Internet today provides a service that has been called "best effort." When one sends a packet, the network forwards it as best it can, given the other traffic offered at the moment, but makes no specific guarantee as to the rate of delivery, or indeed that the packet will be delivered at all. Many computer applications operate very naturally in a context that does not guarantee bandwidth, and the Internet has demonstrated that best-effort service is attractive in this situation.16 Just as many operating systems offer a different perceived performance depending on what other processes are running, so also does the best-effort service divide up
the available bandwidth among current users. Having an application sometimes run a bit more slowly does not usually cause serious user dissatisfaction. For applications such as remote file access, in fact, best-effort service has proved very effective, as can be gauged by the success of local area networks (LANs) such as Ethernet that provide only this service. Of course, best-effort service is tolerable only within limits, and a network that is totally congested will not be acceptable. An alternative to best-effort service is one that requires the traffic source to declare its service requirements, so that the network can either reserve and guarantee this service or explicitly refuse the request. When there is excess bandwidth, both best-effort and reserved bandwidth service work well to make users happy. But when there is congestion, the choice is then between slowing all users down somewhat, as is the case with best-effort service, or refusing some users outright in order to fully serve others with reserved bandwidth.17 For some applications such as real-time audio and video, a reserved service may be preferable.18 Users will need to be able to express a preference. This sort of variation in the bearer service is described by the term "quality of service," which is understood in the technical community as covering control of delay, bandwidth, degree of guarantee versus degree of sharing, loss rates, and so on. The Internet today is moving from a single best-effort service to a more complex model with explicit options for QOS, to support new applications such as video and audio. The conclusion in that community is that options for QOS are needed, but not as a replacement for best-effort service, which will remain effective for many applications in the Internet. The best-effort service and the current cost structure of the Internet are related.
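The contrast drawn above, slowing everyone down versus refusing some users outright, can be sketched numerically. The link capacity and per-flow demands are invented for the example.

```python
def best_effort(capacity: float, flows: int) -> list[float]:
    """Everyone gets in; under congestion, everyone slows down."""
    return [capacity / flows] * flows

def reserved(capacity: float, requests: list[float]) -> list[float]:
    """Admit a request only if its full demand can still be guaranteed."""
    granted, remaining = [], capacity
    for demand in requests:
        if demand <= remaining:
            granted.append(demand)          # reserved and guaranteed
            remaining -= demand
        else:
            granted.append(0.0)             # refused outright
    return granted

# A 10-Mbps link, three flows each wanting 4 Mbps (12 > 10: congestion).
assert best_effort(10.0, 3) == [10/3, 10/3, 10/3]          # all slowed down
assert reserved(10.0, [4.0, 4.0, 4.0]) == [4.0, 4.0, 0.0]  # one refused
```

A real-time video flow would prefer the reserved column; a file transfer is usually content with the best-effort one.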
The original motivation for packet switching, made available almost 30 years ago, was to permit statistical sharing among users, who were willing to accept the variable delays of best-effort service in exchange for much lower costs. There is no reason to assume that this situation has changed. Some form of best-effort service will probably continue to exist, since it may lead to a more competitive pricing option for many users.19 Effective bandwidth management will probably continue to be the key to controlling costs in long-lines situations. The NII architecture must provide a range of means to allow bandwidth sharing for cost control. This issue is discussed further in Chapter 5.

Assuring the Service

Whether the offered service is a reserved bandwidth or a best-effort sharing of bandwidth, it will be necessary to offer some assurance that the service actually provided is the one the user was supposed to get.20
In the case of an explicit bandwidth commitment, the user can measure the provided bandwidth, but in the case of best-effort service, the user cannot tell if the allocation of bandwidth was actually fair, or if some users somehow received more than they should have. In the current Internet, allocation of bandwidth is done without robust enforcement. The host software is required to implement an algorithm that regulates the sending rate of the source during periods of overload. But if the end user is clever enough to rewrite his operating system, cheating is possible, since there is no direct enforcement of fair sharing in the switch nodes. Further, only TCP among all the Internet protocols calls for this algorithm; current protocols for voice and video do not adapt to congestion. There is fear in the Internet community that this honor system cannot survive and that some explicit controls will be needed, even in the case of best-effort service, to assure users that the sharing is fair. The Internet Engineering Task Force is working actively in this area. The debate about enforcing fair sharing of best-effort service is likely to be a central part of the debate about the bearer service of the NII. An issue closely related to enforcement is accounting for usage. Usage accounting could provide the basis of billing, a means to assess network allocation after the fact, and a means to assess usage and provide information for planning. A recurring question is whether adding to the current Internet additional facilities for usage accounting would yield sufficient benefits. Feedback to users on actual usage, even in the absence of usage billing, can encourage efficient and prudent use.
However, there is a recurring concern that the addition of accounting tools will inevitably lead to usage-based billing, which is in some cases undesirable (see Chapter 5), and a separate concern that the cost of adding accounting tools to the Internet will greatly increase the cost and complexity of the switches and the supporting management environment. The committee notes this matter as requiring further investigation.21

NII COMPLIANCE

The previous sections have explored the range of services that could form the basis of the NII, at the low level (the bearer service), the middleware level, and the application level. To define the minimum services that must be provided everywhere in the NII, it is necessary to establish a baseline of defined mandatory functionality. Without some such criteria, any provider could essentially market any network technology or service as being a part of the NII, and users would have no assurance that it would actually be useful. Defining NII compliance, that is, reaching agreement on the minimum set of NII services, is critical to turning the envisioned NII into an effective national infrastructure.
However, attempting to define NII compliance in terms of core services that must be available everywhere requires dealing with a basic tension between power and universality. This tension precludes any single definition of NII compliance and makes defining compliance rather difficult. The high-level services discussed above by the committee include some, such as video, that demand substantial bandwidth and sophisticated techniques for traffic management. The committee believes that such services will form the basis of important applications with great utility that will help drive the NII into existence. Yet the necessary bandwidth will not be available or affordable everywhere, even after a significant period of time. To define high-bandwidth services as a mandatory part of NII compliance would thus exclude from the NII infrastructure many components of today's networks, including all the transmission paths using traditional telephone links. Such a restriction is much too limiting. Any realistic plan for the NII must tolerate a range of capabilities, consistent with the different technologies that will be used in different parts of the network. This reality requires that the basic service of the NII be characterized in an adaptable way that takes into account both power and universality. As a result, the committee's definition of NII compliance has two parts:

• First, an evolving minimum set of basic services, both bearer services and application services, will be required without exception for NII-compliant systems. As discussed above, application services such as electronic mail, fax, remote login, and simple (text-oriented) information browsing do not require advanced infrastructure characteristics and should be available without exception.
• Second, any piece of infrastructure engineered to provide NII services beyond the minimum core will have to be implemented in an NII-compliant manner.22

The real consequence of this definition of NII compliance is the maximizing of interoperation, the ability of end nodes attached to the ODN to communicate among themselves effectively, assuring users that any parts of the NII that can support a particular service will implement it in a compatible manner. Users interested in services beyond the minimum set must still verify whether the relevant parts of the infrastructure can actually carry them. This approach to defining compliance is again motivated in part by observation of the Internet. The Internet standards recognize a small core of protocols that must be present in any machine that declares itself Internet compatible. However, most Internet standards, including the essentially ubiquitous TCP, are not actually mandatory, but only recommended. The Internet philosophy is that if a service similar to the one offered by TCP is available on a machine, then TCP should be implemented. This approach has been very effective in practice and seems to represent a middle ground in defining compliance, pushing for interoperation and openness where possible but at the same time accepting the wide range of specific capabilities provided by existing network technology (Box 2.3).

BOX 2.3 TESTING FOR COMPLIANCE

The term "compliance" carries the perhaps unfortunate implication of a need for formal conformance testing of services and interfaces. In fact, although formal testing seems to work for low-level standards such as link framing protocols, it has not been effective for higher-level standards such as those for electronic mail, remote login, file transfer, and so on. Rigorous testing based on formal protocol modeling has not succeeded in practice in identifying nonconforming implementations. In fact, the marketplace itself has proved a much more effective test of conformance than has the certification laboratory. In the real world of interworking among heterogeneous implementations, real issues in interoperation are quickly discovered, and vendors who are not responsive to these issues do not succeed. A specific example of these market forces can be seen in a tradition of one of the first Internet trade shows, called Interop. On the show floor of Interop is installed a real network, and all vendors bringing products to the show are expected to connect to the Interop network and to demonstrate interoperation with their competitors' products. Failures are very obvious and quickly fixed. The resulting definitions of protocol conformance, while perhaps more operational than formal, have served the user community well.
Bandwidth varies significantly over the range of network options, and the question of compliance at various speeds becomes an issue. But bandwidth is not the only dimension in which technology will vary. For example, wireless technology, which is traditionally limited in transmission capacity due to the scarcity of spectrum and issues of cost and transmitter power, may be unable to compete with fiber optics for end-user bandwidth, but fiber systems, which have abundant transmission capacity, lack critical mobility attributes enjoyed by wireless systems.23 A wireless system would not be deemed noncompliant simply because it failed to support the same bandwidth as a fiber-optic system. Similarly, a fiber-optic system would not be noncompliant because it failed to support
mobile users. Each would have to be evaluated within the intrinsic limits of its capabilities. Other issues, such as integrity, privacy, and security, also affect the range of services a particular part of the NII can support.

While it may not be realistic to mandate that support for video and other higher-bandwidth services be a part of the minimum definition of NII compliance, the committee believes that these more demanding services will be key to many useful and important societal objectives in the coming decades. It thus concludes that the bearer services necessary to provide these services should be a long-term objective of the NII.

STANDARDS

Role of Network Standards

To make the vision of an integrated NII a reality and to define NII compliance, it is necessary to specify the technical details of the network. This is the role of standards, the conventions that permit the successful and harmonious implementation of interoperable networks and services. That standards serve to translate a high-level concept into operational terms, and that the process of standards definition is thus key to success in achieving an NII, is well understood in many sectors; indeed, the latter half of the 1980s seemed as much preoccupied with standards as with product differentiation. This was true in both the computer and communications industries. Today, network standards relevant to the NII are being discussed in many different, sometimes competing contexts, such as the following:

• The Internet standards are formulated by the Internet Engineering Task Force (IETF), an open-membership body that currently operates under the auspices of the Internet Society and with support from research agencies of the U.S. federal government.
• The Open Systems Interconnection (OSI) network protocols, which offer an alternative set of protocols somewhat similar to the Internet protocols, have been formulated by the International Organization for Standardization (ISO) internationally, with U.S. contributions coordinated by the American National Standards Institute. A U.S. government version, GOSIP, has been promulgated by the National Institute of Standards and Technology (NIST).24 ISO is broadening the OSI framework (see Appendix E).

• Standards for local area and metropolitan area networks such as Ethernet, Token Ring, or distributed queue dual bus (DQDB) are formulated by committees under the auspices of the Institute of Electrical and Electronics Engineers.
• Asynchronous transfer mode, an important emerging standard at the lower levels of network service, is being defined by at least two organizations, the ATM Forum and the International Telecommunications Union (ITU) Telecommunications Sector (formerly referred to as the CCITT).

• Standards for the television industry are formulated by a number of organizations, including the Society of Motion Picture and Television Engineers, the Advanced Television Systems Committee, and the ITU.

These sometimes discordant processes are shaped by commercial interests, professional societies, governmental involvement, international negotiation, and technical developments. The recent explosion of commercial involvement in networking has had a major impact on standards definition, as standards have become a vehicle for introducing products rapidly and for gaining competitive advantage.

Factors That Complicate Setting Standards

The committee believes that the critical process of setting standards is currently at risk. Historical approaches to setting standards may not apply in the future, and we lack known alternatives to carry us forward to the NII. The committee discusses below a number of forces that it sees as acting to stress the process.25

Network Function Has Moved Outside the Network

As noted above, much of the user-visible functionality of information networks such as the Internet is accomplished through software running on users' end-node equipment, such as a computer. The network itself implements only the basic bearer service, and this causes changes in the standards-setting process. When function moved outside the network, the traditional network standards bodies no longer controlled the process of setting standards for new services based on this functionality.
The interests of a much larger group, representing the computer vendors and the applications developers, needed to be heard. This situation is rather different from that in the traditional telephone network, where most of the function was implemented in the interior of the network, and the user equipment, the telephone itself, had characteristics dictated largely by the telephone company. With the advent of computer networks in the 1970s and the strong coupling to computer research, the clear demarcation between users' systems and the network began to blur, and uncontrolled equipment appeared more frequently at user sites, to be attached directly to data networks. Some of this equipment was experimental, and the network could make few assumptions about its proper behavior.
One thing that is clear now is that much of the capability and equipment of connectivity is moving to the "periphery" of the networking infrastructure; for example, there is tremendous penetration of private local area networks on customer premises. This emphasis on the periphery implies that a single entity, or even a single industry, will be incapable of controlling the deployment of the networking infrastructure. Thus the need for an ODN architecture is clear.

It Is Hard to Set Standards Without a Recognized Mandate

The controlled nature of the early telephone system essentially gave the recognized standards bodies a mandate to set the relevant standards. Historically, the telephone network was designed and implemented by a small group that controlled the standards-setting process because it controlled the network. (In most countries outside the United States, the telephone system is an arm of the government.) Similarly, in the early days of the Internet, the standards were set by a body established and funded by the Department of Defense, which had a mandate to provide data network standards for the DOD.26 No such mandate exists in the larger network context of today. For the Internet, for example, the explicit government directive to set standards has been replaced by a process driven by vendor and market pressures, with essentially no top-down control.

A Bottom-up Process Cannot Easily Set Long-term Direction

As can be seen in the Internet community, the absence of a mandate to set and impose standards has led to a bottom-up approach, a process in which the development community experiments with new possibilities that become candidates for standardization after they have been subject to considerable experimentation and use. Standardization is thus akin to ratification of generally accepted practice.
The paradigm of translating operational experience into a proposed standard imposes at least one measure of quality on a set of competing proposals. It has proved successful compared to the relatively more controlled and top-down processes occurring in the ISO. But the bottom-up approach to setting standards is not without fault. Although setting standards by negotiation, compromise, and selection in the marketplace has been largely effective for the Internet, it is important to recall that the Internet had its early success in the context of overall direction and guidance being provided by a small group of highly motivated researchers. This indirect setting of direction seems to have faltered as the Internet community has become larger and more fragmented by commercial interests. Currently, the Internet community
seems to make short-range decisions with some success, but long-range decisions, which reflect not only immediate commercial interests but also broader societal goals, may not get an effective hearing. The Internet Architecture Board (IAB) has the charter to develop longer-range architectural recommendations on behalf of the Internet community, but it cannot impose these recommendations on anyone.

A Top-down Approach No Longer Appears Workable

Many people consider the bottom-up approach to be too much like a free-market model in which the final result is due to individual enterprise and competition, that is to say, is not sufficiently managed. The top-down approach appears to be more manageable to many observers, particularly those with extensive experience in managing large-scale networks to meet commercial expectations for performance. However, the classical top-down approach has not succeeded in the current environment, whereas the Internet process, which has directly embraced diverse approaches and objectives in its bottom-up process, is a phenomenal success that must be applauded and respected. Although there may be merit in considering how to integrate the top-down and bottom-up approaches, there is little experience to suggest that either approach alone will work easily in the larger context of the future NII.

Commercial Forces May Distort the Standards-Setting Process

A vendor, especially one with a large market share, can attempt to set a unilateral standard by implementing it and shipping it in a product. Some of the most widespread "standards" of the Internet are not actually formal standards of the community, but rather designs that have been distributed by one vendor and accepted as a necessity by the competition.
This approach can lead to a very effective product if the vendor has good judgment; it may open the market to innovation and diversity at a higher level based on the standard. However, the objectives of the vendor may not match the larger objectives of the community. The resulting standard may be short-sighted, it may be structured to inhibit competition and to close the market, it may simply be proprietary, and it may inhibit evolution.

Setting Standards for the NII: Planning for Change Is Difficult But Necessary

Managing the process by which the NII network environment evolves is one of the most critical issues to be addressed. Planning for change
requires an overall architecture and constant attention to ensure that any standard, at any level of the architecture, is designed to permit incremental evolution, backward compatibility, and modular replacement to the extent possible. Unfortunately, as the committee has noted above, the current standards-setting processes seem least effective in setting a long-term direction or guiding the development of standards according to an overall vision of the future. Thus there is some need to find a middle ground whereby an overall vision of the NII can inform standards selection and also allow for competing interests and approaches to be evaluated in an open process. The critical question is not what the exact vision is, but how it will be promulgated and integrated into the various ongoing standards activities.

ISSUES OF SCALE IN THE NII

The committee views the NII as being universal in scope, reaching not just to businesses and universities, but also eventually to most homes, as does the telephone system. This objective raises many issues related to growth and scaling. A major research focus of the last 10 years has been scaling network bandwidth to higher speeds (gigabits and beyond). Scaling in the number of nodes has perhaps received less attention. But the issues of scale in the number of nodes are perhaps more challenging than those of speed. As the network gets larger, issues of addressing, routing, management and fault isolation, congestion, and heterogeneity become more pressing. These issues are further complicated by the likely decentralized management structure of the NII, in which the parts of the network will be installed and operated by different organizations. We see in the Internet today that some of the protocols and methods are reaching their design limits and need to be rethought if we are to build a network of universal scale.
A major effort is now being made to deal with serious limitations in the Internet's current addressing scheme, for example. Close attention should be paid to the resolution of these problems in the Internet to derive insights for the far larger NII.

Addressing and Naming

The Open Data Network envisaged by the committee will surely grow to encompass an enormous number of users and will be capable of interconnecting every school, library, business, and individual in the United States, extending beyond that to international scale. The ability to communicate among such a huge set requires the ability to name the desired communicant. The Internet currently provides a 32-bit address
space, within which it is theoretically possible to address approximately 4 billion hosts. However, as a result of the structured way in which this address space has been allocated, the number of usable addresses may be far fewer than 4 billion, a limitation that the Internet standards community is working to rectify for both the short and the longer term. Similarly, the international telephone numbering plan is facing the results of the tremendous growth in demand for services and lines caused by the advent of fax, cellular phones, and soon personal communicators and other highly mobile services. In the case of telephone numbering, the demands of international commerce have further aggravated the problem by calling for universal information and 800-number services. The Internet and the telephone naming systems' simultaneous arrival at a crisis suggests that perhaps a common solution can be developed.27

The current address spaces of the Internet and the telephone network are a low-level framework suited for naming network and telephone locations and delivering data and voice. This framework is not used for naming users of the network or service providers or information objects, all of which must be named as well. Developing suitable name spaces for these entities involves issues of scale, longevity, and mobility. User names, for example, should be location independent (more like unique identifiers than locators), which in turn implies a substantial location service that translates names into current locations. The current services in the Internet lack a suitable naming system for users. One of the most user-friendly actions the NII community could take would be to rectify this situation. Past attempts have failed for a number of reasons, mostly nontechnical, and this lack of success perhaps deters the community from trying again.
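The location service implied by location-independent user names can be sketched as a mapping from a stable identifier to a current attachment point. This is a minimal sketch, not a design: the names, hosts, and update interface here are hypothetical, and a real service would itself face the scale, longevity, and mobility issues just listed.

```python
class LocationService:
    """Sketch of the directory implied by location-independent
    user names: it maps a stable identifier to the user's
    current network attachment point. All names and hosts
    here are hypothetical."""

    def __init__(self):
        self._where = {}

    def update(self, user_id, location):
        """Record a new attachment point when a user or device moves."""
        self._where[user_id] = location

    def resolve(self, user_id):
        """Translate a stable name into a current location (or None)."""
        return self._where.get(user_id)

svc = LocationService()
svc.update("user:alice", "host-a.example.net")
svc.update("user:alice", "host-b.example.net")  # the user moved
```

The point of the sketch is the separation of concerns: correspondents hold only the stable name, while the directory absorbs all the churn caused by mobility.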
In this and other situations, a leadership with a mandate could stimulate useful long-term action to achieve needed results. Clearly the naming problem will grow substantially as more addressable (and nameable) devices proliferate; it has been suggested that numerous electronic devices (such as thermostats and stoves, as well as devices explicitly intended for computing and communications) in homes and workplaces could be connected to networks. Some devices may be mobile, some stationary. Thus it is important to remember that names can be used to identify people, devices, locations, and groups.

The migration to a new address space will be a major upheaval that will affect users, network providers, and vendors. Unifying the various network communities (the Internet, the cable industry, and the telecommunications industry) is an additional complex undertaking that will not happen unless there is a clear and explicit advantage. An effort to define a single overarching architecture is the only context in which this integration can be motivated. It will take careful consideration to plan and
implement a scheme that properly resolves such major concerns as an appropriate addressing scheme, interim management actions, and migration plans. This implies that overarching architectural decisions for the NII, such as addressing, must be made in a context with an appropriate long-term vision and architectural overview. Wide-ranging discussion is needed on the requirements for next-generation integrated addressing, with the goal of determining what the scope of this coming address space should be. It is critical that the requirements be appropriately specified. The government and the existing private-sector standards bodies should cooperate to ensure that the resulting decision meets the needs of the future NII. Sustaining a discussion among the players about core issues such as addressing, and then creating a context in which the resulting decisions are actually implemented, is an example of an effort in which government action could have significant payoff.

Mobility as the Computing Paradigm of the Future

We are in the midst of a subtle, but powerful, change in the way individuals access data and information processing systems. End users are now often "on the road" or simply away from their "home" or "base" computing environment when they need access to that environment. That is, the computing infrastructure in this country is rapidly becoming mobile. One-third of all PCs sold today are mobile, a fraction that is rapidly approaching one-half. Moreover, more than 30 million people now have computers at home as well as in the office, and they often conduct business on their home machines. Thus a new paradigm is emerging in which location-independent access to personal and shared data, resources, and services will be the goal of our information processing infrastructure.
Although relatively small systems to support mobility have been built, the complications will increase dramatically with the enormous number of nodes envisioned for the NII. Mobility raises a number of issues that have to do with the software architecture. One problem is the uneven capability of mobile systems. For example, the computers in use will include today's high-performance laptop computers, as well as intelligent terminals with a quality user interface. The available communications infrastructure will vary greatly over time and from place to place, spanning the range from gigabit bandwidth and microsecond delays to nearly no bandwidth and delays measured in hours or days. It is unreasonable to ask that the application handle such variability by itself. Rather, the architecture must provide software that can support mobility. From the user's perspective, the network should present to the application the appearance of a coherent global environment, which does
not depend on the quality of the available communications, and which assures the availability of needed resources. Examples of some of the elements of the underlying system support functions are predictive caching, dynamically partitionable application architectures, and an appropriate naming and addressing scheme. Although issues of mobility arise with or without wireless connectivity, it is important to recognize that wireless versus wireline connectivity has other important repercussions. Issues related to privacy, security, authentication, bandwidth, addressability, and location make it clear that the architecture developed for an NII should not be limited to that of wireline connectivity. A full model of mobility has yet to be developed and is certain to be a critically important area of investigation for the emerging NII.

Management Systems

In a network and service aggregate as complex as that envisaged for the NII, it is critical that network management be attended to properly. The NII will require continuous support to keep it up and running and to ensure that users have a high level of available service. Since the NII is likely to be a conglomeration of many interconnected networks, the network management function must be able to interoperate in a highly heterogeneous environment that is in a continual state of change, growth, and improvement. It is likely that this management function will be distributed across the component networks, a difficult reality with which we must be prepared to deal. This is not a new problem; industry today is paying considerable attention to management systems.

Measurement and Monitoring

Measurement of a network's behavior is important to understanding its functioning.
As the network grows very large and becomes very distributed, the process of data collection, the bandwidth needed by monitoring tools, the requirements for storing and processing the collected data, and so on are all affected. Techniques to detect faults and to diagnose and predict performance are essential, as is the ability to do automated or semiautomated measurement, reporting, and even repair in some cases. However difficult detection, diagnosis, and repair may be in the context of a given network, the difficulty is compounded in an NII environment. Even an occasional packet loss in one network, much less the source of the difficulty, may be hard to pinpoint. Dealing with intermittent
conditions that occur in an environment not administered by a single entity is among the more difficult of the technical challenges to be faced. Another difficult problem is characterizing and controlling the performance of a complex system consisting of numerous independently owned and operated components. In order to monitor the behavior of traffic and users for purposes of billing and activity summaries, it is important that measurement hooks be judiciously placed in the network components (switches, routers, line drivers, and others).

Another critical function that can take advantage of measurement capabilities is network control. Because networks are occasionally subject to disastrous failures that bring down major functions in society and industry (recall the past effects of power failures on Wall Street or of telephone outages on airline reservation systems, air traffic control, or 911 services), it is imperative that controls be placed on network traffic to avoid such catastrophes. The kinds of control the committee refers to (admission, flow, routing, error, and congestion controls, among others) will be designed to react to traffic overloads, network dynamics, network hardware and software failures, and so on. All of these control issues are the subject of current research, and many approaches are being tested in the various networking testbeds.

In addition to its use in network management and control, measurement is essential to evaluating a network's performance. The various measures of performance include response time, blocking, errors, throughput, jitter, sequencing, and others. How measures of these functions affect users' perceptions of satisfactory service is a matter of great interest to the providers of network services and infrastructure.
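One simple, widely used way to turn raw performance samples into a usable indicator is an exponentially weighted moving average, which smooths jitter while still tracking trends. This is an illustrative sketch; the smoothing constant and the sample series are invented for the example, not drawn from any particular monitoring system.

```python
def ewma(samples, alpha=0.2):
    """Yield an exponentially weighted moving average of
    performance samples (e.g., response times in ms).

    alpha is an illustrative smoothing constant: larger values
    react faster to change, smaller values damp out jitter."""
    est = None
    for s in samples:
        # First sample initializes the estimate; later samples
        # blend into it with weight alpha.
        est = s if est is None else (1 - alpha) * est + alpha * s
        yield est

# A transient spike (50.0) moves the smoothed estimate only partway,
# which is the property a monitor wants when deciding whether a
# condition is a fault or mere jitter.
smoothed = list(ewma([10.0, 12.0, 11.0, 50.0, 12.0]))
```

Estimates of this kind are a common building block for the fault-detection and prediction techniques mentioned above, since threshold tests against a smoothed value trigger far fewer false alarms than tests against raw samples.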
Data on measured actual performance form the basis of models that can be constructed to predict the operation of the network over a wide range of system parameter values.

SECURITY AND THE OPEN DATA NETWORK

Securing the Network, the Host, and Information

There are certainly advantages to having a ubiquitous open data highway. The telephone system, for example, is ubiquitous in the sense that attaching to the telephone network makes it possible to reach and be reached by any telephone. Society has come to depend on universal telephone connectivity, and computer network connectivity can be equally useful. However, the very real threats faced when computers are reachable on the network, threats such as theft of data, theft of service, corruption of data, and viruses, must be anticipated. This has led many companies to hide their corporate networks behind special-purpose gateways, such as mail gateways, that allow only a limited set of applications to cross into or out of the network. Another solution has been the closed user group; some companies have used networks to interconnect their sites, but they do not want connectivity between any of their sites and anything else. Many public provider networks provide such functionality.

The openness of the telephone network is occasionally a detriment, as with prank calls, telephone harassment, and unwanted telemarketing. Most people are sufficiently resistant to these annoyances that they tolerate them in exchange for the utility of the network. But computers seem more vulnerable than people, and the effects of compromised computer software can be catastrophic. Computer security is not a solved problem, and virtually all systems are vulnerable to break-ins and viruses.28 A virus can render a computer and all data on it unusable.

Developing a Security Architecture

If the NII is to flourish, we must provide solutions so that any end node attached to the network can mitigate its risk to an acceptable level. Further, the infrastructure must be developed to achieve sufficiently high reliability, and the architecture must protect against system-wide failures. Infrastructure security and reliability must also address the vulnerabilities inherent in wireless technologies.29 The prospect of new services and applications emerging on the NII increases the urgency with which security must be addressed. Electronic commerce will increase the threat of fraud. Accounting for network usage may lead to theft of service. These threats may require better tools for user authentication, which may in turn lead to increased concern about the privacy of on-line activities. Mobility may lead to new location-independent network names, which may raise similar concerns.
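The special-purpose gateways discussed earlier, which pass only a limited set of applications, amount to an allowlist on services. A minimal sketch of that policy follows; the service names, ports, and rule set are hypothetical, chosen only to illustrate a mail-only gateway.

```python
# Sketch of the service-allowlist policy behind a special-purpose
# gateway (e.g., a mail-only corporate gateway). The service names
# and rule set are hypothetical.

ALLOWED_SERVICES = {"smtp"}      # only electronic mail may cross
SERVICE_PORTS = {"smtp": 25}     # expected port for each service

def gateway_permits(service, port):
    """Return True if the gateway forwards this traffic."""
    return (service in ALLOWED_SERVICES
            and SERVICE_PORTS.get(service) == port)
```

The rigidity of such a rule set is exactly the cost noted below: any service not already on the list, however legitimate, is blocked until an administrator adds it.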
Gateways used to restrict network access to known services such as electronic mail will prevent the introduction of new services and applications. All of these matters will arise as the network evolves in the next few years, and all must be addressed now.

The committee sees security as another area in which active governmental involvement can materially advance the state of the NII. As part of its overall support for the NII, the government should foster the development of a security architecture.30 This architecture should provide for mechanisms that protect against classic security threats (to the confidentiality, integrity, and availability of data and systems) as well as against violations of intellectual property rights and personal privacy. The security architecture must include technical facilities, recommended operational procedures, and means for recourse within the legal system.

The best-developed security architecture is that used by the
Department of Defense for the protection of classified information. This model does not seem adequate for the full range of problems to which the NII will be subject, and research on a broader spectrum of security problems is needed. Currently, the federal government, particularly the Advanced Research Projects Agency (ARPA), is funding research in robust, available networking. The committee supports this approach and urges other agencies such as the National Science Foundation (NSF) to establish an explicit program in this area.31

Three elements of security that are not well characterized at present, and that seem critical to achieving success in the NII, are as follows:

• First, effective protection of data and systems will require the use of secure "walls" to separate network functions and service offerings that are expected to be accessible from those that are not. The network must allow information providers to determine the degree of access that will be permitted to their works. The architecture must allow these walls to be constructed so that controlled access through the walls can be implemented.

• Second, technology will have to aid in protecting data integrity. It is critical for information creators to know that what they produce is what network users get, and for users to be assured that what they are getting is what they think it is. There must be protection against the dissemination of a work altered without authorization. A technological means is needed for "certifying" the authenticity of the data, so that users are able to choose sources of information with a reasonable degree of confidence.

• Third, the network itself must provide the reliability (availability) that is critical to the delivery of any higher-level service.
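The "certifying" of authenticity named in the second element is commonly built on a cryptographic message digest: a fixed-length fingerprint that changes if the work is altered in any way. A minimal sketch using Python's standard library follows; the sample data and the out-of-band publication of the digest are assumptions for illustration, and a digest alone shows that data are intact, while proving who produced them additionally requires a signature, discussed later in this chapter.

```python
import hashlib

def digest(data):
    """A fixed-length fingerprint of a work; any alteration, however
    small, changes the value."""
    return hashlib.sha256(data).hexdigest()

original = b"the work as released by its creator"
published_digest = digest(original)     # distributed out of band

received = b"the work altered without authorization"
print(digest(received) == published_digest)   # False: alteration detected
print(digest(original) == published_digest)   # True: intact copy verifies
```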
Today, users view the Internet as reasonably stable, but it is known that the network can be seriously disrupted by abuse from an end node, either by gross flooding of the network with traffic or by the injection of false control packets that corrupt the internal state of the network, for example, the routing tables. There is evidence in the network communities of increasing concern about security, concern heightened by reports of incidents and abuses.32 For the last several years, all proposals for Internet standards have been required to include a discussion of their impacts on security. Although this requirement does force the community to notice in passing the issue of security, the committee urges an even stronger emphasis on and expectation for security requirements in key network protocols.

Security Objectives and Current Approaches for Reaching Them

It is often assumed that security problems can be solved by the creation and deployment of new security technology. While some technological means exist that currently are not fully exploited, the problem is not just technical. Ensuring security requires attention to operating procedures, user attitudes and values, the policy and legislative context, and a range of other issues.33 The following sections discuss existing technology in the larger context of particular security objectives.

Computer System Protection

It is possible to imagine that individual computers and operating systems could be made secure, but current practice limits the ability of today's products to offer robust protection. Any host attached to a network such as the Internet must assume that it is vulnerable to an attacker who can penetrate the computer. Although attacks are not a common problem on the Internet, their occurrence must be anticipated.

Apart from the assurance of better end-node security, the current approach to ensuring computer system protection is to interpose a computer in the network that restricts the range of access. One method is to use a mail relay, which certainly restricts the range of access; unfortunately, it does this so well that it prevents applications other than mail from succeeding. The other, use of a router that forwards packets only between a certain set of hosts, can permit a wider range of services but must be combined with some flexible approach to specifying the hosts that are permitted to communicate. Although such router technology exists, there is no generally accepted architecture for administering the control that it can provide.

Protection of Information in the Host

Given that system penetration cannot be totally prevented, it is necessary to have some means of preventing damage to information in the host as a result of penetration. To protect against loss, the best method is the old-fashioned one of backup to detached media.
This approach protects from both attack and physical failure and should be a standard procedure for almost any computer user. Protection against disclosure and corruption is more subtle. Disclosure can be protected against in several ways. One, the model used in the military and intelligence communities, is to implement, as part of the system, an inner core of protected mechanism (a kernel that monitors access to user information) that is intended to survive during a system penetration and that will maintain control of access to the information. Another model, so-called discretionary control, typically involves user authentication and access control lists to identify the users permitted to access information objects. This sort of control is widely used in systems
today, but many in the security community consider the typical access control list mechanism to be less robust than the security kernel.

A final means of protection, which does not depend on system software, is encryption of the information when it is not being used. This method is effective, but with two limitations. First, the information must be transformed to its unprotected form to be used, and this opens a period of vulnerability. Second, encryption depends on having and protecting the encryption key, which is not easy. If the key is stored in the computer, then it is no more protected than the original file was. If it is stored outside the computer, it may be forgotten or compromised by physical means, such as by being observed on the paper on which it is written. Without care, an encryption system becomes no more secure than a password system. The committee recognizes that the various uses of encryption are current matters of government policy discussion.34 Such matters as export control, and whether keys must be made available to the government, will have an impact on the ways in which encryption is employed.

Protection of Information in the Network

As commercial traffic grows on the Internet, so also will the temptation to attack that traffic. The only effective means of protecting information in the network is encryption. There is no direct equivalent of the military security kernel that can protect data even in the event of a penetration. Encryption makes data unreadable to attackers without the encryption key: thus it protects against all the gross forms of attack and provides a clean separation between those who should have access and those who should not. Again, the problem is protecting the key used to transform the information, but the difficulty is greater in the network because the key must be shared between the sender and the receiver.
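The observation that an encryption system can degenerate into a password system can be made concrete: in modern practice the key is often derived from a passphrase the user remembers, so the key need never be stored but is exactly as strong as the passphrase. A sketch using the standard-library PBKDF2 function follows; the passphrase, salt handling, and iteration count are illustrative assumptions, not a prescription.

```python
import hashlib
import hmac
import os

def derive_key(passphrase, salt):
    """Derive a 32-byte encryption key from a passphrase; the key itself
    need never be stored, only recomputed when the data are used."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = os.urandom(16)                 # stored alongside the encrypted file
key = derive_key("correct horse battery staple", salt)
again = derive_key("correct horse battery staple", salt)

print(hmac.compare_digest(key, again))   # True: same passphrase, same key
print(len(key))                          # 32
```

The salt and the deliberately slow iteration count raise the cost of guessing, but a key derived this way is still no stronger than the passphrase behind it, which is precisely the point made above.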
There are two solutions to the problem of protecting the key: a systematic way of managing shared keys, and "public-key" systems. Among a small group of cooperating users, it is reasonable to imagine a reliable way of sharing keys off-line, such as mailing them printed on paper. But this approach is neither fast nor easily scaled to global communication. Public-key systems, sometimes called asymmetric systems, attempt to reduce the problem of key distribution by implementing a scheme in which separate keys are used to encrypt and to decrypt. This separation means that the encryption key can be widely publicized (e.g., listed in name servers), while the decryption key is kept secret. Thus, anyone can encrypt a message, but only its intended receiver (with the secret decryption key) can decrypt it. This technique seems a very powerful way to address the problem of keys.35 It does not entirely eliminate the concern with keys, however. For example, the decryption key can still be stolen,
or the sender can be tricked into using the wrong encryption key, so that the wrong recipient can read the message.

Authenticating Users

Most security controls depend on some way to verify users' identities: the user authentication process. The need to identify and authenticate users exists at many levels of the security architecture. At the network level, it is necessary to identify users for purposes of accounting and billing. At the system level, it is necessary to authenticate users so that they can be given the proper authorization. The traditional access control list implies that the system can know which user is making a request, so that the proper entry on the list can be evaluated. Network applications such as network mail would also benefit from a system that would allow a mail recipient to verify the sender's identity. Systems with this capability, such as the Internet's Privacy Enhanced Mail, are just now being deployed.

The most common method for authenticating users is to demand a password from the user as proof of identity. Although it is well understood, this scheme has a number of weaknesses, both human (people forget the password, pick a password that is easily guessed, or write it down in a visible place) and technical (some network technologies make wiretapping very easy, which facilitates casual theft of passwords, as a recent rash of attacks on the Internet has illustrated).36 Today, systems are being deployed that use cryptography rather than simple passwords to provide a more robust facility. These systems are beginning to be used on the network, but they are not yet widely integrated into commercial systems.

Control of Authorized Users

While encryption can be used to keep data from the hands of unauthorized users, encryption in its basic form offers no limits on what an authorized user can do.
Once the user has the key to reveal the contents of a file, there seem to be no limits on what that user can do with it. In particular, the user can copy the information, change it, and give it to other users. There is no identified technical means of preventing this copying without unreasonably limiting the primary use of the information, although technology is under development in this area. However, encryption can be used to achieve some specific capabilities. Most importantly, cryptography can be used to provide a certificate, a digital signature, which attests that a file's contents have not been corrupted. Such a certificate cannot be forged, and so it provides assurance from the creator that a file is intact.
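A digital signature of this kind can be sketched with textbook RSA: the creator signs a digest of the file with the private exponent, and anyone can check the signature with the public values. The parameters below are the classic toy values (p = 61, q = 53), far too small for real use; this is an illustration of the mechanism, not an implementation.

```python
import hashlib

# Classic textbook parameters: p = 61, q = 53, n = p*q = 3233,
# phi = 3120, public exponent e = 17, private exponent d = 2753.
n, e, d = 3233, 17, 2753

def sign(data):
    """Creator's operation: only the holder of d can produce this."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data, signature):
    """Anyone's operation: needs only the public values n and e."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"contents of the distributed file"
s = sign(doc)
print(verify(doc, s))   # True: the file is intact
# Altering the data changes its digest, so the signature no longer verifies.
```

Because only the holder of the private exponent can produce a value that checks out under the public one, the certificate cannot be forged, which is the property the text relies on.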
Taking a Comprehensive Approach to Ensuring Security

Technology provides some effective tools for use in building a secure and effective system, but it is not the whole answer. Technology must be made part of a total approach that includes a set of operating assumptions and controls that allow users to make reasonable decisions about the operation of a system. If, for example, all users passing along a piece of information were required to attach an additional certificate asserting the integrity and authenticity of the contents, then presumably there would be an incorruptible trail from any receiver back to a source that claimed to be the original creator. Such a trail would not prevent data theft and corruption, but it would provide evidence of the event. However, this control would be effective only if users understood that they should not be party to receiving data without a certificate. Such a significant departure from current practice would doubtless generate great resistance. But implementation of these practices by major vendors of software for information handling might help the scheme to succeed.

The committee concludes that progress in the area of trustworthy and controlled dissemination of information does not depend primarily on technology but rather on the development of an overall model, or architecture, for control, as well as on education and public attitudes that promote responsible, ethical use of information, and on associated regulation and policy. Although this model can make effective use of technology components, it will derive its strength from its acceptance by the community. At the same time, there is still a strong need for research and development in the security area, both to develop new technical concepts in key areas and to explore alternative approaches to architecture and operations.
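The certificate trail described above can be sketched as a chain in which each party's certificate covers the data, its own identity, and the entire trail so far, so that any later alteration of the data or the trail is evident on recomputation. The handler names and data below are hypothetical, and real schemes would use signatures rather than bare hashes so that each link also proves who attached it.

```python
import hashlib

def attach_certificate(trail, handler, data):
    """Each party passing the information along attaches a certificate
    covering the data, its own identity, and the entire trail so far."""
    prev = trail[-1]["cert"] if trail else ""
    cert = hashlib.sha256(data + handler.encode() + prev.encode()).hexdigest()
    return trail + [{"handler": handler, "cert": cert}]

def trail_intact(trail, data):
    """Recompute every certificate from the claimed data; any alteration
    to the data or the trail breaks the chain."""
    rebuilt = []
    for record in trail:
        rebuilt = attach_certificate(rebuilt, record["handler"], data)
    return rebuilt == trail

doc = b"the work as released by its creator"
trail = attach_certificate([], "creator", doc)
trail = attach_certificate(trail, "distributor", doc)
print(trail_intact(trail, doc))              # True: the chain checks out
print(trail_intact(trail, b"altered work"))  # False: tampering is evident
```

As the text notes, such a trail does not prevent theft or corruption; it only makes the event evident after the fact, and only if receivers insist on seeing the certificates.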
FINDING AND BALANCING OPPORTUNITIES TO BUILD TOWARD CONVERGENCE

As the committee has pointed out in Chapter 1 and above, a diverse but interrelated set of expectations and requirements is driving current efforts aimed at developing a U.S. national information infrastructure. One vision of the NII, developed around the notion that it will provide citizens access to the information they need over a ubiquitous national network, derives from the observed success of the Internet and includes open access and a range of services. Another, the combined entertainment, telephone, and cable TV (ETC) vision, emphasizes access to a large number of television channels providing video entertainment and other related services for which there is perceived to be a large market that promises considerable commercial success. Harmonizing these visions
conceptually and technically will require finding specific points at which convergence can be encouraged. Competing needs and interests will have to be balanced as we take the opportunities at hand now to begin to construct an open NII that will serve the nation well into the future.

Development of Standards for Television: An Example

A particularly important opportunity exists in the development of new standards for television, including high-definition television (HDTV). The move to HDTV represents a move from the old analog standard of NTSC video to a digital standard for coding and transmission. A key standards issue is how to relate methods for HDTV coding and transmission to the broader matters of direct importance to the NII.37 There is no question that video will be an important component of the emerging NII, and it will be packaged and transmitted along with data and other digital information according to various standards. If broadcast channels (including those that are now envisioned to be made available only for HDTV) were to support a wide range of applications including, but extending well beyond, HDTV, then they would become general-purpose digital distribution channels, which could couple at their destinations into networks such as LANs that are designed to transport the information further.

Although the current HDTV architecture is layered, it is not really modular; that is, the interfaces between layers are not defined with the goal of allowing different layers to be replaced with others as technology and distribution methods evolve and/or dictate. The main goals of the current HDTV design were limited to the primary issues of concern: to support television with higher resolution and to use broadcast television channels efficiently with high-resolution signals.
It is important to expand these goals in light of the emerging NII and its applications. The same considerations apply in the nearer term to the evolving cable TV infrastructure. At present, digital TV channels on next-generation cable TV systems are assumed to be tailored to the delivery of television using the MPEG-II standard. Unless the need is recognized for a general service capable of delivering digital information of many forms, the details of the cable infrastructure may become so targeted to MPEG-II that no other information can be added to the system without extensive modification.

The economic forces that shape the reconstruction of the infrastructure are very compelling. Generality in a system almost always adds to cost. The future television standards represent an excellent example of the tug-of-war between generality and specificity. If the future standards
for video dissemination are specifically engineered for the delivery of TV only, cost optimizations may result that are in total very substantial, due to the large number of expected end nodes and due to economies of specialization; such economies support an argument for multiple outlets or interfaces specialized to individual applications. But such a monetary cost reduction might come at the opportunity cost of preventing any of the broader goals of the NII (notably integrated service access and delivery) from being implemented over this infrastructure. This, surely, would be a greater cost. Nevertheless, the television, cable, and related entertainment industries cannot be expected to take on themselves the objective of engineering into their systems extra cost to meet goals that lie beyond their objectives.38 It thus seems likely that market forces alone will not produce an access infrastructure that can accommodate diverse uses over time.

Reengineering of the Nation's Access Circuits

Events as they are currently developing suggest that the cable and telecommunications industries vary widely in their recognition of the vision of an integrated NII as articulated by the committee. Although some isolated service offerings are very exciting (such as the proposal from Continental Cablevision and PSI to offer Internet access service),39 there is no consensus as to either the importance or the viability of a broadly useful NII. A key ingredient in this lack of consensus is that there is no widely agreed upon method as to how, or if, cable infrastructure will attach to the desktop computer in the home, to the data networks that exist (or are being planned), to business environments, and so on. This is an urgent matter to resolve in the context of the NII.
It is particularly important in the context of standards and interfaces for consumer equipment, which is very difficult to upgrade or replace once it is widely deployed, as the example of NTSC, the analog coding standard for broadcast television, makes clear. The practical impossibility of replacing all the televisions in the country is causing the NTSC standard to have an impact and consequences far beyond its technical merits.

A major example of commercial interests moving ahead very quickly without a context in which to address broader public interests is now in the making, namely, the currently planned reengineering of the nation's access circuits. Articulation of public interests, preferably with direct government involvement, could help to shape these developments. The committee recognizes that, as this report was being written, the Technology Policy Working Group of the Information Infrastructure Task Force was contemplating relevant activities.
Cost and Function in Access Circuits

One of the key issues in the argument over the reengineering of access circuits is the potential added cost of providing reasonable bandwidth going away from the home, the so-called "back" or reverse channel. There are several apparent trade-offs in this area. One of the committee's key objectives is that the overall structure of the NII not act to preclude anyone from playing a particular role in offering a high-level service. For example, it should be possible for any user to become a provider of information on the network. This objective is an issue because in many cases producers and consumers have asymmetric needs for bandwidth; the high-bandwidth path runs from the producer to the consumer, with only a low rate of data transmission in the reverse direction. Today some networks provide symmetric data paths, whereas in other networks the paths to and from the user are very different. For example, the telephone network provides symmetric bandwidth to and from the user, while the cable networks provide vast bandwidth to the user but little or no return communication. Such asymmetry is a potential limitation to the development of an open NII, because it means that becoming a producer of information (whose communication into the network requires large bandwidth) is possible only by entering into a special agreement with the network provider. Normal users can only be consumers of information, and the network provider controls access to what the producers provide.
The objective of allowing for equal two-way communication must be balanced with the technical and economic problem that symmetric channels may not be cost-effective in many situations; a related issue is to assess the costs and benefits of back channels of varying sizes.40 Engineering high-bandwidth reverse channels into an existing cable TV system would add substantially to the system costs. These costs are hard to justify in immediate practical terms, since many access paradigms are asymmetric at the application level. For example, a great deal of asymmetry is built into the architecture of the popular client-server model, and access to certain archival library databases implies asymmetry as well. On the other hand, peer-to-peer applications can often exploit symmetrical channel access. It is important, however, to recognize the value of introducing into asymmetric networks the potential for symmetry, even if that capability is not universally deployed. That is, the overall architecture should provide for symmetry although individual applications need not. Indeed, the issue should not be choosing asymmetry as opposed to symmetry, but rather determining how to provide flexibility so that customers can purchase the appropriate degree of asymmetry.
Options for Incorporating the ODN Bearer Service

There are a number of different options for incorporating the committee's concept of the open bearer service into an access network intended primarily for some particular application, such as delivery of cable video. The five patterns of integration discussed below, which have very different implications for cost and functionality, are ordered to illustrate increasing degrees of flexibility, with delivery of video used as the example of a core application of the network.

• Option 1: The undesirable option, which allows for no integration, is installed technology that is useful for only one purpose, in this case the delivery of video. Cable systems today are often this limited, offering no options for expansion of service. Future systems might also be designed for this single function if there is no broader vision of the range of services to be offered.

• Option 2: The system is designed to carry two separate services: multiple channels of video and the open bearer service. Some of today's cable systems have small additions (like bidirectional amplifiers) that allow a completely separate service such as the one Continental Cablevision and PSI are proposing to offer in Cambridge, Massachusetts. The two separate services cannot see each other, but the second could grow to be NII compliant in its own right.

• Option 3: The idea of separate services for video delivery and the open bearer service becomes more interesting for future systems with digital video encoding. As the delivery systems migrate from NTSC encoding of video to digital encoding, such as MPEG-II, the video becomes more amenable to processing by both specialized and general digital processors.
While the digital video may be delivered over the network in a highly specialized and cost-effective manner, the encoding can be defined (as MPEG-II is) so that it has at least one form that separates the actual video information from the details of the delivery method. A capability for converting the video into this format at customers' premises allows it to be transferred to and processed by other end-node elements such as general-purpose computers. This arrangement is a very powerful one: on the one hand, it does not affect the coding or delivery of the presumed high-volume information, which can be delivered in whatever cost-reduced manner the industry prefers; on the other hand, it simultaneously supports a general bearer service and provides a way to move the video into that more general format as needed at the end node. This report suggests that this combination be used as an example of a step that can be taken to realize the vision of an open and flexible NII.

• Option 4: Further options for integrating the delivery of video and the general bearer service would more directly couple the two, but
might also raise issues of cost and the utility of the network technology. One step would be to use the ODN bearer service as the delivery framing for cable TV. While some would argue that a properly defined bearer service could be used for a specific high-efficiency situation such as cable delivery of video, the uncertainty about cost impacts represents a significant issue that could impede this step. However, this step is not necessary in order to achieve an integrated ODN. Once the costs are understood, this degree of integration might make sense. This is a decision for industry to make.

• Option 5: Deploying the ODN could be further facilitated by designing an access network technology specifically intended to reduce the costs of the core service (in this instance, digital video delivery) and at the same time provide a carefully designed general, bidirectional ODN-style bearer service. The TV would ideally use an open bearer representation if costs permit, but this detail is not critical to the success of the effort. In the section "Research on the NII" at the end of this chapter, the committee notes a specific research and development effort that could be funded to move toward this goal.

The committee believes that a truly open NII cannot be achieved unless the cable TV and/or telephone access circuits are reconstructed in a way that supports the services of both the entertainment and information sectors. Technically, what is needed is a bearer service based on the concept of bidirectional digital packet switching. Such a service would permit the objectives of the ODN to be supported at the same time as other services dedicated to the entertainment sector.
Need for Government Action in Balancing Objectives

Because there will be real incremental costs in engineering access circuits to the home to meet broad NII objectives of the kind expressed in the Open Data Network,41 the committee concludes that this is a time when the government, by direct action, can materially influence the course of the NII. At a minimum, the government could urge that future access circuits to homes be implemented in a way that supports the vision of an integrated NII, although, where real incremental costs are involved, simple urging will not be effective. At the other extreme, the government could mandate that all future access circuits support NII objectives, based on some specific set of service requirements. Such a mandatory action would preserve the concept of equitable cost for competitive providers, as occurs, for example, with various communications equipment features required by the Federal Communications Commission. However, in the current climate of limited regulation and increased
dependence on competitive forces to shape industry, the tack of direct government mandate may not be feasible, and such precedents may not be followed. The committee thus seeks a middle ground for government action, in which government actively works with the affected industries to define a suitable structure of economic incentives that serve to motivate the installation of a national infrastructure with appropriate characteristics. The debate about next-generation television, entertainment networking, and the emerging reconstruction of the nation's access circuits has reached a crucial point: what follows may be either a departure from a broader NII, or a striving for interoperability with it. The committee strongly urges the latter approach, and it urges the government to recognize the opportunity at hand and to act on it.

ACTING NOW TO REALIZE A UNIFIED NII

There is a definite role to be played by government in the pursuit of an open and flexible national information infrastructure, especially as commercial providers begin to figure ever more dominantly in the deployment of network technology. Left to their own devices, commercial providers will properly serve the markets that offer them growth and profitability. Planning for change, providing a general and open base-level service, and using and supporting open standards all may increase the cost of network deployment and thus may be at odds with commercial plans targeted to essentially one product, such as video delivery.

It is difficult to judge the degree of tension between the committee's vision of an open network and current commercial deployment plans. The true economic costs of openness cannot be judged by assessment of today's equipment, which was not designed to this end.
The intrinsic support from the market for an open NII cannot yet be judged; this committee has assessed the experience of the limited communities (such as the academic research community) that have had real exposure to its vision, but it notes the issues in scaling from those limited communities to a larger society. Finally, although it can discover anecdotal evidence that the tension exists, the committee necessarily has a limited ability to assess marketing plans and commercial projections. The tension that is evident is more a question of degree than a choice between two poles. Nonetheless, it is a tension that must be recognized and rationalized if a coherent NII is to come into existence.

In stating its approach, the committee recognizes that in the long run, it is the market, and not the force of government, that will determine whether the vision of an open NII is relevant, useful, or successful. In attempting to identify those steps that could lead to the realization of an integrated NII, the committee notes at the same time the need to minimize the impact of guessing incorrectly. It is the committee's belief that, with proper technology development, the key steps that will avoid precluding its vision can be taken now at tolerable costs. The committee urges the government to articulate a vision in this area and to work to ensure that industry will take this vision into account in its activities and development plans. Only if there is an accepted definition of NII compliance, and policy that encourages that compliance, will there be an effective way to accomplish a common vision of the future networks.

RECOMMENDATION: Technology Deployment

The committee recommends that the government work with the relevant industries, in particular the cable and telephone companies, to find suitable economic incentives so that the access circuits (connections to homes, schools, and so on) that will be reconstructed over the coming decade are engineered in ways that support the Open Data Network architecture. The term "engineering" refers to the process of choosing what equipment to deploy, when to deploy it, and in what configuration to deploy it so that customer needs are met at least cost.

The committee concludes that a national infrastructure capturing the ODN architecture will not be widely deployed if competitive forces alone shape the future; deregulation, along likely lines, will not be sufficient to guide the development and deployment of the ODN architecture. While anecdotal, numerous comments from inside the cable and telephone industries suggest that the perceived costs of adding the features that support openness are discouraging the necessary investment in the current competitive climate. The committee therefore concludes that these features will not be incorporated in the evolving national information infrastructure without policy intervention.
Needed now is direct action by government to ensure a planned, coordinated start to deploying the access circuit technology for the NII.

RESEARCH ON THE NII: ENSURING NECESSARY TECHNICAL DEVELOPMENT

This chapter touches on a number of areas in which continued research is required to realize the NII. The strong traditions of academic and industrial research that have led to U.S. leadership in telecommunications and networking must be continued and expanded if a truly national information infrastructure is to come into existence. Significant technical issues remain to be addressed in the development of the NII; achieving the "Information Superhighway" is more than just a matter of policy, legislation, and regulation. It would be easy to conclude, from the great success of the Internet, that all required network research has been done. Such a conclusion would be very destructive to the leadership role played by U.S. industries and universities in all areas of technology related to information infrastructure. This committee believes that research, both experimental and basic, is essential to the future success of the NII and to national competitiveness. Chapter 6 summarizes the many ways in which the government can have an impact by continued involvement in the nurturing of research. In this section, the committee reviews specific technical areas in which work is required to fulfill the vision of an integrated NII. The section has been written to be self-contained and therefore repeats some concepts advanced in earlier sections. For NSF to undertake support for the range of topics identified below would imply a significant expansion in the modes and paradigms for research that NSF has traditionally recognized. The list includes a number of specific research topics that would naturally fit into the traditional pattern of NSF proposals. It also identifies architecture studies and testbeds as research objectives. Architecture studies are often larger, more diffuse, collaborative, and less easy to define and to evaluate in advance. Testbeds, especially the virtual testbeds that have been built on the Internet, are again very collaborative efforts among workers at many sites, and they involve coordination and collective setting of direction as much as they do funding. Testbeds have been actively supported by ARPA, sometimes with NSF collaboration.
A greater testbed effort by NSF would imply a departure from its normal pattern of funding, which involves the submission and evaluation of individual proposals from various sites and does not naturally lead to the required degree of direction setting, coordination, and architecture leadership.

Research to Develop Network Architecture

The Open Data Network architecture is a plan that defines the integrated NII's key aspects and how they fit together. The ODN architecture must be developed, a requirement that implies more than a program to study a series of technical issues. What is needed is a fitting together of all these pieces into an integrated concept that drives the whole development. This effort represents a research program in its own right and is perhaps the most important of the tasks that will lead to the NII. The Internet is based on such a framework, which was first developed in the 1970s as an ARPA-funded research program. That research on architecture defined the key principles of the Internet: how the functions were divided into layers, how functional responsibilities were divided between the host and the network switches and routers, where conformance to a single standard would be mandatory, and so on. Similar architectural planning underlies any coherent infrastructure such as the telephone system, and it will definitely be required for the larger and more complex NII. Network research today has tended to stress the issue of higher speed. The most obvious examples are the gigabit testbeds, jointly funded by ARPA and NSF, which have accelerated the deployment of high-speed network technology and stimulated the drive to higher-speed applications. However, the more pressing problems for networking tomorrow are issues of scale and heterogeneity, rather than speed. A system built so that it can scale to a large size is perhaps the most basic consequence of a successful ODN architecture. Problems of addressing and routing, of decentralization of management and operations, of dealing with heterogeneity, and of providing secure and trustworthy service are all issues that must be addressed in an overall architecture plan. The government must support research into general and flexible architecture as a keystone of its NII research.

Defining the Bearer Service

Definition of the ODN bearer service is an example of the sort of issue that arises as a part of architecture planning. One of the starting points for the ODN bearer service described by the committee was the IP protocol from the Internet protocol suite. However, as noted above in this chapter, even in the restricted domain of the Internet (as compared to the broader NII) the current IP services must be extended to meet emerging needs in areas such as explicit quality of service (QOS). To address this issue, it is not sufficient to do research in how to implement QOS.
The harder problem, now being addressed for IP in the relevant IETF working groups, is how to fit these concepts into the overall framework of assumptions that define the IP protocol. The most important architectural issue is to balance two objectives: adding flexibility to IP to make it more useful, while keeping it simple and uniform, so that the protocol can be implemented in a consistent manner across a wide range of lower-level technologies. The power of IP is that it can be made to work over almost any network technology. Once the technology independence is augmented with complex requirements for QOS, this power may be lost. Balancing technology independence with the need for QOS is the essence of architecture research. Presumably, the bearer service for the NII will have to reflect the integration of issues even broader than the issues associated with the next-generation IP. Developing the bearer service will require an understanding and balancing of a number of key technical factors. The outcome of this effort must be a scalable design that provides the needed isolation of the bitways below it from the information services above it; it must also be designed in a fashion such that a full range of QOS requirements can be met, including some that we cannot yet anticipate. A directed research program toward this end would be a major contribution to the technical accomplishment of an open NII.

Issues for the Lower Levels: Scale, Robustness, and Operations

The ODN architecture must, of course, incorporate a number of specific technical developments, each of which must be explored as part of the overall NII development. Summarized below are a number of issues, also raised earlier in this chapter, that are certainly under study today but are not yet sufficiently understood to enable meeting needs of the future.

Addressing and Routing

The issues of naming, addressing, and routing are among the most central to the success of a large-scale open NII. Both at the lower levels, where bits are being delivered, and at the higher levels, where services are being invoked, meaningful names must exist for the entities in question in order to make use of them. Without telephone numbers, one cannot call. Most successful network architectures, including the telephone system and the Internet, are struggling with the problem that the naming plan did not provide enough names to support the actual growth of the network. Even more important, perhaps, is the problem of routing. As the network grows larger, and the range of services grows more complex, the difficulty of finding the location of, and a route to, all the named objects increases.
This is a topic of much research at the present time, but there is as yet no clear consensus as to the correct approach, taking into account all the real issues: decentralization of management, competing providers, mobility of end nodes (both computers and people), multicast delivery, worldwide scope, and so on. Decentralization adds a substantial degree of complexity to routing. The various providers in the NII will each wish to make local assertions as to the sorts of traffic they will carry, and for whom. These various local decisions must be combined into a self-consistent route before any traffic can actually flow across the network. The problem of finding routes becomes even more complex when QOS is taken into account; the suitable route may depend on the details of the QOS specification, for example, a specific bandwidth requirement. When all these concerns are combined with the objective of establishing routes quickly, so that traffic is not delayed during route setup, the overall problem can be quite daunting. There is a new generation of applications being deployed on the Internet, including audio, video, and shared work-space tools for multiparty teleconferencing, which depend for their operation on multicast (the ability to deliver information from a source to a set of recipients, instead of just a single recipient). Multicast makes the problem of finding good routes much more complex, since the range of options is greatly expanded. One approach is to build a separate route from each source to all its intended destinations. This approach is computationally complex but potentially the most efficient. Alternatively, one could build a single tree of routes that reaches from a known central point to all the destinations, and then allow any source to send to that central point as a way to accomplish multicast. This alternative is much easier to implement but is potentially highly inefficient, both in use of bandwidth and in extra network delay. Multicast is becoming a very important feature of the Internet, and research into its effective implementation is critical. Even in the Internet there is concern today that the system has scaled to the point that the complexity of managing the routing at the global level will exceed current capabilities. Recognition of this problem helped to motivate the recent NSF award for routing-related research in connection with developing a routing arbiter function. A new generation of software tools will be needed to control and operate the even more complex NII.
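The trade-off between the two multicast approaches above, a separate tree per source versus a single shared tree, can be made concrete with a small sketch. The topology, node names, and link costs below are hypothetical illustrations (not drawn from the report); the sketch simply compares the delivery cost of a shortest-path tree rooted at the source against a shared tree rooted at a central "core" node that the source must first reach by unicast.

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra's algorithm: return (dist, parent) for a tree rooted at root.

    graph: {node: {neighbor: link_cost}}; costs are assumed symmetric.
    """
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

def tree_cost(graph, parent, receivers):
    """Sum the cost of the tree edges actually used to reach the receivers."""
    edges = set()
    for r in receivers:
        node = r
        while parent[node] is not None:
            edges.add((parent[node], node))
            node = parent[node]
    return sum(graph[u][v] for u, v in edges)

# Hypothetical topology: a source, a candidate core, and two receivers.
graph = {
    "src":  {"core": 2, "r1": 1, "r2": 1},
    "core": {"src": 2, "r1": 1, "r2": 1},
    "r1":   {"src": 1, "core": 1},
    "r2":   {"src": 1, "core": 1},
}
receivers = ["r1", "r2"]

# Option 1: a separate shortest-path tree rooted at the source.
dist_src, parent_src = shortest_path_tree(graph, "src")
per_source = tree_cost(graph, parent_src, receivers)

# Option 2: a shared tree rooted at the core; the source first sends to the core.
dist_core, parent_core = shortest_path_tree(graph, "core")
shared = dist_src["core"] + tree_cost(graph, parent_core, receivers)

print(per_source, shared)
```

On this toy topology the per-source tree delivers at cost 2 while the shared tree costs 4, illustrating why per-source trees are potentially the most efficient even though computing one tree per source is more expensive than maintaining a single shared tree.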
Quality of Service

The NII will support a diversity of applications ranging from current Internet services such as e-mail, file transfer, and remote login to such new applications as interactive multimedia video conferencing, transmission of medical images, and real-time remote sensing. Each of these applications has different requirements for such QOS measures as delay, throughput, and reliability. Research is needed to understand how to design network components and interfaces that can provide the wide range of QOS that will be required in the NII. This work includes the design of specific mechanisms such as queue management and admission control, as well as the specification of general service models that the application can use to take advantage of these mechanisms.
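One of the simplest queue management mechanisms of the kind mentioned above is strict-priority scheduling among service classes. The sketch below is a minimal illustration under assumed class names ("voice", "bulk") and an invented API, not a mechanism proposed by the committee; it shows how a delay-sensitive class can always be served ahead of a delay-tolerant one.

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority queue management: a delay-sensitive class (e.g., voice)
    is always served before a delay-tolerant one (e.g., bulk file transfer)."""

    def __init__(self, classes_in_priority_order):
        self.order = list(classes_in_priority_order)
        self.queues = {c: deque() for c in self.order}

    def enqueue(self, service_class, packet):
        self.queues[service_class].append(packet)

    def dequeue(self):
        # Serve the highest-priority class that has a packet waiting.
        for c in self.order:
            if self.queues[c]:
                return c, self.queues[c].popleft()
        return None  # link idle

sched = PriorityScheduler(["voice", "bulk"])
sched.enqueue("bulk", "file-1")
sched.enqueue("voice", "sample-1")
sched.enqueue("bulk", "file-2")
print(sched.dequeue())  # the voice packet is served first
```

A real design would pair such a scheduler with admission control so that the low-priority classes are not starved when the high-priority class is heavily loaded, which is exactly the interaction between mechanisms and service models that the text identifies as a research need.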
New Approaches to Transport Protocols

TCP has been the default transport protocol of the Internet suite. However, it is not suited to all situations and all applications. TCP is designed to ensure totally reliable delivery. If it cannot deliver the information without error, it will not continue to transfer any information at all. However, some applications can tolerate errors. Audio and video for teleconferencing do not require totally error-free delivery. A momentary disruption in the transmitted stream is preferable to a total suspension of delivery. TCP as defined cannot provide this service. Multicast, again, implies new issues for the transport layer. If one is sending to a set of receivers and an error occurs that disrupts transmission to only one of the receivers, the options for resolving the error are expanded. The transmission to all the destinations can be suspended until the error to the one is fixed, or the one receiver can be restored while communication to the others continues. There may be more relaxed models of reliability that make sense in certain applications, and there may be more options for restoring the state of the receiver. Today, these cases are covered at the transport level by building into the application a special transport protocol that is custom designed for the application. There have been a number of proposals for a general framework for reliable multicast, but these have proved somewhat controversial, and no consensus has emerged. Research to generalize these ideas and propose a new generation of transport protocols would be very valuable.

Network Control Functions

Network controls are needed so that a very large and decentralized NII will be able to react to traffic overloads, network dynamics, and hardware and software failures.
In addition to the routing function discussed above, a number of other network control functions will have an impact on the success of the NII. For example, when a user specifies a needed QOS, the network must decide if that request is one that should be accepted or not. If, for example, no route exists that can support the requested QOS, then the admission control function will reject the request. On the other hand, a feasible route may be available that satisfies the QOS request, but the cost may be unacceptable to the user, in which case the request may be rescinded by the user; this is a form of self-imposed admission control, and it requires more interaction with the user at a high level. Once a call is accepted, the network has the responsibility of providing the agreed-upon QOS. In order to accomplish this, the network must "monitor" the traffic submitted by the user and must determine if that user is delivering no more than the level of traffic that was negotiated at the time of call acceptance. A number of congestion control methods exist that serve to guarantee that the contract is being kept by the user (one popular class being that of "leaky bucket" control), but no one technique has yet been universally accepted. A proper study to determine an appropriate mix of these many control algorithms (routing control, admission control, cost control, congestion control, flow control, error control) has yet to be done; its outcome is necessary to set the proper guidelines for key aspects of the lower levels. All of these control issues are the subject of current research, many are being tested in the various networking testbeds, and work on these issues will support multiple needs such as those discussed under "Quality of Service" above.

Mobility as the Computing Paradigm of the Future

Mobility has to do with the capability to access data, resources, and services from any location via either wireline or wireless connections to the NII. One low-level issue in mobility is addressing and routing: how to deliver data to a host whose location changes as it is in operation. Another low-level issue is how and to what the network should assign names. A name could be assigned to the device that roams, to the user, to a location, to a user with a specific device at a specific location at a specific time, and so on. The higher-level issue, perhaps the more important in the long run, is how applications must change in the mobile environment, and what kind of network and operating system support is required to manage mobility. One implication of true mobility seems to be that communications may be intermittent.
Medium- and long-term disruptions in the communications infrastructure will cause most applications today to fail, or at least to display to the user behavior that is quite unsuitable. New models are needed for information caching, coordination of local and remote versions, methods for providing services to mobile users according to their system profile, and so on. Research is also needed to develop a system that can give a mobile user, who may be linked with a low-speed line to the infrastructure, feedback about how many bytes, dollars, or increments of time may be needed for the reply to a query made of the system. Such a capability would allow the user either to abort the request or, in a system with the necessary intelligence, to ask for a subset of the reply, or an approximation. To date, only preliminary research has been done on the software architecture and related system interfaces that will enable mobility; this is a critically important area of investigation for the emerging NII.
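The byte/dollar/time feedback described above can be sketched very simply. The function below is a hypothetical illustration: the link rate, the per-kilobyte tariff, and the notion of a "subset" as the largest prefix that fits the time budget are all assumptions made for the sake of the example.

```python
def plan_reply(reply_bytes, link_bits_per_s, cents_per_kbyte, budget_seconds):
    """Estimate what a pending reply would cost a mobile user on a slow link,
    then either suggest fetching it or offer a subset that fits the budget."""
    seconds = reply_bytes * 8 / link_bits_per_s
    cents = reply_bytes / 1024 * cents_per_kbyte
    if seconds <= budget_seconds:
        return {"action": "fetch", "seconds": seconds, "cents": cents}
    # Over budget: offer the largest prefix that fits (one crude kind of subset).
    subset_bytes = int(budget_seconds * link_bits_per_s / 8)
    return {"action": "offer_subset", "seconds": seconds,
            "cents": cents, "subset_bytes": subset_bytes}

# A 1-Mbyte reply over a 9,600-bps modem link, with a 60-second patience budget.
plan = plan_reply(1_000_000, 9_600, 1, 60)
print(plan["action"], plan["subset_bytes"])
```

Given the estimate, a user interface could present the three choices the text lists: proceed, abort, or accept the offered subset (or an approximation computed by a more intelligent server).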
Management Systems: Monitoring and Control

The NII will require continuous support to keep it up and running and to ensure that users have a high level of available service. Since the NII is likely to be a conglomeration of many heterogeneous interconnected networks, network management will be especially difficult. First, the management function will be distributed across the component networks, since each network operator will presumably be responsible for managing his or her own component of the overall system. Second, in a very large system the degree of trust and integration between the parts is likely to be reduced, which leads to greater problems in isolating and resolving problems. Research is needed to enable development of new technical means to alleviate some of these problems, for example, techniques based on new methods to assess current operating state and to share this state among neighbor regions. Measurement and monitoring aid in understanding the functioning of a given network. As the network grows very large and very distributed, the collection of data, the bandwidth needed by monitoring tools, the requirements for storing and processing of the collected data, and so on are affected. Research is needed in support of better techniques for fault detection, performance diagnosis, and prediction, as well as automated or semiautomated measurement capabilities, reporting, and even repair.

New Technology for Access Circuits

As this chapter has noted, the design of access circuits (the "last-mile" technology) has much to do with the ability to provide a ubiquitous NII with the range of services envisioned by the committee. Picking among the design options is a matter of both available technology and policy.
Specifically, the government could sponsor research to explore the options for several key features:

• Access circuit technology that could provide very efficient delivery of identified traffic classes such as video and at the same time support the general sorts of services, including multimedia mixed data traffic, envisioned for the NII;

• Access circuits that provide cost-effective mixing of "bursty" traffic from several sources, as well as a means to deal with transient congestion by reflecting it back to the end nodes; and

• Access circuit technology that could provide for a cost-effective variation in services for bidirectional traffic, to accommodate a range of end-node needs for capacity into and out of the network. If the technology provided the ability to adjust this service dynamically, the investment in installed infrastructure could better survive the changes in the pattern of usage that can be anticipated over the lifetime of the investment.

Middleware and Information Services Support

The previous section addressed some of the current concerns related to classical networking, the low-level transport of bits among end nodes. As the committee has observed, what distinguishes an information infrastructure from a basic data network is a defined and implemented set of higher-level services, the middleware layer, which provides an environment more directly suited to the advanced applications that will run there. The middleware layer is a much less mature component of networking than the lower layers, which have been explored and reduced to practice in a number of cases. Thus, the issues in middleware research are very open ended and wide-ranging.

Navigation and Filtering Tools

A major issue is how to tame the multimedia wave of information that is breaking over our heads: 500 satellite channels, terabytes of skyfall per day, thousands of new books and magazines and hundreds of thousands of newspapers and newsletters a day, and an exploding array of electronic mail and bulletin boards. How can people find the good stuff and filter out the rest? Much of this information is text or has a text component. Certainly that is true of print and mail, but it is also true of some video (if it is closed captioned) and documents scanned using optical character readers. A major issue is how to "navigate" and "filter" text. Until recently, information retrieval has been a neglected subject of computer science and of library and information sciences. The work has tended to be very academic and has not focused much on either user interfaces or very large databases. Research is needed on text understanding, enhancing, and indexing and on filter and search interfaces and engines.
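A rudimentary version of the text indexing and filtering described above can be built from an inverted index, the basic structure behind most keyword-search engines. The tiny corpus and the AND-only query semantics below are hypothetical simplifications chosen for illustration.

```python
import re
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns an inverted index: word -> {doc ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query word (AND semantics)."""
    words = re.findall(r"[a-z]+", query.lower())
    if not words:
        return set()
    hits = set(index.get(words[0], set()))
    for w in words[1:]:
        hits &= index.get(w, set())  # intersect posting lists
    return hits

docs = {
    1: "An open data network carries services of all kinds",
    2: "Entertainment networking and video delivery",
    3: "Open standards for data services",
}
index = build_index(docs)
print(sorted(search(index, "open data")))
```

The research needs the text identifies begin exactly where this sketch stops: understanding text rather than matching literal words, scaling the index to very large databases, and wrapping it in interfaces that carry contextual knowledge of the user.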
Nontext media (sound, speech, image, video) research is much more speculative. The problem statement is the same: capture, process, enhance, index, filter, search. But the task is much more formidable. We have no good techniques for categorizing such media (e.g., fingerprints have one category scheme, x-rays another, but most image classes do not have a well-defined categorization). Any search or index scheme requires such categorization. Probably the major breakthroughs will come not from the research on search methods but rather from research on user interfaces. Today, there is a real barrier between people and machines. Graphical user interfaces are valuable, but the current interfaces between humans and our knowledge bases are very labor intensive because there are significant learning barriers to their use. They present a hypertext and keyword-search capability but generally do not incorporate contextual knowledge of the user. A breakthrough is needed in the research on user interfaces and query interfaces to facilitate access to query engines and knowledge bases.

Intellectual Property Rights

The NII promises a future in which the trade in information is as significant for the national economy as the trade in goods and services now is. Research is needed to make clear what new tools will be needed and how they will affect society. The solutions should promote the availability of a broad diversity of information over the networks, rather than simply control access. Some of this work was contemplated in recently proposed legislation; relevant work would also fall under the Information Infrastructure Technology and Applications (IITA) component of the HPCC initiative. Partnerships among industry, academia, and government will be essential. Today we lack a consistent technical, legal, and business framework for the dissemination of intellectual property over networks. Problems to be solved range from licensing and fee recovery when material is being sold, to ensuring the integrity of information and proper attribution when the information is being freely distributed. If information owners have some assurance that licensing agreements entered into electronically over the network are enforceable, they will have significantly more incentive to trust information to this new environment. Information providers will also require some assurance that they are protected against liability, at least to the extent that they are protected when distributing their information over other types of electronic systems.
Another capability that might tend to reduce unauthorized copying of intellectual property is an easy means for on-line payment of copying fees. Experimentation with technology that allows payment, in a simple and robust fashion, for services obtained through an electronic route is essential. The committee recognizes that such experimentation has begun in industry and university settings. Mechanisms based on both credit card charges and electronic money could provide attributed and anonymous purchase of information. A very efficient payment system would facilitate the selling of information in small increments, which is a natural transaction over a network. However, experiments with users will be required to determine if incremental or flat-fee payments are more desirable.
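One way to make the incremental-payment idea concrete is a per-item metered account with a user-set spending cap, sketched below. The class, item names, prices, and cap are all hypothetical; a real scheme would add the authentication, and the choice between attributed and anonymous purchase, that the text discusses.

```python
class MeteredAccount:
    """Toy incremental-payment account: each small purchase is charged
    individually, and a user-set cap keeps total spending bounded."""

    def __init__(self, cap_cents):
        self.cap_cents = cap_cents
        self.spent_cents = 0
        self.purchases = []  # attributed record; an anonymous scheme would omit this

    def buy(self, item, price_cents):
        if self.spent_cents + price_cents > self.cap_cents:
            return False     # refuse the increment rather than exceed the cap
        self.spent_cents += price_cents
        self.purchases.append((item, price_cents))
        return True

account = MeteredAccount(cap_cents=10)
for article in ("weather", "stock-quote", "news-archive"):
    account.buy(article, 4)  # three 4-cent increments against a 10-cent cap
print(account.spent_cents, len(account.purchases))
```

The sketch also hints at why the flat-fee alternative is attractive: per-increment accounting imposes a bookkeeping and user-decision cost on every small transaction, which is one of the questions the proposed user experiments would need to settle.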
Computer and Communications Security

Classical end-node security is based on the idea that each node should separately defend itself by using controls on that machine. However, current-generation PCs and workstations are not engineered with a high degree of security assurance, so as a practical matter, an alternative is being deployed based on putting "firewalls" into the network, machines that separate the network into regions of more and less trust and regulate the traffic coming from the untrusted region. Firewalls raise a number of serious issues for the Internet protocol architecture, since they violate a basic assumption of the Internet, which is that two machines on the same internetwork can freely exchange packets. Firewalls explicitly restrict the sorts of packets that can be exchanged, which can cause a range of operational problems. Research with the goal of making firewalls work better, making them both more secure and more operationally robust, would be very important at the present time. The strongly decentralized nature of the NII makes security issues more difficult, because it will be necessary to establish communication among a set of sites, each of which implements its own controls (e.g., user authentication) and is not willing to trust the others. Trustworthy interaction among mutually suspicious regions is a fundamental problem for which there are few general models. Security techniques using any form of encryption are no more robust than the methods used to distribute and store the encryption keys. Personal computers today offer no secure way to store keys, which severely imperils many security schemes. A proposal for solving this problem would be very important. The other issue with keys is the need for trustworthy distribution of keys in the large, decentralized NII.
How can two sites with no common past history of shared trust exchange keys to begin communication in a way that cannot be observed or corrupted? The most direct solution would seem to be a trusted third party who "introduces" the two sites to each other, but there is no framework or model to define such a service or to reason about its robustness. Research in this area is a key aspect of fitting security into a system with the scale of the NII. In addition to protection of host computers and the data they hold, the network itself must be protected from hostile attack that overloads the network, steals service, or otherwise renders the system useless. Additional research and development should be done on technical mechanisms, better approaches to operation, and new approaches to training and education. Methods and technology for ensuring security are relevant both to the lower levels of the network and to the higher levels of the information infrastructure. Protecting intellectual property rights is a security concern, as is anticipating problems of fraud in payment schemes (control of fraud depends on identifying users in a trustworthy manner). Again, achieving security requires the study of specific mechanisms and overall architecture. Much is known about techniques such as encryption. What is equally important is a proposal for an overall plan that combines all useful techniques into a consistent and effective approach to ensuring security. This overall plan must be developed, validated, and then replicated in such a way that users and providers can understand the issues and implications associated with their parts of the overall system. This effort, not the study of specific mechanisms, is the hard part, and the key to success.

Research in the Development of Software

The continuing need for research in means to develop large and complex software packages is not new, nor is it specific to networking and the information infrastructure. At the same time, it is a key issue for which there seems no ready solution. Problems of software development are a key impediment to realization of the NII. A new generation of applications developed to deal with information and its use are likely to be substantially more complex than the application packages of today: they will deal with large quantities of information in heterogeneous formats, they will deal with distributed information and will be distributed themselves, they will provide a level of intelligence in processing the quantities of information present on the network, and they will be modular, capable of being reconfigured and reorganized to meet new and evolving user objectives. These requirements represent a level of sophistication that is very difficult to accomplish with reliability, very expensive to undertake, and thus very risky. The committee adds its support to the continued attempts to advance this area.
Experimental Network Research

Experimental research, which involves the design of real protocols, the deployment of real systems, and the construction of testbeds and other experimental facilities, is a critical part of the research needed to build the NII. Since this sort of work is often difficult to fund and execute, especially within the limits of the academic context, the committee stresses the importance of facilitating it. The Internet has provided an experimental environment for a number of practical research projects. In the early stages of the Internet, the network itself was viewed as experimental, and indeed these experiments
played an important role in the Internet's development. However, the increasingly operational nature of the Internet has essentially precluded its use as a research vehicle. In the future, any remaining opportunity for large-scale network research will vanish, given that the NSFNET backbone is about to disappear and will be replaced by commercial networks and a backbone with only a small number of nodes, the very high speed backbone network service (vBNS), which is to provide high bandwidth for selected applications. In addition, it is likely that most of the Internet, like the larger NII, will be operated by commercial organizations over the next few years. This transition has required the implementation of separate networks used specifically for research and experimentation. The gigabit testbeds provide facilities for investigating state-of-the-art advanced technologies and applications. ARPA has also provided a lower-speed experimental network, the DARTnet, connecting a number of ARPA-funded research sites. However, these networks are small and do not provide any real means to explore issues of scale. Indeed, there does not seem to be any affordable way to build a free-standing experimental network large enough to explore issues of scale, which is a real concern, since practical research in this area is key to the success of the NII. Currently, the research community attempts to deal with this problem by using the resources of the Internet to realize a "virtual network" that researchers then use for large-scale experiments. Thus, multicast has been developed by means of a virtual network called the M-bone (multicast backbone) that runs over the Internet.42 Similar schemes have been used to develop new Internet protocols.
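The virtual-network idea can be illustrated with a minimal sketch: experimental packets are carried as the opaque payload of ordinary operational packets exchanged between experimental routers, so the operational network sees only normal traffic. The classes and addresses below are invented for illustration and stand in for real IP encapsulation.

```python
import pickle
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

class OperationalNet:
    """Stands in for the real Internet: delivers packets by address."""
    def __init__(self):
        self.nodes = {}
    def attach(self, addr, node):
        self.nodes[addr] = node
    def deliver(self, pkt):
        self.nodes[pkt.dst].receive(pkt)

class ExperimentalRouter:
    """Tunnels experimental packets over the operational network."""
    def __init__(self, addr, net):
        self.addr, self.net, self.inbox = addr, net, []
        net.attach(addr, self)
    def send_experimental(self, peer_addr, exp_pkt):
        # Encapsulate: the experimental packet rides as opaque payload,
        # so operational routing neither sees nor is disturbed by it.
        outer = Packet(src=self.addr, dst=peer_addr,
                       payload=pickle.dumps(exp_pkt))
        self.net.deliver(outer)
    def receive(self, outer):
        # Decapsulate and hand up to the experimental protocol.
        self.inbox.append(pickle.loads(outer.payload))

net = OperationalNet()
r1 = ExperimentalRouter("10.0.0.1", net)
r2 = ExperimentalRouter("10.0.0.2", net)
r1.send_experimental("10.0.0.2",
                     Packet(src="exp-A", dst="exp-B", payload=b"hello"))
assert r2.inbox[0].payload == b"hello"
```

The paths between experimental routers behave like point-to-point links, which is exactly how the M-bone uses the operational Internet (see note 42).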
There is a danger that the success of the Internet, much of which has been based on its openness to experimentation, will lead to a narrowing of opportunities for such experimentation. It is important that a portion of the NII remain open to controlled experiments. A balance must thus be maintained between the need to experiment and the need to provide stable service using commercial equipment. Attention should be given to the technical means to accomplish these goals. Funding should be allocated for the deployment of network experiments and prototype systems on the NII, even though they may be relatively more expensive than other research paradigms.

Experimental Research in Middleware and Application Services

Conducting testbed experimentation at the middleware level is usually less problematic than doing network research, because operation of experimental higher-level services cannot easily disrupt the ongoing operational use of the network by applications not depending on those services. The Internet thus remains a major facility for development and evaluation of middleware services, an opportunity that should be recognized and encouraged. Testbeds can address associated management of rights and responsibilities, including assessment of needs and mechanisms for the protection of privacy, security, and intellectual property rights. Experimental and testbed efforts are needed to support a transition to higher-level, information management uses of networks. As John Diebold has observed,43 applications of information technology progress through a cycle encompassing modernization of old ways, innovation (involving the development of new access tools and services), and ultimately transformation from one kind of activity to another (including doing the previously inconceivable). A great deal of experimentation is needed to achieve truly transformational applications. The challenge can be illustrated by reference to the emergence of "casual publishing." The ability to publish from a desktop has changed publication practices; desktop video generation and reception will change them more. Although computer technology is making publishing changes possible, who can benefit, how, and at what costs will depend on the nature of the infrastructure. A similar set of technical, market, and policy issues arises in the digital library context, where experimentation has begun with support from NSF, ARPA, and NASA.44

Rights Management Testbed

More generally, an example of a useful testbed relating to rights management would incorporate systematic identification of the rationale for actions appropriate to government and industry into a joint industry-government project demonstrating model contractual and operational relationships to support the carriage of multimedia proprietary content.
The computer, telecommunications carrier, cable provider, software provider, and content provider industries should participate, perhaps providing matching funds complementing a small contribution from the federal government, with broad dissemination of results a requirement. Questions that should be answered include the following:

• How can electronic authorization or execution of electronic contracts be provided over the network? This is an example of a general and flexible piece of infrastructure that the private sector is not likely to provide.

• What means can be developed to quickly provide varying degrees of authorization for particular uses of a work, for example, when the work may be used by different users for different purposes and under different pricing schemes?
• What various technological means, and the associated best times to use them, can be found for protecting data?

• What are the options for formatting multimedia information in a consumer-friendly fashion for distribution over the network to "episodic" users? This area is now the focus of considerable research and industry activity. All efforts should be aimed at the most cost-efficient and interoperable means of achieving goals.

A variant or a component of the above concept might include a series of multimedia projects that explore provision of electronic access to collections and materials generally inaccessible in the past, but of high research value, including photographs, drawings, films, archival data, sound recordings, spatial data, written manuscripts, and so on.

Research to Characterize Effects of Change

It will be important to understand how the evolving infrastructure will affect both the infrastructure for research and education and the processes of research and education. This continuing process of change presents new challenges that militate against NSF assuming that it has successfully demonstrated the value of networking to research and can therefore diminish activity in that area. The new NSF-ARPA-NASA digital libraries initiative and the NSF and ARPA information infrastructure-oriented activities under the IITA component of the HPCC program are steps in the right direction, but they are only first steps.
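One of the rights-management testbed questions above, quickly granting varying degrees of authorization for particular uses of a work, can be illustrated with a toy signed usage token. The token format, field names, and issuer key below are invented for illustration; a production scheme would need real key management and would encrypt as well as sign.

```python
import hmac
import hashlib
import json

SECRET = b"rights-server-key"   # hypothetical issuer key

def issue_token(work_id, user, uses, max_count):
    """Issuer signs a statement of what this user may do with the work."""
    grant = {"work": work_id, "user": user,
             "uses": sorted(uses), "max_count": max_count}
    body = json.dumps(grant, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"grant": grant, "sig": sig}

def authorize(token, user, use):
    """Verify the signature, then check that the requested use is granted."""
    body = json.dumps(token["grant"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False                        # forged or tampered token
    g = token["grant"]
    return g["user"] == user and use in g["uses"]

token = issue_token("doc-42", "alice", ["view", "print"], max_count=3)
assert authorize(token, "alice", "view")
assert not authorize(token, "alice", "redistribute")
token["grant"]["uses"].append("redistribute")   # tampering breaks the signature
assert not authorize(token, "alice", "redistribute")
```

The point of the sketch is that a single general mechanism, a signed grant, can express different users, different purposes, and different pricing tiers, which is the kind of general and flexible infrastructure the testbed questions ask about.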
RECOMMENDATION: Network Research

The committee recommends that the National Science Foundation, along with the Advanced Research Projects Agency, other Department of Defense research agencies, the Department of Energy, and the National Aeronautics and Space Administration, continue and, in fact, expand a program of research in networks, with attention to emerging issues at the higher levels of an Open Data Network architecture (e.g., applications and information management), in addition to research at the lower levels of the architecture. The technical issues associated with developing and deploying an NII are far from resolved. Research can contribute to architecture, to new concepts for network services, and to new principles and designs in key areas such as security, scale, heterogeneity, and evolvability. It is important to ensure that this country maintains its clear technical leadership and competitive advantage in information infrastructure and networking.
NOTES

1. The term "open" has been used in a variety of ways in the networking and standards community. Some of the uses describe rather different situations from that which is described in this chapter. For example, the telephone companies have been developing a concept they call open network architecture. That architecture does not address the concerns listed here; it is a means to allow third-party providers to develop and attach to existing telephone systems alternative versions of advanced services such as 800-number service.

2. Tolerance of heterogeneity must be provided for at more than the physical layer. At the higher levels, information must be coded to deal with a range of devices. For example, different video displays may have very different resolution: one may display a high-definition TV picture, while another may have a picture the size of a postage stamp. To deal with this, either (1) the picture must be simultaneously transmitted with multiple codings, or the postage-stamp display must possess the computational power of an HDTV, so that it can find within the high-resolution picture the limited information it needs, or (2) (preferably) the information stream must have been coded for heterogeneity: the data must have been organized so that a display of each resolution can easily find the portions relevant to it.

3. An illustration of this point can be seen in the history of a protocol suite called XNS that was developed by Xerox. XNS was proposed in the early 1980s and received considerable attention in the commercial community, since it was perceived as rather simple to implement. The interest in XNS continued until it became clear that Xerox did not intend to release the specification of one protocol in the XNS suite, Interpress, which was a protocol for printing documents.
Within a very short time, all interest in XNS ceased, and it is essentially unknown today.

4. The notion of a multilayer approach is consistent with directions now being undertaken by ARPA and NSF in supporting the NREN and IITA components of the HPCC initiative. It also appears in such projects as the proposed industry-university "I-95" project to "facilitate the free-market purchase, sale and exchange of information services." See Tennenhouse, David, et al. 1993. I-95: The Information Market, MIT/LCS/TR-577. Massachusetts Institute of Technology, Cambridge, Mass., August.

5. The committee notes that conceptual models of the sort offered here may differ from models used to organize implementations, and emphasizes that the purpose of its conceptual model is to provide a framework for discussion and understanding. Models intended to guide actual implementation must be shaped by such issues as performance and may thus be organized in a somewhat different manner. In particular, a modularity based on strong layering may not be appropriate for organizing software modules.

6. This four-layer taxonomy is not inconsistent with a three-layer model that has been articulated in recent NII and HPCC presentations, a model based on "bitways," middleware, and applications. The taxonomy suggested in this report further divides the lower bitways layer to emphasize the importance of the bearer service, as is discussed in text below.

7. Quality of service (QOS) is discussed again later in this chapter. Although somewhat technical, this matter is a key aspect of defining the ODN. Today's Internet does not provide any variation in QOS; it provides a single sort of service often called "best effort." The telephone system also provides only one QOS, one designed for telephony. The Internet is currently undertaking to add user-selected QOS to its core service; it seems a requirement for a next-generation general service network.

8.
In the Internet today, these transport features are provided by a protocol called the Transmission Control Protocol, or TCP, which is the most common version of the transport layer of the Internet. The TCP assigns sequence numbers to data being transferred across the network and uses those sequence numbers at the receiver to assure that all data is
received and that it is delivered in order. If a packet is lost or misordered, these sequence numbers allow the receiver to detect that fact. To detect whether any of the data being transferred over the network become damaged due to transmission errors, TCP computes a "checksum" on the data and uses that checksum to discover any corruption. If a damaged packet is detected, the receiving TCP will ask the sending TCP to retransmit that packet. The TCP also contains an initial connection synchronization mechanism, based on the exchange of unique identifiers in packets, to bring an end-node connection into existence reliably. While TCP is the most prevalent of the transport protocols used in the Internet, it is not mandatory, nor is it the only transport service. A range of situations, such as multicast delivery of data, and delivery where less than perfect reliability is required, imply the use of an alternative to TCP. For this reason, TCP is defined in such a way that no part of its implementation is inside the network. It is implemented in the end nodes, which means that replacing it with some other protocol does not require changes inside the network.

9. The transport layer defined in this report is not exactly the same as the layer with the same name in the OSI reference model, the OSI layer 4, because it also includes protocols for data formats, which are a part of the OSI presentation layer. Thus the ODN transport layer is a more inclusive collection of services that gathers together all the services that are provided in the networks of today to support applications in the end node.

10. For example, the government required that television sets include UHF tuners. In retrospect, most people would argue that the policy was seriously flawed.
UHF television has never lived up to its expectations, the service has hoarded billions of dollars worth of valuable spectrum for decades, and the cost of television sets was increased with little net benefit to consumers, especially those living in less populated areas.

11. The committee recognizes that the Information Infrastructure Task Force has begun to explore the concept of technically based "road maps" for the NII.

12. The committee recognizes that unbundling is a controversial issue under current debate among state and federal regulatory agencies. The Ameritech proposals to open up its facilities present one indication that recognition of tendencies toward unbundling may be widening within industry. See Teece, David J. 1993. Restructuring the U.S. Telecommunications Industry for Global Competitiveness: The Ameritech Program in Context, University of California at Berkeley, April. This monograph describes how Ameritech offers to unbundle its local loops and provide immediate access to practically all local facilities and switching systems, with significantly lower costs for the unbundled loop compared to the revenue available from exchange telephone and related services: Once effectuated, the Ameritech unbundling plan will make the local exchange effectively contestable. Basically, anyone wanting to enter any segment could do so at relatively low cost. Entry barriers would in essence be eliminated. . . . [I]nterconnectors can literally isolate and either use or avoid any segment of the network. They are also free to interconnect using their own transport or purchasing transport from Ameritech. . . . [A]ll elements of the network must be correctly priced since any underpriced segment can be used separately from the balance of the network and overpriced segments can easily be avoided. (p. 64)

13.
With each emerging network technology, including Ethernet, personal computers, high-speed LANs such as FDDI, and high-speed long-distance circuits, there have been predictions that IP or TCP would not be effective, might perform poorly, and would have to be replaced. So far these predictions have proved false. This concern is now being repeated in the context of network access to mobile end-node devices, such as PCs and other computers, and other new communications paradigms. It remains to be seen if there are real issues in these new situations, but the early experiments suggest that IP will indeed work in the mobile environment.

14. There are some other less central IP features, such as the means to deal with lower-level technology with different maximum packet sizes. There is also a small set of IP-level
control messages, which report to the end node various error conditions and network states, such as use of a bad address, relevant changes in network routing, or network failures.

15. A related issue is development of standard format sets for publishing over the Internet, for requiring headers and/or signatures, or for requiring some kind of registration that might automatically put the "work" in a directory.

16. Indeed, many applications cannot predict in advance what their bandwidth needs will be, as these depend very dynamically on the details of execution, for example, which actions are requested, which data are fetched, and so on.

17. Providing a refusal capability has implications for applications and user interfaces designed in the Internet tradition, which today do not ask permission to use the network but simply do so. The concept of refusal is missing.

18. By late 1993, perhaps 1,000 people worldwide were using real-time video over the Internet. The consequence was that at times fewer than 0.1 percent of Internet users consumed 10 percent of the backbone capacity. Personal communication, Stephen Wolff, National Science Foundation, December 20, 1993.

19. This point is relevant to a current debate in the technical community about whether the basic bearer service that can be built using the standards for ATM should support statistical sharing of bandwidth. Some proposals for ATM do not support best-effort service, but rather only services with guaranteed QOS parameters. This position is motivated by a set of speculations that a "better" quality of service better serves user needs. However, taking into account cost structures and the success of best-effort service on the Internet, a "better" service may not be more desirable. Technical decisions of this sort could have a major bearing on the success of ATM as a technology of choice for the NII.

20.
The guarantee issue is related to the scheduling algorithm that packets see. A packet (or ATM) switch can have either a very simple or a rather complex scheduling algorithm for departing network traffic. The simple method is First In, First Out (FIFO). There are a number of more complex methods, one of which is Weighted Fair Queueing (WFQ). In FIFO, a burst of packet traffic put into the network goes through immediately, staying in front of other later packets. WFQ services different packet classes in turn, so that the burst is fed into the network in a regulated way and then mixed by the scheduler with packets from other classes. One alternative for achieving fairness is to allocate bandwidth in a very conservative manner (peak rate allocation) so that the user is externally limited (at the entry point of the net) to some rate, and then to assure that on every link of the network over which this flow of cells will pass, there is enough bandwidth to carry the full load of every user at once. Such an approach using peak allocation eliminates from the network any benefit of statistical bandwidth sharing, which is one of the major benefits of packet switching. On the other hand, WFQ is one method for ensuring that we benefit from statistical multiplexing. The way this decision is settled will have real business consequences for the telephone companies and other ATM network providers.

21. One estimate for the accounting file for only long-haul intra-U.S. Internet traffic is that it would exceed 45 gigabytes per month per billion packets; the NSFNET backbone was approaching 40 billion packets per month by late 1993. See Roberts, Michael M. 1993. "Internet Accounting--Revisited," EDUCOM Review, November-December (December 6 e-mail).

22. This explicitly does not preclude implementing similar services in noncompliant ways as well. Thus, video might be provided according to the standards required for NII compliance, and as well in some proprietary noncompliant coding.

23.
There is a great deal of uncertainty about the limitations of wireless. The low Earth orbiting satellites could provide considerable bandwidth; in this area, ARPA has funded a gigabit satellite experiment. At the local level (ground radio) the limitations are also unclear, but bandwidth will always be a problem to some extent with wireless. The question is how pervasive wireless will be for data communications. The predictions are indeed muddied. To quote a February 15, 1994, article in America's NETWORK, "The most experienced analyst with the best research data can't predict with certainty how the coming wireless data market will develop." Robert Pepper (FCC) reminds us that when cellular began, the best guesses were that there would be 1 million customers by the end of the century; today there are 60 million. It is clear that the lower-speed data services will surely be used widely; it is use of the high-speed services that is hard to predict.

24. The current status of the GOSIP is in doubt. NIST has convened the Federal Internetworking Requirements Panel to advise it on options for dealing with the GOSIP. At this writing, the draft report of this panel, opened for comments, was not yet final. However, the overall direction of the report appears to be to abandon the current GOSIP, which mandates one required protocol suite (the OSI suite), and to move to a more open approach based on multiple suites and an explicit acceptance of the Internet protocols.

25. See also U.S. Congress, Office of Technology Assessment. 1992. Global Standards: Building Blocks of the Future. TCT-512. Government Printing Office, Washington, D.C., March.

26. In the early 1970s, ARPA undertook the development of TCP/IP for the specific purpose of providing a standard approach to interoperation of DOD networks. The technical development was done by a working group convened and funded by ARPA, with academic and industrial research participants. In the late 1970s, ARPA worked with the Defense Communications Agency (DCA) to mandate TCP/IP as a preliminary standard for internetworked DOD systems.
The DCA and ARPA cooperated on the establishment of a more formal review committee to oversee the establishment and deployment of TCP/IP within the DOD.

27. Additionally, the committee notes the emerging issues of addressing in the cable networks. Today, the cable networks have no real need for a global addressing architecture, since distinguishing between individual end nodes is needed only for directing the control messages sent to the set-top box. However, as the entertainment products become more complex and interactive, the need for an explicit addressing scheme will increase. If the cable networks expand to interwork with other parts of the information infrastructure, their addressing scheme should probably be unified with the scheme for telephony and information networks.

28. Computer Science and Telecommunications Board (CSTB), National Research Council. 1990. Computers at Risk: Safe Computing in the Information Age. National Academy Press, Washington, D.C.

29. Wireless radio transmission is especially subject to security risks for a number of reasons. First, the transmission is broadcast into the air, and so it is relatively simple to "tap" the transmission. Second, since the transmission is broadcast, a number of other radio receivers can easily receive it, and more than one of them may decode the message, opening up more opportunities for a breach of security. Third, since the medium is radio, it is easy to "jam" the transmission. Fourth, since radios are usually (though not always) portable, they are more vulnerable to being stolen, lost, damaged, and so on.

30. The committee recognizes that security is an emphasis of the administration's Information Infrastructure Task Force, but it seeks a sufficiently broad and deep technical framework, beginning with a security architecture.

31. CSTB has previously recommended more security-related research. See CSTB, 1990, Computers at Risk.

32. Lewis, Peter H. 1994.
"Computer Security Experts See Pattern in Internet Break-ins," New York Times, February 11; and Burgess, John. 1994. "DOD Plan May Cut Ties to Internet,' Network World, January 10, p. 95.
33. CSTB, 1990, Computers at Risk.

34. CSTB will launch a separate study of encryption and cryptography policy in mid-1994.

35. See CSTB, 1990, Computers at Risk.

36. Lewis, 1994, "Computer Security Experts See Pattern in Internet Break-ins."

37. Recognition of this problem is developing in the relevant industries, but problems of design and implementation remain. See "National Information Infrastructure and Grand Alliance HDTV System Operability," February 22, 1994.

38. Note that specificity is a theme of technology decisions for most interactive television trials to date. See Yoshida, Junko, and Terry Costlow. 1994. "Group Races Chip Makers to Set-top," EE Times, February 7 (electronic distribution), which observes that "many of the digital interactive TV trials and commercial rollouts are married to a particular set-top box design that is directly tied to a specific network architecture. Examples range from the set-top box Silicon Graphics is basing on its Indy workstation for Time Warner Cable's Full Service Network project in Orlando, Florida, to the box Scientific-Atlanta is building around 3DO Inc.'s graphics chip set for US West's trial in Omaha, Nebraska."

39. Continental Cablevision. 1994. "Continental Cablevision, PSI Launch Internet Service: First Commercial Internet Service Delivered via Cable Available Beginning Today in Cambridge, Massachusetts," News Release, March 8.

40. A wide range of speeds might be offered from the user back into the network. Today's options for access speeds range from voice-grade modems to current higher-speed modems at 56 kbps to ISDN at 128 kbps. None of these speeds are sufficient either for low-delay transfer of significant quantities of data or for delivery of video from the home into the network.
Since high-quality compressed video seems to require between 1.5 and 4 Mbps, a channel of this size (at least a T1 channel) would permit a user to offer one video stream. It would still represent a real bottleneck for a site offering access to significant data. For comparison, in today's LAN environments 10 Mbps is considered minimal for access to data stored on file servers. Finally, for networks whose primary purpose is to provide access to entertainment video, the operator of the network presumably has the access capacity to deliver several hundred video streams into the network simultaneously. It is unlikely that this sort of inbound capacity will be readily available to any other user of the network. But at lower and more realistic input speeds, perhaps from T1 to 10 Mbps, there are a variety of interesting opportunities for becoming an information provider.

41. In an attempt to explore what these costs might be, the committee discovered that there are technical disagreements about the degree of additional complexity and cost implied by its objectives. Comments from inside the cable and telephone industries indicate that these industries have already assessed the costs of adding these more general features and have concluded internally that they cannot afford them in the current competitive climate. The committee thus takes as a given that these features will not be incorporated any time soon without policy intervention.

42. A virtual network such as the M-bone is constructed by attaching to the Internet a set of experimental routers. The operational IP addressing and routing is used to establish paths between these routers, which then use these paths as if they were point-to-point connections. Experimental routing algorithms, for example, can then be evaluated in these new routers. These new algorithms can neither see nor disrupt the operational routing running at the lower level, and so the experiment does not disrupt normal operation.
The isolation is not perfect, however. In the case of the M-bone, quantities of multicast traffic might possibly flood the real Internet links, preventing service. Explicit steps have been taken in the experimental routers to prevent this occurrence. Building a virtual network requires care to prevent any chance of lower-level disruptions, since it does involve sending real data over the real network.
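The "explicit steps" taken in the experimental routers amount to rate limits on tunneled traffic. One common mechanism for this is a token bucket; its use here is an illustrative assumption, not a documented M-bone implementation detail.

```python
import time

class TokenBucket:
    """Caps the rate at which tunneled packets may enter the real network."""
    def __init__(self, rate_pkts_per_sec, burst):
        self.rate = rate_pkts_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for elapsed time, capped at the burst allowance.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # forward the packet onto the operational link
        return False         # drop it rather than flood the real Internet

# Deterministic demonstration with explicit timestamps:
bucket = TokenBucket(rate_pkts_per_sec=10, burst=5)
bucket.last = 0.0
sent = sum(bucket.allow(now=0.0) for _ in range(20))
assert sent == 5                    # only the burst allowance passes at once
assert bucket.allow(now=1.0)        # a second later, tokens have refilled
```

Placing such a limiter at the tunnel entry point gives exactly the isolation property the note describes: a runaway multicast experiment is throttled before it can consume the underlying operational links.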
43. Diebold was quoted by Paul Evan Peters in a June 1993 briefing to the committee.

44. The NSF-ARPA-NASA digital libraries initiative solicits proposals for research in three areas: (1) "Capturing data (and descriptive information about such data) of all forms (text, images, sound, speech, etc.) and categorizing and organizing electronic information in a variety of formats." (2) "Advanced software and algorithms for browsing, searching, filtering, abstracting, summarizing and combining large volumes of data, imagery, and all kinds of information." (3) "The utilization of networked databases distributed around the nation and around the world." Examples of relevant research are listed in National Science Foundation. 1993. "Research on Digital Libraries: Announcement," NSF 93-141. National Science Foundation, Washington, D.C.