2
The Open Data Network: Achieving the Vision of an Integrated National Information Infrastructure

The committee's vision for a national information infrastructure (NII) is that of an Open Data Network, the ODN. Having a vision of future networking, however, is not at all the same thing as bringing it to fruition. As a contribution to the ongoing debate concerning the objectives and characteristics of the NII, the committee details in this chapter how its ODN architecture can enable the realization of an NII with broad utility and societal benefit, and it discusses the key actions that must be taken to realize these benefits.

An open network is one that is capable of carrying information services of all kinds, from suppliers of all kinds, to customers of all kinds, across network service providers of all kinds, in a seamless accessible fashion. The telephone system is an example of an open network, and it is clear to most people that this kind of system is vastly more useful than a system in which the users are partitioned into closed groups based, for example, on the service provider or the user's employer.

The implications of an open network1 are that there is a need for a certain minimum level of physical infrastructure with certain capabilities to be provided, for an agreement to be forged for a set of objectives for services and interoperation, for relevant standards to be set, for research on enabling technology to be continued, and for oversight and management. The role government should take here is critical.

The committee's advocacy of an Open Data Network is based in part on its experience with the enormously successful Internet experiment, whose basic structure enables many of the capabilities needed in a truly national information infrastructure. The Internet's defining protocols, TCP and IP, are not proprietary—they are open standards that can be
implemented by anyone. Further, these protocols are not targeted to the support of one particular application, but are designed instead to support as broad a range of services as possible. Moreover, the Internet attempts to provide open access to users. In the long term, users and networks connected to such a universal network benefit from its openness. But in data networking today, this vision of open networking is not yet universally accepted. Most of the corporate data networking done currently uses closed networks, and most information and entertainment networks targeted to consumers are closed, by this committee's definition.

THE OPEN DATA NETWORK

Criteria for an Open Data Network

The Open Data Network envisioned by the committee meets a number of criteria:

Open to users: It does not force users into closed groups or deny access to any sectors of society, but permits universal connectivity, as does the telephone system.

Open to service providers: It provides an open and accessible environment for competing commercial or intellectual interests. For example, it does not preclude competitive access for information providers.

Open to network providers: It makes it possible for any network provider to meet the necessary requirements to attach and become a part of the aggregate of interconnected networks.

Open to change: It permits the introduction of new applications and services over time. It is not limited to only one application, such as TV distribution. It also permits the introduction of new transmission, switching, and control technologies as these become available in the future.

Technical, Operational, and Organizational Objectives

The criteria for an Open Data Network imply the following set of very challenging technical and organizational objectives:

Technology independence. The definition of the ODN must not bind the implementors to any particular choice of network technology. The ODN must be defined in terms of the services that it offers, not the way in which these services are realized. This more abstract definition will permit the ODN to survive the development of new technology options, as will certainly happen during its lifetime if it is successful. The ODN
should be defined in such a way that it can be realized over the technology of both the telephone and the cable industries, over wire and wireless media, and over local and long-distance network technology.

Scalability. If the ODN is to be universal, it must scale to global proportions. This objective has implications for basic features, such as addressing and switching, and for operational issues such as network management. New or emerging modalities, such as mobile computers and wireless networking today, must be accommodated by the network. If the ODN is to provide attachment to all users, it must physically reach homes as well as businesses. This capability implies an upgrade to the "last mile" of the network, the part that actually enters the home (or business). Further, the number of computers (or more generally the number of networked devices) per person can be expected to increase radically. The network must be able to expand in scale to accommodate all these trends.

Decentralized operation. If the network is composed of many different regions operated by different providers, the control, management, operation, monitoring, measurement, maintenance, and so on must necessarily be very decentralized. This decentralization implies a need for a framework for interaction among the parts, a framework that is robust and that supports cooperation among mutually suspicious providers. Decentralization can be seen as an aspect of large scale, and indeed a large system must be decentralized to some extent. But the implications of highly decentralized operations are important enough to be noted separately, as decentralization affects a number of points in this chapter.

Appropriate architecture and supporting standards. Since parts of the network will be built by different, perhaps competing organizations, there must be carefully crafted interface definitions of the parts of the network, to ensure that the parts actually interwork when they are installed. Since the network must evolve to support new services over time, there is an additional requirement that implementors must engineer to accommodate change and evolution. These features may add to network costs in ways that may be inconsistent with short-term profitability goals.

Security. Poor security is the enemy of open networking. Without adequate protection from malicious attackers and the ability to screen out annoying services, users will not take the risk of attaching to an open network, but will instead opt, if they network at all, for attachment to a restricted community of users connected for a specific purpose, i.e., a closed user group. This version of closed networking sacrifices a broad range of capabilities in exchange for a more reliable, secure, and available environment.

Flexibility in providing network services. If a network's low-level technology is designed to support only one application, such as broadcast TV or telephony, it will render inefficient or even prohibit the use of new services that have different requirements, although it may support the target application very efficiently. Having a flexible low-level service is key to providing open services, and to ensuring the capacity to evolve. For example, the emerging broadband integrated services digital network (B-ISDN) standards, based on asynchronous transfer mode (ATM), attempt to provide a more general capability for the telephone system than was provided by the current generation of technology, which was designed specifically for transport of voice. The Internet protocol, IP, is another example of a protocol that provides a flexible basis for building many higher-level services, in this case without binding choices to a particular network technology; the importance of this independence is noted above.

Accommodation of heterogeneity. If the ODN is to be universal, it must interwork with a large variety of network and end-node devices. There is a wide range of network infrastructure: local area and wide area technology, wireline and wireless, fast and slow. Perhaps more importantly, there will also be a range of end-node devices, ranging from powerful computers to personal digital assistants to intelligent devices such as thermostats and televisions that do not resemble computers at all. The ODN must interwork with all of them, an objective that requires adaptability in the protocols and interface definitions and has implications for the way information is represented in the network.2

Facilitation of accounting and cost recovery. The committee envisions the ODN as a framework in which competitive providers contribute various component services. Thus it must be possible to account for and cover the costs of operating each component. Because the resulting pricing will be determined by market forces, fulfilling the objective of universal access may require subsidy policies, a point discussed in Chapter 5.

Benefits of an Open Data Network

Comparing the success of the open Internet to the limited impact of various closed, proprietary network architectures that have emerged in the past 20 years—systems that eventually either disappeared or had to be adjusted to allow open access—suggests that the wisdom of seeking open networks is irrefutable.3 Many of the proprietary networks that have played to captive audiences of vendor-specific networks for years are now rapidly losing ground as users demand and achieve the ability to interoperate in a world of heterogeneous equipment, services, and network operating systems. On the other hand, the Internet, and those networks that have "opened up," are enjoying phenomenal growth in membership.

It is important to note that achieving an open network does not preclude the existence of closed networks and user groups. First, there will always be providers (such as current cable TV providers) that choose to develop closed networks for a variety of reasons, such as control of revenues, support of closed sets of users, and mission-critical applications. It is unrealistic to believe that such an approach either can or should be controlled. For this reason, it will be necessary to provide some level of interoperation with proprietary protocols, with new versions of protocols, and with networks that do emerge to deal with special contingencies or with special services. Second, closed user groups will always exist, for reasons of convenience and security. The Open Data Network can be configured to allow closed groups to use its facilities to construct a private network on top of the ODN resources. (See, for example, the discussion below under "Security," which presents approaches, such as the use of security firewalls, to providing a restricted secure environment.)

OPEN DATA NETWORK ARCHITECTURE

To realize the vision of an integrated NII, it is necessary to create an appropriate network architecture, that is, a set of specifications or a framework that will guide the detailed design of the infrastructure. Without such a framework, the pieces of the emerging communications infrastructure may not fit together to meet any larger vision, and may in fact not fit together at all. The architecture the committee proposes is inspired in part by examining the Internet and identifying those of its attributes that have led to its success. However, some important departures from the Internet architecture must be included to enable an evolution to the much larger vision of an open NII.

An Architectural Proposal in Four Layers

Described below is a four-layer architecture for the Open Data Network.4 The four layers provide a conceptual model for facilitating the discussion of the various categories of services and capabilities comprised by the ODN.5 The layers are the bearer service, transport, middleware, and the applications.6

At the lowest level of the ODN architecture is an abstract bit-level transport service that the committee calls the bearer service of the ODN. Its essence is that it implements a specified range of qualities of service (QOS) to support the higher-level services envisioned for the ODN. At this level, bits are bits, and nothing more; that is, their role in exchanging information between applications is not visible. However, it should be
stressed that there can be more than one quality of service; the differences among these are manifested in the reliability, timeliness, correctness, and bandwidth of the delivery. Having multiple QOS will permit an application with a particular service requirement to make a suitable selection from among the QOS provided by the bearer service.7

The bearer service of the ODN sits on top of the network technology substrate, a term used to indicate the range of technologies that realize the raw bit-carrying fabric of the infrastructure. Included in this set are the communication links (copper, microwave, fiber, wireless, and so on) and the communication switches (packet switches, ATM switches, circuit switches, store-and-forward switches, and optical wavelength-division multiplexers, among others). This set also includes the functions of switching, routing, network management and monitoring, and possibly other mechanisms needed to ensure that bits are delivered with the desired QOS. The Open Data Network must be seen not as a single, monolithic technology, but rather as a set of interconnected technologies, perhaps with very different characteristics, that nonetheless permit interchange of information and services across this set.

At the next level, the transport layer, are the enhancements that transform the basic bearer service into the range of end-to-end delivery services needed by the applications. Service features typically found at the transport layer include reliable, sequenced delivery, flow control, and end-point connection establishment.8 In this organization of the levels, the transport layer also includes the conventions for the format of data being transported across the network.9 The bit streams are differentiated into identifiable traffic types such as voice, video, text, fax, graphics, and images. The common element among these different types of traffic is that they are all digital streams and are therefore capable of being carried on digital networks. Currently much of the work in the commercial sector is aimed at defining these sorts of format standards, mostly driven by workstation and PC applications.

The distinction between the bearer service and the transport layer above it is that the bearer service defines those features that must be implemented inside the network, in the switches and routers, while the transport layer defines services that can be realized either in the network or the end node. For example, bounds on delay must be realized inside the network by controls on queues. Delay, once introduced, cannot be removed at the destination. On the other hand, reliable delivery is normally viewed as a transport-layer feature, since the loss of a packet inside the network can be detected and corrected by cooperating end nodes. Since transport services can be implemented in the end node if they are not provided inside the network, they do not have to be mandated as a core part of the bearer service. This suggests that they should be separated into a distinct transport layer, since it is valuable, as the committee discusses, to minimize the number of functions defined in the bearer service.

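To make the bearer/transport split concrete, the following Python sketch shows how end nodes alone can add reliable, sequenced delivery on top of an unreliable bearer service, using the ingredients described above and in note 8: sequence numbers to detect loss and misordering, a checksum to detect corruption, and retransmission by the sender. It is an illustrative toy, not a description of TCP or of any ODN standard; all class and function names are invented for the example.

```python
import random
import zlib

# Hypothetical names; a toy model of the ideas in the text, not a real protocol.

class UnreliableBearer:
    """A bearer service that only moves packets, and sometimes loses them."""
    def __init__(self, loss_rate=0.3):
        self.loss_rate = loss_rate
        self.delivered = []           # packets that survive the "network"

    def send(self, packet):
        if random.random() > self.loss_rate:
            self.delivered.append(packet)

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

class ReliableReceiver:
    """End-node logic: use sequence numbers and checksums to rebuild the stream."""
    def __init__(self):
        self.received = {}

    def accept(self, packet):
        seq, data, ck = packet
        if checksum(data) == ck:      # discard corrupted data
            self.received[seq] = data
        return sorted(self.received)  # acknowledgment: sequence numbers held so far

class ReliableSender:
    """End-node logic: number each segment and retransmit anything not acknowledged."""
    def __init__(self, bearer, receiver):
        self.bearer, self.receiver = bearer, receiver

    def send_message(self, segments):
        outstanding = {i: s for i, s in enumerate(segments)}
        while outstanding:                         # keep retransmitting until all acked
            for seq, data in outstanding.items():
                self.bearer.send((seq, data, checksum(data)))
            acked = set()
            for packet in self.bearer.delivered:   # receiver processes what arrived
                acked.update(self.receiver.accept(packet))
            self.bearer.delivered.clear()
            outstanding = {s: d for s, d in outstanding.items() if s not in acked}
        return b"".join(self.receiver.received[i] for i in sorted(self.receiver.received))

bearer = UnreliableBearer(loss_rate=0.4)
receiver = ReliableReceiver()
sender = ReliableSender(bearer, receiver)
print(sender.send_message([b"bits ", b"are ", b"bits, ", b"reassembled in order"]))
```

Nothing in this sketch runs inside the network itself, which is the point: a feature of this kind can be replaced or omitted by the end nodes without touching the bearer service.
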
While the service enhancements provided by the transport layer are very important, this report does not elaborate further on this layer (as it does below for the other three layers), since these services are a well-understood and mature aspect of networking today.

The third layer, middleware, is composed of higher-level functions that are used in common among a set of applications. These functions, which form a toolkit for application implementors, permit applications to be constructed more as a set of building blocks than as vertically integrated monoliths. These middleware functions distinguish an information infrastructure from a network providing bit-level transport. Examples of these functions include file system support, privacy protection, authentication and other security functions, tools for coordinating multisite applications, remote computer access services, storage repositories, name servers, network directory services, and directory services of other types.

A subset of these functions, such as naming, will best be implemented in a single, uniform manner across all parts of the ODN. There is a need for one or more global naming schemes in the ODN. For example, there may have to be a low-level name space for network end nodes, and a higher-level name space for naming users and services. These name spaces, since they are global, cannot be tied to a particular network technology choice but must be part of the technology-independent layers of the architecture. These somewhat more general service issues will benefit from a broad architectural perspective, which governmental involvement could sustain.

The uppermost layer is where the applications recognized by typical users reside, for example, electronic mail (e-mail), airline reservation systems, systems for processing credit card authorizations, or interactive education. It is at this level that it is necessary to develop all the user applications that will be run on the ODN. The benefit of the common services and interfaces of the middleware layer is that applications can be constructed in a more modular manner, which should permit additional applications to be composed from these modules. Such modularity should provide the benefit of greater flexibility for the user, and less development cost and risk for the application implementors. The complexity of application software development is a major issue in advancing the ODN, and any approach to reducing the magnitude and risk of application development is an important issue for the NII.

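To illustrate the building-block style of construction that the middleware layer is meant to enable, the short Python sketch below assembles a toy application from three generic middleware services (a name service, an authentication service, and a storage repository). The interfaces and names are invented for the example and are not part of any ODN specification.

```python
# Invented interfaces standing in for generic middleware building blocks.

class NameService:
    """Maps human-readable names to network locations (a global naming scheme)."""
    def __init__(self):
        self._table = {}
    def register(self, name, address):
        self._table[name] = address
    def resolve(self, name):
        return self._table[name]

class AuthenticationService:
    """Checks credentials; a real system would use cryptographic protocols."""
    def __init__(self, users):
        self._users = users
    def authenticate(self, user, secret):
        return self._users.get(user) == secret

class StorageRepository:
    """A shared repository that applications can read and write."""
    def __init__(self):
        self._objects = {}
    def put(self, key, value):
        self._objects[key] = value
    def get(self, key):
        return self._objects.get(key)

class MailApplication:
    """An application composed from middleware pieces rather than built as a monolith."""
    def __init__(self, names, auth, store):
        self.names, self.auth, self.store = names, auth, store
    def send(self, sender, secret, recipient, text):
        if not self.auth.authenticate(sender, secret):
            raise PermissionError("authentication failed")
        mailbox = self.names.resolve(recipient)   # locate the recipient's mailbox
        self.store.put(mailbox, text)             # deposit the message
        return mailbox

names, auth, store = NameService(), AuthenticationService({"alice": "s3cret"}), StorageRepository()
names.register("bob", "mailbox:bob@example")
app = MailApplication(names, auth, store)
print(app.send("alice", "s3cret", "bob", "hello over the ODN"))
```

A second application (a directory browser, say) could reuse the same three services unchanged, which is the modularity and cost-reduction argument made above.
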
As a wider range of services is provided over the network, it will be important that users see a uniform interface, to reduce the learning necessary to use a new service. A set of common principles is needed for the construction of user interfaces, a framework within which a new network service can present a familiar appearance, just as the Macintosh or the X Window interface attempts to give the user a uniform context for applications within the host. A generation of computer-literate users should be able to explore a new network application as confidently as they use a new television remote control today. Much effort is under way in the commercial sector to identify and develop approaches in this area, and it will be necessary to wait and see if market forces can produce a successful set of options in this case.

A critical feature of the ODN architecture is openness to change. Since the committee sees a continuous evolution in network technology, in end-node function, and, most importantly, in user-visible services, the network standards at all the levels must be evolvable. This requires an overall architectural view that fosters incremental evolution and permits staged migration of users to new paradigms. There must be an agreed-upon expectation about evolution and change that links the makers of standards and the developers of products. This expectation defines the level of effort that will be required to track changing requirements, and it permits the maintainers to allocate the needed resources.

The need for responsiveness to change can represent a formidable barrier, since the status quo becomes embedded in user equipment, as well as in the network. If standards are not devised to permit graceful and incremental upgrade, as well as backwards compatibility, the network is likely either to freeze in some state of evolution or to proceed in a series of potentially disruptive upheavals. The Internet is currently planning for a major change, the replacement of its central protocol, IP. This change will require replacing software in every packet switch in the Internet and in the millions of attached hosts. To avoid the disruption that might otherwise occur, the transition will probably be accomplished over several years. In a similar approach the telephone industry has made major improvements to its infrastructure and standards (such as changes to its numbering plan) in a very incremental and coordinated manner. Part of what has been learned in these processes is the wisdom of planning for change.

The committee thus concludes that the definition of an ODN architecture must proceed at many levels. At each of these levels technical decisions will have to be made, and in some cases making the correct decision will be critical to the ODN's success in enabling a true NII. The committee recognizes that if the government attempts to directly set standards and conventions, there is a risk of misguided decision making.10 There is also concern that if the decisions are left wholly to the marketplace, certain key decisions may not be made in a sufficiently timely and coherent manner. Some overarching decisions, such as the specification
of the bearer service discussed in the next section, must of necessity be made early in the development process, since they will shape the deployment of so much of the network.11 Therefore, both the nature of the architecture's layers and the decision process guiding their implementation are important. In the interests of flexibility, the committee emphasizes the architecture, services, and access interfaces composing an ODN. It describes the characteristics the infrastructure technology should have, leaving to the engineers and providers how those characteristics should be realized.

The Centrality of the Bearer Service

The nature of the bearer service plays a key role in defining the ODN architecture. Its existence as a separate layer—the abstract bit-level network service—provides a critical separation between the actual network technology and the higher-level services that actually serve the user. One way of visualizing the layer modularity is to see the layer stack as an hourglass, with the bearer service at the narrow waist of the hourglass (Figure 2.1). Above the waist, the glass broadens out to include a range of options for transport, middleware, and applications. Below the waist, the glass broadens out to include the range of network technology substrate options. Imposing this narrow point in the protocol stack isolates the application builder from the range of underlying network facilities, and the technology builder from the range of applications. In the Internet protocols, the IP protocol itself sits at this waist in the hourglass. Above IP are options for transport (TCP, UDP, or other specialized protocols); below are all the technologies over which IP can run.

The benefit of this architecture is that it forces a distinction between the low-level bearer service and the higher-level services and applications. The network provider that implements the basic bearer service is thus not concerned with the standards in use at the higher levels. This separation of the basic bearer service from the higher-level conventions is one of the tools that ensures an open network; it precludes, for example, a network provider from insisting that only a controlled set of higher-level standards be used on the network, a requirement that would inhibit the development and use of new services and might be used as a tool to limit competition. This partitioning of function is not meant to imply that one entity cannot be a provider of both low- and higher-level services. What is critical is that the open interfaces exist to permit fair and open competition at the various layers. The committee notes that along with this open competition environment comes the implication that the low-level and high-level services should be unbundled.

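The hourglass picture can be made concrete with a short Python sketch: a single higher-level service is written against one narrow, abstract bearer interface, and interchangeable substrate technologies implement that interface. The class and method names are invented for illustration and are not drawn from any actual protocol suite.

```python
from abc import ABC, abstractmethod

class BearerService(ABC):
    """The narrow waist: an abstract bit-transport interface that hides the substrate."""
    @abstractmethod
    def deliver(self, destination: str, payload: bytes) -> None:
        ...

# Two hypothetical substrates; each realizes the same bearer interface differently.
class FiberSubstrate(BearerService):
    def deliver(self, destination, payload):
        print(f"[fiber]    {len(payload)} bytes framed onto an optical link to {destination}")

class WirelessSubstrate(BearerService):
    def deliver(self, destination, payload):
        print(f"[wireless] {len(payload)} bytes sent in radio bursts to {destination}")

class MessageService:
    """A higher-level service written only against the bearer abstraction."""
    def __init__(self, bearer: BearerService):
        self.bearer = bearer
    def send_text(self, destination: str, text: str) -> None:
        self.bearer.deliver(destination, text.encode("utf-8"))

# The same higher-level code runs unchanged over either technology.
for substrate in (FiberSubstrate(), WirelessSubstrate()):
    MessageService(substrate).send_text("node-17", "bits are bits")
```

The point of the narrow waist is exactly the one made above: the author of the higher-level service never needs to know which substrate carried the bits, and the substrate builder never needs to know which applications run above.
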
Even if the providers of the bearer service are indifferent to the higher-level standards, those standards should be specified and promulgated. If there are accepted agreements among software developers as to the standards and services at the middleware and application levels, users can benefit greatly by obtaining software that permits them to participate in the services of the network.

Characterizing the Bearer Service

If the ODN is indeed to provide an open and accessible environment for higher-level service providers, then there must be some detailed understanding of the characteristics of this underlying bearer service, so that higher-level service providers can build on that foundation. A precedent can be seen in the telephone system's well-developed service model, which was initially developed to carry a voice conversation. Other applications such as data modems and fax have been developed over this service, precisely because the basic "bearer service," the service at the waist of the hourglass, was well defined.

In the case of data networks, defining the base characteristics of the underlying service is not easy. Applications vary widely in their basic requirements (e.g., real-time video contrasted with e-mail text). Current data network technologies vary widely in their basic performance, from modem links (at 19,200, 14,400, 9,600, or fewer bits per second) to local area networks such as the 10-Mbps Ethernet, to the 100-Mbps FDDI.

A number of conclusions can be drawn about the definition of the bearer service. The bearer services are not part of the ODN unless they can be priced separately from the higher-level services, since the goal of an open and accessible network environment implies that (at least in principle) higher-level services can be implemented by providers different from the provider of the bearer service. As long as two different providers are involved in the complete service offering, the price of each part must be distinguished. The definition of the bearer services must not preclude effective pricing for the service. The committee recognizes that such pricing elements have been bundled together and that this history will complicate a shift to the regime advanced here in the interest of a free market for entry at various levels.12

We must resist the temptation to define the bearer service using simplistic measures such as raw bandwidth alone. We must instead look for measures that directly relate to the ability of the facilities to support the higher-level services, measures that specify QOS parameters such as bandwidth, delay, and loss characteristics.

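As one way to picture what "measures that specify QOS parameters" might look like, the following Python sketch defines an illustrative quality-of-service description (bandwidth, a delay bound, and a loss tolerance) and a simple test of whether an offered service class satisfies an application's request. The particular fields, service classes, and numbers are invented for the example; the ODN does not prescribe them.

```python
from dataclasses import dataclass

@dataclass
class QOS:
    """An illustrative quality-of-service description, not a standardized one."""
    bandwidth_kbps: int     # sustained throughput the application needs or the class offers
    max_delay_ms: int       # bound on end-to-end delay
    max_loss_rate: float    # tolerable fraction of lost packets

def satisfies(offered: QOS, requested: QOS) -> bool:
    """Does an offered service class meet an application's request?"""
    return (offered.bandwidth_kbps >= requested.bandwidth_kbps
            and offered.max_delay_ms <= requested.max_delay_ms
            and offered.max_loss_rate <= requested.max_loss_rate)

# Hypothetical service classes a bearer service might advertise.
best_effort   = QOS(bandwidth_kbps=64,   max_delay_ms=10_000, max_loss_rate=0.05)
realtime_tier = QOS(bandwidth_kbps=1500, max_delay_ms=100,    max_loss_rate=0.01)

# Illustrative application requirements.
email_text = QOS(bandwidth_kbps=10,   max_delay_ms=30_000, max_loss_rate=0.05)
video_call = QOS(bandwidth_kbps=1500, max_delay_ms=150,    max_loss_rate=0.01)

for name, need in [("e-mail", email_text), ("video", video_call)]:
    chosen = next((label for label, offer in [("best effort", best_effort),
                                              ("real-time", realtime_tier)]
                   if satisfies(offer, need)), "no suitable class")
    print(f"{name}: {chosen}")
```

The sketch simply makes the selection step described earlier explicit: an application with particular requirements chooses among the qualities of service that the bearer service provides.
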
FIGURE 2.1 A four-layer model for the Open Data Network.

Computer and Communications Security

Classical end-node security is based on the idea that each node should separately defend itself by using controls on that machine. However, current-generation PCs and workstations are not engineered with a high degree of security assurance, so as a practical matter, an alternative is being deployed based on putting "firewalls" into the network, machines that separate the network into regions of more and less trust and regulate the traffic coming from the untrusted region. Firewalls raise a number of serious issues for the Internet protocol architecture, since they violate a basic assumption of the Internet, which is that two machines on the same internetwork can freely exchange packets. Firewalls explicitly restrict the sorts of packets that can be exchanged, which can cause a range of operational problems. Research with the goal of making firewalls work better—making them both more secure and more operationally robust—would be very important at the present time.

The strongly decentralized nature of the NII makes security issues more difficult, because it will be necessary to establish communication among a set of sites, each of which implements its own controls (e.g., user authentication) and is not willing to trust the other. Trustworthy interaction among mutually suspicious regions is a fundamental problem for which there are few general models.

Security techniques using any form of encryption are no more robust than the methods used to distribute and store the encryption keys. Personal computers today offer no secure way to store keys, which severely imperils many security schemes. A proposal for solving this problem would be very important. The other issue with keys is the need for trustworthy distribution of keys in the large, decentralized NII. How can two sites with no common past history of shared trust exchange keys to begin communication in a way that cannot be observed or corrupted? The most direct solution would seem to be a trusted third party who "introduces" the two sites to each other, but there is no framework or model to define such a service or to reason about its robustness. Research in this area is a key aspect of fitting security into a system with the scale of the NII.

In addition to protection of host computers and the data they hold, the network itself must be protected from hostile attack that overloads the network, steals service, or otherwise renders the system useless. Additional research and development should be done on technical mechanisms, better approaches to operation, and new approaches to training and education.

Methods and technology for ensuring security are relevant both to the lower levels of the network and to the higher levels of the information infrastructure. Protecting intellectual property rights is a security concern, as is anticipating problems of fraud in payment schemes (control of fraud depends on identifying users in a trustworthy manner).

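The firewall approach described above (a machine that divides the network into regions of more and less trust and regulates what crosses the boundary) can be sketched as a small rule-based packet filter. The rule format, field names, and policy in this Python fragment are invented for the example and do not describe any particular product.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_zone: str      # "inside" (trusted) or "outside" (untrusted)
    dst_port: int      # service the packet is aimed at
    protocol: str      # e.g. "tcp" or "udp"

# Illustrative policy: the untrusted side may reach only a short list of services.
ALLOWED_INBOUND = {("tcp", 25),    # mail
                   ("tcp", 80)}    # public information server

def permit(packet: Packet) -> bool:
    """Regulate traffic coming from the untrusted region; pass internal traffic freely."""
    if packet.src_zone == "inside":
        return True                                  # the trusted region is not restricted here
    return (packet.protocol, packet.dst_port) in ALLOWED_INBOUND

for pkt in [Packet("outside", 80, "tcp"),     # permitted: public service
            Packet("outside", 23, "tcp"),     # dropped: remote login from outside
            Packet("inside", 5000, "udp")]:   # permitted: originates in the trusted region
    print(pkt, "->", "permit" if permit(pkt) else "drop")
```

The operational problems noted above arise precisely because rules of this kind break the Internet assumption that any two machines on the same internetwork can freely exchange packets.
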
Again, achieving security requires the study of specific mechanisms and overall architecture. Much is known about techniques such as encryption. What is equally important is a proposal for an overall plan that combines all useful techniques into a consistent and effective approach to ensuring security. This overall plan must be developed, validated, and then replicated in such a way that users and providers can understand the issues and implications associated with their parts of the overall system. This effort, not the study of specific mechanisms, is the hard part, and the key to success.

Research in the Development of Software

The continuing need for research in means to develop large and complex software packages is not new, nor is it specific to networking and the information infrastructure. At the same time, it is a key issue for which there seems to be no ready solution. Problems of software development are a key impediment to realization of the NII.

A new generation of applications developed to deal with information and its use is likely to be substantially more complex than the application packages of today: they will deal with large quantities of information in heterogeneous formats, they will deal with distributed information and will be distributed themselves, they will provide a level of intelligence in processing the quantities of information present on the network, and they will be modular—capable of being reconfigured and reorganized to meet new and evolving user objectives. These requirements represent a level of sophistication that is very difficult to accomplish with reliability, very expensive to undertake, and thus very risky. The committee adds its support to the continued attempts to advance this area.

Experimental Network Research

Experimental research, which involves the design of real protocols, the deployment of real systems, and the construction of testbeds and other experimental facilities, is a critical part of the research needed to build the NII. Since this sort of work is often difficult to fund and execute, especially within the limits of the academic context, the committee stresses the importance of facilitating it.

The Internet has provided an experimental environment for a number of practical research projects. In the early stages of the Internet, the network itself was viewed as experimental, and indeed these experiments
played an important role in the Internet's development. However, the increasingly operational nature of the Internet has essentially precluded its use as a research vehicle. In the future, any remaining opportunity for large-scale network research will vanish, given that the NSFNET backbone is about to disappear and will be replaced by commercial networks and a backbone with only a small number of nodes, the very high speed backbone network service (vBNS), which is to provide high bandwidth for selected applications. In addition, it is likely that most of the Internet, like the larger NII, will be operated by commercial organizations over the next few years.

This transition has required the implementation of separate networks used specifically for research and experimentation. The gigabit testbeds provide facilities for investigating state-of-the-art advanced technologies and applications. ARPA has also provided a lower-speed experimental network, the DARTnet, connecting a number of ARPA-funded research sites. However, these networks are small and do not provide any real means to explore issues of scale. Indeed, there does not seem to be any affordable way to build a free-standing experimental network large enough to explore issues of scale, which is a real concern, since practical research in this area is key to the success of the NII. Currently, the research community attempts to deal with this problem by using the resources of the Internet to realize a "virtual network" that researchers then use for large-scale experiments. Thus, multicast has been developed by means of a virtual network called the M-bone (multicast backbone) that runs over the Internet.42 Similar schemes have been used to develop new Internet protocols.

There is a danger that the success of the Internet, much of which has been based on its openness to experimentation, will lead to a narrowing of opportunities for such experimentation. It is important that a portion of the NII remain open to controlled experiments. A balance must thus be maintained between the need to experiment and the need to provide stable service using commercial equipment. Attention should be given to the technical means to accomplish these goals. Funding should be allocated for the deployment of network experiments and prototype systems on the NII, even though they may be relatively more expensive than other research paradigms.

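The "virtual network" technique mentioned above, in which experiments run over the production Internet rather than on a separate physical network, is generally realized by encapsulation: packets of the experimental protocol are wrapped inside ordinary packets and carried between cooperating experimental routers, which unwrap and process them. The Python fragment below is a schematic illustration of that idea under that assumption; the names and packet format are invented and do not describe the actual M-bone implementation.

```python
# Schematic illustration of an overlay ("virtual") network built by encapsulation.
# All names and formats are invented; this is not the M-bone's actual mechanism.

def encapsulate(experimental_packet: dict, overlay_hop: str) -> dict:
    """Wrap an experimental packet inside an ordinary unicast packet."""
    return {"dst": overlay_hop, "proto": "ordinary", "payload": experimental_packet}

def decapsulate(ordinary_packet: dict) -> dict:
    """Recover the experimental packet at the next experimental router."""
    return ordinary_packet["payload"]

def production_network_deliver(packet: dict) -> dict:
    """Stand-in for the production Internet: it only understands ordinary packets."""
    assert packet["proto"] == "ordinary", "the production network rejects unknown protocols"
    return packet  # delivered unchanged to packet["dst"]

# Two experimental routers exchange an experimental-protocol packet across the
# production network, which never needs to understand the new protocol.
experimental = {"proto": "experimental-multicast", "group": "seminar", "data": "hello"}
carried = production_network_deliver(encapsulate(experimental, "experimental-router-B"))
print(decapsulate(carried))
```

The attraction for researchers is that the experiment gains the scale of the operational network while the operational network carries only packets it already understands.
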
Experimental Research in Middleware and Application Services

Conducting testbed experimentation at the middleware level is usually less problematic than doing network research, because operation of experimental higher-level services cannot easily disrupt the ongoing operational use of the network by applications not depending on those services. The Internet thus remains a major facility for development and evaluation of middleware services, an opportunity that should be recognized and encouraged. Testbeds can address associated management of rights and responsibilities, including assessment of needs and mechanisms for the protection of privacy, security, and intellectual property rights.

Experimental and testbed efforts are needed to support a transition to higher-level, information management uses of networks. As John Diebold has observed,43 applications of information technology progress through a cycle encompassing modernization of old ways, innovation (involving the development of new access tools and services), and ultimately transformation from one kind of activity to another (including doing the previously inconceivable). A great deal of experimentation is needed to achieve truly transformational applications.

The challenge can be illustrated by reference to the emergence of "casual publishing." The ability to publish from a desktop has changed publication practices; desktop video generation and reception will change them more. Although computer technology is making publishing changes possible, who can benefit, how, and at what costs will depend on the nature of the infrastructure. A similar set of technical, market, and policy issues arises in the digital library context, where experimentation has begun with support from NSF, ARPA, and NASA.44

Rights Management Testbed

More generally, an example of a useful testbed relating to rights management would incorporate systematic identification of the rationale for actions appropriate to government and industry into a joint industry-government project demonstrating model contractual and operational relationships to support the carriage of multimedia proprietary content. The computer, telecommunications carrier, cable provider, software provider, and content provider industries should participate, perhaps providing matching funds complementing a small contribution from the federal government, with broad dissemination of results a requirement. Questions that should be answered include the following:

How can electronic authorization or execution of electronic contracts be provided over the network? This is an example of a general and flexible piece of infrastructure that the private sector is not likely to provide.

What means can be developed to quickly provide varying degrees of authorization for particular uses of a work, for example, when the work may be used by different users for different purposes and at different pricing schemes?
What various technological means—and the associated best times to use them—can be found for protecting data?

What are the options for formatting multimedia information in a consumer-friendly fashion for distribution over the network to "episodic" users? This area is now the focus of considerable amounts of research and industry activity. All efforts should be aimed at the most cost-efficient and interoperable means of achieving goals.

A variant or a component of the above concept might include a series of multimedia projects that explore provision of electronic access to collections and materials generally inaccessible in the past, but of high research value, including photographs, drawings, films, archival data, sound recordings, spatial data, written manuscripts, and so on.

Research to Characterize Effects of Change

It will be important to understand how the evolving infrastructure will affect both the infrastructure for research and education and the processes of research and education. This continuing process of change presents new challenges that militate against NSF assuming that it has successfully demonstrated the value of networking to research and therefore can diminish activity in that area. The new NSF-ARPA-NASA digital libraries initiative and NSF and ARPA information infrastructure-oriented activities under the IITA component of the HPCC program are steps in the right direction, but they are only first steps.

RECOMMENDATION: Network Research

The committee recommends that the National Science Foundation, along with the Advanced Research Projects Agency, other Department of Defense research agencies, the Department of Energy, and the National Aeronautics and Space Administration, continue and, in fact, expand a program of research in networks, with attention to emerging issues at the higher levels of an Open Data Network architecture (e.g., applications and information management), in addition to research at the lower levels of the architecture.

The technical issues associated with developing and deploying an NII are far from resolved. Research can contribute to architecture, to new concepts for network services, and to new principles and designs in key areas such as security, scale, heterogeneity, and evolvability. It is important to ensure that this country maintains its clear technical leadership and competitive advantage in information infrastructure and networking.

NOTES

1.  The term "open" has been used in a variety of ways in the networking and standards community. Some of the uses describe rather different situations from that which is described in this chapter. For example, the telephone companies have been developing a concept they call open network architecture. That architecture does not address the concerns listed here; it is a means to allow third-party providers to develop and attach to existing telephone systems alternative versions of advanced services such as 800-number service.

2.  Tolerance of heterogeneity must be provided for at more than the physical layer. At the higher levels, information must be coded to deal with a range of devices. For example, different video displays may have very different resolution: one may display a high-definition TV picture, while another may have a picture the size of a postage stamp. To deal with this, either (1) the picture must be simultaneously transmitted with multiple codings, or the postage stamp display must possess the computational power of an HDTV, so that it can find within the high-resolution picture the limited information it needs, or (2) (preferably) the information stream must have been coded for heterogeneity: the data must have been organized so that each resolution display can easily find the portions relevant to it.

3.  An illustration of this point can be seen in the history of a protocol suite called XNS that was developed by Xerox. XNS was proposed in the early 1980s and received considerable attention in the commercial community, since it was perceived as rather simple to implement. The interest in XNS continued until it became clear that Xerox did not intend to release the specification of one protocol in the XNS suite, Interpress, which was a protocol for printing documents. Within a very short time, all interest in XNS ceased, and it is essentially unknown today.

4.  The notion of a multilayer approach is consistent with directions now being undertaken by ARPA and NSF in supporting the NREN and IITA components of the HPCC initiative. It also appears in such projects as the proposed industry-university "I-95" project to "facilitate the free-market purchase, sale and exchange of information services." See Tennenhouse, David, et al. 1993. I-95: The Information Market, MIT/LCS/TR-577. Massachusetts Institute of Technology, Cambridge, Mass., August.

5.  The committee notes that conceptual models of the sort offered here may differ from models used to organize implementations, and emphasizes that the purpose of its conceptual model is to provide a framework for discussion and understanding. Models intended to guide actual implementation must be shaped by such issues as performance and may thus be organized in a somewhat different manner. In particular, a modularity based on strong layering may not be appropriate for organizing software modules.

6.  This four-layer taxonomy is not inconsistent with a three-layer model that has been articulated in recent NII and HPCC presentations, a model based on "bitways," middleware, and applications. The taxonomy suggested in this report further divides the lower bitways layer to emphasize the importance of the bearer service, as is discussed in text below.

7.  Quality of service (QOS) is discussed again later in this chapter. Although somewhat technical, this matter is a key aspect of defining the ODN. Today's Internet does not provide any variation in QOS; it provides a single sort of service often called "best effort." The telephone system also provides only one QOS, one designed for telephony. The Internet is currently undertaking to add user-selected QOS to its core service; it seems a requirement for a next-generation general service network.

8.  In the Internet today, these transport features are provided by a protocol called the Transmission Control Protocol, or TCP, which is the most common version of the transport layer of the Internet. The TCP assigns sequence numbers to data being transferred across the network and uses those sequence numbers at the receiver to assure that all data is
received and that it is delivered in order. If a packet is lost or misordered, these sequence numbers detect that fact. To detect whether any of the data being transferred over the network become damaged due to transmission errors, TCP computes a "checksum" on the data and uses that checksum to discover any corruption. If a damaged packet is detected, the receiving TCP will ask the sending TCP to retransmit that packet. The TCP also contains an initial connection synchronization mechanism, based on the exchange of unique identifiers in packets, to bring an end-node connection into existence reliably. While TCP is the most prevalent of the transport protocols used in the Internet, it is not mandatory, nor is it the only transport service. A range of situations, such as multicast delivery of data, and delivery where less than perfect reliability is required, imply the use of an alternative to TCP. For this reason, TCP is defined in such a way that no part of its implementation is inside the network. It is implemented in the end nodes, which means that replacing it with some other protocol does not require changes inside the network.

9.  The transport layer defined in this report is not exactly the same as the layer with the same name in the OSI reference model, the OSI layer 4, because it also includes protocols for data formats, which are a part of the OSI presentation layer. Thus the ODN transport layer is a more inclusive collection of services that gathers together all the services that are provided in the networks of today to support applications in the end node.

10.  For example, the government required that television sets include UHF tuners. In retrospect, most people would argue that the policy was seriously flawed. UHF television has never lived up to its expectations, the service has hoarded billions of dollars worth of valuable spectrum for decades, and the cost of television sets was increased with little net benefit to consumers—especially those living in less populated areas.

11.  The committee recognizes that the Information Infrastructure Task Force has begun to explore the concept of technically based "road maps" for the NII.

12.  The committee recognizes that unbundling is a controversial issue under current debate among state and federal regulatory agencies. The Ameritech proposals to open up its facilities present one indication that recognition of tendencies toward unbundling may be widening within industry. See Teece, David J. 1993. Restructuring the U.S. Telecommunications Industry for Global Competitiveness: The Ameritech Program in Context, University of California at Berkeley, April. This monograph describes how Ameritech offers to unbundle its local loops and provide immediate access to practically all local facilities and switching systems, with significantly lower costs for the unbundled loop compared to the revenue available from exchange telephone and related services: Once effectuated, the Ameritech unbundling plan will make the local exchange effectively contestable. Basically, anyone wanting to enter any segment could do so at relatively low cost. Entry barriers would in essence be eliminated. . . . [I]nterconnectors can literally isolate and either use or avoid any segment of the network. They are also free to interconnect using their own transport or purchasing transport from Ameritech. . . . [A]ll elements of the network must be correctly priced since any underpriced segment can be used separately from the balance of the network and overpriced segments can easily be avoided. (p. 64)

13.  With each emerging network technology, including Ethernet, personal computers, high-speed LANs such as FDDI, and high-speed long-distance circuits, there have been predictions that IP or TCP would not be effective, might perform poorly, and would have to be replaced. So far these predictions have proved false. This concern is now being repeated in the context of network access to mobile end-node devices, such as PCs and other computers, and other new communications paradigms. It remains to be seen if there are real issues in these new situations, but the early experiments suggest that IP will indeed work in the mobile environment.

14.  There are some other less central IP features, such as the means to deal with lower-level technology with different maximum packet sizes. There is also a small set of IP-level
control messages, which report to the end node various error conditions and network status, such as use of a bad address, relevant changes in network routing, or network failures.

15.  A related issue is development of standard format sets for publishing over the Internet, for requiring headers and/or signatures, or for requiring some kind of registration that might automatically put the "work" in a directory.

16.  Indeed, many applications cannot predict in advance what their bandwidth needs will be, as these depend very dynamically on the details of execution, for example, which actions are requested, which data are fetched, and so on.

17.  Providing a refusal capability has implications for applications and user interfaces designed in the Internet tradition, which today do not ask permission to use the network but simply do so. The concept of refusal is missing.

18.  By late 1993, perhaps 1,000 people worldwide were using real-time video over the Internet. The consequence was that at times fewer than 0.1 percent of Internet users consumed 10 percent of the backbone capacity. Personal communication, Stephen Wolff, National Science Foundation, December 20, 1993.

19.  This point is relevant to a current debate in the technical community about whether the basic bearer service that can be built using the standards for ATM should support statistical sharing of bandwidth. Some proposals for ATM do not support best-effort service, but rather only services with guaranteed QOS parameters. This position is motivated by a set of speculations that a "better" quality of service better serves user needs. However, taking into account cost structures and the success of best-effort service on the Internet, a "better" service may not be more desirable. Technical decisions of this sort could have a major bearing on the success of ATM as a technology of choice for the NII.

20.  The guarantee issue is related to the scheduling algorithm that packets see. A packet (or ATM) switch can have either a very simple or a rather complex scheduling algorithm for departing network traffic. The simple method is First In, First Out (FIFO). There are a number of more complex methods, one of which is Weighted Fair Queueing (WFQ). In FIFO, a burst of packet traffic put into the network goes through immediately, staying in front of other later packets. WFQ services different packet classes in turn, so that the burst is fed into the network in a regulated way and then mixed by the scheduler with packets from other classes. One alternative for achieving fairness is to allocate bandwidth in a very conservative manner (peak rate allocation) so that the user is externally limited (at the entry point of the net) to some rate, and then to assure that on every link of the network over which this flow of cells will pass, there is enough bandwidth to carry the full load of every user at once. Such an approach using peak allocation eliminates from the network any benefit of statistical bandwidth sharing, which is one of the major benefits of packet switching. On the other hand, WFQ is one method for ensuring that we benefit from statistical multiplexing. The way this decision is settled will have real business consequences for the telephone companies and other ATM network providers.

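The contrast note 20 draws between FIFO and weighted fair queueing can be illustrated with a small Python simulation of a single outgoing link. This is a deliberately simplified sketch (fixed-size packets, with weighted round-robin service standing in for WFQ), not the scheduling algorithm of any real switch; the function names and weights are invented for the example.

```python
from collections import deque

def fifo_schedule(arrivals):
    """FIFO: packets depart in arrival order, so a burst stays in front of later traffic."""
    return list(arrivals)

def weighted_fair_schedule(arrivals, weights):
    """A simplified weighted round-robin stand-in for WFQ: classes are served in turn,
    so a burst from one class is interleaved with packets from the other classes."""
    queues = {cls: deque() for cls in weights}
    for cls, pkt in arrivals:
        queues[cls].append((cls, pkt))
    order = []
    while any(queues.values()):
        for cls, weight in weights.items():    # visit each class in turn
            for _ in range(weight):            # serve up to `weight` packets per visit
                if queues[cls]:
                    order.append(queues[cls].popleft())
    return order

# A burst of 6 packets from class "burst" arrives just before 3 packets of class "steady".
arrivals = [("burst", i) for i in range(6)] + [("steady", i) for i in range(3)]
print("FIFO:", fifo_schedule(arrivals))
print("WFQ :", weighted_fair_schedule(arrivals, weights={"burst": 1, "steady": 1}))
```

With FIFO the entire burst departs ahead of the steady traffic; with the weighted scheduler the burst is fed onto the link in a regulated way and mixed with the other class, as the note describes.
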
21.  One estimate for the accounting file for only long-haul intra-U.S. Internet traffic is that it would exceed 45 gigabytes per month per billion packets; the NSFNET backbone was approaching 40 billion packets per month by late 1993. See Roberts, Michael M. 1993. "Internet Accounting--Revisited," EDUCOM Review, November-December (December 6 e-mail).

22.  This explicitly does not preclude implementing similar services in noncompliant ways as well. Thus, video might be provided according to the standards required for NII compliance, and as well in some proprietary noncompliant coding.

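To give a sense of the volume implied by note 21, the arithmetic below simply multiplies the two figures quoted there (45 gigabytes of accounting data per billion packets, and a backbone rate approaching 40 billion packets per month in late 1993).

```python
# Rough arithmetic based only on the figures quoted in note 21.
gigabytes_per_billion_packets = 45          # accounting data per billion packets
backbone_billion_packets_per_month = 40     # NSFNET backbone, late 1993

monthly_accounting_gb = gigabytes_per_billion_packets * backbone_billion_packets_per_month
print(f"~{monthly_accounting_gb} GB of accounting records per month "
      f"(~{monthly_accounting_gb / 1000:.1f} terabytes) for the backbone alone")
```
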
Realizing the Information Future: The Internet and Beyond     clear, but bandwidth will always be a problem to some extent with wireless. The question is how pervasive wireless will be for data communications. The predictions are indeed muddied. To quote a February 15, 1994, article in America's NETWORK, "The most experi­enced analyst with the best research data can't predict with certainty how the coming wireless data market will develop." Robert Pepper (FCC) reminds us that when cellular began, the best guesses were that there would be 1 million customers by the end of the century; today there are 60 million. It is clear that the lower-speed data services will surely be used widely, it is use of the high-speed services that is hard to predict. 24.   The current status of the GOSIP is in doubt. NIST has convened the Federal Internet-working Requirements Panel to advise it on options for dealing with the GOSIP. At this writing, the draft report of this panel, opened for comments, was not yet final. However, the overall direction of the report appears to be to abandon the current GOSIP, which mandates one required protocol suite (the OSI suite), and to move to a more open approach based on multiple suites and an explicit acceptance of the Internet protocols. 25.   See also U.S. Congress, Office of Technology Assessment. 1992. Global Standards: Building Blocks of the Future. TCT-512. Government Printing Office, Washington, D.C., March. 26.   In the early 1970s, ARPA undertook the development of TCP/IP for the specific purpose of providing a standard approach to interoperation of DOD networks. The techni­cal development was done by a working group convened and funded by ARPA, with academic and industrial research participants. In the late 1970s, ARPA worked with the Defense Communications Agency (DCA) to mandate TCP/IP as a preliminary standard for internetworked DOD systems. The DCA and ARPA cooperated on the establishment of a more formal review committee to oversee the establishment and deployment of TCP/IP within the DOD. 27.   Additionally, the committee notes the emerging issues of addressing in the cable networks. Today, the cable networks have no real need for a global addressing architec­ture, since distinguishing between individual end nodes is needed only for directing the control messages sent to the set-top box. However, as the entertainment products become more complex and interactive, the need for an explicit addressing scheme will increase. If the cable networks expand to interwork with other parts of the information infrastructure, their addressing scheme should probably be unified with the scheme for telephony and information networks. 28.   Computer Science and Telecommunications Board (CSTB), National Research Coun­cil. 1990. Computers at Risk: Safe Computing in the Information Age. National Academy Press, Washington, D.C. 29.   Wireless radio transmission is especially subject to security risks for a number of reasons. First, the transmission is broadcast into the air, and so it is relatively simple to "tap" the transmission. Second, since the transmission is broadcast, a number of other radio receivers can easily receive it, and more than one of them may decode the message, opening up more opportunities for a breach of security. Third, since the medium is radio, it is easy to "jam" the transmission. Fourth, since radios are usually (though not always) portable, they are more vulnerable to being stolen, lost, damaged, and so on. 30.   
28.   Computer Science and Telecommunications Board (CSTB), National Research Council. 1990. Computers at Risk: Safe Computing in the Information Age. National Academy Press, Washington, D.C.

29.   Wireless radio transmission is especially subject to security risks for a number of reasons. First, the transmission is broadcast into the air, and so it is relatively simple to "tap" the transmission. Second, since the transmission is broadcast, a number of other radio receivers can easily receive it, and more than one of them may decode the message, opening up more opportunities for a breach of security. Third, since the medium is radio, it is easy to "jam" the transmission. Fourth, since radios are usually (though not always) portable, they are more vulnerable to being stolen, lost, damaged, and so on.

30.   The committee recognizes that security is an emphasis of the administration's Information Infrastructure Task Force, but it seeks a sufficiently broad and deep technical framework, beginning with a security architecture.

31.   CSTB has previously recommended more security-related research. See CSTB, 1990, Computers at Risk.

32.   Lewis, Peter H. 1994. "Computer Security Experts See Pattern in Internet Break-ins," New York Times, February 11; and Burgess, John. 1994. "DOD Plan May Cut Ties to Internet," Network World, January 10, p. 95.

33.   CSTB, 1990, Computers at Risk.

34.   CSTB will launch a separate study of encryption and cryptography policy in mid-1994.

35.   See CSTB, 1990, Computers at Risk.

36.   Lewis, 1994, "Computer Security Experts See Pattern in Internet Break-ins."

37.   Recognition of this problem is developing in the relevant industries, but problems of design and implementation remain. See "National Information Infrastructure and Grand Alliance HDTV System Operability," February 22, 1994.

38.   Note that specificity is a theme of technology decisions for most interactive television trials to date. See Yoshida, Junko, and Terry Costlow. 1994. "Group Races Chip Makers to Set-top," EE Times, February 7 (electronic distribution), which observes that "many of the digital interactive TV trials and commercial rollouts are married to a particular set-top box design that is directly tied to a specific network architecture. Examples range from the set-top box Silicon Graphics is basing on its Indy workstation for Time Warner Cable's Full Service Network project in Orlando, Florida, to the box Scientific-Atlanta is building around 3DO Inc.'s graphics chip set for US West's trial in Omaha, Nebraska."

39.   Continental Cablevision. 1994. "Continental Cablevision, PSI Launch Internet Service: First Commercial Internet Service Delivered via Cable Available Beginning Today in Cambridge, Massachusetts," News Release, March 8.

40.   A wide range of speeds might be offered from the user back into the network. Today's options for access speeds range from voice-grade modems to current higher-speed modems at 56 kbps to ISDN at 128 kbps. None of these speeds is sufficient either for low-delay transfer of significant quantities of data or for delivery of video from the home into the network. Since high-quality compressed video seems to require between 1.5 and 4 Mbps, a channel of this size (at least a T1 channel) would permit a user to offer one video stream. It would still represent a real bottleneck for a site offering access to significant data. For comparison, in today's LAN environments 10 Mbps is considered minimal for access to data stored on file servers. Finally, for networks whose primary purpose is to provide access to entertainment video, the operator of the network presumably has the access capacity to deliver several hundred video streams into the network simultaneously. It is unlikely that this sort of inbound capacity will be readily available to any other user of the network. But at lower and more realistic input speeds, perhaps from T1 to 10 Mbps, there are a variety of interesting opportunities for becoming an information provider. A rough worked example of these capacity figures follows this note.
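The back-of-the-envelope sketch below restates the capacity figures in note 40. The access rates and the 1.5-Mbps low end for compressed video come from the note; the 14.4-kbps figure for a voice-grade modem and the 10-megabyte file size are assumptions added only for illustration.

    # A rough check of the capacity figures in note 40: how many 1.5-Mbps
    # video streams each upstream access rate could carry, and how long a
    # 10-megabyte transfer would take at that rate.

    ACCESS_RATES_KBPS = {
        "voice-grade modem (assumed 14.4 kbps)": 14.4,
        "higher-speed modem (56 kbps)": 56.0,
        "ISDN (128 kbps)": 128.0,
        "T1 (1,544 kbps)": 1544.0,
        "LAN-class access (10,000 kbps)": 10000.0,
    }

    VIDEO_STREAM_KBPS = 1500.0        # low end of the 1.5- to 4-Mbps range
    FILE_SIZE_KBITS = 10 * 8 * 1000   # a 10-megabyte file, in kilobits

    for name, rate in ACCESS_RATES_KBPS.items():
        streams = int(rate // VIDEO_STREAM_KBPS)
        minutes = FILE_SIZE_KBITS / rate / 60
        print(f"{name}: {streams} video stream(s), "
              f"{minutes:.1f} minutes to send 10 MB")

Only the T1 and LAN-class rates support even a single compressed video stream, which is the note's point about the bottleneck facing a home or small site that wants to be an information provider.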
41.   In an attempt to explore what these costs might be, the committee discovered that there are technical disagreements about the degree of additional complexity and cost implied by its objectives. Comments from inside the cable and telephone industries indicate that these industries have already assessed the costs of adding these more general features and have concluded internally that they cannot afford them in the current competitive climate. The committee thus takes as a given that these features will not be incorporated any time soon without policy intervention.

42.   A virtual network such as the M-bone is constructed by attaching to the Internet a set of experimental routers. The operational IP addressing and routing are used to establish paths between these routers, which then use these paths as if they were point-to-point connections. Experimental routing algorithms, for example, can then be evaluated in these new routers. These new algorithms can neither see nor disrupt the operational routing running at the lower level, and so the experiment does not disrupt normal operation. The isolation is not perfect, however. In the case of the M-bone, quantities of multicast traffic might possibly flood the real Internet links, preventing service. Explicit steps have been taken in the experimental routers to prevent this occurrence. Building a virtual network requires care to prevent any chance of lower-level disruptions, since it does involve sending real data over the real network. A schematic sketch of this kind of overlay follows this note.
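The sketch below is not the M-bone implementation; it is a minimal illustration, with invented names, of the encapsulation idea described in note 42: packets of an experimental protocol travel as opaque payload inside ordinary packets exchanged between overlay routers, so the operational routing underneath never sees the experimental headers.

    # Minimal sketch of an overlay ("virtual network") hop: experimental
    # packets are wrapped inside ordinary packets addressed between overlay
    # routers, so the lower-level network carries only normal traffic.

    class OperationalPacket:
        """What the real, lower-level network carries between overlay routers."""
        def __init__(self, src: str, dst: str, payload: bytes):
            self.src, self.dst, self.payload = src, dst, payload

    class ExperimentalPacket:
        """A packet of the experimental (e.g., multicast) protocol."""
        def __init__(self, group: str, data: bytes):
            self.group, self.data = group, data

    def tunnel(exp_pkt: ExperimentalPacket, this_router: str, peer_router: str) -> OperationalPacket:
        # Encapsulate: the experimental header travels as opaque payload.
        body = exp_pkt.group.encode() + b"|" + exp_pkt.data
        return OperationalPacket(this_router, peer_router, body)

    def untunnel(op_pkt: OperationalPacket) -> ExperimentalPacket:
        group, data = op_pkt.payload.split(b"|", 1)
        return ExperimentalPacket(group.decode(), data)

    # One hop of the virtual network: router A forwards an experimental
    # multicast packet to router B over an ordinary point-to-point path.
    pkt = ExperimentalPacket("experiment.group.1", b"hello")
    carried = tunnel(pkt, "overlay-router-A", "overlay-router-B")
    received = untunnel(carried)
    print(received.group, received.data)

The isolation concern raised in the note follows directly from this structure: the encapsulated traffic still consumes real link capacity, so the overlay routers must limit how much of it they inject.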

43.   Diebold was quoted by Paul Evan Peters in a June 1993 briefing to the committee.

44.   The NSF-ARPA-NASA digital libraries initiative solicits proposals for research in three areas: (1) "Capturing data (and descriptive information about such data) of all forms (text, images, sound, speech, etc.) and categorizing and organizing electronic information in a variety of formats." (2) "Advanced software and algorithms for browsing, searching, filtering, abstracting, summarizing and combining large volumes of data, imagery, and all kinds of information." (3) "The utilization of networked databases distributed around the nation and around the world." Examples of relevant research are listed in National Science Foundation. 1993. "Research on Digital Libraries: Announcement," NSF 93-141. National Science Foundation, Washington, D.C. An illustrative sketch of the searching and filtering area follows this note.
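As a purely illustrative companion to the second research area in note 44, the toy sketch below builds a small inverted index and answers a keyword query. The documents and query are invented; a production digital library would index far larger and more varied collections (text, images, sound, and speech).

    # Illustrative only: a toy inverted index of the kind that underlies the
    # "browsing, searching, filtering" research area described in note 44.

    from collections import defaultdict

    documents = {
        "doc1": "open data network architecture for the information infrastructure",
        "doc2": "compressed video delivery over cable networks",
        "doc3": "security architecture for an open network",
    }

    # Build the index: each word maps to the set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.split():
            index[word].add(doc_id)

    def search(query: str) -> set:
        """Return the documents containing every word of the query."""
        words = query.split()
        results = index[words[0]].copy() if words else set()
        for word in words[1:]:
            results &= index[word]
        return results

    print(sorted(search("open network")))   # -> ['doc1', 'doc3']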