
The Unpredictable Certainty: Information Infrastructure Through 2000 (1996)



4
Technology Options and Capabilities: What Does What, How

The market cannot explore a space that technology has precluded.

—David D. Clark, Massachusetts Institute of Technology

The technology landscape of today is marked by rapid evolution, and advances in technology intertwine with evolution in the communications industry itself. This chapter offers an overall perspective on the drivers of change and on the capabilities of and trends in communications-related technology, with the goal of placing many of today's specific developments in context. It also presents some of the steering committee's assessments of specific technology trends. Chapter 5 contains complementary statistics and an analysis of the deployment of these technologies.

The Changing Nature Of Technology And Communications

Communications services have been transformed by a long series of innovations, including copper wire, coaxial cable, microwave transmission, and optical fiber. Each has expanded the available bandwidth, and therefore carrying capacity, at reduced unit cost. Over many years the cost per voice channel has fallen annually by about 10 percent. With the huge increase in carrying capacity enabled by fiber and optoelectronics, the potential for cost reduction has become even greater—but only if the newly available bandwidth can be utilized profitably. During the same period, silicon integrated-circuit technology has allowed the price of computer performance to fall at a rate of 15 to 25 percent per year. This rapid progress in reducing the cost of computing is what makes computer technology the means to exploit the great growth in communications carrying capacity made possible by satellites and fiber technology.

Advances in the power of the general-purpose processor are easy to see and well understood; workstation speed has more than doubled every 2 years, and memory sizes have grown at equivalent rates. In the more specialized areas of communications, this increased processing power can be exploited in a number of ways—to achieve a simple increase in speed, for example, and to make computers easier to use (simplicity and natural logic in the user interface require very complex processes in software and hardware). Continued rapid progress assures continued advances in both function and usability. But increased processing power can also often be used to greater advantage to increase flexibility and generality, attributes that are key to much of the ongoing transformation of communications technology and thus the communications industry itself. Three specific trends relating to increased flexibility and generality are relevant to the steering committee's assessment: the increasing use of software rather than hardware for implementation of functions, the increasing modularity of design, and the increasing ability to process and transform the data being transported within the communications system.

Implementation of functions in software can reduce cost and permit modification of a function by upgrading the program. Costs can also be reduced by replacing a number of special-purpose or low-level hardware elements with a single integrated processor, which then performs all the same tasks as the multiple hardware elements by executing a program. Continuous cost reduction is central to the current pace of technology advance; it permits rapid technology rollover and restructuring of the hardware base. Implementation in software, however, has the added advantage of permitting the functions of a device to be changed after manufacture, to correct "bugs" or meet evolving user needs, whereas hardware, once manufactured, is frozen. The flexibility to change a product during its lifetime is critical, because of rapidly changing user requirements driven by new applications.

One way to build more generality into a system is to make the design more modular, dividing the system into separable elements that implement different parts of a function. These modules can then be used in different ways to create new services. Adding modularity requires the implementation of interfaces between the elements of the system, and this step adds cost, especially if the modules are physical hardware elements in the design, but to some extent even if the modules are software modules and the interfaces are subroutine calls. However, even though modularity and interfaces do add cost, current trends in design and implementation suggest (compellingly) that modularity, if properly done, has a powerful justification in the generality, flexibility, and potential for growth of the resulting system.

A third key trend resulting from increasing processing power is the ability to process and transform data carried in the communications system. One consequence is increased interoperation among previously incompatible systems. For example, the broadcast video formats in different parts of the world, NTSC, PAL, and SECAM, were a real barrier to interchange of video content until it became possible, as it now is, to build economical format converters. Today, digital and analog telephone systems interwork, as do digital telephone systems with different voice codings, such as the several emerging digital cellular systems. In the future, as digital television broadcasting is deployed, digital and analog systems for television broadcasting will interwork.

These trends act in combination. For example, data transformations such as the real-time encoding and decoding of video streams or data encryption or compression are now migrating to software. Inexpensive digital signal processors permit the manufacture of $150 modems (today) that encode data at 28,800 bits per second (bps) for transmission down a phone line—an excellent example of the power of a processor chip and software to replace dedicated hardware.

These three trends together lead to a communications world with increasing generality and flexibility and increasing options for interoperability, with additional interfaces permitting the reorganization of the communications infrastructure to offer new services and support new applications. In turn, the increasing modularity of communications technology has transformed the whole landscape of the business environment. Interfaces represent a technological form of unbundling, which permits new forms of competition and new business strategies.

How Trends In Technology Are Changing Communications Infrastructure And Services

The various effects of technology on the nature of communications discussed above have three specific consequences, which represent major factors that shape the future of the information infrastructure:

  1. The separation of infrastructure facilities and service offerings,
  2. The construction of services layered over other services, and
  3. The tension between supporting mature and emerging applications.

Separation of Infrastructure Facilities and Service Offerings

In the past, there were two important communications services, telephony and television. Each of these services drove the development of a technology infrastructure to serve its needs, and both of these infrastructures are today prevalent and of great economic importance.

Telephony and television have evolved considerably since their inception. The telephone system started with copper wires and mechanical switches; moved to modulating many calls on a wire, and then to digital representation of the speech channel and electronic switches, which together led to the current digital hierarchy; and next evolved to the use of fiber optics to carry the aggregated data and cellular wireless technology as an alternate access path. Television delivery (like radio before it) started as over-the-air broadcast but developed the cable infrastructure, and more recently consumer satellite dishes.

For both of these services during their initial evolution, the service objective remained the same: specifically, the delivery of a telephone call or a television channel. However, in both the telephone industry and the broadcast and cable industries, the trend now is to add modularity to the technology and to define explicit interfaces to the infrastructure that permit offering a wider range of services and applications over a common infrastructure.

Perhaps the earliest significant example of this trend toward separation of infrastructure and service offerings is the selling of trunk circuits by the telephone industry. These circuits, such as T1 at 1.5 Mbps and most recently DS3 at 45 Mbps, were first conceived to carry aggregated voice. But they are also sold as a separate component, to carry either voice or data for private customers. This splitting out of the lower-level infrastructure facility by revealing and marketing the interfaces to the point-to-point circuits is the single critical change in the facilities infrastructure that has created the long-haul data network revolution. It is these trunks that permitted the construction of the Internet and the switched packet networks such as frame relay and switched multimegabit data service, and in the past, X.25 networks. These trunks permitted the construction of on-line information service networks and the private networks that today serve almost all of the major corporations.

Recent developments more clearly articulate the separation of service from infrastructure. The telephone industry's new technology approach, called asynchronous transfer mode (ATM), will offer much greater flexibility in service offerings: ATM can carry voice, provide private circuits at essentially any specified capacity (rather than at just 1.5 or 45 Mbps), and also support more advanced services in the data area. Cable technology is following a similar path. While the early cable systems were practical only for the transport of video, their recent evolution to hybrid fiber coaxial cable allows the infrastructure to be used for a range of services, including telephony and data transfer.

The separation of facilities from service offerings makes good sense in the context of current business trends (see Chapters 2 and 3). The regulatory opening of more communications markets to competition, for example, puts a premium on infrastructure that facilitates rapid entry into such markets. Video and telephony providers alike wish to be poised to enter each other's business, as well as to participate in new businesses that may emerge. Also, as pointed out in previous chapters, the steering committee has concluded that the NII will not serve one "killer app" primarily, but rather will enable a wide range of objectives and applications arising from a broad set of business domains and societal functions. Many of these applications do not yet exist, and thus a critical objective for the NII is to be open to the development and deployment of new applications.

Building Services on Each Other

Just as the separation of infrastructure facilities from services permits the construction of a range of services on top of a common infrastructure, so, too, can one service be constructed by building it on top of another. This layered approach to constructing services is a consequence of the trends discussed above—increased processing power, and more modular design with defined interfaces to basic infrastructure facilities. In fact, a wide and sometimes surprising range of service offerings is being created by building one service on top of another. The teaching of networking often involves a simple, layered model of technology, in which infrastructure components are installed and then used as a foundation for next-level services, and so on in an orderly manner. Current reality is much messier and much less well structured. Neither the technology nor the business relationships show a simple order, but instead reflect a very dynamic and creative building of services on top of each other, with the players both competing and cooperating to build the eventual service sold to the consumer.

This layered building of services, of course, has been going on for some time, as noted above in the discussion of telephone trunk circuit sales. There are many other examples of service overlays, some of which are quite unexpected and at times confusing. With the expenditure of enough ever-cheaper computing cycles, one kind of service can be made into an infrastructure for another in quite creative ways. For example, software is available that permits the creation of telephone connections (with some limitations) over the Internet. The data stream that drives the StarSight on-screen television guide is carried, for lack of any better transport service, in an otherwise unused portion (the vertical blanking interval) of the Public Broadcasting Service broadcast signal—an example of transforming, presumably with considerable processing power, an application-specific service (NTSC video; named after the standards-developing National Television System Committee) into a more general transport service. Perhaps the most confusing situation occurs when two services each can be built out of the other: the Internet can carry digital video, and video delivery facilities can be used to carry Internet service; frame relay service can be used to carry Internet packets, but at least one major provider of frame relay today uses Internet packets to carry frame relay packets.

Beneath these services and overlays of other services lie the physical facilities, such as the fiber trunks, the hybrid fiber coaxial cable and cable systems, the local telephone loops, and the satellites, as well as the switches that hook all of these components together. These are the building blocks on which all else must stand, and it is thus the technology and the economics of this sector that require detailed study and understanding.

The Tension Between Supporting Mature and Emerging Services

From an engineering point of view, the existing network infrastructure is still largely designed and constructed to achieve the objective of very cost-effective and high-penetration delivery of mature services, in particular telephony and video. While the separation of facilities from services has important business advantages, adding generality to the infrastructure, so that it can support a range of future applications, raises the critical concern of increased costs for infrastructure. The tension between cost-effective delivery of mature services and a general platform to support emerging applications is illustrated by the issue, often raised by participants in the NII 2000 project, of how much bandwidth should be provided to the residence, especially back-channel capacity from the residence into the network. A number of participants called for substantial back-channel bandwidth, often to avoid precluding new applications.1 But the infrastructure facilities providers voice the real concern that adding back-channel capability in too large a quantity would add unacceptable cost to the infrastructure and could threaten the economics of their basic business, which requires considerable bandwidth to the home for video but only enough bandwidth from the home to support voice or low-bandwidth interactive control. They emphasize that new applications must prove themselves and that investment in new infrastructure capabilities can be undertaken only incrementally. From their perspective, it is important to have enough capability to allow the market to explore new sectors, but they cannot be expected to invest fully until an application is shown to be viable. This stance may limit the rate of growth of new applications, but there seems no economical alternative. Research, as noted in Chapter 6, may illuminate ways to reduce investment requirements or enhance applications to accelerate return on investment.

The steering committee concluded that the resolution of the tension between supporting mature and emerging applications was a key factor in determining the shape of the future NII. Two conclusions emerged from the discussions and materials presented. First, current technology plans show a studied balance between a cost-reduced focus on provision of mature applications and provision of a general environment for innovation of new applications, and second, there is broad recognition that the Internet is the primary environment for such innovation.

Resolving The Tension: The Internet As An Example

The simple approach to separation of infrastructure from service involves defining an interface to the basic infrastructure facilities and then constructing on top of that interface both cost-reduced support for the mature services and general support for new applications. Thus, the telephone system provides interfaces directly to the high-speed trunks, and the television cable systems define an interface to the analog spectrum of the cable. However, to support emerging applications, the interface to the underlying infrastructure is not by itself sufficient. New applications should not be constructed directly on top of the technology-specific interfaces, because doing so would tie the applications too directly to one specific technology. For example, a very successful technology standard for local area data networking is Ethernet. But building an application directly on top of Ethernet interfaces locks in the application to that one technology and excludes alternatives such as telephone lines, wireless links, and so on.

What is needed is a service interface that is independent of underlying technology options, and also independent of specific applications. An earlier report from the Computer Science and Telecommunications Board, Realizing the Information Future (RTIF; CSTB, 1994b), advocated an open interface with these characteristics to support innovation of new network applications. The report called this interface the technology-independent bearer service and called the network that would result from providing this interface the Open Data Network, or ODN. This interface would normally be effected in software; it is an example of a general-purpose capability that would be implemented in a computer as the basic building block for higher-level services and applications.
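
To make the idea of a technology-independent bearer service more concrete, the following minimal sketch in Python shows the general shape of such an interface: applications are written against a single abstract service, and technology-specific modules adapt that service to particular infrastructures. The class and function names are hypothetical illustrations only; they do not come from RTIF or from any actual protocol implementation.

    from abc import ABC, abstractmethod

    class BearerService(ABC):
        """Hypothetical technology-independent bearer service: applications
        see only send(), never the details of the underlying technology."""

        @abstractmethod
        def send(self, destination: str, payload: bytes) -> None:
            ...

    class LanBearer(BearerService):
        """Adapts the generic service to a LAN-style technology; the 'framing'
        here is purely illustrative, not real Ethernet."""

        def send(self, destination: str, payload: bytes) -> None:
            frame = b"[LAN]" + destination.encode() + b":" + payload
            print("on the wire:", frame)

    class CableDataBearer(BearerService):
        """Adapts the same service to an HFC cable-data channel (again, the
        encoding is a stand-in, not a real cable modem protocol)."""

        def send(self, destination: str, payload: bytes) -> None:
            burst = b"[HFC]" + destination.encode() + b":" + payload
            print("on the cable:", burst)

    def mail_application(network: BearerService) -> None:
        # The application is written once, against the bearer service; it runs
        # unchanged over either infrastructure, which is the point of an open,
        # application- and technology-independent interface.
        network.send("mail.example.net", b"hello")

    mail_application(LanBearer())
    mail_application(CableDataBearer())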


The architecture of the ODN was modeled somewhat on the architecture of the Internet, which has a form of the bearer service in its Internet protocol. However, RTIF was careful to discuss this critical interface in general terms and not prejudge the suitability of the Internet protocols to meet this need.

The Importance of the Internet

It would appear at this time that the Internet and its protocols represent the best approach for providing a general service for the support of emerging applications because of the effectiveness with which the Internet protocol serves as a bearer service and the overall architecture functions as an ODN. In the course of this project, the steering committee heard from a wide range of application developers in areas such as electronic commerce, information access, and business-to-business collaboration. In nearly all cases, the applications were based on one of two kinds of interfaces to the technology below. Either they were modeled on some mature service and used the existing service interface of that technology (such as video on demand over existing cable systems or fax over voice) or they were based on the features of the Internet. The steering committee heard repeatedly that the Internet standards were the basis on which new applications were being crafted, and even in cases in which the Internet was described as unsuitable, a careful exploration of the concerns usually suggested that the issue was not the standards themselves, but rather the existing public Internet as a delivery vehicle, with its current level of security, provisioning, and stability. The current volume of deployed devices using the Internet standards, together with the observed level of investment in Internet-related products and services, constitutes a unique foundation, one for which there is no alternative now or in any reasonable time frame. See Box 4.1.

Based on its assessment of industry trends, the steering committee thus concluded that the call for an open, technology-independent bearer service as a basis for emerging applications, as voiced in RTIF, was correct, and that a more concrete conclusion is now justified: the Internet standards are being widely used to enable new applications and are seen by a great majority of commercial players as the only viable option for an open, application-independent set of service interfaces at this time.

For this reason, the steering committee further concluded that specific attention should be paid to ensuring the viability of the Internet, in terms of both enhancing the standards to meet evolving application needs and making sure that networks based on these interfaces are deployed and made widely available as an environment for the innovation of new applications. The key topics to consider, then, are (1) what aspects of the Internet have contributed to its apparent wide acceptance in the commercial world and (2) how the Internet will need to evolve and mature over the next decade to meet the growing needs of this sector. Box 4.2 discusses the organization of the protocols of the Internet and explains how its design allows for a general service to be constructed that takes advantage of the cost-reduced infrastructure that is engineered for the delivery of mature applications. The section titled "The Internet," included below in this chapter, further clarifies what the Internet really is and elaborates on some of its future directions.


BOX 4.1 Commercial Importance of the Internet: A Selection of Views

The commercialization of the Internet is already happening, with commercial users representing the largest user domain type registered in the United States. Participants in the January 1995 workshop and May 1995 forum commented on the present and future importance of the Internet for their conduct of business. Some focused on the barriers to commercial use, such as needs for guarantees of security and intellectual property rights and the difficulty novice users experience in using applications on the Internet. Others noted that commercial services available today can meet these needs, indicating there may be at least a temporary mismatch of perceptions between providers and users. In the longer term, the Internet's sheer ability to build connections was seen as a powerful draw.

The Internet is not about technology fundamentally. It is a social phenomenon that is basically the world's largest interconnected set of computers. That is what makes the Internet have all the energy behind it and what gives it the power of the marketplace and interoperability at all kinds of levels.

—Marty Tenenbaum, Enterprise Integration Technologies Corporation/CommerceNet

We want the ability to coordinate among the industries. Yet we need some push. We need some standardization to make interoperability, encryption, and a number of other things happen. We need culture change on the Internet from the information provider's standpoint, from the educator's standpoint, from the health care perspective. Our users need to better understand what we are trying to give them, and we need to better understand what they want, how they want it, and how they need to get it.

—Cynthia Braddon, The McGraw-Hill Companies

The Internet excites a variety of people, but there are … a lot of people out there that are not going to be excited about the Internet in the next 10 years. … How do we get lower-level software for the part of the population that is not capable or is not interested in … sophisticated stuff? … How do we get all this aimed at all segments of society?

—Joseph Donahue, Thomson Consumer Electronics Inc.

Not a lot of people, including me, had the nerve to raise our hands this morning when Bob Lucky asked the perhaps inappropriately phrased question about whether the Internet is the NII or not. But the truth of the matter is, if you have the Internet, you do not need much else to have the NII.

—Andrew Lippman, Massachusetts Institute of Technology


BOX 4.2 The Architecture of the Internet

Most users of the Internet see it through experiencing its applications, most obviously the World Wide Web, but also the ubiquitous electronic mail, remote login, file transfer, and other applications. But from the perspective of the Internet designers, the essence of the Internet is not the applications, but rather the more basic functionality that makes the Internet a suitable place for those applications to operate. The structure of the Internet reflects two major design objectives: first, to support as many sorts of applications as possible, and second, to operate over as many sorts of network infrastructure as possible.

Although this may sound rather odd at first hearing, the Internet is more like a computer than a traditional network. The point is that most computers are designed to be general-purpose devices, capable of running a wide variety of applications—spreadsheets and word processors, databases and process control, and so on. Similarly, the Internet was designed to support a wide range of applications, including those that had not been conceived at the time the design was undertaken. The recent explosion of the World Wide Web, clearly not envisioned when the Internet was born, is a measure of the success of this ambition. In contrast, most traditional networks were designed to support a specific application, in particular telephony or video delivery.

At the same time that the Internet was intended to support a range of applications, it was also designed to utilize a range of network technologies: LANs, telephone trunks, wireless links, and so on. Over the last 20 years, it has evolved to operate over different underlying technologies with a wide range of speeds, distances, and error rates. It is organized to permit this adaptability as follows. It provides a set of basic functions, which all applications then use to obtain their network service. To permit the use of as wide a range of technology as possible, these services are defined not in terms of the detailed features of one technology, but rather in a very general way that does not depend on specifics such as bandwidth or latency. For each sort of infrastructure that is put into use, software is then written that translates the specific features of that infrastructure into the general, universal form of the service. The applications invoke that software, which in turn calls on the actual network technology.

This ability to operate over different sorts of network infrastructure is a key to the success of the Internet and its protocols. HFC, for example, is one possible infrastructure that might be used to extend the reach of the Internet to homes at higher speeds (see section in text below titled "Hybrid Fiber Coaxial Cable"). A number of products are now available that make it possible to carry the Internet protocols across this form of network technology. While the details of HFC differ from those for other network technologies, the Internet can operate in this context precisely because the details are hidden from the applications by the intervening software.

As discussed in the beginning of this chapter, the trend in the evolution of network infrastructure is that the infrastructure itself is separate from the services that are offered on it. The Internet is an example of this trend and an illustration of its power. In almost all cases, Internet service providers do not install separate infrastructure for their Internet service. They make use of existing facilities, such as long-distance trunks installed to support voice, "dark" fibers in metropolitan areas, copper pairs and cable systems for residential access, and so on. The only hardware items normally purchased and installed by an Internet service provider are the devices that connect the different infrastructures together, the routers, and, for providers that support dial-up access, the equipment to terminate these telephone calls. Most of the expenses for an Internet service provider are the costs of renting the underlying facilities and the costs associated with supporting the customer.


The Coexistence of New and Mature Services

Experience with the Internet shows the power of a general bearer service as an environment in which new applications can come into existence. The sudden advent of the World Wide Web makes this point emphatically. However, advocacy of a network with bearer-service interface architecture, as called for in RTIF, raised among some industry players the concern that a more inclusive approach was being suggested, that is, that all communications services, including mature services such as telephony and television, should be migrated to this general architecture as an objective of the NII. This objective was not what was advocated in RTIF or in the material gathered for this project. Nor is it advocated in this report.




Voice (telephony) is a case in point. The standards specific to that application of course predate the Internet standards. The infrastructure and interface standards supporting the telephone system are mature and stable and have been engineered to provide very cost-effective delivery of the service. Meanwhile, the growth in technology to support voice communications over the Internet has caused some speculation about the transfer of voice traffic from the public switched telephone network to the Internet. There is no reason to migrate this service to a network such as the Internet that might be less cost-effective for voice, since it was not optimized for that purpose. The recent fad of voice communications and telephony over the Internet may suggest to some that the Internet is perhaps more rather than less cost-efficient than the existing telephone system. A more realistic conclusion is that voice over the Internet currently appears appealing due to differences in pricing (in part a reflection of regulatory circumstances), differences that must prove temporary if Internet telephony becomes a significant component of total telephone traffic.

It is critical to understand the following distinction: the Internet must be able to carry voice (and video), but it is not necessary, or indeed desirable, that all voice and video be carried on the Internet. Today, the Internet is only partially suited for carrying real-time voice, because traffic overloads can cause excessive delay in the delivery of voice packets, which disrupts the playback of the speech. Plans are now under way to evolve the Internet so that it can carry real-time voice streams with predictable delays. Voice, and other real-time traffic such as video, can then be used as a component of any multimedia applications that might emerge. Achieving this objective does not require that the Internet carry voice with the same efficiency as the telephone system, since the goal of the Internet is generality, not cost-reduced application solutions.
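
The way variable packet delay disrupts voice playback, and why predictable delay matters, can be illustrated with a simple playout simulation. The delay model, packet interval, and playout deadline below are illustrative assumptions, not measurements or parameters from any Internet standard.

    import random

    random.seed(1)

    PACKET_INTERVAL_MS = 20    # one voice packet generated every 20 ms
    PLAYOUT_DELAY_MS = 80      # receiver plays each packet 80 ms after it was sent

    def network_delay_ms():
        """Illustrative delay model: a lightly loaded path most of the time,
        with occasional congestion spikes caused by traffic overloads."""
        base = random.uniform(20, 40)
        spike = random.choice([0, 0, 0, 0, 200])   # occasional overload adds 200 ms
        return base + spike

    late = 0
    for seq in range(500):
        sent_at = seq * PACKET_INTERVAL_MS
        arrives_at = sent_at + network_delay_ms()
        play_at = sent_at + PLAYOUT_DELAY_MS       # fixed playout schedule at the receiver
        if arrives_at > play_at:
            late += 1                              # packet misses its slot; speech is disrupted

    print(f"{late} of 500 packets arrived too late to be played")

If delays were bounded below the playout deadline, every packet would be played on time; that kind of predictability is what the planned evolution of the Internet for real-time traffic is intended to provide.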

The Internet has shown clearly that an open and level playing field leads to vigorous innovation, and the steering committee believes that this opportunity for innovation should be open to anyone willing to offer a new service or attempt a new application. Since the eventual business structure of an unproven innovation is usually unclear, it seems reasonable by default to innovate in an open context, which the steering committee sees as maximizing the chance of success. The interface that needs to be open is the application-independent interface (the bearer service, or in specific terms the Internet protocol).

New physical infrastructure need not be deployed if, in 10 years, some now-emerging application has become so prevalent that economics justifies moving it to a new set of specialized service interfaces. Precisely because physical infrastructure is being engineered to be decoupled from service, it should be possible to build a new specialized service on the deployed infrastructure. None of this effort would interfere with the long-term viability of the general bearer service (or, in concrete terms, the Internet), which would continue to support both new services and those mature services that operate well there. Experience with the mechanisms of the Internet suggests that their use of bandwidth is sufficiently cost-effective for the services the Internet has spawned that so far there has been little pressure to move any applications off the Internet to more cost-reduced delivery services and interfaces.


Current Technology—Evaluating The Options

The discussion above has alluded to a number of technologies of significant current relevance, such as hybrid fiber coaxial cable or the Internet. This section briefly reviews the key features of several important technologies and assesses the extent to which they address some of the concerns raised above, such as a capability for generality and flexibility.

Hybrid Fiber Coaxial Cable

Fiber-optic cable has an information-carrying capacity that is orders of magnitude greater than that of copper. In the long term, some time in the next century, ubiquitous deployment of fiber to every home and office would open up vast amounts of bandwidth to end users. (See the white paper by Paul Green for a complete discussion.) However, deployment of fiber requires a large investment to cover the costs not only of the fiber cable and associated optoelectronic equipment, but also of the labor and construction needed to lay physical cables through cities and suburban neighborhoods. Thus, although fiber has been deployed extensively in the backbone sections of telephone and cable television networks nationwide, it is only relatively recently that the access portions of these networks—the multitudes of separate links that connect end users to the networks and that account for most of the total mileage in the system—have begun to be upgraded to include fiber. Wireline access networks comprising a mix of fiber and copper elements are now being deployed in residential areas.

For such access networks, a very important technical approach—currently embraced by many cable providers and also being evaluated by telephone companies—is hybrid fiber coaxial cable (HFC; often abbreviated as hybrid fiber coax). This approach is best understood as an extension of current cable television infrastructure. In first-generation cable systems, a distribution system of coaxial cables and amplifiers fans out from the community head end to each house. In an HFC system, fiberoptic links connect the community head end to small neighborhoods, and the traditional cable technology is then used to fan out inside each neighborhood to reach individual homes. The advantage of this system is that the fiber replaces long cable runs that include many amplifiers, each of which represents a point of failure, a limit on capacity, and a potential source of signal degradation.

The expectation for HFC is that it will transform an existing cable plant dedicated to analog video delivery to a more general infrastructure capable of supporting a range of consumer services. The higher bandwidth capacity from the head end to each home will permit traditional analog video services and newer digital video services to coexist. Additionally, the reduced number of amplifiers and better noise characteristics of the HFC environment will permit practical use of the reverse-channel capacity of the system, which will thus allow for two-way services such as telephony and data transmission.

Although different vendors offer different specific technical features, the basic characteristics of planned HFC systems are well understood. Traditional, all-coaxial (coax) systems can carry no more than about 80 analog television channels. In most cable systems currently being upgraded to HFC, an additional 200 MHz of spectrum is available for digital services. The digital modulation technique most commonly contemplated for this context, called 256 QAM, yields a downstream digital capacity of 1.4 Gbps within this spectrum—enough to deliver multiple digitized video and other data streams to each user simultaneously.

Some assumptions about service penetration suggest how an HFC system could be used for delivery of digital services. A typical HFC network is expected to pass 500 homes per node—a neighborhood of homes served from a single fiber—of which about 300 might be customers of the system. If about 40 percent of those customers subscribe to the digital services (at some extra charge), then 120 homes are sharing the available bandwidth. If it is assumed that perhaps one-third of the subscribers might be using digital services at a given moment, then the likely peak load is 40 homes. Divided among 40 homes, the downstream capacity is about 35 Mbps per home, which is sufficient to support several video streams to each home.2
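
The arithmetic behind these figures can be reconstructed roughly as follows; the channel width and per-channel rate are approximations chosen to be consistent with the numbers quoted above, not exact engineering values.

    # Rough reconstruction of the HFC digital-capacity estimate quoted above.
    digital_spectrum_mhz = 200        # spectrum set aside for digital services
    channel_width_mhz = 6             # one analog TV channel's worth of spectrum
    per_channel_mbps = 42             # approximate payload of 256 QAM in a 6-MHz channel

    channels = digital_spectrum_mhz // channel_width_mhz        # about 33 channels
    downstream_gbps = channels * per_channel_mbps / 1000        # roughly 1.4 Gbps

    homes_passed = 500
    customers = 300                   # about 300 of the 500 homes take cable service
    digital_subscribers = int(customers * 0.40)                 # 40 percent take digital services -> 120
    active = digital_subscribers // 3                           # one-third active at once -> 40

    per_home_mbps = downstream_gbps * 1000 / active             # about 35 Mbps per active home
    print(f"{channels} channels, {downstream_gbps:.1f} Gbps total, "
          f"about {per_home_mbps:.0f} Mbps per active home")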

Given the considerable interest in what sort of data services might be possible with an HFC approach, it is useful to look at what can be achieved with current products. In the last year a number of devices called cable modems were offered to the cable industry that use one or more video channels in each direction to carry bidirectional data across coax and HFC systems (Robichaux, 1995b). At a current cost of $500 to $1,000 per consumer (a cost expected to drop to perhaps $200 with volume), they seem to represent a studied balance between provision of generality and controlled cost. Existing cable modems use a single analog television channel on the cable (6 MHz of bandwidth) to transmit about 27 Mbps of data downstream, which is almost three times the capacity of an Ethernet. (Commercial cable data modems available within 1 to 2 years will provide for 39 Mbps per channel based on 256 QAM.)

Traffic in the upstream direction uses the 5- to 40-MHz portion of the available spectrum, which is used today (if at all) by community access television for feeding video back to the head end. This part of the cable spectrum has generally been found to have problems of electrical noise but can be utilized if specific technical issues are attended to. In particular, a more robust, lower-efficiency modulation technique must be used, which yields an upstream bandwidth inside a single analog channel of perhaps 10 Mbps. Most cable data modems today divide this upstream capacity into smaller channels, either 1.5 Mbps (the same as a T1 telephone circuit) or 128 kbps (the same as the basic-rate ISDN telephone service). Another approach takes the full 10 Mbps and allocates it dynamically among all the potential users—much as an Ethernet LAN is operated. Thus today's cable data modems, using one downstream channel and one upstream channel, can currently provide up to 27 Mbps downstream and up to 10 Mbps upstream.

This capacity can be shared by all the homes in a fiber neighborhood, typically the 500 homes noted above. However, to allow for increased demand, most HFC systems are being designed to permit eventual separation of neighborhoods into units as small as 100 homes, in which case the 27 Mbps downstream and the 10 Mbps upstream per channel (however subdivided) would be shared among no more than 100 potential customers. For video or other advanced services, it is reasonable to assume that only some of the potential customers would subscribe to the service and that of those, only some would be active. The resulting degree of sharing would be similar to what is experienced today on many corporate LANs such as Ethernet. In addition, more than one down and/or up channel could be allocated to data services, if there were sufficient demand or if different sorts of data services were to be provided on a single cable. While there is not yet a great deal of operational experience with these cable data modems, the prospect is that two-way data services over cable can be successfully deployed.
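
A back-of-the-envelope calculation suggests what an individual subscriber might see under such sharing; the subscription and activity fractions below are illustrative assumptions, not figures from the workshop or forum.

    # Illustrative sharing estimate for one downstream and one upstream data channel.
    downstream_mbps = 27      # one 6-MHz channel of downstream data (current modems)
    upstream_mbps = 10        # usable upstream capacity in one channel
    homes_per_node = 100      # after a node split to smaller neighborhoods

    take_rate = 0.30          # assumed fraction of homes subscribing to the data service
    active_fraction = 0.20    # assumed fraction of subscribers transferring data at once

    active_users = max(1, round(homes_per_node * take_rate * active_fraction))
    print(f"{active_users} simultaneous users")
    print(f"~{downstream_mbps / active_users:.1f} Mbps down, "
          f"~{upstream_mbps / active_users:.1f} Mbps up per active user")

Even under these assumptions, each active user sees throughput comparable to a shared corporate Ethernet, which is consistent with the comparison drawn above.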

Deploying cable data services today requires that the cable system be upgraded at least by the addition of up-channel amplifiers and the use of fiber to serve a neighborhood of reasonable size. Even if the cable provider does not add the additional downstream amplifiers to activate the additional 200 MHz of capacity discussed above, it is still possible to deploy current-generation data modems without disrupting existing services. At least some of these cable modems transmit data toward the home using a frequency above that of all of the channels carried by current cable systems today. Data services thus can be added to an existing cable system without requiring the cable operator to give up any of its existing cable video offerings.

Current cable infrastructure can be upgraded at a fairly modest cost (estimated at perhaps $125 to $130 per house passed, on average) to yield the more advanced HFC system, which would give existing cable operators a short-term advantage (compared to new entrants, such as telephone companies) in the capability to move to a more general infrastructure. However, there was considerable discussion in the workshop, forum, and white papers about the degree to which cable HFC systems would actually provide sufficient bandwidth in a general form (as opposed to being tied to video delivery) that could serve as a practical infrastructure for emerging applications, or more specifically to support the Internet standards for this purpose.

Despite some disagreement, a reasonable conclusion is that although the HFC systems may or may not provide enough capacity to serve a fully developed market of as yet unknown applications, there is enough capacity to explore the marketplace and to let consumers who wish to lead the market purchase enough capacity to get started. It was the opinion of various of the facilities providers that if this point could be reached successfully, further investment to upgrade the capacity would not be an impediment. In fact, additional technical steps already exist to increase the capacity further. For example, as fiber is deployed further into the network and the number of homes per node decreases, an upper range of frequencies (such as 900 MHz to 1 GHz) could be made available for upstream traffic as well.3 Chapter 3 discusses related issues of economics, including the appeal of incremental investment, in greater detail.

Fiber to the Curb

Hybrid fiber coax is not the only approach to bringing higher-bandwidth services to the home. "Fiber to the curb" (FTTC), an alternative that has been proposed for broadband access, is a term that could, of course, be applied to any technology that carries fiber to this point in the distribution network, with either twisted pair copper or coaxial cable reaching from the curb to the home. In some discussions, the term "FTTC" is used to describe a system for delivery of traditional telephone service and narrowband switched services. In other cases, FTTC is used to describe a system for the delivery of video and broadband services. FTTC systems designed for video are in general designed for advanced services, and not for distribution of analog television signals, since for analog video there are more economical points at which to terminate the fiber. Thus, FTTC systems are typically all-digital. Further, because most FTTC proposals envision services such as video on demand, the head end for the FTTC system often includes a high-capacity digital switch, which can route different video signals to different channels on the FTTC system. Cost estimates for FTTC systems may increase substantially if the cost of such a switch is included in the components, a factor that makes it difficult to compare the costs of HFC and FTTC systems.

A typical FTTC system might carry several hundred digital channels. The fiber carrying these signals would terminate at a small electronic unit, typically in a pedestal, which would serve 10 to 30 residences. The connection from the pedestal to the homes could be coaxial cable, twisted copper pairs, or both. Over such a short length of twisted pairs, coding schemes can carry a number of channels of video on copper wire, so that FTTC schemes could, in some cases, provide a means to reuse some of the existing copper wires in the telephone system.

The pedestal in FTTC systems is somewhat more complex than the fiber-to-coax converter in an HFC system, but the increased costs might be somewhat balanced by the option of cheaper equipment in the residence, since the very short wire runs permit use of a simpler coding scheme on the wire. However, a major cost issue for FTTC is powering the electronics in the pedestal. Running a power cable from the head end along with the fiber adds to the system cost. One proposal for using FTTC is to build an FTTC system alongside some other older system, such as a simple coaxial system, so that the coaxial cable can be used to carry power as well as to deliver basic analog video.

Beyond FTTC systems are systems that carry fiber all the way to the home. Several participants in the forum and workshop observed that the issue of fiber to the home is one of economics. As discussed in Chapter 5, it was predicted that this step might occur within 20 years. However, systems such as FTTC with coaxial cable to the home are capable of delivering very substantial data rates into the home, potentially hundreds of megabits per second, depending on the specific approach used. Thus a lack of fiber all the way to the home should not be equated with an inevitable bottleneck for bandwidth in the path.

Digital Services and the Telephone Infrastructure

Central to Alexander Graham Bell's invention of the telephone was the idea of analog transmission—that the voltage transmitted was to be proportional to the sound pressure at the microphone. Thus for almost a century the telephone plant was designed to transmit analog signals in a narrowband channel.

Beginning with the introduction of the T1 carrier circuit in the 1960s, the telephone plant has evolved to digital operation. Essentially all interoffice telephone networks are internally digital, designed around the notion of voice-sized (64,000 bps) chunks of capacity that are set up and taken down to form connections that last typically for many seconds. First the intracity links were digitized, followed by the switches, and then optical fiber made the digitization of the long-haul links economically possible in the span of only a few short years. Optical fiber has increased the capacity of the digital long-haul transmission system to the point that the cost, if computed as a cost per bit, is nearly free. Although there are obviously costs associated with the long-haul system, there seem to be few concerns that it can meet the capacity requirements of an evolving global information infrastructure.

The current standard for transmission of digital information over fiber is SONET (an acronym for Synchronous Optical NETwork). SONET is largely a tool of the telephone companies to provide capacity, both for data and for voice telephone traffic, whereby each voice telephone connection is granted a two-way, 64-kbps data stream. SONET uses the raw bit capacity of a fiber stream to create multiplexed substreams of synchronous data. Thus SONET turns a big pipe into a collection of smaller pipes, which are individually synchronized at specified bit rates. Long-distance fiber trunks are now utilizing the SONET standard, and in the metropolitan areas, the telephone companies have widely deployed SONET fiber rings.
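
As a rough illustration of how SONET turns a big pipe into smaller synchronized pipes, the following sketch relates the standard OC-n line rates to 64-kbps voice circuits. The circuit counts assume the conventional channelized mapping (an OC-1 carrying one DS3, which carries 672 voice circuits); framing overhead accounts for the difference between the line rate and the voice payload.

    # Standard SONET line rates related to 64-kbps voice circuits.
    VOICE_KBPS = 64
    OC1_MBPS = 51.84
    DS3_VOICE_CIRCUITS = 672          # voice circuits in one DS3 (28 T1s x 24 channels)

    for n in (1, 3, 12, 48):
        line_rate_mbps = n * OC1_MBPS
        voice_circuits = n * DS3_VOICE_CIRCUITS
        voice_payload_mbps = voice_circuits * VOICE_KBPS / 1000
        print(f"OC-{n:<2}  {line_rate_mbps:8.2f} Mbps line rate  "
              f"~{voice_circuits:6d} voice circuits  "
              f"({voice_payload_mbps:7.1f} Mbps of voice payload)")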

Now only the "last mile" remains as the vestige of the analog telephone plant. But of course much of the economics of communications resides in this ubiquitous gap between the homes of the country and the reservoir of digital capacity in the network beyond. This part of the system is now being digitized for voice service through the use of a digital overlay known as digital loop carrier technology, which enables a number of homes to share a single connection back to the central office: at a point near the home, the analog signals from the telephones are converted into digital streams, and the streams from the different homes are multiplexed together onto the single connection. These digital loop carrier systems can be more economical than individual wire pairs to each home, but they complicate the problem of sufficient broadband access by allocating to each home only the equivalent of one voice channel of digital capacity. This constraint prevents the use of the copper pair to support higher data rates, a capability that telephone providers have sought for several years, in order to be able to support delivery of video or interactive data access.

Data Over the Telephone System

To provide data services, the telephone companies have over the last two decades evolved a series of changes and overlays to the access network. The simplest is of course the brute-force approach of a voiceband modem, which essentially turns a data stream into a voiceband analog signal that can be transmitted over the telephone network just like any other voice-like signal. Although modem technology has become very sophisticated and ubiquitous, it is limited by the voiceband capacity of the telephone network—ultimately something less than 64 kbps; currently available products enable 28.8-kbps data rates.
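
The reason voiceband modems top out well below 64 kbps can be sketched with the Shannon capacity formula, C = B log2(1 + S/N); the bandwidth and signal-to-noise figures below are illustrative values for an analog voice circuit, not measurements from this project.

    import math

    # Shannon capacity C = B * log2(1 + S/N) for an illustrative voiceband channel.
    bandwidth_hz = 3_100             # roughly the 300-3,400 Hz passband of a voice circuit
    for snr_db in (30, 35, 40):      # plausible signal-to-noise ratios on an analog loop
        snr = 10 ** (snr_db / 10)
        capacity_bps = bandwidth_hz * math.log2(1 + snr)
        print(f"SNR {snr_db} dB -> about {capacity_bps / 1000:.1f} kbps")

On these assumptions the ceiling falls in the 30- to 40-kbps range, which is why 28.8-kbps modems operate close to the practical limit of the analog voiceband channel.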

For the digital transmission of higher-speed data over the access loop, a number of approaches have been developed, most prominently integrated services digital network (ISDN) and asymmetric digital subscriber line (ADSL) technology. In either case a special modem is used on the customer premises to enable the transmission of digits directly into the network. For ISDN the rate is 128 kbps, full duplex, while for ADSL the rate is 1.5 Mbps from the network (up to 6 Mbps over shorter distances) and considerably less in the upstream direction. Both ISDN and ADSL solutions depend on the particular circumstances of an individual home, such as the distance to the central office, the presence of a digital subscriber line, and the amount of interference encountered.

ADSL was developed with a view toward the transmission of video signals over the existing access circuits. There are other forms of digital subscriber line technology as well, such as the symmetric high-bit-rate digital subscriber line (HDSL), which can transmit 1.5 or 2 Mbps over two pairs of copper wire for up to 18,000 feet. Broadband digital subscriber line (BDSL) technology, also under development, carries data downstream at rates of 12 Mbps to 52 Mbps and upstream at 1.5 Mbps to 3.1 Mbps over distances of a few thousand feet. These newer forms of digital subscriber line technology could be used over the installed copper wires, where distance and other characteristics permit, for interactive data access (such as connection to the Internet) and would provide higher data rates than does ISDN. For data services, the HDSL or BDSL path from the residence would be connected to a packet switch located either in the central office or remotely in the serving area, if necessary, to ensure that the copper runs were not too long. Equipment of this sort is now being tested in field trials.
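
To put these access rates in perspective, the following comparison computes the nominal time to move a fixed amount of data over each technology; the 10-megabyte file size is an arbitrary illustration, and protocol overhead, loop-length limits, and upstream asymmetry are ignored.

    # Nominal time to transfer 10 megabytes at the downstream rates quoted above.
    FILE_BITS = 10 * 8_000_000       # 10 megabytes, counted as 8 million bits per megabyte

    downstream_kbps = {
        "28.8-kbps voiceband modem": 28.8,
        "ISDN basic rate (128 kbps)": 128,
        "ADSL (1.5 Mbps)": 1_500,
        "ADSL, short loop (6 Mbps)": 6_000,
        "BDSL, low end (12 Mbps)": 12_000,
    }

    for name, kbps in downstream_kbps.items():
        seconds = FILE_BITS / (kbps * 1_000)
        print(f"{name:28s} ~{seconds:7.1f} seconds")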

Making effective use of access circuits is not the only problem to be solved if data are to be transmitted over the telephone system. It is also necessary to specify the format of the data, the signaling to be used to control the transmission, and all the other components of a transport system. This information is necessary so that switches within the telephone system can operate on this data. A number of proposals have been adopted by the telephone industry, with varying degrees of success, in the last two decades. These data communications standards relate to ISDN; X.25 and its newer incarnation, frame relay; switched multimegabit data service (SMDS); and asynchronous transfer mode (ATM) (see Table 4.1 for a concise description). These standards are not strictly comparable, in that the associated services are organized in somewhat different ways, but they all deal with the packaging and handling of data by both the end terminals and the network itself.

Internet service, which is also defined by data communication standards, has not yet been offered widely by the telephone providers, although several are now marketing it, or have declared their intention to do so.

TABLE 4.1 Data Communications Service Standards Adopted by the Telephone Industry

  • Integrated services digital network (ISDN). Speed: 128 kbps basic rate, 1.5 Mbps primary rate. Type: connection-oriented (a) streams.
  • Frame relay. Speed: up to 1.5 Mbps. Type: virtual private lines, connection-oriented (a).
  • Switched multimegabit data service (SMDS). Speed: from 1.2 Mbps to 34 Mbps. Type: public addressing, connectionless (b) packets.
  • Asynchronous transfer mode (ATM). Speed: 155 Mbps (other rates available). Type: small, fixed-size packets, connection-oriented (a).

(a) "Connection-oriented" means that the call route is prearranged, as in today's voice telephone call, so that a packet (or cell) carries only an indication of the connection with which it is associated.

(b) "Connectionless" means that each packet contains the routing address, like an envelope in the postal system.

Internet service is different from the services listed above in that its delivery is less directly tied to a set of hardware technologies. Thus, from the perspective of a telephone provider, the Internet is a service that would operate on top of one of the services listed above.

While ISDN was originally defined as a switched service fully capable of carrying data across the entire telephone system, today it is important to the information infrastructure primarily as a fast method for accessing private data networks and the Internet. As mentioned above in the discussion of digital options for access circuits, it may be thought of simply as a "better modem." To support Internet access, the customer establishes an ISDN connection from the end point to some Internet access packet switch and then sends Internet Protocol (IP) packets over the ISDN connection.

Frame relay represents a way to provide a virtual private network among a number of sites, such as a closed corporate network. The term "virtual" has been used to describe these networks because, while they appear to the user as a set of private connections among the sites, there are no separate circuits that have been allocated from the telephone infrastructure to support them. Frame relay is a packet service, and the packets of various frame relay subscribers are mixed together across the underlying trunks. This statistical multiplexing, typical of services that are called virtual, provides a much lower cost service, at the risk of some uncertainty about instantaneous capacity. Such uncertainty about allocation of capacity does not seem to be a serious issue for many users, however, since frame relay is proving popular in the marketplace.
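
A toy model illustrates why statistical multiplexing lowers cost while introducing this uncertainty. The subscriber count, burst probability, and trunk size below are invented for illustration; the point is that a shared trunk sized well below the sum of the subscribers' peak rates almost always suffices when traffic is bursty.

```python
import random

# Toy model of statistical multiplexing on a shared trunk (invented parameters).
# Each subscriber has a 1.5-Mbps peak rate but transmits only 10 percent of the
# time; the shared trunk is far smaller than the sum of the peak rates.
random.seed(1)
subscribers = 100
peak_mbps = 1.5
burst_probability = 0.10
trunk_mbps = 22.5          # much less than 100 * 1.5 = 150 Mbps of peak demand
trials = 10_000

overloads = 0
for _ in range(trials):
    active = sum(random.random() < burst_probability for _ in range(subscribers))
    if active * peak_mbps > trunk_mbps:
        overloads += 1

print(f"Offered peak demand: {subscribers * peak_mbps:.0f} Mbps")
print(f"Trunk capacity:      {trunk_mbps:.1f} Mbps")
# Only a few percent of instants exceed the trunk; a modestly larger trunk
# drives this figure toward zero.
print(f"Instants overloaded: {100 * overloads / trials:.2f} percent")
```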

SMDS is another packet service like frame relay, but it differs in that it does not involve preestablished paths among a set of sites. Instead, it can be used as a public network, in which any site can send to any other by putting the proper address on the packet. In this respect SMDS is somewhat similar to the Internet protocols. However, its addresses are in a different format (they resemble telephone numbers), and SMDS can also be configured to provide a closed user group, similar to the service provided by frame relay.

Whereas ISDN, frame relay, SMDS, and the Internet are overlay services designed to deliver data access over the existing telephone infrastructure, ATM represents a radical rethinking of the network itself. ATM has grown from a mere descriptor of payload within what was previously called "broadband ISDN" to a force of its own that uniquely has gained support from both the computer and telecommunications industries. Since ATM will probably be an important building block for the networks of the future, it is explained in some detail below.

Asynchronous Transfer Mode

The aim of ATM is to provide a flexible format for handling a mix of future traffic—on the one hand, streams of high-speed, real-time information like video, and, on the other, packetized, non-real-time information like electronic mail. ATM represents one attempt to handle both kinds of traffic efficiently based on a technical compromise—small packets or "cells" with a fixed 48-byte payload (each cell also carries a 5-byte header), and a connection-oriented approach built around "virtual channels" and "virtual paths" so that routes for packets are preestablished and packets within a given connection experience similar delays on their trips through the network. The design goal for ATM is to optimize network cost and speed by adopting a single compromise for all applications. Thus ATM is seen as flexible, powerful, economical, and particularly suited for multimedia applications.

ATM is being commercialized on two major fronts. Telecommunications carriers are preparing to use it as a backbone infrastructure for flexible provisioning of the multimedia services of the future. Meanwhile, the computer industry has adapted ATM to the needs of high-speed LANs. The two industries have joined in forming an independent standards body, the ATM Forum, which has been focusing on formulating "adaptation layers" for ATM that specialize its operation for particular classes of application.

If ATM is to serve as the common building block for digital services in the telecommunications industry, it must be able to provide effective support for the Internet protocol, since most data networks today communicate through the networks using IP. To accomplish this, the IP packets must be broken up into the smaller ATM cells so that the relatively large IP packets can "ride on top of" the small and nimble ATM cells. Next, the connectionless addressing of the IP must be converted to the connection-oriented ATM system; i.e., a "call" must be established for the IP packet's route via ATM. Third, the congestion control and bandwidth allocation mechanisms of ATM and the Internet protocols must be made to work together. These issues are not trivial, and much work is currently being done to resolve the efficiency of Internet transport through an ATM system. But if these issues can be resolved, both ATM and the Internet protocols should benefit, since the telecommunications industry will acquire a new technology that it can deploy for high-speed data transfer, and the Internet will acquire a new and advanced infrastructure over which it can operate.
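
The segmentation step can be sketched simply. Real ATM adaptation (AAL5) also appends a trailer carrying a length field and a checksum before padding and marks the final cell of each packet; the simplified fragment below shows only the basic slicing of a variable-length IP packet into fixed 48-byte cell payloads.

```python
CELL_PAYLOAD = 48  # user bytes carried in each 53-byte ATM cell (5-byte header)

def segment_ip_packet(packet):
    """Slice an IP packet into fixed-size ATM cell payloads.

    Simplified sketch: real AAL5 appends a trailer (length and CRC-32)
    before padding and marks the last cell of the packet, both omitted here.
    """
    padding = (-len(packet)) % CELL_PAYLOAD
    padded = packet + bytes(padding)
    return [padded[i:i + CELL_PAYLOAD] for i in range(0, len(padded), CELL_PAYLOAD)]

# A 1,500-byte IP packet (a common Ethernet-sized datagram) becomes 32 cells.
cells = segment_ip_packet(bytes(1500))
print(len(cells), "cells of", CELL_PAYLOAD, "payload bytes each")
```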

The potential benefits of Internet-based networking over an ATM infrastructure are illustrated in the white paper by Gregory Bothun et al. The paper discusses operational trials in Oregon and Colorado, where an ATM-switched network running IP was used to dramatically increase the capacity of university networks. Because the new infrastructure supported IP, university users did not have to abandon their existing protocol structure. Moreover, this approach facilitated collaboration with other institutions, particularly local K-12 schools, whose networks were not based on ATM but did support IP. As Bothun et al. point out,

One might assume that high-end transport options would preclude the participation of secondary schools; however, this did not prove to be the case. The use of IP-based networking provided the ability to cost-effectively bring secondary schools into the networked environment. … With the central dominance of IP-based routers, telecommunications transport became a transparent commodity to be mixed and matched based on cost-performance analyses for each individual location.

Local Area Networks

The last few years have seen major advances in the performance and cost-effectiveness of network technology for the local area, ranging from the work group to the multibuilding campus. Indeed, there was very little discussion of this technology area in the white papers or the forum, largely because it does not seem to represent any serious impediments to progress.

In terms of performance, the major advances have been to increase speeds. ATM local area network (LAN) technology with 155-Mbps host interfaces has now entered the marketplace. In parallel, standards have been defined for Ethernet operation at 100 Mbps, and host interface cards for this option are now also on the market. The faster Ethernet is seen by many in industry as the next generation of faster LAN, since the interface cards are now priced at not much above the cost of a premium 10-Mbps Ethernet interface only a year ago.

A closely related advance has occurred in hubs, the points at which the cables from hosts are connected, which now include a range of products with different levels of performance (at, of course, different costs). Higher-performance hubs incorporate switching among the interface ports, which means that separate pairs of hosts can communicate at once without interfering with each other, in contrast to the earlier hubs in which a packet from only one host could be handled at any one time. ATM has always been based on switching at the hub, but both 10- and 100-Mbps Ethernet now also offer switching as a product option.

Wireless

Wireless communication offers a number of options for local networking as well as for advancing access to the information infrastructure. This section discusses wireless in the telephony and data transmission applications. Trends in over-the-air broadcast, and the impact on the broadcast industry, are discussed in the next section.

Wireless is being used to address two problems in network operation. The first is mobility, the ability of the user to move around while being connected. Mobile users are now increasingly evident, ranging from agents at rental car return facilities and roving checkout clerks in shopping areas to traveling office workers in locations lacking access to wireline devices. The other role of wireless is to reduce the cost and complexity of deploying new services. Some of the new video distribution services clearly illustrate the latter objective, whereas cellular telephony seems to illustrate the former—but perhaps the latter as well, since in some developing countries cellular is seen as a cheaper technology to deploy for general telephony.

Current trends in wireless telephony suggest that there will be rapid changes and advances in the market. The providers of advanced cellular services and personal communication service (PCS) believe that their market will be demand driven, as Chapter 5 indicates. The recent Federal Communications Commission (FCC) auctions of spectrum for PCS and the evolving standards for cellular telephones both suggest a substantial industry investment, which should lead to increased product availability in a very few years. Indeed, since the FCC licenses mandate a schedule of transmitter installation for PCS, the time frame of new facilities deployment is more or less predictable (see Box 4.3). In most cases, the FCC licensing parameters have 5- and 10-year penetration targets, with the 10-year target representing substantial (50 percent to 75 percent) penetration.

BOX 4.3 Personal Communication Service Licensing Parameters

Narrowband1

Nationwide: (50 states, D.C., American Samoa, Guam, Northern Mariana Islands, Puerto Rico, U.S. Virgin Islands, generally)

Base stations must provide coverage to a composite area of 750,000 square kilometers, or serve 37.5 percent of the U.S. population, within 5 years of the date that an initial license is granted.

Base stations must provide coverage to a composite area of 1,500,000 square kilometers, or serve 75 percent of the U.S. population within 10 years of the date that an initial license is granted.

Regional: (5 regions: Northeast, South, Midwest, Central, West)

Base stations must provide coverage to a composite area of 150,000 square kilometers, or serve 37.5 percent of the service area population, within 5 years of the date that an initial license is granted.

Base stations must provide coverage to a composite area of 300,000 square kilometers, or serve 75 percent of the service area population, within 10 years of the date that an initial license is granted.

Major Trading Areas (MTAs): 47

Base stations must provide coverage to a composite area of 75,000 square kilometers, or 25 percent of the geographic area, or serve 37.5 percent of the population of the service area, within 5 years of the date that an initial license is granted.

Base stations must provide coverage to a composite area of 150,000 square kilometers, or 50 percent of the geographic area, or serve 75 percent of the population of the service area, within 10 years of the date that an initial license is granted.

Basic Trading Areas (BTAs): 487

Must construct at least one base station and provide service within 1 year of the date that an initial license is granted.

Broadband2

30-MHz Blocks

Must provide a signal level sufficient to provide adequate service to at least one-third of the population in the licensed area within 5 years of being licensed, and two-thirds of the population in the licensed area within 10 years of being licensed (population may be defined according to 1990 or 2000 census).

10-MHz Blocks

Must provide a signal level sufficient to provide adequate service to at least one-quarter of the population in the licensed area within 5 years of being licensed, or make a showing of substantial service in the licensed area within 5 years of being licensed (as based on 1990 or 2000 census).

1Narrowband: 901-902 MHz; 930-931 MHz; 940-941 MHz

2Broadband: 1850-1890 MHz; 1930-1970 MHz; 2130-2150 MHz; 2180-2200 MHz

SOURCE: MTAs and BTAs based on Rand McNally 1992 Commercial Atlas and Marketing Guide, 123rd Edition, BTA/MTA map; exceptions as noted in FCC rules (47 CFR 24.102; CFR, 1994).

Much of the interest in wide area wireless has been in cellular telephony and narrowband services such as paging. This focus reflects the relative maturity of those markets, which permits somewhat better business planning. It also reflects the fact that, given current technology options, installing wireless infrastructure that meets more general service requirements such as higher-speed interactive data access is rather more costly.

The perceived costliness of more general wireless infrastructure naturally suggests that deployment decisions in this sector will tend toward more targeted solutions. In fact, a major part of the wireless investment over the next few years will be in real estate and in the construction of antenna towers. These towers probably will permit the installation of several different radio systems, so that in terms of tower construction, the wireless industry is also building an infrastructure of potential generality. (Note, however, that some options for cellular and PCS systems involve relatively small cell sizes and small antenna structures that do not need towers.)

Although the options for higher-speed wireless data transmission are less fully developed, the possibilities are quite exciting. The cost of radios that operate at higher regions of the radio spectrum has declined rapidly—only a few years ago, producing an inexpensive radio operating above 1 GHz was considered challenging, whereas now one can purchase, for a few hundred dollars per station, wireless LAN technology that operates at 5.8 GHz. This technical advance opens up a large range of spectrum for consumer devices. Other products are available that offer short-range LAN emulation and point-to-point data communications over a few miles at data rates of up to LAN speeds (a few megabits per second) and at prices of a few thousand dollars or less. These sorts of devices, which are available from five or ten vendors and are entering the market for workstation interconnection, show some of the potential for wireless.4

Most of the current products for interconnecting workstations operate in unlicensed FCC bands at about 900 MHz or 2.4 GHz. Both of these bands have power limits for unlicensed operation that restrict the range and speed of service. Devices that operate at about a megabit per second have a range of a few hundred feet, and devices that operate at lower speeds, 100 kbps or less, can blanket a region with a radius of perhaps half a mile. These technical limitations mean that devices currently on the market do not have the capacity to provide full access to the residential market, which would be better served by a device with a data rate of a few megabits per second and a service radius of a few miles. However, if current products indicate the state of the technology, it would seem possible to build a higher-capacity device if the spectrum were available under suitable conditions for transmitter power and information coding. The recent proposal to the FCC from Apple for an "NII band" around the 5-GHz portion of the spectrum, potentially supporting applications over 10-km distances at about 24 Mbps, had the objective of serving this purpose (see Apple, 1995).

Noted elsewhere in this report is the possibility that the forces of competition and the motivations of private industry for investment may limit the interoperability of emerging telecommunications infrastructure. Next-generation cellular standards in the United States offer a possible example of such an outcome. Differences of opinion about the relative merits of two emerging approaches to PCS, together with the pressures for the rapid introduction of products into the high-demand market, have resulted in two digital cellular standards being brought to the U.S. market, one based on time division and one based on code division. This outcome, while allowing different providers to explore different approaches, suggests that a consumer will not be able to purchase a single digital cellular telephone and use it for roaming to all other digital systems within the United States.5 Some consumer dissatisfaction with this situation has already been expressed.

In contrast, more active involvement by the government in Europe has essentially led to the selection of a single digital standard for next-generation cellular telephony (Global System for Mobile Communications; GSM), which will permit the use of a single device almost anywhere within Europe and in many other parts of the world. This stronger participation by government in technology creation perhaps helps European citizens and European industry, but the European standard has also been criticized for using spectrum less efficiently, which may add to long-term costs. The European standard has also been criticized for disrupting the operation of hearing aids.

Broadcasting6

The terrestrial broadcast system of today was designed for one specific purpose, the delivery of entertainment video. However, broadcasting represents another example of the current trend to separate the specifics of the infrastructure from the set of higher-level services being offered and to move toward more general technology. Since the broadcaster's major investment is in the tower and the operation of the transmitter, any use of a television channel to offer a new service has a low incremental cost.

The essential first step in providing new types of services to consumers is the transition to digital standards for video transmission. The development of standards for digital television signals can permit four channels at the same resolution to be transmitted in the space where one was carried before. These new digital channels represent an asset that can be used to provide additional traditional television channels or perhaps redirected to some new service. This development will give consumers a much larger selection of programs. In particular, many niche markets may now be served economically, catering to every consumer taste. The broadcasting industry is clearly assessing its options, taking into account the current regulatory situation.

Digital terrestrial broadcasting is also necessary to implement high-definition television (HDTV) and advanced television broadcast and cable service. Standards for these services may be approved by the FCC in the coming year (Andrews, 1995b). Broadcasters will be able to profit from the sale and transmission of many types of program- and nonprogram-related digital data, in addition to the HDTV and multiprogram stream standard television services. HDTV will start as a prime-time service, in part because prime-time programming is already shot mostly on 35-mm film, which has more than enough resolution for HDTV. In other parts of the day, standard digital 525-line television will be broadcast, with four program streams filling the standard channel in place of one, through the use of digital compression.
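
The arithmetic behind the four-to-one figure is straightforward; the sketch below uses typical planning numbers assumed for illustration, not figures taken from the committee's analysis, for the payload of a digital 6-MHz channel and the rate of a compressed standard-definition program.

```python
# Back-of-the-envelope check of the "four program streams per channel" claim.
# Both rates below are typical planning figures, assumed for illustration.
channel_payload_mbps = 19.0   # approximate payload of one digital 6-MHz channel
sdtv_program_mbps = 4.5       # approximate MPEG-2 rate for one standard-definition program

programs_per_channel = int(channel_payload_mbps // sdtv_program_mbps)
print(f"About {programs_per_channel} standard-definition programs fit in one channel")
```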

A consequence of the digital conversion is a plethora of standards for representing the television picture. At least three important standards are required for digital transmission of video:

  • The resolution of the picture on the screen. The HDTV standard defines a higher resolution and a different ratio of height to width than does the standard definition television (SDTV) standard, which yields a picture more similar to the NTSC picture of today.
  • The compression technique for encoding the picture. For this purpose, the industry is converging on the MPEG-2 standard (named after the standards-developing Moving Picture Experts Group; see Plantec, 1995), but there are variations within the standard (called the profile and the level) that determine the picture quality, the efficiency of the compression, and the cost of the decoder.
  • The method of transmitting the compressed digital signal. Different transmission methods will be used depending on whether or not there is a limitation to the power of the transmitter, and, in consequence, on how "noisy" the reception is.

The different industries concerned with delivery of video—terrestrial broadcast, cable, and satellite—have in some cases settled on different options for these standards. Cable systems generally are not limited in transmitter power, and they have very good noise characteristics. The current proposals for digital transmission across cable systems thus tend toward schemes that can pack as much digital information as possible into an existing analog video channel. The scheme currently preferred, called 64-QAM, can fit 27 Mbps of data into a 6-MHz analog channel. Terrestrial broadcast standards, which must take into account worse noise conditions, are currently using a more robust scheme, called 8-VSB, that fits 19 Mbps of data into the same channel. In contrast, satellite television broadcast, because of the extreme limits on power and resultant poor noise conditions, uses a very robust but bandwidth-consuming scheme, called QPSK, that uses four analog video channels to carry 27 Mbps of data. The different industries have also, at present, tended to adopt different profiles and levels of the MPEG-2 standard, again because the options represent different trade-offs in bandwidth, quality, and cost.7
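
These payload figures follow from the underlying modulation arithmetic: the gross bit rate is the symbol rate multiplied by the number of bits per symbol, and error-correction coding and framing overhead reduce the gross rate to the payload actually delivered. The sketch below assumes approximate symbol rates for the two 6-MHz schemes; the exact values vary with the specific standard.

```python
import math

# Gross bit rate = symbol rate x bits per symbol; error-correction coding and
# framing reduce it to the payload figures quoted in the text. The symbol
# rates here are approximate values assumed for illustration.
schemes = [
    # (name, constellation points, approx. megasymbols/s, quoted payload in Mbps)
    ("64-QAM (cable, 6-MHz channel)", 64, 5.06, 27.0),
    ("8-VSB (terrestrial, 6-MHz channel)", 8, 10.76, 19.0),
]

for name, points, msym_per_s, payload in schemes:
    bits_per_symbol = math.log2(points)
    gross = bits_per_symbol * msym_per_s
    print(f"{name}: {gross:.1f} Mbps gross, about {payload:.0f} Mbps payload after coding")
```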

To deal with this range of formats, the nature of the television will have to change. It is technically possible to build a television that can decode multiple standards, but this adds to the cost of the device. The alternative is to provide the consumer with a set-top device specific to the method being used to deliver the signal. The television itself would then become a monitor, with many of the advanced features in the set-top box. Such a change in the terrestrial broadcast industry would represent a major transition, since today the design of all televisions (as mandated by the FCC) permits the reception of over-the-air broadcast without a translation box. However, it seems inevitable that such boxes will exist, if only to permit the analog NTSC televisions of today to receive the digital signals soon to come.

Research today is addressing the question of building televisions that can, at least to some extent, decode multiple standards at a reasonable cost. An HDTV decoder for a lower-resolution SDTV television can be built at a cost only slightly above that for an SDTV decoder and will permit reception of HDTV on an SDTV television. Building an MPEG decoder for the more complex "main profile" requires additional memory in the device, a requirement that could add $40 to $50 to the cost of the decoder at today's prices for memory (Reed, 1995). Receivers capable of dealing with both QAM and VSB can reasonably be built at a cost only slightly higher than the cost of a single dedicated decoder (see the white paper by Jill Boyce et al.). But the final arrangement of modules and components in the digital television of the future is far from clear.

Finally, systems are emerging for the wireless transmission of video signals in a low-power broadcasting mode. Multichannel multipoint distribution service (MMDS), also known as wireless cable, uses low-power microwave signals sent from a central tower to customer equipment. Depending on the set of licenses available for channels in any particular area, up to 33 analog channels may be available. Use of digital compressed encoding can further increase the number of channels that can be broadcast. Pacific Telesis recently announced its intention to offer 100 channels of digital video over MMDS in southern California. A more recent technology using a smaller cellular structure at a higher transmission frequency (28 GHz) is local multipoint distribution service (LMDS). This service, still in the experimental stage, might offer two sets of 50 channels each. These wireless approaches may provide an interim way for telephone companies, broadcasters, and other players interested in video delivery to compete with current cable providers without building a complete hybrid fiber coaxial cable or fiber-to-the-curb infrastructure.

The NII is intended to provide for interactivity between the consumer and the program provider. Currently, the capability for interactive response to terrestrial broadcasts is provided by the telephone. However, because more than half of all television households receive their broadcast signal via cable, these households will be able to interact with the program provider using the return channel on the digital cable system. The potential of MMDS and LMDS for two-way interactivity is less clear and might rely on basic telephone service for low-data-rate upstream communications. In the next 5 years, it is expected that cellular telephone service and PCS will be widely deployed, providing yet another means of low-data-rate interactive response to the program provider.

Satellite

Satellites have long played a role in the backbone segment of long-distance communications networks. A variety of satellite technologies also exists for end-user access to communications services. In the business communications market, very small aperture terminal (VSAT) systems are used by firms such as hotels, department store chains, and car dealerships to conduct data communications between a central office and remote sites, at variable data rates depending on the nature of the system; some have sufficient capacity to support video transmission. VSAT systems offer two-way communications, although data rates are frequently slower from remote sites toward the center than in the outbound direction. The remote site receivers use an approximately 1.8-meter dish, and signals are carried by satellites in geosynchronous orbit above the earth's equator. Hughes Communications has announced (and discussed at the January 1995 workshop) plans to build a geosynchronous satellite system, Spaceway, which would allow end users access to the Internet, online information services, and other applications at speeds of up to 400 kbps, using a 24-inch transmitter/receiver dish. The system is expected to be aimed at both consumer and business markets.

Geosynchronous satellites also support one-way broadcast services to consumers, primarily video entertainment. Satellite systems operating in the C and Ku8 bands of the frequency spectrum were developed for television networks to distribute programming to their affiliates for terrestrial broadcast. However, rooftop and backyard satellite receivers, with dishes of up to 10 feet in diameter, are an alternative to terrestrial broadcast and cable television for many users. C-band systems can receive approximately 150 unscrambled analog signals and another 100 scrambled channels. In the past 2 years, two direct broadcast satellite systems based on digital standards have become available, Primestar and RCA's Direct Satellite System. These services carry a package of free and pay television channels and support higher-quality video and sound than do analog systems (Samuels, 1995).

Both voice and data systems that depend on networks of satellites in low earth orbit have been announced by several firms in the past few years. One of the higher-capacity systems envisioned is Teledesic, which by the year 2001 is expected to incorporate 840 satellites and provide full telephone, high-speed data (up to 2 Mbps), and interactive video services, accessible worldwide (OTA, 1995; Brodsky, 1995).

Power Industry as Infrastructure Provider

Electric utilities invest in replacing wires on a regular basis, since wires are hit by lightning, and thus carry a risk of failure over time. The industry has observed that the incremental cost of including a fiber-optic cable inside its ground wires is very low, which naturally suggests that investment in fiber represents a good risk as a basis for entering into new business opportunities. Because it does not carry an electrical current, fiber is a natural means to transport control signals through a utility's power distribution grid. In addition, deployment of fiber communications networks reaching into customer premises would enable utilities to reduce their power-generation costs by offering demand management services such as automated, remote control of air conditioning, lighting, and heating systems. Reducing customers' energy usage would reduce overall power demand and enable utilities to delay investing in new power-generation facilities.

As discussed in the white paper by John Cavallini et al., only a very small fraction of the capacity of a fiber network that reached into customer premises would be needed to support energy services. Remaining capacity could be resold or used to support the utility's entry into additional lines of business, such as telephony or entertainment distribution. Utilities currently face choices about how and where to invest in making these infrastructures general-purpose. Technical, regulatory, and economic factors that are as yet undetermined will influence the form a multipurpose utility-owned communications network might take, but the prospect appears feasible in principle.

The Internet

To understand the Internet, one must distinguish its two aspects. First, there is "the Internet," usually written with a capital "I," which is the actual collection of links and switches that make up the current public network. This, and the applications that run over it such as the World Wide Web, are the aspects that have excited the public and press.

The other aspect of the Internet is the set of documents that define its protocols and standards. These standards, which are printed in the document series called the Internet Requests for Comment, define how networks that are part of the Internet can interconnect, and they guide the development of the software that is needed for any machine to attach to the Internet.

But the public Internet is not the only use for these protocols and standards. Anyone can purchase the necessary network technology and connect a number of computers into an internet (with a lower case "i").

And there are probably thousands of such internets, since many businesses today run private corporate networks that support the Internet protocols (as well as others)9. Over the last 5 years, privately constructed networks have by far exceeded the public Internet in driving the demand for packet routers and related products.

Change and Growth

The continued and rapid growth of the public Internet is a well-known phenomenon. But the growth of the Internet is reflected in more than the rate of new deployment. The protocols, too, are changing in response to evolving needs. The Internet standards should not be thought of as frozen for all time. Indeed, a major part of the success of the Internet has been its ability to change as new technologies and service requirements emerge. At the present time, the explosive growth of the public Internet has forced a redesign of the central protocol—the Internet protocol (IP) itself—to produce a new version called IPv6. IP defines the central service that ties the range of applications to the range of network infrastructures. One example of a change forced by growth is the need to increase the size of the Internet addresses used to identify the computers attached to the network. If the Internet continues to grow as predicted, there will not be enough addresses available to name all the devices that will be attached. A different address format with room for longer addresses is thus a central part of IPv6. Other features, too, such as increased security and automatic configuration (so-called "plug-and-play" attachment to the Internet), are being added to the next generation of IP. This advance to IPv6 will be difficult to accomplish, since the IP is now implemented and installed in millions of computers worldwide. The success or failure of the migration to IPv6 will be a measure of the Internet's ability to continue to grow and evolve.
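
The scale of the address-size change is easy to quantify: the current IP uses 32-bit addresses, while IPv6 uses 128-bit addresses. The short calculation below compares the two address spaces.

```python
# Address space comparison between the current IP (32-bit addresses) and
# IPv6 (128-bit addresses).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} possible addresses")    # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.1e} possible addresses")  # about 3.4 x 10^38
```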

Other changes relate to new application services. For example, multicast, which is the ability to send a single message and have it reach a number of recipients with high efficiency, is now being added to IP. This capability, together with the coming support for real-time services such as audio and video, permits the development of a wide range of new applications, such as multiway teleconferences, provision of audio and video information on the World Wide Web, and broadcast of specialty audio and television sources (Berners-Lee et al., 1994; Eriksson, 1994). For example, live audio feeds from the U.S. House of Representatives and U.S. Senate floor are now available as an experiment across the Internet.
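
From an application's point of view, multicast simply means addressing a datagram to a group address rather than to a single host, while receivers ask their local IP stacks to join that group. The sketch below, using an arbitrarily chosen group address and port, demonstrates the idea on a single host with multicast loopback enabled (the default on most systems).

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007   # example group address and port (arbitrary choices)

# Receiver: bind to the port and ask the local IP stack to join the group.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", PORT))
membership = struct.pack("=4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
recv_sock.settimeout(2.0)

# Sender: one datagram addressed to the group reaches every receiver that joined.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
send_sock.sendto(b"one copy, many listeners", (GROUP, PORT))

print(recv_sock.recvfrom(1024))   # the locally joined receiver hears the message
```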

One of the ways that the design of the Internet allows for change and evolution is that as much as possible, the functions of the Internet are implemented in the end nodes, the attached computers, rather than in the core of the network, the routers and the infrastructure. First, as discussed above, having fewer dependencies on the infrastructure permits the use of a broader range of infrastructure. But perhaps more importantly, if functions and services are implemented on the end node, they can be changed or replaced without having to modify the internals of the network. In particular, creating a new application on the Internet requires only that appropriate software be distributed to interested users, and does not require making changes to the router code.10 Thus, users can experiment with new applications at will.

The limitation of pushing function to the edge of the network is that the minimal end node suitable for attachment to the Internet must have a certain amount of processing power. The alternative, in which the network contains more of the functions, permits the end node to be cheaper. Indeed, it will be a while until an Internet-style end node can be manufactured for the cost of a cheap telephone (see Chapter 2). But for networks such as the Internet that attempt to deliver a general set of services, there is no evidence today that overall costs can be reduced by shifting function into the center of the network. With the trend toward increased functionality and programmability, even for more specialized devices such as telephones and televisions, it is reasonable to predict that soon even very simple and inexpensive devices will contain the functions necessary to implement access to the Internet.

The Internet phenomenon is tied as much to the increasing power of the processing chip as to the advent of higher-capacity communications links such as optical fibers. The ability of the user to purchase inexpensive end-node devices, of course, is basic to the Internet's existence. But it is also the processor that permits the construction within the network of cost-effective routers and other devices that realize the Internet, such as name servers or data caches. The power and flexibility of programmable devices influence the economics, the functions, and the future of the Internet.

Of course, to many, the Internet phenomenon is not fibers or silicon, but people. The Internet, by its open architecture, has permitted a great number of individuals to conceive and try out innovative ideas, and some of these, like the World Wide Web, have taken root and become a basic part of the success of the Internet. The process by which Internet standards are set reflects this philosophy of open involvement. The Internet Engineering Task Force meets three times a year to debate and set standards. It is open to any who want to come, and attendance at meetings has topped 1,000. The Internet is not constrained by an over-arching vision of what it should become (although such visions are certainly offered and do serve as guides); it flourishes by the bottom-up enhancement that arises when people propose ideas, demonstrate that they work, and offer them to the community for debate and standardization. This process has worked well for the last 20 years. How well it will continue to work, given the increasing commercial pressures on the standards-setting process and the increasing size and visibility of the Internet and its design community, remains to be seen (see Box 4.4).11

Transport Infrastructure to Information Infrastructure

For its first 20 years, the Internet's designers were concerned with the basic problem of moving data between machines, in the context of computers and network facilities with differing characteristics. At this level, the data being moved are viewed as simple strings of bits or bytes. The format of the information is viewed as a higher-level problem. Of course, the Internet standards included a description of how those bytes were to be formatted for critical applications such as electronic mail. But the central problem was the basic one of getting these bits across the wires to the recipient.

At this time, the emphasis of the Internet developers has expanded. While considerable efforts are still being made on the lower layers of the standards—to incorporate new infrastructure elements such as ATM, to add new delivery modes such as real time, and to deal with critical issues such as scale and management—new problems now being addressed relate to the format and meaning of the bits, in other words, the task of turning the bits into real information.

The World Wide Web provides a good example of some of the issues. Central to the success of the Web is a standard that describes the encoding of information so that it can be displayed as a Web page. This standard, called HTML (an abbreviation for Hypertext Mark-up Language), allows the creator of information to control the presentation of the material on the page (its size and style, for example) and to specify how the text, graphics, images, and other media modes will be combined on the screen of the user. In fact, although the basic HTML format is very simple (most users who can deal with a word processor can create a Web page with an hour or so of training), it enables the creation of rather complex information objects, such as forms on a Web server that can be filled out by the user. It is this standard for presentation of information that has made possible the construction of different "Web browsers"—the user interface software for interacting with the information on the Web—and has made it possible for so many providers of information to format that information for use there.
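
The simplicity claimed for basic HTML is visible in a minimal page. The short script below, whose file name and contents are invented for illustration, writes out a page containing a heading, a link, and a simple form of the kind described above.

```python
# A minimal Web page, written out by a short script. The markup is the point;
# the file name and the content are invented for illustration.
page = """<html>
<head><title>Demonstration Page</title></head>
<body>
<h1>A Minimal Web Page</h1>
<p>Plain text, a <a href="http://www.example.org/">link</a>, and a form:</p>
<form action="/submit" method="post">
  Name: <input type="text" name="name">
  <input type="submit" value="Send">
</form>
</body>
</html>
"""

with open("demo.html", "w") as f:
    f.write(page)
print("Wrote demo.html")
```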

But, in fact, there is a need for a variety of information formats on the Internet. Even for the simple task of formatting information for display and printing, a range of standards exists.

BOX 4.4 What Will Happen to the Internet?

The Internet has undone what Columbus discovered 500 years ago: It has made the earth flat once again. Maybe it has even made it a bit like a shallow bowl, where every being can "see" every other with relative ease. Broadcast radio and television have made it such that most humans can sense remotely what others are doing; but the Internet allows for the two-way communication that spurs evolution.

The question is, Will the Internet collapse of its own disorganized weight, get regulated to death, or shrug off all attempts to control it and simply take over the earth? In biological terms these alternatives can be likened to cancer, drugs, and green slime.

What are the main factors that will determine the outcome of the struggle to "grow" the Internet in a reasonable way? I see a handful of such factors:

The "media hype" factor. By the time this gets published, I expect that the Internet will have been the subject of a Geraldo or Oprah expose.

The "build-a-better-one" factor. The Internet was originally designed to assume very little about the underlying data transport facilities. This assumption has stood the test of time—the mail goes through, files get transferred, and the Mosaic browser roams the planet in search of interesting information. The number of addresses (potential subscribers) has grown so fast that there is a move afoot to add the capability for many more subscribers. However, the desire to add more and more functionality to the base service threatens to create a design that only an airplane designer could love.

The "common-good" factor. Talk about "data have-nots" and social imperatives pose the prospect of new rules, but legislating universal data access will kill it before it builds enough of a natural market to drive down prices through volume and competition. When it is cheap enough, data access will be ubiquitous. The other, more intractable problem is that on the Internet, local is global. Anything I do in California can be observed and interacted with by persons in Hong Kong or Sao Paulo or Toledo. Whose rules of censorship, taxes, and prices should apply?

The "elephant-view" factor. The Internet is a vast opportunity space. There are many suppliers of solutions at every layer of the data communications infrastructure. And they all work like the devil to enhance their local benefits. This results in a gradual blurring of the nice clean architectural model that we started with and creates short-term disruption in the marketplace while competitors try to respond to new approaches.

The "trust-it-to-luck" factor. There is a large contingent of Internet pioneers who say "leave it alone." Don't regulate it, don't tax it, don't improve it except to allow it to take on more customers. Just let it evolve. It is not everything it could be, but it is great for what it is: a cheap way of allowing anyone to share digital information with anyone else. To those who wish to make it serve particular niche needs, they say, go build those facilities on the base system, but do not make all the rest of us have to pay your overhead.

—Daniel Lynch, Interop Company and Cybercash Inc.

As the white paper by Stephen Zilles and Richard Cohn observes, in some cases an author of information may want to specify its format totally—controlling the layout and appearance of each page exactly. A document format standard such as Portable Document Format can be used for this purpose. The Web's HTML represents a middle ground in which the creator and the viewer both have some control over the format. At the other end of the spectrum, maximum flexibility for the user in reformatting and reprocessing information is provided by database representations such as SQL (Structured Query Language), which are concerned not with display formats but with capturing the semantics of the information so that programs can process it. The current trend, consistent with the approach taken in the Internet to deal with multiple standards, is for all of these to coexist, permitting the creator to impose as much or as little format control as is warranted for any particular document.
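
The contrast between presentation-oriented and semantics-oriented representations can be made concrete with a small example. The table and query below are invented for illustration; the point is that a database representation says nothing about fonts or page layout, so a program is free to reprocess the information in any way it likes.

```python
import sqlite3

# A semantic representation: the data carry structure and meaning, not layout.
# The table and its contents are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (title TEXT, author TEXT, year INTEGER)")
db.executemany("INSERT INTO documents VALUES (?, ?, ?)", [
    ("Example Report on Access Technology", "A. Author", 1995),
    ("Example Survey of Data Formats", "B. Writer", 1995),
])

# Any program can now reformat or reprocess the information as it sees fit.
for title, author in db.execute(
        "SELECT title, author FROM documents WHERE year = 1995 ORDER BY title"):
    print(f"{author}: {title}")
```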

Looking at the Internet today, one sees tremendous innovation in these areas, just as there was in the past concerning the basic issues of bit transport. Examples of current standardization efforts include "name-spaces" and formats of information objects, protocols for electronic commerce, and a framework for managing multimedia conference sessions.

Over the next few years, this increased attention to higher-level information architecture issues will have an impact on the "inside" of the network, the routers and internal services. For example, some Internet service providers are planning to deploy computers with large disk arrays at central points within the Internet to store popular information (e.g., Web pages and related files such as images) close to the user, so that it can be delivered on demand without having to be fetched from across the globe. This sort of enhancement will increase the apparent responsiveness of the network and at the same time reduce the load on the wide-area trunks. It represents another example of how the increasing processing power that becomes available can be used to enhance the performance of the network, independent of advances in network infrastructure.
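
The caching idea can be sketched in a few lines. The cache below is a simple in-memory table keyed by the name of the requested object; the fetch_from_origin function stands in for a retrieval across the wide-area trunks and is a placeholder, not a real protocol client.

```python
# Sketch of a cache placed "inside" the network: popular objects are served
# locally; only misses travel across the wide-area trunks.
class ContentCache:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # placeholder for a wide-area fetch
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, name):
        if name in self.store:
            self.hits += 1     # served locally: fast, no trunk traffic
        else:
            self.misses += 1   # fetched once from far away, then kept
            self.store[name] = self.fetch_from_origin(name)
        return self.store[name]

cache = ContentCache(fetch_from_origin=lambda name: f"<contents of {name}>")
for request in ["page-a", "page-b", "page-a", "page-a"]:
    cache.get(request)
print(f"hits={cache.hits} misses={cache.misses}")   # hits=2 misses=2
```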

Open Interfaces and Open Standards

The Internet is perhaps an extreme example of a system that is open; in fact it is open in a number of ways. First, all of its standards and specifications are available for free, and without any restriction on use. The meetings at which standards are set are open to all.12 Second, one objective of the design of the standards is to make it as easy as possible for networks within the Internet—both public networks of Internet service providers and private networks of corporations, institutions, and individuals—to connect together. Thus the Internet is open to providers as well as users. Third, its internal structure is organized to be as open as possible to new applications. For example, some of its traditional features, such as the software that ensures ordered, reliable delivery of data (the protocol called TCP), are not mandatory, but can be bypassed if this better suits the needs of an application.13 This openness has made the Internet an environment conducive to the innovation of new applications.
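
This optional character of reliable, ordered delivery is visible at the programming interface: an application opens a stream (TCP) socket when it wants that service and a datagram (UDP) socket when it does not. The addresses in the sketch below are placeholders.

```python
import socket

# An application takes TCP's reliable, ordered byte stream only if it wants it...
stream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP

# ...or bypasses it and sends raw datagrams over UDP, accepting that packets
# may be lost or reordered (useful for audio, video, and experimentation).
datagram = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
datagram.sendto(b"no connection, no delivery guarantee", ("127.0.0.1", 9999))

stream.close()
datagram.close()
```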

Standards And Innovation In The Marketplace

It is important not to underestimate the total number of standards that collectively define the existing personal computer (PC) marketplace, the Internet, the telephone system, or the video delivery infrastructure. Standards describe low-level electrical and mechanical interfaces (e.g., the video plug on the back of a television or the Ethernet plug on the back of a computer). They define how external modules plug into a PC. They define the protocols, or agreements for interaction between computers connected to a common network. They define how functions are partitioned among different parts of a system, as in the relationship between the television and the decoder now being defined by the FCC.14 They define the representation of information, in circumstances as diverse as the format of a television signal broadcast over the air and a Web page delivered over the Internet.

Corresponding to the volume of standards is the range of standards-setting activity. The United States has more than 400 private standards-developing organizations. Most are organized around a given industry, profession, or academic discipline. They include professional and technical societies, industry associations, and open-membership organizations. Among the most active U.S. information technology standards developers are the Institute of Electrical and Electronics Engineers, a professional society; the Information Technology Industry Council, which administers information processing standards development in Committee X3; and the Alliance for Telecommunications Industry Solutions (ATIS), coordinator of Committee T1 for telecommunication standards. Inputs from domestic standards activities by such organizations to international standards organizations (discussed below) are coordinated by the American National Standards Institute.

Standard interfaces allow new products related to information infrastructure to interoperate with each other and with existing products. They are therefore essential for new markets to develop. However, differences in how standards are set can be found among industries, and the approach to standards setting may affect progress in the multiindustry, multitechnology world of the NII. Consider historic differences between the telecommunications and computer industries.

The telecommunications industry has depended on a variety of national and international standards organizations. The International Telecommunications Union (ITU) is the primary telecommunications standards organization at the international level. As a United Nations treaty organization, its members are governments. The ITU sets thousands of standards for telecommunications services and equipment and for their interoperability across interfaces, as well as allocating radio-frequency spectrum. U.S. representation at the ITU is coordinated by the State Department, with participation by other public agencies and private industry. Since AT&T's divestiture, domestic U.S. telecommunications standards have generally been developed by industry-led formal standards organizations, such as ATIS, mentioned above. All of these organizations seek to produce formal "de jure" standards through a process of consensus in technical committees. Standards for some kinds of equipment and services must be approved by the FCC, sometimes adding months or years to the process.

To a much greater extent than the telecommunications industry, the modern computer industry has relied on the marketplace for creating "de facto" standards. In such a system, companies' fortunes depend to a significant extent on their ability either to create de facto standards or to supply products rapidly that conform to emerging de facto standards. In the computer industry, this trend has been driven by market competition, the rapid pace of computer technology change, and the "bandwagon" effect that leads consumers to adopt technologies that appear to be emerging as widespread standards rather than risk being left unable to interoperate with other users and systems.15

In practice, standards development exists within a continuum.16 Many computer industry standards are formalized in national and international standards organizations, such as the International Organization for Standardization—although these standards frequently lag the de facto processes of the market. There are also a multitude of "hybrid" systems. For example, it was the combination of market forces and formal standards committees that created many of the LAN standards currently in use. The emergence of standards consortia in both the computer and telecommunications industries reflects a compromise between the slower pace of consensus standards setting in formal organizations and the chaos of the market. If different consortia produce different standards, however, the fundamental problem of reaching a standard remains.17 No matter what the method, real and meaningful standards are essential to mass deployment of technology. Anything less is immaterial; standards mean volume!

Neither system of developing standards is perfect. De jure standards creation, while it may be more orderly, tends to be slow and is not immune to political pressures. Furthermore, it may not result in a common standard—witness the regional differences in ISDN deployment. De facto standards, while they may emerge more rapidly, can result in a period of market chaos that delays mass deployment—and the marketplace also sometimes fails to produce a single, dominant standard.18 They may also lead to antitrust pressures, as experienced by both IBM and Microsoft.

Both formal and market-driven standards setting can favor established major players, but for different reasons. In the case of de jure standards, the long delay in settling on standards allows the established major companies to adapt to the new technology capability and provides for formal representation of users as well as producers. De facto standards, on the other hand, require market power, and the established major players have it; thus, they can benefit, but only if they can move rapidly.

The process of setting standards for the Internet is an interesting and important example of how these concerns can be balanced. Internet standards fall somewhere between de jure and de facto. While the Internet standards body (the Internet Engineering Task Force, or IETF) has not been endorsed by any national or international standards body,19 it operates with open membership and defined processes for setting standards, and it attempts to avoid domination by any industry sector or by large market players. It has been praised for producing standards that work, because it looks for implementation experience as part of its review process. It has also been criticized for the slowness of its processes and for what some see as a somewhat disorderly approach to consensus building.

It is worth noting that some of the important standards in wide use over the Internet, including the standards for the World Wide Web, were not developed formally through the IETF process. Instead, they were proposed by other groups, discussed informally at IETF meetings, distributed on-line over the Internet, and then accepted by industry without further IETF action. Although this partial bypass of the formal IETF processes worries some observers, there can be no argument with the success of the World Wide Web in achieving rapid market penetration. It remains to be seen how IETF and informal Internet standards-setting processes will evolve and function in the future.20

There is strong private sector motivation for effective standards setting. Many participants at the forum and workshop said in effect that while the government should act to facilitate effective standards setting, it must not create roadblocks to such efforts by imposing government-dictated standards processes.21 Government use of private, voluntary standards in its own procurement, however, can be supportive.22

The process of setting standards is only one part of the delay in getting a new idea to market. If the idea requires software that is interoperable on a number of different computing platforms, the sequence of steps today to push a new innovation into the marketplace is to propose the new idea, have it discussed and accepted by a standards body, and then have it implemented by some party on all the relevant computers. A brute-force way of bypassing this process is for a single industry player to write the necessary code for all the relevant computers, as Netscape did for its Web browser. Netscape coded, and gave away in order to create the market, three versions of its Web browser—for Windows, for the Macintosh, and for Unix. One drawback to this approach is the large effort required of that single player. An additional drawback is that the person responsible for each computer still needs to retrieve and install the software package.

An idea now being proposed to avoid these drawbacks is to define a high-level computer language and a means for automatic distribution and execution of programs written in this language. Under this scheme, an interpreter for such a language would be installed on all the relevant computers. Once this step was taken, a new application could be written in the language and transferred automatically to any prepared computer. This would permit a new innovation to be implemented exactly once and then deployed essentially instantly across the network to interested parties. An example of such a scheme is the proposal for the Java language from Sun Microsystems. Sun has implemented an interpreter for the Java language, called HotJava, which can be incorporated into almost any computer. Netscape and Microsoft have announced that they will put an interpreter for Java into their Web browsers. Such developments will permit new applications to be written in Java, downloaded over the Web, and executed by most of the popular browsers on virtually any computer.
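
To make the mechanism concrete, the short sketch below shows the kind of small Java program envisioned here. It is written once against the portable applet interface (java.applet.Applet and java.awt.Graphics), which the browser vendors' interpreters support; the class name and displayed text are purely illustrative.

// A minimal, illustrative Java applet: compiled once to portable bytecode,
// it can be fetched over the Web and executed by any browser that embeds
// a Java interpreter, with no platform-specific versions to maintain.
import java.applet.Applet;
import java.awt.Graphics;

public class HelloNII extends Applet {
    // The browser's interpreter calls paint() whenever the applet must draw itself.
    public void paint(Graphics g) {
        g.drawString("Written once, deployed everywhere an interpreter exists", 20, 20);
    }
}

A Web page would then reference the compiled class (for example, with the HTML applet tag of the time), and the browser's interpreter would fetch and run the bytecode on demand.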

This set of ideas, if successful, could have a substantial impact on the process of innovation by speeding up evolution and reducing implementation costs in areas where it is relevant. David Messerschmitt of the University of California at Berkeley observed:

[O]ne of the key attributes of the NII should be dynamic application deployment; we should be able to deploy new applications on the infrastructure without the sort of community-of-interest problems and standardization problems associated, say, with users having to go out to their local software store and buy the appropriate applications. … [It] should be possible to deploy applications dynamically over the network itself, which basically means download the software descriptions of applications over the network.

This set of ideas could also permit the construction of new sorts of applications. For example, programs could be sent to remote machines to perform searches on information there. Thus, remote program execution could be a means to implement intelligent agents on the network. More speculatively, these ideas might change the distribution of function within the network. Clearly, remote interpretation of programs exploits the increased processing power of the PC of today. But some have speculated that this approach, by downloading on demand only the required software into a PC, could reduce the complexity and cost of that PC by eliminating some of the requirement for disk space and memory. This shift might help ameliorate the economic challenge of providing NII access for the less affluent. However, many are skeptical that this shift of processing power back from the end node and into the network, which runs counter to the recent history of the computer industry, will prove effective. Specifically, it would require increased bandwidth in the subscriber access path, which seems difficult to justify economically. There is no clear conclusion to this debate today.
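
A minimal sketch of the remote-execution idea appears below, written in Java with purely illustrative names (RemoteTask, KeywordSearchTask); no existing agent system is implied. The essential point is that the small task object, not the large body of data, is what moves across the network.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a "mobile" search task: the task is shipped to the
// machine that holds the documents, runs there, and returns only the matches.
interface RemoteTask extends Serializable {
    List<String> runAgainst(List<String> localDocuments);
}

class KeywordSearchTask implements RemoteTask {
    private final String keyword;

    KeywordSearchTask(String keyword) {
        this.keyword = keyword;
    }

    public List<String> runAgainst(List<String> localDocuments) {
        List<String> hits = new ArrayList<String>();
        for (String document : localDocuments) {
            if (document.contains(keyword)) {
                hits.add(document);      // the filtering happens at the remote site
            }
        }
        return hits;                     // only the small result set crosses the network
    }
}

As with the browser example above, the remote machine would need only a trusted interpreter for the language in which such tasks are written.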

Management And Control Of The Infrastructure

The aspect of the information infrastructure that is most exciting to the user is the service interface that defines what the network can provide to the user—how fast the network will operate, what sorts of services it can support, and so on. Perhaps understandably, these issues received the most attention at the workshop and forum. Equally important as networks grow bigger, however, are the issues of management and control. At the January 1995 workshop, Mahal Mohan of AT&T commented on the "tremendous number of numbering, switching, overall administration, [and] service-provider-to-service-provider compensation" details involved in supporting network-based services, details, he lamented, that "tend to get overlooked in just counting out the bandwidth of what is coming to the home and who owns it."

The issues of control and management for the Internet are particularly instructive. The Internet grew with a very decentralized model of control; there is no central point of oversight or administration. This model was part of the early success of the Internet, allowing it to grow in a very autonomous manner. However, as the Internet grows larger and expectations for the quality and stability of the service rise, some believe that changes are needed in the approach to Internet management and control. In the 1994 to 1995 period there were a number of reports of errors (usually human errors) in the operation of the Internet routing protocols that caused routing failures, misdirecting information so that it failed to reach its destination. The protocols and controls in place today may not be adequate to prevent these sorts of failures, which can only grow more common as the number of networks and people involved in the Internet continues to grow.
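
The failure mode is easy to see in miniature. In the sketch below, a router forwards on the most specific matching entry in its table; a single mistyped, overly specific entry (entirely hypothetical here, as are the class and route names) silently captures traffic that should have followed the correct route.

import java.util.LinkedHashMap;
import java.util.Map;

// Toy forwarding table: longest-prefix match over addresses written as bit strings.
public class ToyRouter {
    private final Map<String, String> table = new LinkedHashMap<String, String>();

    void addRoute(String prefixBits, String nextHop) {
        table.put(prefixBits, nextHop);
    }

    String lookup(String addressBits) {
        String bestPrefix = "";
        String nextHop = "no route";
        for (Map.Entry<String, String> entry : table.entrySet()) {
            String prefix = entry.getKey();
            // keep the longest prefix that matches the destination address
            if (addressBits.startsWith(prefix) && prefix.length() > bestPrefix.length()) {
                bestPrefix = prefix;
                nextHop = entry.getValue();
            }
        }
        return nextHop;
    }

    public static void main(String[] args) {
        ToyRouter router = new ToyRouter();
        router.addRoute("0000101", "link to provider A");   // correct aggregate route
        // Human error: an overly specific prefix typed into the wrong router.
        router.addRoute("00001010", "link to provider B");
        // Traffic matching the longer (erroneous) prefix is silently misdirected.
        System.out.println(router.lookup("000010100110"));  // prints "link to provider B"
        System.out.println(router.lookup("000010110110"));  // prints "link to provider A"
    }
}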

At the forum, Howard Frank of the Advanced Research Projects Agency expressed surprise at having heard "no discussion at all about the management structures, information management technologies, help services," and so on. He observed that progress in these areas is necessary so that "an internet can evolve from a rather chaotic, independent, loose collection of things to a managed system of the quality of the telephone system, which will allow us to go from a few percent to the 50 percent mark."

A white paper by David Clark of the Massachusetts Institute of Technology argues that the Internet and the infrastructure over which it runs, which were totally separated in the early days of the Internet, must now come together in some ways to facilitate better management and control. Talking about maturing services such as the Internet, Clark observes:

If the service proves mature, there will be a migration of the service "into" the infrastructure, so that it becomes more visible to the infrastructure providers and can be better managed and supported.

This is what is happening with the Internet today. The Internet, which started as an overlay on top of point-to-point telephone circuits, is now becoming of commercial interest to the providers as a supported service. A key question for the future is how the Internet can better integrate itself into the underlying technology, so that it can become better managed and operated, without losing the fundamental flexibility that has made it succeed. The central change will be in the area of network management, including issues such as accounting, fault detection and recovery, usage measurement, and so on. These issues have not been emphasized in many of the discussions to this point, and they deserve separate consideration in their own right.

Milo Medin of @Home called for approaches such as distributed caching to "avoid vaporizing the Internet" due to excess traffic, yet there is no mechanism to encourage or enforce such prudent practice.
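
A caching proxy of the kind Medin had in mind can be sketched simply. The class below keeps recently fetched Web objects near a group of users so that repeated requests are answered locally rather than repeatedly crossing the backbone; its names, capacity, and fetch method are assumptions for illustration, not any particular product.

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal proxy-style cache with least-recently-used eviction.
public class ProxyCache {
    private static final int CAPACITY = 1000;   // number of objects kept locally

    // access-ordered LinkedHashMap evicts the least recently used entry
    private final Map<String, byte[]> cache =
        new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > CAPACITY;
            }
        };

    public byte[] get(String url) {
        byte[] body = cache.get(url);
        if (body == null) {                     // cache miss: fetch once from the origin server
            body = fetchFromOriginServer(url);
            cache.put(url, body);
        }
        return body;                            // cache hit: no backbone traffic generated
    }

    // Placeholder for an actual HTTP fetch; hypothetical in this sketch.
    private byte[] fetchFromOriginServer(String url) {
        return new byte[0];
    }
}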

Any change in the overall approach to Internet management and control will require the development of an overall architecture or model for the new approach. This sort of major redesign is very difficult to contemplate in the Internet today, due to the large installed base of equipment and the bottom-up approach of the standards process. Whether and how to evolve the management and control model of the Internet thus represents a major point of concern for the future.

At the same time that some are calling for more regimented approaches to Internet management and control, others argue that the Internet style of control is preferable to the model that derives more closely from the traditions of the telephone company. The current ATM standards have been criticized by some for this reason. The signaling and control systems being developed come from a heritage in the telecommunications industry that, while a reasonable model when it was first adopted, may be coming under increasing strain. The Internet community has developed a different technical approach to signaling and control, which may prove to be simpler and more robust. Instead of building complex mechanisms to ensure that any fault inside the network can be locally detected and corrected, the end nodes attached to the network periodically repeat their service requests. These periodic re-requests reinstate the needed information at any points inside the network that have lost track of a service request because of a transient failure. There thus remains a significant set of technical disagreements and uncertainties about the best approach to network management and operation, both for the maturing Internet and for the next generation of technologies for mature services such as voice and video.
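
The contrast can be illustrated with a small sketch of this "refresh and expire" pattern: the end node simply repeats its request on a timer, and an element inside the network keeps the corresponding state only as long as refreshes continue to arrive, so a transient loss of state is repaired by the next refresh rather than by elaborate fault-recovery machinery. All names and timer values below are assumptions chosen for illustration.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Soft-state sketch: an end node periodically re-sends its service request;
// a network element retains the request only while refreshes keep arriving.
public class SoftStateSketch {
    static final long REFRESH_MS = 5000;    // end node repeats its request every 5 seconds
    static final long TIMEOUT_MS = 15000;   // network forgets a flow after three missed refreshes

    // the network element's view: flow identifier -> time of last refresh
    static final Map<String, Long> reservations = new HashMap<String, Long>();

    static void refresh(String flowId) {    // called for each arriving request
        reservations.put(flowId, System.currentTimeMillis());
    }

    static void expireStale() {             // run periodically inside the network
        long now = System.currentTimeMillis();
        for (Iterator<Map.Entry<String, Long>> it = reservations.entrySet().iterator(); it.hasNext();) {
            if (now - it.next().getValue() > TIMEOUT_MS) {
                it.remove();                // state not refreshed recently is simply dropped
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            refresh("video-flow-42");       // the end node keeps asking for what it needs
            Thread.sleep(REFRESH_MS);
            expireStale();
            System.out.println("reservation held: " + reservations.containsKey("video-flow-42"));
        }
    }
}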

An issue that is now receiving considerable attention is pricing and cost recovery in the Internet (see Chapter 3 for more discussion). In the past, the Internet has been paid for on a subscription or fixed-fee basis. There is now considerable debate as to whether some form of usage-based charging is appropriate or necessary. The white paper by Robert Powers et al. notes the need to balance recovery of the cost of consumed services against the cost of implementing the billing mechanism. Since billing systems and their use have costs, it is an open question whether telephony-style billing is the right model. The answer no doubt depends on the types of applications that users will demand: electronic mail places rather small demands on the network, but video conferencing is quite different. Pricing is relevant to network technologists not only because of the issues involved in implementing accounting and billing systems, but also because pricing influences how (and how much) networks are used. These incentive effects interact with the network architecture to affect the performance as well as the profitability of networks.23
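
The tradeoff can be made concrete with simple arithmetic. The figures below are all hypothetical, chosen only to show the structure of the comparison: usage-based charging tracks consumption more faithfully, but it adds a fixed metering-and-billing overhead to every account, so it pays off only when usage varies widely across users.

// Hypothetical comparison of flat-rate vs. usage-based cost recovery.
// Every number below is an assumption used only to illustrate the tradeoff.
public class PricingSketch {
    public static void main(String[] args) {
        double costPerMB = 0.02;        // provider's incremental cost per megabyte carried
        double billingOverhead = 1.50;  // extra cost per account per month to meter and bill usage
        double flatFee = 20.00;         // flat monthly subscription price

        double[] usageMB = {50, 500, 5000};  // light e-mail user ... heavy video user
        for (double mb : usageMB) {
            double usageBased = mb * costPerMB + billingOverhead;
            System.out.printf("usage %5.0f MB: flat $%.2f vs. usage-based $%.2f%n",
                              mb, flatFee, usageBased);
        }
        // Light users subsidize heavy users under the flat fee; metering corrects that,
        // but only if the billing overhead stays small relative to the bill itself.
    }
}

With these assumed numbers, the light electronic-mail user pays far less under usage-based charging and the heavy video user pays far more; that redistribution, together with the overhead of metering, is what makes the choice of pricing model contentious.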

Notes

1. Andrew Lippman of the Massachusetts Institute of Technology observed that industry has not been good at predicting what applications will prevail and should thus engineer for the unexpected, and not focus on the specific application of today.

2. Different assumptions about market penetration and traffic load would obviously change these results.

3. Time Warner has demonstrated the feasibility of such a system in trials in Orlando.

4. The Wireless Information Networks Forum (WINForum), an industry association, petitioned the FCC in May 1995 to set aside additional spectrum for wireless local area networking at higher data rates than current options allow. The proposed allocation would support high-speed LANs at 20 Mbps, sufficient for multimedia applications. See WINForum (1995).

5. All digital cellular phones are expected to be able to be used with traditional analog facilities, which will remain in place for the foreseeable future. However, the added complexity of a device that can support multiple standards may add to its cost, especially in the short run.

6. Material for this section was taken from Henderson (1995) and from Reed (1995).

7. A further variation is whether fixed- or variable-rate video encoding is used. Variable-rate encoding can permit better representation of complex scenes within a program, at the cost of a higher peak rate. However, other sorts of digital information can be transmitted in the instants when the full channel capacity is not needed for the video. In one recent experiment, a 50-second commercial provided enough unused capacity to transmit 60 megabytes of data.

8. Ka-band frequencies are also in high demand as companies rush to offer satellite video-conferencing and computer networking. Using this band, systems such as Hughes' Spaceway could offer "bandwidth on demand" where "today's 24-minute download from the Internet would take less than 4 seconds at a cost no higher than today's rates." See Cole (1995), OTA (1995), and Markoff (1995).

9. Some corporate networks should properly be thought of as being part of the public Internet, since they are directly connected and exchange packets. However, many corporate networks, if connected at all to the public Internet, exchange only specific and limited applications such as electronic mail.

10. This statement is not accurate in every instance; certain enhancements to the Internet, such as support for audio and video, will require upgrades to the router code itself.

11. The CSTB (1994b) report Realizing the Information Future includes an expanded discussion of the pressures and concerns now arising in the Internet standards process.

12. Internet standards are discussed and set by the Internet Engineering Task Force (IETF) and its working groups, which collectively meet three times a year. In addition, much work is carried out on the Internet itself. For information, see the Web page of the IETF at http://www.ietf.org.

13. This situation prevails with real-time transport of audio and video, where quick delivery is more important than 100 percent reliable delivery.

14. See the Cable Television Consumer Protection and Competition Act of 1992 (P.L. 102-385), amending 47 USC Sec. 544. It required that "[w]ithin one year after October 5, 1992, the Commission shall prescribe regulations which establish minimum technical standards relating to cable systems' technical operation and signal quality. The Commission shall update such standards periodically to reflect improvements in technology. A franchising authority may require as part of a franchise (including a modification, renewal, or transfer thereof) provisions for the enforcement of the standards prescribed under this subsection. A franchising authority may apply to the Commission for a waiver to impose standards that are more stringent than the standards prescribed by the Commission under this subsection." These efforts have been codified at the FCC as Section 15.115, "TV interface devices, including cable system terminal devices," and Section 68.110, "Compatibility of the telephone network and terminal equipment."

15. IBM's System/360 and Microsoft's MS-DOS and Windows are the classic examples.

16. Government-mandated regulations and procurement specifications constitute a third category of standards. Agencies at all levels of government set regulatory standards for products and processes in order to protect health, safety, and the environment. They also produce specifications for public procurement of goods and services. The Federal Register regularly publishes requests for comments on standards proposed by federal agencies. Some of these are developed by agencies, while others originate as voluntary standards set in the private sector and are adopted by reference in the text of regulations and specifications.

17. A very broad consortium, known as the Information Infrastructure Standards Panel (IISP) and spearheaded by the American National Standards Institute, has attempted since mid-1994 to bring together a large number and variety of organizations and entities concerned with standards relating to the national and global information infrastructure. In late November 1995, the IISP issued a list of 35 "standards needs," ranging across such areas as reliability, quality of service, provision of protections (e.g., security), specific types of interfaces, and data formatting (see Lefkin, 1995). It is premature to judge the outcome of this effort, although anecdotal reports from some parties familiar with it have noted how difficult it is for cross-industry forums to achieve results focused and specific enough to constitute an advance over disparate, separate standards-developing activities.

18. An example is AM stereo, in which the FCC forbore from picking a standard; none ever emerged, because no market could develop in the absence of a standard, and no standard could develop in the absence of a market.

19. Recently, under the auspices of the Internet Society, the IETF has been establishing liaison with organizations such as the ITU. But sanctioning of the IETF and its standards by other formal standards bodies has not been a factor in the success of those standards; market acceptance has been the key issue.

20. In the case of the World Wide Web, a consortium of industry and academic partners has been organized at the Massachusetts Institute of Technology with the goal of furthering the standards for the Web. It is an attempt to create a neutral body especially organized to deal with the very rapid advances and strong industrial tensions present in the Web architecture. Whether it represents a cooperative complement to the IETF or an explicit rejection of the IETF processes remains to be seen.

21. A recent National Research Council (1995) study examined standards development in multiple industries. It concluded that the relatively decentralized, private-sector-led U.S. standards-setting process, while messy and chaotic, is generally the most effective way to set standards in a market economy.

22. Federal selection of standards for the government's own systems is a topic that has been considered by the Technology Policy Working Group. Its December 1995 draft report calls for a process that "minimizes the number of required standards for the Federal Government's purchasing of NII products and services, limiting them to those that relate to cross-agency interoperability" (TPWG, 1995b, p. i). It notes that "there is already existing government policy which covers the preference and advantages to government selection of voluntary standards (e.g., consensus standards). This policy is contained in OMB Circular No. A-119 Revised 10/20/93, 'Federal Participation in the Development and Use of Voluntary Standards'" (p. 14).

23. The interaction between pricing and architecture was the focus of a special interdisciplinary panel at the 1995 Telecommunications Policy Research Conference (September 30 to October 2, Solomons, Maryland). Entitled "Architecture and Economic Policy: Lessons from the Internet," the panel featured papers prepared jointly by network technologists and economists, which were revised for subsequent publication in the journal Telecommunications Policy.
