4
Technology Options and Capabilities: What Does What, How

The market cannot explore a space that technology has precluded.

—David D. Clark, Massachusetts Institute of Technology

The technology landscape of today is marked by rapid evolution, and advances in technology intertwine with evolution in the communications industry itself. This chapter offers an overall perspective on the drivers of change and on the capabilities of and trends in communications-related technology, with the goal of putting into perspective many of the specific developments of today. It also presents some of the steering committee's assessments of specific technology trends. Chapter 5 contains complementary statistics and an analysis of the deployment of these technologies.

The Changing Nature Of Technology And Communications

Communications services have been transformed by a long series of innovations, including copper wire, coaxial cable, microwave transmission, and optical fiber. Each has expanded the available bandwidth, and therefore carrying capacity, at reduced unit cost. Over many years the cost per voice channel has fallen annually by about 10 percent. With the huge increase in carrying capacity enabled by fiber and optoelectronics, the potential for cost reduction has become even greater—but only if the newly available bandwidth can be utilized profitably.



During the same period, silicon integrated-circuit technology has allowed the price of computer performance to fall at a rate of 15 to 25 percent per year. This rapid progress in reducing the cost of computing is what makes computer technology the means to exploit the great growth in communications carrying capacity made possible by satellites and fiber technology. Advances in the power of the general-purpose processor are easy to see and well understood; workstation speed has more than doubled every 2 years, and memory sizes have grown at equivalent rates. In the more specialized areas of communications, this increased processing power can be exploited in a number of ways—to achieve a simple increase in speed, for example, and to make computers easier to use (simplicity and natural logic in the user interface require very complex processes in software and hardware). Continued rapid progress assures continued advances in both function and usability. But increased processing power can also often be used to greater advantage to increase flexibility and generality, attributes that are key to much of the ongoing transformation of communications technology and thus the communications industry itself.

Three specific trends relating to increased flexibility and generality are relevant to the steering committee's assessment: the increasing use of software rather than hardware for implementation of functions, the increasing modularity of design, and the increasing ability to process and transform the data being transported within the communications system.

Implementation of functions in software can reduce cost and permit modification of a function by upgrading the program. Costs can also be reduced by replacing a number of special-purpose or low-level hardware elements with a single integrated processor, which then performs all the same tasks as the multiple hardware elements by executing a program. Continuous cost reduction is central to the current pace of technology advance; it permits rapid technology rollover and restructuring of the hardware base. Implementation in software, however, has the added advantage of permitting the functions of a device to be changed after manufacture, to correct "bugs" or meet evolving user needs, whereas hardware, once manufactured, is frozen. The flexibility to change a product during its lifetime is critical, because of rapidly changing user requirements driven by new applications.

One way to build more generality into a system is to make the design more modular, dividing the system into separable elements that implement different parts of a function. These modules can then be used in different ways to create new services. Adding modularity requires the implementation of interfaces between the elements of the system, and this step adds cost, especially if the modules are physical hardware elements in the design, but to some extent even if the modules are software modules and the interfaces are subroutine calls.
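These two ideas, implementing a function in software and separating modules with defined interfaces, can be made concrete with a short sketch. The following Python fragment is purely illustrative (the device and coder names are invented): because the coding function sits behind a module interface and is realized in software, it can be corrected or upgraded after the device is in the field, whereas a hardware realization would be frozen at manufacture.

    from abc import ABC, abstractmethod

    class LineCoder(ABC):
        """Interface (the module boundary) for encoding bytes onto a line."""
        @abstractmethod
        def encode(self, payload: bytes) -> bytes: ...

    class SimpleCoder(LineCoder):
        """Version 1, shipped with the device."""
        def encode(self, payload: bytes) -> bytes:
            return payload                      # no transformation

    class ParityCoder(LineCoder):
        """Version 2, a field upgrade that appends a one-byte checksum."""
        def encode(self, payload: bytes) -> bytes:
            return payload + bytes([sum(payload) % 256])

    class Device:
        """A device whose behavior is defined by replaceable software."""
        def __init__(self, coder: LineCoder):
            self.coder = coder
        def upgrade(self, new_coder: LineCoder):
            self.coder = new_coder              # change function after manufacture
        def send(self, payload: bytes) -> bytes:
            return self.coder.encode(payload)

    dev = Device(SimpleCoder())
    print(dev.send(b"hello"))                   # b'hello'
    dev.upgrade(ParityCoder())                  # software upgrade in the field
    print(dev.send(b"hello"))                   # b'hello' plus a checksum byte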

However, even though modularity and interfaces do add cost, current trends in design and implementation suggest (compellingly) that modularity, if properly done, has a powerful justification in the generality, flexibility, and potential for growth of the resulting system.

A third key trend resulting from increasing processing power is the ability to process and transform data carried in the communications system. One consequence is increased interoperation among previously incompatible systems. For example, the broadcast video formats in different parts of the world, NTSC, PAL, and SECAM, were a real barrier to interchange of video content until, as is now the case, it was possible to build economical format converters. Today, digital and analog telephone systems interwork, as do digital telephone systems with different voice codings, such as the several emerging digital cellular systems. In the future, as digital television broadcasting is deployed, digital and analog systems for television broadcasting will interwork.

These trends act in combination. For example, data transformations such as the real-time encoding and decoding of video streams or data encryption or compression are now migrating to software. Inexpensive digital signal processors permit the manufacture of $150 modems (today) that encode data at 28,800 bits per second (bps) for transmission down a phone line—an excellent example of the power of a processor chip and software to replace dedicated hardware.
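A few lines of code convey how such a transformation migrates into software. The sketch below is illustrative only; it uses general-purpose compression from Python's standard library rather than the modem or video codings named above, but it shows a general-purpose processor performing, in software, the kind of data transformation that once required dedicated hardware.

    import zlib

    payload = b"communications data " * 200       # a repetitive sample payload

    compressed = zlib.compress(payload)           # transform on the sending side
    restored   = zlib.decompress(compressed)      # inverse transform on receipt

    assert restored == payload
    print(len(payload), "bytes reduced to", len(compressed), "bytes")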

These three trends together lead to a communications world with increasing generality and flexibility and increasing options for interoperability, with additional interfaces permitting the reorganization of the communications infrastructure to offer new services and support new applications. In turn, the increasing modularity of communications technology has transformed the whole landscape of the business environment. Interfaces represent a technological form of unbundling, which permits new forms of competition and new business strategies.

How Trends In Technology Are Changing Communications Infrastructure And Services

The various effects of technology on the nature of communications discussed above have three specific consequences, which represent major factors that shape the future of the information infrastructure:

The separation of infrastructure facilities and service offerings,
The construction of services layered over other services, and
The tension between supporting mature and emerging applications.

Separation of Infrastructure Facilities and Service Offerings

In the past, there were two important communications services, telephony and television. Each of these services drove the development of a technology infrastructure to serve its needs, and both of these infrastructures are today prevalent and of great economic importance. Telephony and television have evolved considerably since their inception. The telephone system started with copper wires and mechanical switches; moved to modulating many calls on a wire, and then to digital representation of the speech channel and electronic switches, which together led to the current digital hierarchy; and next evolved to the use of fiber optics to carry the aggregated data and cellular wireless technology as an alternate access path. Television delivery (like radio before it) started as over-the-air broadcast but developed the cable infrastructure, and more recently consumer satellite dishes. For both of these services during their initial evolution, the service objective remained the same: specifically, the delivery of a telephone call or a television channel.

However, in both the telephone industry and the broadcast and cable industries, the trend now is to add modularity to the technology and to define explicit interfaces to the infrastructure that permit offering a wider range of services and applications over a common infrastructure. Perhaps the earliest significant example of this trend toward separation of infrastructure and service offerings is the selling of trunk circuits by the telephone industry. These circuits, such as T1 at 1.5 Mbps and most recently DS3 at 45 Mbps, were first conceived to carry aggregated voice. But they are also sold as a separate component, to carry either voice or data for private customers. This splitting out of the lower-level infrastructure facility by revealing and marketing the interfaces to the point-to-point circuits is the single critical change in the facilities infrastructure that has created the long-haul data network revolution. It is these trunks that permitted the construction of the Internet and the switched packet networks such as frame relay and switched multimegabit data service, and in the past, X.25 networks. These trunks permitted the construction of on-line information service networks and the private networks that today serve almost all of the major corporations.

Recent developments more clearly articulate the separation of service from infrastructure. The telephone industry's new technology approach, called asynchronous transfer mode (ATM), will offer much greater flexibility in service offerings: ATM can carry voice, provide private circuits at essentially any specified capacity (rather than at just 1.5 or 45 Mbps), and also support more advanced services in the data area.
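The source of that flexibility is that ATM carries all traffic in small, fixed-size cells. The toy sketch below, which omits the real ATM header format and adaptation layers, illustrates the basic move: any payload, whether a block of voice samples or a data burst for a private circuit, is chopped into fixed-size cells tagged with a circuit identifier and reassembled at the far end. (Real ATM cells are 53 bytes, with a 5-byte header and 48-byte payload.)

    CELL_PAYLOAD = 48

    def segment(vci: int, data: bytes):
        """Split a payload into fixed-size cells tagged with a circuit id."""
        cells = []
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD]
            chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")   # pad the final cell
            cells.append((vci, chunk))
        return cells

    def reassemble(cells, length: int) -> bytes:
        return b"".join(chunk for _, chunk in cells)[:length]

    voice = bytes(160)                  # e.g., 20 ms of 64-kbps voice samples
    data  = b"an arbitrary data burst for a private circuit" * 10

    for vci, payload in [(5, voice), (6, data)]:
        cells = segment(vci, payload)
        assert reassemble(cells, len(payload)) == payload
        print(f"circuit {vci}: {len(payload)} bytes carried in {len(cells)} cells")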

Cable technology is following a similar path. While the early cable systems were practical only for the transport of video, their recent evolution to hybrid fiber coaxial cable allows the infrastructure to be used for a range of services, including telephony and data transfer.

The separation of facilities from service offerings makes good sense in the context of current business trends (see Chapters 2 and 3). The regulatory opening of more communications markets to competition, for example, puts a premium on infrastructure that facilitates rapid entry into such markets. Video and telephony providers alike wish to be poised to enter each other's business, as well as to participate in new businesses that may emerge. Also, as pointed out in previous chapters, the steering committee has concluded that the NII will not serve one "killer app" primarily, but rather will enable a wide range of objectives and applications arising from a broad set of business domains and societal functions. Many of these applications do not yet exist, and thus a critical objective for the NII is to be open to the development and deployment of new applications.

Building Services on Each Other

Just as the separation of infrastructure facilities from services permits the construction of a range of services on top of a common infrastructure, so, too, can one service be constructed by building it on top of another. This layered approach to constructing services is a consequence of the trends discussed above—increased processing power, and more modular design with defined interfaces to basic infrastructure facilities. In fact, a wide and sometimes surprising range of service offerings is being created by building one service on top of another.

The teaching of networking often involves a simple, layered model of technology, in which infrastructure components are installed and then used as a foundation for next-level services, and so on in an orderly manner. Current reality is much messier and much less well structured. Neither the technology nor the business relationships show a simple order, but instead reflect a very dynamic and creative building of services on top of each other, with the players both competing and cooperating to build the eventual service sold to the consumer.

This layered building of services, of course, has been going on for some time, as noted above in the discussion of telephone trunk circuit sales. There are many other examples of service overlays, some of which are quite unexpected and at times confusing. With the expenditure of enough ever-cheaper computing cycles, one kind of service can be made into an infrastructure for another in quite creative ways. For example, software is available that permits the creation of telephone connections (with some limitations) over the Internet.
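Mechanically, a service overlay is an act of encapsulation: one service's data unit is simply carried as the payload of another, and the wrapping can be repeated to any depth. The sketch below is a minimal illustration with invented header fields, not a description of any real protocol.

    import json

    def encapsulate(outer_header: dict, inner_payload: bytes) -> bytes:
        """Wrap one service's data unit inside another service's envelope."""
        header = json.dumps(outer_header).encode()
        return len(header).to_bytes(2, "big") + header + inner_payload

    def decapsulate(packet: bytes):
        hdr_len = int.from_bytes(packet[:2], "big")
        header = json.loads(packet[2:2 + hdr_len])
        return header, packet[2 + hdr_len:]

    voice_frame = b"\x01\x02\x03\x04"          # a toy block of voice samples
    datagram = encapsulate({"service": "internet", "dst": "host-a"}, voice_frame)

    # The overlay can be stacked again: the datagram itself becomes a payload
    # of yet another transport service.
    carried = encapsulate({"service": "broadcast-transport"}, datagram)

    hdr, inner = decapsulate(carried)
    hdr2, recovered = decapsulate(inner)
    assert recovered == voice_frame
    print(hdr, hdr2, recovered)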

The data stream that drives the StarSight on-screen television guide is carried, for lack of any better transport service, in an otherwise unused portion (the vertical blanking interval) of the Public Broadcasting Service broadcast signal—an example of transforming, presumably with considerable processing power, an application-specific service (NTSC video; named after the standards-developing National Television System Committee) into a more general transport service. Perhaps the most confusing situation occurs when two services each can be built out of the other: the Internet can carry digital video, and video delivery facilities can be used to carry Internet service; frame relay service can be used to carry Internet packets, but at least one major provider of frame relay today uses Internet packets to carry frame relay packets.

Beneath these services and overlays of other services lie the physical facilities, such as the fiber trunks, the hybrid fiber coaxial cable and cable systems, the local telephone loops, and the satellites, as well as the switches that hook all of these components together. These are the building blocks on which all else must stand, and it is thus the technology and the economics of this sector that require detailed study and understanding.

The Tension Between Supporting Mature and Emerging Services

From an engineering point of view, the existing network infrastructure is still largely designed and constructed to achieve the objective of very cost-effective and high-penetration delivery of mature services, in particular telephony and video. While the separation of facilities from services has important business advantages, adding generality to the infrastructure, so that it can support a range of future applications, raises the critical concern of increased costs for infrastructure.

The tension between cost-effective delivery of mature services and a general platform to support emerging applications is illustrated by the issue, often raised by participants in the NII 2000 project, of how much bandwidth should be provided to the residence, especially back-channel capacity from the residence into the network. A number of participants called for substantial back-channel bandwidth, often to avoid precluding new applications.1 But the infrastructure facilities providers voice the real concern that adding back-channel capability in too large a quantity would add unacceptable cost to the infrastructure and could threaten the economics of their basic business, which requires considerable bandwidth to the home for video but only enough bandwidth from the home to support voice or low-bandwidth interactive control. They emphasize that new applications must prove themselves and that investment in new infrastructure capabilities can be undertaken only incrementally.

From their perspective, it is important to have enough capability to allow the market to explore new sectors, but they cannot be expected to invest fully until an application is shown to be viable. This stance may limit the rate of growth of new applications, but there seems no economical alternative. Research, as noted in Chapter 6, may illuminate ways to reduce investment requirements or enhance applications to accelerate return on investment.

The steering committee concluded that the resolution of the tension between supporting mature and emerging applications was a key factor in determining the shape of the future NII. Two conclusions emerged from the discussions and materials presented. First, current technology plans show a studied balance between a cost-reduced focus on provision of mature applications and provision of a general environment for innovation of new applications, and second, there is broad recognition that the Internet is the primary environment for such innovation.

Resolving The Tension: The Internet As An Example

The simple approach to separation of infrastructure from service involves defining an interface to the basic infrastructure facilities and then constructing on top of that interface both cost-reduced support for the mature services and general support for new applications. Thus, the telephone system provides interfaces directly to the high-speed trunks, and the television cable systems define an interface to the analog spectrum of the cable. However, to support emerging applications, the interface to the underlying infrastructure is not by itself sufficient. New applications should not be constructed directly on top of the technology-specific interfaces, because doing so would tie the applications too directly to one specific technology. For example, a very successful technology standard for local area data networking is Ethernet. But building an application directly on top of Ethernet interfaces locks in the application to that one technology and excludes alternatives such as telephone lines, wireless links, and so on.

What is needed is a service interface that is independent of underlying technology options, and also independent of specific applications. An earlier report from the Computer Science and Telecommunications Board, Realizing the Information Future (RTIF; CSTB, 1994b), advocated an open interface with these characteristics to support innovation of new network applications. The report called this interface the technology-independent bearer service and called the network that would result from providing this interface the Open Data Network, or ODN. This interface would normally be effected in software; it is an example of a general-purpose capability that would be implemented in a computer as the basic building block for higher-level services and applications.
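A minimal sketch, with invented class and function names, may help make the bearer-service idea concrete: applications are written against one general interface, and a small piece of software per technology maps that interface onto Ethernet, a telephone line, a wireless link, or whatever else is available. The application code never changes when the underlying technology does; only the adapter beneath the interface is replaced.

    from abc import ABC, abstractmethod

    class BearerService(ABC):
        """The general, technology-independent interface seen by applications."""
        @abstractmethod
        def send(self, destination: str, data: bytes) -> None: ...

    class EthernetBearer(BearerService):
        def send(self, destination: str, data: bytes) -> None:
            print(f"[ethernet] framing {len(data)} bytes for {destination}")

    class PhoneLineBearer(BearerService):
        def send(self, destination: str, data: bytes) -> None:
            print(f"[modem] modulating {len(data)} bytes for {destination}")

    def mail_application(bearer: BearerService):
        """An application written only against the bearer interface; it neither
        knows nor cares which technology carries its data."""
        bearer.send("mail.example.org", b"a short message")

    mail_application(EthernetBearer())
    mail_application(PhoneLineBearer())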

The architecture of the ODN was modeled somewhat on the architecture of the Internet, which has a form of the bearer service in its Internet protocol. However, RTIF was careful to discuss this critical interface in general terms and not prejudge the suitability of the Internet protocols to meet this need.

The Importance of the Internet

It would appear at this time that the Internet and its protocols represent the best approach for providing a general service for the support of emerging applications because of the effectiveness with which the Internet protocol serves as a bearer service and the overall architecture functions as an ODN. In the course of this project, the steering committee heard from a wide range of application developers in areas such as electronic commerce, information access, and business-to-business collaboration. In nearly all cases, the applications were based on one of two kinds of interfaces to the technology below. Either they were modeled on some mature service and used the existing service interface of that technology (such as video on demand over existing cable systems or fax over voice) or they were based on the features of the Internet. The steering committee heard repeatedly that the Internet standards were the basis on which new applications were being crafted, and even in cases in which the Internet was described as unsuitable, a careful exploration of the concerns usually suggested that the issue was not the standards themselves, but rather the existing public Internet as a delivery vehicle, with its current level of security, provisioning, and stability. The current volume of deployed devices using the Internet standards, together with the observed level of investment in Internet-related products and services, constitutes a unique foundation, one for which there is no alternative now or in any reasonable time frame. See Box 4.1.

Based on its assessment of industry trends, the steering committee thus concluded that the call for an open, technology-independent bearer service as a basis for emerging applications, as voiced in RTIF, was correct, and that a more concrete conclusion is now justified: the Internet standards are being widely used to enable new applications and are seen by a great majority of commercial players as the only viable option for an open, application-independent set of service interfaces at this time. For this reason, the steering committee further concluded that specific attention should be paid to ensuring the viability of the Internet, in terms of both enhancing the standards to meet evolving application needs and making sure that networks based on these interfaces are deployed and made widely available as an environment for the innovation of new applications. The key topics to consider, then, are (1) what aspects of the Internet have contributed to its apparent wide acceptance in the commercial world and (2) how the Internet will need to evolve and mature over the next decade to meet the growing needs of this sector.

BOX 4.1  Commercial Importance of the Internet: A Selection of Views

The commercialization of the Internet is already happening, with commercial users representing the largest user domain type registered in the United States. Participants in the January 1995 workshop and May 1995 forum commented on the present and future importance of the Internet for their conduct of business. Some focused on the barriers to commercial use, such as needs for guarantees of security and intellectual property rights and the difficulty novice users experience in using applications on the Internet. Others noted that commercial services available today can meet these needs, indicating there may be at least a temporary mismatch of perceptions between providers and users. In the longer term, the Internet's sheer ability to build connections was seen as a powerful draw.

The Internet is not about technology fundamentally. It is a social phenomenon that is basically the world's largest interconnected set of computers. That is what makes the Internet have all the energy behind it and what gives it the power of the marketplace and interoperability at all kinds of levels.

—Marty Tenenbaum, Enterprise Integration Technologies Corporation/CommerceNet

We want the ability to coordinate among the industries. Yet we need some push. We need some standardization to make interoperability, encryption, and a number of other things happen. We need culture change on the Internet from the information provider's standpoint, from the educator's standpoint, from the health care perspective. Our users need to better understand what we are trying to give them, and we need to better understand what they want, how they want it, and how they need to get it.

—Cynthia Braddon, The McGraw-Hill Companies

The Internet excites a variety of people, but there are … a lot of people out there that are not going to be excited about the Internet in the next 10 years. … How do we get lower-level software for the part of the population that is not capable or is not interested in … sophisticated stuff? … How do we get all this aimed at all segments of society?

—Joseph Donahue, Thomson Consumer Electronics Inc.

Not a lot of people, including me, had the nerve to raise our hands this morning when Bob Lucky asked the perhaps inappropriately phrased question about whether the Internet is the NII or not. But the truth of the matter is, if you have the Internet, you do not need much else to have the NII.

—Andrew Lippman, Massachusetts Institute of Technology

BOX 4.2  The Architecture of the Internet

Most users of the Internet see it through experiencing its applications, most obviously the World Wide Web, but also the ubiquitous electronic mail, remote login, file transfer, and other applications. But from the perspective of the Internet designers, the essence of the Internet is not the applications, but rather the more basic functionality that makes the Internet a suitable place for those applications to operate. The structure of the Internet reflects two major design objectives: first, to support as many sorts of applications as possible, and second, to operate over as many sorts of network infrastructure as possible.

Although this may sound rather odd at first hearing, the Internet is more like a computer than a traditional network. The point is that most computers are designed to be general-purpose devices, capable of running a wide variety of applications—spreadsheets and word processors, databases and process control, and so on. Similarly, the Internet was designed to support a wide range of applications, including those that had not been conceived at the time the design was undertaken. The recent explosion of the World Wide Web, clearly not envisioned when the Internet was born, is a measure of the success of this ambition. In contrast, most traditional networks were designed to support a specific application, in particular telephony or video delivery.

At the same time that the Internet was intended to support a range of applications, it was also designed to utilize a range of network technologies: LANs, telephone trunks, wireless links, and so on. Over the last 20 years, it has evolved to operate over different underlying technologies with a wide range of speeds, distances, and error rates. It is organized to permit this adaptability as follows. It provides a set of basic functions, which all applications then use to obtain their network service. To permit the use of as wide a range of technology as possible, these services are defined not in terms of the detailed features of one technology, but rather in a very general way that does not depend on specifics such as bandwidth or latency. For each sort of infrastructure that is put into use, software is then written that translates the specific features of that infrastructure into the general, universal form of the service. The applications invoke that software, which in turn calls on the actual network technology. This ability to operate over different sorts of network infrastructure is a key to the success of the Internet and its protocols.

HFC, for example, is one possible infrastructure that might be used to extend the reach of the Internet to homes at higher speeds (see section in text below titled "Hybrid Fiber Coaxial Cable"). A number of products are now available that make it possible to carry the Internet protocols across this form of network technology. While the details of HFC differ from those for other network technologies, the Internet can operate in this context precisely because the details are hidden from the applications by the intervening software.

As discussed in the beginning of this chapter, the trend in the evolution of network infrastructure is that the infrastructure itself is separate from the services that are offered on it. The Internet is an example of this trend and an illustration of its power. In almost all cases, Internet service providers do not install separate infrastructure for their Internet service. They make use of existing facilities, such as long-distance trunks installed to support voice, "dark" fibers in metropolitan areas, copper pairs and cable systems for residential access, and so on. The only hardware items normally purchased and installed by an Internet service provider are the devices that connect the different infrastructures together, the routers, and, for providers that support dial-up access, the equipment to terminate these telephone calls. Most of the expenses for an Internet service provider are the costs of renting the underlying facilities and the costs associated with supporting the customer.

Box 4.2 discusses the organization of the protocols of the Internet and explains how its design allows for a general service to be constructed that takes advantage of the cost-reduced infrastructure that is engineered for the delivery of mature applications. The section titled "The Internet," included below in this chapter, further clarifies what the Internet really is and elaborates on some of its future directions.

The Coexistence of New and Mature Services

Experience with the Internet shows the power of a general bearer service as an environment in which new applications can come into existence. The sudden advent of the World Wide Web makes this point emphatically. However, advocacy of a network with bearer-service interface architecture, as called for in RTIF, raised in some industry players the concern that a more inclusive approach was being suggested, that is, that all communications services, including mature services such as telephony and television, should be migrated to this general architecture as an objective of the NII. This objective was not what was advocated in RTIF or in the material gathered for this project. Nor is it advocated in this report.

Voice (telephony) is a case in point. The standards specific to that application of course predate the Internet standards. The infrastructure and interface standards supporting the telephone system are mature and stable and have been engineered to provide very cost-effective delivery of the service. Meanwhile, the growth in technology to support voice communications over the Internet has caused some speculation about the transfer of voice traffic from the public switched telephone network to the Internet. There is no reason to migrate this service to a network such as the Internet that might be less cost-effective for voice, since it was not optimized for that purpose. The recent fad of voice communications and

Zilles and Richard Cohn observes, in some cases an author of information may want to specify its format totally—controlling the layout and appearance of each page exactly. A document format standard such as Portable Document Format can be used for this purpose. The Web's HTML represents a middle ground in which the creator and the viewer both have some control over the format. At the other end of the spectrum, database representations such as SQL (Structured Query Language) give the user maximum flexibility in reformatting and reprocessing information; they are concerned not with display formats but with capturing the semantics of the information so that programs can process it. The current trend, consistent with the approach taken in the Internet to deal with multiple standards, is for all of these to coexist, permitting the creator to impose as much or as little format control as is warranted for any particular document.

Looking at the Internet today, one sees tremendous innovation in these areas, just as there was in the past concerning the basic issues of bit transport. Examples of current standardization efforts include "name-spaces" and formats of information objects, protocols for electronic commerce, and a framework for managing multimedia conference sessions.

Over the next few years, this increased attention to higher-level information architecture issues will have an impact on the "inside" of the network, the routers and internal services. For example, some Internet service providers are planning to deploy computers with large disk arrays at central points within the Internet to store popular information (e.g., Web pages and related files such as images) close to the user, so that it can be delivered on demand without having to be fetched from across the globe. This sort of enhancement will increase the apparent responsiveness of the network and at the same time reduce the load on the wide-area trunks. It represents another example of how the increasing processing power that becomes available can be used to enhance the performance of the network, independent of advances in network infrastructure.
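The caching idea can be sketched in a few lines. The example below is hypothetical (the URL and the fetch routine are placeholders) and ignores the expiration and consistency checks a real cache would need; it simply shows repeat requests being answered locally instead of crossing the wide-area trunks again.

    cache = {}          # object name -> stored copy, kept close to the users

    def fetch_from_origin(url: str) -> bytes:
        print(f"  (fetching {url} across the wide-area network)")
        return b"<html>...page contents...</html>"

    def get(url: str) -> bytes:
        if url not in cache:                 # first request: go to the origin
            cache[url] = fetch_from_origin(url)
        else:
            print(f"  (served {url} from the local cache)")
        return cache[url]

    get("http://example.org/popular-page")   # crosses the wide area once
    get("http://example.org/popular-page")   # later requests stay local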

Open Interfaces and Open Standards

The Internet is perhaps an extreme example of a system that is open; in fact it is open in a number of ways. First, all of its standards and specifications are available for free, and without any restriction on use. The meetings at which standards are set are open to all.12 Second, one objective of the design of the standards is to make it as easy as possible for networks within the Internet—both public networks of Internet service providers and private networks of corporations, institutions, and individuals—to connect together. Thus the Internet is open to providers as well as users. Third, its internal structure is organized to be as open as possible to new applications. For example, some of its traditional features, such as the software that ensures ordered, reliable delivery of data (the protocol called TCP), are not mandatory, but can be bypassed if this better suits the needs of an application.13 This openness has made the Internet an environment conducive to the innovation of new applications.

Standards And Innovation In The Marketplace

It is important not to underestimate the total number of standards that collectively define the existing personal computer (PC) marketplace, the Internet, the telephone system, or the video delivery infrastructure. Standards describe low-level electrical and mechanical interfaces (e.g., the video plug on the back of a television or the Ethernet plug on the back of a computer). They define how external modules plug into a PC. They define the protocols, or agreements for interaction between computers connected to a common network. They define how functions are partitioned up among different parts of a system, as in the relationship between the television and the decoder now being defined by the FCC.14 They define the representation of information, in circumstances as diverse as the format of a television signal broadcast over the air and a Web page delivered over the Internet.

Corresponding to the volume of standards is the range of standards-setting activity. The United States has more than 400 private standards-developing organizations. Most are organized around a given industry, profession, or academic discipline. They include professional and technical societies, industry associations, and open-membership organizations. Among the most active U.S. information technology standards developers are the Institute of Electrical and Electronics Engineers, a professional society; the Information Technology Industry Council, which administers information processing standards development in Committee X3; and the Alliance for Telecommunications Industry Solutions (ATIS), coordinator of Committee T1 for telecommunication standards. Inputs from domestic standards activities by such organizations to international standards organizations (discussed below) are coordinated by the American National Standards Institute.

Standard interfaces allow new products related to information infrastructure to interoperate with each other and with existing products. They are therefore essential for new markets to develop. However, differences in how standards are set can be found among industries, and the approach to standards setting may affect progress in the multiindustry, multitechnology world of the NII. Consider historic differences between the telecommunications and computer industries.

The telecommunications industry has depended on a variety of national and international standards organizations. The International Telecommunications Union (ITU) is the primary telecommunications standards organization at the international level. As a United Nations treaty organization, its members are governments. The ITU sets thousands of standards for telecommunications services and equipment and for their interoperability across interfaces, as well as allocating radio-frequency spectrum. U.S. representation at the ITU is coordinated by the State Department, with participation by other public agencies and private industry. Since AT&T's divestiture, domestic U.S. telecommunications standards have generally been developed by industry-led formal standards organizations, such as ATIS, mentioned above. All of these organizations seek to produce formal "de jure" standards through a process of consensus in technical committees. Standards for some kinds of equipment and services must be approved by the FCC, sometimes adding months or years to the process.

To a much greater extent than the telecommunications industry, the modern computer industry has relied on the marketplace for creating "de facto" standards. In such a system, companies' fortunes depend to a significant extent on their ability either to create de facto standards or to supply products rapidly that conform to emerging de facto standards. In the computer industry, this trend has been driven by market competition, the rapid pace of computer technology change, and the "bandwagon" effect that leads consumers to adopt technologies that appear to be emerging as widespread standards rather than risk being left unable to interoperate with other users and systems.15

In practice, standards development exists within a continuum.16 Many computer industry standards are formalized in national and international standards organizations, such as the International Organization for Standardization—although these standards frequently lag the de facto processes of the market. There are also a multitude of "hybrid" systems. For example, it was the combination of market forces and formal standards committees that created many of the LAN standards currently in use. The emergence of standards consortia in both the computer and telecommunications industries reflects a compromise between the slower pace of consensus standards setting in formal organizations and the chaos of the market. If different consortia produce different standards, however, the fundamental problem of reaching a standard remains.17 No matter what the method, real and meaningful standards are essential to mass deployment of technology. Anything less is immaterial; standards mean volume!

Neither system of developing standards is perfect. De jure standards creation, while it may be more orderly, tends to be slow and is not immune to political pressures.

Furthermore, it may not result in a common standard—witness the regional differences in ISDN deployment. De facto standards, while they may emerge more rapidly, can result in a period of market chaos that delays mass deployment—and the marketplace also sometimes fails to produce a single, dominant standard.18 They may also lead to antitrust pressures, as experienced by both IBM and Microsoft. Both formal and market-driven standards setting can favor established major players, but for different reasons. In the case of de jure standards, the long delay in settling on standards allows the established major companies to adapt to the new technology capability and provides for formal representation of users as well as producers. De facto standards, on the other hand, require market power, and the established major players have it. Thus, they can benefit, but only if they can move rapidly.

The process of setting standards for the Internet is an interesting and important example of the balance of concerns. The Internet standards are somewhat between de jure and de facto. While the Internet standards body (the Internet Engineering Task Force; IETF) has not been endorsed by any national or international standards body,19 it operates with open membership and defined processes for setting standards, and it attempts to avoid domination by any industry sector or large market players. It has been praised for producing standards that work, because it looks for implementation experience as part of its review process. It has also been criticized for the slowness of its processes and for what some see as a somewhat disorderly approach to consensus building.

It is worth noting that some of the important standards in wide use over the Internet, including the standards for the World Wide Web, were not developed formally through the IETF process. Instead, they were proposed by other groups, discussed informally at IETF meetings, distributed on-line over the Internet, and then accepted by industry without further IETF action. Although this partial bypass of the formal IETF processes worries some observers, there can be no argument with the success of the World Wide Web in achieving rapid market penetration. It remains to be seen how IETF and informal Internet standards-setting processes will evolve and function in the future.20

There is strong private sector motivation for effective standards setting. Many participants at the forum and workshop said in effect that while the government should act to facilitate effective standards setting, it must not create roadblocks to such efforts by imposing government-dictated standards processes.21 Government use of private, voluntary standards in its own procurement, however, can be supportive.22

The process of setting standards is only one part of the delay in getting a new idea to market. If the idea requires software that is interoperable on a number of different computing platforms, the sequence of steps today to push a new innovation into the marketplace is to propose the new idea, have it discussed and accepted by a standards body, and then have it implemented by some party on all the relevant computers. A brute-force way of bypassing this process is for one single industry player to write the necessary code for all the relevant computers, as Netscape did for its Web browser. Netscape coded, and gave away in order to create the market, three versions of its Web browser—for Windows, for the Macintosh, and for Unix. One drawback to this approach is the large effort required by one industry player. An additional drawback is that the person responsible for each computer needs to retrieve and install the software package.

An idea now being proposed to avoid these drawbacks is to define a high-level computer language and a means for automatic distribution and execution of programs written in this language. Under this scheme, an interpreter for such a language would be installed on all the relevant computers. Once this step was taken, a new application could be written in this language and immediately transferred automatically to any prepared computer. This would permit a new innovation to be implemented exactly once and then deployed essentially instantly across the network to interested parties. An example of such a scheme is the proposal for the Java language from Sun Microsystems. Sun has implemented an interpreter for the Java language, called HotJava, which can be incorporated into almost any computer. Netscape and Microsoft have announced that they will put an interpreter for Java into their Web browsers. Such developments will permit new applications to be written in Java, downloaded over the Web, and executed by any computer running most of the popular browsers. This set of ideas, if successful, could have a substantial impact on the process of innovation, by speeding up evolution and reducing implementation costs in areas where it is relevant. David Messerschmitt of the University of California at Berkeley observed:

[O]ne of the key attributes of the NII should be dynamic application deployment; we should be able to deploy new applications on the infrastructure without the sort of community-of-interest problems and standardization problems associated, say, with users having to go out to their local software store and buy the appropriate applications. … [It] should be possible to deploy applications dynamically over the network itself, which basically means download the software descriptions of applications over the network.
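The essence of the scheme can be sketched briefly. The fragment below is illustrative only and is written in Python rather than Java; a real system of this kind, as the Java proposal recognizes, must also sandbox the downloaded program and restrict what it can touch, safety steps omitted here (running untrusted code this way would be unsafe).

    def receive_over_network() -> str:
        # Stand-in for downloading an applet-like program from a server.
        return (
            "def main(display):\n"
            "    display('hello from a downloaded application')\n"
        )

    def run_downloaded_program(source: str) -> None:
        namespace = {}
        exec(source, namespace)          # the locally installed interpreter runs it
        namespace["main"](print)         # hand it a minimal local capability

    # The application is written once; any computer with the interpreter
    # installed can fetch and run it immediately.
    run_downloaded_program(receive_over_network())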

This set of ideas could also permit the construction of new sorts of applications. For example, programs could be sent to remote machines to perform searches on information there. Thus, remote program execution could be a means to implement intelligent agents on the network. More speculatively, these ideas might change the distribution of function within the network. Clearly, remote interpretation of programs exploits the increased processing power of the PC of today. But some have speculated that this approach, by downloading on demand only the required software into a PC, could reduce the complexity and cost of that PC by eliminating some of the requirement for disk space and memory. This shift might help ameliorate the economic challenge of providing NII access for the less affluent. However, many are skeptical that this shift of processing power back from the end node and into the network, which runs counter to the recent history of the computer industry, will prove effective. Specifically, it would require increased bandwidth in the subscriber access path, which seems difficult to justify economically. There is no clear conclusion to this debate today.

Management And Control Of The Infrastructure

The aspect of the information infrastructure that is most exciting to the user is the service interface that defines what the network can provide to the user—how fast the network will operate, what sorts of services it can support, and so on. Perhaps understandably, these issues received the most attention at the workshop and forum. Equally important as networks grow bigger, however, are the issues of management and control. At the January 1995 workshop, Mahal Mohan of AT&T commented on the "tremendous number of numbering, switching, overall administration, [and] service-provider-to-service-provider compensation" details involved in supporting network-based services, details, he lamented, that "tend to get overlooked in just counting out the bandwidth of what is coming to the home and who owns it."

The issues of control and management for the Internet are particularly instructive. The Internet grew with a very decentralized model of control. There is no central point of oversight or administration. This model was part of the early success of the Internet; it allowed it to grow in a very autonomous manner. However, as the Internet grows larger and, at the same time, expectations for the quality and stability of the service increase, there are those who believe that changes are needed in the approach to Internet management and control. In the 1994 to 1995 period, there were a number of reports of errors (usually human errors) in the operation of the Internet routing protocols that have caused routing failures, so that information is misdirected and fails to reach its destination. The protocols and controls in place today may not be adequate to prevent these sorts of failures, which can only grow more common as the number of networks and humans involved in the Internet continues to grow.
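A toy example suggests why a single such error matters: each router forwards packets by looking up the destination in a table, so one wrong entry quietly misdirects all traffic for the addresses it covers. The prefixes and link names below are invented, and the prefix matching is far cruder than real Internet routing.

    forwarding_table = {
        "192.168.1": "link-to-provider-A",
        "192.168.2": "link-to-provider-B",
    }

    def forward(destination: str) -> str:
        prefix = destination.rsplit(".", 1)[0]       # crude prefix match
        return forwarding_table.get(prefix, "default-route")

    print(forward("192.168.2.7"))     # link-to-provider-B, as intended

    # A mistyped entry during maintenance misdirects an entire block of
    # addresses; the packets simply fail to reach their destination.
    forwarding_table["192.168.2"] = "link-to-provider-A"
    print(forward("192.168.2.7"))     # now misdelivered to provider A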

At the forum, Howard Frank of the Advanced Research Projects Agency expressed surprise at having heard "no discussion at all about the management structures, information management technologies, help services," and so on. He observed that progress in these areas is necessary so that "an internet can evolve from a rather chaotic, independent, loose collection of things to a managed system of the quality of the telephone system, which will allow us to go from a few percent to the 50 percent mark."

A white paper by David Clark of the Massachusetts Institute of Technology argues that the Internet and the infrastructure over which it runs, which were totally separated in the early days of the Internet, must now come together in some ways to facilitate better management and control. Talking about maturing services such as the Internet, Clark observes:

If the service proves mature, there will be a migration of the service "into" the infrastructure, so that it becomes more visible to the infrastructure providers and can be better managed and supported. This is what is happening with the Internet today. The Internet, which started as an overlay on top of point-to-point telephone circuits, is now becoming of commercial interest to the providers as a supported service. A key question for the future is how the Internet can better integrate itself into the underlying technology, so that it can become better managed and operated, without losing the fundamental flexibility that has made it succeed.

The central change will be in the area of network management, including issues such as accounting, fault detection and recovery, usage measurement, and so on. These issues have not been emphasized in many of the discussions to this point, and they deserve separate consideration in their own right. Milo Medin of @Home called for approaches such as distributed caching to "avoid vaporizing the Internet" due to excess traffic, yet there is no mechanism to encourage or enforce such prudent practice.

Any change in the overall approach to Internet management and control will require the development of an overall architecture or model for the new approach. This sort of major redesign is very difficult to contemplate in the Internet today, due to the large installed base of equipment and the bottom-up approach of the standards process. Whether and how to evolve the management and control model of the Internet thus represents a major point of concern for the future.

At the same time that some are calling for more regimented approaches to Internet management and control, others argue that the Internet style of control is preferable to the model that more closely derives from the traditions of the telephone company. The current ATM standards have been criticized by some for this reason.

The signaling and control systems being developed come from a heritage in the telecommunications industry that, while a reasonable model when it was first adopted, may be coming under increasing strain. The Internet community has developed a different technical approach to signaling and control, which may prove to be simpler while more robust. Instead of building complex mechanisms to ensure that any fault inside the network can be locally detected and corrected, the end nodes attached to the network periodically repeat their service requests. These periodic re-requests for service reinstate the needed information at any points inside the network that have lost track of this service request due to a transient failure. It is thus the case that there is still a significant set of technical disagreements and uncertainties about the best approach to network management and operation, both for the maturing Internet and for the next generation of technologies for the mature services such as voice and video.
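The periodic re-request approach described above (often called soft state) can be sketched in a few lines. The example below is hypothetical; the names and timings are invented, and a real implementation would run the refresh and expiration steps as ongoing background activities rather than as single calls.

    import time

    LIFETIME = 2.0                       # seconds a request survives unrefreshed
    state = {}                           # request id -> time of last refresh

    def refresh(request_id: str) -> None:
        """Called periodically by the end node to restate its service request."""
        state[request_id] = time.monotonic()

    def expire_old_state() -> None:
        """Run inside the network; quietly drops anything not recently refreshed."""
        now = time.monotonic()
        for rid in [r for r, t in state.items() if now - t > LIFETIME]:
            del state[rid]

    refresh("video-session-42")
    state.clear()                        # a transient failure loses the state
    refresh("video-session-42")          # the next periodic refresh reinstates it
    expire_old_state()
    print("video-session-42" in state)   # True: the request survived the failure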

An issue that is now receiving considerable attention is pricing and cost recovery in the Internet (see Chapter 3 for more discussion). In the past, the Internet has been paid for on a subscription or fixed-fee basis. There is now considerable debate as to whether some forms of usage-based charging are appropriate or necessary. The white paper by Robert Powers et al. notes the need to balance recovery of consumed service with the cost of implementing the billing mechanism. Since billing systems and their use have costs, it is an open question whether telephony-style billing is the right model. The answer no doubt depends on the type of applications that users will demand. Electronic mail places rather small demands on the network, but video conferencing is quite different. Pricing is relevant to network technologists not only because of issues relating to implementing accounting and billing systems, but also because pricing influences how (and how much) networks are used. These incentive effects interact with the network architecture to affect the performance as well as the profitability of networks.23

Notes

1. Andrew Lippman of the Massachusetts Institute of Technology observed that industry has not been good at predicting what applications will prevail and should thus engineer for the unexpected, and not focus on the specific application of today.

2. Different assumptions about market penetration and traffic load would obviously change these results.

3. Time Warner has demonstrated the feasibility of such a system in trials in Orlando.

4. The Wireless Information Networks Forum (WINForum), an industry association, petitioned the FCC in May 1995 to set aside additional spectrum for wireless local area networking at higher data rates than current options allow. The proposed allocation would support high-speed LANs at 20 Mbps, sufficient for multimedia applications. See WINForum (1995).

5. All digital cellular phones are expected to be able to be used with traditional analog facilities, which will remain in place for the foreseeable future. However, the added complexity of a device that can support multiple standards may add to its cost, especially in the short run.

6. Material for this section was taken from Henderson (1995) and from Reed (1995).

7. A further variation is whether fixed- or variable-rate video encoding is used. A variable-rate encoding can permit better representation of complex scenes within a program, at the cost of a higher peak rate. However, other sorts of digital information can be transmitted in the instants that the full channel capacity is not needed for the video. In one recent experiment, a 50-second commercial provided enough unused capacity to transmit 60 megabytes of data.

8. Ka band frequencies are also in high demand as companies rush to offer satellite video-conferencing and computer networking. Using this band, systems such as Hughes' Spaceway could offer "bandwidth on demand" where "today's 24-minute download from the Internet would take less than 4 seconds at a cost no higher than today's rates." See Cole (1995), OTA (1995), and Markoff (1995).

9. Some corporate networks should properly be thought of as being part of the public Internet, since they are directly connected and exchange packets. However, many corporate networks, if connected at all to the public Internet, exchange only specific and limited applications such as electronic mail.

10. This statement is not accurate in every instance; certain enhancements to the Internet such as support for audio and video will require upgrades to the router code itself.

11. The CSTB (1994b) report Realizing the Information Future includes an expanded discussion of the pressures and concerns now arising in the Internet standards process.

12. Internet standards are discussed and set by the Internet Engineering Task Force (IETF) and its working groups, which collectively meet three times a year. In addition, much work is carried out on the Internet itself. For information, see the Web page of the IETF at http://www.ietf.org.

13. This situation prevails with real-time transport of audio and video, where quick delivery is more important than 100 percent reliable delivery.

14. See the 1991 Cable Act (P.L. 98-549), amending 47 USC Sec. 544. It required that "[w]ithin one year after October 5, 1992, the Commission shall prescribe regulations which establish minimum technical standards relating to cable systems' technical operation and signal quality. The Commission shall update such standards periodically to reflect improvements in technology. A franchising authority may require as part of a franchise (including a modification, renewal, or transfer thereof) provisions for the enforcement of the standards prescribed under this subsection. A franchising authority may apply to the Commission for a waiver to impose standards that are more stringent than the standards prescribed by the Commission under this subsection." These efforts have been codified at the FCC as Section 15.115 "TV interface devices, including cable system terminal devices" and Section 68.110 "Compatibility of the telephone network and terminal equipment."

15. IBM System 360 and Microsoft MS-DOS and Windows are the classic examples.

16. Government-mandated regulations and procurement specifications constitute a third category of standards. Agencies at all levels of government set regulatory standards for products and processes in order to protect health, safety, and the environment. They also produce specifications for public procurement of goods and services. The Federal Register regularly publishes requests for comments on standards proposed by federal agencies. Some of these are developed by agencies, while others originate as voluntary standards set in the private sector and are adopted by reference in the text of regulations and specifications.

17. A very broad consortium, known as the Information Infrastructure Standards Panel (IISP) and spearheaded by the American National Standards Institute, has attempted since mid-1994 to bring together a large number and variety of organizations and entities concerned with standards relating to the national and global information infrastructure. In late November 1995, the IISP issued a list of 35 "standards needs," ranging across such areas as reliability, quality of service, provision of protections (e.g., security), specific types of interfaces, and data formatting (see Lefkin, 1995). It is premature to judge the outcome of this effort, although anecdotal reports from some parties familiar with it have noted the difficulty in cross-industry forums of achieving results with sufficient focus and specificity to constitute an advance from the basis in disparate, separate standards-developing activities.

18. An example is AM stereo, in which the FCC forbore from picking a standard; none ever emerged because no market could develop in the absence of a standard, and no standard could develop in the absence of a market.

19. Recently, through the auspices of the Internet Society, the IETF has been establishing liaison with organizations such as the ITU. But sanctioning of the IETF and its standards by other formal standards bodies has not been a factor in the success of those standards. Market acceptance has been the key issue.

20. In the case of the World Wide Web, a consortium of industry and academic partners has been organized at the Massachusetts Institute of Technology with the goal of furthering the standards for the Web. It is an attempt to create a neutral body especially organized to deal with the very rapid advances and strong industrial tensions present in the Web architecture. Whether it represents a cooperative complement to the IETF or an explicit rejection of the IETF processes remains to be seen.

21. A recent National Research Council (1995) study examined standards development in multiple industries. It concluded that the relatively decentralized, private-sector-led U.S. standards-setting process, while messy and chaotic, is generally the most effective way to set standards in a market economy.

22. Federal selection of standards for the government's own systems is a topic that has been considered by the Technology Policy Working Group. Its December 1995 draft report calls for a process that "minimizes the number of required standards for the Federal Government's purchasing of NII products and services, limiting them to those that relate to cross-agency interoperability" (TPWG, 1995b, p. i). It notes that "there is already existing government policy which covers the preference and advantages to government selection of voluntary standards (e.g., consensus standards). This policy is contained in OMB Circular No. A-119 Revised 10/20/93, 'Federal Participation in the Development and Use of Voluntary Standards'" (p. 14).

23. The interaction between pricing and architecture was the focus of a special interdisciplinary panel at the 1995 Telecommunications Policy Research Conference (September 30—October 2, Solomons, Maryland). Entitled "Architecture and Economic Policy: Lessons from the Internet," the panel featured papers prepared jointly by network technologists and economists, which were revised for spring 1995 publication in the journal Telecommunications Policy.