
The Unpredictable Certainty: White Papers (1997)

Chapter: The Fiber-Optic Challenge of Information Infrastructure

Suggested Citation:"The Fiber-Optic Challenge of Information Infrastructure." National Research Council. 1997. The Unpredictable Certainty: White Papers. Washington, DC: The National Academies Press. doi: 10.17226/6062.

32
The Fiber-Optic Challenge of Information Infrastructures

P.E. Green, Jr.
IBM T.J. Watson Research Center

This paper discusses the role of optical fiber as the one physical medium upon which it will be possible to base national and global infrastructures that can handle the growing demand for bandwidth to the desktop in a post-2000 developed society.

A number of individual applications already demand a large bit rate per user, such as supercomputer interconnection, remote site backup for large computer centers, and digital video production and distribution, but these remain isolated niches today. The best gauge of the need for the infrastructure to supply large amounts of bandwidth to individual users is probably to be found in the phenomena associated with the use of the Internet for graphics-based or multimedia applications.

Just as the use of noncommunicating computers was made much easier by the emergence of the icon- and mouse-based graphical user interfaces of the Macintosh, Windows, and OS/2, the same thing can be observed for communicating computers with the World Wide Web. The modality in which most users want to interact with distributed processing capability (measured in millions of instructions per second [MIPs]) is the same as it has always been with local MIPs: they want to point, click, and have an instant response. They will in fact want to have a response time from some remote source on any future information infrastructure that is a negligible excess over the basic propagation time between them and the remote resource. They will want the infrastructure to be not only widebanded for quick access to complex objects (which are evolving already from just still graphics to include voice and video) but also to be symmetric, so that any user can become the center of his or her own communicating community. This need for an "any-to-any" infrastructure, as contrasted to the one-way or highly asymmetrical character of most of our wideband infrastructure today (cable and broadcast), is thought by many political leaders to be the key to optimizing the use of communication technology for the public good.

Thus, a dim outline of many of the needs that the information infrastructure of the future must satisfy can be discerned in the emerging set of high-bandwidth usage modes of the Internet today [1], particularly the Web. The picture that emerges from examining what is happening in the Web is most instructive. Figure 1 shows the recent and projected growth of Web traffic per unit time per user assuming the present usage patterns, which include almost no voice, video clips, or high response speed applications such as point-and-shoot games or interactive CAD simulations. As these evolve, they could exacerbate the already considerable bit rate demand per user, which Figure 1 shows as a factor of 8 per year. If the extrapolations in Figure 1 are correct, this means that in the decade to 2005, the portion of the available communication infrastructure devoted to descendants of the Web must undergo a capacity growth of about 10^9 in order to keep up with demand.
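As a back-of-the-envelope check on this extrapolation (a sketch only, using the growth rate stated above), compounding a factor of 8 per year over the ten years to 2005 gives:

```python
# Compound the observed ~8x/year Web traffic growth over the decade to 2005.
growth_per_year = 8
years = 10
total_growth = growth_per_year ** years
print(f"total growth: {total_growth:.1e}")  # about 1.1e9, i.e. the ~10^9 cited
```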

There is only one physical transmission technology capable of supporting such growth: optical fiber. Fortunately for the prospects of an infrastructure that will provide society what it needs, fiber has been going into the ground, on utility poles, within buildings, and under the oceans at a rapid rate. The installation rate has been over 4,000 miles per day for some years in the continental United States alone, so that by now over 10 million miles of installed fiber exist here. Even more fortunately, each fiber has a usable bandwidth of some 25,000 GHz, roughly 1,000 times the usable radio spectrum on planet Earth, and quite enough to handle all the phone calls in the U.S. telephone system at its busiest. While this gigantic capacity is underused by at least a factor of 10,000 in today's practice, which is based on time division multiplexing, the technical means are rapidly evolving to open up the full fiber bandwidth. This is the all-optical networking technology, based on dense wavelength division, in which different channels travel on different "colors of light."

Figure 1
Predicted World Wide Web bandwidth demand.

SOURCE: Data courtesy of the Internet Society.

Figure 2
The "last mile" bandwidth bottleneck.

So, why isn't it true that we already have the physical basis in place over which to send the traffic of the future? Most of the answer is summarized in Figure 2 [2]. All the communication resources we have been installing seem to be improving in capacity by roughly a factor of 1.5 per year, totally out of scale with the 8-times-per-year growth of demand shown in Figure 1. The top curve of Figure 2 shows the capability of desktop computers to absorb and emit data into and out of the buses that connect them to the external world [3]. The next line shows local area network capacity as it has evolved. The third one shows the evolution of high-end access to the telco backbone that allows users at one location connectivity to users elsewhere outside the local LAN environment. The capacity of this third curve has been available only to the most affluent corporations and universities, those that can afford T1, T3, or SONET connections to their premises.


While all three of these capacities are evolving at the rate of only a factor of 1.5 per year, they represent really substantial bit-rate numbers. Current Web users who can afford 10 Mb/s LANs and T-carrier connections into the backbone experience little response time frustration. However, the situation of most of us is represented more accurately by the bottom curve, which shows the data rate available between our desktop and the backbone over the infamous "last mile" using telco copper connections with either modems or ISDN. There is a 10^4 performance deficiency between the connectivity available between local and long-haul resources and the internal performance of both these resources.

If one compares the rate of growth of Web traffic in Figure 1 with the data of Figure 2, it is clear that there is an acute need to bridge the 10^4 gap of the last mile with fiber and inevitably to increase the bandwidths of the backbone also, probably at a greater rate than the traditional factor of 1.5 per year.
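To make the consequences of this gap concrete, the following sketch compares transfer times for a hypothetical 10-megabyte multimedia object over representative last-mile and LAN links; the object size and the particular link rates are illustrative assumptions, not figures from the text:

```python
# Transfer time for a hypothetical 10 MB object over representative links.
object_bits = 10 * 8 * 10**6  # 10 megabytes expressed in bits

links_bps = {
    "28.8 kb/s modem": 28_800,
    "128 kb/s ISDN": 128_000,
    "10 Mb/s LAN": 10_000_000,
}

transfer_seconds = {name: object_bits / rate for name, rate in links_bps.items()}
for name, seconds in transfer_seconds.items():
    print(f"{name}: {seconds:,.0f} s")
```

The roughly three-orders-of-magnitude spread in waiting time is the user-visible face of the last-mile bottleneck.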

As for bridging the gap between the desktop and the telco backbone, the proposed solution for years now has been "fiber to the home" [4], expressing the notion that it must pay for itself at the consumer level. The alternative of coaxial cable to the premises, while having up to a gigahertz of capacity, is proving an expensive and nonexpandable way to future-proof the last mile against the kind of bandwidth demands suggested by Figure 1, and the architectures used have assumed either unidirectional service or highly asymmetrical service. What is clearly needed is fiber, probably introduced first in time-division mode, and then, as demand builds up, supporting a migration to wavelength division (all-optical).

Figure 3 shows the rate at which fiber to the premises ("home") has been happening [5] in the United States. The limited but rapidly growing amount of fiber that is reaching all the way to user premises today is mostly to serve businesses. The overall process is seen to be quite slow; essentially nothing very widespread will happen during the next 5 to 7 years to serve the average citizen. However, the bandwidth demand grows daily. Meanwhile, all-optical networks are beginning to migrate off the laboratory bench and into real service in small niches.

What Figure 3 shows is the steady reduction of the number of homes that, on the average, lie within the area surrounding the nearest fiber end. In 1984, when fiber was used only between central offices (COs), this figure was the average number of homes or offices served by such a CO. As the carriers [6], cable companies [7], and competitive local access providers [8] found great economies in replacing copper with fiber outward from their COs and head-ends, the number decreased. A linear extrapolation down to one residence per fiber end predicts that 10 percent of U.S. homes will be reached by fiber by about 2005, at best. In Japan, it is quite possible that a strong national effort will be launched that will leapfrog this lengthy process using large government subsidies [9].

During the coming decade, several things will happen, in addition to ever-increasing end-user pressure for more bandwidth to the desktop. Competition between telcos, cable companies, and competitive access providers may or may not accelerate the extrapolated trend shown in Figure 3. Advances in low-cost optoelectronic technology, some of them based on mass production by lithography, could also accelerate the trend, because analyses of costs of fiber to the home consistently show a large fraction of the cost to lie in the set-top box, splitters, powering [10], and, in the case of wavelength division multiplexing (WDM) approaches, multiwavelength or wavelength-tunable sources and receivers. It is widely felt that the price of the set-top box itself will have to be below $500 for success in the marketplace. This is probably true, whether the "set-top box" is really a box sitting atop a TV set or a feature card within a desktop computer. By 2005 it should become quite clear whether the TV set will be growing keyboards, hard disks, and CPUs to take over the PC, whether the PC will be growing video windows to take over the TV set, or whether both will coexist indefinitely and separately. In any case, the bottleneck to these evolutions will increasingly be the availability by means of fiber of high bit rates between the premises and the backbone, plus a backbone bandwidth growth rate that is itself probably inadequate today.

Meantime, looking ahead to the increasing availability of fiber paths and the customers who need them to serve their high-bandwidth needs, the all-optical networking community is hard at work trying to open up the 25,000 GHz of fiber bandwidth to convenient and economical access by end users. Already, the telcos are using four-wavelength WDM in field tests of undersea links [11]. IBM has recently made a number of commercial installations [12] of 20-wavelength WDM links for achieving fiber rental cost savings for some of its large customers who have remote computer site backup requirements. The rationale behind both these commercialization efforts involves not only getting more bandwidth out of existing fiber, but also making the installation "multiprotocol" or "future-proof" by taking advantage of the fact that each wavelength can carry an arbitrary bit rate and framing convention format, or even analog formats, up to some maximum speed set by the losses on the link.

Figure 3
Predicted rate at which fiber reaches user premises.

Figure 4
Three wavelength division architectures.

These successful realizations of simple multiwavelength links represent the simplest case of the three different kinds of all-optical systems, shown in Figure 4. In addition to the two-station WDM link (with multiple ports per station), the figure shows the two forms taken by full networks, structures in which there are many stations (nodes), with perhaps only one or a few ports per node.

The second type, the broadcast and select network, usually works by assigning to the transmit side of each node in the network a fixed optical frequency, merging all the transmitted signals at the center of the network in an optical star coupler, and then broadcasting the merge to the receive sides of all nodes. The entire inner structure, consisting of fiber strands and the star coupler, is completely passive and unpowered. By means of a suitable protocol, when a node wants to talk to another (either by setting up a fixed lightpath "circuit" or by exchanging packets), the latter's receiver tunes to the former's transmit wavelength and vice versa. Broadcast and select networks have been prototyped and, while still considered not quite in the commercialization cost range, have been used in live application situations, for digital video distribution [13] and for supercomputer interconnection at rates of 1 gigabit per second [14].
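The tuning protocol just described can be sketched in a few lines; the node names and wavelength assignments here are purely illustrative:

```python
# Broadcast-and-select sketch: each node transmits on its own fixed
# wavelength; to converse, each party tunes its receiver to the other's
# transmit wavelength. Node names and assignments are illustrative.
tx_wavelength = {node: index for index, node in enumerate(["A", "B", "C", "D"])}

def lightpath(sender, receiver):
    """Return the (forward, return) wavelengths of a two-way conversation."""
    # The receiver tunes to the sender's fixed wavelength, and vice versa.
    return tx_wavelength[sender], tx_wavelength[receiver]

print(lightpath("A", "C"))  # (0, 2)
```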

Aside from high cost, which is currently a problem with all WDM systems, there are two other things wrong with broadcast and select networks. First, the power from each transmitter, being broadcast to all receivers, is mostly wasted on receivers that do not use it. Second, the number of nodes the network can have can be no larger than the size of the wavelength pool, the number of resolvable wavelengths. Today, even though there are 25,000 GHz of fiber capacity waiting to be tapped, the wavelength resolving technology is rather crude, allowing systems with only up to about 100 wavelengths to be built so far [15]. The problems of both cost and number of wavelengths are gradually being solved, often by the imaginative use of the same tool that brought cost reductions to electronics two decades ago: lithography.

Clearly, a network architecture that allows only 100 nodes does not constitute a networking revolution; some means must be provided for achieving scalability by using each wavelength many places in the network at the same time. Wavelength routing accomplishes this, and also avoids wastage of transmitted power, by channeling the energy transmitted by each node at each wavelength along a restricted path to the receiver instead of letting it spread over the entire network, as with the broadcast and select architecture. As the name "wavelength routing" implies, at each intermediate node between the end nodes, light coming in on one port at a given wavelength gets routed out of one and only one port.

The components to build broadcast and select networks have been available on the street for 4 years, but optical wavelength routers are still a reality only in the laboratory. A large step toward practical wavelength routing networks was recently demonstrated by Bellcore [16].

The ultimate capacity of optical networking is enormous, as shown by Figure 5, and is especially great with wavelength routing (Figure 6). Figure 5 shows how one might divide the 25,000 GHz into many low-bit-rate connections or a smaller number of higher-bit-rate connections. For example, in principle one could carry 10,000 uncompressed 1 Gb/s HDTV channels on each fiber. The figure also shows that erbium amplifiers, needed for long distances, narrow down the 25,000 GHz figure to about 5,000 GHz, and also that the commercially available tunable optical receiver technology is capable of resolving no more than about 80 channels.
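The capacity arithmetic behind Figure 5 can be illustrated with a small sketch; the assumption of roughly 2.5 GHz of optical spectrum per 1 Gb/s of payload is inferred from the 10,000-channel HDTV example and is a hypothetical simplification, not a figure from the text:

```python
# How many channels of a given bit rate fit in the 25,000 GHz fiber band,
# assuming (hypothetically) ~2.5 GHz of optical spectrum per 1 Gb/s of
# payload, consistent with the 10,000-channel HDTV example.
FIBER_BAND_GHZ = 25_000
GHZ_PER_GBPS = 2.5

def channel_count(rate_gbps):
    return int(FIBER_BAND_GHZ / (GHZ_PER_GBPS * rate_gbps))

for rate in (0.001, 0.155, 1.0):  # 1 Mb/s, 155 Mb/s, 1 Gb/s
    print(f"{rate} Gb/s channels: {channel_count(rate):,}")
```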

With broadcast and select networks the number of supportable connections is equal to the number of available wavelengths in the pool of wavelengths. However, with wavelength routing, the number of supportable connections is the available number of wavelengths multiplied by a wavelength reuse factor [17] that grows with the topological connectedness of the network, as shown in Figure 6. For example, for a 1,000-node network of nodes with a number of ports (the degree) equal to four, the reuse factor is around 50, meaning that with 100 wavelengths, there could, in principle, be five connections supportable for each of the 1,000 nodes.
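The reuse arithmetic in this example can be written out directly, using only the figures given in the text:

```python
# Wavelength reuse arithmetic for the 1,000-node, degree-4 example.
wavelengths = 100
reuse_factor = 50   # stated value for a degree-4, 1,000-node network
nodes = 1_000

connections_per_node = wavelengths * reuse_factor / nodes
print(connections_per_node)  # 5.0, the "five connections per node" in the text
```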

As far as the end user is concerned, there is sometimes a preference for circuit switching and sometimes for packet switching. The former provides protocol transparency during the data transfer interval, and the latter provides concurrency (many apparently simultaneous data flows over the same physical port, by the use of time-slicing). In both cases, very large bit rates are possible without the electronics needing to handle traffic bits from extraneous nodes other than the communicating partners.

The very real progress that has been made to date in all-optical networking owes a great deal to the foresight of government sponsors of research and development the world over. The three big players have been the Ministry of Posts and Telecommunications (MPT) in Japan, the European Economic Community (EEC), and the U.S. Advanced Research Projects Agency (ARPA). The EEC's programs, under RACE-1 and RACE-2 (Research and development in Advanced Communications technologies in Europe), have now been superseded by ACTS (Advanced Communications Technologies and Services).

In 1992, ARPA initiated three consortia aimed at system-level solutions, and all three have been successful. The Optical Networking Technology Consortium, a group of some 10 organizations led by Bellcore, has demonstrated an operating wavelength routing network using acousto-optic filters as wavelength routers. The All-Optical Networking Consortium, consisting of the Massachusetts Institute of Technology, AT&T Bell Laboratories, and Digital Equipment Corporation, has installed a network that combines wavelength routing, wavelength shifting, broadcast-and-select, and electronic packet switching between Littleton, Lexington, and Cambridge, Massachusetts. With ARPA and DOE support, IBM (working with Los Alamos National Laboratory) has developed an extensive set of algorithms for distributed control of very large wavelength-routing networks, and has studied offloading of TCP/IP for supercomputer interconnection in its Rainbow-2 network.

Figure 5
Capacity of broadcast and select networks.

Figure 6
Wavelength reuse.

It is fair to say that the United States now holds the lead in making all-optical networking a commercial reality, and that ARPA support was one of the important factors in this progress. At the end of 1995, ARPA kicked off a second round of 3-year consortia in the all-optical networking area, with funding roughly five times that of the earlier programs [18].

Whether all-optical networking will be a commercially practical part of the NII depends on three factors: (1) whether the investment will be made to continue or accelerate the installation of fiber to the premises and desktop (Figure 3), (2) whether it proves feasible to reduce component costs by two to three orders of magnitude below today's values, and (3) the extent to which providers offer the fiber paths in the form of "dark fiber"—that is, without any electronic conversions between path ends.


This last problem seems to be solving itself in metropolitan and suburban areas of many countries, simply by competition between providers, but the problems of long dark fiber paths that cross jurisdictions and require amplification have yet to be faced. In the United States, the Federal Communications Commission has viewed dark fiber as being equivalent to copper, within the meaning of the Communications Act of 1934 [19,20]; that is, if the public interest requires making dark fiber ends available, one of the monopoly obligations implied by monopoly privileges is that the public should be offered it at a fair price.

The optoelectronic component cost issue is under active attack. Considering that there are significant efforts under way to use lithography for cost reduction of tunable and multichannel WDM transmitters and receivers, it seems possible to predict a one-order-of-magnitude decrease in price by 2000 and two orders of magnitude by 2005. This implies that the optoelectronics for each end of WDM links of 32 wavelengths should cost $15K and $1.5K, respectively, and that the optoelectronics in each node of a broadcast and select network of 32 to 128 nodes should cost $1,000 and $100, respectively. If these last numbers are correct, this means that broadcast and select MANs and LANs should be usable by desktop machines some time between 2000 and 2005, since the costs would be competitive with the several hundred dollars people typically spend year after year on each modem or LAN card for PCs.
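A sketch of these projected price declines follows; the implied 1995 baseline of $150K per WDM link end is an inference from the stated year-2000 and year-2005 figures, not a number given in the text:

```python
# Order-of-magnitude price declines projected in the text. The $150K
# baseline for 1995 is inferred from the stated $15K (2000) and
# $1.5K (2005) figures for a 32-wavelength WDM link end.
baseline_1995 = 150_000
projected = {year: baseline_1995 / 10**orders
             for year, orders in (("2000", 1), ("2005", 2))}
for year, price in projected.items():
    print(f"{year}: ${price:,.0f}")
```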

The sources of investment in the "last fiber mile" are problematic. In the United States the telcos and the cable companies are encountering economic problems in completing the job. In several other countries with strong traditions of centralized telecommunication authority, for example Japan and Germany, a shortcut may be taken using public money in the name of the public interest. So far in the United States it is "pay as you go." This has meant that only businesses can afford to rent dark fiber, and even then this has often been economical only when WDM has been available to reduce the number of strands required [12].

Whether a completely laissez-faire approach to the last mile is appropriate is one of the problems governments are facing in connection with their information infrastructures. Fiber has ten orders of magnitude more bandwidth than voice-grade copper (25,000 GHz vs. 3.5 kHz) and can operate with ten orders of magnitude better raw bit error rates (10^-15 vs. 10^-5), and yet on the modest base of copper we have built the Internet, the World Wide Web, ten-dollar telephones at the supermarket, communicating PCs and laptops, prevalent fax and answering machine resources, and other innovations. It is the vision of those working on all-optical networking that a medium with ten orders of magnitude better bandwidth and error rate than one that gave us today's communication miracles is unlikely to give us a future any less miraculous, once the fiber paths, the network technology, and the user understanding are all in place.

References

[1] A.R. Rutkowski, "Collected Internet growth history of number of hosts and packets per month," private communication, March 26, 1995.

[2] A.G. Fraser, banquet speech, "Second IEEE Workshop on High Performance Communication Subsystems," September 2, 1993.

[3] R. Dodson, "Bus Architectures," IBM PC Company Bulletin Board at 919-517-0001 (download files PS2REFi.EXE, where i = 1, 2, 3 and 4), 1994.

[4] P.W. Shumate, "Network Alternatives for Competition in the Loop," SUPERCOM Short Course, March 22, 1995.

[5] D. Charlton, "Through a Glass Dimly," Corning, Inc., 1994.

[6] J. Kraushaar, "Fiber Deployment Update—End of Year 1993," FCC Common Carrier Bureau, May 13, 1994.

[7] "Ten-year Cable Television Industry Projections," Paul Kagan Associates, Inc., 1994.

[8] "Sixth Annual Report on Local Telephone Competition," Connecticut Research, 1994.

[9] J. West, "Building Japan's Information Superhighway," Center for Research on Information Technology and Organization, University of California at Irvine, February, 1995.

[10] P.R. Shumate, Bell Communications Research, private communication, March 1995.

[11] J.J. Antonino (ed.), "Undersea Fiber Optic Special Issue," AT&T Technical Journal, vol. 74, no. 1, January–February 1994.

[12] F.J. Janniello, R.A. Neuner, R. Ramaswami, and P.E. Green, "MuxMaster: A Protocol Transparent Link for Remote Site Backup," submitted to IBM Systems Journal, 1995.


[13] F.J. Janniello, R. Ramaswami, and D.G. Steinberg, "A Prototype Circuit-switched Multi-wavelength Optical Metropolitan-area Network," IEEE Journal of Lightwave Technology, vol. 11, May/June 1993, pp. 777–782.

[14] W.E. Hall, J. Kravitz, and R. Ramaswami, "A High-Performance Optical Network Adaptor with Protocol Offload Features," submitted to IEEE Journal on Selected Areas in Communication, vol. 13, 1995.

[15] H. Toba et al., "100-channel Optical FDM Transmission/Distribution at 622 Mb/s Over 50 km Using a Waveguide Frequency Selection Switch," Electronics Letters, vol. 26, no. 6, 1990, pp. 376–377.

[16] G.K. Chang et al., "Subcarrier Multiplexing and ATM/SONET Clear-Channel Transmission in a Reconfigurable Multiwavelength All-Optical Network Testbed," IEEE/OSA OFC Conference Record, February 1995, pp. 269–270.

[17] R. Ramaswami and K. Sivarajan, "Optimal Routing and Wavelength Assignment in All-optical Networks," IEEE INFOCOM-94 Conference Record, 1995.

[18] R. Leheny, "Advanced Network Initiatives in North America," Conference Record, OFC-95, March 2, 1995.

[19] "Four BOCs Denied Authorization to Cease Providing Dark Fiber Service," Document CC-505, FCC Common Carrier Bureau, March 29, 1993.

[20] "Dark Fiber Case Remanded to FCC," U.S. Court of Appeals for the District of Columbia, April 5, 1994.
