Broadband: Bringing Home the Bits

4 Technology Options and Economic Factors

Although great technical and business strides have been made in improving the data transmission speeds of communications networks, the local access technologies that make up the last (or first) mile of the network have largely lagged behind. Enhancing the local access infrastructure to bring high-speed services to residences and small businesses requires upgrading or building infrastructure to each premises served. The technology options differ in their characteristics and cost structures, and potential customers vary in their willingness to pay. This chapter explores the characteristics of the various local access technologies and the interplay among the relevant economic considerations.

LOCAL ACCESS TECHNOLOGIES IN CONTEXT

While this chapter focuses on local access, the other network elements through which content, applications, and services are provided also contribute to the total cost and performance characteristics of broadband service. Local access links carry communications to and from points at which traffic from multiple premises is aggregated and funneled onto higher-capacity links that ultimately connect to the Internet or other broadband services. The first point of aggregation, also known as the point of presence, is most commonly located at a telephone company central office, cable system head end, or radio tower (which may be at a considerable distance from the premises) but may also be in a piece of
equipment in a vault, pedestal, wireless antenna site, or pole-top device located near the premises. Circuits installed or leased by the provider in turn run from the point of presence to one or more public or private access points for interconnection with the Internet. The so-called second mile connects local access facilities with upstream points of aggregation. In connecting to the Internet, broadband providers either pay for transit service or establish peering agreements with other ISPs to exchange traffic on a settlement-free (barter) basis. Caches, e-mail and content servers, and servers supporting specialized services such as video-on-demand or voice telephony are located at points of presence and/or data centers. Routers located at points of presence and in data centers direct data packets to the next point on their path across the network to their eventual destination.

ESSENTIAL FEATURES OF THE LOCAL ACCESS TECHNOLOGY OPTIONS

The future of broadband is sometimes described as a shootout among competing technologies that will end with a single technology dominating nationwide. This view, however, is simplistic and unrealistic; there is no single superior technology option. Broadband will be characterized by diverse technologies for the foreseeable future. There are a number of reasons for this:

Incremental investment in existing infrastructure. While some firms may have access to large amounts of venture capital, the expectations of investors in existing firms are for short-term payoffs. As a result, the technological approach chosen by an incumbent is likely to make use of existing equipment and plant, and the deployment strategy must be amenable to incremental upgrades.
The infrastructures of the various incumbents in the broadband marketplace—telephone local exchange carriers with copper loops, cable television companies with coaxial cable, cellular companies with towers for wireless telephony—will continue to see incremental improvements unique to their respective technologies to provide and enhance broadband services.

Continued exploitation of skills. Technologies require distinctive skills and knowledge—those needed, for example, to design, launch, and operate a satellite. Similarly, cable and telephone companies understand the technological challenges associated with their respective systems. Companies that do one thing or another well will attempt to find market opportunities where those skills give them an advantage.

Different demographics and density. The United States (and the world) is highly diverse in topography, population density, wealth, and demand for communications services. The particular economic and technical characteristics of each broadband technology will provide specific advantages in serving certain geographical areas or demographic groups. Some may have an economic advantage in particular locales owing to the nature of the infrastructure already in place or to inherent physical attributes of the environment. Planning should reflect the existence of a diverse set of solutions that depend on particular circumstances rather than a technology monoculture.

This section discusses the salient characteristics of each technology option and provides a brief road map of how existing technology and anticipated research and development will play out in coming years.

Wireline Options

In rough terms, access technologies are either wireline or wireless. Wireline includes telephone network copper pairs and the coaxial cable used for cable television service. Incumbent telephone companies and cable operators are both in the process of upgrading their infrastructures to provide broadband services. Wireline infrastructure is also being built in some areas by so-called overbuilders, who construct new wireline infrastructure in competition with the incumbent wireline providers. In the United States, this has largely been through deployment of hybrid fiber coax to provide some mix of television, data, and voice services. There are also a few overbuilders that are using or plan to use fiber. The wireline technologies all share the feature that labor and access to a right-of-way are significant components of the cost. These costs are higher where infrastructure must be buried than where it can be installed on existing poles.1 The other major cost component is the electronics at each end of the line, whose costs decrease rapidly over time as a result of Moore’s law improvements in the performance-to-cost ratio and increasing production volumes.
Labor, on the other hand, is not subject to Moore’s law, so there is no obvious way within the wireline context to achieve dramatic declines in the cost of new installation (though one cannot rule out very clever solutions that significantly reduce the labor required for some elements of the installation).

1 One estimate provided to the committee is that aerial installation costs roughly half as much as installation where the infrastructure must be buried.
Hybrid Fiber Coax

Cable systems pass 97 percent of the homes in the United States.2 The older generation of cable technology uses a branching structure of coaxial cables fanning out from a central point, or head end, to the buildings in a community (see Figure 4.1a). The older systems rely on long chains of coaxial cables and amplifiers, with each segment feeding into a smaller coaxial segment. Hybrid fiber coax (HFC) is the current generation of cable system technology. HFC systems carry analog signals that feed conventional television sets as well as digital signals, encoded onto analog carriers, that deliver digital video programming and up- and downstream data. In the new architecture, the system is divided into a number of small coaxial segments, with a fiber optic cable used to feed each segment or cluster. By using fiber instead of coax to feed into neighborhoods, the system’s performance and reliability are significantly improved. Another benefit of an HFC upgrade is that the resulting system can carry two-way data communications, such as Internet access. Additional equipment is installed to permit information to flow both to and from the home (see Figure 4.1b). Internet service is provided using a device called a cable modem in the home and a device known as a cable modem termination system in the head end. The ability to offer competitive video, voice, and high-speed data services using the present generation of technology has attracted several nonincumbent companies to enter a few markets as overbuilders using HFC technology. Over 70 percent of the homes in the United States are now passed by this upgraded form of cable infrastructure.
The fraction of homes served by HFC continues to increase as cable companies upgrade connections throughout their franchise areas and can, with continued investment in upgrades, approach the 97 percent of households that currently have cable service available at their property lines.

A technology standard for cable modems known as DOCSIS has been adopted industrywide. Developed by an industry consortium seeking a quicker alternative to the more traditional standards development process then underway under the auspices of the IEEE, the DOCSIS standard is stable, and more than 70 modems have been certified as compliant. Standardization has helped modems become a mass-market product. The standard assures consumers that if they purchase certified modems at retail, or have them built into PCs or other appliances, cable operators across the country will support them. Further helping push down costs, several competing suppliers have developed highly integrated silicon, and single-chip DOCSIS solutions are available to modem manufacturers. With increasing volumes, a single standard, and single-chip solutions, the wholesale cost of a cable modem has already dropped to $150 or less and can be expected to continue to drop as volumes increase.

2 Paul Kagan Associates. 2001. The Kagan Media Index, Jan. 31, 2001.

FIGURE 4.1 Evolution of cable systems to support two-way data. SOURCE: James Chiddix. 1999. “The Evolution of the U.S. Telecommunications Infrastructure Over the Next Decade. TTG2: Hybrid-Fiber-Coax Technology” (IEEE workshop paper).

Digital Subscriber Line

Digital subscriber line (DSL) is the current method by which twisted copper pairs (also known as loops), the decades-old technology used by the telephone companies to reach the residence, can be upgraded to support high-speed data access. In some newer builds, analog transmission over copper wire is used only between the premises and a remote terminal (which may be at curbside or, more commonly, in a pedestal or underground vault within a neighborhood), while a digital loop carrier (DLC), generally using fiber optic cable, connects the remote terminal with the central office. In a traditional, all-copper plant, the first segment of the loop plant is referred to as the “feeder plant,” in which hundreds of phone lines are bundled in a cable that runs from the central office to a smaller distribution point. From the distribution point, smaller cables containing fewer phone lines run to pedestals or cabinets within a neighborhood, where they in turn connect to the twisted pairs that run to the customer premises (see Figure 4.2).

All transmission of data over wire involves coding the data in some way consistent with the carrying capacity and noise conditions of the wire. Familiar dial-up modems code (and decode) data so that it can pass through the traditional switches and transmission links that were designed to carry voice, which more or less limits speeds to today’s 56 kbps. DSL uses an advanced coding scheme that is not compatible with existing switches. Consequently, new electronics known as a DSL access multiplexer (DSLAM) must be installed in any central office where DSL is to be offered.
The DSLAM must in turn be connected to a switched data network that ultimately connects the central office to the Internet (see Figure 4.3). DSL service enables the transmission of packet-switched traffic over the twisted copper pairs at much higher speeds than a dial-up Internet access service can offer. DSL can operate at megabits per second, depending on the quality and length of the particular cable. It is thus the upgrade of choice to bring copper pairs into the broadband market. DSL standards have existed since 1998, and new versions of these standards, which add enhancements to asynchronous transfer mode (ATM), IP, and voice services over DSL, are expected in 2001 or 2002 from the International Telecommunication Union (ITU). Large interoperability
programs with dozens of qualified suppliers have been implemented by the DSL Forum, which has about 400 member companies. The forum develops implementation agreements in support of interoperable DSL equipment and services and acts as an industry marketing organization for DSL services, applications, and technology. Also, to help reduce the cost of asymmetric DSL (ADSL) deployment by specifying a common product and increasing volumes, several companies formed a procurement consortium.

FIGURE 4.2 Telephone company copper loop plant. SOURCE: Adapted from a figure supplied by John Cioffi, Stanford University.

The present generation of DSL products can, depending on line length and conditions, reach 1.5 to 8 Mbps downstream and 150 to 600 kbps upstream. The flavor aimed at residential customers, ADSL, currently supports a typical maximum of 8 Mbps downstream and 800 kbps upstream (the flavors deployed by various providers vary somewhat). A related flavor, G.lite, which makes compromises in order to permit customer self-installation on the same line being used for analog voice service, supports up to 1.5 Mbps downstream. (Another variant, symmetric DSL [SDSL], supports
higher, symmetric speeds.) All of these speeds are maximums—the actual speed obtainable over the DSL link depends on line length, noise, and other aspects of the line condition, as well as on the maximum speed supported by the particular service to which a customer has subscribed.

FIGURE 4.3 DSL connections at the central office.

Higher-speed versions of DSL, known as very high data rate DSL (VDSL), are in development. These depend on investment in new fiber in the loop plant that shortens the copper loop length to enable higher speeds—tens of megabits per second in both directions. Figure 4.4 summarizes the rate and distance trade-offs for the various flavors of DSL. DSL is available to a large fraction of homes and businesses in the United States over normal phone lines (the exact fraction is hard to determine because of the factors discussed below). However, not all of the homes that are passed by telephone cables easily support DSL, and some homes cannot be offered DSL service at all without major upgrades to the infrastructure. Certain pairs are unsuited for such upgrades because of how they were engineered—for example, using bridge taps or loading coils. Also, where the loop between central office and premises includes a digital loop carrier, the remote terminal equipment must be upgraded to
support DSL. More significantly, DSL does not work over wires longer than a certain distance (18,000 feet for ADSL, the primary flavor used for residential service today). It should be noted that wire lengths are substantially shortened by the deployment of remote terminals.

FIGURE 4.4 Rate and maximum distances for various flavors of DSL. SOURCE: Adapted from a figure provided to the committee by Ted Darcie, AT&T Research.

Crosstalk—the coupling of electrical signals between nearby wires—gives rise to interference that degrades the carrying capacity of each copper pair. The level of crosstalk depends on the number of pairs within the bundle carrying DSL, their proximity, and the power and bandwidths they use. It is even possible for DSL signals from adjacent lines to create signals larger than the intended DSL signal on the line. The interference has the effect of reducing the maximum data rate at a particular loop
length (or the maximum loop length for a given data rate). In essence, an issue of spectrum sharing within the cable bundles arises. The term “spectrum” is appropriate because the crosstalk and interference effects depend on how the signals on the different pairs make use of the different frequencies used for transmission over the lines. Today, incumbents and competitive providers using unbundled loops are free to choose among a number of flavors of DSL, without regard to how the spectrum used by one service affects services running over other copper pairs. At the request of the FCC, a working group of carriers and vendors developed a spectrum management standard for DSL. The present standard, released in 2001, places forward-looking limits on signal power, bandwidth, and loop length.3 By establishing thresholds with which current DSL technology is generally compliant, the standard seeks to prevent future escalation (where each DSL product or service would try to “out-shout” the others) and thus place a bound on the level of crosstalk that will be faced in the future. While the standard is currently voluntary, it is generally expected to provide the technical basis for future FCC rulemaking. Issues that the standard does not address include how many DSL lines are permitted per binder group, what standards apply to lines fed from digital loop carriers, how products should be certified or self-certified, and how rule compliance should be enforced; these issues are being explored by a Network Reliability and Interoperability Council subgroup, operating under American National Standards Institute (ANSI) T1 auspices, that is developing guidance to the FCC on crosstalk.

Advanced Wireline Offerings—Fiber Optics in the Loop

Optical fiber has a theoretical capacity of about 25,000 GHz, compared with the roughly 155 megahertz (MHz) possible over short copper pairs and the roughly 10 GHz capacity of coaxial cable.4
(The relationship between hertz and bits per second depends on the modulation scheme; the number of bits per hertz typically ranges from 1 to more than 7.) This very high capacity and consequent low cost per unit of bandwidth are the primary reasons why fiber is preferred wherever individual demand is very high or demand from multiple users can be aggregated. Other considerations in favor of fiber include high reliability, long service lifetime,5 protocol transparency, and consequent future-proof upgradability.6 Thus, fiber predominates in all telecommunications links (voice and data) except the link to the premises, where cost considerations come into play most, and links to untethered devices. Because of their large demand for bandwidth, an increasing fraction of large businesses is being served directly by fiber links. There is also increasing attention to fiber technologies for local area and local access networks, as evidenced by the recent development of new technologies such as gigabit Ethernet over fiber.

One important use of fiber for broadband is increasing the performance of other wireline technologies through incremental upgrades. Both HFC systems and DSL systems benefit from pushing fiber further into the system. To increase the performance of DSL, the copper links must get shorter. As penetration and the demand for higher speed increase, the upgrade strategy is to push fiber deeper, with each fiber feeding smaller service areas in which shorter copper connections run to the individual premises. So a natural upgrade path for copper infrastructure is to install electronics ever closer to the residence, to a remote terminal located in a pedestal or underground vault or on a telephone pole; to run fibers from the central office to this point; and only to use copper

3 Working Group on Digital Subscriber Line Access (T1E1.4). 2001. American National Standard for Telecommunications—Spectrum Management for Loop Transmission Systems (T1.417-2001). Standards Committee T1, Alliance for Telecommunications Industry Solutions, Washington, D.C.

4 The practical upper limit for data transmission over coaxial cable has not been well explored. The upper cutoff frequency of a coaxial cable is determined by the diameter of the outer copper conductor. Smaller cables (1/4-inch to 1/2-inch diameter) probably have a cutoff frequency well in excess of 10 GHz. It is unclear what the upper limit on modulation efficiency is. The 256 quadrature amplitude modulation (QAM) currently in wide use allows 7 bits per hertz, but in short, passive runs in neighborhoods, much more efficient modulation schemes are possible, suggesting that HFC could evolve to speeds exceeding 100 Gbps to small clusters of customers.

5 In the 1970s, researchers worried about the possibility of fiber degradation over time. A number of experiments were conducted, and no degradation effects were found. Thus—barring an accidental cut—the only reason fiber is replaced is when some new transmission scheme reveals the old fiber to have too much eccentricity of the core or too much material dispersion. These factors have come into play only in very particular situations. For example, when OC-192 (10 Gbps) transmission was introduced, there were concerns that old fiber with an out-of-round cross section would cause problems. But in the end, only a limited amount of fiber required replacement to support the new, higher-speed transmissions.

6 “Protocol transparency” refers to the ability to run any communications protocol over the fiber by changing the end equipment and/or software. Other communications media display some degree of protocol transparency, but with fiber, the large RF spectrum on an individual fiber is entirely independent of other fibers (in contrast to DSL, which has crosstalk issues; wireless, which has obvious spectrum sharing; and HFC, which also has shared spectrum). This transparency property holds true only over the fiber segments that are unshared—where passive splitting is done, all must agree on at least the time division multiplexing (TDM) or wavelength division multiplexing (WDM) scheme, and where active switching is used, all must agree on the packet protocol. True protocol transparency—and true future-proofing—is thus greatest in a home-run architecture.
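The capacity comparison above reduces to bandwidth times spectral efficiency. The sketch below is illustrative only: the media bandwidths and the 1-to-7 bits-per-hertz range come from the text and its footnotes, and the 6 MHz channel width is the standard North American cable television channel; no vendor or deployment figures are implied.

```python
# Illustrative only: peak capacity = usable bandwidth (Hz) x spectral
# efficiency (bits per Hz). Bandwidth figures are taken from the chapter;
# the efficiency range (1 to ~7 bits/Hz) depends on the modulation scheme.

MEDIA_BANDWIDTH_HZ = {
    "short copper pair": 155e6,     # ~155 MHz
    "coaxial cable":     10e9,      # ~10 GHz practical estimate
    "optical fiber":     25_000e9,  # ~25,000 GHz theoretical
}

def peak_capacity_bps(bandwidth_hz: float, bits_per_hz: float) -> float:
    """Peak bit rate for a given bandwidth and modulation efficiency."""
    return bandwidth_hz * bits_per_hz

# A single 6 MHz cable channel at the ~7 usable bits/Hz of 256-QAM
# (the example cited in the footnote) yields roughly 42 Mbps.
channel_bps = peak_capacity_bps(6e6, 7)
print(f"6 MHz @ 7 b/Hz: {channel_bps / 1e6:.0f} Mbps")

# Range of peak rates for each medium across the 1-7 bits/Hz span.
for medium, bw in MEDIA_BANDWIDTH_HZ.items():
    low, high = peak_capacity_bps(bw, 1), peak_capacity_bps(bw, 7)
    print(f"{medium:18s}: {low / 1e9:,.3f} to {high / 1e9:,.0f} Gbps")
```

The same arithmetic underlies the footnote's observation that short, passive coax runs with denser modulation could reach beyond 100 Gbps.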
FIGURE 4.5 Paying for broadband.

Focus on the Consumer

The factors discussed in the previous section notwithstanding, the consumer is the pivot around which all of the economic issues swing. Without consumer demand and a (somewhat) predictable willingness to pay (or evidence that advertising will be a large source of revenue), there is no market. Evidence from early deployment demonstrates demand. The national average penetration (somewhat more than 8 percent as of summer 2001) reflects and masks an uneven pace of deployment. Cable
industry reports on markets that have had cable modem service available for several years suggest considerable demand.17 Although the committee is not aware of definitive studies of consumer willingness to pay for broadband (and the notion proposed in the past, that consumer willingness to pay for entertainment and/or communications is a fixed percentage of income, is generally discounted by economists today), the general shape of the market for communications, entertainment content, and information technology is beginning to emerge. Over 50 percent of homes in America have some sort of PC, at prices that averaged near $2,000 in recent years and are now dropping below $1,000 for lower-end machines, illustrating that many consumers are willing to make a significant investment in computing hardware and software. In rough terms, a typical $1,200 home computer replaced after 4 years costs around $25 per month. A majority of the homes that have PCs are going online and connecting to the Internet, and it is a reasonable projection that only a very small fraction of machines will remain offline in the coming years. For a consumer using the primary residence phone line and purchasing a somewhat more limited dial-up Internet service, the price approaches $10 per month. (Providers have also experimented with service and PCs that are provided free, so long as the consumer will allow advertisements to be displayed during network sessions, although recent reports from this market segment put in question the long-term viability of this approach.) The entry price today for broadband is not dramatically different from that for high-end dial-up service. A separate phone line costs as much as $20 per month, and unlimited-usage dial-up Internet service generally runs $20 or more per month. Of course, the market offers a range of price and performance points from which the consumer can pick.
At the high end, high-speed DSL can cost up to several hundred dollars per month, and business-oriented cable services are offered at a premium over the basic service. The total consumer expenditure for such a computer plus basic broadband service is potentially as much as $90 per month, of which the Internet provider can expect to extract less than half. From this revenue base a business must be constructed. If 100 million homes were to purchase broadband service at $50 per month, the total annual revenues to broadband Internet providers would exceed $50 billion, which is similar in magnitude to current consumer expenditures on long-distance services.

17 For example, information supplied to the committee by Time Warner Cable is that take-rates have reached 17.5 percent of subscribers in Boston, Massachusetts, and 25 percent of subscribers in Portland, Maine.
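The consumer spending figures above reduce to simple arithmetic that can be checked directly. All dollar amounts below are the chapter's own; the 48-month replacement cycle is the 4-year figure cited for home computers.

```python
# Back-of-envelope check of the consumer spending figures in the text.
# All inputs are taken from the chapter; nothing here is new data.

pc_price = 1_200           # typical home computer, dollars
pc_lifetime_months = 4 * 12

pc_monthly = pc_price / pc_lifetime_months
print(f"PC amortized: ${pc_monthly:.0f}/month")  # ~$25/month, as in the text

# High-end dial-up baseline: a second phone line plus unlimited service.
dialup_monthly = 20 + 20
print(f"High-end dial-up: ~${dialup_monthly}/month")

# National revenue if 100 million homes bought broadband at $50/month.
homes = 100e6
annual_revenue = homes * 50 * 12
print(f"Annual broadband revenue: ${annual_revenue / 1e9:.0f} billion")
```

The revenue figure works out to $60 billion per year, consistent with the text's "more than $50 billion."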
One question that the market has not yet explored is whether the consumer would make a significant capital investment, similar to the $1,000 to $2,000 that a computer costs today, as part of obtaining Internet service. For example, if there were a home-run system with fiber running to the residence (making it a relatively future-proof investment), but the consumer had to activate that fiber by purchasing the end-point equipment, would this be an attractive option if the equipment costs were comparable? Would residents be willing to finance the capital costs of installing that fiber in the first place? While there is no hard evidence, wealthier consumers, who have demonstrated a willingness to make purchases such as multiple upscale multimedia PCs and expensive consumer electronics, might well be willing to make such investments, and some residential developers have opted to include fiber.

The Pace of Investment

The rapid evolution of some aspects of the Internet can lead observers into thinking that if something does not happen within 18 months, it will not happen. But the phenomena associated with deployment cycles measured in months have generally been in the non-capital-intensive software arena. The cost of entirely new broadband infrastructure—rewiring to provide fiber-to-the-home to all of the roughly 100 million U.S. households—would be some $100 billion, reflecting in considerable part construction costs that are not amenable to dramatic reductions. Even for cable and DSL, for which delivering broadband is a matter of upgrading existing infrastructure, simple economics gates the pace of deployment. For both new builds and incremental improvements, an accelerated pace of deployment and installation would bring with it an increased per-household cost. Some broadband deployment will be accomplished as part of the conventional replacement and upgrade cycles associated with telephone and cable systems.
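The economics gating deployment can be made concrete with a simple payback sketch. The $100 billion and 100-million-household figures are the chapter's own; the $50 monthly price, the assumption that half of revenue is available to recover the build cost, and the omission of financing costs, churn, and operating expense are simplifying assumptions made only for illustration.

```python
# Hedged illustration: how long does it take to recover a per-household
# build cost out of monthly service revenue? Ignores cost of capital,
# churn, and operating expense; these are assumptions, not industry data.

fiber_total_cost = 100e9   # chapter's estimate for nationwide fiber-to-the-home
households = 100e6

cost_per_home = fiber_total_cost / households  # $1,000 per household

monthly_revenue = 50       # chapter's illustrative broadband price
capital_share = 0.5        # assumed fraction available to recover build cost

payback_months = cost_per_home / (monthly_revenue * capital_share)
print(f"Cost per home: ${cost_per_home:,.0f}")
print(f"Naive payback: {payback_months:.0f} months (~{payback_months / 12:.1f} years)")
```

Even under these generous assumptions, the payback runs to years rather than quarters, which is the point of the passage: construction-dominated investments move on utility timescales, not software timescales.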
In some cases, this process will have dramatic effects—two examples are HFC replacement of all-coaxial cable plants and aerial replacement of copper with fiber as part of a complete rehabilitation of old telephone plant—but in many other cases, the improvements will be incremental. To accelerate beyond this pace means hiring and training an ever-larger workforce devoted to the task. As more people are employed for this purpose, workers will have to be attracted away from jobs paying increasingly higher wages. Similar considerations apply to the materials and manufacturing resources needed to make the required equipment. The investment rate also depends critically on the perspective and time horizon of the would-be investor. For an owner of existing facilities—the incumbent local exchange carriers and cable multiple system
operators—realistic investment is incremental, builds on the installed base, and must provide a return on a relatively short timescale. The tendency to make incremental upgrades to existing telephone and cable plants reflects the view that a replacement of the infrastructure (such as with fiber) would necessitate installation costs that can be avoided by opting to upgrade. The perception is that users would not be willing to pay enough for the added functionality that might be achieved with an all-fiber replacement to offset the extra costs of all-new installation. Changes in either costs or perceived willingness to pay could, of course, shift the investment strategy. Once a provider has a broadband-capable system, it will have incentives to spend only enough on upgrades to continue to attract subscribers and retain existing customers by providing a sufficiently valuable service. Where facilities-based competition exists, these efforts to attract and retain customers will help drive service-performance upgrades. From this perspective, the level of investment associated with building entirely new infrastructure is very difficult for the incumbents to justify. Viewing the incumbent’s incentives to invest in upgrades in terms of the two broadband definitions provided above, investment to meet definition 1 will be easier to justify than investment to meet definition 2. That is, it is easier to justify spending so that the local access link supports today’s applications than to justify spending enough to stay ahead of demand so as to stimulate new applications. Two types of nonincumbent investor have also entered the broadband market, tapping into venture capital that seeks significant returns—and generally seeks a faster investment pace.
One is the competitive local exchange carrier, which obtains access to incumbent local exchange carrier facilities—primarily colocation space in central offices and the copper loops that run from the central office to the subscriber—to provide broadband using DSL. The other is the overbuilder, which seeks to gain entry into a new market by building new facilities, most commonly hybrid fiber coax for residential subscribers, but also fiber-to-the-premises and terrestrial wireless. Satellite broadband providers in essence overbuild the entire country, though with the capacity to serve only a fraction of the total number of households. The 2000-2001 drying up of Internet-related venture capital has presented an obstacle to continued deployment, and the CLECs have also reported obstacles in coordinating activities with the ILECs that control the facilities they depend on. Because public sector infrastructure investment generally is based on a long-term perspective, public sector efforts could both complement and stimulate private sector efforts. The key segment of the public sector for such investment is likely to be subfederal (state, local, regional), though the federal sector can provide incentives for these as well as private sector
investment. But decision making for such investments is not a simple matter, and, if present trends are any indication, such investments will be confined to those locales that project the greatest returns from accelerated access to broadband or possess a greater inclination for a public sector role in entrepreneurship.

Investment, Risk Taking, and Timelines

The myth of the "Internet year," by analogy to a "dog year," is well known. Where the Internet is concerned, people have been conditioned to expect 1-year product cycles, startups that go public in 18 months, and similar miracles of instant change. The 2000-2001 downturn in Internet and other computing and communications stocks dampened but did not eliminate such expectations. In fact, some things do happen very rapidly in the Internet—the rise of Napster is a frequently noted example. These events are characterized by the relatively small investments required to launch them. Software can diffuse rapidly once conceived and coded. But this should not fool the observer into thinking that all Internet innovation happens on this timescale. As noted earlier, broadband infrastructure buildout will be a capital-intensive activity. In rough figures, a modest upgrade that costs $200 per passing would cost $20 billion to reach all of the approximately 100 million homes in the United States. Broadband deployment to households is an extremely expensive transformation of the telecommunications industry, second only to the total investment in long-haul fiber in recent years. In light of these costs, the availability of investment capital, be it private sector or otherwise, imposes a crucial constraint on broadband deployment—it is very unlikely that there will be a dramatic one-time, nationwide replacement of today's facilities with a new generation of technology. Instead, new technology will appear piecemeal, in new developments and overbuild situations.
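The cost arithmetic can be sketched as a back-of-envelope model. Only the $200-per-passing "modest upgrade" figure and the approximately 100 million homes come from the text; the other per-passing costs below are illustrative assumptions, not figures from the report.

```python
# Back-of-envelope national buildout cost: cost per home passed times
# homes passed. Only the $200/passing figure is from the text; the
# other scenarios are assumed for illustration.
HOMES_US = 100_000_000  # approximate U.S. households (per the text)

def national_cost(cost_per_passing: float, homes: int = HOMES_US) -> float:
    """Total buildout cost if every home is passed."""
    return cost_per_passing * homes

scenarios = {
    "modest upgrade ($200/passing, from the text)": 200,
    "deeper plant rebuild (assumed)": 800,
    "fiber-to-the-home (assumed)": 1500,
}
for label, per_passing in scenarios.items():
    print(f"{label}: ${national_cost(per_passing) / 1e9:.0f} billion")
```

The first scenario reproduces the text's $20 billion figure; the point of the others is that capital requirements scale linearly with per-passing cost, which is why full replacement is so much harder to finance than incremental upgrade.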
Old technology will be upgraded and enhanced; a mix of old, evolving, and new should be anticipated. Whether national deployment takes the form of upgrades or new infrastructure, the relevant timescale will be “old fashioned”—years, not days or months. As a consequence, observers who are conditioned to the rapid pace of software innovation may well lose patience and assume that deployment efforts are doomed to fail—or that policies are not working—simply because deployment did not occur instantly. One should not conclude that there is something wrong—that something needs fixing—when the only issue is incorrectly anticipating faster deployment. Much private sector investment, especially by existing firms, is incremental, with additional capital made available as investments in prior quarters show acceptable payoff. As a result, the technological approach
chosen by an incumbent is likely to make use of existing equipment and plant, and the deployment strategy must be amenable to incremental upgrades. The evolution of cable systems is a good example. The previous generation of one-way cable systems is in the process of being upgraded to hybrid fiber coax systems, and these in turn are being upgraded to provide two-way capability, greater downstream capacity, and packet transport capabilities. The various incumbents now in the broadband marketplace have very different technology and business pasts—the telecommunications providers selling voice service over copper, the cable television companies using coaxial cable to deliver video, the cellular companies constructing towers for point-to-point wireless telephony, and so forth, and each will evolve to support broadband by making incremental improvements to its respective technologies and infrastructure. Incumbents seeking to limit regulators' ability to demand unbundling have an incentive to avoid technologies that facilitate such unbundling. Because they exist to take greater risks but possibly provide much greater returns by identifying new promising areas, venture capitalists seek to invest in opportunities that offer high payoff, not incremental improvements. So it is no surprise that the more mature technologies, such as cable and DSL, have attracted relatively little venture capital in recent years. Another investment consideration for the venture capitalist is the total available market, with niche markets being much less attractive than markets that have the potential to grow very large.
Finally, because the eventual goal is usually to sell a company (or make an initial public offering) once it has been successfully developed, venture capitalists must pay attention to trends in the public equity markets.18

Uncertain Investment Prospects in the Private Sector

Over the past few years, broadband infrastructure has to some extent followed the overall trend of technology-centered enthusiasm for venture capital investment and high-growth planning. Broadband may similarly be affected by the current slowdown in investment and by the more careful assessment of business models to which companies are now being subjected. At this time, broadband providers, as well as Internet service providers more generally, are facing problems of lack of capital and cash flow. This could lead to consolidation, and perhaps to a slowdown in the overall rate of progress.

18 In a white paper written for this project in mid-2000, George Abe of Palomar Ventures characterized venture capital investing as "faddish" and observed that "there is a bit of a herd mentality." There are hints that with the 2001 market drop, venture capitalists have adopted a longer-term view and are seeking well thought-out opportunities rather than chasing fads.
Investment Options for the Public Sector

If and when the public sector chooses to intervene financially to encourage service where deployment is not otherwise happening, it will have a different set of constraints. Governments have access to bond issues and other financial vehicles that best match one-time capital investments with payback over a number of years, and they also have access to a tax base that reduces the risk of default. If a major, one-time investment is to be made, the implication is that this technology must be as future-proof as possible, because it must remain viable for the period of the payoff. The most defensible technology choice in this case is fiber-to-the-home, with a separate fiber to each residence. Fiber has an intrinsic capacity that is huge, but the actual service is determined by the equipment that is installed at the residence and at the head end. With dark fiber running to each customer, the end equipment need not be upgraded for all the users at once but can be upgraded for each consumer at the time of his or her choosing. Thus, this technology base permits different consumers to use the fibers in different ways, for different services, and with different resulting costs for end-point equipment. The consumer can make these subsequent investments, reusing the fiber over the life of the investment. Upgrades are not, however, fully independent, as they depend on the backhaul infrastructure. An upgrade will require not only new central office or remote terminal line cards, but also a compatible infrastructure beyond that; the remote terminal or central office rack itself may not be able to switch or route a higher-speed input due to hardware or software constraints. Businesses look at risk as an intrinsic part of doing business and manage risk as a part of normal planning. Some investments pay off; others may not.
For residential access, for example, demand may exceed expectation, or perhaps not, and a business will mitigate these risks by investment in a number of situations—communities, services, and so on. In contrast, a municipality serves only its own citizens, so any risk of bad planning must be carried within that community. Further, the voter reaction to miscalculation may amplify the perception of the error, which can have very bad personal implications for individual politicians. Long-term investment in services that do not bring visible short-term value to the citizens may be hard for some politicians to contemplate, because the payoff from this investment may not occur in a time frame that is helpful to them. So a planner in the public sector must balance the fact that most sources of capital imply a long-term investment with the fact that citizens may not appreciate the present value of long-term investment, and may assess the impact of investment decisions based on short-term consequences. This may lead to decision making that is either more or less risk-averse (given the level of knowledge among the citizens and apparent level of popular demand) than the decision making of the private sector.

Moore's Law and Broadband

This report defines broadband deployment as an ongoing process, not a one-time transition. The first proposed definition of what it means for a service to be broadband reflects this reality: Access is broadband if it is fast enough on an ongoing basis to not be the limiting factor for today's applications. With that definition in mind, unfavorable comparisons are sometimes made between the sustained improvements in the performance-to-price ratio of computing (which relate to what is known as Moore's law, the 18-month doubling of the number of transistors on an integrated circuit) and improvements in the capacity of broadband access links. In fact, communications technologies, as exemplified by sustained improvements in fiber optic transmission speeds, have by and large kept pace with or surpassed improvements in computing. The gap one sees is between deployed services and underlying technology, not an inherent mismatch of technology innovation. This committee spent some time exploring why broadband local access has not kept pace with other areas in computing and communications, and it considered how the economics of broadband service providers, long-haul communications providers, and computer equipment vendors might differ. In the end, the committee concluded that present understanding is too limited to reach definitive conclusions on this question. Why productivity growth in access has not kept pace with other communications sectors is an interesting question worthy of further research.

ECONOMICS OF SCALING UP CAPACITY: CONGESTION AND TRAFFIC MANAGEMENT

Once initial systems are deployed, successful broadband providers are almost certain to experience continued demands on their networks owing to increased subscribership and increased traffic per subscriber.
These demands have implications both for how the access links themselves are configured and managed and for the network links between the provider and the rest of the Internet. This section provides an overview of traffic on the Internet and discusses some of the common misunderstandings about broadband technology. The term “congestion” describes the situation in which there is more offered traffic than the network can carry. Congestion can occur in any shared system; it leads to queues at emergency rooms, busy signals on the
telephone system, inability to book flights at the holidays, and slowdowns within the Internet. As these examples illustrate, congestion may be a universal phenomenon, but the way it is dealt with differs in different systems. In the telephone system, certain calls are just refused, but this would seem inhumane if applied to an emergency room (although this is sometimes being done—emergency rooms are closing their doors to new emergencies and sending the patients elsewhere). In the Internet, the "best effort" response to congestion is that every user is still served, but all transfers take longer, which has led to the complaints and jokes about the "World Wide Wait." Congestion is not a matter of technology but of business planning and level of investment. In other words, it is a choice made by a service provider whether to add new capacity (which presumably has a cost that has to be recovered from the users) or to subject the users to congestion (which may require the provider to offer a low-cost service in order to keep them). Shared links can be viewed as either a benefit or a drawback, depending on one's viewpoint. If a link is shared, it represents a potential point of congestion: if many users attempt to transmit at once, each of them may see slow transfer rates and long delays. Looked at in another way, sharing of a link among users is a central reason for the Internet's success. Since most Internet traffic is very bursty—transmissions are not continuous but come in bursts, as for example when a Web page is fetched—a shared communications path means that one can use the total unused capacity of the shared link to transfer the burst, which may make it happen faster. In this respect, the Internet is quite different from the telephone system. In the telephone system, the capacity to carry each telephone call is dedicated to that one connection for its duration—performance is established a priori.
There is still a form of sharing—at the time the call is placed, if there is not enough capacity on the links of the telephone system, the call will not go through. Callers do not often experience this form of “busy signal,” but it is traditionally associated with high-usage events such as Mother’s Day. In contrast, the Internet dynamically adjusts the rate of each sender on the basis of how many people are transferring data, which can change in a fraction of a second. The links that form the center of the Internet carry data from many thousands of users at any one time, and the traffic patterns observed there are very different from those observed at the edge. While the traffic from any one user can be very bursty (for a broadband user on the Web, a ratio of peak to average receiving rate of 100 to 1 is realistic), in the center of the network, where many such flows are aggregated, the result is much smoother. This smoothness results from the natural consequences of aggregating many bursty sources, not because the traffic is “managed.”
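The contrast between the telephone system's response to overload (refuse whole calls) and the Internet's (serve everyone, each more slowly) can be sketched in a few lines. The link capacity and per-call rate below are illustrative assumptions (a T1-sized link carrying 64 kbps voice circuits), not figures from the report.

```python
# Toy contrast: circuit switching blocks excess calls outright, while
# best-effort packet switching serves every flow at a reduced rate.
# Capacities are assumed for illustration (T1 link, 64 kbps circuits).

LINK_CAPACITY_KBPS = 1_544   # assumed shared link (T1-sized)
PER_CALL_KBPS = 64           # one voice-grade circuit

def telephone_model(offered_calls: int):
    """Circuit switching: each admitted call gets its full 64 kbps;
    calls beyond capacity hear a busy signal."""
    max_calls = LINK_CAPACITY_KBPS // PER_CALL_KBPS  # 24 circuits
    admitted = min(offered_calls, max_calls)
    return admitted, offered_calls - admitted  # (served, refused)

def internet_model(active_flows: int) -> float:
    """Best effort: no one is refused; capacity is shared dynamically."""
    if active_flows == 0:
        return 0.0
    return LINK_CAPACITY_KBPS / active_flows  # kbps per flow

for n in (10, 24, 50):
    served, refused = telephone_model(n)
    print(f"{n} users: telephone serves {served}, refuses {refused}; "
          f"best effort gives each {internet_model(n):.0f} kbps")
```

Below capacity, the circuit model gives each caller a guaranteed rate and the best-effort model gives each flow even more (the whole unused link); above capacity, the circuit model turns users away while best effort degrades everyone gracefully.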
With enough users, the peaks of some users align with the valleys of other users with high odds. One of the reasons that the Internet is a cost-effective way to send data is that it does not set up a separate "call" with reserved bandwidth for each communicating source, but instead combines the traffic into one aggregate that it manages as a whole. For dial-up Internet users, the primary bottleneck to high throughput is the modem that connects the user to the rest of the Internet. If broadband fulfills its promise to remove that bottleneck, the obvious question is, Where will that bottleneck go? There has been a great deal of speculation about how traffic patterns on the Internet will change as more and more users upgrade to broadband. Some of these speculations have led to misapprehensions and myths about how the Internet will behave in the future. Cable systems have the feature that the coaxial segment that serves a particular neighborhood is shared. This has led to the misconception that broadband cable systems must slow down and become congested as the number of users increases. This may happen, but it need not. Indeed, shared media in various forms are quite common in parts of the Internet. For example, the dominant local area network standard, Ethernet, which is a shared technology with some of the same features as HFC cable modems, has proved very popular in the market, even though it, too, can become congested if too many people are connected and using it at once. Cable systems have the technical means to control congestion. They can allocate more channels to broadband Internet, and they can divide their networks into smaller and smaller regions, each fed by a separate fiber link, so that fewer households share bandwidth in each segment. Whether they are, in fact, so upgraded is a business decision, relating to costs, demand, and the potential for greater revenue.
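The effect of the two upgrade levers (more channels, smaller segments) on per-household capacity can be sketched as simple arithmetic. The per-channel rate and node sizes below are illustrative assumptions, not figures from the report.

```python
# Sketch of why node splitting and channel allocation relieve HFC
# congestion: a downstream channel is shared by every active home on
# a coax segment, so halving the segment (or doubling the channels)
# doubles each active home's average share. The 27 Mbps per-channel
# figure and node sizes are assumed for illustration.

CHANNEL_MBPS = 27.0  # assumed downstream capacity of one channel

def per_home_share(homes_on_segment: int, active_fraction: float,
                   channels: int = 1) -> float:
    """Average downstream Mbps per active home on one segment."""
    active = max(1, round(homes_on_segment * active_fraction))
    return channels * CHANNEL_MBPS / active

# A 2,000-home node split twice, with 10% of homes active at once:
for homes in (2000, 1000, 500):
    print(f"{homes}-home segment: "
          f"{per_home_share(homes, 0.10):.2f} Mbps per active home")
```

Each split is a capital expense (new fiber link and node electronics), which is why the text frames the decision as a business trade-off rather than a technical limit.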
Of course, less sharing would tend to reduce the cost advantage of HFC relative to other higher-capacity solutions such as FTTH. DSL is generally thought to suffer from fewer access network congestion problems because the user has a dedicated link from the residence to the central office. It is true that the user will never see contention from other users over the dedicated DSL link; however, it also means that the user can never go faster than the fixed dedicated capacity of this link, in contrast to being able to use the total unused capacity of a shared system. Both the cable and DSL systems bring the traffic from all their users to a point of presence (central office or head end), where this traffic is combined and then sent out over a link toward the rest of the Internet. This link from the termination point to the rest of the Internet is, in effect, shared by all of the subscribers connected to that point of presence, whether the broadband system behind it is a shared cable system or a dedicated DSL system, making the link a common source of congestion
for all of the subscribers. The cost of the link depends on both the capacity of the physical link and the compensation that must be paid to other Internet providers to carry this traffic to the rest of the Internet. The cost of these links can be a major issue in small communities where it is difficult to provision additional capacity for broadband. So there is an incentive not to oversize that link. The economics and business planning of this capacity are similar for a cable or a DSL system. The fact that the links from the point of presence to the rest of the Internet are often a source of congestion illustrates an important point. The number of users whose traffic must be aggregated to make the total traffic load smooth is measured in the thousands, not hundreds. So there may be a natural size below which broadband access systems become less efficient. For example, if it takes 10,000 active users to achieve good smoothing on the path from the rest of the Internet, then a provider who gets 10 percent of the market,19 and who can expect half of his users to be active in a busy hour, needs a total population of 200,000 households as a market base in a particular region. Even if the broadband local access links themselves are adequately provisioned, bottlenecks may still exist, owing to such factors as peering problems between the broadband service provider and the rest of the Internet, host loading, or other factors. Performance will also be dependent on the performance of elements other than the communications links themselves, such as caches and content servers located at various points within the network (or even performance limitations of the user's computer itself). These problems, which will inevitably occur on occasion, have the potential to confuse consumers, who will be apt to place blame on the local broadband provider, whether rightly or wrongly.

19 For an examination of the smoothing phenomenon, see David D. Clark, William Lehr, and Ian Liu, "Provisioning for Bursty Internet Traffic: Implications for Industry Structure," to appear in L. McKnight and J. Wroclawski, eds., 2002, Internet Service Quality Economics, MIT Press, Cambridge, Mass.
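The market-base arithmetic, and the smoothing effect it rests on, can be sketched as follows. The 10,000-active-user, 10 percent share, and 50 percent busy-hour figures come from the text's example; the on/off traffic source in the simulation is an assumed toy model, not the report's traffic data.

```python
# Two sketches of the aggregation argument. First, the 200,000-household
# arithmetic from the text's example. Second, a toy simulation of
# independent on/off sources (an assumed traffic model) showing that the
# peak-to-mean ratio of the aggregate falls as more users are combined.
import random

def required_market_base(active_needed: int, busy_hour_active: float,
                         market_share: float) -> int:
    """Households needed so that enough subscribers are active at once."""
    return round(active_needed / busy_hour_active / market_share)

# The text's example: 10,000 active users, half active, 10% share.
print(required_market_base(10_000, 0.5, 0.10), "households")

def peak_to_mean(n_users: int, p_on: float = 0.01,
                 slots: int = 500, seed: int = 1) -> float:
    """Aggregate n on/off sources, each 'on' with probability p_on
    in each time slot; return peak load divided by mean load."""
    rng = random.Random(seed)
    loads = [sum(rng.random() < p_on for _ in range(n_users))
             for _ in range(slots)]
    mean = sum(loads) / slots
    return max(loads) / mean if mean else float("inf")

for n in (10, 100, 1_000, 5_000):
    print(f"{n:>5} users: peak/mean ratio ~ {peak_to_mean(n):.1f}")
```

The declining peak-to-mean ratio is the "smoothing" the text describes: with thousands of users, the backhaul link can be sized close to the mean load, while with tens of users it must be sized for peaks many times the mean.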