Appendixes






A
Broadband Technologies

In the course of its work, the Committee on Broadband Last Mile Technology developed highly detailed material on various broadband technologies. The committee decided that this level of detail was not appropriate for the main text of its report, but it provides the material, which is not intended to be comprehensive, in this appendix for readers interested in learning more about broadband technologies.

HYBRID FIBER COAX TECHNOLOGY1

Coaxial Cable

The foundation upon which hybrid fiber coax (HFC) broadband communications networks are based is coaxial cable (Figure A.1), a radio frequency (RF) transmission line capable of transporting a large number of carriers (channels). At the head end, or central signal-processing center, each carrier is modulated with baseband analog or digital information, and all carriers are multiplexed together in the frequency domain (Figure A.2). Spectral separation is accomplished through the use of frequency-selective diplex filters to allow simultaneous transmission of information in opposite directions (Figure A.3), commonly called "reverse" (i.e., from the home to the head end) and "forward" (from the head end to the home).

1. Adapted from James Chiddix. 1999. "The Evolution of the U.S. Telecommunications Infrastructure Over the Next Decade. TTG2: Hybrid-Fiber-Coax Technology" (IEEE workshop paper).

FIGURE A.1 Coaxial cable (cut-away view).

FIGURE A.2 Typical RF spectrum for analog cable television.

FIGURE A.3 Forward and reverse spectra and diplex filter.

This physical medium provides for the transport of RF energy within a reasonably secure network with an enormous amount of signal capacity and flexibility. Conceptually, coaxial cable provides cable operators with a private conduit through which RF signals are transported; in addition, the medium can support multiple signaling channels without regard to the baseband signals or modulation scheme that may be employed. This medium is largely immune to the interfering influences that exist in free space.

From a practical standpoint, coaxial cable supports transmission of signals at frequencies from baseband to more than 1 GHz. Transmission losses (attenuation) within these cables can be significant; attenuation increases with frequency (roughly as its square root), making it necessary to use RF amplification to cover the long distances encompassed by the cable plant. Amplifiers may be spaced from a few hundred feet to one or two thousand feet apart. Although the theoretical frequency limit of the cable itself is significantly greater than 1 GHz, and cable systems have been built using frequencies in excess of 1 GHz, practical limitations are set by the frequency responses of the active and passive components (e.g., amplifiers and filters) used in the network.
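To make the scaling concrete, the sketch below (in Python, as are the other worked examples added in this appendix) estimates amplifier counts at different operating frequencies. It assumes the common square-root-of-frequency rule of thumb and a reference loss of 20 dB per 1,000 feet at 750 MHz; both are illustrative assumptions, not figures from this report.

    import math

    # Illustrative reference point (an assumption, not a figure from this report):
    # 20 dB of loss per 1,000 feet of coax at 750 MHz.
    REF_LOSS_DB_PER_KFT = 20.0
    REF_FREQ_MHZ = 750.0

    def loss_db_per_kft(freq_mhz):
        """Attenuation vs. frequency, using the sqrt(f) rule of thumb for coax."""
        return REF_LOSS_DB_PER_KFT * math.sqrt(freq_mhz / REF_FREQ_MHZ)

    def amplifiers_needed(run_feet, freq_mhz, amp_gain_db=20.0):
        """Amplifiers required to offset total cable loss over a run."""
        total_loss_db = loss_db_per_kft(freq_mhz) * run_feet / 1000.0
        return math.ceil(total_loss_db / amp_gain_db)

    for f in (50, 450, 750, 1000):
        print(f"{f:>5} MHz: {loss_db_per_kft(f):5.1f} dB/kft, "
              f"{amplifiers_needed(20000, f)} amps over a 20,000-ft run")

The point of the exercise is that a plant designed for higher frequencies needs proportionally more amplification over the same distance, which is exactly the cascade problem discussed next.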

Tree-and-Branch Architecture

Until recently, coaxial cable systems followed a "tree-and-branch" topology (Figure A.4), delivering the same RF spectrum of signals to every customer within a particular community.

FIGURE A.4 Tree-and-branch architecture.

FIGURE A.5 Carrier-to-noise and carrier-to-intermodulation distortion.

This design served the cable industry well, but it did have limitations. The most significant restriction imposed by this topology was the accumulation of noise and distortions (Figure A.5) through the extended cascades of broadband RF amplifiers needed to compensate for transmission losses. This architectural facet affected plant reliability and signal quality at the customer's home. Additionally, for a given design bandwidth, there were practical and theoretical limits to the number of amplifiers that could be cascaded. In order to maintain acceptable performance levels, it was necessary to limit the operational bandwidth of such cable systems to a few hundred megahertz, far below the potential of the cable alone. This topology imposed another limitation as well: every customer receives the same complement of signals. This is generally acceptable for TV services, but it makes the delivery of individually switched or routed services difficult.
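A rough way to see why long cascades limit performance: with N roughly identical amplifiers in cascade, the noise contributions add, so the carrier-to-noise ratio (CNR) degrades by about 10 log10(N) decibels. A minimal sketch, with illustrative numbers (a 60-dB single-amplifier CNR and a 48-dB end-of-line target are assumptions, not figures from this report):

    import math

    def cascaded_cnr_db(single_amp_cnr_db, n_amps):
        """CNR after n identical amplifiers: noise powers add, so CNR falls by 10*log10(n)."""
        return single_amp_cnr_db - 10.0 * math.log10(n_amps)

    # Illustrative: one amplifier yields 60 dB CNR; suppose the design
    # requires at least 48 dB at the end of the line.
    for n in (1, 5, 10, 20, 40):
        cnr = cascaded_cnr_db(60.0, n)
        flag = "  <- below a 48 dB target" if cnr < 48.0 else ""
        print(f"{n:>3} amplifiers: CNR = {cnr:4.1f} dB{flag}")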

Fiber-Optic Transmission Technology

By the late 1980s, optical lasers had been successfully adapted for use in a broadband environment. Optical transmission had been practical for some time through the mechanism of turning the transmitting laser "on" and "off" in synchronization with the ones and zeros of a digital signal. A breakthrough came when it was determined that a laser could be left "on" and intensity-modulated with the highly complex analog signal representing the broadband RF spectrum (Figure A.6). Lasers used in this way required characteristics different from those of their digital counterparts, the most critical being very low internal noise and an extremely linear transfer function. Such devices had been in development for the digital market in an effort to achieve higher data-transmission speeds over optical fibers (in contrast to coaxial cable), but further optimization was required for broadband applications. At the receiving end of an optical link, a relatively simple photodetector converts the optical signals back into an RF spectrum essentially identical to the one presented at the input (transmitting) end.

The cable industry quickly adopted this technology for a portion of its transmission plant, and it continues to use it as a way to cost-effectively transform coaxial tree-and-branch systems into something much more powerful: the hybrid fiber coax (HFC) architecture (Figure A.7). In essence, this approach transforms large systems into highly concentrated collections of smaller systems. This is a very important characteristic, as discussed below. Current HFC designs provide transmission to and from neighborhood clusters of a few hundred homes or fewer (Figure A.8). This arrangement of fiber and coaxial cables allows segmentation of the traditional coax-only transmission plant into many localized areas (called nodes), each of which is capable of providing a unique assortment of information to end users. The coaxial network that connects to homes from each optical node remains a small version of the original tree-and-branch system (more of a bush than a tree).

FIGURE A.6 750-MHz forward spectrum.

FIGURE A.7 HFC networks allow smaller serving areas.

FIGURE A.8 HFC networks allow narrowcasting of content to the customer.

Design Considerations

Current HFC designs call for fiber nodes serving about 500 homes on average, but these nodes can be further segmented into arbitrarily small coaxial serving areas. Figure A.9 illustrates one way that the spectrum available within one node may be used. The ability to assign and reassign spectrum to different uses is an important benefit of the HFC architecture, because it allows for advances in digital services and technologies while continuing to support existing services. Thus, the architecture can simultaneously support many separate virtual networks. This makes the investment to upgrade to HFC a sustainable one for most cable companies. At least some cable operators plan to build as many as five separate (virtual) networks on the foundation of their upgraded fiber transport plant (Figure A.10).

The HFC architecture enables great flexibility to segment the service area. Step-by-step segmentation can match investment with revenues from new, high-bandwidth services; in the extreme case, fiber can be extended to the property lines of homes and businesses (not shown in Figure A.10), or at least to those with the need for services requiring hundreds of megabits per second of connectivity.

FIGURE A.9 Forward and reverse spectra at node.

FIGURE A.10 Capability to support multiple networks within HFC.

Only those nodes that need greater data capacity (and offer the potential for greater revenue) have to be divided; the rest can remain undisturbed. As nodes are divided and fiber is deployed closer to the customer, the total amount of usable bandwidth grows; every node division can therefore more than double the available data capacity while reducing the number of users who share it.2 For example, breaking a 500-home node into four parts, each passing an average of 125 homes, increases the available reverse and forward capacities significantly more than fourfold and provides more than four times the bandwidth per user. Trials within the industry have made use of the spectrum from 900 MHz to 1 GHz (as compared with the traditional 5- to 50-MHz region) for reverse signals. Because of reduced RF interference at these higher frequencies and the resulting higher modulation efficiencies, it is possible to provide an additional 200 Mbps of transmission capability. Again, this number can be multiplied through segmentation, as outlined above and as the sketch below illustrates.

2. The accompanying reduction in noise over the coaxial portion of the network, in accord with Shannon's law, means that the usable bandwidth within each subloop also increases significantly.
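A minimal sketch of the segmentation arithmetic referred to above. The 400-Mbps node capacity is a hypothetical figure, and the doubling per split takes "more than double" at its conservative floor:

    def per_home_mbps(node_capacity_mbps, homes, splits, gain_per_split=2.0):
        """Per-home share of node capacity after repeated node splits.

        Each split halves the homes per node; per the text, capacity at least
        doubles, so gain_per_split=2.0 is the conservative case.
        """
        capacity = node_capacity_mbps * gain_per_split ** splits
        homes_per_node = homes / 2 ** splits
        return capacity / homes_per_node

    # Illustrative: a 500-home node with a hypothetical 400 Mbps of shared capacity.
    for splits in range(3):
        print(f"{splits} splits: {500 // 2 ** splits:>3} homes/node, "
              f"{per_home_mbps(400.0, 500, splits):5.1f} Mbps per home")

Two splits correspond to the four 125-home nodes in the example above; per-home capacity rises by a factor of 16 even under the conservative doubling assumption.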

It is possible to push these numbers even further. If very high speed, truly symmetric capacity is required, frequencies above 1 GHz can be used. Some cable plants being constructed today use fiber to feed neighborhoods of 60 homes or fewer, with a more-than-commensurate increase in the per-user capacity for both switched and routed digital services.

In 2001, the latest version of the industry standard, DOCSIS 2.0, embraced two optional refinements that can substantially increase upstream throughput by using improved modulation where the noise level permits. One is advanced time-division multiple access (TDMA), which allows modulations up to 256 quadrature amplitude modulation (QAM) in upstream bursts (theoretically 8 bits per hertz, about 6.5 in the real world), compared with the 16 QAM (theoretically 4 bits per hertz) of the existing version. The other is synchronous code-division multiple access (CDMA), which permits much more robust transmission in the presence of certain kinds of interference.

Providing Services in Year 2010

Information and entertainment services can be classified in two broad categories: common and dedicated. Common services include such programming as off-air broadcast, PEG (public, educational, and government) channels, basic networks (such as ESPN and CNN), and subscription services (such as HBO, Cinemax, and Starz). Dedicated services include any number of specialized programs that are delivered to the end user on an individual basis; video-on-demand (VOD) and high-speed Internet access are examples of this type of service.

The cable television (CATV) industry in the United States typically thinks of a channel as a contiguous 6-MHz portion of the available spectrum; thus, a standard 750-MHz HFC plant has approximately 112 such "channels" within a total usable spectrum of 672 MHz. Table A.1 provides some details regarding a hypothetical 750-MHz HFC plant's ability to provide almost unlimited service options for customers, including the following (the channel arithmetic is sketched after this list):

- Standard analog television. The cable television industry will probably always carry some number of NTSC signals, perhaps 20 or so RF channels, but it is anticipated that this number will decrease as most of these signals are incorporated into compressed digital formats.

- Digital standard definition television (SDTV). This will become the "standard" signal as 256-QAM channels are used to distribute some 200 simultaneous networks (HBO, ESPN, CNN, and so on), including most of the subscription services.
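As flagged above, a back-of-the-envelope sketch of the channel arithmetic, using the figures in the text (6-MHz channels, 672 MHz of usable spectrum, 256-QAM at roughly 6.5 bits per hertz in the real world). The 3.75-Mbps SDTV program rate is an assumption for illustration only:

    USABLE_SPECTRUM_MHZ = 672   # standard 750-MHz plant, per the text
    CHANNEL_MHZ = 6             # standard CATV channel width

    channels = USABLE_SPECTRUM_MHZ // CHANNEL_MHZ        # ~112 channels

    # 256-QAM: 8 bits/Hz theoretical, ~6.5 bits/Hz real-world (per the text).
    mbps_per_channel = 6.5 * CHANNEL_MHZ                 # ~39 Mbps

    # Assumption for illustration: an SDTV program compressed to ~3.75 Mbps.
    programs_per_channel = int(mbps_per_channel / 3.75)  # ~10 programs

    print(f"{channels} six-MHz channels in {USABLE_SPECTRUM_MHZ} MHz")
    print(f"~{mbps_per_channel:.0f} Mbps per 256-QAM channel, "
          f"~{programs_per_channel} SDTV programs per channel")
    print(f"200 SDTV networks fit in ~{200 // programs_per_channel} channels")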

and it may be expected that commercial products will soon deliver 10 Mbps+ services with cellular reuse and spectral efficiency on the order of 5 bps/Hz/sector. With continuing advances in signal processing, achievable bit rates should increase to 100 Mbps+ with spectral efficiencies of 10 bps/Hz over the next 5 years.

- Cellular technology capable of scaling to the small cells and multiple sectors necessary for effective coverage of areas with higher population density. Scaling of broadband wireless services to small cells is inevitable in areas with higher population density, where throughputs on the order of 100 Mbps to 1 Gbps/km2 must be achieved in order to serve even a modest fraction of the population (see the sketch after this list). Efficient cellular reuse implies the need for modem technology that can operate at relatively low carrier-to-interference (C/I) ratios. This can be achieved by a suitable combination of time, frequency, and space processing. For example, spread spectrum achieves high spatial reuse via time-frequency processing, while multiple-antenna spatial-processing OFDM modems do so using frequency-space processing. The wideband CDMA adopted for IMT-2000 radio access achieves a spatial reuse factor of 1:1 using spread spectrum and interference-cancellation techniques. However, net throughput per square kilometer is limited by the relatively low ~0.5-bps/Hz efficiency of spread spectrum modulation. The spatial processing techniques mentioned earlier have the potential to achieve spectral efficiencies on the order of 5 to 10 bps/Hz/cell with ~1:3 spatial reuse. Further gains can be achieved for both CDMA and spatial-processing OFDM with directional remote antennas and base station sectorization.

- Spectrum regulation and management policies that facilitate rapid deployment of broadband services while promoting efficient use. The pace of wireless network deployment is critically dependent on spectrum regulation policies, both international and domestic. Historically, the process of frequency allocation has been rather slow, with the United States and, to some extent, the European Union taking the lead in introducing both spectrum auctions and unlicensed bands in order to stimulate efficient economic usage. While one-time spectrum auctions in the United States have had their intended effect (e.g., the PCS and MMDS bands), it may be time to consider introducing more dynamic market mechanisms that allow spectrum to change hands on time constants of minutes or hours rather than months or years. For example, it may be possible to establish an online commodity trading system for spectrum that would permit operators with higher economic utility to bid for their peak usage needs without having to go through a lengthy procurement process.
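A rough sizing of the throughput-density requirement discussed in the first item above. The spectrum allotment (20 MHz) and cell density (4 cells per square kilometer) are illustrative assumptions, not figures from this report:

    def mbps_per_km2(bandwidth_mhz, eff_bps_per_hz_cell, reuse_fraction, cells_per_km2):
        """Aggregate capacity per km^2 = per-cell throughput x cell density.

        reuse_fraction is the share of total spectrum usable in each cell:
        1.0 for CDMA's 1:1 reuse, ~1/3 for the ~1:3 reuse cited above.
        """
        per_cell_mbps = bandwidth_mhz * eff_bps_per_hz_cell * reuse_fraction
        return per_cell_mbps * cells_per_km2

    # Illustrative assumptions: 20 MHz of spectrum, 4 small cells per km^2.
    cdma = mbps_per_km2(20.0, 0.5, 1.0, 4)      # ~0.5 bps/Hz, 1:1 reuse
    ofdm = mbps_per_km2(20.0, 7.5, 1.0 / 3, 4)  # 5-10 bps/Hz/cell, ~1:3 reuse
    print(f"spread spectrum: {cdma:.0f} Mbps/km^2; "
          f"spatial-processing OFDM: {ofdm:.0f} Mbps/km^2")

Under these assumptions the spatial-processing approach reaches the low end of the 100 Mbps to 1 Gbps/km2 target, while plain spread spectrum falls well short, which is the argument the passage makes.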

Rapid deployment of wireless services would be further facilitated by streamlined approval processes for a wider range of customer equipment, including equipment with higher-powered directive antennas. This would probably require further advances in antenna beam and power control, but it should be technically feasible in the near term. The broadband WLL business model depends to a large extent on user-installable or self-configuring customer premises equipment (CPE), something that would require some relaxation of the current rules in the MMDS and other fixed access bands. In addition, it may be expected that fixed access will gradually migrate toward semimobile services as cell sizes become smaller, further increasing the need for simple approval policies.

Unlicensed spectrum, such as the 5-GHz unlicensed national information infrastructure (U-NII) band in the United States, is an important facilitator for broadband access. The FCC's allocation of the U-NII spectrum has stimulated considerable commercial activity in the high-speed wireless LAN area. It is recognized that the same type of technology (perhaps with somewhat higher power levels and larger coverage areas) could be used as a broadband PCS access network for public semimobile services in urban and suburban communities. There is, however, one remaining technical problem, that of spectrum etiquette, on which a decision was deferred by the FCC in its initial U-NII ruling. The problem is that existing unlicensed-band etiquettes such as listen-before-talk (LBT) do not work well for stream services with quality-of-service (QoS) requirements. In such cases, the etiquette must be designed for equitable sharing among contending stream users, without reducing all of them to an unacceptable QoS level. The FCC has invited the industry to propose a suitable etiquette, but a specific scheme has yet to be identified. A possible technical solution is to introduce a common spectrum coordination channel at the edge of each unlicensed band and require users to execute mutually agreed sharing procedures (priority, dynamic auction, and so on) using a standardized etiquette protocol.

Radio Link Protocol

Broadband wireless access requires a new type of radio link protocol (RLP) capable of reliably transporting both packets and media streams with specified QoS. The broadband RLP itself may be decomposed into a medium access control (MAC) layer for channel sharing among multiple subscribers and a data link control (DLC) protocol for error recovery. Broadband wireless networks tend to use a packet CDMA MAC, a dynamic TDMA MAC, or an extended 802.11 carrier sense multiple access/collision avoidance (CSMA/CA) MAC protocol. CDMA is the basis for the emerging IMT-2000 wideband CDMA standard for 3G mobile and is associated with the choice of spread spectrum modulation believed to be appropriate for vehicular mobile systems. Dynamic TDMA has generally been adopted for broadband applications, as well as for some high-speed LANs (such as wireless ATM and the European Telecommunications Standards Institute's broadband radio access networks), in view of its ability to support a combination of packet data and constant bit-rate streams (voice and video).

Extended 802.11 protocols provide streaming extensions for QoS support and may be suitable for Ethernet-equivalent WLAN scenarios. DOCSIS MAC protocols used in cable networks have also been modified for WLL applications, but they generally incur a performance penalty owing to large packet sizes.

Data-link-layer retransmission for error recovery is an essential feature for broadband wireless service, since higher-layer protocols depend critically on low packet error rates on each link of an end-to-end connection. DLC involves fragmentation of data packets into relatively small units, the optimum for which is typically between 40 and 200 bytes, depending on the channel and traffic model. Many current implementations have adopted the ATM cell payload of 48 bytes as the basic unit of fragmentation on the radio link. This has the advantage of simplifying the interface to ATM backhaul networks, which are often used in carrier broadband and DSL networks. Error control on the radio link involves the addition of a wireless link header containing a sequence number used to identify data units to be retransmitted. Implementation results have shown that significant improvements in end-to-end protocol performance (typically 2 orders of magnitude in packet error rate) can be achieved with fragmentation and retransmission on the radio link; a minimal sketch of the idea follows. This in turn permits wireless systems to operate at lower C/I (i.e., in a higher-interference environment), thus increasing the overall capacity of cellular networks.
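A toy model of the fragmentation-and-retransmission scheme just described: 48-byte fragments carry sequence numbers, and only the fragments the receiver missed are retransmitted. The loss model and all parameters are invented for illustration:

    import random

    FRAG_BYTES = 48  # the ATM cell payload size mentioned above

    def fragment(packet, frag_bytes=FRAG_BYTES):
        """Split a packet into (sequence number, payload) link-layer fragments."""
        return [(seq, packet[i:i + frag_bytes])
                for seq, i in enumerate(range(0, len(packet), frag_bytes))]

    def deliver(packet, frag_loss_prob, max_rounds=10):
        """Send fragments over a lossy link, retransmitting only the missing ones."""
        pending = dict(fragment(packet))  # seq -> payload not yet delivered
        rounds = 0
        while pending and rounds < max_rounds:
            rounds += 1
            for seq in list(pending):
                if random.random() > frag_loss_prob:  # fragment got through
                    del pending[seq]
        return not pending, rounds

    random.seed(1)
    ok, rounds = deliver(b"x" * 1500, frag_loss_prob=0.1)
    print(f"1,500-byte packet delivered: {ok}, after {rounds} rounds")
    # Without link-layer retransmission, the packet is lost if any of its ~32
    # fragments is lost: P(success) = 0.9**32, or only about 3.4 percent.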

Infrastructure Network

Broadband wireless access links are being designed as "plug-ins" to existing fixed network architectures based on IP and/or ATM. In order to facilitate ubiquitous deployment, it is important that both fixed WLL and mobile access be easily integrated with the broadband DSL and cable networks currently being deployed. This means that the radio air application programming interface should be harmonized with both IP and ATM to the extent possible, particularly in terms of providing generic parameters for service establishment and QoS control. For fixed wireless access, interface functions specific to the radio link are performed by the base station, which delivers standard IP and/or ATM data and control to the infrastructure network. For mobile scenarios, services specific to mobile users (such as location management and handoff) may be provided either with a mobility overlay, as in current cellular systems, or by integrating mobility support into the core network protocols, such as IP or ATM. The latter method (i.e., integrated mobility support in IP or ATM) is preferred for broadband in view of performance and scalability requirements. Moreover, as an increasing proportion of user devices becomes portable, the distinction between fixed and mobile user addresses will become more difficult to administer (the integrated approach does not require a priori partitioning of mobile and fixed addresses). Protocol specification work aimed at integrating mobility support into IP and ATM has been done in both the Internet Engineering Task Force (IETF) (mobile IP) and the ATM Forum. While much further work remains (3G.IP and so on), it may be expected that mobility will increasingly be integrated as a standard feature into fixed network infrastructures. Ultimately, this technical direction will further accelerate the convergence of fixed and wireless networks, which has been predicted for some time.

MEDIA COMPRESSION

Media signals include (digital) data as well as analog information and entertainment signals: speech, audio, image, video, graphics, and other audiovisual signals such as hand gestures and handwriting. These signal classes are universal and are representative of most if not all information that needs to traverse the first mile, in either direction.

Complementary Roles of Modems and Compression Systems (Codecs)

Modem and access technologies have evolved to expand the transmission pipe for conveying digital information. In parallel, compression technology has evolved to compact the amount of digital information needed to convey a signal with a specified level of fidelity. Access speeds have generally advanced on a faster track than has compression technology. That said, it is the combination of faster modems and greater levels of compression that has enabled advances and revolutions in digital communication. This section focuses on the impact of media compression as a direct enabler of digital communication over channels and networks with limited capacity.

Computing is an overarching enabler of multimedia communications, whether one is implementing coders and decoders (codecs for short) or modulators and demodulators (modems for short). Moore's law has direct implications for the rate at which computing technology (memory and arithmetic capability) advances as a function of time. In this view, advances in computing are much more rapid than advances in access technologies. That said, advances in computing will only help speed up advances in access; those advances are strictly knowledge- or algorithm-limited, as are the advances in compression.

The Dimensions of Performance in Media Compression

There are four dimensions of performance in a compression system: (1) quality, (2) bit rate, (3) delay, and (4) complexity. "Quality" refers to the quality of the signal after compression, measured in absolute terms or in terms of closeness to the original version. The "bit rate" is the data rate after compression. The "delay" is the sum of delays in the encoding and decoding parts of the system, that is, in the compression and decompression algorithms. (This does not include delay components resulting from specific implementation details or specific transmission latencies in the communication of the encoder bit stream.) Finally, "complexity" refers to the computational effort needed to perform the compression and decompression algorithms, measured, for example, in millions of instructions per second (mips) and in kilobytes of memory (the read-only and random-access memories used in the codec). As processing technology improves, the importance of the complexity parameter tends to diminish, but delay remains a fundamental performance metric. Delay is particularly important in interactive, or two-way, communications.

The Fuzzy Fifth Dimension: Richness of Content

In studies of compression efficiency, where one measures quality degradation as a function of increasing levels of compression, one assumes that the bandwidth, or frequency content, of the signal is a prespecified characteristic. For example, telephony is always associated with a speech signal of 4-kHz bandwidth, and television with a signal whose effective horizontal and vertical resolutions are on the order of a few hundred pixels each (a total number of pixels per frame on the order of 100,000). In Table A.2, the notation pps refers to pixels per second: the product [H × V × F] of horizontal resolution (H pixels per row), vertical resolution (V pixels per column), and temporal resolution (F frames per second). With the evolution of flexible and scalable communications technology, one often has the option of considering input signals of higher bandwidth, as long as the compression is strong enough to hold the output data rate to a specified number. Examples are high-bandwidth audio (such as FM-grade speech with 12- to 15-kHz bandwidth, CD-grade music with 20-kHz bandwidth, or multichannel sound) and high-definition television (a total number of pixels per frame on the order of 2 million, at 60 frames per second).
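The pixel-rate products for the video formats of Table A.2 (which follows), computed directly from [H × V × F]; note that the table rounds the HDTV figure to 60 Mpps:

    # Pixel-rate products [H x V x F] for the video formats in Table A.2.
    formats = {
        "CIF video":  (360, 288, 30),
        "CCIR video": (720, 576, 30),
        "HDTV":       (1280, 720, 60),
    }

    for name, (h, v, f) in formats.items():
        print(f"{name:>10}: {h} x {v} x {f} = {h * v * f / 1e6:.1f} Mpps")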

TABLE A.2 Multimedia Formats

Format              Sampling Rate                Frequency Band
Telephone           8 kHz                        200-3,400 Hz
Teleconference      16 kHz                       50-7,000 Hz
Compact disk        44.1 kHz                     20-20,000 Hz
Digital audio tape  48 kHz                       20-20,000 Hz
CIF video           3 Mpps [360 × 288 × 30]      --
CCIR video          12 Mpps [720 × 576 × 30]     --
HDTV                60 Mpps [1,280 × 720 × 60]   --

NOTE: Mpps = megapixels per second.
SOURCE: Nikil Jayant. 1993. "High Quality Networking of Audio-Visual Information," IEEE Communications Magazine 31(9).

Scalability in bandwidth is a somewhat fuzzy situation in that users are often not conditioned to the continuum in this parameter between, or beyond, well-established anchors. For example, wideband speech is a fuzzy term that implies any bandwidth in the range between the well-defined telephone and CD grades (4 and 20 kHz), and first-generation Internet video is often understood to mean anything that is usable, albeit below TV quality (such as 10,000 to 100,000 pixels per frame). The video situation has the additional dimensions of viewing distance, physical picture size, and fractional-screen displays, which further control user appreciation of picture quality or user perception of picture degradation.

The Algorithms of Media Compression

The description of compression algorithms is beyond the scope of this appendix, and it is not needed for the purposes of this report. What is important to note, however, is that all compression algorithms are based on only two basic principles: the removal of redundancy in the input signal and the reduction of irrelevancy in it. "Redundancy" is usually characterized in a statistical fashion, while "irrelevancy" is best linked to a perceptual criterion. Compression techniques are also usefully classified into three types: (1) lossy, (2) lossless, and (3) perceptually lossless. Mathematically lossless compression is used in some archival, legal, and medical applications, while perceptual losslessness is a pragmatic criterion for a large class of applications in transmission and storage; most compression standards tend to address this criterion. Other characteristics to keep in mind are the delay and complexity of the algorithms and how they are distributed between the compression and decompression parts of the system.

For example, interactive and two-way applications call for low-delay compression, servers can typically afford high complexity, and client systems need to be relatively simple to implement. Implementation platforms can be an ASIC (application-specific integrated circuit), a DSP (digital signal processor), or an NSP (native signal processor, as on a Pentium). As a matter of calibration, a Pentium II (400-MHz) processor can decode MPEG-1 video streams in real time, and a pocket PC in 2001 has a processor that works at about half that speed (about 200 MHz).

Compression Standards

Tables A.3 and A.4 provide nonexhaustive lists of compression standards for audiovisual signals. In general, the results refer to lossy compression, although these standards include special functions for lossless compression. For example, in JPEG image compression, there is a lossy (perceptually lossless) version with typical bit rates of 0.5 to 2 bits per pixel (bpp), while the mathematically lossless version may use a bit rate of 5 to 6 bpp.

In Figure A.30, the horizontal axis displays bit rates after compression for classes of applications arranged in clusters representing speech, audio, and image applications. The bit rates range from 1 kbps to 100 Mbps. Interestingly, the geometric mean of this range is about 300 kbps, a number typical of conservative ADSL and cable modem rates in the year 2000. The data rates in Figure A.30 are strict lower bounds in the sense that in most applications, the compressed information needs to be supplemented with ancillary data.

Bit Error Protection

In a rate k/n error-correction code, k information bits are protected for transmission over an error-prone channel by adding (n - k) redundant bits. The fractional overhead is 1 - k/n (computed for a few common rates in the sketch after the list below). Sophisticated methods of error protection include these:

- Unequal error protection, in which different parts of the compressor output receive different levels of error protection, depending on models of their relative perceptual importance; and

- Joint source and channel coding, in which, for example, the total available bit rate is shared dynamically between source bits and error-protection bits, depending on a model or knowledge of the channel state.
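The overhead arithmetic for a rate k/n code, evaluated at a few common code rates (the particular rates are illustrative choices, not taken from this report):

    def fractional_overhead(k, n):
        """Redundancy fraction of a rate k/n error-correction code: (n - k)/n."""
        return 1.0 - k / n

    # A few common code rates (illustrative choices).
    for k, n in ((1, 2), (2, 3), (3, 4), (7, 8)):
        print(f"rate {k}/{n}: {fractional_overhead(k, n):.0%} "
              "of transmitted bits are redundant")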

TABLE A.3 Standards for Speech Compression

Standard (Year)     Algorithm             Bit Rate             Application
G.711 (1972)        Mu-law and A-law PCM  56-64 kbps           Network transmission
G.721 (1984, 1987)  ADPCM                 32 kbps              Bit-rate multiplexers, undersea cable
G.723 (1988)        ADPCM                 24, 40 kbps          Overload on undersea cable, data modem
G.726/G.727         ADPCM                 16, 24, 32, 40 kbps  High overload rate for undersea cable
G.728 (1992)        LD-CELP               16 kbps              Transmission at low delay
G.729 (1995)        ACELP                 8 kbps               Second-generation digital cellular
G.723 (1995)        ACELP                 6.3, 5.3 kbps        Low-bit-rate videophone
GSM (1987)          RPE-LTP               13 kbps              European digital cellular, full rate
IS-54 (1989)        VSELP                 8 kbps               North American digital cellular (TDMA)
IS-96 (1993)        QCELP                 8.5, 4, 2, 0.8 kbps  North American digital cellular (CDMA)
GSM-1/2 (1994)      VSELP                 5.6 kbps             European digital cellular, half rate
EVRC (1996)         RCELP                 8.5, 4, 0.8 kbps     North American CDMA, second generation
IS-136 (1995)       CELP                  8 kbps               North American TDMA, second generation
JDC (1989, 1992)    VSELP                 8, 4 kbps            Japanese digital cellular, full and half rates
FS-1016 (1975)      CELP                  4.8 kbps             Secure telephony, full rate
FS-1015 (1975)      LPC-10E               2.4 kbps             Secure telephony, half rate
FS-1015 (1996)      --                    2.4 kbps             Secure telephony, half rate, second generation

SOURCE: After R.V. Cox. 1999. "Current Methods of Speech Coding," in N. Jayant (ed.), Signal Compression: Coding of Speech, Audio, Image and Video. World Scientific, Singapore.

TABLE A.4 Standards for Audio, Image, and Video Compression

Standard (Year)        Algorithm          Bit Rate            Application
G.722 (1988)           Subband ADPCM      64, 56, 48 kbps     Teleconferencing
MPEG-1 (1992)          Musicam/ASPEC      384, 256, 128 kbps  Two-channel audio with video on CD
MPEG-2 (1996)          PAC                320 kbps            Five-channel surround sound for multimedia recording
DAB (1996)             PAC                160 kbps            Two-channel audio for terrestrial broadcast
JBIG (1991)            Run-length coding  0.05-0.1 bpp        Binary coded images (half-tone)
JPEG (1991)            DCT                0.25-8 bpp          Still continuous-tone images
MPEG-1/2 (1991, 1994)  MC-DCT             1-8 Mbps            Addressable video on CD
P × 64 (1991)          MC-DCT             64-1,536 kbps       Videoconferencing
HDTV (1996)            MC-DCT             17 Mbps             Advanced TV

NOTES: kbps = kilobits per second; bpp = bits per pixel.
SOURCE: After R.V. Cox. 1999. "Current Methods of Speech Coding," in N. Jayant (ed.), Signal Compression: Coding of Speech, Audio, Image and Video. World Scientific, Singapore.

FIGURE A.30 Data rates in digital representations of signals. Rates are numbers after compression. SOURCE: After R.V. Cox. 1999. "Current Methods of Speech Coding," in N. Jayant (ed.), Signal Compression: Coding of Speech, Audio, Image and Video. World Scientific, Singapore.

Resiliency to Packet Losses

Packet networks are often limited by packet losses rather than bit errors. Packet losses can be addressed by retransmission in delay-insensitive applications. In delay-sensitive communications, packet losses can be anticipated by adding redundancy in the packet generator. In sophisticated algorithms, such as embedded coding and multiple description coding, this redundancy is contained by unequal protection of subpackets, depending on models of the perceptual importance of those subpackets, as in unequal bit error protection.

Information Hiding, Steganography, Watermarking, and Multimedia Annotations

Increasingly, digital communications will include ancillary information that conveys to the end user a variety of information related to authentication (information about the sender and intended receiver, for example); such information is embedded in the main message in an unobtrusive and imperceptible form. These are the techniques of information hiding, with subclasses called steganography and watermarking. Multimedia annotations also involve additional data, but not necessarily in imperceptible or hidden form.

The overall effect of all of the above processes is that the data rate for digital communication is strictly higher than the data rate at the output of the signal compression stage. While there is no rigorous way of measuring the resulting overhead in data rate without regard to the application and its needs, the following guideline is useful: typical overall overheads are in the range of 10 to 100 percent, and the rates on the horizontal axis of the compression chart (Figure A.30) need to be increased by factors as high as 2.0, especially in the case of unfriendly access methods such as wireless links that are power- and interference-limited and/or networks that are operating in situations of overload.

In scalable media communications, the inherent excursions in data rate in the compression algorithm can well exceed the factor of 2 referred to above. In these cases, the metrics of importance are the average data rate of the scalable compression algorithm and, where available and usable, more detailed descriptions of the data rate histogram. In fact, assessments of traffic and channel loading depend directly on these difficult and highly variable characterizations of the information source. The least complex nontrivial measure of overall data rate is the average data rate after compression, multiplied by the overhead mentioned in the guideline above.
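The guideline reduces to a one-line calculation; a minimal sketch with illustrative compressed rates (8 kbps speech and a 300 kbps video stream are assumptions for illustration):

    def effective_rate_kbps(compressed_kbps, overhead_fraction):
        """Overall rate = compressed rate plus ancillary-data overhead (10-100%)."""
        return compressed_kbps * (1.0 + overhead_fraction)

    # Illustrative compressed rates: 8 kbps speech, 300 kbps streaming video.
    for rate in (8.0, 300.0):
        lo = effective_rate_kbps(rate, 0.10)
        hi = effective_rate_kbps(rate, 1.00)
        print(f"{rate:.0f} kbps after compression -> {lo:.0f} to {hi:.0f} kbps overall")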

Media-Specific Examples: Access Implications and Questions

- Toll-quality telephony versus Internet or cable modem telephony. What are the quality and delay targets in IP telephony and cable telephony? What are the consumer expectations? Is there a business case for AM-radio-grade telephony? What is the competitive landscape?

- Audio/video streaming at lite-ADSL, cable modem, and wireless speeds. Are user expectations going to be tied to television quality? What is the longevity of partial-screen solutions? What is the competitive landscape?

- Uploading of information from a home. What are the primary use cases for upload speed on demand? What are the demands of such applications as telemedicine, teleworking, and home publishing? Is there a case for symmetric uplink and downlink?

Definitive answers to these questions do not exist, but as applications mature, it will be possible to understand and quantify them, at least implicitly and qualitatively.

Research and Technology Outlook

At this time, compression technologies are mature. Although it is difficult to define the fundamental limits in this game, typical data rates for specified levels of quality are generally known. Increasing compression ratios will become the preoccupation of specialists. Likewise, decoders and clients will become pervasive and affordable. New advances in first-mile and first-meters multimedia communications will depend increasingly on advances in access speed and on innovations in networking.