Note: The text below is uncorrected machine-read text of the first 10 and last 10 pages of this chapter; the intervening pages are omitted.
Wireless Technology Prospects and Policy Options

2 Key Technology Considerations

Radio-frequency (RF) communication saw a progression of innovation throughout the 20th century. In recent years, it has been transformed profoundly by technological advances, both in the capabilities of individual radios and in the design of networks and other systems of radios. This discussion presents some highlights of recent advances and their implications for the design of radios and radio systems and for regulation and policy. It does not aim to describe the full range of technical challenges associated with wireless communications; the interested reader is referred to the 1997 NRC report The Evolution of Untethered Communications,1 which describes many of the fundamental challenges associated with wireless communications, or, for a more recent view of the technology and its applications, to several recent textbooks on wireless communications.2

1 National Research Council, The Evolution of Untethered Communications, National Academy Press, Washington, D.C., 1997.
2 See, e.g., Andrea Goldsmith, Wireless Communications, Cambridge University Press, Cambridge, England, 2005; David Tse and Pramod Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, Cambridge, England, 2005; and Theodore S. Rappaport, Wireless Communications: Principles and Practice, 2nd Edition, Prentice-Hall, Upper Saddle River, N.J., 2001.
TECHNOLOGICAL ADVANCES IN RADIOS AND SYSTEMS OF RADIOS

Digital Signal Processing and Radio Implementation in CMOS

Modern communications technologies and systems, including those that are wireless, are mostly digital. However, all RF communications ultimately involve transmitting and receiving analog signals; Box 2.1 describes the relationship between digital and analog communication. Digital signal processing (Box 2.2) is increasingly used to detect the desired signal and reject other "interfering" signals. This shift has been enabled by several trends:

- Increasing use of complementary metal oxide semiconductor (CMOS) integrated circuits (Box 2.3) in place of discrete components;
- The application of dense, low-cost digital logic (spawned primarily by the computer and data networking revolutions) for signal processing;
- New algorithms for signal processing;
- Advances in practical implementation of signal processing for antenna arrays; and
- Novel RF filter methods.

The shift relies on an important tradeoff: although the RF performance of analog components on a CMOS chip is worse than that of discrete analog components, more sophisticated computation can compensate for these limitations. Moreover, the capabilities of radios built using CMOS can be expected to continue to improve.

The use of digital logic implies greater programmability.3 It is likely that radios with a high degree of flexibility in frequency, bandwidth, and modulation will become available, based on highly parallel architectures programmed with special languages and compilers. These software-defined radios will use software and an underlying architecture that is quite different from conventional desktop and laptop computers, but they will nonetheless have the ability to be programmed to support new applications. High degrees of flexibility do come at a cost—both financial and in terms of power consumption and heat dissipation.
As a result, the wireless transceiver portion (as opposed to the application software that communicates using that transceiver) of low-cost consumer devices is unlikely to become highly programmable, at least in the near future. On the other

3 Programmability of radio functionality is distinct from the increasing degree of application programmability being introduced into mobile phones and exemplified by smart phones for which a large number of user-selected applications are available.
BOX 2.1 Analog Versus Digital Communications

In common usage, the term "analog" has come to mean simply "not digital," as in "analog wristwatch" or "analog cable TV." But for the purposes of this report it is useful to trace the meaning to its original technical usage, in early computing. From about 1945 to 1965, an era when digital computers were very slow and very costly, differential equations describing a hypothetical physical system were solved (one might say modeled) by an interconnected network of properly weighted passive components (resistors and capacitors) and small amplifiers, so that the smoothly time-varying voltages at various points in this network were precisely analogous to the time behavior of the corresponding variables (velocity, acceleration, flow, and so on) of the system being modeled. Today, we solve these same equations numerically on a digital computer, very quickly and at low cost.

In a similar way, for roughly 100 years, signals were transmitted in analog form (over wires or wirelessly) with a smoothly varying signal, representing the changing level and pitch of voice; the hue, saturation, and brightness of each point in a video image; and so forth. But just as high-speed and low-cost numerical representations and digital computations replaced analog computing, it likewise became much more reliable and less expensive to transmit digitally coded numerical samples of a signal to be reconstituted at the receiver rather than to faithfully transmit a continuously varying analog representation. In digital communications, information is encoded into groups of ones and zeroes that represent time-sampled numerical values of the original (voice, music, video, and so on) signal. Ironically, in the wireless domain, once the analog signal has been encoded into a sequence of digital values, smoothly varying forms for the ones and the zeroes must be generated so that the transmitted signal will propagate.
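The digitization described above, sampling a smoothly varying signal into coded numerical values, can be sketched in a few lines of code. This is an illustrative example, not material from the report; the tone frequency, sample rate, and 10-bit resolution are arbitrary assumed values, and `sample_and_quantize` is a hypothetical helper.

```python
import math

def sample_and_quantize(signal, duration_s, sample_rate_hz, bits):
    """Sample a continuous signal and quantize each sample to 2**bits levels."""
    n_samples = int(duration_s * sample_rate_hz)
    levels = 2 ** bits
    codes = []
    for n in range(n_samples):
        t = n / sample_rate_hz
        x = signal(t)  # continuous amplitude, assumed to lie in [-1, 1]
        codes.append(round((x + 1) / 2 * (levels - 1)))  # integer code 0..levels-1
    return codes

# A 1 kHz tone sampled at 8 kHz with 10-bit resolution (1 part in 1024)
codes = sample_and_quantize(lambda t: math.sin(2 * math.pi * 1000 * t),
                            duration_s=0.001, sample_rate_hz=8000, bits=10)
```

Each code in the resulting list is one of the "time-sampled numerical values" the box describes; transmitting the signal digitally means transmitting these integers rather than the waveform itself.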
Figure 2.1.1 shows a digital sequence of ones and zeros. The sharp on-off pulses that work so well inside a computer do not work well at all when sent through space between antennas. And so groups of ones and zeroes are represented by smooth changes in frequency, phase, or amplitude in a sinusoidal carrier, the perfect waveform of propagation. Three schemes are illustrated in Figures 2.1.2 through 2.1.4: amplitude shift keying of the carrier wave from 1 volt to 0 volts (Figure 2.1.2), frequency shift keying of the transmission frequency from f0 to f1 (Figure 2.1.3), and phase shift keying of the phase by 180 degrees (Figure 2.1.4). These ones and zeroes are interpreted at the receiver in groups of eight or more bits, representing the numerical value or other symbol transmitted.
FIGURE 2.1.1 Digital sequence of ones and zeroes—0010110010. SOURCE: Charan Langton, "Tutorial 8—All About Modulation—Part 1," available at http://www.complextoreal.com. Used with permission.

FIGURE 2.1.2 Amplitude shift keying. SOURCE: Charan Langton, "Tutorial 8—All About Modulation—Part 1," available at http://www.complextoreal.com. Used with permission.
FIGURE 2.1.3 Frequency shift keying. SOURCE: Charan Langton, "Tutorial 8—All About Modulation—Part 1," available at http://www.complextoreal.com. Used with permission.

FIGURE 2.1.4 Phase shift keying. SOURCE: Charan Langton, "Tutorial 8—All About Modulation—Part 1," available at http://www.complextoreal.com. Used with permission.
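The three keying schemes shown in Figures 2.1.2 through 2.1.4 can be sketched numerically as follows. This is a minimal illustration, not code from the report; the carrier frequencies (expressed in cycles per bit period), sample count, and the `modulate` helper are all assumptions made for the example.

```python
import math

def modulate(bits, scheme, f0=4.0, f1=8.0, samples_per_bit=100):
    """Generate carrier samples for a bit sequence using ASK, FSK, or PSK.
    f0 and f1 are carrier frequencies in cycles per bit period."""
    out = []
    for bit in bits:
        for n in range(samples_per_bit):
            t = n / samples_per_bit  # time within this bit period
            if scheme == "ASK":      # amplitude shift: carrier on for 1, off for 0
                out.append(bit * math.sin(2 * math.pi * f0 * t))
            elif scheme == "FSK":    # frequency shift: f1 for a 1, f0 for a 0
                f = f1 if bit else f0
                out.append(math.sin(2 * math.pi * f * t))
            elif scheme == "PSK":    # phase shift: 180-degree flip for a 0
                phase = 0.0 if bit else math.pi
                out.append(math.sin(2 * math.pi * f0 * t + phase))
    return out

wave = modulate([0, 0, 1, 0, 1, 1], "PSK")
```

In all three cases the transmitted signal remains a smooth sinusoid, as the box explains; only its amplitude, frequency, or phase carries the digital information.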
BOX 2.2 Digital Signal Processing

For the continuous sinusoidal signals that can be propagated from transmitter to receiver to be encoded, modulated, demodulated, and decoded using digital technology, they must be put into digital form by an analog-to-digital converter (ADC) and later returned to analog form by a digital-to-analog converter (DAC). For example, an ADC might take 500 million samples per second, with a resolution of 10 bits (1 part in 1024 accuracy). The continuous signal being received would then be represented by a series of samples spaced 2 nanoseconds apart; such a series of dots approximately represents the continuous function shown in Figure 2.2.1.

To find the frequency domain representation of this function, we can calculate its Fourier transform. But because it is now a sequence of discrete samples rather than a continuous mathematical function, we use an algorithm known as the discrete Fourier transform (DFT). It has the form

    X[k] = Σ (n = 0 to N−1) x[n] e^(−j2πkn/N),   k = 0, 1, …, N−1

And the inverse DFT has the form

    x[n] = (1/N) Σ (k = 0 to N−1) X[k] e^(j2πkn/N),   n = 0, 1, …, N−1

In these two expressions, we use N time domain samples to compute N frequency components, and vice versa. A huge improvement on the DFT is the fast Fourier transform (FFT) and the inverse FFT (IFFT). By always using N equal to a power of 2 (16, 32, 64, 128, …), the calculation is greatly simplified. The FFT and IFFT are the foundation of modern digital signal processing, made possible by high-speed, low-cost digital CMOS (see Box 2.3).

FIGURE 2.2.1 Representation of continuous function as series of digital samples. SOURCE: Charan Langton, "Tutorial 6—Fourier Analysis Made Easy—Part 3," available at http://www.complextoreal.com. Used with permission.
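The DFT pair in Box 2.2 translates directly into code. The sketch below is a direct O(N²) implementation for illustration only; a real system would use an FFT library. The test signal (a cosine with exactly 2 cycles across N = 16 samples) is an assumed example.

```python
import cmath
import math

def dft(x):
    """X[k] = sum over n of x[n] * exp(-j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """x[n] = (1/N) * sum over k of X[k] * exp(+j*2*pi*k*n/N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A cosine with exactly 2 cycles across N = 16 samples concentrates its
# energy in bins k = 2 and k = N - 2 (positive and negative frequency).
N = 16
x = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]
X = dft(x)
```

Running the inverse DFT on `X` recovers the original samples, illustrating that the two expressions in the box are exact inverses of each other.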
BOX 2.3 Complementary Metal Oxide Semiconductor Technology

The transformation of communications from analog to digital, and the related dramatic reduction in costs and increase in performance, is a consequence of the revolution in semiconductor design and manufacturing caused by the emergence of the personal computer (PC) industry. In particular, the remarkable and steady increase in performance and reduction in feature size by a factor of two every 18 months, generally known as Moore's law, has driven aggressive innovation far beyond the PC industry. By far the majority of the investment to enable this progress has been in the design and process development of complementary metal oxide semiconductor (CMOS) technology. Introduced in the 1960s, CMOS is now used widely in microprocessors, microcontrollers, and other digital logic circuits as well as in a wide variety of analog circuits. This technology for constructing integrated circuits uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field-effect transistors.

Investments also spawned a new industry structure: "fabless" companies, which design, market, and sell innovative products, along with silicon foundries, which manufacture the chips for these companies, spreading the capital investment in exotic equipment over large volumes. For example, today even a new, small company can design a complex part in CMOS and have a foundry charge $1,000 to process a silicon wafer yielding, say, 5,000 chips (20 cents each). Adding 10 cents for packaging and testing gives a cost of 30 cents for a part that is sold to a cell phone manufacturer for 40 to 60 cents. Well over 1 billion cell phones are sold each year.

hand, there are other applications, such as cellular base stations, where concurrent support of multiple standards and upgradability to new standards make transceiver programmability highly desirable.
Also, the decreasing cost of computation and memory opens up new possibilities for network and application design. The low cost of memory, for example, makes store-and-forward voice practical instead of always-on voice. This capability creates new opportunities for modest-latency rather than real-time communication and may be of increasing importance to applications such as public safety communications. Digital signal processing of the audio can also, for example, be used to enhance understandability in (acoustically) noisy environments.4

4 Note that some forms of digital signal processing—compression and some algorithms used to encode and decode audio (vocoders)—can adversely affect audio quality in certain applications. For example, the vocoders in early digital mobile phones did not cope well with wind and road noise, and there have been reports that vocoders in digital public safety systems poorly transmit such important sounds as sirens and gunshots.
The pace of improvement in digital logic stands in contrast to the much slower pace of improvement in analog components. One consequence of this trend is that it becomes potentially compelling to reduce the portion of a radio using discrete analog devices and instead use digital signal processing over very wide bandwidths. However, doing so presents significant technical challenges. As a result, at least for the present, the development of radios is tied to the pace of improvements in analog components as well as to the rapid advances that can be expected for digital logic, although promising areas of research exist that may eventually overcome these challenges.

Digital Modulation and Coding

Modulation is the process of encoding a digital information signal into the amplitude and/or phase of the transmitted signal. This encoding process defines the bandwidth of the transmitted signal and its robustness to channel impairments. Box 2.4 describes how waveforms can be constructed as a superposition of sinusoidal waves, and Box 2.5 describes several modern modulation schemes in use today. The introduction of the more sophisticated digital modulation schemes in widespread use today—such as CDMA, whereby different users sharing the same frequency band are differentiated using mathematical codes, and OFDM—has further transformed radio communications (see Box 2.6).

Many important advances have also been made in channel coding, which reduces the average probability of a bit error by introducing redundancy in the transmitted bit stream, thus allowing the transmit power to be reduced or the data rate to be increased for a given signal bandwidth. Although some of the advances come from the ability to utilize ever-improving digital processing capacity, others have come from innovative new coding schemes (Box 2.7).
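To make the idea of channel coding concrete, the sketch below uses the simplest possible code, a 3x repetition code with majority-vote decoding, over a simulated binary symmetric channel. This toy code is our own illustration, not one of the schemes discussed in Box 2.7, and the channel error probability is an assumed value.

```python
import random

def encode_repetition(bits, r=3):
    """Repeat each bit r times (the simplest channel code)."""
    return [b for b in bits for _ in range(r)]

def decode_repetition(coded, r=3):
    """Majority-vote each group of r received bits."""
    return [1 if sum(coded[i:i + r]) * 2 > r else 0
            for i in range(0, len(coded), r)]

def flip_with_probability(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(10000)]
p = 0.1  # assumed raw channel bit-error probability

uncoded_errors = sum(a != b for a, b in
                     zip(message, flip_with_probability(message, p, rng)))
received = flip_with_probability(encode_repetition(message), p, rng)
coded_errors = sum(a != b for a, b in zip(message, decode_repetition(received)))
# With p = 0.1, majority voting cuts the bit error rate from ~p to
# roughly 3*p**2 - 2*p**3, at the cost of 3x the transmitted bandwidth.
```

The tradeoff shown here, fewer residual errors in exchange for redundancy, is exactly the one the paragraph above describes; modern codes achieve far better tradeoffs than repetition.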
Low Cost and Modularity

The low cost and modularity (e.g., WiFi transceivers on a chip) that have resulted from the shift to largely digital radios built using CMOS technology make it cheaper and easier to include wireless capabilities in consumer electronic devices. As a result, developing and deploying novel, low-cost, specialized radios have become much easier, and many more people are capable of doing so. A likely consequence is continued growth in the number of wireless devices and in demand for wireless communications.
BOX 2.4 Power Spectra and the Frequency Domain

Early in the 1800s, Jean-Baptiste Joseph Fourier first showed that any periodic waveform can be represented by a (possibly infinite) sum of pure sinusoidal functions of various amplitudes. This result is surprising but true, however little the original waveform may resemble a smooth sine or cosine function. For example, a perfect square wave x(t) of frequency f can be represented by the infinite series

    x(t) = (4/π) [ sin(2πft) + (1/3) sin(6πft) + (1/5) sin(10πft) + (1/7) sin(14πft) + … ]

Figure 2.4.1 shows that adding the waveforms of just the first four terms of this equation already begins to approximate the square wave, an approximation that improves as more terms are added. This square wave can be composed by adding an increasing number of sine waves that are odd harmonics of the basic frequency of the square wave—that is, 3, 5, 7, and so forth times the frequency—at 1/3, 1/5, 1/7, and so forth times the amplitude.

FIGURE 2.4.1 Representation of square wave (solid line) by the sum of 1, 2, 3, and 4 sinusoidal waveforms (dashed lines).

Needless to say, it is impossible in practice to combine an infinite number of sine waves, but then it is also impossible to produce a perfect square wave, rising and falling in zero time. But we certainly can generate waves with very, very fast rise and fall times, and the faster they are the larger the number of harmonics they contain. Consider just the simple case of the 3rd, 5th, and 7th harmonics. This collection of sine waves can be represented in another way, by showing the amplitude of each frequency component visually. This amplitude spectrum (Figure 2.4.2) represents the signal amplitude in the frequency domain. A signal
also has a frequency domain representation of the power in the signal, which is proportional to the square of the amplitude. Especially in the case of signals radiating from an antenna, we usually show the signal power spectrum as consisting of equal positive and negative frequencies or sidebands, with half of the power in each sideband. Thus, the power spectrum of the signal from Figure 2.4.1 would look like the spectrum shown in Figure 2.4.3. These ideal-looking spectra result from combining perfectly stable, pure sine waves of precise frequencies, which are also impossible to achieve in practice. Nevertheless, the spectra do illustrate the relationship between the coefficients of the time-domain harmonics in the Fourier series and the frequency-domain components in the amplitude and power spectra. These are more clearly related by the Fourier transform, which accepts a time domain representation of a signal, such as x(t), and returns a frequency domain representation:

    X(f) = ∫ (−∞ to ∞) x(t) e^(−j2πft) dt

FIGURE 2.4.2 Signal amplitude represented in the frequency domain.
The inverse Fourier transform accepts a frequency domain representation and returns the corresponding time domain representation:

    x(t) = ∫ (−∞ to ∞) X(f) e^(j2πft) df

These two transformations are extremely important in modern wireless, because they allow information to be encoded by including or excluding different frequencies from a transmitted signal and then detecting these at the receiver, representing data in a way that is very resistant to interference and noise. These continuous integral equations form the basis for the discrete computations described in Box 2.2, which demand high-speed, specialized computation.

FIGURE 2.4.3 Power spectrum representation of the signal shown in Figure 2.4.1.
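The square-wave construction in Box 2.4 can be reproduced numerically by summing odd harmonics. This is an illustrative sketch assuming a unit-amplitude square wave of frequency f; the sample points are arbitrary.

```python
import math

def square_wave_partial_sum(t, n_terms, f=1.0):
    """Sum of the first n_terms odd harmonics of a unit square wave:
    x(t) ~= (4/pi) * [sin(2*pi*f*t) + sin(6*pi*f*t)/3 + sin(10*pi*f*t)/5 + ...]"""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1  # odd harmonics: 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * f * t) / k
    return 4.0 / math.pi * total

# The approximation sharpens as harmonics are added: at the center of the
# positive half-cycle (t = 0.25) the partial sums approach +1.
approx_1 = square_wave_partial_sum(0.25, 1)    # one harmonic: noticeable overshoot
approx_50 = square_wave_partial_sum(0.25, 50)  # fifty harmonics: close to 1.0
```

Evaluating the partial sum at points across one period traces out the successively better approximations sketched in Figure 2.4.1.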
approach can be used is constrained by the intermodulation distortion associated with real-world radio components, which limits the bandwidth that can be handled practically using digital signal processing alone. A variety of avenues are being pursued by researchers to overcome these constraints. One of them has long been of interest but has not been realized in commercial products: the use of narrow filters that are tunable under digital control over a wide range, perhaps using microelectromechanical systems (MEMS) technology.

Nomadic Operation and Mobility

Supporting nomadic operation and mobility requires more dynamic adaptation of radio operating parameters than is needed for fixed radios, which only need to cope with changes in environmental conditions. Moreover, nomadic operation and mobility make it more difficult to neatly segment space or frequency, and they complicate dynamic market approaches because they make it more difficult to buy and sell rights at the rate at which radios can move between segments.

Heterogeneity of Capabilities

As more sophisticated radios are deployed, the heterogeneity of capabilities—especially the existence of radios with much poorer performance than others—will present growing challenges. At any point in time, there will be a legacy of deployed equipment, existing frequency allocations, and existing businesses and government operations that are being made obsolete, in some sense, by new capabilities. The problem is not new, but a rapid pace of technological advancement and a concomitant explosion of applications, especially applications with different purposes and capabilities, magnifies the challenges.

Not all heterogeneity will arise from legacy systems. Some applications will have cost and/or power requirements that preclude the use of highly sophisticated radios that coordinate their behavior.
For example, the constraints on cost and power consumption for embedded networked sensors preclude the use of highly sophisticated radios that are able to do very sophisticated signal processing or complex computation to coordinate their behavior. Another manifestation of heterogeneity is the contrast between active use, which involves both transmitter(s) and receiver(s), and passive spectrum use (e.g., remote sensing and radio astronomy), which involves receivers only.14 Figuring out how to simultaneously

14 For a detailed discussion of passive scientific uses, see National Research Council, Frequency Allocations and Spectrum Protection for Scientific Uses, The National Academies Press, Washington, D.C., 2007.
accommodate more sophisticated and adaptable radios with those that are necessarily less sophisticated will be an ongoing challenge.

TIMESCALES FOR TECHNOLOGY DEPLOYMENT

A particular challenge in contemplating changes to policy or regulatory practice is determining just how quickly promising new technologies will actually be deployable as practical devices and systems and thus how quickly, and in what directions, policy should be adjusted.

Rate for Deployment of New Technologies as Practical Devices and Systems

As is natural with all rapidly advancing technology areas, concepts and prototypes are often well ahead of what has been proven feasible or commercially viable. The potential of adaptive radios, for example, has been explored (particularly for military use), but the technology has not yet been used in mainstream commercial devices or services. As described above, there is reason to expect the capabilities of radios to improve and their hardware costs to steadily decline, but many important details of operation and protocols must be worked out in parallel with technical development and regulatory change. Moreover, although great technical progress has been made in recent years, resulting in the deployment of new wireless services, wireless communications will remain a fertile environment for future basic research as well as product and service development.

Timescales for Technology Turnover

Different wireless services are characterized by the different timescales on which technology can be upgraded. The factors influencing the turnover time include the time to build out the infrastructure and the time to convince existing users (who may be entrenched and politically powerful) to make a shift. For instance, public safety users tend to have a long evolution cycle, as government procurement cycles are long and products are made to last a long time.
Cellular turnover is rapid by comparison, and technology can be changed out relatively readily (a 2-year handset half-life and a 5- to 7-year time frame for a shift to new technology are typical). The digital television transition that finally occurred in the United States in 2009 is emblematic of the challenge of making a transition where technology turnover is very slow, in part because of expectations raised by static technology and services that were developed over many decades. Importantly, the rate at which turnover is possible depends on the incentives for upgrading as well as the size of the installed base. For
instance, firms operating cellular networks have demonstrated an ability to upgrade their technology fairly quickly despite having an enormous user base, whereas aviation has a relatively small set of users but a very long turnover rate, having yet to transition from essentially 1940s radio voice technology. The primary driver of successful upgrades is for users to see tangible benefits and for service providers to have an incentive to push for the switch. Cellular subscribers gain tangible benefits from newer capabilities commensurate with the added costs (also, U.S. mobile operators generally subsidize handset cost, because it makes it easier to upgrade their network technologies and increase system capacity, somewhat offsetting the visible costs to the end user),15 whereas private pilots would incur a large capital cost and have to learn a new system even though the existing technology already meets their requirements.

TALENT AND TECHNOLOGY BASE FOR DEVELOPING FUTURE RADIO TECHNOLOGY

The changing nature of radios is creating new demands for training and education. Research and development (R&D) for radios depends on skills that span both the analog and the digital realms and encompass multiple traditional disciplines in electrical and computer engineering. Similarly, making progress in wireless networks often requires expertise from both electrical engineering and computer science. It is thus not straightforward for a student to obtain the appropriate education and training through a traditional degree program. The nature of modern radios presents another barrier to advanced education and university-based research: the CMOS chips that lie at their heart require very large-scale fabrication facilities, presenting a significant logistical barrier to university-based groups that seek to test and evaluate new techniques. This report assumes a continued stream of innovation in radio technology.
Such sustained innovation depends on the availability of scientific and engineering talent and on sustained R&D efforts. Considerable attention has been focused in recent years on broad concerns about the declining base of scientific and engineering talent and levels of research support in the United States and the implications for competitiveness, including in the area of telecommunications. For a broad look at trends and their implications for science, engineering, and innovation, see Rising

15 Incentives may differ across markets and regulatory regimes. For example, cellular upgrades have been market-driven in the United States and government-driven in the European Union (EU). The effect has been mixed. The EU push for third generation arguably got ahead of actual market demand, whereas the U.S. market moved slowly from analog to second-generation digital services, arguably giving the EU higher-quality wireless voice services sooner.
Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future;16 for a study focused on telecommunications research, see Renewing U.S. Telecommunications Research.17

The issues and opportunities described in this report involve considerations of many areas of science and engineering—including RF engineering, CMOS, networking, communications system theory, computer architecture, applications, communications policy, and economics. Addressing the challenges and realizing the opportunities will require a cadre of broad, systems-oriented thinkers. Building this talent will be a major national advantage. Radio engineering is an important area for consideration in this context, given that wireless is a fast-moving, high-technology industry that is economically important in its own right and that has much broader economic impacts. Moreover, wireless engineering encompasses an extensive skill set—including RF engineering, an ability to do RF work in CMOS technology, and an ability to work on designs that integrate RF and digital logic components—that is difficult to learn in a conventional degree program. Similarly, wireless networks involve expertise that spans both electrical engineering and computer science.

Finally, for R&D to be effective, it is important to be able to implement and experiment with new ideas in actual radios and systems of radios. Work on new radio designs requires access to facilities for IC design and fabrication. Work on new radio system architectures also benefits from access to test beds that allow ideas to be tested at scale. Given the high cost of such facilities, university R&D can be enhanced by collaboration with industry.

MEASUREMENTS OF SPECTRUM USE

The standard reference in the United States for the use of spectrum is the U.S. Frequency Allocation Chart that is published by the NTIA.
The chart separates the spectrum from 30 MHz to 300 GHz into federal or nonfederal use and indicates the current frequency allocations for a multitude of services (cellular, radiolocation, marine, land mobile radio, military systems, and so on). Although this chart is an invaluable reference in providing a comprehensive view of what frequencies are potentially in use for various

16 National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future, The National Academies Press, Washington, D.C., 2007.
17 National Research Council, Renewing U.S. Telecommunications Research, The National Academies Press, Washington, D.C., 2007.
services and in giving some indication of the complexity of frequency use, it does not shed light on a particularly critical issue—the actual density of use of the spectrum. That is, are there blank spaces in frequency, time, and space that could potentially be used for other purposes? It is increasingly asserted that much spectrum goes unused or is used inefficiently. Yet relatively little is known about actual spectrum utilization. Licensees and users are not required to track their use of spectrum. There are no data available from any sort of ongoing, comprehensive measurement program. And when spectrum measurements have been made, they were often aimed at addressing a specific problem. Proxy measurements, such as the number of licenses issued in a frequency range, have been used to characterize trends and extrapolate likely use, but they do not measure actual use and do not, of course, yield any insight into unlicensed use.18

Why Spectrum Measurement Is Hard

Perhaps the greatest challenge is that no measurement program can be fully comprehensive if all the degrees of freedom are actually to be measured. Measurements can be made only at specific locations and times; measurements at one place may not reveal much about even nearby points. Results obtained by one set of measurements are not easily applied to a different situation. The full scope of measurement is suggested by the electrospace model, in which one specifies the frequency, time, angle of arrival (azimuth, elevation angle), and spatial location (latitude, longitude, elevation) to be measured.19 Other measurement considerations include polarization, modulation scheme, location type (e.g., urban, suburban, or rural),20 and which signals are being measured (known signals, unknown signals, or noise).
Many radio systems are designed to operate with very low average power levels, and naive spectrum measurement techniques may miss use by such low-power devices.21 Moreover, a directional signal will be missed if the receiver is not pointing in the right direction. Point-to-point microwave

18 Robert Matheson, Spectrum Usage for the Fixed Services, NTIA Report 00-378, March 2000, p. xi.
19 Robert Matheson, "The Electrospace Model as a Frequency Management Tool," Addendum to the Proceedings of the 2003 ISART Conference, 2003.
20 Allen Petrin, "Maximizing the Utility of Radio Spectrum: Broadband Spectrum Measurements and Occupancy Model for Use by Cognitive Radio," Ph.D. Thesis, Georgia Institute of Technology, August 2005, p. 6.
21 Robert Matheson, letter to David Liddle in follow-up to presentation to the committee, August 27, 2004.
links and radar systems are examples of use that may be missed by spectrum measurement efforts. Radar systems emit narrow high-power pulses infrequently, making them easy to miss. Some uses, such as public safety communications, are inherently sporadic and random in time and location. Because they are normally confined to military installations, defense uses may take place in well-defined locations but will vary considerably over time. Also, measurements by definition capture only active use of the spectrum; passive uses, such as remote sensing, cannot be detected and, worse, the bands they depend on could be interpreted as empty. Similarly, without careful interpretation, guard bands established to mitigate interference for existing services could be read as unused portions of the spectrum even though these bands are in a real sense being used to enable those services.

These considerations suggest that spectrum measurement is a challenging endeavor that requires measurements at many points in space and time and the collection of a very large amount of data. They also suggest that spectrum measurement has an inherent element of subjectivity, because results may depend significantly on the particular assumptions made and methods employed.

Looking forward, measurement might be improved over the long term by requiring systems to report usage statistics and by developing and adopting a formal framework for measuring, characterizing, and modeling spectrum utilization.
Such a framework might give researchers a way to discuss spectrum utilization cogently and provide policy makers with evidence-based information about the technical factors affecting efficient utilization.22

Results from Some Measurement Activities

The NTIA has a long history of spectrum measurement work going back to at least 1973.23 Those early efforts included measurements of federal land mobile radio use in the 162-174 MHz and 406-420 MHz ranges and of Federal Aviation Administration radar bands in the 2.7-2.9 GHz range. These projects were generally considered successful because the measurements focused on a definite problem and were able to address specific questions, such as whether claimed interference was real and whether minor changes to receivers could mitigate the problem of overcrowded use. The

22 F. Weidling, D. Datla, V. Petty, P. Krishnan, and G.J. Minden, “A Framework for R.F. Spectrum Measurements and Analysis,” Proceedings of the IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2005, pp. 573-576.
23 Ibid.
NTIA conducted a number of broadband spectrum surveys in different cities in the 1990s.24 An NTIA report first issued in 1993 and updated in 2000 used proxy information as a “measurement” of spectrum usage for fixed services (e.g., common carriers).25 That report examined historical license data and observations about market and technology factors likely to affect spectrum use in order to gain insight into the degree to which the existing fixed-service spectrum bands would continue to be needed for their allocated services. One conclusion to be drawn from that report is that point-to-point microwave bands are probably underused and that the growth expected when these bands were allocated decades ago did not occur.26 Anticipated use of point-to-point microwave has moved largely to optical fiber instead, although microwave is still used in many rural areas where the traffic does not justify the cost of laying fiber.

A number of research projects have attempted to measure spectrum utilization directly.27 Shared Spectrum Company, a developer of spectrum-sensing cognitive radio technology, has conducted several measurement studies since 2000, including occupancy measurements in urban settings such as New York City and Chicago, suburban settings such as northern Virginia, and rural environments in Maine and West Virginia.28 Spectrum measurements for the New York City study were made during a period of expected high occupancy, the Republican National Convention.29 The studies aimed to determine how much spectrum might be allocated for more sophisticated wireless applications and secondary users relative to primary (licensed) users.

Some important conclusions can be drawn from these measurements. The measurements indicate that some frequency bands are very heavily

24 Frank H. Sanders and Vincent S. Lawrence, Broadband Spectrum Survey at Denver, Colorado, NTIA Report 95-321, September 1995; Frank H. Sanders, Bradley J. Ramsey, and Vincent S.
Lawrence, Broadband Spectrum Survey at San Diego, California, NTIA Report TR-97-334, December 1996; Frank H. Sanders, Bradley J. Ramsey, and Vincent S. Lawrence, Broadband Spectrum Survey at San Francisco, California, May-June 1995, NTIA Report 99-367, July 1999.
25 Robert Matheson, Spectrum Usage for the Fixed Services, NTIA Report 00-378, March 2000, p. 1.
26 Robert Matheson, letter to David Liddle in follow-up to presentation to the committee, August 27, 2004.
27 P.G. Steffes and A.J. Petrin, “Study of Spectrum Usage and Potential Interference to Passive Remote Sensing Activities in the 4.5 cm and 21 cm Bands,” Proceedings of the IEEE Geoscience and Remote Sensing Symposium 3(20-24):1679-1682, 2004; S.W. Ellingson, “Spectral Occupancy at VHF: Implications for Frequency-Agile Cognitive Radios,” Proceedings of the IEEE Vehicular Technology Conference 2(25-28):1379-1382, 2005.
28 Mark McHenry, “NSF Spectrum Occupancy Measurements Project Summary,” Shared Spectrum Company, August 15, 2005.
29 Mark McHenry and Dan McCloskey, “New York City Spectrum Occupancy Measurements September 2004,” Shared Spectrum Company, December 15, 2004.
used and that some other currently assigned frequency bands are only lightly used, at least over some degrees of freedom. Above all, the picture that emerges clearly from the measurements made to date is that frequency allocation and assignment charts are misleading: although they suggest that little spectrum remains for new applications and services, a good deal is theoretically available, provided that the right sharing or interference mitigation measures could be put in place. One might legitimately quibble over the details or the precise level of use; the real point is that there is a good deal of empty space, provided that ways of safely detecting and using it can be found.

Another broad conclusion is that the density of use becomes lower at higher frequencies. The advent of low-cost radios that can operate at frequencies in the tens of gigahertz points to a promising arena for introducing new services.

Finally, measurements of spectrum use do not capture the value of use. In addition, if a licensee internalizes the opportunity cost of underutilized spectrum and has a way to mitigate that cost, there is no need for centralized measurement and management; the empty space exists, but the best way to use it is not necessarily for the government to allow additional users.

CHALLENGES FACING REGULATORS

Technology advances bring new issues before regulators that require careful analysis. Some require a subtle understanding of the ways in which new technology may necessitate new regulatory approaches and a challenging of past assumptions about limitations and constraints. Several examples are discussed below.

Use of White Space to Increase Spectrum Utilization

The basic goal of “white space” utilization is to let operators with lower priority use the spectrum when higher-priority users leave it unoccupied.
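The notion of white space as unoccupied cells in frequency and time can be illustrated with a toy occupancy grid. The grid values and the detection threshold below are invented for illustration; real campaigns would also sweep space and angle, as the electrospace model suggests.

```python
# Find "white space" cells in a time-frequency occupancy grid.
# Values are measured power in dBm; a cell counts as unoccupied
# if its power stays below a detection threshold.
THRESHOLD_DBM = -90.0

# rows = frequency bins, columns = time slots
grid_dbm = [
    [-60.1, -95.2, -94.8, -61.0],   # bin 0: busy in slots 0 and 3
    [-96.0, -96.3, -95.1, -96.7],   # bin 1: idle throughout
    [-55.4, -56.0, -57.2, -55.9],   # bin 2: busy throughout
]

def white_space(grid, threshold):
    """Return (freq_bin, time_slot) pairs whose power is below threshold."""
    return [(f, t)
            for f, row in enumerate(grid)
            for t, p in enumerate(row)
            if p < threshold]

cells = white_space(grid_dbm, THRESHOLD_DBM)
print(cells)   # [(0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (1, 3)]
```

Bin 1 is free in every slot, while bin 0 offers white space only in its two quiet slots; a secondary device would need to vacate those cells the moment the higher-priority user returns.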
From a technical perspective, this approach requires adding sensing capability to devices so that they can determine whether a higher-priority user is using the spectral band (or bands). Such dynamic use of spectrum has not been supported in past regulatory models. In the dynamic situation envisaged in the white-space model, several new questions and considerations have to be addressed.

For instance, “occupancy” must be defined thoughtfully. Higher-priority users opposed to the use of white space might argue that any use of their spectrum could harm their transmissions, so that only “no interference” is acceptable. Yet achieving no interference has never been possible, because all
radios transmit energy outside their allowed bands and generate interference with adjacent users. Therefore, the only question, ultimately, is the degree to which interference is allowed. In the absence of a clear technical analysis of when a given level of interference actually causes significant degradation of signal, it is difficult to determine an acceptable level. How best to make that determination matters both for formulating rules to open up spectrum and for private parties negotiating what level of interference they would accept in return for a market price.

A clear technical analysis requires that several factors be considered. First, estimating the total interference load depends on a realistic statistical model for the number of likely secondary users, the transmitted power spectrum of each user, the susceptibility of the primary occupant’s receivers to these secondary signals, and the ability of the primary user to adapt its transmissions to reduce the impact of the secondary users. Given that the analysis is statistical in nature, it may be useful to approach the question in terms of a probability of degradation that should not be exceeded. If the likelihood of degradation by secondary users falls below this probability, then those secondary users would be considered as not occupying the band of the primary user. An analysis done from this perspective would help avoid situations in which highly improbable scenarios (as opposed to situations that can reasonably be expected to cause a problem) lead to the rejection of sharing arrangements.

Second, considering frequency as the only degree of freedom available to separate users makes for simpler technical analysis but is highly limiting. Radios built to perform dynamic beam forming, for instance, allow highly sophisticated spatial separation. Also, if sensing is fast enough, then it is possible to exploit white spaces in time.
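The probability-of-degradation framing above can be sketched as a Monte Carlo estimate. Every parameter here (user counts, power levels, degradation threshold) is invented for illustration; a real analysis would draw on measured propagation data and receiver-susceptibility models.

```python
import math
import random

# Monte Carlo sketch: estimate the probability that the aggregate
# interference from a random population of secondary users exceeds
# the level a primary receiver can tolerate. All numbers illustrative.
random.seed(1)

MEAN_SECONDARIES = 3                  # expected number of active secondary users
TX_DBM_MEAN, TX_DBM_SD = -40.0, 6.0   # interference per user at the
                                      # primary receiver, in dBm
DEGRADATION_DBM = -30.0               # aggregate level the primary tolerates
TRIALS = 100_000

def poisson(lam):
    """Sample a Poisson count (Knuth's method, fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def degraded():
    """One trial: does the aggregate secondary power exceed the limit?"""
    n = poisson(MEAN_SECONDARIES)
    total_mw = sum(10 ** (random.gauss(TX_DBM_MEAN, TX_DBM_SD) / 10)
                   for _ in range(n))
    return total_mw > 10 ** (DEGRADATION_DBM / 10)

p_degradation = sum(degraded() for _ in range(TRIALS)) / TRIALS
print(f"estimated P(degradation) = {p_degradation:.3f}")
```

If such an estimate falls below the agreed ceiling, the secondary users would, under the framing above, not count as occupying the primary's band.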
Thus frequency, time, and space could all be considered as tools for reducing the effects of interference to below the level of degradation defined as noninterference.

Third, spectral emissions regulations have historically considered each transmitter working independently. Yet sensing performed cooperatively across a network might offer much greater opportunity for efficient spectrum use. Just how much might be gained from such an approach is not well understood, because it depends on the statistical correlation between sensing results at different locations. Considering such an approach requires the same shift toward statistically based analysis described above.

Finally, there is the issue of sensitivity of detection. Greater sensitivity increases the probability of detection but also greatly increases the probability of false alarms. In other words, at some point increasing sensitivity causes random noise to appear as occupancy. A proper analysis requires an understanding of sensing that goes beyond simply measuring the energy in a spectral band. Most signals have distinctive
signatures that can be used to differentiate them from noise or other spurious emissions.

One opportunity to make use of white space is in the broadcast television bands. To that end, in late 2008 the FCC issued a set of rules30 under which devices use geolocation and access to an online database of television broadcasters, together with spectrum-sensing technology, to avoid interfering with broadcasters and other users of the television bands. (Alternatively, the ruling provides for devices that rely solely on sensing, provided that more rigorous standards are met.) Debate and litigation ensued following the 2008 order on such issues as how to establish and operate a database of broadcaster locations. A second order, issued in 2010 to finalize the rules, dropped the requirement that devices incorporating geolocation and database access also employ sensing.31

Adaptive Antenna Arrays and Power Limits

Antenna arrays at transmitters and receivers are being used increasingly to provide greater range, robustness, and capacity. Yet the basic regulatory strategy of defining an equivalent isotropically radiated power level for transmitters ignores many of the special characteristics of antenna arrays. As one example, this regulatory approach does not encourage the use of beam forming, which has considerable advantages over omnidirectional antennas in reducing interference.

Decreasing Cost of Microwave Radio Links

The present report describes above how standard CMOS technology can now be used to transmit in the microwave bands (60-GHz links have been demonstrated). As desired data rates rise into the gigabit-per-second range, adaptive antenna arrays will be used to obtain the necessary received power for both mobile and fixed devices. As with the previous examples, this technology is very different from what has been in use until now.
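The tension between equivalent-isotropically-radiated-power limits and adaptive arrays can be seen in a short link-budget sketch. The element counts and power levels below are invented for illustration, and the 20*log10(N) scaling assumes ideal coherent combining.

```python
import math

# Illustrative arithmetic for a transmit array: with N elements each
# radiating p_element_dbm, total radiated power rises by 10*log10(N),
# and coherent beamforming adds another 10*log10(N) of directive gain,
# so peak EIRP grows as 20*log10(N) -- while the power radiated in
# other directions does not grow correspondingly.
def peak_eirp_dbm(p_element_dbm, n_elements):
    return p_element_dbm + 20 * math.log10(n_elements)

single = peak_eirp_dbm(10.0, 1)    # one 10 dBm element
array16 = peak_eirp_dbm(10.0, 16)  # 16-element beamforming array

print(f"{single:.1f} dBm vs {array16:.1f} dBm peak EIRP")
```

A rule that caps only peak EIRP treats the 16-element beam like a roughly 34 dBm omnidirectional transmitter, even though its interference footprint off the beam axis is far smaller; this is one way the regulatory approach discourages beam forming.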
30 FCC, “Second Report and Order and Memorandum Opinion and Order in the Matter of Unlicensed Operation in the TV Broadcast Bands,” ET Docket No. 04-186, and “Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz Band,” ET Docket No. 02-380, FCC 08-260, Washington, D.C., November 14, 2008.
31 FCC, “Second Memorandum Opinion and Order in the Matter of Unlicensed Operation in the TV Broadcast Bands,” ET Docket No. 04-186, and “Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz Band,” ET Docket No. 02-380, FCC 10-174, Washington, D.C., September 23, 2010.
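The sensitivity-versus-false-alarm trade-off raised in the white-space discussion above can be illustrated with a toy energy detector. All levels here are invented for illustration; as noted earlier, practical detectors exploit signal signatures rather than energy alone.

```python
import random

# Toy energy detector: lowering the detection threshold catches weaker
# signals (higher P(detect)) but also lets random noise trip the
# detector (higher P(false alarm)). All parameters are illustrative.
random.seed(7)

NOISE_SD = 1.0        # noise amplitude (arbitrary units)
SIGNAL_AMPL = 0.5     # weak signal buried in the noise
SAMPLES, TRIALS = 50, 1000

def exceeds(signal_present, threshold):
    """One sensing interval: does average sample energy exceed threshold?"""
    e = sum((random.gauss(0, NOISE_SD) +
             (SIGNAL_AMPL if signal_present else 0.0)) ** 2
            for _ in range(SAMPLES)) / SAMPLES
    return e > threshold

def rates(threshold):
    p_detect = sum(exceeds(True, threshold) for _ in range(TRIALS)) / TRIALS
    p_false = sum(exceeds(False, threshold) for _ in range(TRIALS)) / TRIALS
    return p_detect, p_false

for thr in (1.5, 1.2, 1.0):
    pd, pf = rates(thr)
    print(f"threshold {thr}: P(detect)={pd:.2f}  P(false alarm)={pf:.2f}")
```

As the threshold drops, both probabilities rise together: past some point, the added sensitivity mostly converts noise into false "occupancy."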
ENGINEERING ALONE IS OFTEN NO SOLUTION

The previous section describes several specific issues where engineering insights would help to inform future policy and regulation. At the same time, it is important not to oversell the extent to which better engineering or understanding of the technology alone can yield solutions. In the end, an engineering analysis depends on knowledge of the possible scenarios and of what outcomes are acceptable. These inform a complex set of business, marketing, and political judgments about value and risk. For example, engineering alone does not determine whether a service supporting aviation merits greater protection from interference than a service delivering entertainment. Likewise, the density and distribution of a constellation of mobile devices (which affect their ability to interfere) cannot be determined fully a priori; they will reflect market and consumer behavior, and moreover they will change over time.