Today’s framework for wireless policy—which governs the operation of devices that make use of radio-frequency (RF) transmissions—has its roots in the technology of 80 years ago and the desire at that time for governmental control over communications. It has evolved to encompass a patchwork of legacy rules and more modern approaches that have been added over time. Although views vary considerably on whether the pace of reform has been commensurate with the need or opportunity, there have been a number of significant policy changes in recent decades to adjust to new technologies and to decrease reliance on centralized management. These developments have included the use of auctions to make initial assignments (along with the creation of secondary markets to trade assignment rights) and the designation of open bands1 in which all users are free to operate subject only to a set of “rules of the road.”
There remains, nonetheless, much debate about how the overall framework should be changed, what trajectory its evolution should follow, and how dramatic or rapid the change should be. Many groups have opinions, positions, and demands related to these questions reflecting multiple commercial, social, and political agendas and a mix of technical, economic, and social perspectives.
PRESSURES ON TODAY’S WIRELESS POLICY FRAMEWORK
The current framework for wireless policy in the United States is under pressure on several fronts:
It continues to rely heavily on service-specific allocations and assignments that are made primarily by frequency band and geographic location, and it does not embrace all of the spectrum management approaches possible with today’s technologies and expected to be available with tomorrow’s.
Despite revisions aimed at creating greater flexibility, it continues to rely significantly on centrally managed allocation and assignment, with government regulators deciding how wireless communications are to be used and by whom, despite growing agreement that central management by regulators is inefficient and insufficiently flexible.
It will not be able to satisfy the increasing and broadening demand for wireless communications that is spurred by interest in richer media, seemingly insatiable demand for mobile and untethered access to the Internet and the public telephone network, and growing communication among devices as well as people.
It does not fully reflect changes in how radios are being built and deployed now or in how they could be built and deployed in the future in response to different regulations, given that technological innovation has expanded the range of potential wireless services and the range of technical means for providing those services and at the same time has dramatically lowered the cost of including wireless functionality in devices.
Today, the complexity and density of existing allocations, assignments, and uses, and the competing demands for new uses, all make policy change difficult. Decisions will necessarily involve (1) addressing the costs and benefits of proposed changes that are (often unevenly) distributed over multiple parties, (2) resolving conflicting claims about costs and benefits, and (3) addressing coordination issues, which are especially challenging if achieving a particular change requires actions by a large number of parties. Moreover, some parties stand to gain by changing—or advocating for change—while others stand to gain by delay or retaining the status quo.
FORWARD-LOOKING POLICY DIRECTIONS
The Committee on Wireless Technology Prospects and Policy Options believes that, moving forward, the unambiguous goal for spectrum policy
should be to make the effective supply of spectrum plentiful so as to make it cheaper and easier to innovate and introduce new or enhanced services. Put another way, the goal should be to reduce the total cost—which includes the cost, if any, of licenses, and the cost of equipment, both for the end user and the network—of introducing or enhancing services. The financial cost of adverse impacts on existing users and services should also be fairly evaluated and debated in advance of regulatory changes.
Given the plethora of existing allocations and assignments, and the multitude of existing services and users associated with them, it is not possible to take a clean-slate approach. Achieving the goal stated above will thus involve several parallel efforts:
Leveraging advanced technologies, regulation, and market-based incentives to support sharing, including overlay and underlay approaches, so that new services can share spectrum with legacy services.
Streamlining and modernizing the use of bands allocated or assigned to old services to free up new areas of “white space” that can be used for new services: using market mechanisms, relinquishing government-controlled bands that are no longer needed, and shutting down obsolete services (as has happened with analog television).
Establishing “open” as the default policy regime used at 20 to 100 gigahertz (GHz). At these higher frequencies, sparser use and technical characteristics that significantly reduce the chance for interference suggest that nontraditional management approaches can predominate.
The likelihood of ongoing technological change also points to the value of establishing a more adaptive learning system for setting policy that would be better able to track and even anticipate advances in wireless technology and emerging ways of implementing and using wireless services.
The sections that follow provide a brief description of key technology considerations and outline policy options, many enabled by new technology, that will be useful in achieving the goal of increasing the supply of spectrum for enhanced or new services.
KEY TECHNOLOGY CONSIDERATIONS
Radio-frequency communication has been transformed profoundly in recent years by a number of technological advances. This section outlines key recent advances and associated trends and their implications for the design of radios and radio systems and for regulation and policy.
Profound Changes in Radio-Frequency Communication as a Result of Technological Advances in Radios and Radio Systems
Digital processing is used increasingly to detect the desired signal and to reject interfering signals. The shift to largely digital radios built using complementary metal oxide semiconductor (CMOS) technology (a high-density, low-power-consumption technology for constructing integrated circuits) has made it much cheaper and easier to include wireless capabilities in consumer electronic devices. As a result of the reduction in costs for radio technology, the barriers to developing and deploying novel, low-cost, specialized radios have become much lower, and more firms and other organizations have become capable of and potentially motivated to participate. Growth in the number of wireless devices of various types and in the demand for wireless communications is likely to continue.
Technological capabilities are also driving the introduction of new radio system architectures, including a shift away from centralized systems to more localized transmissions in distributed systems that use very small cells (the smallest of those being deployed today are called femtocells) or mesh networks, and a shift from centralized switching to more distributed, often Internet-Protocol-based, networks.
Another important shift in radios has been the use of new techniques that permit greater dynamic exploitation of all available degrees of freedom—frequency, space, time, and polarization—making it possible to distinguish signals in a dynamic, fine-grained, and automated fashion. This capability offers the opportunity to introduce new options for assigning usage rights.
The ability to leverage sustained improvements in the performance of digital logic also opens up opportunities to build radios that are much more flexible and adaptable. Such radios can change their operating frequency and modulation scheme, can sense and respond to their environment, and can operate cooperatively to create new opportunities to make more dynamic, shared, and independently coordinated use of spectrum. (They cannot, however, directly sense passive users, which means that special measures such as registries or beacons are needed for detection of passive users.) The result is that radios and systems of radios can operate and cooperate in an increasingly dynamic and autonomous manner.
Although increased flexibility involves greater complexity, cost, and power consumption, it enables building radios that can better coexist with existing radio systems, through both underlay (low-power use intended to have a minimal impact on the primary user) and overlay (agile use by a secondary user of “holes” in the time and space of use by the primary user). Moreover, flexibility makes it possible to build radios with operating parameters that can be modified to comply with future policy or rule changes or future service requirements.
The use of CMOS to build radios and digital processing together with other advances in RF technology opens up a new set of opportunities in the form of low-cost, portable radios that are becoming increasingly practical at frequencies of 60 GHz and above. Radios operating in this domain must confront a number of challenges, including limited free-space propagation distances (especially in the oxygen absorption bands around 60 GHz) and very limited penetration through and diffraction around walls of buildings or other obstacles. On the other hand, these characteristics make such radios very useful in providing very large bandwidths over short range.
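The propagation challenge at these frequencies follows directly from the free-space path loss relationship, which grows with the square of frequency. The sketch below (illustrative figures only; the distance and frequencies are chosen for familiarity, not drawn from the report) compares the loss at a common Wi-Fi frequency with the loss at 60 GHz over the same path:

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * frequency_hz / c)

# Path loss over a 100 m free-space path at 2.4 GHz versus 60 GHz.
loss_2g4 = free_space_path_loss_db(100, 2.4e9)  # roughly 80 dB
loss_60g = free_space_path_loss_db(100, 60e9)   # roughly 108 dB

# The gap is 20*log10(60/2.4), about 28 dB, before adding the extra
# oxygen-absorption penalty in the bands around 60 GHz.
print(f"{loss_2g4:.1f} dB at 2.4 GHz, {loss_60g:.1f} dB at 60 GHz")
```

The same arithmetic that limits range is what makes dense spatial reuse attractive: a short-range, high-loss link leaks little energy into a neighbor's cell.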
Interference as a Property of Radio Receivers and Radio Systems, Not Radio Signals
It is commonplace to talk about radio signals interfering with each other, a usage that mirrors the common experience of hearing broadcast radio signals transmitted on the same channel overlap one another. However, radio signals themselves do not, generally speaking, interfere with each other in the sense that information is destroyed. Interference reflects a receiver’s inability to distinguish between the desired and undesired signals. The cost of separating these signals is ultimately reflected in design complexity, hardware cost, and power consumption. As a result, any practical radio (i.e., one of practical size, cost, and power consumption) will necessarily throw away some of the information needed to resolve signal ambiguity. As the performance and capabilities of radios continue to improve over time, their ability to distinguish between signals can be expected to improve. However, power consumption will remain an especially challenging constraint, especially for portable devices, and even a modest additional device cost can jeopardize the commercial viability of a product or service.
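The point that interference lives in the receiver, not the signal, can be made with simple link arithmetic. In the sketch below (all power levels and the rejection figure are hypothetical, chosen only to illustrate the effect), the same pair of signals yields a hopeless signal-to-interference-plus-noise ratio at a receiver with no off-channel rejection, and a comfortable one at a more selective receiver:

```python
import math

def sinr_db(signal_dbm, interferer_dbm, rejection_db, noise_dbm=-100.0):
    """SINR at a receiver that attenuates an off-channel interferer by
    rejection_db before detection. All figures are illustrative."""
    def to_mw(dbm):
        return 10 ** (dbm / 10.0)
    interferer_mw = to_mw(interferer_dbm - rejection_db)
    return 10.0 * math.log10(to_mw(signal_dbm) / (interferer_mw + to_mw(noise_dbm)))

# Same signals, different receivers: only the receiver changes the outcome.
cheap = sinr_db(-70.0, -60.0, rejection_db=0.0)       # about -10 dB: "interference"
selective = sinr_db(-70.0, -60.0, rejection_db=40.0)  # about +27 dB: usable link
```

The selective receiver costs more in filtering, complexity, and power, which is exactly the tradeoff the paragraph above describes.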
Persisting Technical Challenges
Even as the capabilities and the performance of radios continue to improve, several hard technical problems can be expected to persist. These technical challenges—discussed in more detail below in this report—include power consumption, nonlinearity of radio components, support for nomadic operation and mobility, and coping with the heterogeneity of capabilities, including both legacy equipment and systems that are inherently constrained, such as embedded network sensors and scientific instruments that passively use spectrum (e.g., for remote Earth sensing and radio astronomy).
Nonuniform Timescales for Technology Replacement
Different wireless services are characterized by different timescales for the removal of old technology from service and the deployment of new technology. The factors influencing the turnover time include the time to build out the infrastructure, the time to turn over the base of end-user devices, and the time to convince existing users (who may be entrenched and politically powerful) to make—and pay for—a shift, as well as the incentives for upgrading and the size of the installed base.
Considerable Uncertainty About the Rate at Which New Technologies Can Be Deployed Practically
A particular challenge in contemplating changes to policy or regulatory practice is determining just how quickly promising new technologies will be deployable as practical devices and systems and thus how quickly, and in what directions, policy should be adjusted. As is natural with all rapidly advancing technologies, the concepts and prototypes are often well ahead of what has been proved to be technically feasible or commercially viable. At the same time, technical advances sometimes can be commercialized quickly, although deployment and use might also require adjustments to regulations, a process that historically has taken longer.
Spectrum Use Lower Than Allocations and Assignments Suggest, Especially at Higher Frequencies
Quantifying how well and how efficiently spectrum is used is quite challenging. Measurements may miss highly directional or periodic use and cannot detect passive uses such as radio astronomy. These caveats notwithstanding, measurements suggest that some allocated and assigned frequency bands are very heavily used whereas others are only lightly used, at least in certain places and at certain times. The published frequency allocation and assignment charts are thus potentially misleading in their suggestion that little spectrum is available for new applications and services. A good deal of empty space exists in the spectrum; the challenge is to find ways of safely detecting and using it.
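A common way such occupancy measurements are made is simple energy detection: sample the received power in a band over time and count the fraction of samples above a detection threshold. The sketch below (the threshold and sample values are illustrative, not taken from any survey) also carries the caveat noted above, since an energy detector sees nothing from passive uses:

```python
def occupancy_fraction(power_dbm_samples, threshold_dbm=-95.0):
    """Fraction of samples in which measured power exceeds a detection
    threshold -- a crude band-occupancy estimate. Energy detection of
    this kind misses highly directional links and passive uses such as
    radio astronomy, which emit nothing to detect."""
    if not power_dbm_samples:
        return 0.0
    busy = sum(1 for p in power_dbm_samples if p > threshold_dbm)
    return busy / len(power_dbm_samples)

# A band that is idle most of the time but bursts periodically:
samples = [-110.0] * 90 + [-60.0] * 10
print(occupancy_fraction(samples))  # 0.1
```

An occupancy of 0.1 in a band whose allocation chart shows it fully assigned is precisely the kind of gap between paper and practice described above.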
ENABLERS OF A MORE NIMBLE, FORWARD-LOOKING SPECTRUM POLICY FRAMEWORK
The committee identified the following approaches as enablers of a more nimble approach to spectrum policy.
Abandon the Extremes in the “Property Rights” Versus “Commons” Debate
The terms “property rights” and “commons” are shorthand for particular approaches to spectrum management—approaches that reflect philosophical and ideological perspectives as well as technical and policy alternatives. The property rights approach relies on a well-specified and possibly exclusive license to operate and on rights that can be established or transferred through an administrative proceeding, auction, or market transaction. It is intended to facilitate the creation of a market in infrastructure access and use rights. The commons or open-access approach relies on establishing license-free bands in which users must comply with specified rules, such as limits on transmitted power. It is intended to facilitate a market in devices and services based on symmetrically applied infrastructure use and access rights.
Each has advantages and disadvantages and associated transaction costs. Each involves different incentives, and different and complementary loci, for innovation. When carefully specified, neither pure version can at present be determined to be uniquely “better” than the other. Moreover, there is a much larger space of alternatives, and commercial forces can help drive their evolution and selection provided that the regulatory structure is not overly rigid. This suggests adopting a policy framework that avoids detailed allocation of spectrum in favor of one that uses market mechanisms for spectrum allocation where they make sense and uses an open-access mechanism in other instances. Where to draw the line between the two general approaches (licensed or exclusive-use allocations versus open access)—and which hybrids of the two approaches might be useful—will shift as technological capabilities, deployed services, and business models continue to evolve.
Leverage Standards Processes but Understand Their Limitations
Regulators often rely, either explicitly or implicitly, on standards bodies to define the technical standards that are ultimately needed to implement rulings for proposed new allocations and services. On the one hand, standards-setting organizations are viewed as being more nimble and better able than regulatory bodies to focus on technical issues. On the other hand, as standards take on greater importance, the number of competing players and conflicting interests grows, raising the risks that a large player may try to dominate the process, that standards setting may deadlock, or that only certain societal interests are reflected. Some ways to address these risks have been identified, such as a “one company, one vote” rule to counter attempts to dominate the process by sending multiple delegates, but such an approach has tradeoffs as well.
Collect More Data on Spectrum Use
There are many gaps today in knowledge about the use of spectrum. Measuring use is difficult and has not been done systematically, leaving policy makers unable to readily assess claims and counterclaims about the use or nonuse of spectrum. Advances in radio technology, however, make it possible to contemplate new ways of collecting data on spectrum use, such as the deployment of networks of sensors and the incorporation of sensing capabilities in equipment deployed for other purposes. Such capabilities would enhance the ability of regulators to enforce compliance with operating rules, to assess conflicting claims about harmful interference more quickly, and to provide the data required to implement spectrum management schemes that depend on identifying unused spectrum.
Ensure That Regulators Have Access to Technology Expertise Needed to Address Highly Technical Issues
As this report argues, spectrum policy is entering an era in which technical issues are likely to arise on a sustained basis as technologies, applications, and services continue to evolve. The committee believes that the Federal Communications Commission (FCC) would therefore benefit from enhancing its technology assessment and engineering capabilities and suggests several ways to gain such expertise:
Make it a priority to recruit top-caliber engineers/scientists to work at the FCC, perhaps for limited terms.
Use an external advisory committee to provide the FCC with outside, high-level views of key technical issues. (Indeed, in the past, the FCC convened the Technology Advisory Council to play just such a role.2)
Add technical experts to the staff of each commissioner.
Tap outside technical expertise, including expertise elsewhere in the federal government such as at the Department of Commerce’s Institute for Telecommunication Sciences and the National Institute of Standards and Technology (NIST), or through a federally funded research and development center.
Sustain Talent and Technology Base for Future Radio Technology
The opportunities described in this report rely on innovation in both technology and policy. Innovation in wireless technology involves many areas of science and engineering—including RF engineering, digital logic, CMOS, networking, computer architecture, applications, policy, and economics—and often expertise in combinations of these areas that is difficult to obtain in a conventional degree program. Research investments in wireless technologies by federal agencies such as the National Science Foundation, Defense Advanced Research Projects Agency, National Telecommunications and Information Administration, and NIST help to build the knowledge base for future innovation and to educate and train tomorrow’s wireless engineering talent. Research efforts can be buttressed by an infrastructure for implementing and testing new ideas in radios and systems of radios. Test beds allow radio system architectures to be tested at scale, and access to facilities for integrated circuit design and fabrication makes it possible to build prototypes.
FORWARD-LOOKING POLICY OPTIONS
Consider “Open” as the Default Policy Regime at a Frequency Range of Approximately 20 to 100 GHz
At frequencies of 20 to 100 GHz, the potential for legacy problems and for interference (in the classical sense) is lower, suggesting that nontraditional (open) approaches could predominate for use of spectrum in this range.3 Adopting an open approach for a frequency domain that will become increasingly accessible technologically and commercially attractive several years from now would set the stage for more flexible and adaptive future spectrum management. FCC policy has already moved in this general direction, with an unlicensed regime established in a band at 57 to 64 GHz and licensed access to bands at 80 and 95 GHz made available on a first-come, first-protected basis.
Spectrum use is relatively low at 20 to 100 GHz compared to use at frequencies below 20 GHz, but existing users are likely to argue vociferously for ongoing protection, and some exceptions to the open rule will probably be needed to protect certain established services and passive scientific uses.
Use New Approaches to Mitigate Interference and a Wider Set of Parameters in Making Assignments
Protecting against harm from interference has both technical aspects (how well a radio or radio system can separate the desired from undesired signals) and economic dimensions (the costs of building, deploying, and operating a radio or radio system with particular technical characteristics that make it easier to separate the signals).
Provided that the transaction costs are low enough and that agreed-upon protocols for coordination exist, usage “neighbors” can negotiate mutually satisfactory solutions to interference problems that take into account the financial benefits, costs, and technology opportunities.4 Given the complexity of defining the technological options for any given communication in the context of other local attempts to communicate, as well as the difficulties of determining who is a “neighbor,” particularly for mobile and nomadic systems, the transaction costs may be significant.5 The size of these costs and their implications for solutions that rely on negotiations will depend on such factors as the number and diversity of systems and users and are a subject of ongoing debate.
Receivers are increasingly able to discriminate a desired signal from an undesired one, some technologies provide new tools for mitigating interference, and other new technologies make it possible to exploit all degrees of freedom in a dynamic fashion, opening new avenues for mitigating interference. Mitigation of interference can also be addressed in terms of the behavior of systems of radios rather than of individual radios and by coordinating the behavior of multiple systems. A key question is how best to establish incentives for such cooperation.
Introduce Technological Capabilities That Enable More Sophisticated Spectrum Management
The use of certain technologies, some of them emerging and some of them available but not widely deployed, would make it easier to introduce new services into crowded frequency bands. In particular, it might be possible to overlay unlicensed use onto licensed use if receivers were suitably equipped. Another enabling technology is smart antennas, which could be used to focus transmitted power, scan the environment for other transmissions, and spatially separate transmissions to help avoid interference. Migrating current nondigital services to more efficient digital transmission will be a major challenge, especially for services that have large and/or politically powerful legacy bases.
Migrating to higher-quality receivers has a cost in dollars, design complexity, and power consumption. Even small additional costs matter a great deal when service providers are fighting for pennies. But the additional investment could have a big payoff for those who seek to introduce enhanced or new services.
Trade Near-Absolute Outcomes for Statistically Acceptable Outcomes
Although statistical models have long been used in spectrum analysis, the underlying conservative assumptions have emphasized avoidance of interference to an extent that has significantly affected efficient use of spectrum. An alternative is to relax constraints so as to normally (but not always) provide good outcomes, as is done in both Internet communication (best-effort packet delivery) and cellular telephony (which provides mobility in exchange for gaps in coverage and lower audio quality). With this approach, adverse impacts on users would be rare even though technical performance might be measurably but tolerably worse for users. A relaxation of requirements could significantly open up opportunities for nonexclusive use of frequency bands through a rebalancing of the risk of interference and the benefits of new services. This approach might not be appropriate, however, for services that demand guarantees of especially high-quality service (e.g., for certain safety-critical systems). Although regulatory proceedings could be used to implement such a shift, it might be preferable for licensees to negotiate mutually beneficial arrangements.
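The difference between a worst-case rule and a statistical one can be made concrete with a small Monte Carlo exercise. In the sketch below (every number, including the transmit power, path-loss model, and tolerance, is hypothetical and chosen purely for illustration), a worst-case rule would forbid the secondary transmitter entirely, because an interferer at the minimum distance far exceeds the receiver's tolerance; a statistical rule instead asks how often random placements actually exceed it:

```python
import math
import random

def interference_exceedance(n_trials=100_000, seed=1):
    """Monte Carlo sketch: fraction of random interferer placements that
    exceed a receiver's interference tolerance. Illustrative numbers only."""
    rng = random.Random(seed)
    tx_power_dbm = 20.0    # hypothetical interferer transmit power
    tolerance_dbm = -60.0  # hypothetical harmful-interference threshold
    exceeded = 0
    for _ in range(n_trials):
        d = rng.uniform(10.0, 1000.0)  # interferer distance, meters
        # Simple path-loss model: 40 dB at 10 m, exponent 3.5 beyond.
        received = tx_power_dbm - (40.0 + 35.0 * math.log10(d / 10.0))
        if received > tolerance_dbm:
            exceeded += 1
    return exceeded / n_trials
```

Under these assumed numbers only about one placement in eight exceeds the threshold, so a rule that accepts rare, bounded harm admits a class of secondary use that a worst-case rule excludes altogether.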
Design for Light as Well as Design for Darkness
Many systems, notably cellular phones, have been “designed for darkness”—that is, with the assumption that a particular band has been set aside for a particular service or operator and that there are no other emissions in that band. An alternative is to “design for light,” with the assumption that the operating environment will be noisy and cluttered. Both approaches are reasonable for certain applications and services, but there are tradeoffs between (1) the ease with which higher spectral efficiency can be achieved under design for darkness, thus allowing for lower cost and reduced power consumption, and (2) the greater flexibility to support multiple and diverse uses under design for light. The historical preference has been to design for darkness, but today technological advances suggest opening up more bands in the design-for-light modality.
Consider Regulation of Receivers and Networks of Transceivers
Much regulation has focused on transmitters, and rules have specified transmission frequency and bandwidth, geographical location, and transmission power. Increasing use of new radio architectures (discussed above) suggests that the scope of inquiry can be broadened to look at the properties and behaviors of receivers and networks of transceivers. Better receiver standards would create an environment in which receiver capabilities present a lower barrier than they do today for implementing new spectrum-sharing schemes. Expanding the scope of policy or regulation to include a system of radios rather than an individual radio would open up new opportunities, such as the possibility of exploiting a network of radios to implement a listen-before-send protocol reliably, thereby avoiding the hidden node problem, in which one radio cannot detect transmissions from another radio.
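The hidden node problem, and why a network of radios sidesteps it, can be shown with a three-node toy model (the positions and sensing range below are invented for illustration). Radios A and C are each within range of B but out of range of each other, so C's local carrier sense misses A's transmission; coordination through B catches it:

```python
def can_hear(x, y, sense_range=150.0):
    """True if two radios at positions x and y are within sensing range."""
    return abs(x - y) <= sense_range

# Three radios on a line; A and C both want to transmit to B.
A, B, C = 0.0, 100.0, 200.0

# Plain listen-before-send: C senses the channel locally and, hearing
# nothing from A, would transmit anyway and collide at B (the hidden node).
c_defers_alone = can_hear(C, A)  # False: A is hidden from C

# Networked coordination: B, which hears both, can report the channel
# busy, so C defers even though it cannot hear A directly.
c_defers_with_network = can_hear(C, A) or (can_hear(C, B) and can_hear(B, A))
```

Treating the system of radios, rather than each radio alone, as the regulated unit is what makes the second, reliable behavior possible.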
Exploit Programmability So That Radio Behavior Can Be Modified to Comply with Operating Rule Changes
Because radios can be made highly programmable, albeit with tradeoffs in complexity, cost, and power consumption, their operating parameters can be made modifiable to comply with policy or rule changes. Deployment of devices with such capabilities opens up new opportunities for more flexible regulation and more incremental policy making: (1) policies could be written less precisely up front, (2) policies would not have to be homogeneous and could be adapted to local environmental conditions such as signal density, (3) the operating rules of existing devices could be revised to accommodate new technology, and (4) devices could more easily be certified for international use because they can readily be switched to comply with local policy. One result could be greater speed of deployment for new technologies and services.6 Over time, the introduction of such capabilities could be expected to impose a less onerous performance and cost penalty. Future regulations could take advantage of this opportunity by specifying, for example, that licenses granted after a certain date would require use of devices with a certain degree of reprogrammability.
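One way such reprogrammability might look in practice is a device that loads a machine-readable rule set for its jurisdiction and clamps its requested operating point to comply. The sketch below is entirely hypothetical: the region names, band edges, and power limits are invented for illustration and do not correspond to any actual regulation:

```python
# Hypothetical per-jurisdiction operating rules (all values illustrative).
RULES = {
    "region_a": {"band_mhz": (5150.0, 5350.0), "max_eirp_dbm": 23.0},
    "region_b": {"band_mhz": (5150.0, 5250.0), "max_eirp_dbm": 20.0},
}

def compliant_parameters(region, desired_mhz, desired_eirp_dbm):
    """Clamp a radio's requested operating point to the local rules,
    the way a reprogrammable device might after a regulatory update."""
    rule = RULES[region]
    low, high = rule["band_mhz"]
    if not (low <= desired_mhz <= high):
        raise ValueError(f"{desired_mhz} MHz is outside the permitted band")
    return desired_mhz, min(desired_eirp_dbm, rule["max_eirp_dbm"])

# The same hardware complies with either region by reloading its rules:
print(compliant_parameters("region_a", 5200.0, 30.0))  # (5200.0, 23.0)
print(compliant_parameters("region_b", 5200.0, 30.0))  # (5200.0, 20.0)
```

Updating the rule table, rather than replacing hardware, is what would let policies be revised incrementally and adapted to local conditions.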
Use Adaptive and Environment-Sensing Capabilities to Reduce the Need for Centralized Management
As agility, sensing, and coordination capabilities improve and as etiquettes and standards for these capabilities develop, opportunities will arise for scaling back centralized management. Potential advantages of this approach include a lower barrier to entry (because neither engagement with a regulator for spectrum assignment nor negotiation with an existing license holder would be necessary) and greater flexibility of use (because operation would be defined primarily by the attributes of radio equipment rather than regulation). Potential disadvantages of this approach include uncertainty about the technical feasibility and the costs of building more capable radios with the degree of agility, coordination, and environmental sensing required for effective decentralized operation. Such a shift would also involve assessing tradeoffs between the more rapid introduction of services made possible in a decentralized regime and the significant capital investment made and efficiencies achieved, at least in some instances, under a centralized regime.
Establish Enhanced Mechanisms for Dealing with Legacy Systems
In recent years, notable efforts to deal with legacy systems have included relocating point-to-point microwave services to allow deployment of personal communications service cellular telephony and the relocation of Nextel cell services out of public safety bands. More recently, relocation of government services as well as broadcast radio services and fixed services has been undertaken to allow the introduction of new bands for 3G/advanced wireless services. Modifying infrastructure to accommodate such change can be difficult and expensive; an even bigger legacy challenge is the need to migrate potentially millions of devices owned and operated by consumers and other end users. This task has proven easier when the market dynamics are such that end-user technology is regularly refreshed (as in mobile telephony, where new handsets with new features enter the market frequently, where the cost of handsets is often partly covered in the service fees, and where regular upgrades are made available at little additional cost to the subscriber) and harder where retrofitting is not practical and hardware has historically had a long lifetime (as in aircraft and public safety radios). The difficulty of making changes also depends, of course, on the relative political clout of the incumbents and those seeking to introduce new services.