PRESSURES ON TODAY’S WIRELESS POLICY FRAMEWORK
The current wireless policy framework is based on the technology of more than eight decades ago and on the desire, at that time, for governmental control over communications. It has evolved to encompass a patchwork of legacy rules and more modern approaches added over time. Nonetheless, there is wide acceptance that the rules are ripe for change, to better reflect the technological options available today and in the future. The current framework is under pressure today on several fronts:
The current framework continues to rely heavily (with a few exceptions) on service-specific allocations and assignments that are made primarily by frequency band and geographic location and does not encompass all of the spectrum management approaches possible. Allocation and assignment of services by frequency band were historically seen as the only technologically feasible way of allowing multiple wireless systems and services to coexist. Today, technology advances make it possible to use additional degrees of freedom to separate transmissions, introducing new options for allocating usage rights. In addition, new frontiers are being opened by the emergence of inexpensive, small devices that operate at 20 to 100 GHz.
Despite revisions aimed at ensuring greater flexibility, the current framework continues to rely significantly on centrally managed allocation and assignment, with government regulators deciding how and by whom wireless communications are to be used. Spectrum policy has become more flexible over the past several decades in such areas as permitted modulation waveforms and types
of use and the adoption of less centralized models such as unlicensed bands and white space. Nonetheless, the past decade has seen widespread agreement that central management by regulators is inefficient and insufficiently flexible—an agreement that reflects the complexity of the problem and the dispersion throughout the economy of the information required to make decisions.1 It also reflects concerns about whether government institutions are sufficiently nimble to make efficient and timely decisions.
The current framework will not be able to satisfy the increasing and broadening demand for wireless communications. One source of this demand is greater use of richer media (such as video) that requires higher data rates. Another is the continued growth in Internet applications and services and the growing demand for untethered and mobile access to them. Demand for mobile access to the public telephone network has continued—the leading example of a more general shift toward mobile interpersonal communication. Together, these have resulted in rapid growth in the number of users of wireless devices and services. Increasingly, communications are between devices as well as people, notably reflected in growing interest in sensor networks, and together, these trends may overwhelm the ability of the existing framework to enable introduction of new communications services to meet demand.
The current framework does not fully reflect changes in how radios are built and deployed now or in how they could be built and deployed in the future in response to different regulations. Technological innovation has expanded the range of potential wireless applications and services and the technical means for providing them. At the same time, it has dramatically lowered the cost of including wireless capabilities in devices. The old regime and technology placed a premium on simple, low-cost receivers and did not impose tight cost constraints on transmitters. New technology enables the deployment of many more, and more capable yet inexpensive, transceivers. Today, the population of deployed radios has shifted from one dominated by a small number of transmitters and many receivers to a population that also contains many more transceivers (e.g., every cell phone is a transmitter as well as a receiver).
KEY CONSIDERATIONS FOR A FUTURE POLICY FRAMEWORK
Enabling More Nimble Evolution of Spectrum Policy
The current spectrum plan reflects decades of historical practice and in its myriad allocations and assignments reflects many stages of technology and policy development. It thus encompasses not only the fixed allocations made years ago for services that use now-outdated technology (e.g., AM radio) but also new regulatory and technology approaches (e.g., ultrawideband).
Generally speaking, allocations for services reflect the frequency range that was practical at the time a particular service was introduced, and many services introduced decades ago persist today. It was often possible to fulfill demand for new services by exploiting the higher frequencies that became available as advances were made in device and radio technology. Today, propagation and penetration considerations constrain, for many applications, the utility of the higher frequencies that are less crowded, and so it is no longer possible to free up spectrum for those applications simply by moving to higher frequencies. In addition, following many decades of fitting in new services wherever there was free space, there is little or no unclaimed spectrum at lower frequencies. Much of the current spectrum framework also reflects a time when operating rights were fairly well defined and when there were relatively few systems, system operators, and transmitters.
The complexity and density of existing allocations, assignments, and uses, coupled with competing demands for new uses, especially at lower frequencies, mean that any change will be difficult. It will involve careful consideration of the specifics of allocations, assignments, and uses in specific frequencies as well as the particular technical characteristics of particular frequencies and proposed applications. Regulators must approach this evaluation carefully lest they end up simply reinventing old command-and-control approaches. Change may also involve addressing the costs and benefits of proposed changes that are (often unevenly) distributed over multiple parties, resolving conflicting claims about costs and benefits, and addressing coordination issues, which are especially challenging if achieving a particular change requires actions by a large number of parties. Moreover, some parties gain by changing while others gain by waiting. As a result, decision making ends up, broadly, being a political question.
Today, more flexible and adaptable radios and a world in which these systems are in the hands of millions of people suggest the need for correspondingly nimble and flexible processes for developing and evolving future wireless communications policy. In essence, what will be needed is an approach that is not necessarily completely right the first time, but right over time. That is, the approach should allow for experimentation and feedback, and the regulatory system should be able to track and even anticipate advances in wireless technology and emerging ways of implementing and using wireless services. Developing such a system in detail is beyond the scope of this committee’s charge, but it is with such an objective in mind that the following items are offered as promising avenues for progress.
Avoiding the Extremes in the “Property Rights” Versus “Commons” Debate
The terms “property rights,” “commons,” and “greater public good” are used as shorthand for particular approaches to spectrum management. “Property rights” refers to an approach that relies on a well-specified and possibly exclusive license to operate a service using a set frequency range, location, transmitted power, and so forth. These rights can be established or transferred through an administrative proceeding, auction, or market transaction. Ideally, any of the dimensions along which the rights are defined can then be redefined through market transactions.2 “Commons” refers to an approach that establishes a band in which those who operate devices do not need to obtain a license and instead must comply with rules that apply to all devices operating in that band, such as limits on transmitted power. This approach is intended to incentivize the development of devices that perform better in a noisy, shared environment, as an alternative to using market incentives to prioritize potential sources of radiation in a given channel by the value of the communication carried in that channel.3 “Greater public good” refers to government-sponsored free use of the spectrum for such purposes as national security, public safety, and science.
Each approach has advantages and disadvantages, transaction costs, and incentives for and loci of innovation. None of the three approaches can at present be judged better than the others. Moreover, there is a much larger space of alternatives that combine attributes of these approaches, and the dividing lines between the approaches will shift as technological capabilities, deployed services, and business models evolve. These observations suggest avoiding an overly rigid regulatory structure or sole reliance on a single approach. Instead, they suggest using each approach where practically and politically feasible, measuring and monitoring its performance, and using those results
to inform future allocations. Regulators and policy makers will need to be able to track these developments and guide where the dividing lines should be in the future.
Leveraging the Role of Standards Setting in Regulatory Decision Making
Standards are stable and well-maintained specifications that are provided by vendors, service providers, nonprofit organizations, or ad hoc organizations. They specify attributes such as interoperability and compatibility, which are often important in regulatory proceedings. Regulators often rely, at least implicitly, on technical standards that guide those building devices and services in how to comply with the regulations. This reliance on standards setting reflects two related ideas—first, that regulators may be better positioned to review technical proposals already captured in standards than to recommend specific technical approaches, and second, that it may be best to leave some of the technical details needed to define a service to standards rather than rules.
However, the process of forging consensus on standards is not easy. As in other domains, standards for wireless technologies have tended to be characterized less by engineers seeking consensus resolution of largely technical matters and more by contention among players with significant stakes in the outcome (e.g., incumbents seeking to protect their position, participants who have investments in intellectual property, or participants who have differing business interests with respect to proposed services or applications). As standards have taken on greater importance, the number of competing players and conflicting interests has grown, making the processes more cumbersome.
One risk is that large incumbent players can dominate by virtue of their greater resources and greater participation in standards bodies, although this risk can be partly mitigated, with tradeoffs, by moving to a one-company, one-vote formula. Another risk is that standards efforts can degenerate into a battle between two camps with disparate proposals, which can end either in deadlock or in a standard that accommodates both camps—essentially a nondecision that can, at the extreme, prevent a product from coming to market or necessitate the establishment of another industry group that creates another standard narrow enough to be implementable.4
Another challenge for standards setting is that standards are most useful once a new service has already seen at least some use. That is, standards processes are most useful in helping a set of actors forge a common approach for a service that has already been developed and, at least experimentally, deployed. They are much less effective where new solutions are being sought. Related to this is the risk that some standards may never see significant adoption. Nor is the standards process always nimble or rapid; a recent example is the IEEE 802.11n standard for wireless local area networks, which took many years to reach final adoption, by which time interim solutions had already been widely deployed to meet market demand for faster networking.
Understanding the Sensitivity of Innovation to Policy Decisions
The innovation process involves a number of actors, including academic researchers, small and large firms, and end users. Policy and standards setting play an important role in shaping decisions that ultimately affect innovation. Understanding the interplay between technology and policy is critical to creating effective policy. Considerations include the tension between efficiency and innovation and between the various stages of innovation. Another important consideration is that innovation depends on inputs from basic research (see below).
Ensuring Technology Expertise in the Regulatory Process
When matters requiring an evaluation of technical claims or options come before the Federal Communications Commission (FCC), the technical basis for its decisions rests on information provided in comments to the FCC and assessments made by its engineering staff (Box 3.1). The technical analyses in the submitted comments will of course tend to reflect the interests of the parties submitting those comments. The expertise of its engineering staff allows the FCC to address many specific technical issues it must grapple with regularly—for example, determining the right noise figure for a particular system or the appropriate specification for adjacent channel interference.
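To give a sense of the kind of engineering calculation referred to above, the cascaded noise figure of a receiver chain can be computed with the standard Friis formula. This is an illustrative sketch only; the stage noise figures and gains below are hypothetical, not values from any FCC proceeding.

```python
import math

def db_to_lin(db: float) -> float:
    return 10.0 ** (db / 10.0)

def lin_to_db(x: float) -> float:
    return 10.0 * math.log10(x)

def cascaded_nf_db(stages: list[tuple[float, float]]) -> float:
    """Friis formula for cascaded stages.

    stages: list of (noise_figure_dB, gain_dB), in signal-path order.
    F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    """
    total_f = 0.0
    cum_gain = 1.0  # cumulative linear gain of the preceding stages
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        total_f += f if i == 0 else (f - 1.0) / cum_gain
        cum_gain *= db_to_lin(gain_db)
    return lin_to_db(total_f)

# Hypothetical front end: LNA (1 dB NF, 20 dB gain),
# mixer (7 dB NF, -6 dB gain), IF amplifier (10 dB NF, 30 dB gain)
nf = cascaded_nf_db([(1.0, 20.0), (7.0, -6.0), (10.0, 30.0)])  # ~2.2 dB
```

The calculation illustrates why the first stage dominates the system noise figure: later stages are discounted by the gain in front of them, a fact that shapes receiver-specification arguments of the sort the FCC staff must weigh.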
Spectrum policy has entered an era in which many critical and strategic technical issues are likely to arise as technologies, applications, and services evolve. The FCC confronted many novel technical issues in its early days. Over time its focus has broadened and now encompasses economic and legal issues as the industries it regulates mature and broaden, issues such as broadcast media ownership and common carrier regulation. Today, qualitative and quantitative technology shifts of the sort discussed in this report and their complexity and interactions mean
The Federal Communications Commission’s Office of Engineering and Technology
The Office of Engineering and Technology (OET), the technical advisory arm of the FCC,1 has three divisions: policy and rules, electromagnetic compatibility, and a laboratory. The policy and rules division has three branches. The spectrum policy branch covers regulations and procedures for spectrum allocation and utilization. The technical rules branch develops technical rules and standards for the operation of unlicensed RF devices. The spectrum coordination branch monitors the activities of other government agencies, particularly the National Telecommunications and Information Administration (NTIA), as well as activities of the communications industry. It is also the liaison between the FCC and the Interdepartment Radio Advisory Committee, which advises the NTIA. The electromagnetic compatibility division of OET studies radiowave propagation and communications systems characteristics; it also issues and manages experimental licenses. The laboratory division focuses mainly on testing, evaluation, and compliance. It has a technical research branch, a measurements and calibration branch, an equipment authorization branch, and a customer service branch. In 1998, OET convened a Technology Advisory Council drawn from a range of technical experts in manufacturing, academia, communications service providers, and research organizations. The council, which met regularly until July 2006, was intended to provide a means for the FCC to stay abreast of rapid advances in telecommunications technology and to help inform FCC regulations in light of those advances.2 In October 2010, a new council was appointed.
that the FCC faces new challenges of a technological nature. Examples of these complex issues that were grappled with during the work of this committee included how best to use the white space of (unused) TV channels and how best to use the 700-MHz spectrum for public safety communications.
Because it believes that the FCC would greatly benefit from enhancing its technology assessment and engineering capabilities, the committee offers several options for obtaining access to such expertise.
One option is to recruit additional top-caliber engineers and scientists to work at the FCC, perhaps for limited terms. Programs could provide early- or mid-career professionals with an opportunity to gain experience in its policy and regulatory environment or could establish rotating positions to bring in senior academic and industry experts. There is, of course, a potential for conflicts of interest to arise when staff move between government and industry, and these conflicts must be carefully avoided. On balance, however, the increased flow of expertise, ideas, and perspectives seems likely to bring net benefits. The FCC has used the position of chief technologist, which has been held by several senior experts from academia and industry, as one way to bring in such expertise. The committee believes, however, that it will be necessary to create an environment that attracts more of the right talent. As things stand, for example, the committee’s impression is that many in the technical community are not convinced that working at the FCC can help advance an engineering career in industry or academia.
Another option is to convene an external advisory committee that could give the FCC outside, high-level views on key technical issues. The FCC announced the appointment of a new Technology Advisory Council in October 2010, as this report was being prepared for publication.
Another option would be to add technical expertise to the staff of each commissioner. The staff members are regarded as highly competent, but most are legal professionals, not technologists. That is, although the staff members are generally knowledgeable—and often very much so—about technology, they typically do not have the advanced engineering background that may be necessary to understand and resolve complex, deeply technical issues.
Also, the FCC could tap outside technical expertise, including expertise available elsewhere in the federal government. Notably, the NTIA Institute for Telecommunication Sciences (ITS; Box 3.2) already provides considerable technical assistance to federal agencies on a cost-reimbursement basis and has done a limited amount of work for the FCC in the past. Over the years ITS has developed and maintained strong competency in a number of technical areas related to RF communications. Strengthening the relationship between the FCC and ITS would give the FCC access to another source of independent scientific and engineering expertise on an as-needed basis. The National Institute of Standards and Technology (NIST), which has considerable expertise and resources for technology evaluation and is currently working in such areas as the performance of land mobile radios and their use for public safety, is another potential source of expertise. (One caveat is that the FCC’s status as an independent agency rather than an executive branch agency may limit work done by the NTIA or NIST to technical rather than policy matters.)
Finally, another source of outside technical expertise might be a federally funded research and development center (FFRDC). These are organizations managed by universities, industrial firms, or nonprofits and chartered to provide federal agencies with technical expertise. FFRDCs
Institute for Telecommunication Sciences
The Institute for Telecommunication Sciences (ITS) is the research and engineering arm of the National Telecommunications and Information Administration (NTIA) in the Department of Commerce.1 Its stated mission is to be the federal government’s primary technical resource for telecommunications issues. A liaison office coordinates ITS technical research with other federal agencies. As part of its broader mission it has supported several other federal agencies, including the Departments of Defense, Homeland Security, and Transportation as well as state and local government.2 It works through cooperative research and development agreements with the private sector (e.g., American Automobile Association, Intel, Lucent, and Motorola) and academic institutions (e.g., University of Colorado, University of Pennsylvania). ITS has also provided technical support to the FCC for specific issues such as evaluation of propagation models necessary to implement the Satellite Home Viewer Act.3
ITS performs fundamental research and engineering, with technical programs in several areas directly related to wireless technology: broadband wireless, digital land mobile radio, information technology, propagation measurement and models, and spectrum research. It also provides technical resources supporting United States participation in the development of international telecommunications standards. The staff of ITS is composed mostly of scientists and engineers across a number of disciplines, including electronics engineering, mathematics, physics, and computer science. Its stated goals reflect its engineering focus. Those goals include optimization of federal spectrum allocation methods; support for systems engineering and planning of interoperable public safety radio systems and standards (not frequency allocation, which is the purview of the FCC); improvement of network operation and management of national defense systems; and provision of practical telecommunications performance measurement methods. ITS also hosts the International Symposium on Advanced Radio Technologies (ISART), an annual conference that brings together researchers, business leaders, policy makers, and regulators to discuss the future development and application of radio frequency technologies.
are able to bring in expertise on a project-by-project basis and to engage expertise that may not be available within the constraints of civil service salaries.
The committee’s view is that whatever mechanisms the FCC uses to tap outside technical expertise, the goal is to strengthen capabilities for establishing appropriate high-level guidance, and not to build up an infrastructure for more detailed command-and-control regulation.
TECHNOLOGY-ENABLED POLICY OPTIONS
Considering “Open” Approaches in the Range of 20 to 100 GHz
Use is relatively sparse at frequencies of 20 to 100 GHz; commercial services in that range represent a small fraction of the services that operate below 20 GHz. The relatively high attenuation in materials—and short free space propagation in the oxygen absorption band around 60 GHz—means that propagation distances are relatively short. The ratio of antenna size to wavelength makes it practical to form very narrow beams. Together, these factors make interference inherently unlikely.
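A back-of-the-envelope calculation illustrates why these factors limit interference at 20 to 100 GHz. The sketch below compares free-space path loss at a conventional cellular/WLAN frequency and at 60 GHz, and estimates the beamwidth of a small antenna; the distances and aperture size are illustrative, not values from the report.

```python
import math

C = 3.0e8  # speed of light, m/s

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * dist_m * freq_hz / C)

def beamwidth_deg(freq_hz: float, aperture_m: float) -> float:
    """Approximate half-power beamwidth of a dish antenna: ~70 * lambda / D degrees."""
    return 70.0 * (C / freq_hz) / aperture_m

# Same 100 m link budget at 2.4 GHz versus 60 GHz:
loss_24 = fspl_db(2.4e9, 100.0)   # ~80 dB
loss_60 = fspl_db(60e9, 100.0)    # ~108 dB, before any oxygen-absorption loss at 60 GHz
extra_loss_db = loss_60 - loss_24  # ~28 dB more loss at the higher frequency

# A modest 10 cm aperture at 60 GHz already forms a ~3.5 degree beam
bw_60 = beamwidth_deg(60e9, 0.10)
```

The roughly 28 dB of additional path loss (plus material attenuation and, near 60 GHz, oxygen absorption), combined with narrow beams from physically small antennas, is why transmissions in this range tend not to reach unintended receivers.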
These frequencies thus represent an opportunity that stands in marked contrast to the very difficult transition problems associated with introducing new services, allocations, and sharing arrangements at lower frequencies. (Increased use of higher frequencies would, however, do little, at least in the short term, to alleviate pressures to also introduce new services at lower frequencies.) For these higher frequencies, the reduced legacy problem and lower chance of interference (in the classical sense) indicate that nontraditional (“open”) approaches can predominate. Although it is an oversimplification, at lower frequencies the central problem is dealing with legacy uses, while at higher frequencies it is inherently difficult for radios to interfere with one another. These factors suggest that the two domains be approached differently, but the distinction has so far not been clearly articulated or incorporated into the policy-making process.
The lower bound of the range proposed for open use, 20 GHz, was selected on the basis of two factors: frequencies above 10 GHz have only recently become practical in small, low-cost devices, and the region between 10 and 20 GHz is already heavily allocated (for example, to Ku-band satellite transmissions between 12 and 18 GHz).
The upper bound of this range, 100 GHz, reflects what can reasonably be expected to be practical today or in the near future and the upper limit at which it is possible to have a reasonable sense of how the technology might be employed. It would thus be imprudent to recommend a particular regime for frequencies above 100 GHz, given the limited understanding of how radios might be constructed or operated in that domain, and it
would be prudent to review policy in this area every few years and make adjustments as appropriate.
FCC policy has already moved toward a more flexible and adaptive approach in this frequency domain, with an unlicensed regime established at 57 to 64 GHz and licensed access to bands at 80 and 95 GHz on a first-come, first-protected basis. These measures may stimulate commercial activity and speed the deployment of new services.
At the outset, these frequencies most likely would be used for very short distances and very-high-bandwidth applications, such as in-room video distribution, because the bandwidth for gigabit and higher-rate applications is not available elsewhere. This is not to say that existing applications in those ranges would be quickly or easily replaced, but rather that over time it would be attractive to introduce new applications at 20 to 100 GHz rather than carving out the rights to introduce them at lower frequencies.
Finally, although usage at 20 to 100 GHz is relatively low compared to usage at frequencies below 20 GHz, existing users at the higher frequencies are likely to object, and some exceptions to the open rule would probably be needed to protect certain existing services. For example, many satellite and military services operate in this range, mostly under NTIA jurisdiction.5 There are also non-communications uses in this frequency range, such as radar, navigation, and other industrial, scientific, and medical uses. Recent experience in working out a sharing arrangement between WLAN and military radar use at 5 GHz suggests, in the view of the committee, both the possibilities and the potential pitfalls; an accommodation was ultimately reached, but not without considerable study and delay. Because many of the existing uses above 20 GHz are for government services, the participation of and cooperation between the NTIA and the FCC will be required to sort out the issues.
Using a Wider Set of Approaches to Mitigate Interference and a Wider Set of Parameters in Making Assignments
Interference should not be viewed simply as an overlap in frequency and space between two radios but also in terms of the ability of particular radios and radio systems to separate desired from undesired signals. Harm from interference has both technical dimensions (how well a radio or radio system can separate desired from undesired signals) and economic dimensions (the costs and distribution of costs of building,
deploying, and operating a radio/radio system with particular technical characteristics that make it easier to disambiguate the signals).
Today, technology is enabling new ways of mitigating interference. The degrees of freedom available for managing interference go beyond the traditional parameters of frequency band and geographic area to include amplitude, time, spatial direction, and polarization. Interference mitigation can also be thought of in terms of the behavior of radio systems rather than individual radios. In the future, coordination and cooperation are more likely to be win-win situations; a key question is how to motivate such cooperation.
Regulation is beginning to reflect these opportunities. Historically, interference between adjacent bands has been mitigated by inserting guard bands. Under recently adopted rules for the 700-MHz band, for example, there are no guard bands, leaving it up to users to figure out how to mitigate interference, whether by cooperation among users, investment in better receivers, or by other means. This is a good example of a more technology- and service-neutral approach to regulation. Rather than mandate a particular technical solution, the idea is to be flexible and allow users to find the best ways of increasing overall efficiency.
Introducing Technological Capabilities for More Sophisticated Spectrum Management
Some current and emerging technologies could make it much easier to introduce new services into crowded frequency bands. Given sufficient motivation, ingenuity, and investment, it is possible to obtain significant improvements in communications capacity in a particular piece of spectrum, but migrating current nondigital services to digital transmission will be a major challenge, especially for specific applications like aviation radios, which have a large, politically powerful legacy base. Improvements are more feasible in bands where the disadvantages of migration are not so widely distributed across so many users, where the user base is a less potent political force, or where the market dynamics are such that end-user technology is regularly refreshed.
Smart antennas, for example, could mitigate interference problems in an overlay system. By focusing a beam from the transmitter to a receiver, devices with smart antennas can significantly reduce their overall transmission power. They could also scan their environment for other transmissions and transmit in directions that help avoid interference. These technologies are not very practical at lower frequencies but become more so at somewhat higher frequencies.
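The power savings attributed to smart antennas can be sketched with the idealized gain of a phased array. The element count and link-budget numbers below are hypothetical, and real arrays fall short of the ideal, but the sketch shows the order of magnitude involved.

```python
import math

def array_gain_db(n_elements: int) -> float:
    """Idealized directivity gain of an n-element array steered at the receiver."""
    return 10.0 * math.log10(n_elements)

def tx_power_with_beamforming(p_omni_dbm: float, n_elements: int) -> float:
    """Transmit power (dBm) needed to match an omnidirectional link budget
    when an n-element array focuses its beam toward the intended receiver."""
    return p_omni_dbm - array_gain_db(n_elements)

# A hypothetical 16-element array buys ~12 dB of gain, so a link that
# needed 20 dBm omnidirectionally needs only ~8 dBm when beamformed.
p_needed = tx_power_with_beamforming(20.0, 16)
```

Beyond saving power, the same focusing means less energy radiated toward bystander receivers, which is the interference-mitigation benefit the text describes.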
Moreover, it may be possible to incorporate more sophisticated approaches into receiver specifications established through either standards or regulations. Adaptive radios need to be able to sense their environment, negotiate with other radios, and adjust their operation accordingly. Doing so requires radios that can listen to a much wider range of signals and distinguish among various signals more accurately than is required of a conventional radio. A receiver’s ability to sense a small signal in the presence of a nearby larger signal is limited both by noise, which tends to corrupt measurement of the received signal, and by the receiver’s dynamic range. Thus, adaptive radios are viable only if radios meet demanding specifications for both dynamic range and noise. The problem remains of how to deal with legacy hardware, which lacks these capabilities because it was built before receiver performance improved enough to exploit these opportunities.
Such higher-quality receivers also cost more, have a more complex design, and consume more power. Even small additional costs matter a great deal when service providers are fighting to save pennies. The additional investment can have a big payoff, however, if it enables new applications that are not otherwise possible.
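To make the dynamic-range requirement concrete, a back-of-the-envelope sketch follows. The bandwidth, noise figure, and signal levels are hypothetical, and the bits-per-6-dB rule is only a rough heuristic, but the numbers show why sensing a weak signal next to a strong one is demanding.

```python
import math

def noise_floor_dbm(bandwidth_hz: float, nf_db: float) -> float:
    """Thermal noise floor: kTB at 290 K is -174 dBm/Hz, plus receiver noise figure."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + nf_db

def required_adc_bits(strong_dbm: float, weak_dbm: float, margin_db: float = 10.0) -> int:
    """Rough ADC resolution needed to digitize a weak signal in the presence of a
    strong one, using the ~6.02 dB-of-dynamic-range-per-bit rule of thumb."""
    span_db = (strong_dbm - weak_dbm) + margin_db
    return math.ceil(span_db / 6.02)

floor = noise_floor_dbm(20e6, 6.0)      # ~ -95 dBm for a 20 MHz, 6 dB NF receiver
bits = required_adc_bits(-30.0, -90.0)  # a -90 dBm signal beside a -30 dBm blocker
```

A 60 dB level difference plus margin pushes the converter toward 12 bits at full channel bandwidth, illustrating the cost, complexity, and power penalties the paragraph above describes.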
Developing Complementary Policy to Allow Negotiation Among Users
A complement to the introduction of new technology is the creation of a policy environment in which neighbors (and others whose services experience interference) are free to negotiate a mutually acceptable outcome. This notion, first proposed by Coase,6 provides for market negotiations to complement or replace regulatory mandates. A new arrangement may not be optimal for a given set of parties and might run the risk of becoming obsolete as technologies emerge, but such negotiations allow for flexibility in situations such as the following:
Operator A spills over into neighbor B’s spectrum in a manner that is acceptable under current regulation but is costly to neighbor B, who should be free to pay A to not spill over.
Operator A seeks to implement a service that will interfere with operator B’s service unless operator B improves the interference-rejection capabilities of its receivers. Operator A should be free to pay B for these improvements.
It is important to recognize, however, that if transaction costs, such as the costs of bargaining itself, are high, the resulting bargains are likely to be less efficient. For example, the introduction of more sophisticated devices and network architectures could make it more difficult to know who is, and who is not, spilling over into a neighbor’s usage rights.
One can also envision scenarios in which such bargaining might not improve the overall efficiency of spectrum use. If license holders can negotiate with others to shut down interfering transmissions, they will have less incentive to invest in innovative devices that can operate well in the presence of noise. Similarly, to the extent that device manufacturers know that their customers will be able to protect themselves from interference through such negotiations, they will be less motivated to invest in more robust, smarter devices that could give their purchasers better communications irrespective of whether there is an interference-reducing agreement.
Trading Absolute Outcomes for Statistically Acceptable Outcomes
Approaches that accept a nonzero statistical probability of interference do not necessarily lead to ruinous outcomes that destroy service. Rather, these approaches relax constraints so as to provide good outcomes normally (or almost always) while accepting poorer outcomes with tolerable probabilities and consequences. That is, the system offers optimal performance most of the time to most users and degrades gracefully under less favorable conditions. The difference between approaches emphasizing absolute and statistically acceptable outcomes regarding interference is somewhat analogous to the difference between personal auto safety (which “accepts” a certain number of accidents) and common carrier air safety (which has an explicit, albeit unrealizable, goal of zero accidents).
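One way to reason about such statistical guarantees is a simple Monte Carlo sketch. The activity and channel-overlap probabilities below are invented for illustration; the point is only that an outage requires several independent conditions to coincide, so its probability can be engineered to an acceptably small value:

```python
import random

def estimate_outage_probability(trials: int = 100_000,
                                interferer_activity: float = 0.05,
                                overlap_prob: float = 0.2,
                                seed: int = 1) -> float:
    """Monte Carlo sketch: an outage occurs only when an interferer is
    active AND its transmission overlaps the victim's channel.
    All probabilities are illustrative assumptions."""
    rng = random.Random(seed)
    outages = sum(
        1 for _ in range(trials)
        if rng.random() < interferer_activity and rng.random() < overlap_prob
    )
    return outages / trials
```

With these assumed figures the outage probability is on the order of one percent; a regulator or licensee would set the acceptable threshold, not the simulation.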
The latter approach has already been embraced in some aspects of telecommunications. The Internet’s best-effort design, for example, does not guarantee quality of service yet generally provides an acceptable overall experience. Acceptance of a similar tradeoff was reflected in the market’s favoring Ethernet over Token Ring technology in the early days of local area networks. Already in the wireless space, such imperfections as holes in coverage area or lower-quality audio (compared to a landline) are accepted in exchange for the convenience of mobility.
Such a relaxation of requirements could significantly open up opportunities for nonexclusive use of frequency bands. Rather than have regulators decide on acceptable quality, it might be desirable to allow licensees flexibility to negotiate mutually beneficial arrangements even though the result at times might be degraded quality.
Embracing “Design for Light” and “Design for Darkness” More Broadly in Design Concepts and Regulatory Frameworks
Many systems have been “designed for darkness”—that is, under the assumption that a particular band has been set aside for a particular service or operator and that there are no other emissions in that band. Cellular systems are a notable example of this approach. An alternative is to design for light, with an assumption of a noisy, cluttered environment. Both are reasonable design approaches for certain applications and services, but it is important to be clear about which mode is appropriate under what circumstances. The historical preference has been to design for darkness, whereas today, technological advances such as beam steering, enhanced signal processing, and network coordination suggest opening up more bands in the design-for-light modality. To design for light will require better information than is available today on sources of potential interference.
Broadening the Scope of Inquiry to Encompass Receivers and Networks of Transceivers
Much regulation has focused on transmitters, with specifications for transmission frequency and bandwidth, geographical location, and transmit power. Increasing use of new radio architectures (discussed above) suggests that the scope of inquiry be broadened to look at the properties and behaviors of receivers and networks of transceivers.
Better receiver standards would create an environment in which receiver capabilities present less of a barrier than they do today for implementing new spectrum sharing schemes. For example, it might be possible to overlay unlicensed use onto licensed use with receiver specifications written to these standards.
Expanding the scope of policy or regulation to a system of radios rather than an individual radio would also open up new opportunities. For example, a network of radios can help avoid the hidden node problem because it can use multiple network elements to listen for transmissions from multiple points. A network of radios could also relay a transmission through multiple hops at lower power per node, rather than transmitting directly at higher power, thus decreasing the chance that it would interfere with another system. Receivers could also report their position—for example, via embedded GPS receivers—although this capability has cost and potential privacy implications.
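The power advantage of relaying can be illustrated with a standard power-law path-loss model, in which received power falls off as distance raised to an exponent n. The exponent of 3.5 used below is a typical urban assumption, not a measured value:

```python
def relative_hop_power(hops: int, path_loss_exponent: float = 3.5) -> float:
    """Transmit power needed per hop, relative to a single direct link,
    when the same total distance is covered in equal-length hops and
    path loss grows as distance ** n (illustrative model)."""
    return (1.0 / hops) ** path_loss_exponent

# Splitting one link into two hops cuts per-node transmit power to
# under a tenth of the direct-link power under this model.
two_hop = relative_hop_power(2)
```

Lower per-node power shrinks each node's interference footprint, which is the mechanism the paragraph above describes.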
As discussed above, technology has enabled highly programmable radios. To be sure, such radios are not practical in many circumstances today because of their complexity, power use, and dollar costs, especially for mobile devices. Nonetheless, programmable radios are being used for some applications today (such as cellular base stations), and it is reasonable to expect wider use in the future. One implication of this programmability is that the radio operating parameters can be made modifiable to comply with policy or rule changes. Deployment of devices with such capabilities opens up new opportunities for more flexible regulation and for policy makers to safely work more incrementally. Namely, (1) policies would not need to be homogeneous and could be adapted to local environmental conditions such as signal density, (2) the operating rules of existing devices could be revised to accommodate new technology, and (3) devices could more easily be certified for international use because they can readily be switched to comply with local policy.
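The idea of policy-modifiable operating parameters can be sketched as a lookup of a rule set keyed by regulatory region. The region names, power limits, and band edges below are invented placeholders, not drawn from any actual regulation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingPolicy:
    """Hypothetical rule set a programmable radio might load.
    All field names and values are illustrative."""
    region: str
    max_power_dbm: float
    allowed_bands_mhz: tuple  # pairs of (low_edge, high_edge)

# Illustrative, made-up policy table; a deployed system would fetch
# and authenticate this from a regulator-sanctioned source.
POLICY_TABLE = {
    "REGION_A": OperatingPolicy("REGION_A", 30.0, ((5150.0, 5250.0),)),
    "REGION_B": OperatingPolicy("REGION_B", 23.0, ((5150.0, 5350.0),)),
}

def configure_radio(region: str) -> OperatingPolicy:
    """Return the operating parameters the device should adopt for a
    given region; revising the table revises deployed behavior."""
    return POLICY_TABLE[region]
```

Under this model, revising a rule means updating a table entry rather than recalling hardware, which is what makes the incremental policy making described above plausible.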
Although revisability may sound attractive, the opportunity must be weighed against some significant drawbacks. Paradoxically, rules that require revisability could actually have the effect of discouraging deployment and investment if they are seen as weakening the commitments made by regulators. The most likely scenario, if such a policy were poorly drafted, would be that most industry participants would take a wait-and-see position, which defeats the purpose of providing flexible and revisable rules for quick adoption. There are possible mechanisms to address this concern, such as offering investors compensation if the rules on which they relied are materially changed. Such mechanisms would need to be carefully considered as part of any rulemaking that sought to exploit revisability.
Exploiting Adaptive and Environment-Sensing Capabilities That Can Help Reduce Centralized Management
As agility, sensing, and coordination improve, and as etiquettes and standards for these capabilities develop, opportunities will likely arise for reducing centralized management. Potential advantages of this approach include a lower barrier to entry (because entry either will not require engagement with a regulator for spectrum assignment or will entail only negotiation with an existing license holder, whom it will be easier and less costly to find) and flexibility of use (because operation is defined primarily by the attributes of radio equipment rather than by regulation). Potential disadvantages include uncertainty about the technical feasibility and the added costs of building more capable and robust
radios. Such a shift is also predicated on resolving the issues discussed above about more robust receiver design. Some current proposals would maintain a form of centralized control but would replace regulation with much more nimble and dynamic approaches, such as services that collect and distribute information about or grant access to open channels.
Establishing Mechanisms for Dealing with Legacy Systems
In recent years, notable efforts to deal with legacy systems have included relocating microwave services to allow deployment of PCS cellular telephony and the relocation of Nextel cell services out of public safety bands. More recently, relocation of government services as well as broadcast radio services and fixed services has been undertaken for new 3G advanced wireless services bands. Having an easier process for making such changes is a critical enabler of more dynamic policies to meet changing technologies and market needs. Although there are costs and difficulties associated with relocating infrastructure elements, an even bigger legacy challenge is the need to migrate potentially millions of user-owned or user-operated devices. Among the options for dealing with legacy systems are the following:
Commissioning independent neutral analyses to support decision making about potential interference with legacy services based on actual harm rather than political claims.
Establishing streamlined recourse procedures. Claims of interference are inevitable where old and new systems coexist. A streamlined process would help identify, report, and resolve such claims.
Establishing databases of legacy equipment. It is far easier to coexist with legacy systems if details about their operation are known. A lightweight registration process would facilitate the creation of a useful database.
Exploiting technological improvement. As radios become more capable, they will be increasingly able to coexist with existing users and services. Future policy should require or incentivize new users to coexist with existing users—for example, by making future devices more flexible (e.g., adaptable filters and oscillators and reprogrammability) so that their operation can be relocated more readily—and should avoid rules that inhibit this.
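The legacy-equipment database suggested in the options above can be sketched as a minimal in-memory registry supporting band-overlap queries. All field names and record values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LegacySystemRecord:
    """Minimal registration record for a legacy transmitter.
    Fields are illustrative, not a proposed schema."""
    operator: str
    center_freq_mhz: float
    bandwidth_mhz: float
    location: tuple  # (latitude, longitude)

class LegacyRegistry:
    """Lightweight registry a new entrant could query before
    transmitting, to learn what legacy operations it must protect."""

    def __init__(self):
        self._records = []

    def register(self, record: LegacySystemRecord) -> None:
        self._records.append(record)

    def in_band(self, low_mhz: float, high_mhz: float):
        """Return records whose occupied band overlaps [low, high]."""
        return [
            r for r in self._records
            if r.center_freq_mhz - r.bandwidth_mhz / 2 < high_mhz
            and r.center_freq_mhz + r.bandwidth_mhz / 2 > low_mhz
        ]
```

Even this trivial structure illustrates the point in the text: coexistence decisions become tractable once legacy operating details are discoverable rather than unknown.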