Historically, the evolution of the public switched networks has, to a significant extent, been driven by the development and deployment of new technologies. Terrestrial transmission has evolved from copper wire pair to coaxial cable to optical fiber cable; radio has evolved from terrestrial microwave to satellite microwave to cellular mobile radio; switching began with mechanical technology, followed by electromechanical, electronic analog, and electronic digital, and optical modes are on the horizon; and computing power was first supplied by vacuum tubes, then transistors, and then successive generations of integrated circuit technology.
The evolution of the public networks to the year 2000 will be marked by further advances in network technology. Optical fiber is becoming the transmission medium of choice; digital switching is becoming the dominant switching technique; and software-based processing, linked to very large scale integrated (VLSI) circuitry, is becoming the preferred technology for network management and control.
The incorporation of these new technologies is making available to network users a host of economically viable new telecommunications and information services, which will give customers more channel capacity, more processing power, and more control over the mix of services they draw from both public switched and private networks. The new technologies make possible the first significant
deployment of broadband networks. Widespread use of real-time, high-speed data networks will develop, whose performance will offer economic advantages to high-volume users (Dvorak, 1987; Langseth, 1987).
But the changes in network architecture and operation brought about by these powerful new technologies will have unintended side effects which, if no adjustments are made, could seriously impair the ability of the public networks to provide the mix of services required to meet national security emergency preparedness (NSEP) goals. The sections that follow present a more detailed picture of how transmission, switching, integrated circuit, and network technologies are being deployed in the public networks and briefly assess the implications of each of these technological trends for NSEP.
As copper-based systems become obsolete, the media that will provide transmission in the public networks will be optical fiber, satellite radio, and terrestrial radio. Increasingly, the dominant domestic transmission medium will be optical fiber (Henry, 1988; Solomon, 1988).
Optical fibers, first tested less than 20 years ago, are strands of ultrapure “glass,” usually fabricated from a silica-based compound, which guide lightwaves along a transmission path. Transmission is accomplished by modulating the light from a light source (either a light-emitting diode or a laser) and coupling the resulting optical beam into the fiber. At the receiving end, a photo-detector typically performs the first level of demodulation to provide multiplexed electrical output signals. Lasers are the preferred light source, since their narrower light beam and purer spectrum couple more efficiently into the fiber and result in higher overall transmission efficiency.
Fiber transmission provides unequalled channel capacity (bandwidth) and signal quality. Already, commercially available fiber systems can transmit at 1.7 Gbit/s rates, thus supporting over 24,000 voice conversations; systems with twice that capacity are forecast to be operational soon. Fiber has an inherent transmission capacity estimated at 20 THz, more than 10,000 times that of existing systems and roughly the capacity of all the voice traffic in the United States at
the busiest hour on the busiest day of the year. Actual transmission capacity on fiber systems is limited not by the carrying capacity of the medium, but by limits on the ability to modulate the transmitting lasers. In this regard, fiber is unlike most other transmission media, whose inherent carrying capacity is less than the modulation capability of the source. Increases in capacity have recently come from wavelength division multiplexing techniques, which combine multiple bit streams on different wavelengths inside the fiber.
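The capacity arithmetic above can be checked with a short sketch. The 1.7 Gbit/s line rate is the figure in the text; the 64 kbit/s per voice circuit (the standard PCM rate of the period) and the four-wavelength WDM example are assumptions for illustration, and framing overhead is ignored.

```python
# Sketch: mapping a fiber line rate to voice-channel capacity, assuming
# standard 64 kbit/s PCM voice circuits. Overhead is ignored.
LINE_RATE_BPS = 1.7e9       # commercially available fiber system (from text)
VOICE_CHANNEL_BPS = 64_000  # one PCM voice circuit (assumed standard rate)

voice_channels = int(LINE_RATE_BPS // VOICE_CHANNEL_BPS)
print(voice_channels)  # consistent with "over 24,000 voice conversations"

# Wavelength division multiplexing multiplies capacity: n wavelengths on
# one fiber carry n independent bit streams.
def wdm_capacity(line_rate_bps: float, wavelengths: int) -> float:
    return line_rate_bps * wavelengths

print(wdm_capacity(LINE_RATE_BPS, 4) / 1e9, "Gbit/s")  # illustrative 4-wavelength system
```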
Fiber produces superior signal quality because the purity of the glass greatly reduces the attenuation and distortion of the signal as it travels from point to point. A common figure-of-merit is the rate-distance product. Digital fiber research systems have achieved 1,000 (Gbit/s)(km); commercial systems, 1 (Gbit/s)(km). Future advances in reducing attenuation, by development of purer glass compounds made of exotic fluoride-based materials, could enable transoceanic transmission without use of repeaters to amplify the signal.
Fiber is also, in some respects, cheaper to maintain than other transmission media. In addition to its high capacity, the economic attractiveness of fiber transmission is driving its deployment in the public networks. But fiber deployment has complicated the task of powering telecommunications networks. Historically, the telephone company supplied power from a centralized source, not tied to electric utility power. With fiber deployment, and coupled with the widespread deployment of private branch exchange (PBX) and key systems powered on the premises, electrical power is increasingly being provided from the customer’s premises, often from the electric utility company.
By 1995, fiber will be the most common mode of transmission in network interoffice trunking systems (that is, the long distance portion of the public switched networks). Additionally, it is becoming cost-effective for deployment in the feeder portions of the network (from the access tandem switch gateways to the local central office). By the mid-1990s it may well become cheaper to install fiber in the “last mile” from the local exchange central switching office to the customer’s premises. The prevalence of metropolitan area networks by the year 2000 will mean a much richer fabric of interconnectivity on the scale of 100-km distance or less. In high-density areas, such as the Northeast Corridor, the possibility of improvising new interconnections between many such networks in the case of failure of the long-haul backbones could provide a promising backup capability.
National Security Emergency Preparedness Implications
The increased reliance on optical fiber has led to greater concentration of network traffic in a limited number of trunks and, by supplanting other transmission media, has increased network reliance on a single technology. Simply put, there are fewer transmission lines and fewer alternative transmission routes to act as backup in event of failure. The accidental cut of a single fiber cable in New Jersey in November 1988 eliminated network capacity for some 200,000 conversations per hour. By the year 2000, with higher capacity fiber links, a single cable cut could cost many times that number of calls per hour. The increased dependence of the public networks on power supplied by electric utilities adds a new source of network vulnerability: electric power outages (Samuelson, 1988).
The principal alternative transmission medium for long distance service has been satellite radio. Transmission is accomplished by sending line-of-sight, microwave radio signals from earth-station antennas to the satellite. Satellites provide highly economical transmission, especially for broadcasting and point-to-multipoint data transmission, because the cost of transmission does not vary with distance within the footprint (geographic coverage) of a given satellite (Lowndes, 1988). Also, the cost of right-of-way procurements is avoided. Satellites are not considered as desirable as terrestrial links for voice transmission, since the round-trip signal propagation delays of over 250 milliseconds to and from the satellite disturb some users, even with high-quality echo-suppression processors. On a two-satellite path, user frustration is high; thus, a typical transoceanic call goes by satellite in one direction and via undersea cable in the other.
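The quoted delay follows directly from orbital geometry. The sketch below assumes the nominal 35,786-km geostationary altitude and an earth station directly beneath the satellite; actual slant ranges are longer, which is why observed hop delays exceed the ideal figure.

```python
# Sketch: propagation delay over a geostationary satellite hop, assuming
# the nominal orbit altitude and earth stations at the sub-satellite point.
C_KM_PER_S = 299_792.458   # speed of light
GEO_ALTITUDE_KM = 35_786   # geostationary orbit altitude (assumed nominal)

# One hop: up to the satellite and back down to the far earth station.
hop_delay_s = 2 * GEO_ALTITUDE_KM / C_KM_PER_S
# A conversational round trip traverses the hop twice (question and answer).
round_trip_s = 2 * hop_delay_s

print(f"one hop: {hop_delay_s * 1000:.0f} ms")        # just under the 250 ms cited
print(f"round trip: {round_trip_s * 1000:.0f} ms")    # the delay a talker actually senses
```

With two satellite hops in tandem these figures double again, which is why the text notes that user frustration is high on two-satellite paths.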
Communication satellites can provide significant transmission capacity: The INTELSAT VI satellites will have a capacity of 100,000 voice circuits (compared to 240 circuits for the first INTELSAT satellites of the 1960s). Satellite technology continues to evolve: Earth station antennas only 2 meters in diameter (very small aperture terminals, or VSATs) are being deployed for business data transmission; satellites are using “spot beams” to pinpoint small geographic areas,
enabling frequency re-use within the satellite’s coverage area; on-board signal processing permits multibeam control; and power supply enhancements such as nickel-hydrogen batteries have extended the useful life of commercial satellites beyond 10 years. Fiber proliferation notwithstanding, satellites will remain important into the twenty-first century for several applications: broadcasting, service to remote locations, point-to-multipoint data, and “restoration” backup for transoceanic and terrestrial cable routes.
National Security Emergency Preparedness Implications
Satellites can provide enormously valuable backup for terrestrial systems. Their NSEP value is underscored by the decisions of the National Communications System (NCS) to implement the Commercial Satellite Interconnectivity and Commercial Network Survivability programs. The dominance of fiber as a terrestrial transmission mode makes satellites an especially important source of route diversity.
Terrestrial radio access is the third major area in which network transmission technology is advancing. The principal types of terrestrial radio are line-of-sight microwave and cellular mobile radio. Other systems, such as tropospheric scatter and meteor burst, perform highly specialized communications functions for the military.
Terrestrial microwave signals are transmitted using radio relay equipment (towers) spaced approximately 30 miles apart. Transmission frequencies range from the 400–500 MHz region up to 23 GHz for digital microwave systems.
Cellular mobile radio systems divide service areas into “cells,” which have low-power microwave antennas at their center, with each antenna linked terrestrially to a centralized switching center (mobile telephone switching office, or MTSO), which, in turn, is interconnected with the landline telephone network. Substantial frequency re-use can be achieved in these systems. As mobile users travel from cell to cell, their calls are “handed off” to the next cell they enter, thus freeing the channel in the previous cell. Cellular systems serve hundreds of the country’s largest metropolitan areas. Digital
transmission techniques will significantly increase system capacity, perhaps by a factor of four or five.
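The frequency re-use the preceding paragraph describes can be sketched with the classical hexagonal-cell model. The cluster size of 7 and the channel count of 395 below are illustrative assumptions (roughly the analog cellular plan of the period), not figures from the text.

```python
# Sketch: frequency re-use in a hexagonal cellular layout. In the classical
# model, the cluster size N (the number of cells that share the full channel
# set before a frequency repeats) satisfies N = i**2 + i*j + j**2.
def cluster_size(i: int, j: int) -> int:
    return i * i + i * j + j * j

assert cluster_size(2, 1) == 7   # the common 7-cell analog re-use plan

def channels_per_cell(total_channels: int, n: int) -> int:
    """Each cell is assigned 1/N of the system's channel set."""
    return total_channels // n

# Illustrative: ~395 voice channels per carrier divided over a 7-cell cluster.
print(channels_per_cell(395, 7))
# Re-use is what multiplies capacity: in a 100-cell metro area, every
# channel is in use simultaneously in roughly 100/7 ~ 14 different cells.
```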
National Security Emergency Preparedness Implications
Radio access offers potentially significant enhancements to network redundancy. Radio transmission can, in some cases, be considered more robust than terrestrial links because, whereas terrestrial links are vulnerable along the entire length of the transmission line, radio links are vulnerable principally at the transmitting and receiving points. In the aftermath of the recent Hinsdale fire, some users were able to re-establish valuable communications links via radio links—notably via cellular radio and the use of VSATs (National Communications System, 1988).
Switching technology is marked by two divergent trends: advances in microprocessing technology are driving switching capabilities toward the customer’s premises; but the economics of digital switching is driving telephone companies to build large-hub switching centers with huge capacities. The technologies that are providing the impetus for these trends are very-high-performance integrated circuits and highly sophisticated distributed processing technologies.
Integrated circuit technology has progressed at a dizzying pace. As recently as 10 years ago a random access memory chip could store 16 kilobits of data; the current generation of chips can handle 1 million bits. With VLSI chips being supplanted by ultra-large-scale integrated (ULSI) circuit chips, by the year 2000, a 100-million bit chip is expected to be available. Processing memory is becoming considerably less constraining for the system designers. Multimegabit chips with self-healing capabilities, in the form of redundancy on a single chip, have been demonstrated in the laboratories of the chip designers. Semiconductors are now largely fabricated from silicon; eventually gallium arsenide will assume an increasing role because of its inherent speed advantage for digital signal processing applications.
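The growth rates implied by those memory figures can be checked arithmetically. The 12-year horizon from the present to the year 2000 is an assumption for illustration.

```python
# Sketch: implied doubling times for memory density, from the figures in
# the text: ~16 kbit ten years ago, 1 Mbit today, 100 Mbit projected for
# the year 2000 (the 12-year span to 2000 is an assumption).
import math

def doubling_time_years(growth_factor: float, span_years: float) -> float:
    """Years per doubling, given total growth over a span."""
    return span_years / math.log2(growth_factor)

past = doubling_time_years(1_000_000 / 16_000, 10)  # 16 kbit -> 1 Mbit in 10 years
future = doubling_time_years(100, 12)               # 1 Mbit -> 100 Mbit in ~12 years

print(f"historical doubling time: {past:.1f} years")
print(f"projected doubling time:  {future:.1f} years")
# Both come out near 1.7-1.8 years, so the projection simply extends the
# historical trend rather than assuming an acceleration.
```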
Software and firmware technology has introduced stored-program control into switching systems, for both central and distributed
nodes. Software programs are becoming increasingly complex: central nodes now incorporate up to 10 million lines of code; by the year 2000 as many as 100 million lines of code may be needed to run central megaswitch hubs. Software-driven switching gives great flexibility to network operations and enables customer control of network configuration and operation. It does, however, introduce enormous design and maintenance complexity.
Distributed intelligence is significantly altering the physical topology of the public networks. Megaswitches are being linked to multiple remote switches. Distributed nodes serve as routing points for network control, linking the central node with remote databases. Central nodes perform specified functions for the distributed nodes, thus permitting economical deployment of the remote nodes.
The dependence of remote nodes on central hubs is analogous to that of computer workstations “slaved” to a central mainframe processor: Without stand-alone capability, the remote switches will fail if the host does. Fortunately, the remote switches at Hinsdale had some stand-alone capability, so some connectivity was retained after the fire. One example of a total dependence of remote nodes on a central processor is that of cellular “super systems,” in which the centralized MTSO supplies essential network functions.
Both the megacentralization and dispersal trends in switching will almost certainly continue in the networks of the year 2000, with neither trend having emerged as dominant.
Another way in which the dominance of digital switching techniques will influence the evolution of both public and private networks is in the increasing use of packet-switched networks. Whereas the traditional circuit-switched call occupies a specific transmission link for the duration of the call, packet switching techniques permit multiple calls to alternate in using the same communication channel; thus, channel usage is more efficient, especially for data calls. Packet networks will provide signaling capabilities needed for implementation of advanced digital networks, such as the integrated services digital networks (ISDN). Packet networks also enhance adaptive routing capabilities, which are predicated upon sophisticated signaling capabilities for which packet switching techniques are well suited.
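The efficiency argument above can be made concrete with a small sizing sketch. The 5 percent duty cycle and the headroom factor of 2 are invented parameters for illustration only.

```python
# Sketch: why packet switching uses channels more efficiently than circuit
# switching for bursty data traffic. Parameters are illustrative assumptions.
import math

def circuit_channels(n_sources: int) -> int:
    """Circuit switching: one channel is dedicated for each call's duration,
    idle gaps included."""
    return n_sources

def packet_channels(n_sources: int, duty_cycle: float, headroom: float = 2.0) -> int:
    """Packet switching: size a shared pool for the average offered load
    times a headroom factor, since one call's idle gaps carry another
    call's packets."""
    return math.ceil(n_sources * duty_cycle * headroom)

# 50 data terminals, each actually transmitting only 5% of the time:
print(circuit_channels(50))       # channels tied up under circuit switching
print(packet_channels(50, 0.05))  # shared channels under packet switching
```

The tenfold difference is the statistical-multiplexing gain; for steady voice traffic (duty cycle near 1) the advantage largely disappears, which is why the text singles out data calls.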
Digital switches also provide self-diagnostic capabilities, enabling more rapid repair of damaged digital nodes. Operators at remote data terminals can examine distant switching nodes and determine which switch module needs to be replaced.
National Security Emergency Preparedness Implications
Enhanced distributed switching capabilities potentially give the networks of the future substantial augmentation in adaptive routing capabilities, which are essential for restoring network connectivity after major damage. But megaswitch nodes will create points of potentially catastrophic failure, and, as a later section of the chapter indicates, the increasing accessibility of network software will provide hackers and saboteurs with opportunities to damage the routing databases. As noted earlier, the vulnerability of large wire centers was graphically illustrated at Hinsdale. Recently a hacker penetrated university databases and even some computers at the National Security Agency.
Furthermore, the validation of software design, for systems of the complexity of the year 2000, is so difficult that confirmation of software performance in all network modes and conditions may prove unattainable. This consideration introduces additional uncertainty, particularly under conditions of high network stress.
INTEGRATED CIRCUIT TECHNOLOGY
Very large scale integration continues to stretch the imagination. The realization of 1-million-bit chips today has brought memory costs to the point where software developers consider memory to be free. Additional advantages are realized from VLSI chips that reduce power, increase speed, and reduce the size of the packaged system. The trend of VLSI will certainly be superseded by ultra large scale integration (ULSI), and the 4-million-bit chips now in early development will, as indicated above, reach the 100-million-bit threshold by the year 2000. In the future, memory will undoubtedly be treated as neither an obstacle nor a significant cost.
Semiconductor technologists have projected 100-million-bit chips, and other memory technologies continue to make significant jumps in speed and reductions in size. In both optics and magnetics, densities continue to increase. At the moment, the projections point to no limitations that would hinder a system developed for deployment in the year 2000.
VLSI technology has made possible the super-microprocessor chip. Not only do the advanced micros affect data processing, but they also bring today’s central office control into the single-chip
distributed switch control of tomorrow. The speed, power, and size factors completely change the concept of telecommunications system architectures for the year 2000.
A by-product of 100-Mbit chips will be multi-microprocessors with voting or automatic switchover between processors upon failure. In addition, multimegabit memory chips allow multiredundancy in the memories themselves. Thus, the concept of self-healing systems, talked about for the past 20 years, will certainly prove realizable in 2000. With such completely self-recoverable, robust systems, concerns about unduly short mean times between failures can disappear. System availability (the ultimate goal) will reach the levels necessary to hold maintenance costs down while providing uninterrupted service.
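One concrete form of the voting scheme mentioned above is triple modular redundancy: three processors compute the same result and a majority vote masks any single failure. The sketch below is illustrative, not a description of any particular switching system.

```python
# Sketch: triple modular redundancy (TMR), one form of the "voting"
# multi-microprocessor arrangement the text anticipates.
from collections import Counter

def majority_vote(results):
    """Return the value at least two of the three processors agree on,
    or None if all three disagree (an unmaskable multiple failure)."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None

assert majority_vote([42, 42, 42]) == 42   # all processors healthy
assert majority_vote([42, 7, 42]) == 42    # one faulty processor is outvoted
assert majority_vote([1, 2, 3]) is None    # double failure cannot be masked
```

The same idea applies within a memory chip: spare rows and a vote or switchover among redundant cells yield the on-chip self-healing the text describes.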
Digital signal processing (DSP) on a chip has been realized because of VLSI. Previously, filtering and other processing of analog signals could be achieved only through relatively large physical components (resistors, capacitors, and inductors). Now the microprocessor as signal processor places on a chip speed, power, and compactness that know almost no bounds. ULSI will only push DSP still further. The merging of the analog world with the digital world is thus one of the most natural integrations achieved since the beginning of electronics.
The advent of integrated circuits has brought a true revolution in electronics, which has reached no limitation today that will not be surpassed tomorrow. The future, into the year 2000, holds exactly the same promise. Not only will the merging of analog signal processing and digital signal processing continue but, at the system level, the advanced microprocessors will allow switching and transmission to merge as well. By the year 2000, the trend of today to bring switching and transmission multiplexors together will weld the switching multiplexor into a practically indistinguishable system element within a fiber network. Lastly, every time silicon seems to reach a limiting threshold and gallium arsenide appears poised to take over, silicon breaks another speed barrier. Whatever the final silicon limitation may be, the integrated circuit will continue to break the barriers the future requires.
National Security Emergency Preparedness Implications
Without hesitation it can be said that the semiconductor technology will meet any system requirement for the year 2000.
Computer control (stored-program control) is the dominant control technique for all electronic switching systems, including PBXs, in the U.S. networks. Furthermore, most terminal devices, from computers to feature telephones, use microprocessors. Whether large switching systems or small chips, software constitutes the breath that brings the equipment (hardware) to life. Unlike biological systems, the memory may be reloaded to make the equipment perform differently. The different sequences of instructions form programs that must be written and processed before the equipment may be activated. This process is time consuming and costly. While one may always hope that an invention or creative stroke of genius will alter this process, there is none currently on the technology horizon.
The variety of stored-program devices and systems is ever increasing. They are programmed in at least a dozen different high- and low-level languages. There is no unanimity among designers as to the best language or operating system, and one cannot predict what one will encounter in a given installation. This means that making universal changes in programmed devices is becoming ever more difficult.
The greatly increased reliance of the network on software to control network operations is a worrisome trend. As indicated above, large switches have software programs of up to 10 million lines of code, and massive databases used for network control result in concentration of network software assets. Further, the Federal Communications Commission has mandated that the Regional Bell Operating Companies provide Open Network Architecture (ONA) to enable nonnetwork-based providers to compete with the Bell Companies on an equal footing in competitive telecommunications markets. As noted in Chapter 4, while that purpose is laudable, the practical consequence of opening network software to outside access is a reduction in network security. Here again it is a mistake to view network assets solely in terms of their economic role and value; our public networks also have security and emergency capabilities that are critical to our national welfare and, indeed, to our very survival.
ONA will, to be sure, confer real benefits: Providers and users
can control their networks by reprogramming network software. Network structures can be dynamically reconfigured, reducing communications costs and providing a valuable tool for businesses with high data transmission requirements.
But as with fiber and digital switches there is a downside: viruses, Trojan horses, worms, and the like.
Sophisticated software is also a source of vulnerability in modern network signaling. The purposes of signaling are to route a call through the network and to report on its status—busy, ringing, connected, or terminated. Network signaling today is moving toward what is called “common channel signaling.” Older technology employed multifrequency signaling, such as the tones one hears in touch-tone telephones when dialing. In the old system, signaling was “in-band”: the network signals were carried in the same channel as the communications. In the new system, signaling is “out-of-band”: signals are software-created and then carried in a common signaling channel, physically separate from the communications channels. This consolidation of the signaling function creates additional vulnerability.
In a typical call with in-band signaling, the calling party signaled to his originating central office by dialing a number. This number was then sent to the next office over a voice channel, which would later be used for the actual conversation. Signaling then proceeded from office to office until the final destination was reached. If the called party’s line was busy, a busy signal would be returned over the circuit path of the call to the originating party. Thus, signaling was distributed throughout all the trunks in the network.
Common channel signaling will change that: All signaling takes place over separate data links, which connect the switching systems of the network. In a typical application, the calling party signals his or her central office by using multifrequency touch tone. The central office, employing common signaling, receives the dialed number and the central processor creates a message, which is sent over a separate packet-switched network to the destination central office. If the called party is busy, a busy message is returned over the packet network to the originating central office.
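The out-of-band flow just described can be sketched schematically: the dialed digits become a data message on a separate packet network rather than tones on the voice path. The message and field names below are invented for illustration and are not taken from any real signaling protocol.

```python
# Sketch: out-of-band call setup over a common-channel signaling network.
# Message structure and field names are hypothetical illustrations.
def build_setup_message(calling: str, called: str, trunk_id: int) -> dict:
    """Originating switch builds a call-setup message for the packet
    signaling network; the reserved voice trunk carries no signaling."""
    return {
        "type": "SETUP",              # request to establish a call
        "calling_number": calling,
        "called_number": called,
        "voice_trunk": trunk_id,      # trunk reserved for the conversation
    }

def busy_response(setup: dict) -> dict:
    """Destination switch answers over the same packet network, releasing
    the reserved trunk without it ever carrying a signal."""
    return {"type": "RELEASE", "cause": "USER_BUSY",
            "voice_trunk": setup["voice_trunk"]}

msg = build_setup_message("2015551234", "3125556789", trunk_id=17)
print(busy_response(msg)["cause"])
```

Note that under in-band signaling the busy indication would have tied up the entire voice path back to the caller; here it is a few bytes on the signaling network, which is the source of both the efficiency and the concentration the following paragraphs discuss.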
This new signaling technology provides much flexibility in processing and routing calls. However, the concentration of the signaling software and hardware into a subnetwork means greater vulnerability
than if the signaling function were spread throughout entire networks. Without signaling, networks cannot function, so this added vulnerability is a serious matter. For example, the signaling networks of domestic interexchange carriers depend on a very limited number of critical packet switching nodes. While those systems can function under failures at single points, they cannot do so under multiple failures.
Another source of software vulnerability arises from a concept called the “intelligent network.” These networks employ the packet signaling networks to provide access to remote databases used for offering such services as the national 800 number. Some of the intelligence that would normally reside in a local switching office is now removed and concentrated at a distant point reached by the packet-switched network. There are only a few of these databases, and they are another source of major vulnerability (from accidental or intentional destruction of software data stored in the databases).
Another worrisome prospect for networks increasingly driven by system software is the possibility that a disgruntled employee might invade and damage or destroy executable network source code. This danger will exist even if executable code is masked from outside access. Partitioning or physically separate backup software may be needed to reduce this risk; otherwise, a knowledgeable insider might disable more than one network node by sending (via transmission links) software program alterations from one network node to other nodes.
The enormous proliferation of private networks will not alleviate these problems. Such networks employ the same technologies, and unless they are interconnected and interoperable with the public networks they do not provide much redundancy. Indeed, many of these network lines interconnect with the public networks through hubs like Hinsdale, and often their lines are laid along the same bridges and highways as the public network lines. In any event, private lines between business offices will not help a resident who needs immediate access to 911 service after a central office burns down.
The public networks have traditionally relied heavily on the development of, implementation of, and adherence to standards to ensure interoperability and uniformity of performance. Prior to divestiture,
network standards were generally developed by the Bell System, implemented within the Bell System, and also made available to other telephone companies, which also embraced these de facto standards for the public networks.
In the postdivestiture environment, the telephone industry has embraced a voluntary standards-setting mechanism that adheres to and follows the American National Standards Institute (ANSI) due-process principles. The T-1 Committee, sponsored by the Exchange Carrier Standards Association (ECSA), has become the primary standards-setting instrument for the public networks. These forums are attended by a broad spectrum of industry participants, but the existence of conflicting interests can make consensus difficult to reach and can result in delay even where consensus is reached.
The computer industry is also putting more effort into the voluntary standards process through ANSI. The X-3 Committee, sponsored by the Computer and Business Equipment Manufacturers Association (CBEMA), along with the Institute of Electrical and Electronics Engineers (IEEE) and the Electronic Industries Association (EIA), all develop voluntary standards for data communications.
Two major trends affecting the entire telecommunications industry can be expected to have a major impact on the traditional role of standards in the evolution and operation of the public networks and of the private networks that may be used in an emergency to bridge breaks in the public network. These two trends are: (1) rapidly evolving, increasingly complex technologies and services and (2) increased competition, both of which will tax the ability of the existing standards process to keep pace.
In addition, because of the complexity of the technologies and services, the standards that are established may not have sufficient specificity to ensure full interoperability at the actual application level. Although individual public network providers may adhere to a standard, that alone will not guarantee interoperability. In order to alleviate this problem, groups are now being formed to establish conformance tests and provide testing services.
These trends, when coupled with the pressures of the competitive environment, may cause network providers to introduce technologies and services prior to the availability of a fully defined standard. Thus, the delay in setting narrow-band ISDN standards has retarded ISDN deployment and stimulated deployment of T-1 and other digital architectures (Buckley, 1989). However, the conformance testing
programs should make these problems short lived once they are detected.
National Security Emergency Preparedness Implications
As indicated above, the evolution of network management from circuit- to packet-switched architectures has the potential of significantly enhancing adaptation capabilities, but customer access to network software, the concentration of network databases, and the thinness of packet networks will create additional system vulnerabilities in the public networks. The deterioration of network interoperability resulting from standards degradation is, additionally, a matter of serious concern from a NSEP standpoint.
With an increasing number of carriers deploying digital networks, the public network configuration is evolving toward a set of separate islands, each having its own means for accessing and distributing a primary frequency reference. The islands will typically use a navigational system, such as Long-Range Navigation System-C (LORAN-C), for a timing source. As a backup facility, most islands will be equipped with cesium clocks having a 0.5 × 10⁻¹¹ accuracy. The LORAN-C system coverage is expected to be expanded and will remain operational until well after the year 2000, at which time other navigational systems, such as the Global Positioning System (GPS), will be available to provide equal or better frequency references for network synchronization.
National Security Emergency Preparedness Implications
The trend of partitioning into separate timing islands has a beneficial effect on NSEP goals. Each island distributes timing over redundant paths, and each path has a limited number of nodes so that buildup of timing error is minimized.
It is expected that network timing parameters, such as number of slips per day, probability of reframe events, and so on, will tend to become standardized. Applications with especially stringent timing requirements, such as encrypted voice or video messages, will
bear the burden of design to accommodate the standardized network timing-error characteristics. If certain future applications are developed that must operate with higher precision timing references, there is the possibility that network timing recovery and distribution equipment could gradually be enhanced by adding more redundant timing sources and by increasing buffer sizes so that the rate of timing slips is reduced to required values. An in-depth discussion of this subject is given in Appendix B.
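The cesium-backup accuracy cited earlier translates directly into a slip rate. The sketch below assumes a slip corresponds to one 125-microsecond frame and that the far-end reference is perfect; both are simplifying assumptions for illustration.

```python
# Sketch: slip rate implied by the cesium clock accuracy cited in the text.
# Assumes one slip per accumulated 125-microsecond frame of timing error
# against a perfect far-end reference.
FRAME_S = 125e-6            # one digital frame (125 us at the 8 kHz rate)
CLOCK_ACCURACY = 0.5e-11    # fractional frequency offset (figure from text)
SECONDS_PER_DAY = 86_400

seconds_per_slip = FRAME_S / CLOCK_ACCURACY
days_per_slip = seconds_per_slip / SECONDS_PER_DAY
print(f"one slip roughly every {days_per_slip:.0f} days")
```

A free-running island on its cesium backup would thus slip less than twice a year against an ideal reference, which supports the committee's conclusion that synchronization is not a pressing NSEP concern.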
As a result of this analysis, the committee reaches the following conclusion.
No significant synchronization timing issues for national security emergency preparedness appear to exist, because timing is set by the connected surviving access tandem.
A SUMMARY OF PUBLIC SWITCHED NETWORK VULNERABILITY TRENDS
Among the technology trends that are increasing network vulnerability are the development and perfection of fiber optic technology, the advances in digital switching, and the reliance on software for network control. Optical fibers offer great increases in traffic-carrying capacity compared to earlier transmission schemes. Consequently, new transmission routes are primarily fiber and, while a fiber route is not inherently more vulnerable than alternative landline transmission methods, fewer fiber routes are needed to meet capacity requirements.
The power of optical fiber technology is diminishing the number of geographic transmission routes, increasing the concentration of traffic within those routes, reducing the use of other transmission technologies, and restricting spatial diversity. All these changes are resulting in an increase in network vulnerability.
Switching technology has advanced in parallel with transmission technology. Today’s digital switches are physically smaller but have substantially greater capacity than earlier electronic switches. They also have the ability to control remote unmanned systems. Therefore, a single switching node may support communications for many tens of thousands of subscribers. Furthermore, each major transmission provider is embarked on an evolutionary path toward centralizing
control of its network in fewer switching centers and a small number of signal transfer points (STPs).
The evolution of switching technology is resulting in fewer switches, a concentration of control, and thus greater vulnerability of the public switched networks.
While switches have become more powerful and physically smaller, the costs of manpower and real estate have continued to climb. Consequently, communications providers are consolidating operations into fewer geographic facilities. This trend is also increasing the potential for catastrophic disruption that may be caused by damage to even a single location. Thus, access to critical nodes must be sufficiently restricted so that penetration by either casual or determined saboteurs is made virtually impossible.
There is a progressive concentration of various types of traffic in and through single buildings, resulting in increasing vulnerability. It is common for the following types of equipment to be in one building: signal transfer points; class 3, 4, and 5 switches; packet switches; mobile telephone switching offices; and private line terminations.
Along with developments in transmission and switching that result in greater capacity, the public switched network is gaining greater “intelligence” through improvements in its software technology. This is leading to new services where users have access to previously prohibited aspects of network management and operations. At the same time, computer hackers have become more sophisticated and are more able to penetrate computer software.
The public switched networks are increasingly controlled by and dependent on software that will offer open public access to executable code and databases for user configuration of features, a situation that creates vulnerability to damage by “hackers,” “viruses,” “worms,” and “time bombs.”
A significant aspect of the increasing vulnerability of the public networks is the trend toward centralization of network control by each of the major public communications carriers. The American Telephone and Telegraph Company (AT&T) network contains a limited number of STPs (deployed in pairs with each pair consisting of a primary STP and a backup). Other carriers will have even fewer
STPs. The committee notes that destruction of a pair of signal transfer points will disrupt a carrier’s network in an entire geographic region.
The competitive environment will provide backup for some threats, but not for correlated events in which damage is inflicted at several points by an intelligent adversary or by a widespread natural disaster.
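The committee's distinction between independent and correlated failures can be made concrete with a small probability sketch. The numbers below are purely illustrative assumptions, not availability data from this report; the point is the structure of the argument, not the specific values.

```python
# Why a mated STP pair protects against independent failures but
# not against correlated ones. All figures are illustrative.

p_fail = 0.01   # assumed probability that a single STP site is lost

# Independent failures (e.g., isolated equipment faults): the pair
# is disabled only if both sites fail independently.
p_pair_down_independent = p_fail * p_fail

# Fully correlated failure (e.g., a coordinated attack or a regional
# disaster that strikes both sites of the pair): the backup site
# offers no additional protection.
p_pair_down_correlated = p_fail

# A partially correlated event, with correlation parameter rho,
# interpolates between the two extremes.
rho = 0.5
p_pair_down_partial = rho * p_fail + (1 - rho) * p_fail * p_fail

print(p_pair_down_independent)   # two orders of magnitude better than one site
print(p_pair_down_correlated)    # no better than a single site
print(p_pair_down_partial)       # dominated by the correlated term
```

Under independence the pair improves survivability by a factor of one hundred in this sketch; under a correlated event the redundancy contributes nothing, which is precisely why paired STPs do not answer the threat of an intelligent adversary or a widespread natural disaster.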
This vulnerability to correlated events is a natural product of common channel signaling (CCS). Communications suppliers are moving toward centralization, common control, and consolidation because of the economic realities of the competitive communications world. It is unlikely that companies will act independently in the national interest to increase redundancy (and hence their operating cost) without financial incentives, legislative imperatives, or the ability to recover their additional costs.
Earlier committees of the National Research Council that examined NSEP communications noted the trend in the public networks toward CCS. These committee reports cautioned the NCS that too few signal transfer points would represent an increased potential for vulnerability in the network.
The committee’s review of this matter clearly indicates that the trend toward common channel signaling is continuing and is irreversible in the timeframe of concern. Moreover, economics is clearly driving the number of signal transfer points and associated database facilities downward. Thus, the network vulnerability is increasing.
Divestiture and competition have greatly increased the number of separate networks that make up the public networks. AT&T, MCI Communications Corporation, US Sprint Communications Company, and all the Regional Bell Operating Companies (RBOCs) are implementing CCS with STPs and related databases in order to provide new services and to use their network facilities more efficiently. If the total can be made interoperable to provide mutual support and backup in an emergency, the public networks will be much more robust and the STP vulnerability somewhat ameliorated.
At present, however, these separate sets of STPs are not fully interoperable and do not provide mutual support. It is not clear if
they can be made to do so. The NCS should examine this possibility very carefully and, if feasible, funds should be appropriated to increase interoperability.
Based on the foregoing discussion and analysis the committee makes the following recommendations.
Recommendation: Use More Technology Diversity
Because public network evolution is increasingly being driven by economic considerations, the National Communications System should ask the National Security Telecommunications Advisory Committee to examine how national security emergency preparedness needs can be met; the National Security Telecommunications Advisory Committee should recommend steps to make critical network nodes more secure, reduce concentration of network traffic, and increase alternate route diversity.
Trends in technology are increasingly concentrating public network assets in a few dominant technologies: fiber, digital switching, and software. The committee finds that these trends could adversely affect NSEP and recommends action to make critical network nodes more secure, reduce the concentration of network traffic, and examine ways to provide more diversity in transmission facilities.
Recommendation: The Nationwide Emergency Telecommunications Service Is Needed
Given that there is no assurance that by the year 2000 enhanced routing capabilities will be ubiquitous in the public networks, the Nationwide Emergency Telecommunications Service is needed now, and its functional equivalent will be needed beyond the year 2000 for national security emergency preparedness purposes.
Emerging network intelligence technologies will not, without remedial intervention, provide a suitable infrastructure for NCS’s proposed Nationwide Emergency Telecommunications Service (NETS).
Among the key new network capabilities the committee examined were the ISDN, switching techniques that use the asynchronous transfer mode, and the widespread deployment of VSATs. None of these, nor any other foreseeable emerging technology, will by itself ensure adequate fulfillment of the requirements for the proposed NETS. Due to the concentration of network intelligence in large switches and databases, the public networks will lack sufficient critical-node redundancy to implement NETS if disaster strikes.
Recommendation: Provide Additional Redundancy
Because concentration of network traffic and routing nodes is increasing network vulnerability, additional route diversity and network node diversity should be provided for national security emergency preparedness purposes.
Implementing priority access procedures cannot alone ensure the availability of emergency communications. If fire destroys the only central switching office that can route emergency traffic from a given area, or if an earthquake uproots critical optical fiber transmission lines, essential communications linkages will be severed. The increased reliance of the public networks upon a single technology for transmission—optical fiber—is thus a source of great risk to NSEP.
These measures will cost money. However, whether users, shareholders, or taxpayers should bear the cost is a matter of public policy that goes beyond the scope of the committee’s charter.
Recommendation: Increase Radio Access Capabilities
Since radio technologies can provide a valuable source of alternative routing in emergencies, the National Communications System should consider how terrestrial and satellite radio transmission can be employed to provide route diversity for national security emergency preparedness purposes; in particular, consideration should be given as to how very small aperture terminals can be used to back up the public switched networks.
Advances in radio technology offer great promise for augmenting network route diversity. Cellular mobile radio has enormously expanded available capacity for mobile communications interconnected with the landline switched networks; digital microwave technology is
making telephone service economical in hitherto inaccessible rural areas; VSATs are making satellite distribution economical and efficient for smaller business users and present possibilities for economical deployment of widely distributed intelligent network signaling architectures.
Recommendation: Retain Existing Synchronization
As existing network synchronization levels already exceed those required for national security emergency preparedness, no action need be taken to increase the robustness of network synchronization beyond existing standards for normal network operation; designers of terminal devices should engineer them to operate satisfactorily under system synchronization standards.
In one respect, that of network synchronization, the existing and prospective network capabilities appear more than sufficient to meet present and future NSEP requirements. The committee examined network synchronization in detail and concluded that the present standards ensure an adequate margin of safety. However, because users have full freedom to connect registered terminal devices to the public networks, it is incumbent upon equipment designers to provide units that function properly within existing network synchronization standards.
REFERENCES

Buckley, W. 1989. T1 standards and regulations: Conflict and ambiguity. Telecommunications (March).
Dvorak, C. 1987. A framework for defining service quality and its applications to voice telephony. Presentation to the Committee on Review of Switching, Synchronization and Network Control in National Security Telecommunications, Washington, D.C., December 8.
Henry, P. 1988. Lightwave communications: Looking ahead. Presentation to the Committee on Review of Switching, Synchronization and Network Control in National Security Telecommunications, Washington, D.C., May 18.
Langseth, R. 1987. Data communications overview: Network performance and customer impacts. Presentation to the Committee on Review of Switching, Synchronization and Network Control in National Security Telecommunications, Washington, D.C., December 8.
Lowndes, J. 1988. Corporate use of transponders could turn glut to shortage. Aviation Week & Space Technology (March 9).
National Communications System. 1988. May 8, 1988 Hinsdale, Illinois Telecommunications Outage. Washington, D.C.: National Communications System.
Samuelson, R. 1988. The coming blackouts. Newsweek (December 26).
Solomon, R.J. 1988. Planning for uncertain futures: The utility of a general purpose broadband network. Presentation to the Committee on Review of Switching, Synchronization and Network Control in National Security Telecommunications, Washington, D.C., March 15.