Toward a National Data Network: Architectural Issues and the
Role of Government
The last two decades have seen several revolutions occur in the telecommunications field, encompassing both the underlying technologies used to construct networks and the services offered to customers over these networks. In this paper, we follow two threads on their converging paths: the emergence and evolution of packet switching as the dominant technology for data communications and the central management of computer-controlled switches as a mechanism to create virtual private networks (VPNs) out of a national infrastructure. By 1990, the second thread culminated in the dominance of VPNs for the voice communications requirements of large customers. Now, both threads can combine to create a similar phenomenon for data communications requirements.
These events are playing out against a background of explosive growth in requirements for data communications. Growing public interest in the Internet and the availability of user-friendly access tools are causing a doubling of Internet traffic every 12 months. Federal government programs focusing on service to the citizen and more efficient operations within government are driving federal agency requirements higher and higher. Finally, there is a national initiative to bring the benefits of reliable, inexpensive data communications to public institutions as a whole through the creation of a national information infrastructure (NII). The federal government's role in bringing the NII into being is unclear at present; current proposals call for the private sector to play the major role in actually building infrastructure. However, it has been postulated that the federal government could use its vast purchasing power to facilitate the development of an open data network, a key building block of the NII.1
It is well known that telecommunications networks exhibit economy-of-scale effects; unit costs decrease as absolute volume increases. This paper explores the economics of single-agency networks, government-wide networks, and networks with the span of the proposed NII. Specifically, it examines the benefits of a government-wide network based on shared public switches and transmission facilities. Such a network would yield the best unit prices for the government and create a solid infrastructure base for the NII at no additional cost.
Before we begin, a basic review of the key concepts underlying the issues explored in this paper is warranted to avoid any confusion over terms and definitions. These concepts are best discussed in terms of contrasts:
In a simple sense, data communications take place between computers, and voice communications between people. This basic distinction gives data traffic characteristics quite different from those of voice traffic.
Data traffic is "bursty" in nature; it occurs in periods of intense activity followed by (sometimes long) periods of silence. If delivery of the data cannot be immediate, they may be stored in the network and delivered at a later time (how much later depends on the application). Data traffic is not tolerant of errors; any error passed to an application will cause problematic behavior of some kind. The above characteristics lead to the requirement for protocols to package the data, transmit the package through intermediate points, check for and correct errors, and deliver the package to the distant end. By contrast, voice communications are continuous in nature and have a real-time delivery requirement. A varying delay for a voice signal results in totally incomprehensible speech at the receiving end. However, voice signals are robust and tolerant of errors. Speech itself is so redundant that words remain comprehensible even at error rates of 1 in 100 in the digital bit stream. It is clearly more important for voice applications to deliver the signal on time than to deliver it with 100 percent accuracy.
These different requirements have led to different switching techniques for voice and data communications. Circuit switching sets up a path with a guaranteed bandwidth from end to end for each voice call. While this would also work for data communications, it would be inefficient since the reserved bandwidth would be unused most of the time. A technique known generically as packet switching was developed to effectively share transmission resources among many data communications users. When we refer to data networks in this paper, we are talking about networks employing some form of packet switching.
The final concept we need to explore is the concept of shared versus dedicated networks. In the early days of telephony, all customers used (shared) the public switched network for voice communications. The backbone of the public switched network was composed of large circuit switches located on the telephone companies' premises. By the 1960s, manufacturers began producing economical switches designed for use on customers' premises to lower costs for large users. While these were primarily for local service, it was soon discovered that large customers could connect these premises switches with private lines and create private networks for long distance voice communications. Public switched service rates at the time were high enough that these private networks were economical for large corporations (and the federal government).
As packet switched data networks came into being in the 1970s, the private network alternative was the only one available to customers. There was no public packet switched data network, nor was there a large demand for one. Private data networks grew alongside the private voice networks, with packet switches on customer premises and private lines connecting the switches. Computer processing technology limited the capacity of packet switches to that required by a single large customer; little penalty was paid in not locating switches on carrier premises where they could be shared by many customers.
In the 1980s, two forces converged to spell the end of the private voice network. Divestiture created a very competitive interexchange market, and computer-controlled switch technology evolved to the point where the partitioning of a large network in software became feasible. In this case, the large network was the public switched network of each interexchange carrier that was serving the general population. Over time, competition drove the unit prices being offered to a wide range of customers down to levels consistent with the large total volume. Volume ceased to be a price discriminator beyond a level of about one-tenth of total federal government-wide traffic. This service, which now dominates the voice communications market, is called virtual private network (VPN) service.
The current approach to data communications for large customers is still the private network. There are two shortcomings to this approach: economies of scale beyond a single user are never obtained, and the proliferation of switches on user premises does not further the development of a national infrastructure. Technology such as asynchronous transfer mode (ATM) switches and high-capacity routers is emerging that makes carrier-premises switching feasible. At the same time, initiatives in government and in the research and education communities are generating a large future demand for data communications. The consolidated demand of the federal government could create an infrastructure that pushes unit costs well up on the economy-of-scale curve. However, this will only be the case if a shared network is used to satisfy these requirements. We call this network, based on standard interfaces and protocols, the National Data Network (NDN). This paper presents the
architecture and economics of such a network as it is used to satisfy the requirements of a single federal agency, the federal government as a whole, and the general population as the transport element of the NII. The implications of this architecture for universal access to the information highway will be apparent.
Any study of telecommunications alternatives must make assumptions about the technological and regulatory environment that will be present during the projected time period. In this case, the time horizon spans the next 5 years. We assume that router and ATM switch technology that is now emerging will be ready for widespread deployment in operational networks during this period. Although the regulatory requirements that mandate the current restrictions relating to local exchange carriers (LECs), interexchange carriers (IXCs), and local access and transport areas (LATAs) are likely to be modified by the end of the period, the current structure is assumed for this study. Even if LATAs were removed as formal entities, they would remain useful as statistical areas for traffic purposes.
All switching will be performed at LEC and IXC locations, beginning at the LEC central office, or wire center, serving the customer. Access to the wire center could be through a dedicated local loop or through a shared channel on an integrated services digital network (ISDN) local loop. Local loop costs between the customer's location and the wire center are not part of this study. Routers at wire centers or at tandem offices within a LATA will be provided by the LEC. LATA and regional switches that are used for inter-LATA transport will be provided by the IXC. Note that, in this scenario, both LECs and IXCs must implement shared subnetworks and cooperate in achieving the level of sharing needed for end-to-end economies to be realized. This is generally not so in today's networks: each carrier typically uses the other only for dedicated circuits between its switches. The realization of an end-to-end shared network architecture is critical for the formation of a National Data Network.
A final assumption that does not affect the economics of this study, but that is essential to the viability of a shared network, is the successful addressing of security issues. Most present networks use encryption only on interswitch links; the traffic in the switches is in the clear and is protected by physical security. Because physical security cannot be relied upon once the switch moves to carrier premises, more complex security systems are required that encrypt traffic at the source, before the network protocols are added. Fortunately, such systems are being designed and deployed, since they also provide a much higher level of security than the present approach.
Three physical network models were formulated and evaluated for cost-effectiveness over a range of traffic levels. These models were crafted to represent the generic data communications requirements of a single federal agency, the federal government as a whole, and the general public as a user of the NII. The models differed primarily in the number and distribution of wire centers served by the network. The agency model served 1,350 wire centers; the distribution of wire centers served was derived from projected Treasury locations during the time frame of the study. The government model expanded the coverage to 4,400 wire centers. These corresponded to the wire centers currently served by FTS2000. For the NII model, all the wire centers in the continental United States (21,300) were served.
A network was designed for each model and traffic level. Transmission and switching capacity were sized to meet the throughput requirements for the traffic. Each network was then priced using monthly prices for transmission and amortized prices for switching equipment. A percentage factor was applied for network management based on experience with agency networks. The total costs and unit costs for each model and traffic level were then computed and analyzed. The following sections provide additional details on the network architecture and the traffic and cost models used in the analysis.
A four-level hierarchy was used for the network architecture; the locations of the switches were based on the existing LEC and IXC structure. The equipment used for switching was based on the current technology of high-end routers at the lower levels of the hierarchy and ATM switches at the backbone level. As ATM technology evolves, the current routers will likely be replaced by ATM edge switches. For the purposes of this study, the capacity and cost factors of these two technologies would be similar.
At the top of the hierarchy is a backbone consisting of fourteen switches and connecting circuits. The country was divided into seven regions corresponding to the current regional Bell operating company (RBOC) boundaries. Two switches were placed in each region for redundancy. The topology of the interconnecting circuits was based on the interswitch traffic, with the requirement that each switch be connected through at least two paths. The backbone subnetwork is shown in Figure 1.
Within each region, one switch is placed in each LATA in the region. This switch serves as the "point of presence" in the LATA for the backbone network. Each LATA switch is connected to one of the regional backbone switches. Intra-LATA traffic is switched internally at this level and does not reach the backbone. Figure 2 illustrates this connectivity for the Northeast region, showing the two backbone switches and fourteen LATA switches.
Within each LATA, traffic from the wire centers is concentrated at tandem routers before being sent to the LATA switch. These tandem routers are located at LEC central offices, usually those serving a large number of exchanges. The number of tandem locations is somewhat dependent on the model in use, but a typical configuration is shown in Figure 3 for the Maine LATA (seven tandem switches are shown).
The final level in the hierarchy consists of routers in the LEC wire centers serving actual users. Access to these routers occurs over the customer's local loop and can take various forms:
As stated above, the number of wire centers served is dependent on the model being evaluated.
A single, scalable traffic model was used to evaluate all three physical network models (agency model, government model, NII model). In the future, data applications will encompass all facets of government operation, not only data center operations. Consequently, the current LATA-to-LATA voice traffic distribution of the government as a whole was used as the basis for the traffic model. This reflects the level of government presence and communities of interest within the country. The resulting traffic matrix represents a generic approach to characterizing a national traffic distribution.
Within a LATA, the traffic was assigned to wire centers based on the number of locations served (or, in the NII model, the number of exchanges served). The base unit used to characterize traffic was terabytes per day (1 terabyte = 1 million megabytes). As a calibration point and a way to put the traffic units into perspective, Table 1 gives the approximate traffic volumes for existing and proposed networks.
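As a further sense of scale (a back-of-the-envelope sketch, not part of the study's model), a steady daily volume in these units can be converted to an equivalent average bit rate:

```python
# Convert a steady daily traffic volume to an average bit rate.
# Uses the paper's decimal definition: 1 terabyte = 1e12 bytes.

def tb_per_day_to_mbps(tb_per_day):
    bits_per_day = tb_per_day * 1e12 * 8
    seconds_per_day = 24 * 60 * 60
    return bits_per_day / seconds_per_day / 1e6  # megabits per second

# A government-scale load of 4 TB/day averages roughly 370 Mb/s.
print(round(tb_per_day_to_mbps(4)))  # → 370
```

Actual networks must of course be sized for peak rather than average load, but the conversion puts the terabyte-per-day figures used below into familiar bandwidth terms.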
The current FTS2000 packet switched service (PSS) carries only a small percentage of the civilian agency data communications traffic; most of the traffic is carried over private networks using circuits procured under FTS2000 dedicated transmission service (DTS). The Department of the Treasury is procuring such a private network at this time and has estimated its traffic requirements over the life of the new contract. Analysis of current DTS bandwidth utilization by agency indicates that Treasury represents about 7 percent of the total FTS2000 agency requirement for bandwidth. This includes Department of Defense (DOD) circuits on FTS2000 but does not include the large number of DOD mission critical packet switched and IP router networks. The volume of traffic on these networks may be as large as the FTS2000 agency estimate.
As a point of comparison, the March 1994 Internet traffic traversing the NSFNET backbone is given.2 This traffic doubled in the past year and shows every indication of an increasing growth rate. Of particular interest would be the amount of regional Internet traffic that could use the infrastructure generated by the NDN for more economical access and transport service.
The traffic volume generated by a mature NII cannot be estimated; truly, the sky is the limit if any of the future applications being contemplated captures the imagination of the general public.
This study used volume ranges appropriate to the physical model under consideration. The agency model was evaluated at volumes ranging from 0.006 to 0.5 terabytes per day. The government was modeled as representing 14 typical agencies and was evaluated at volumes ranging from 2 to 8 terabytes per day. Note that although the government model had 14 times the traffic of the typical agency, it utilized only 3.3 times as many wire centers. The NII model extended the reach of the network to 5 times as many wire centers; it was modeled as carrying the traffic of 8 government-size networks (16 to 64 terabytes per day).
Circuit Cost Model
The cost of circuits used in this study was based on the current, maximally discounted tariffs for interoffice channels (i.e., channels between carrier premises). Local channels were not used since all switches in the study were located on carrier premises. Rates for OC-3 and OC-12 were projected as mature rates following the current economy-of-scale trends. Carrier projections for these rates support this view. The five line speeds used for circuits were as follows:
Figure 4 shows the monthly cost of a 500-mile circuit at different line speeds, illustrating the economies of the higher-speed circuits.
Equipment Cost Model
The wire center, tandem switch, and LATA switch cost models were based on high-end router technology (e.g., Cisco 7000). Serial interface processors were used for link terminations. ATM interface processors were assumed for T-3 links and above. ATM concentrator switches in the future should exhibit cost behavior similar to that of the router configurations. The backbone switch cost model was based on high-end, wide-area ATM switch technology (e.g., AT&T GCN-2000).
The one-time cost of equipment was amortized over a 5-year period to obtain an equivalent monthly cost that could be added to the monthly transmission costs. Before amortization, the capital cost of the equipment was increased by 20 percent to account for installation costs. Finally, a monthly cost of maintenance was added at a rate of 9 percent of the capital cost per year. These factors correspond to standard industry experience for these functions.
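These amortization factors can be expressed as a small calculation (a sketch of the study's stated factors; the capital cost in the example is an assumed figure, not a value from the study):

```python
# Equivalent monthly cost of switching equipment, per the study's factors:
# +20% of capital for installation, 5-year (60-month) amortization,
# and maintenance at 9% of capital cost per year.

def equivalent_monthly_cost(capital_cost,
                            install_factor=0.20,
                            amort_months=60,
                            maint_rate_per_year=0.09):
    installed = capital_cost * (1 + install_factor)
    amortized = installed / amort_months
    maintenance = capital_cost * maint_rate_per_year / 12
    return amortized + maintenance

# Assumed example: a $100,000 switch yields $2,000/month amortization
# plus $750/month maintenance.
print(equivalent_monthly_cost(100_000))  # → 2750.0
```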
Management Cost Model
Network management costs were estimated at 25 percent of the equipment and transmission costs, based on current experience with agency networks. Implementing management cost estimates as a percentage assumes that network management will show the same economy-of-scale effects as the other cost elements. In fact, large networks will probably realize greater economies in network management than in any other area. These costs are driven mostly by personnel costs, which are relatively insensitive to traffic volume and only marginally related to number of locations.
The results of the analysis are presented here in two formats. The first graph for each physical model shows the variation of monthly cost with volume. The second shows the variation of unit cost with volume. Unit costs are presented in cents/kilosegment (1 kilosegment = 64,000 characters).
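The unit-cost conversion follows directly from these definitions (a sketch; the monthly cost and volume in the example are hypothetical, not results from the study):

```python
# Unit cost in cents per kilosegment, given an equivalent monthly cost and a
# daily traffic volume. 1 kilosegment = 64,000 characters; 1 TB = 1e12 bytes.

def cents_per_kilosegment(monthly_cost_dollars, tb_per_day, days_per_month=30):
    bytes_per_month = tb_per_day * 1e12 * days_per_month
    kilosegments = bytes_per_month / 64_000
    return monthly_cost_dollars * 100 / kilosegments

# Hypothetical figures: a $10 million/month network carrying 4 TB/day.
print(round(cents_per_kilosegment(10_000_000, 4), 2))  # → 0.53
```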
Figure 5a shows the variation of monthly cost versus volume for the agency traffic model. The curve demonstrates the classic economy-of-scale shape, although the effect is more discernible in the unit cost curve presented in Figure 5b. The unit costs predicted by the model at the lowest traffic levels are consistent with the current unit costs for FTS2000 packet switched service, which is operating at these traffic levels. It can readily be seen that large single agencies can achieve economies with private networks at their current volumes of 0.1 terabytes (TB) per day.
The monthly and unit cost curves for the government model are presented in Figures 6a and 6b. The combined costs of multiple single-agency networks comprising the same volumes are also shown. Significant cost savings are achievable with a government-wide network versus the multiple-agency networks. Unit costs at government-wide volumes (2 to 8 TB/day) are approximately one-third the unit costs realized using the volumes of even the largest single agencies (0.1 to 0.4 TB/day). The economies achievable for the smaller agencies would be much greater. A portion of the reason for the substantial economies realized is the more efficient use of facilities from the local wire centers up to the LATA switches. With multiple-agency networks, a large number of inefficient, low-speed circuits exist in parallel at the same local wire center. With a shared network, the traffic on these circuits can be bundled over more efficient, higher-speed circuits. The situation is made worse if the multiple agency networks use switches on customer premises rather than in wire centers.
The relative economies in moving from government-size networks to networks on the scale of the NII show a similar pattern (Figures 7a and 7b). The savings are not as great as in the previous example since extra costs are involved in extending the reach of the NII into all wire centers. Nevertheless, if the traffic increase is assumed to be on a par with the increased coverage (8 times the traffic with 5 times the wire centers covered), then the economies are still there and the enormous benefits of full coverage are realized.
The single NII network produces a 37 percent unit cost savings over the eight separate networks comprising the same volume. Note that large increases in traffic from the wire centers already serving federal government traffic could be handled at little additional cost. This would be the case in most urban and suburban areas.
The cost figures presented above represent the costs as equivalent monthly costs, including the amortized cost of equipment and installation. It is instructive to break the equipment and installation costs out separately since these costs represent capital investment. In particular, Table 2 presents the additional investment required to carry increased traffic, given that a government-wide network carrying 4 TB per day of traffic has already been constructed. The investment in equipment required to build a network of that size is approximately $160 million. While substantial, this investment is commensurate with the estimated investment made to provide FTS2000 services to the federal government in 1988. These investment costs would be recovered through service charges to the government.
As Table 2 shows, additional traffic can be carried through the same wire centers that serve the government with little additional investment. Additional capital is needed to expand toward a fuller NII infrastructure covering 5 times as many wire centers as the government network. However, the government network would still provide a significant jumping-off point for the complete network. For example, an NII network serving all wire centers at 8 TB/day would require an investment of $410 million in equipment ($160 million for the first 4 TB/day through the government wire centers plus $250 million for the additional 4 TB/day through the remaining wire centers). The government network would have already caused 40 percent of that investment to be made.
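The investment arithmetic in this example can be checked directly (figures from the text; the roughly 40 percent share quoted is a rounding of the exact ratio):

```python
# Investment figures quoted in the text, in millions of dollars.
government_network = 160  # first 4 TB/day through government wire centers
nii_extension = 250       # additional 4 TB/day through the remaining wire centers

total = government_network + nii_extension
share_already_made = government_network / total

print(total)                         # → 410
print(round(share_already_made, 2))  # → 0.39 (the text rounds to 40 percent)
```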
The largest portion of the investment and monthly costs is in the access areas of the network, the portion that is normally provided by LECs. This reinforces the point made above in this paper that the shared network concept must be extended all the way to the user. It also points out the need for uniform standards for interfaces and switching in all regions (a minimum requirement for any open data network).
Three major conclusions can be drawn from the analysis presented above:
The savings resulting from the NDN approach are substantial enough to justify the complexities of an aggregated procurement (coordination of requirements, security, standards). Such a procurement would have to be carefully structured to harness the competitive forces necessary to motivate both local and interexchange carriers to pass on the cost savings shown above through lower prices. The end result would be a quantum step forward for the government and the country on the road to the information technology future.
1. Computer Science and Telecommunications Board, National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. National Academy Press, Washington, D.C.
2. Computer Science and Telecommunications Board, National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. National Academy Press, Washington, D.C.