Technology in the Local Network
The local telephone network is that part of the telephone network that connects individual subscribers, homes, businesses, and so on, to an end-office switching center and includes the end office itself. In its early embodiment, the local network was simply a telephone (or other instrument) at a subscriber's residence or place of business connected to a pair of wires that led to a switching office (Figure 1). At the switching office, connections were made between local users, or the signal was sent via a tandem or long-distance path for subsequent connection through another part of the telephone network.
The early design goals placed on the local network were relatively simple (at least from our perspective today): to provide reliable transmission and switching of voice signals that could be easily understood at the receiving end. There were other considerations, such as ringing the phone, that were also necessary, but, mainly, the subscribers just wanted to hear intelligible voices.
In concept, the local network is not much different today than it was in the past except that the termination at the subscriber's premises is made at a standard interface that does not include the customer's on-premises wiring or telephone (or other equipment). Of course, things are much more complex now, because the demands for added bandwidth, new services, and overall cost efficiency have greatly changed the design goals used to plan and implement the network.
The local telephone network is evolving rapidly from its historical manifestation as a narrowband connection of physical addresses to a more complex network of networks that includes narrowband, broadband, and variable-band transmissions to physical and logical addresses. The added capabilities and increased efficiency of the telecommunications network have allowed the introduction of new data services such as frame relay and switched multimegabit data service; developed new, intelligent features across a broad spectrum of users; and positioned the network for significant growth in the future.
History of the Local Network
At the time the telephone was introduced, the telegraph was regarded as far more important to commerce. The product of the telegraph was a written record of the communicated message. This tangible record provided a link to the familiar handwritten discourse of the commerce of the day. Because it did not provide a record of the message, the telephone was initially regarded as a novelty.
However, as the need for communications increased, the telephone soon surpassed the telegraph as the medium of choice. In fact, having a telephone became so popular that the proliferation of the supporting wires became objectionable. Engineers were forced to find a more compact means of running the wires from point to point. The engineers found that wrapping the copper pairs with paper insulation and encasing the resulting bundles of pairs in lead allowed a more compact transmission medium, called a cable.
It was also obvious that it was impractical for users to string a pair of wires from their location to each person to be called. The solution was to run all of the pairs of wires from each customer's premises to a central location. There, "operators" could connect sets of wires together according to instructions from the customers. This created a switching center at the central point of the wires and was the origin of the old-timer's call, "Hello, Central!"
The switching at Central was originally accomplished by human operators who manually interconnected the customers' calls. The transmission medium was copper wires either cabled or open. Control was provided by the customers' verbal instructions and the operators' manual actions. This division in function (i.e., transmission, switching, control, and terminal equipment) still exists in today's telecommunications networks, albeit in radically different forms.
Transmission Media and Multiplexing
Transmission equipment for telephony has evolved from simple open wire carrying a single conversation to optical fibers that can carry many thousands of conversations.
Signals from a home or business are carried over a twisted pair of copper wires, called the local loop, to a centrally located local office. Hundreds and even thousands of wire pairs are carried together in a single large cable, either buried underground in a conduit or fastened above ground to telephone poles. At the central office, each pair of wires is connected to the local switching machine. The transmission quality of the early installations was highly variable. Today, however, the plant that is being installed has the capability to transmit at least basic rate ISDN (144 kbps). This is true even for long loops (greater than 12,000 feet) that require loop extension equipment or lower-resistance wire. (Future plans will reduce the number of those long loops.)
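The 144-kbps basic rate ISDN figure cited above has a standard decomposition not spelled out in the text: two 64-kbps bearer (B) channels plus one 16-kbps data (D) channel, the familiar 2B+D structure. A minimal sketch of the arithmetic, under that assumption:

```python
# Basic rate ISDN (BRI) as 2B+D. The channel structure is the
# standard one; the text gives only the 144 kbps total.

B_CHANNEL_BPS = 64_000   # one bearer channel (digitized voice or data)
D_CHANNEL_BPS = 16_000   # signaling/data channel

basic_rate_bps = 2 * B_CHANNEL_BPS + D_CHANNEL_BPS
print(basic_rate_bps)  # 144000
```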
In order to reduce costs, methods were developed to combine (multiplex) a number of subscribers on a single transmission medium from the central office, with the individual wire pairs split off nearer to the subscribers (Figure 2). As advances in technology progressed, multiplexing kept pace by increasing the number of conversations carried over a single path. Only a few years ago, multiplexing provided tens of conversation paths over a pair of wires. Initially this was accomplished by shifting each telephone signal to its own unique frequency band.
A major advance in multiplexing was accomplished when normal voice signals were converted into a coded digital form. In this form, the digital signals could be regenerated repeatedly without loss of the voice or other information content.
With today's time division multiplexing, each telephone signal is converted to a digital representation. That representation is inserted into fixed time slots in a stream of bits carrying many digitized telephone signals, with the overall stream operating at a high bit-rate. (An uncompressed voice signal requires 64,000 bits per second [bps] in digital form.)
The multiplexed signals can be transmitted over a variety of transmission media. The most common multiplexing system, called T1, operates over two pairs of copper wires carrying 24 telephone signals, at an overall bit-rate of 1.544 million bits per second (Mbps). First installed in 1962, the system is still widely used today.
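The 1.544-Mbps T1 rate follows directly from the figures in the text, combined with the standard PCM parameters (8-kHz sampling, 8-bit samples, one framing bit per frame), which are assumptions here rather than statements from the source:

```python
# Sketch of the arithmetic behind T1 time-division multiplexing.
# From the text: 24 voice channels, 64,000 bps per digitized voice
# signal, 1.544 Mbps line rate. The sampling and framing parameters
# below are the standard PCM values, assumed rather than quoted.

SAMPLE_RATE_HZ = 8000   # 8 kHz voice sampling (assumption)
BITS_PER_SAMPLE = 8     # 8-bit samples -> 64 kbps per channel
CHANNELS = 24

voice_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE      # 64,000 bps per voice signal
payload_bps = CHANNELS * voice_bps                # 1,536,000 bps of voice
framing_bps = SAMPLE_RATE_HZ * 1                  # one framing bit per frame
line_rate_bps = payload_bps + framing_bps         # 1,544,000 bps total

print(voice_bps, payload_bps, line_rate_bps)  # 64000 1536000 1544000
```

The one framing bit per 8-kHz frame is what lifts the rate from 1.536 Mbps of voice payload to the 1.544 Mbps figure quoted in the text.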
With optical fiber, a beam of light is transmitted through a very thin, highly pure glass fiber. The light travels in parallel rays along the axis of the fiber. Many telephone signals are multiplexed together, and the light source is simply turned on and off to encode the ones and zeroes of the digital signal. A single strand of optical fiber used in today's telecommunication systems can carry 30,000 telephone signals. Also, the repeaters in an optical fiber system can be separated much farther (tens of miles as opposed to 1,000 feet) than in an electrical system (see Figure 2).
The greatly increased capacity of fiber links at relatively low cost has led to the practicality of "bypass," in which high usage subscribers bypass the local provider and feed directly into the telecom network (Figure 3).
The history of transmission media and multiplexing shows an ever increasing progression of the total number of conversations that can be carried over a specific generation of the technology. The more conversations carried, the lower the cost per call, or the greater the bandwidth available per subscriber.
The telephone network is a switched network. The connection from one telephone to another is created and maintained only for the duration of each individual telephone call. In the early days, switching was performed manually, by operators who used cords to connect one telephone line to another. The automation of switching was first accomplished by allowing customers to directly control electromechanical relays and switches through a "dial" attached to the telephone instrument. Electromechanical switching equipment reduced the need for human operators. However, the equipment's capacity and capability for supporting new features was limited.
Most of today's switching machines switch signals that are in digital format. Digital switching interfaces well with the time-division-multiplex technology of today's transmission systems.
As technology has advanced, it has blurred some of our old categorizations in the local networks. We now have remote units in the feeder network of various sizes and capabilities that combine the transmission, multiplexing, and switching roles previously accomplished by discrete systems. Initially, these changes were done to reduce costs, but now this added complexity has given the networks much greater flexibility to grow and expand in capability.
The telephone system uses a myriad of control signals, some of which are obvious to the customer and others of which are unknown to him or her. The early signals between the customer and Central were a crank on the magneto to summon the other end. Instructions on whom to call were exchanged by verbal commands.
Today's customers are very familiar with the telephone's ring, dial tone, dialing keypad tones, and busy tone. However, control signals sent between switching offices travel over circuits separated from the voice channel and do not directly involve the customer. This separate signaling arrangement is called "common channel signaling system number seven," or SS7.
Although the initial motivation in the introduction of common channel signaling was an improvement in call set-up, this change has supported the movement of network intelligence out of proprietary switching systems and into a set of distributed processors, databases, and resource platforms connected through well-defined industry standards.
Thus, we have seen the implementation of the intelligent network in which intelligence is added to the network to implement new features and services such as personal number usage, virtual PBXs, voice response
services, and others (see Figure 4). Further, this intelligence in the network has the potential to evolve to be the service infrastructure for future broadband and multimedia applications such as interactive video games, remote learning, and others.
Telephones and Other Station Apparatus
The telephone itself is a rather simple appliance. A microphone (the transmitter) and an earphone (the receiver) are contained in the handset. The modern keypad dialer sends unique combinations of two single-frequency tones to the central office to indicate the particular digits dialed.
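The two-tone keypad scheme described above can be sketched as a small lookup. The row and column frequencies below are the standard DTMF values, which the text itself does not give:

```python
# Each keypad press sends one "row" tone plus one "column" tone.
# Frequencies are the standard DTMF assignments (an assumption here;
# the text says only "two single-frequency tones").

ROW_HZ = [697, 770, 852, 941]
COL_HZ = [1209, 1336, 1477]
KEYS = ["123", "456", "789", "*0#"]

def tones_for(key: str) -> tuple[int, int]:
    """Return the (row, column) frequency pair for a keypad digit."""
    for r, row in enumerate(KEYS):
        if key in row:
            return ROW_HZ[r], COL_HZ[row.index(key)]
    raise ValueError(f"unknown key: {key}")

print(tones_for("5"))  # (770, 1336)
```

Because each digit is a unique pair of frequencies, the central office can decode the dialed number reliably even over a noisy voice channel.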
Newer instruments, especially personal computers, are now common on the network. Their capabilities include simultaneous voice and picture communications and telephone functions built into computers, and they will include features yet to be invented.
Future Evolution of the Local Network
There is currently a revolution under way in the local telephone network that is being brought about by changes in the technical, competitive, and regulatory arenas. From the technical perspective, there has been a great advance in the ability of networks generally to handle and process information. More precisely, there has been a digital revolution brought about by the ever increasing power and ever decreasing cost of microelectronics.
One of the real drivers in this revolution is electronics technology, specifically progress in semiconductor fabrication capabilities. Today's 0.5-µm feature size for complex production integrated circuits is expected to shrink to 0.25 µm by 1999 and 0.18 µm by 2002. This trend will allow microprocessor speeds to increase from today's approximately 200 MHz clock rates to 500 MHz over the same time frame (see Figure 5). In addition, DRAM (dynamic random access memory) chip capacity will also increase from today's 16 Mb chips to an estimated 1024 Mb capacity in 2002. Putting these advances in electronics to work in the development of RISC (reduced instruction set computer) processors, the heart of desktop workstations or set-top boxes, means that these processors can be expected to perform calculations such as those needed for video compression ten times faster in the years after 2000 than they do today.
While the text files transferred today between users are normally about 100 kb, files containing graphics routinely exceed 1 Mb. Users now commonly attempt to send such 1-Mb files over existing telephone modem lines, and the commonly used techniques are clearly inadequate for the task. Thus, at least a factor-of-ten improvement in available data rate is required.
The data services that users will soon demand certainly exceed the capability of the existing telecommunications network. As traffic begins to include full-screen, high-resolution color images, files will become of the order of 1 Gb in size, dictating a further increase in capacity of three orders of magnitude. This sort of increase in capability will require some fundamental changes. Growth will be required not only in the pipelines that provide the data but also in file server technology, network management infrastructure, and user software to enable rich new services.
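The factor-of-ten argument can be made concrete with a rough transfer-time calculation. The 28.8-kbps modem rate is an era-typical assumption rather than a figure from the text, and file sizes are read as kilobits, megabits, and gigabits:

```python
# Idealized transfer times for the file sizes discussed in the text,
# at several line rates. Protocol overhead is ignored, and the modem
# rate is an assumed era-typical value.

def transfer_seconds(file_bits: float, rate_bps: float) -> float:
    """Time to move file_bits over a link of rate_bps, no overhead."""
    return file_bits / rate_bps

KB, MB, GB = 1e3, 1e6, 1e9  # bits

files = [("100 kb text file", 100 * KB),
         ("1 Mb graphics file", 1 * MB),
         ("1 Gb image file", 1 * GB)]
rates = [("28.8 kbps modem", 28_800),
         ("144 kbps basic rate ISDN", 144_000),
         ("1.544 Mbps T1", 1_544_000)]

for fname, bits in files:
    for rname, bps in rates:
        print(f"{fname} over {rname}: {transfer_seconds(bits, bps):,.1f} s")
```

The 1-Mb graphics file that takes about half a minute over a modem drops to seconds over basic rate ISDN, while a 1-Gb image file remains impractical even at T1 rates, illustrating the three-orders-of-magnitude gap the text describes.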
Though we are dealing here explicitly with the local telephone network, the impact of this revolution also affects local networks of all kinds, including telephone networks, CATV networks, private data networks, cellular radio networks, and so on: a profound technological convergence.
As an example of the data growth envisioned for the network, the increase in DS-1 and DS-3 access lines is enlightening. Figure 6 shows this growth to the year 2003.
Key elements in this robust capability for digital processing are that it is independent of content (such as voice or video) and permits the distribution of the processing to the periphery of the network, thus allowing computing power to migrate to the user. These developments have enabled the conception of a whole range of interactive services that meld and blend the areas of communications, computers, and video. In fact, it is this commonality of technology that has led to other implications in the areas of competition and regulation.
It is the desire to offer broadband video and interactive services that has created the incentive for the local exchange carriers to evolve their plants to provide bidirectional broadband access. Actually, the build-out of the broadband network is a process that has been going on for nearly two decades, beginning with the first introduction of optical glass fiber for carrying trunk traffic in the telephone company (telco) network around 1980. In the intervening years, fiber has replaced virtually all the metallic cable in the interoffice plant and has begun to migrate into the feeder portion of the distribution plant.
The most costly, but at the same time the most restrictive, portion of the access network, the subscriber loop, at this point remains copper. This is key in considering evolution toward a broadband infrastructure. With the digitization of the switching infrastructure, the state of the current network includes a totally fiber interoffice plant, a fully digital narrowband switching infrastructure, and a partially fiber feeder plant.
The approach to a broadband network must be formulated from this vantage point. There are two main technological thrusts that are enabling the digital video revolution. The first is the ability to compress digital video with high quality to the point where one can deliver video streams in a cost-effective way to individual subscribers (Figure 7). Even with the high capacity of optical fiber, uncompressed digital video would have remained a challenge in terms of transport, transmission, and switch capacity.
The other technical event contributing to the availability of digital video has been the development of broadband switching technology in the form of asynchronous transfer mode (ATM). Key features of the development of ATM include the ability to multiplex and to switch together (in a packet or cell format) the content of mixed streams of multimedia traffic, and, what is more, to do this isochronously, so that the time information of each stream retains its integrity. It is anticipated that ATM switches will become dominant after the turn of the century. One plan, shown in Figure 8, predicts 100 percent deployment by 2015.
The first plant upgrade enabled by the digital video revolution has been the migration of the existing CATV network to a fiber-fed technology where fiber is brought to within two or three radio frequency (RF) amplifiers of the subscriber. The fiber-fed bus architecture, or so-called hybrid fiber coaxial (HFC) system, makes it possible to provide approximately 80 analog and 150 digital channels over a 750-MHz HFC network. If such an
HFC network serves areas of 500 homes, there are clearly enough channels available with some statistical concentration to provide selected channels for individual subscribers.
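The statistical-concentration claim can be sanity-checked with a back-of-the-envelope calculation. The peak-usage fraction below is purely an illustrative assumption, not a figure from the text:

```python
# 150 digital channels shared across a 500-home HFC serving area.
# PEAK_ON_DEMAND_FRACTION is an assumed illustrative value for the
# share of homes pulling a dedicated stream at the busiest hour.

HOMES = 500
DIGITAL_CHANNELS = 150
PEAK_ON_DEMAND_FRACTION = 0.25  # assumption, not from the text

peak_streams = int(HOMES * PEAK_ON_DEMAND_FRACTION)
print(f"{peak_streams} simultaneous streams vs. {DIGITAL_CHANNELS} channels:",
      "enough" if peak_streams <= DIGITAL_CHANNELS else "blocking likely")
```

Under this assumption, 125 simultaneous streams fit comfortably within 150 digital channels, which is the sense in which a shared bus with statistical concentration can serve individual subscribers.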
It is GTE's plan to offer 80 analog video channels in the initial rollout of the HFC system to about 400,000 customers in 1995, followed by 500,000 more in 1996. The cost should be in the range of $700 per customer. Subsequent upgrades will include adding 150 digital channels of broadcast MPEG in early 1996 at an added cost of about $200 per customer (set-top box). This will allow delivery of near-video-on-demand. In late 1996 or 1997, switched MPEG will be added for video-on-demand and other interactive services for a further incremental cost of $100 to $200 per customer.
Further HFC additions beyond 1997 will depend on results obtained with the initial system.
With a large number of channels available, the bus architecture, even though it is a shared medium, provides most of the functionality of the switched star-star architecture that is typical of most telephone networks. The enhanced upstream connectivity allows the addition of voice and data as integrated services along with video.
The approach of the local telephone carrier to bringing broadband services to the loop involves the evolution to a broadband distribution network, which includes fiber that will go closer and closer to the subscriber and ultimately to the premises itself. The particular approach to bringing fiber to the loop is a function of cost. At the present time, fiber to the home is too expensive a solution. Bringing fiber to some intermediate point is the preferred option. The hybrid fiber coaxial system, while being implemented initially by CATV operators, is clearly one such approach.
While several local exchange carriers have embarked on network rollout programs with hybrid fiber coaxial technologies for their initial thrust into video distribution, it is clear that there is no straightforward way to integrate HFC with the existing twisted pair copper loop plant (e.g., power, ringing the telephone, etc.). The additional costs of managing and maintaining two networks appear to be a distinct disadvantage for this approach. In some cases, where aging loop plants are in need of full replacement, HFC can be put forward for integrated services and replacement of the existing copper plant.
The challenge is to find a mode of migration that will provide a sufficiently robust set of services to meet customers' needs in the coming broadband environment while maintaining an effective and elegant migration path from the existing copper plant. One such approach is asymmetric digital subscriber line (ADSL) technology (Figure 9). A pair of modem transceivers on each end of the copper loop provides digital line coding for enhancing the bandwidth of the existing twisted pair.
The approach is asymmetric because the capacity or bandwidth in the downstream direction, toward the subscriber, is greater than that in the upstream direction. One ADSL (very high speed ADSL, or VH-ADSL) approach for particularly high bandwidth in the 25 to 50 Mbps range involves serving areas consistent with fiber being brought into the loop within several thousand feet of the subscriber, and is thus consistent with the migration path that brings fiber close to the subscriber premises.
One of the transceivers is therefore installed at a fiber node. Eventually this may go to a fiber-to-the-curb system and ultimately, when economically justified, to fiber to the home. There is a continuum of serving-area sizes. But for the present, this is an integrated network that provides all services over a single plant and is competitive for the range of bandwidths required.
One of the major advantages of this ADSL approach is that it can be applied only to those customers who want it and are willing to pay for the added services. Thus, this method allows for an incremental buildup of a broadband capability depending on market penetration (Figure 10).
This, then, is the infrastructure we will be looking at, but what of the services and programming? It is clear that broadcast services need to be provided in an ongoing way. Additionally, various forms of on-demand or customized video programming formats are anticipated to be important. True video-on-demand commits a port and content to an individual subscriber, and the viewer has VCR-like control of the content. Near-video-on-demand shows a program frequently enough to simulate the convenience of video-on-demand, but without the robustness of true on-demand services. Clearly, other services will be more akin to the interactive features that have grown up on the personal computer (PC) platform. These include various information and transactional services, games, shopping, and, ultimately, video telephony.
The subscriber platform is also worthy of note. There are clearly two converging sources of services here. One is the cable television (CATV) environment, with the set-top box and television set as the platform, and the second is the PC. While the former has been almost exclusively associated with the domain of entertainment services, information and transactional services have clearly been the domain of the PC.
It is clear that the carrier must be prepared to provide services that are consistent with both platforms, because one or the other will likely continue to be favored for specific applications or classes of applications. An example of such a service is embodied in an offering called "Main Street." Main Street uses the TV with a set-top box to provide a visual presentation (currently stills and sound). A telephone line is used to send signals to a
head-end server (Figure 11). Services offered include scholastic aptitude test study, home shopping, access to encyclopedias, games, etc. The service is currently offered at a few locations around the United States.
These new networks of the future are much more complex and require a corresponding increase in intelligence. Intelligence (defined here as the ability to self-inventory, collect performance data, self-diagnose, correct faults, respond to queries, etc.) no longer resides exclusively in the central office but is spreading into the local access network, thanks to the plummeting cost and increasing reliability of processing power (Figure 12). The new distributed intelligent network elements will enable a revolution in the efficiency with which telcos can perform core business functions needed to administer customer services. For example, telcos have historically billed for service based on time of connection, bandwidth, and distance. This approach has little meaning in the case of connectionless data transmission, and new approaches need to be formulated.
Similarly, monitoring and testing network performance will attain new levels of efficiency as digital performance monitoring and automatic alarm collection/correlation move down to the individual line card level. Probably the most exciting aspect of these new intelligent access elements is the new services they will make cost effective. Reducing the cost of digital services such as ISDN and frame relay, and of higher-bandwidth services such as interactive multimedia, will require intelligent elements in the access networks. Dynamic service provisioning, whereby services are delivered in real time on an as-needed basis, will similarly rely on intelligent network elements. As these examples show, the addition of intelligence to the access network represents a new paradigm for the rapid, efficient, and cost-effective addition of new services.
The telecommunications management network (TMN) is another emerging technology that will reduce operational costs by enabling more efficient network management. The concept first appeared in the early 1980s, but activity remains primarily in the standards arena, where organizations such as the International Telecommunications Union, the American National Standards Institute's T1 committee, and the European Telecommunications Standards Institute continue to evolve a complex set of TMN standards.
In the existing situation, each network (e.g., those of long distance carriers, local telephone networks, wireless networks, etc.) has its own specialized, proprietary management network, with little or no interoperation. Indeed, at present each network element has a unique interface to the management systems. The TMN approach, which has support across the communications industry, will result within the next decade in an open, multivendor environment with the benefits of reduced costs for operations systems and improved management efficiency.
The Local Network and the National Information Infrastructure
In order for the greatest number of individuals, residential or business, to access the NII, there must be support to allow a variety of types of customer premises equipment (CPE) to gain access to the network as well as to enable interoperability among the pieces of the network. Users are going to want to be attached to the NII from any of various types of CPE (e.g., a plain old telephone service telephone, screen-phone, personal digital assistant, PC, or TV). They will connect to the network either by dial-up (wired or wireless), leased line, telco-provided video network or cable service, or by interfaces provided by the utility companies.
The interoperability of these network access types is going to be dictated by the need for users (and their applications) to get access to other users without needing to have multiple types of CPE and/or set-top boxes and without having to know what's on the "other end of the line." In addition, the NII will need to support a vastly increased degree of interoperability among both the attached information resources (and their providers) and the active components (e.g., personal "agents") that wish to make use of these resources.
Just as the NII is often discussed in the context of an extrapolation of today's Internet, so also the problems of interoperability in the NII can be thought of as an extrapolation of the simpler problems of interoperability in the context of information processing currently being extended to distributed computing. The current information processing infrastructure involves a vast legacy of networks of heterogeneous, autonomous, and distributed computing resources, including computers, applications, data (files and databases), and the underlying computing and communications technology. We endorse the Computer Systems Policy Project report, Perspectives on the National Information Infrastructure: Ensuring Interoperability (CSPP, 1994), which focuses on the importance of interoperability.
We want to add to that report's statements that developers of NII technology cannot assume a clean sheet of paper for new applications. Most current data are, unfortunately, tightly coupled to legacy applications. In order for the current vast repositories of data to be available to future NII applications, interoperability mechanisms must be developed to provide access to these data and, in some cases, to allow current legacy applications to participate in the future. These legacy applications and associated data represent huge investments and are unlikely to be rewritten to upgrade their technology base.
A major consideration for users on the local network is the NII interface. This interface requires an underlying communications model that includes how a user or application can connect to another user, service, or application by using customer equipment in a network-independent manner, and with a set of relatively simple formats for accessing the various kinds of data that will be available in the NII. To accommodate functionally limited CPE, transformation gateways may have to be provided at the central office close to the end user.
Planning for the Future of the Local Network
The capabilities of the communications networks have expanded significantly so that, from a technical standpoint, there are major overlaps. Telcos are now attracted toward video delivery, cable companies toward
providing telephony and data services, and broadcasters toward participating in cable networks, with all involved in programming to varying degrees.
This state of affairs raises the obvious question: Why do other businesses seem so attractive? The answer appears to rest on two assumptions. The first is that, for the current participants, their networks can be modified to handle the total communications needs of customers (voice, data, video) with relatively modest incremental investments, and the resulting, greatly expanded network can be managed with essentially the same management team.
The second assumption is that the communications environment brought about by the digital interactive age promises a growth in and demand for new services that have not been experienced in recent times, if ever before. Thus the stage is set, at least on the telco side, for an entry into traditional entertainment and content-based services as well as an extension from voice and data into multimedia communications.
The evolution of the local network is also being affected by the changes that are taking place in the regulatory arena. There has been a general social trend toward a more competitive market-driven environment as well as a technical basis supporting deregulation, given that technology was establishing the basis for multiple providers for the same service.
This has begun with the Federal Communications Commission's video dial tone rulemaking, which provided the basis for telcos to offer transport services for video within their franchise areas. The most problematic business areas with respect to regulation have been those associated with content and programming. These have been driven by the FCC, based on the traditional concern with control of communications being in the hands of a single entity.
This has been a particular issue for the local telcos for several reasons. First, the video dial tone enabling regulation initially proscribed the involvement of a telco in providing content over the video dial tone (VDT) network. This has been challenged by several of the local exchange carriers in federal court, with universally successful results to date. While there are still appeals processes to go through, it is clear that the First Amendment stands significantly behind the carriers' positions.
The concern here has been the availability of extensive competitive programming for initial rollout of the network, in order that there be a basis for competing with the current incumbent. For this to happen, a local carrier must be allowed to prearrange to some degree the initial availability of programming, or the exercise will be one of building the video dial tone infrastructure and hoping that sufficient programmers and/or subscribers will arrive to pay for the cost of the network.
A second incentive for involvement in content and programming is the apparent structure of the current business, whereby significant leverage and profitability remain with the content, and delivery may be a less profitable part of the business. This may or may not be true in the future. It is clear, however, that access to competitive content and programming is essential in an era of video competition.
As the executives responsible for managing the companies that provide local service ponder the actions they should take to remain competitive (and to grow), they are faced with a number of uncertainties that are external to their companies, such as technology advancements, competitive actions, and governmental regulation.
This is a normal state of affairs for many businesses, and any business opportunity normally undergoes dynamic change on a more or less continuous basis. In technology, the changes include such things as the cost of processing capability, data storage capacities, and the like. These elemental changes may enable higher-level changes that dramatically affect the cost and performance of the business opportunity. In regulation, such items as the opening of new markets to competition and increased pricing freedoms likewise have an effect. And, of course, competitive changes occur as well.
With highly dynamic business opportunities, the executives running the competing businesses have to decide what approach to take to address the opportunity and when to make a commitment. If the commitment is made too early and a significant technological or market shift occurs, a competitor may enter slightly later and address the same opportunity with a product or service that costs less and does more. In the event of delay, on the other hand, there is the danger that the market will already be taken and will be difficult to penetrate.
There can be little doubt that there will be many approaches to providing communications services in what is now the local loop. Indeed, it is the vitality of this massively parallel approach that is one of the great strengths of our economic system.
If the playing field is truly competitive and without undue restrictive regulations, the best approach, with the most efficient use of technology, should win out. Whether this winning approach will be a natural evolution of the present telephone network, an expansion and upgrade of the cable network, a wireless approach, or some hybrid of all of the above, in a competitive environment, the ultimate winners will be those who provide what the market wants at the most economical price.
Many of the decisions to be made in this arena entail large capital and work force expenditures that, once committed, are expensive to change. Indeed, much of the investment in the local networks has not been recovered, and large write-offs could occur if decisions are not made well.
The executives faced with the decisions on how best to address these markets can protect themselves from being blindsided by staying aware of the possibilities raised by new technology developments. They can likewise size up the competition and satisfy themselves that everyone is at least playing by the same economic rules. It is on the regulatory front that a more aggressive effort is needed to establish a level playing field. Network managers must be confident that they are in a fair competitive match and that no underlying rules favor one competitor over another. Once that confidence exists, the decisions can be made.
The timing of the changes we can foresee, as discussed in this paper, is uncertain. The nature and timing of regulatory reform play a large role in advancing our telecommunications network, and that reform must be seen through to completion.
Currently, there is considerable asymmetry in the application of regulatory rules. The telcos, for example, have a utility company obligation to provide basic telephone service at tariffed rates to anyone who requests it in their serving territory, and to maintain sufficient reserve capacity to act as a carrier of last resort if some small facilities-based reseller experiences a network failure. For years this mandate has been financed by subsidizing residential local telephone service with revenues from business and long-distance service, so that billed local revenues are less than the cost of providing service. This situation is slowly changing as access charges are lowered and local service rates are increased in some jurisdictions. However, in many states residential local service rates remain deliberately set below the actual cost of providing service. Moreover, in several states, the only way the local telco has been able to obtain any pricing flexibility for competitive services is to agree to price freezes for residential service.
While regulators have decreed that alternative access providers be granted at least virtual colocation in telco central offices to facilitate local telephone competition, they have not yet decided what obligation these new competitors have to share in the local universal service mandate. In that connection, GTE has proposed to the FCC, to the National Telecommunications and Information Administration, and to the committees in the House and Senate that are writing communications reform legislation that the entire array of explicit and implicit universal service funding mechanisms be reviewed as a whole, rather than piecemeal, and that a process be established whereby multiple providers could become carriers of last resort (COLR) and eligible to receive universal service funding. Whether or not this proposal is accepted and implemented, it is clear that until the ground rules for local telephone competition are acted upon and settled, the regulatory environment could continue to discourage some competitors from rapidly building out or evolving to universal broadband networks in the local loop.
The ultimate solution is the establishment of a highly competitive communications environment that would be market-driven to provide the information any customer may want or need, whether it be "just plain old" lifeline telephone service or a broadband, interactive multimedia connection. The challenge for legislators, government regulators, and business leaders is to come up with a process that moves from today's situation to this desired goal. The evolution of the local plant is really the key and, as discussed above, occupies much of the thought and planning of local telephone company operators and others who want to participate in that market.