This rapid growth in speed is expected to continue over the next decade, because many new applications in important areas such as data and video will demand very high network bandwidths. These high-speed networks are introducing major new challenges in network congestion control, as explained in the next two sections. Contrary to what intuition might suggest, higher network speeds make the congestion problem harder to solve, not easier.

Network Congestion Problem

Any network has bottlenecks or congestion points, i.e., locations where more data may arrive than the network can carry. A common cause of congestion is a mismatch in speed between networks. For example, a typical high-performance local area network (LAN) environment in the next several years may have the architecture shown in Figure 1. While the servers will use new high-speed asynchronous transfer mode (ATM) connections at the OC-3 rate of 155 Mbps, many clients will still depend on older, inexpensive but slower 10-Mbps Ethernet connections. Data flowing from the servers at 155 Mbps to the clients at 10 Mbps will experience congestion at the interface between the ATM and Ethernet networks.
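To make the rate mismatch concrete, the following sketch (a hypothetical back-of-the-envelope calculation in Python; the 1-megabyte interface buffer is an assumption, not a figure from the text) estimates how quickly data backs up and how soon such a buffer overflows:

    # Hypothetical calculation: backlog growth at the ATM-to-Ethernet interface.
    ATM_RATE_MBPS = 155      # OC-3 input rate from the server side
    ETHERNET_RATE_MBPS = 10  # output rate toward the Ethernet client
    BUFFER_MBITS = 8.0       # assumed buffer of 1 megabyte (8 megabits)

    # Data backs up at the difference between the arrival and departure rates.
    backlog_growth_mbps = ATM_RATE_MBPS - ETHERNET_RATE_MBPS      # 145 Mbps

    # Time until the buffer overflows and data must be dropped.
    time_to_overflow_s = BUFFER_MBITS / backlog_growth_mbps       # about 0.055 s

    print(f"Backlog grows at {backlog_growth_mbps} Mbps; "
          f"an {BUFFER_MBITS:.0f}-Mbit buffer overflows in {time_to_overflow_s * 1000:.0f} ms")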

Congestion can also occur inside a network node that has multiple ports. Such a node can be a switch, such as an ATM switch, or a gateway, such as a router. As depicted in Figure 2, congestion arises when data destined for a single output port arrive at many different input ports. The faster and more numerous these input ports are, the more severe the congestion will be.
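The following sketch (hypothetical; the port count and rates are assumed for illustration) expresses the severity of this overload as the ratio of the aggregate arrival rate to the output port's drain rate:

    # Hypothetical helper: how overloaded an output port is when several
    # input ports of the same speed all send to it at once.
    def overload_factor(num_inputs: int, input_rate_mbps: float,
                        output_rate_mbps: float) -> float:
        """Ratio of aggregate arrival rate to the output port's drain rate."""
        return (num_inputs * input_rate_mbps) / output_rate_mbps

    # Four 155-Mbps input ports converging on one 155-Mbps output port:
    print(overload_factor(4, 155, 155))   # 4.0 -- data arrives four times faster than it can leave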

A consequence of congestion is the loss of data due to buffer overflow. For data communications in which every bit must be transmitted correctly, lost data must be retransmitted, degrading network utilization and increasing communications delay for end users.
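As a rough illustration (the loss rates below are assumed for the example, not measured values), the average number of times each packet must be sent grows quickly with the loss probability when every lost packet is retransmitted:

    # Hypothetical loss rates: expected sends per packet when every loss
    # triggers a retransmission and losses are independent.
    def expected_transmissions(loss_rate: float) -> float:
        """Average number of transmissions per packet (geometric distribution)."""
        return 1.0 / (1.0 - loss_rate)

    for p in (0.01, 0.05, 0.20):
        print(f"loss {p:.0%}: each packet sent {expected_transmissions(p):.2f} times on average")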

Inadequacy of Brute-Force Approach to Providing Large Buffers

A universal solution to the problem of losing data because of congestion involves buffer memory in which a congested point can temporarily queue data directed at overloaded output ports. This use of buffers is illustrated in Figure 2. However, simply providing large buffers would likely incur prohibitively high memory cost for high-speed networks, because as network speed increases, so will the following factors:

  • Buffer overloading rate. Suppose that data from multiple input ports feed into a single output port, and that all the ports are of the same speed. If all these ports now increase their speed by a factor of X, then the overloading rate to the node buffer will also increase by a factor of X, as illustrated in the sketch below.
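A rough sketch of this effect (the buffer size, port count, and base rate are assumed for illustration only): scaling every port's speed by a factor of X shrinks the time a fixed-size buffer can absorb the overload by the same factor of X.

    # Hypothetical parameters: time before a shared output buffer overflows
    # when all input ports target one output and every port runs X times faster.
    def time_to_overflow_s(buffer_mbits: float, num_inputs: int,
                           base_rate_mbps: float, speedup: float) -> float:
        arrival_mbps = num_inputs * base_rate_mbps * speedup   # aggregate arrival rate
        departure_mbps = base_rate_mbps * speedup              # drain rate of the one output
        return buffer_mbits / (arrival_mbps - departure_mbps)

    base = time_to_overflow_s(8.0, 4, 10, speedup=1)    # four 10-Mbps ports
    fast = time_to_overflow_s(8.0, 4, 10, speedup=10)   # the same ports, 10 times faster
    print(f"Overflow in {base * 1000:.0f} ms at 1x speed, {fast * 1000:.0f} ms at 10x")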

Figure 1. Congestion due to a mismatch in speed between 155-Mbps ATM network and 10-Mbps Ethernet.

Figure 2. Congestion, in a switch or gateway, due to multiple arrivals at the same output.


