• Link speed. Suppose the speed of a link increases by a factor of X. If, to prevent buffer overflow, the same congestion control scheme is used as before, then the feedback delay in the control system, which is a function of propagation delays and largely independent of link speed, will remain essentially the same. An increase in link speed will therefore demand an X-fold increase in the required buffer size if potential data loss is to be kept to the same level as before (see the sketch following this list).
  • Packet or burst size. On high-speed networks, higher-level protocols will use larger data packets in order to reduce per-packet processing overhead at end systems, such as the interrupt frequency at receiving hosts. These large packets introduce large bursts of data that may arrive at congestion points at the same time. For the same average load as before, larger bursts imply greater overlap among bursts arriving at a congestion point, so a larger buffer is needed to accommodate these simultaneously arriving large bursts.
  • Transient traffic. Typical Transmission Control Protocol (TCP) sessions involve only a few dozen kilobytes [19], and the corresponding transmission time on an OC-3 link at 155 Mbps is only a few milliseconds. (A survey of Unix file sizes [7] shows a similar result: the average file is only around 22 kbytes long, and most files are smaller than 2 kbytes.) Thus, on high-speed networks, these sessions will not last long enough to achieve steady-state traffic flow beyond a local or metropolitan area. For this kind of transient traffic over a wide area, traditional end-to-end flow control methods such as TCP's incur relatively long feedback delays, and thus cannot be effective in reducing buffer usage inside a network.
  • Bandwidth mismatch. As new networks are deployed, many relatively old, inexpensive, low-bandwidth networks will remain in use. As ever faster networks emerge, the speed gap between old and new networks will widen. For the same load, this greater bandwidth mismatch again implies the need for larger buffers, as the sketch following this list also illustrates.
  • Load from computer sources. A single workstation or personal computer can now consume the entire bandwidth of an OC-3 link. High-end computers such as servers tend to support high-bandwidth network interfaces that run as fast as the fastest computer networks available. One can expect that, at any point in the foreseeable future, several high-performance computers, if not just one, will be able to saturate the fastest links in any network.
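
To make the link-speed, session-length, and bandwidth-mismatch arithmetic above concrete, the following is a minimal back-of-the-envelope sketch in Python. The OC-3 rate and the few-dozen-kilobyte session length come from the text; the 50-ms feedback delay, the speed-up factor X = 4, the 30-kbyte session, and the 10-ms burst draining onto a 10-Mbps link are illustrative assumptions, not figures from the text.

    # Back-of-the-envelope buffer sizing; parameter values are illustrative.

    def buffer_bits(link_bps, rtt_s):
        # Bandwidth-delay product: the data that can be in flight, and hence
        # may need buffering at a congestion point, during one feedback delay.
        return link_bps * rtt_s

    RTT = 0.050    # assumed 50-ms wide-area feedback delay (propagation-bound)
    OC3 = 155e6    # OC-3 link rate, 155 Mbps
    X = 4          # assumed link speed-up factor (e.g., OC-3 to OC-12)

    old = buffer_bits(OC3, RTT)      # ~0.97 Mbyte
    new = buffer_bits(X * OC3, RTT)  # X times larger, since RTT is unchanged
    print(f"buffer at OC-3: {old / 8e6:.2f} MB; at {X}x OC-3: {new / 8e6:.2f} MB")

    # Transmission time of a typical short TCP session on OC-3:
    session_bytes = 30000            # "a few dozen kilobytes" [19], taken here as 30 kbytes
    print(f"session transmit time: {session_bytes * 8 / OC3 * 1e3:.2f} ms")  # ~1.55 ms

    # Bandwidth mismatch: a burst entering on a fast link and draining onto a
    # slow one accumulates in the buffer at the rate difference.
    fast, slow = 622e6, 10e6         # assumed OC-12 feeding an old 10-Mbps Ethernet
    burst_s = 0.010                  # assumed 10-ms burst
    print(f"backlog from one burst: {(fast - slow) * burst_s / 8e6:.2f} MB")  # ~0.77 MB

The point of the sketch is that the feedback delay and the burst duration are set by physics and by end-system behavior, not by link speed, so every term in the buffer requirement scales up with the link rate.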

To prevent data loss due to congestion, network buffers could be enlarged to accommodate growth in each of the above factors. But these factors grow independently, and the multiplicative effect of their combined growth will demand enormously large buffers. In addition, as network usage increases, so will the expected number of active sessions on the network and their peak bandwidths. When congestion occurs, a network node may have to buffer, for each session, all the in-flight data between a distant sending host and itself; if TCP is used, this can amount to an entire TCP window per session. With N such sessions, N times that buffer capacity will be needed.
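
Continuing the sketch above, the N-session worst case multiplies out as follows; the session count and the 64-kbyte window are assumed purely for illustration.

    # Worst-case buffering when N sessions back up at a single congestion point.
    N = 1000                  # assumed number of concurrent active sessions
    window_bytes = 64 * 1024  # assumed per-session TCP window (classic 64-kbyte maximum)
    print(f"worst-case buffer demand: {N * window_bytes / 1e6:.1f} MB")  # ~65.5 MB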

For all these reasons, brute-force methods of using larger and larger buffers cannot solve the congestion problems to be expected with high-speed networks.


