Use of Flow Control

When congestion persists, no amount of buffering is sufficient in the longer term; instead, each source of traffic flowing through a bottleneck must be persuaded to send no more than its fair share of the bottleneck's capacity. That is, proper flow control can bound the buffer requirement.

This is fundamentally a feedback control problem, and many control ideas and principles apply. As depicted in Figure 3, each network node (a switch or a gateway) collects information about congestion and informs, directly or indirectly, the sources of data. This feedback is usually based on the amount of buffer space available or in use at the node. The sources then act to control how much data they send. This control loop has a delay of at least twice the propagation delay between the switch and the control point.
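The loop described above can be sketched as a toy discrete-time simulation: the node measures its queue, signals congestion when the queue is too long, and the source reacts only after the round-trip delay. All constants and names here are illustrative assumptions, not parameters from the text.

```python
# Toy model of the feedback control loop: node -> (delayed) feedback ->
# source -> (delayed) data -> node. Constants are illustrative assumptions.
from collections import deque

PROP_DELAY = 5     # one-way propagation delay, in time steps (assumption)
CAPACITY = 10      # cells the bottleneck link serves per step (assumption)
THRESHOLD = 50     # queue length above which the node signals congestion

queue_len = 0
rate = 20          # source sending rate, cells per step (starts too high)

# Feedback takes PROP_DELAY steps to reach the source, and data sent takes
# PROP_DELAY steps to reach the node, so the loop delay is 2 * PROP_DELAY.
feedback_pipe = deque([False] * PROP_DELAY)   # node -> source signals
data_pipe = deque([rate] * PROP_DELAY)        # source -> node traffic

for step in range(100):
    # Node: receive delayed data, serve at link capacity, measure congestion.
    queue_len = max(0, queue_len + data_pipe.popleft() - CAPACITY)
    feedback_pipe.append(queue_len > THRESHOLD)

    # Source: react to the (delayed) congestion signal, then keep sending.
    congested = feedback_pipe.popleft()
    rate = max(1, rate - 2) if congested else rate + 1
    data_pipe.append(rate)

print(queue_len, rate)
```

Because the congestion signal is stale by a full loop delay, the queue overshoots the threshold before the source slows down, which is exactly why the next paragraph argues for minimizing the feedback delay.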

Control systems should seek to minimize this delay in feedback, since nodes will need to buffer any data that arrive after the nodes signal the congestion status but before the end of the delay. Moreover, the feedback control delay should be sufficiently small so that the control system can respond in time to any changes in traffic load.
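The buffering implied by the feedback delay can be made concrete with a back-of-the-envelope calculation: a node may need to absorb up to one loop delay's worth of traffic arriving at full link rate. The link speed and distance below are illustrative assumptions, not figures from the text.

```python
# Worked example: worst-case buffering during the feedback control delay.
# Link rate and distance are assumptions chosen for illustration.

LINK_RATE_BPS = 155e6               # e.g. a 155 Mb/s ATM link (assumption)
DISTANCE_KM = 1000                  # switch-to-source distance (assumption)
PROP_DELAY_S = DISTANCE_KM * 5e-6   # roughly 5 microseconds per km in fiber

loop_delay = 2 * PROP_DELAY_S       # at least twice the propagation delay
bits_in_flight = LINK_RATE_BPS * loop_delay
buffer_bytes = bits_in_flight / 8
atm_cells = buffer_bytes / 53       # 53-byte ATM cells

print(f"loop delay: {loop_delay * 1e3:.1f} ms")
print(f"worst-case buffering: {buffer_bytes / 1e3:.0f} kB (~{atm_cells:.0f} cells)")
```

Even at this modest distance the node must be able to buffer a few thousand cells per source; the requirement grows linearly with both link rate and distance, which is why wide area, high-speed networks make the feedback delay so critical.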

Control of Congestion for ATM Networks

Control of congestion for ATM networks is of particular interest, because such networks support very high speed connections and multimedia services. An ATM network can simultaneously support multiple types of services of various qualities. As illustrated in Figure 4, these include constant bit rate (CBR) services for voice and other fixed-rate guaranteed traffic; variable bit rate (VBR) services for video; and available bit rate (ABR) services for data.

Being able to support ABR for data communications represents a major advantage of ATM networks over traditional time division multiplexing (TDM) networks. Under ABR services, users can have instant access to available network bandwidth when they need it, and they do not have to hold onto unused bandwidth when they do not need it. These services are exactly what many computer users desire.

To realize this potential of ABR services for data applications, nodes or end systems in a network need to receive status information on buffer or bandwidth usage from downstream entities. That is, effective and efficient flow control is essential.
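One way downstream status can reach the source, sketched here along the lines of explicit-rate feedback in ATM ABR, is for a control cell to travel through the switches on the path, each of which lowers the cell's rate field to the bandwidth it can spare; the value that returns caps the source's rate. The classes and numbers below are assumptions for illustration, not a specified protocol.

```python
# Illustrative explicit-rate feedback sketch (in the spirit of ATM ABR
# resource management cells). All names and values are assumptions.
from dataclasses import dataclass

@dataclass
class RMCell:
    explicit_rate: float   # Mb/s; starts at the source's requested rate

class Switch:
    def __init__(self, capacity, guaranteed_load, abr_connections):
        self.capacity = capacity                # link capacity, Mb/s
        self.guaranteed_load = guaranteed_load  # CBR/VBR traffic, Mb/s
        self.abr_connections = abr_connections  # ABR flows sharing the link

    def process(self, cell: RMCell) -> RMCell:
        # Bandwidth left after guaranteed traffic, split evenly among the
        # ABR connections (a simple notion of a fair share).
        fair_share = (self.capacity - self.guaranteed_load) / self.abr_connections
        cell.explicit_rate = min(cell.explicit_rate, fair_share)
        return cell

# A source requests 100 Mb/s over a path with two switches.
cell = RMCell(explicit_rate=100.0)
path = [Switch(155.0, 15.0, 2), Switch(155.0, 95.0, 2)]
for sw in path:
    cell = sw.process(cell)

# The second switch is the bottleneck: (155 - 95) / 2 = 30 Mb/s.
print(cell.explicit_rate)   # 30.0
```

The source then sends at no more than the returned rate, so the tightest bottleneck on the path governs the connection, exactly the downstream status information the paragraph above calls for.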

Figure 3

Use of feedback control to handle congestion.

Figure 4

ATM network.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.



Technical Goals of Flow Control for Supporting ATM ABR Services

Flow control mechanisms designed to support ATM ABR services should meet a variety of technical goals, including the following:

- Data should rarely, if ever, be discarded due to exhaustion of node buffer memory. As mentioned above, such data may have to be retransmitted after a possibly lengthy time-out period, further contributing to network congestion and to the delay experienced by the user.
- Network links should be used at full capacity whenever possible. For instance, if one connection sharing a link reduces its sending rate, the others should increase their rates as soon as possible. In particular, as illustrated in Figure 5, the flow control mechanism should allow ABR traffic to fill, instantly, any unused bandwidth left on the link after guaranteed traffic is served.
- All the connections constrained by a bottleneck link should get fair shares of that link.
- The flow control mechanism should be robust. Loss or delay of control messages, or admission of additional connections while maintaining the total traffic load, for instance, should not cause increased congestion.
- The network administrator should not have to adjust any complex parameters to achieve high performance.
- The flow control mechanism should have a cost commensurate with the benefits it provides.

Generally speaking, some existing LANs, such as Ethernets, have satisfied these goals, which at least partially explains why they have been used widely for data applications. New high-speed networks, such as ATM networks and switched Ethernets, use switches to achieve high performance. Unlike conventional Ethernets, which use shared media, switch-based networks do not let end systems monitor network congestion as easily. Designing flow control schemes that satisfy the above technical goals for these new switch-based networks, especially wide area networks (WANs), is a significant challenge.

Figure 5

Available bit rate (ABR) traffic filling in bandwidth slack left by guaranteed traffic, to maximize network utilization.
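The fairness goal in the list above is commonly formalized as max-min fairness: repeatedly give every unsatisfied connection an equal split of the remaining capacity, freezing connections whose demand is already met. The following is a generic sketch of that idea, not a scheme prescribed by the text.

```python
# Max-min fair allocation of a bottleneck link's capacity among connections
# with given demands. A generic illustration of the fairness goal above.

def max_min_shares(capacity, demands):
    """Return the max-min fair allocation of `capacity` among `demands`."""
    shares = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-9:
        split = remaining / len(active)
        # Connections whose leftover demand fits within an equal split
        # are fully satisfied and drop out of the next round.
        satisfied = {i for i in active if demands[i] - shares[i] <= split}
        if not satisfied:
            for i in active:          # everyone takes an equal split
                shares[i] += split
            remaining = 0.0
        else:
            for i in satisfied:
                remaining -= demands[i] - shares[i]
                shares[i] = demands[i]
            active -= satisfied
    return shares

# A 10-unit link shared by demands of 2, 4, and 8: the small demand is met
# in full, and the rest of the capacity is split evenly.
print(max_min_shares(10, [2, 4, 8]))   # [2.0, 4.0, 4.0]
```

Note that a practical switch-based scheme must approximate this allocation with only local information and delayed feedback, which is part of what makes the WAN case hard.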