The hosts in all these experiments are DEC Alpha 3000/400 workstations running OSF/1 V3.0. The OSF/1 TCP implementation [5], used in all the experiments reported in this paper, is derived from 4.3-Reno [21]. This TCP implementation tends to acknowledge, and thus to transmit, packets in pairs. The TCP window size for these experiments is limited to no more than 64 kbytes, and the packet size to 9180 bytes, except where noted. The workstations use 155-Mbps OTTO TurboChannel adapters provided by DEC. The Alphas can send or receive TCP using the OTTOs at about 15 Mbyte/sec. The OTTO drivers optionally implement CreditNet's credit-based flow control partially in software; with credit turned on they can send and receive TCP at 13 Mbyte/sec.

The measurements are all directly derived from the instrumentation counters in the CreditNet switch hardware. The hardware keeps track of the total number of cells sent by each VC and the number of cells buffered for each VC.
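As an illustration of how such counters translate into the bandwidth figures reported below, the following C sketch converts a sampled per-VC "cells sent" counter into payload throughput using the 48-byte ATM cell payload. It is a sketch only: the function name, counter values, and sampling interval are hypothetical and are not part of the CreditNet instrumentation interface.

    /* A minimal sketch, not CreditNet driver code: converting a sampled
     * per-VC "cells sent" counter into payload throughput.  Each ATM cell
     * carries 48 payload bytes; the counter values and sampling interval
     * below are hypothetical. */
    #include <stdio.h>

    #define ATM_CELL_PAYLOAD_BYTES 48

    static double payload_mbytes_per_sec(unsigned long cells_before,
                                         unsigned long cells_after,
                                         double interval_sec)
    {
        unsigned long cells = cells_after - cells_before;
        return cells * (double)ATM_CELL_PAYLOAD_BYTES / (interval_sec * 1.0e6);
    }

    int main(void)
    {
        /* Example: 1,187,500 cells in 10 seconds -> 5.7 Mbyte/sec of payload. */
        printf("%.1f Mbyte/sec\n", payload_mbytes_per_sec(0, 1187500UL, 10.0));
        return 0;
    }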

Measured Performance on CreditNet Experimental Switches

ATM-layer credit-based flow control resolves some of the TCP performance problems that arise over ATM networks when packets are lost because of congestion [16]. The bottleneck switch no longer discards data when it runs out of buffer memory, which could force TCP to endure lengthy time-out periods. Instead, it withholds credit from the switches and/or hosts upstream of it, causing them to buffer data rather than send it. This backpressure can extend all the way back through a network of switches to the sending host. The effect is that a congested switch can force excess data to be buffered in all the upstream switches and in the source host. Data need never be lost because of switch buffer overrun. Thus, if TCP chooses a window that is too large, the data will simply be buffered in the switches and in the host; no data loss and retransmission time-outs will result.
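As a rough illustration of this backpressure (a sketch only, not CreditNet's actual credit protocol or update policy), the following C fragment simulates a single VC whose downstream switch reserves a fixed number of cell buffers: the upstream side transmits a cell only while it holds credit, and one credit is returned for each cell drained onto the slower output link, so excess data waits upstream instead of being dropped.

    /* A minimal sketch of per-VC credit-based backpressure: the downstream
     * switch reserves CREDIT_LIMIT cells of buffer for the VC, the upstream
     * sender may transmit only while it holds credit, and one credit is
     * returned for each cell the switch drains onto its (slower) output
     * link.  Excess data waits upstream; it is never dropped. */
    #include <stdio.h>

    #define CREDIT_LIMIT 100   /* cells of buffer reserved per VC downstream */

    int main(void)
    {
        int credits  = CREDIT_LIMIT; /* credits held by the upstream sender   */
        int buffered = 0;            /* cells queued in the downstream switch */
        int backlog  = 500;          /* cells still waiting upstream          */
        int dropped  = 0;            /* never incremented: no buffer overrun  */
        int max_buffered = 0;
        int tick;

        for (tick = 0; backlog > 0 || buffered > 0; tick++) {
            /* Upstream: forward one cell only if a credit is available;
             * otherwise the cell simply waits upstream (backpressure). */
            if (backlog > 0 && credits > 0) {
                credits--;
                buffered++;
                backlog--;
            }
            if (buffered > max_buffered)
                max_buffered = buffered;

            /* Downstream: the slow output link drains one cell every other
             * tick (half the input rate), returning one credit per cell. */
            if (tick % 2 == 0 && buffered > 0) {
                buffered--;
                credits++;
            }
        }

        printf("max cells buffered downstream: %d (limit %d)\n",
               max_buffered, CREDIT_LIMIT);
        printf("cells dropped: %d\n", dropped);
        return 0;
    }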

Table 2 compares the useful bandwidths achieved with and without credit-based ATM-layer flow control in the configurations shown in Figure 15. For the flow-controlled cases, the switch reserves 100 cell buffers (4800 payload bytes) per VC. For the non-flow-controlled cases, the switch has 682 cell buffers (32 payload kbytes) per VC. Recall that for the configuration in Figure 15 the slow link can deliver at most 5.7 Mbyte/sec of payload, and the fast link 17 Mbyte/sec. Thus, in both the one-TCP and two-TCP cases, TCP with credit-based flow control achieves its maximum possible bandwidth.
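The per-VC buffer sizes quoted above follow directly from the 48-byte ATM cell payload; the short check below merely confirms that arithmetic.

    /* Quick check of the per-VC buffer sizes quoted above: cell counts
     * times the 48-byte ATM cell payload. */
    #include <stdio.h>

    int main(void)
    {
        printf("100 cells = %d payload bytes\n", 100 * 48);  /* 4800                */
        printf("682 cells = %d payload bytes\n", 682 * 48);  /* 32736, about 32 kbytes */
        return 0;
    }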

Using a configuration similar to that shown in Figure 15(b), experiments involving one TCP and one UDP stream, instead of two TCPs, have also been carried out. A typical measured result is as follows. When ATM-layer credit-based flow control is used, UDP gets its maximum bandwidth, limited only by the source, while TCP gets essentially the remaining bandwidth of the bottleneck link between the two switches. When credit-based flow control is turned off, however, TCP's throughput drops significantly and the total utilization of the bottleneck link by TCP and UDP combined falls below 45 percent. Thus, when competing with UDP, TCP with ATM-layer flow control can sustain its throughput even though UDP does not reduce its bandwidth during network congestion.

TABLE 2  Measured Total Bandwidth Achieved with and without ATM-Layer Credit-Based Flow Control, for the (a) and (b) Configurations of Figure 15

Configuration     Without ATM-Layer Flow Control     With ATM-Layer Flow Control
(a) One TCP       0.1 Mbyte/sec                      5.7 Mbyte/sec
(b) Two TCPs      0.2 Mbyte/sec                      5.7 Mbyte/sec
