Although TCP is the most widely used transport protocol, others are also in use. Some applications, such as transmission of voice or video, do not require lossless delivery and cannot make use of packets that are retransmitted after a loss, because such packets arrive too late. Applications like these might use the User Datagram Protocol (UDP), which does not guarantee delivery. Losses in video might be dealt with by encoding the stream in packets of multiple classes: a low-resolution picture transmitted using TCP, with supplemental high-resolution packets sent at lower priority using UDP.
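To make the layered scheme concrete, the sketch below (in Python, with a hypothetical receiver address and framing) sends a frame's base layer over a reliable TCP connection while the enhancement packets go out as best-effort UDP datagrams. It is an illustration of the idea described above, not a production design.

    import socket

    # Hypothetical receiver address (documentation range).
    VIDEO_SERVER = ("203.0.113.10", 9000)

    def send_frame(base_layer: bytes, enhancement_packets: list[bytes]) -> None:
        # Base layer: reliable, in-order delivery via TCP, so every
        # receiver can decode at least a low-resolution picture.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
            tcp.connect(VIDEO_SERVER)
            tcp.sendall(len(base_layer).to_bytes(4, "big") + base_layer)

        # Enhancement layers: best-effort UDP datagrams. A lost packet
        # is never retransmitted; the decoder simply renders at lower
        # quality for that frame.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
            for pkt in enhancement_packets:
                udp.sendto(pkt, VIDEO_SERVER)

(Opening a fresh TCP connection per frame is wasteful; a real sender would keep one connection open. The point is only the split between reliable and unreliable transport.)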

Thus, the control of traffic in the Internet is fully distributed, and feedback greatly affects traffic. Empirical results show that traffic is “bursty” and exhibits fractal behavior over a wide range of time scales. This characteristic was discovered through analysis of data at Bellcore in the early 1990s and subsequently verified by a number of independent studies. The main cause of this self-similarity is the heavy-tailed distribution of the lengths of files or Web documents. (For a heavy-tailed distribution, P[X > x] ~ x^(-k), where k is a positive number; i.e., the probability that the length of a file exceeds x decays very slowly, namely hyperbolically in x.) Queuing systems with exponential-type distributions are rather easy to model using Markovian techniques, but systems with heavy-tailed distributions, while more realistic, are quite difficult to analyze.
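The hyperbolic decay is easy to check numerically. The following sketch (with hypothetical parameters) draws file sizes from a Pareto distribution, whose tail satisfies P[X > x] = x^(-k) exactly for x ≥ 1, and compares the empirical tail against the formula:

    import random

    k = 1.2        # tail index; k < 2 gives infinite variance, the regime
                   # associated with self-similar traffic
    n = 100_000

    # random.paretovariate(k) samples a Pareto law with minimum 1, so
    # P[X > x] = x^(-k) for x >= 1.
    sizes = [random.paretovariate(k) for _ in range(n)]

    for x in (10, 100, 1000):
        tail = sum(s > x for s in sizes) / n
        print(f"P[X > {x:>4}] ~ {tail:.5f}   (x^-k = {x ** -k:.5f})")

Even at x = 1000 the tail probability is far from negligible, which is precisely what defeats exponential (Markovian) queuing models.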

Predictive Models of the Internet

It is now becoming feasible to create temporal models good enough for studying traffic performance at a single node or router. It would be desirable to have spatio-temporal models that enable study of the performance of the network as a whole (Problem 1).

Problem 1. Create spatio-temporal-layered models that represent various communication levels (application, transport, physical layer, and so on) and that capture and predict useful information about Internet behavior.

To date, single TCP sessions have been modeled as stationary ergodic loss processes. Many recently developed models assume that a single queue is the bottleneck and that several long-lived, homogeneous TCP sessions pass through it; this yields a fluid-like model. TCP connections in a realistic Internet-like setting, with its naturally occurring heterogeneities, cannot yet be modeled.
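As an illustration of the single-bottleneck fluid picture, here is a minimal sketch (not any particular published model; all parameters are hypothetical) in which N homogeneous long-lived TCP flows share one link. Each flow's window grows by one packet per round-trip time and is halved at a loss rate proportional to the queue's drop probability:

    # Fluid approximation of N homogeneous TCP flows at one bottleneck.
    C = 1000.0      # bottleneck capacity, packets/s
    N = 50          # number of long-lived TCP sessions
    R0 = 0.1        # propagation delay, s
    q_max = 200.0   # buffer size, packets
    dt = 0.001      # Euler integration step, s

    W, q = 1.0, 0.0
    for step in range(int(20 / dt)):
        R = R0 + q / C                      # RTT = propagation + queuing delay
        p = min(1.0, max(0.0, q / q_max))   # crude drop probability, grows with q
        # AIMD in fluid form: +1 window per RTT; halve at loss rate (W/R)*p
        dW = 1.0 / R - (W / 2.0) * (W / R) * p
        dq = N * W / R - C                  # aggregate arrival minus service
        W = max(W + dW * dt, 1.0)
        q = min(max(q + dq * dt, 0.0), q_max)

    print(f"steady state: window = {W:.1f} packets, queue = {q:.1f} packets")

At equilibrium the aggregate arrival rate N*W/R balances the capacity C, and the queue settles where random loss offsets the additive window growth. Heterogeneity in round-trip times, flow lifetimes, or routes is exactly what such models cannot yet capture.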

Willinger also noted that there has been some study of a new variation of TCP in which packets are randomly deleted at a rate that increases linearly with queue length once the queue grows too long, even if buffer space is still available. Debasis Mitra of Bell Laboratories, Lucent Technologies, observed that if costs are additive, and if the treatment of a packet at the next node does not depend on its treatment at previous nodes, this protocol can be modeled as a Markov decision process. Stochastic differential equations offer another approach to modeling router behavior.
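The deletion rule described here resembles what is now called random early detection (RED): below a low threshold no packets are dropped early, above a high threshold every arrival is dropped, and in between the drop probability rises linearly with queue length. A minimal sketch, with hypothetical thresholds:

    import random

    MIN_TH, MAX_TH = 50, 150    # queue-length thresholds, in packets
    P_MAX = 0.1                 # drop probability as the queue nears MAX_TH

    def drop_probability(queue_len: int) -> float:
        if queue_len < MIN_TH:
            return 0.0          # short queue: admit everything
        if queue_len >= MAX_TH:
            return 1.0          # near-full queue: drop every arrival
        # linear ramp between the two thresholds
        return P_MAX * (queue_len - MIN_TH) / (MAX_TH - MIN_TH)

    def admit(queue_len: int) -> bool:
        # Each arriving packet may be deleted at random, even though
        # buffer space may still be available.
        return random.random() >= drop_probability(queue_len)

Because each drop decision depends only on the current queue state, the per-node behavior is memoryless in the sense Mitra described, which is what makes a Markov-decision-process formulation natural.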

If the characteristics of routers (e.g., buffer capacity, number of outward connections, protocols) are poorly modeled, then network designers are likely to overdesign (resulting in networks that are more expensive than necessary) or otherwise have difficulty ensuring good network performance. The cost implications are significant, owing to the number of routers in a network and the expense of wasting bandwidth, especially in wireless systems.


