3
Modeling: New Theory for Networking

The coming of age of the Internet has brought about a dual set of challenges and opportunities. The intellectual tools and techniques that brought us this far do not appear to be powerful enough to solve the most pressing problems that face us now. Additionally, concerns that were once relegated to the background when the Internet was small and noncommercial are now of crucial importance. In these challenges lies the opportunity for innovation:

  • Understanding scaling and dynamics requires the development of new modeling methodologies and the undertaking of new modeling efforts (employing both well-known and newly developed techniques) to take our understanding beyond that afforded by today’s models.

  • Concerns of manageability, reliability, robustness, and evolvability—long neglected by researchers—are of critical importance and require the development of new basic understanding and theory.

  • Even traditional problem areas, such as routing, must be addressed in a new context in light of how the global Internet has evolved.

PERFORMANCE

Even as the Internet has grown more complex, those who study and use it seek to answer increasingly difficult questions. What sorts of changes in the scale and patterns of traffic could lead to a performance meltdown? What are the failure modes for large-scale networks? How can one characterize predictability?

Researchers have worked for years to develop new theory and improved models. While this work has yielded many insights about network behavior, understanding other aspects of the network has proven difficult. Workshop participants encouraged the networking research community to develop new approaches and abstractions that would help model an increasingly wide range of network traffic phenomena.

Simple models are more easily evaluated and interpreted, but complex models may be needed to explain some network phenomena. Queues and other resources cannot always be treated in isolation, nor can models always be based on simplified router-link pictures of the network. Small-scale, steady-state, packet-oriented models may not adequately explain all Internet phenomena. It is also well known that more sophisticated input models, such as heavy-tailed traffic distributions, are required to model some behaviors accurately (the sketch below illustrates why).

In other cases, the need is not for increased model complexity or mathematical sophistication but for just the opposite: new, simple models that provide insight into wide-scale behavior. These may well require dealing with network traffic at a coarser time scale or higher level of abstraction than traditional packet-level modeling. Here, theoretical foundations in such areas as flow-level modeling, aggregation/deaggregation, translation between micro and macro levels of analysis, and abstract modeling of the effects of closed-loop feedback and transients could be helpful.

Simulation is another important tool for understanding networks. Advances in large-scale simulation would aid model validation and yield higher-fidelity results.
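
The point about input models is easy to see numerically. Below is a minimal sketch (our illustration, not from the workshop report) comparing a light-tailed exponential model of flow sizes against a heavy-tailed Pareto model with the same mean; the parameters and the "top 1 percent" cutoff are arbitrary choices for illustration.

```python
import random

random.seed(1)

N = 100_000      # number of simulated flows
MEAN = 10.0      # target mean flow size (arbitrary units)

# Light-tailed input model: exponentially distributed flow sizes.
exp_sizes = [random.expovariate(1.0 / MEAN) for _ in range(N)]

# Heavy-tailed input model: Pareto flow sizes with shape a = 1.5 (infinite
# variance). For Pareto(shape a, scale xm), mean = a * xm / (a - 1); pick xm
# so the mean matches the exponential model.
a = 1.5
xm = MEAN * (a - 1) / a
pareto_sizes = [xm * random.paretovariate(a) for _ in range(N)]

def tail_share(sizes, top_frac=0.01):
    """Fraction of total bytes carried by the largest top_frac of flows."""
    sizes = sorted(sizes, reverse=True)
    k = max(1, int(len(sizes) * top_frac))
    return sum(sizes[:k]) / sum(sizes)

print(f"exponential: top 1% of flows carry {tail_share(exp_sizes):.1%} of bytes")
print(f"Pareto:      top 1% of flows carry {tail_share(pareto_sizes):.1%} of bytes")
```

With matched means, the exponential model spreads bytes fairly evenly across flows, while under the Pareto model a small number of very large flows carries a disproportionate share of all bytes. This is why analyses built on light-tailed inputs can badly mispredict the behavior of real Internet traffic.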

THEORY: BEYOND PERFORMANCE

Over the past three decades, several bodies of theory, such as performance analysis and resource allocation/optimization, have contributed to the design and understanding of network architectures, including the Internet. However, as the Internet has evolved into a critical infrastructure used daily by hundreds of millions of users, operational concerns such as manageability, reliability, robustness, and evolvability have supplanted performance of the data forwarding plane as the limiting factors. Yet theoretical understanding of these crucial areas is poor, particularly in comparison with their importance. The reasons for this disparity are many, including the lack of commonly accepted models, the difficulty of defining quantitative metrics, and doubts about the intellectual depth and viability of scholarly research in these areas.

As an example, consider the use of "soft state"¹ in the Internet, long hailed as a robust technique (when compared with hard-state approaches) for building distributed applications. Yet what, precisely, is the benefit of using soft state? The notions of robustness, relative "simplicity," and ease of implementation generally associated with soft state have not been defined, much less quantified.
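
To make the mechanism under debate concrete, here is a minimal sketch of soft state, assuming the common timer-based refresh formulation: installed state expires unless its owner periodically refreshes it, so failures clean themselves up without explicit teardown. The class and its API are hypothetical, not from the report.

```python
import time

class SoftStateTable:
    """State that vanishes unless refreshed within `lifetime_s` seconds."""

    def __init__(self, lifetime_s: float):
        self.lifetime_s = lifetime_s
        self._entries = {}  # key -> (value, absolute expiry time)

    def refresh(self, key, value):
        # Installing and refreshing are the same operation: owners
        # periodically re-announce their state, which is what makes it "soft".
        self._entries[key] = (value, time.monotonic() + self.lifetime_s)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:
            # The owner stopped refreshing (crash, partition, shutdown);
            # the stale entry removes itself, with no teardown message.
            del self._entries[key]
            return None
        return value

# Usage: an announcement that survives only while its owner keeps refreshing.
table = SoftStateTable(lifetime_s=0.1)
table.refresh("10.0.0.0/8", "next hop A")
print(table.get("10.0.0.0/8"))  # -> next hop A
time.sleep(0.2)                 # the announcer falls silent
print(table.get("10.0.0.0/8"))  # -> None: the state expired on its own
```

The claimed robustness is visible in the last line: when the announcer dies, its state evaporates on its own. What the report asks for is a theory that makes this intuition precise, for instance by quantifying the refresh bandwidth cost against the window of staleness that the chosen lifetime permits.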

To take another example, the notion of plug-and-play is widely believed to make networked equipment more manageable. However, the implications of such factors as increased code complexity and the cost of reconfiguring default settings remain elusive.

At the heart of this endeavor is the seemingly simple but deceptively elusive challenge of defining the problems and an appropriate set of starting assumptions. The next steps include developing new concepts or abstractions that would improve present understanding of the infrastructure, defining metrics for success, and pursuing solutions. Because the basic understanding and paradigms for research here have yet to be defined, the challenges are indeed daunting.

APPLYING THEORETICAL TECHNIQUES TO NETWORKING

Outsiders observed that more progress on fundamental networking problems might come from greater use of theoretical techniques and understanding drawn from algorithm design and analysis, complexity theory, distributed computing theory, general system theory, control systems theory, and economic theory. For example, routing has been well studied, both theoretically and practically, but remains a challenging and important problem for the networking community. Open research questions relevant to routing noted by workshop participants include the following (the last of these is illustrated by the sketch after the list):

  • Developing a greater understanding of the convergence properties of routing algorithms such as the Border Gateway Protocol (BGP) and of improvements to it. BGP has been found to suffer from much slower than expected convergence and can fail if misconfigured.

  • Developing a better theoretical framework for robustness and manageability to inform the development of less vulnerable designs.

  • Designing new routing algorithms that take into account real-world constraints such as the absence of complete information (and, often, the presence of erroneous information), peering agreements and complex interconnections among ISPs, and local policy decisions.

  • Developing routing schemes that take into account the fact that the network is not simply composed of routers and links: network address translators, firewalls, proxies, underlying transport infrastructures, and protocols all come into play. Which of these elements are relevant, and how should they be abstracted to better understand routing?

  • Developing an understanding of the conditions under which load balancing and adaptive multipath routing work effectively and the conditions under which they can lead to instability and oscillation.
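
The sketch promised above illustrates the last question in the list. It is a toy model of our own devising, not from the report: two paths, with senders that each interval shift a fraction GAIN of the observed load imbalance toward the lighter path, reacting to measurements that are one interval stale.

```python
# Toy model of adaptive multipath routing on stale load measurements.
# A small GAIN damps toward an even split; a GAIN near 1 means every
# sender reacts at once to the same stale numbers, and the imbalance
# flips sides forever instead of settling.

def simulate(gain: float, steps: int = 6):
    load_a, load_b = 90.0, 10.0      # assumed initial split, arbitrary units
    history = []
    for _ in range(steps):
        history.append((load_a, load_b))
        diff = load_a - load_b       # last interval's (stale) measurement
        moved = gain * diff          # aggregate reaction of all senders
        load_a -= moved
        load_b += moved
    return history

for gain in (0.25, 1.0):
    print(f"GAIN = {gain}:")
    for a, b in simulate(gain):
        print(f"  path A = {a:5.1f}   path B = {b:5.1f}")
```

With GAIN = 0.25 the loads converge smoothly toward an even split; with GAIN = 1.0 the system oscillates between (90, 10) and (10, 90) indefinitely. Characterizing where real load-balancing schemes sit between these regimes is exactly the kind of stability question the last bullet raises.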

¹ "State" refers to the configuration of elements, such as switches and routers, within the network. Soft state, in contrast to hard state, means that operation of the network depends as little as possible on persistent parameter settings within the network.