forwarding plane as the limiting factors. Yet theoretical understanding of these crucial areas remains weak relative to their importance. The reasons for this disparity are many, including the lack of commonly accepted models for research in these areas, the difficulty of defining quantitative metrics, and doubts about the intellectual depth and viability of scholarly research on these questions.
As an example, consider the use of “soft state” in the Internet, long hailed as a robust technique (compared with hard-state approaches) for building distributed applications. Yet what, precisely, is the benefit of using soft state? The notions of robustness, relative “simplicity,” and ease of implementation generally associated with soft state have not been rigorously defined, much less quantified. To take another example, plug-and-play operation is widely believed to make networked equipment more manageable. However, the implications of such factors as increased code complexity and the cost of reconfiguring default settings remain poorly understood.
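The soft-state idiom itself is easy to state even if its benefits resist quantification: state is installed with a lifetime and silently expires unless periodically refreshed, so a crashed or partitioned sender's state cleans itself up with no explicit teardown message. The following is a minimal sketch of that idea; the class and parameter names are illustrative, not taken from any real protocol implementation.

```python
import time

class SoftStateTable:
    """Illustrative soft-state table: entries expire unless refreshed."""

    def __init__(self, lifetime, clock=time.monotonic):
        self.lifetime = lifetime    # seconds an entry survives unrefreshed
        self.clock = clock          # injectable clock keeps the demo deterministic
        self.expiry = {}            # key -> absolute expiry time

    def refresh(self, key):
        """Install or refresh an entry; refreshes are idempotent."""
        self.expiry[key] = self.clock() + self.lifetime

    def lookup(self, key):
        """True if the entry is still live; expired entries vanish lazily."""
        deadline = self.expiry.get(key)
        if deadline is None or deadline < self.clock():
            self.expiry.pop(key, None)   # lazy garbage collection
            return False
        return True

# Deterministic demonstration using a fake clock.
now = [0.0]
table = SoftStateTable(lifetime=30.0, clock=lambda: now[0])
table.refresh("flow-1")
alive_before = table.lookup("flow-1")   # still within its lifetime
now[0] = 31.0                           # no refresh arrived in time
alive_after = table.lookup("flow-1")    # the state has silently expired
```

The informal robustness argument is visible in the structure: failure recovery and garbage collection share one code path (expiry), rather than requiring separate failure detection and explicit teardown as a hard-state design would. What the text observes is that this intuition, however plausible, has not been turned into a quantified claim.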
At the heart of this endeavor is the seemingly simple but deceptively elusive challenge of defining the problems and an appropriate set of starting assumptions. The next steps include developing new concepts or abstractions that would improve present understanding of the infrastructure, defining metrics for success, and pursuing solutions. Because the basic understanding and paradigms for research here have yet to be defined, the challenges are indeed daunting.
Outsiders observed that more progress on fundamental networking problems might come from greater use of theoretical techniques and insights from algorithm design and analysis, complexity theory, distributed computing theory, general system theory, control systems theory, and economic theory. For example, routing has been well studied, both theoretically and practically, but remains a challenging and important problem for the networking community. Some of the open research questions relevant to routing noted by workshop participants include the following:
Developing a greater understanding of the convergence properties of routing protocols such as the Border Gateway Protocol (BGP) or improvements to it. BGP has been found to converge much more slowly than expected and can fail when misconfigured.
Developing a better theoretical framework for robustness and manageability to inform the development of less vulnerable designs.
Designing new routing algorithms that take into account real-world constraints such as the absence of complete information (and, often, the presence of erroneous information), peering agreements and complex interconnections among ISPs, and local policy decisions.
Developing routing schemes that take into account the fact that the network is not simply composed of routers and links—network address translators, firewalls, proxies, underlying transport infrastructures, and protocols all come into play. Which of these elements are relevant and how should they be abstracted to better understand routing?
Developing an understanding of the conditions under which load balancing and adaptive multipath routing work effectively and the conditions under which they can lead to instability and oscillation.
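The convergence question raised above for BGP can be illustrated with a toy synchronous path-vector simulation. The model below (synchronous rounds, shortest loop-free path selection, a full mesh of four routers with the destination attached to one of them) is a deliberate simplification for illustration, not an implementation of BGP; it reproduces the "path exploration" behavior behind BGP's slow convergence, in which routers fall back to stale, ever-longer paths after a withdrawal before the route finally disappears.

```python
def best_path(node, rib):
    """Shortest loop-free path among the neighbors' latest advertisements."""
    cands = [p for p in rib[node].values() if p is not None and node not in p]
    return min(cands, key=lambda p: (len(p), p)) if cands else None

def run(nodes, links, rib, max_rounds=50):
    """Iterate synchronous advertisement rounds until no RIB changes.

    Returns (rounds, rib, history), where history[n] lists each distinct
    best path that node n selected along the way (None = no route).
    """
    history = {n: [] for n in nodes}
    for rnd in range(1, max_rounds + 1):
        adverts = {n: best_path(n, rib) for n in nodes}
        for n in nodes:
            if not history[n] or history[n][-1] != adverts[n]:
                history[n].append(adverts[n])
        new_rib = {n: dict(rib[n]) for n in nodes}
        for a, b in links:
            new_rib[b][a] = (a,) + adverts[a] if adverts[a] else None
            new_rib[a][b] = (b,) + adverts[b] if adverts[b] else None
        if new_rib == rib:
            return rnd, rib, history
        rib = new_rib
    return max_rounds, rib, history

nodes = [0, 1, 2, 3]
links = [(a, b) for a in nodes for b in nodes if a < b]  # full mesh of routers
rib = {n: {} for n in nodes}
rib[0]['D'] = ('D',)        # destination D is attached to router 0 only

_, rib, _ = run(nodes, links, rib)          # build phase: all routes go via 0
rib[0].pop('D')                             # the link to D fails
rounds, rib, hist = run(nodes, links, rib)  # withdrawal phase
```

Even in this four-router mesh, the withdrawal takes several rounds to settle: router 2, for instance, transiently adopts the stale detours (1, 0, 'D') and then (3, 1, 0, 'D') before concluding that no route exists. The number of such transient paths grows rapidly with topology size, which is one intuition behind BGP's slower-than-expected convergence.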
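The instability risk in adaptive multipath routing, noted in the last item above, can be seen in a two-path toy model in which each path's delay is proportional to its load and traffic shifts toward the path that was faster last round. The gain parameter and load values here are purely illustrative; the point is only the qualitative contrast between greedy and damped adaptation.

```python
def simulate(gain, rounds, total=100.0):
    """Return per-round load on path A (path B carries total - load).

    Each round, a fraction `gain` of the load imbalance moves from the
    slower path to the faster one; delay is modeled as equal to load.
    """
    x = total            # all traffic starts on path A
    history = [x]
    for _ in range(rounds):
        x = x - gain * (x - (total - x))   # shift toward the less-loaded path
        history.append(x)
    return history

greedy = simulate(gain=1.0, rounds=4)    # every source reacts at once
damped = simulate(gain=0.25, rounds=20)  # only a quarter of the imbalance moves
```

With gain 1.0 the load flips between 100 and 0 forever: all sources observe the same measurement and overreact in unison, so the "better" path is always the one they just abandoned. With gain 0.25 the imbalance halves each round and the loads converge to an even split. Characterizing where real networks sit between these regimes, with delayed and noisy measurements, is exactly the kind of open question the item above describes.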