Making Disruptive Prototypes: Another Approach to Stimulating Research
In addition to measurement and modeling, a third approach to stimulating continued innovation is to build prototypes. Very successful, widely adopted technologies are subject to ossification, which makes it hard to introduce new capabilities or, if the current technology has run its course, to replace it with something better. Existing industry players are not generally motivated to develop or deploy disruptive technologies (indeed, a good example of disruptive technology is a technology that a major network hardware vendor would not consider implementing in its router products). Researchers in essence walk a fine line between two slippery slopes: Either carry out long-term research that may be difficult to apply to the Internet or work on much shorter-term problems of the sort that would be of interest to a router manufacturer or venture capitalist today, leaving little middle ground in which to invent new systems and mechanisms. So it is no surprise that as the scale and utility of the Internet have increased, it has become immensely difficult to develop an alternative vision of the network, one that would provide important new benefits while still supporting the features of today’s Internet, especially at the enormous scale of today’s network.
The Internet itself is, of course, a classic example of a disruptive technology that went from prototype to mainstream communications infrastructure. This section considers how to enable a similar disruptive innovation that addresses the shortcomings of today’s Internet and provides other new capabilities. Box 4.1 lists some research directions identified by workshop participants as ways of stimulating such disruptive network designs. Research communities in computer architecture, operating systems, databases, compilers, and so on have made use of prototypes to create, characterize, and test disruptive technologies. Networking researchers also make use of prototyping, but the barriers discussed above make it challenging to apply the prototype methodology to networking in a way that will result in disruptive change.
CHALLENGES IN DEPLOYING DISRUPTIVE TECHNOLOGY
One important consideration in any technology area—a key theme of the book The Innovator’s Dilemma1—is that a disruptive technology is likely to do a few things very well, but its overall performance and functionality may lag significantly behind present technology in at least some dimensions. The lesson here is that if innovators, research funders, or conference program committees expect a new technology to do all things almost as well as the present technology, then they are unlikely to invent, invest in, or otherwise encourage disruptive technologies. Thus (re)setting community expectations may be important to foster disruptive prototypes. Expectation setting may not be enough, however; a new technology must offer some sort of compelling advantage to compensate for performance or other shortcomings as well as the additional cost of adopting it. Those applications that do not need some capability of the disruptive technology will use the conventional Internet since it is larger and more stable.
Also central to the notion of developing a disruptive technology is suspending, at least temporarily, backward compatibility or requiring that technology developers also create a viable migration strategy. Outsiders observed, for example, that rigid adherence to backward compatibility would have made the development of reduced instruction set computers (RISCs) impossible.
Another key factor in the success of a disruptive technology is the link to applications. The popularity of many disruptive computer technologies has been tied to the applications that people can build on top of the technologies. One example is the personal computer. Early on, it
BOX 4.1 Some Potentially Disruptive Ideas About Network Architecture and Design
Workshop participants discussed a number of architectural/design issues that could stimulate disruptive network designs. The items that follow, though not necessarily points of consensus among the authoring committee, were identified as interesting questions worthy of further consideration and perhaps useful directions for future networking research.
Where Should the Intelligence in the Network Reside?
The traditional Internet model pushes the intelligence to the edge, and calls for a simple data forwarding function in the core of the network. Does this continue to be the correct model? A number of ad hoc functions are appearing in the network, such as NAT boxes, firewalls, and content caches. There are devices that transform packets, and places where the network seems to operate as an overlay on itself (e.g., virtual private networks). Do these trends signal the need to rethink how function is located within the network? What aspects of modularity need to be emphasized in the design of functions: protocol layering, topological regions, or administrative regions? Is there a need for a more complex model for how applications should be assembled from components located in different parts of the network? There was a sense in discussions at the workshop that the Active Networks research may have explored some of these issues, but that the architectural questions remain unanswered.
Is the End-to-End Model the Right Conceptual Framework?
The end-to-end model implies that the center of the network is a transparent forwarding medium, and that the two ends have fully compatible functions that interwork with each other. From the perspective of most application developers and, in some sense, from the perspective of users, this model is not accurate. There is often a lot of practical complexity in a communication across the network, with caches, mirrors, intermediate servers, firewalls, and so on. From a user perspective, a better model of network communication might be a “limited horizon” model, in which the application or user can see the detail of what is happening locally but beyond that can interact with the network only at a very abstract level. Could such a view help clarify how the network actually works and how application designers should think about structure?
How Can Faults Be Better Isolated and Diagnosed?
When something breaks in the Internet, the Internet’s very decentralized structure makes it hard to figure out what went wrong and even harder to assign responsibility. Users seem to be expected to participate in fault isolation (many of them know how to run ping and traceroute but find it odd that they should be expected to do so). This perspective suggests that the Internet design might be deficient in that it does not pay proper attention to the way faults can be detected, isolated, and fixed, and that it puts this burden on the user rather than the network operator. The fact that this situation might arise from an instance of the end-to-end argument further suggests that the argument may be flawed.
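The hop-by-hop probing that tools like traceroute perform can be sketched as a toy model. The sketch below is purely illustrative (no real packets are sent): the hop names and the reachability map are hypothetical, standing in for TTL-limited probes and their replies.

```python
# Toy model of hop-by-hop fault isolation, the logic behind traceroute:
# probe hops in path order and report the first one that fails to answer.
# Hop names and the `responds` map are hypothetical illustrations.

def locate_fault(path, responds):
    """Return (hop number, hop name) of the first unresponsive hop on
    `path`, or None if every hop answers. `path` is an ordered list of
    hop names; `responds` maps a hop name to whether a TTL-limited probe
    elicits a reply from it."""
    for ttl, hop in enumerate(path, start=1):
        if not responds[hop]:
            return ttl, hop  # probes with TTL >= ttl go unanswered here
    return None

path = ["edge-router", "isp-access", "isp-core", "peer-core", "dest-edge"]
responds = {h: True for h in path}
responds["peer-core"] = False  # simulate a failure deep in the network

print(locate_fault(path, responds))  # -> (4, 'peer-core')
```

The point the box makes is visible even in this toy: the probing logic runs entirely at the user's end host, with no help from the network, which is exactly the burden the workshop participants questioned.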
Are Data a First-class Object Inside the Network?
The traditional model of the Internet is that it moves bytes between points of attachment but does not keep track of the identity of these bytes. From the perspective of the user, however, the namespace of data, with URLs as an example, is a part of the network. The users view the network as having a rather data-centric nature in practice, and they are surprised that the network community does not pay more attention to the naming, search, location, and management of data items. Should content-based addressing be a network research problem?
Does the Internet Have a Control Plane?
The original design of the Internet stresses the data-transport function but minimizes attention to management protocols, signaling, and control. A number of ad hoc mechanisms supply these functions, but they do not receive the research attention and architectural definition that the data movement functions do. This seems out of balance and may limit what can be achieved in the Internet today.
Abstractions of Topology and Performance
The Internet hides all details of topology and link-by-link measures of performance (for example, bandwidth, delay, congestion, and loss rates) beneath the IP layer. The simple assumption is that the application need not know about this, and if it does need such information, it can obtain it empirically (by trying to do something and observing the results). As more complicated applications such as content caches are built, the placement of these devices within the topology of the Internet matters. Could a network provide an abstract view of its performance that simplifies the design of such systems? How could the various performance parameters be abstracted in a useful way, and would more than one abstraction be required for different purposes? What, for example, would it take for the network to provide information to help answer the question of which cache copy is most appropriate for a given user?
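One way to picture what such an abstraction might buy an application is a sketch like the following, which assumes the network exposes per-cache estimates of latency and loss; the cache names, the numbers, and the penalty weighting are all made up for illustration.

```python
# Sketch of replica selection over an abstract performance view, assuming
# the network exposes (latency in ms, loss rate) estimates per cache copy.
# Cache names, metrics, and the penalty formula are illustrative only.

def pick_cache(metrics):
    """Choose the cache copy minimizing a simple latency/loss penalty."""
    def penalty(stats):
        latency_ms, loss_rate = stats
        return latency_ms * (1.0 + 10.0 * loss_rate)  # crude weighting

    return min(metrics, key=lambda cache: penalty(metrics[cache]))

metrics = {
    "cache-east":  (12.0, 0.00),
    "cache-west":  (80.0, 0.00),
    "cache-local": (5.0, 0.20),   # nearby but lossy link
}
print(pick_cache(metrics))  # -> cache-east
```

Today an application must discover these numbers empirically, probe by probe; the question raised above is whether the network itself could supply an abstract view good enough to make selection logic this simple.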
Beyond Cooperative Congestion Control
A great many papers propose improvements to the current Internet scheme for congestion control. However, this scheme depends on the end nodes doing the right thing, and as the end nodes become less and less trustworthy it seems less and less suitable, suggesting that one needs to explore different trade-offs of responsibility between the users and the network. While some research is being done that explores alternatives to cooperative congestion control, this may be an area that deserves greater emphasis.
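The dependence on well-behaved endpoints can be illustrated with a minimal simulation of additive-increase/multiplicative-decrease (the cooperative behavior TCP-style senders are expected to follow). The capacity figure, round count, and loss model below are illustrative assumptions, not details from the workshop.

```python
# Minimal AIMD simulation: a cooperative sender halves its window on a
# loss signal; a non-cooperative sender simply ignores it. The capacity
# and parameters are illustrative assumptions.

def aimd(rounds, capacity, cooperative=True, cwnd=1.0):
    """Evolve a congestion window round by round: additive increase,
    multiplicative decrease on loss (if the sender cooperates)."""
    history = []
    for _ in range(rounds):
        loss = cwnd > capacity          # loss signal from the network
        if loss and cooperative:
            cwnd /= 2.0                 # multiplicative decrease
        else:
            cwnd += 1.0                 # additive increase
        history.append(cwnd)
    return history

good = aimd(rounds=50, capacity=10.0)
greedy = aimd(rounds=50, capacity=10.0, cooperative=False)
print(max(good), max(greedy))  # the greedy sender's window grows unbounded
```

The cooperative sender's window oscillates around capacity, while the greedy sender's grows without limit; since nothing in the network enforces the decrease step, the scheme's stability rests entirely on endpoint good behavior, which is precisely the trade-off of responsibility at issue.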
Incorporating Economic Factors into Design
It was noted that many of the constraints on the current network are economic in nature, not technological. Research, to be relevant in the immediate future, needs to take the broader economic, social, and governmental environment into account. One attendee noted that in many situations, the way to get people to behave as desired is to construct economic incentives, not technical constraints. This could be a useful way of thinking about network design issues.
Finding Common Themes in User Requirements
Many user communities feel that they are expending energy trying to solve problems faced by many other groups of users in areas such as performance, reliability, and application design. These communities believe that their requirements are not unique but that the network research community does not seem to be trying to understand what these common requirements are and how to solve them. The tendency within the network community is to focus attention on issues at lower layers of the protocol stack even if significant, widespread problems would benefit from work at higher layers. One reason is that when networking researchers become heavily involved with application developers, the work becomes interdisciplinary in nature. Ongoing work in middleware development is an example of this research direction. Workshop participants noted that this sort of work is difficult and rarely rewarded in the traditional manner in the research community.
Using an Overlay Approach to Deploying Disruptive Technology
Along with specific disruptive ideas, workshop participants discussed the important implementation question of how one could deploy new technology using the existing network (to avoid having to build an entirely new network in order to try out ideas). The Internet is generally thought of as being composed of a core, which is operated by the small number of large ISPs known as the tier 1 providers; edges, which consist of smaller ISPs and networks operated by organizations; and endpoints, which consist of the millions of individual computers attached to the Internet.1 The core is a difficult place to deploy disruptive technology, as the decision to deploy something new is up to the companies for which this infrastructure is the golden goose. Technical initiatives aimed at opening up the core might help, although ISP reluctance to do so would remain an issue. One of the successes of the Internet architecture is that the lack of intelligence within the core of the network makes it easy to introduce innovation at the edges. Following the end-to-end model, this has traditionally been done through the introduction of new software at the endpoints. However, the deployment of caching and other content distribution functionality suggests ways of introducing new functionality within the network near the edges. The existing core IP network could be used simply as a data transport service, and disruptive technology could be implemented as an overlay in machines that sit between the core and the edge-user computers.2 This approach could allow new functionality to be deployed into a widespread user community without the cooperation of the major ISPs, with the likely sacrifice being primarily performance. Successful overlay functions might, if proven useful enough, be “pushed down” into the network infrastructure and made part of its core functionality.
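The overlay idea can be sketched in a few lines: overlay nodes keep their own routing state and tunnel messages to one another through an underlay that does nothing but move data, standing in for the existing IP core. The node names and topology below are hypothetical, and real overlay nodes would of course exchange datagrams over sockets rather than call each other in memory.

```python
# Sketch of an overlay layered on a dumb datagram transport: the overlay's
# routing logic lives entirely in the overlay nodes, while the underlay
# (standing in for the IP core) just moves bytes between them. Node names
# and topology are hypothetical illustrations.

class OverlayNode:
    def __init__(self, name, underlay):
        self.name = name
        self.routes = {}          # destination name -> next overlay hop
        self.underlay = underlay  # maps node name -> OverlayNode
        self.delivered = []

    def send(self, dest, payload, trace=None):
        trace = [] if trace is None else trace
        trace.append(self.name)
        if dest == self.name:
            self.delivered.append(payload)
            return trace
        next_hop = self.routes[dest]
        # the underlay transports the message without understanding it;
        # all new (disruptive) forwarding behavior stays in the overlay
        return self.underlay[next_hop].send(dest, payload, trace)

underlay = {}
for name in ("a", "b", "c"):
    underlay[name] = OverlayNode(name, underlay)
underlay["a"].routes["c"] = "b"   # overlay routes a -> c via b
underlay["b"].routes["c"] = "c"

print(underlay["a"].send("c", "hello"))  # -> ['a', 'b', 'c']
```

Because the underlay never inspects what it carries, the overlay's routing scheme can be changed without touching it; this is the sense in which new functionality can be deployed without the cooperation of the core's operators, trading away only some performance.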
was a low-cost computing platform for those who wanted to write programs. Like the Internet, the PC was immediately seen as valuable by a small user community that sustained its market. But it was not until the invention of the spreadsheet application that the popularity of PCs rose rapidly. Similarly, in the networking world, the World Wide Web dramatically increased the popularity of the Internet, whose size went from roughly 200,000 computers in 1990 to 10 million in 1996, to a projected 100 million in 2001. Although the inventors of these applications were technically sophisticated, they were not part of the research community that invented the underlying disruptive technology. These examples illustrate an important caveat: It is hard to know up front what the “killer app” for new enabling technologies will be, and there are no straightforward mechanisms to identify and develop them. With any proposed technology innovation, one must gamble that it will be compelling enough to attract a community of early adopters; otherwise it will probably not succeed in the long run. This chicken-and-egg-type problem proved a significant challenge in the Active Networks program (as did failure to build a sufficiently large initial user community from which a killer application could arise).
There is a tension between experimentation on a smaller scale, where the environment is cleaner, research is more manageable, and the results more readily interpreted, and experimentation on a very large scale, where the complexity and messiness of the situation may make research difficult. A particular challenge in networking is that many of the toughest, most important problems that one would look to a disruptive networking technology to solve have to do with scaling, so it is often important to push things to as large a scale as possible. One-of-a-kind prototypes or even small testbed networks simply do not provide a realistic environment in which to explore whether a new networking idea really addresses scale challenges.
This suggests that if the research community is to attract enough people with new application ideas that need the disruptive technology, there will be a need for missionary work and/or compelling incentives for potential users. Natural candidates are those trying to do something important that is believed to be very hard to do on the Internet. One would be trustworthy voting for public elections; another, similar candidate would be developing a network that is robust and secure enough to permit organizations to use the public network for applications that they now feel comfortable running only on their own private intranets.
While focused on the disruptive ideas that could emerge from within the networking research community, workshop participants also noted the potential impact of external forces and suggested that networking (like any area of computer science) should watch neighboring fields and try to assess where disruptions might cause a sudden shift in current practice. The Internet is certainly subject to the possibility of disruptive events from a number of quarters, and many networking researchers track developments in related fields. Will network infrastructure technologies—such as high-speed fiber or wireless links—be such a disruption? Or will new applications, such as video distribution, prove a disruptive force? Workshop participants did not explore these forces in detail but suggested that an ongoing dialogue within the networking research community about their implications would be helpful.