Visions For The Future Of The Fields
David D. Clark, Moderator
Edward A. Feigenbaum
Robert W. Lucky
Robert M. Metcalfe
David D. Clark
In contrast to the previous symposium segments in which people presented set pieces, the Visions panel was designed to be entirely interactive.
We talk about the future of the field, but it is "fields"—plural—because the Computer Science and Telecommunications Board (CSTB) deals with computer science, computer engineering (this assumes that you believe they are different things), and telecommunications, especially with the acquisition of the redefined "T" in CSTB's title. Some parts of electrical engineering are also relevant. In addition, we have many different kinds of players—academia, industry, and government—as well as user groups and spokespersons for societal issues. Our concerns are multidisciplinary even within individual fields.
We have a future that is shaped by a variety of forces. Within each of the relevant fields, the question is, What is going to happen? Is it going to converge? Is it going to fly apart? We could ask some technological questions about the future. Are processors going to get faster? Are networks going to reach the home? The more interesting questions, perhaps, are the societal ones that may transform the world in some way. These are actually the hard questions to answer. Another thing we can do, especially with regard to some of the societal issues, is try to bound the possibilities. What are the boundary conditions of the technologies?
DAVID CLARK: Earlier in the symposium, we heard the phrase "the reckless pace of innovation in the field." It is a great phrase. I have a feeling that our field has just left behind the debris of half-understood ideas in an attempt to plow into the future. One of the questions I wanted to ask the panel is, Do you think that we are going to grow up in the next 10 years? Are we going to mature? Are we going to slow down? Ten years from now, will we still say that we have been driven by the reckless pace of innovation? Or will we, in fact, have been able to breathe long enough to codify what we have actually understood so far?
RAJ REDDY: You make it sound as though we have some control over the future. We have absolutely no control over the pace of innovation. It will happen whether we like it or not. It is just a question of how fast we can run with it.
CLARK: I was not suggesting that we had any control over the pace of innovation, but are you saying you think it will continue to be just as fast and just as chaotic?
REDDY: And most of us will be left behind, actually.
ROBERT LUCKY: We were talking this morning about the purpose of academic research. The problem that many of us involved in research have is that, as at Bell Labs, we used to talk about research in terms of 10 years. Now you can hardly see two weeks ahead in our field. The question of what long-term research is all about remains unanswered when you cannot see what is out there to do research on.
Nicholas Negroponte was saying recently that, when he started the Media Lab at the Massachusetts Institute of Technology, his competition came from places like Bell Labs, Stanford University, and the University of California at Berkeley. Now he says his competition comes from 16-year-old kids. I see researchers working on good academic problems, and then two weeks later some young kids in a small company are out there doing it. You may ask, "Where do we fit into this anymore?" In some sense, particularly in this field, I think there must still be good academic fields where you can work on long-term problems in the future, but the future is coming at us so fast that I just sort of look in the rear-view mirror.
MARY SHAW: I think innovation will keep moving; at least I hope so, because if it were not moving this fast, we would all be really good IBM 650 programmers by now. I think what will keep it moving is the demand from outside. In the past few years, we have just begun to get over the hump where people who are not in the computing priesthood, and who have not invested many years in figuring out how to make computers do things, can actually make computers do things. As that becomes easier—it is not easy yet—more and more people will be demanding services tuned to their own needs. I believe that they will generate the demand that will keep the field growing.
JURIS HARTMANIS: As was stated this morning, I think we can project reasonably well what silicon technology can yield during the next 20 years; the growth in computing power will follow the established pattern. The fascinating question is, What is the next technology to accelerate this rate and to provide the growth during the next century? Is it quantum computing? Could it really add additional orders of magnitude? Is it molecular or DNA computing? Probably not. The key question is, What technologies, if any, will complement and/or replace the predictable silicon technology?
CLARK: I wonder if growth and demand are the same thing as innovation? Mary, you talked about a lot of demand from outside. We could turn into a transient decade of interdisciplinary something, but does that actually mean there is any innovation in our field?
SHAW: We have had some innovation, but it has not been our own doing. Things like spreadsheets and word processors, for example, that have started to open the door to people who are not highly trained computing professionals have come at the academic community from the outside, and they had very little credibility for a long time. I remember when nobody would listen to you if you wanted to talk about text editors in an academic setting. Most recently, there has been the upsurge of the World Wide Web. It is true that Mosaic was developed in a university, but not exactly in the computer science department. These are genuine innovations, not just nickel-and-dime things.
EDWARD FEIGENBAUM: First, I would like to say a few words about the future, and then I will pick up on the theme that Dave Clark started with, the debris, and ask some of my friends in the audience about their debris.
There has been a revolution going on that no one really recognizes as a revolution. This is the revolution of packaged software, which has created immense amounts of programming at our fingertips. We go to the store; we buy it. This is the single biggest change from, say, 1980. I think the future is best seen not in terms of changing hardware or increased numbers of MIPS (or GIPS), but rather in terms of the software revolution. We are now living in a software-first world. I think the revolution will be in software building that is now done painstakingly in a craftlike way by the major companies producing packaged software. They create a "suite"—a cooperating set of applications—that takes the coordinated effort of a large team.
What we need to do now in computer science and engineering is to invent a way for everyone to do this at his or her desktop; we need to enable people to "glue" packaged software together so that the packages work as integrated systems. This will be a very significant revolution.
I think the other revolution will be the one alluded to by Leonard Kleinrock, what he called didactic agents or intelligent agents. Here, the function of the agent is to allow you to express what it is you want to accomplish, providing the agent with enough knowledge about your environment and your context for it to reason exactly how to accomplish it.
Lastly, I will say something about the debris. I can bring my laptop into this room, take an electric cord out of the back (presuming I have the adapter that David Farber was talking about before), and plug it into the wall. I get electricity to power my computer anywhere—in Wichita, La Jolla, on any Air Force base in the country, or anywhere else I might be, even at the National Academy. Yet I cannot take the information wire that comes out of the back and plug it into the wall in Wichita or La Jolla or any Air Force base I choose because all of a sudden I need the transmission control protocol (TCP) switcher. I need to have exact contexts for the TCP to operate in those particular environments. We do not yet have anything like an information utility. Yes, I can dial the Internet on a modem, but this is a second-rate adaptation to an old world of switched analog telephones. It is not the dream. The architecture of the Internet—wonderful as it may seem—has frustrated the dream of the information utility.
ROBERT METCALFE: There are two solutions to your problem. The first relates to the structure of the Internet, and for this I must defer to Robert Kahn. Since he and Vint Cerf are the fathers of the Internet, they must answer this question. The second solution to the problem has to do with what Gordon Moore has recently called "Grove's law." Grove's law states that the bandwidth available doubles every 100 years. It is a description of the sad effects of the structure of the telecommunications industry, which would be in charge of putting these plugs where you want them. This industry has been underperforming for 40 or 50 years, and now we have to wake it up.
LUCKY: What was the question? We are pushing something we would like to call IP dialtone. I see this as the future of the infrastructure right now, to have an IP network. There was an interesting interview with Mary Modahl in last month's Wired magazine. They asked her if voice on the Internet would really take over, and she said no. She said that real-time voice is a hobby, like citizen's band radio, not a permanent application. I actually think that in the future, the voice may be a smaller network and the IP infrastructure will really take it over. IP dialtone will be the main thing. I would not rebuild the voice network. I would just leave it there and build this whole new network of IP dialtone networks.
CLARK: Part of what marks our field is this reckless pace of innovation into the future. Another is the persistence of stubborn, intractable problems that we have no idea of how to solve. An obvious problem that was raised earlier in various guises is (to look at it abstractly) our ability to understand complexity or (to look at it more concretely) our ability to write large software systems that work. When we go to CSTB's 20th anniversary celebration and look back, do you think we are going to see any new breakthrough? Let us pick this stubborn problem as an example; then we can talk about some others. In software engineering, is something actually going to change? Are we going to see a breakthrough? I am thinking about the point Ed Feigenbaum made that people are going to be able to engineer a software package at their desks. I said, "Oh no. It is done by gnomes inside Microsoft." Won't it be done by gnomes inside Microsoft for the next 10 years?
SHAW: I think this is a very big problem, and Ed pointed out a piece of it—that the parts do not fit together. We have, though, this myth that someday we are going to be able to put software systems together out of parts just like Tinker Toys. Well, it isn't like that. It is more like having a bathtub full of Tinker Toys, Erector Sets, Lego blocks, Lincoln Logs, and all of the other building kits you ever had as a child; reaching into it; grabbing three pieces at random; and expecting to build something useful.
As long as we have parts that are intended to interact with other parts in different ways and we cannot even recognize quite how any given part is expected to interact, we will have a problem. We do not even have distinctions explicit enough to do the analogue of type checking—to say, "This one does not fit with that one, what can I do about it?" Well, maybe there is nothing we can do, and maybe we can find a piece that will patch it up. I think this is one of the major impediments to being able to put together systems from parts and make them work. I do believe that we will be able to make progress. Breakthrough is a pretty big word, but I think we will at least be able to make significant progress on articulating these distinctions and helping each other understand when we have the problem and what, if anything, we can do about it.
The other problem that Ed Feigenbaum raised is the nonmigratory local context. I have the same problem that Ed does, except mine is at the software level. I put a document on a floppy disk and I take it someplace. Well, maybe the text formatter I find when I get there is the same one that the document was created with—how fortunate. Even so, the fonts on the machine are not the same, and the default fonts in the text formatter are not the same, and it probably takes me half an hour to restore the document to legibility just because the local context changed—I see everyone is nodding, so I can quit telling this story. Then, of course, there is the rest of the time, when I find a different document formatter entirely. This is another example of having parts that exist independently that we want to move around and put together. Once again, I think the big problem is not being able to articulate the assumptions the parts make about the context they need to have.
BUTLER LAMPSON: I say we just had a breakthrough. How many breakthroughs per decade are you entitled to? The breakthrough we just had is the Web. You had to cobble together a few million computers, a whole bunch of servers, all kinds of legacy databases and documents, and all kinds of stuff. All you have to do is write a few Perl scripts and you can patch together huge amounts of stuff and make it accessible to millions of people. What is all this whining and moaning about? Furthermore, I would like to point out that if you want your document to be portable, just write it in vanilla ASCII and you will not have any problems with portability.
SHAW: I am really good at ASCII, and ASCII art too, but we were planning the next decade's breakthrough.
CLARK: You used a portable document as an example. I actually think that is a lot easier than integrating software modules. I thought when Butler stood up he was perhaps going to say something about the viability of distributed object linking and embedding (OLE). Is this the answer to composable software?
LAMPSON: Give it a decade. Microsoft has short-term things and long-term things. This is one of the long-term things, like Windows.
METCALFE: At the risk of being nasty, what I just heard is that we need standardization. This is all I heard. I did not hear that all this money we are spending on software research is not resulting in any breakthroughs, or whatever breakthroughs it is resulting in are not being converted because we just cannot standardize on it. Is this right? Is this what I heard?
SHAW: Standardization suggests that there is one size that fits all, and if everyone would "just do it my way, everything would be just fine." That implies that there is one way that suffices for all problems.
LUCKY: Isn't standardization what made the Web? We all got together behind one solution; it may not fit everybody, but we empowered everybody to build on the same thing, and this is what made the whole thing happen.
CLARK: One statement that was made at the beginning of this decade was that the nineties would be the decade of standards. There is an old joke: the nice thing about standards is that there are so many to pick from. In truth, I think that one of the things that has happened in the nineties is that a few standards—not because they are necessarily best—happened to win some sort of battle.
LUCKY: This is a tragedy and a great triumph at the same time. You can build a better processor than Intel or a better operating system than Microsoft. It does not matter. It just does not matter.
CLARK: How can you hurtle into the future at a reckless pace and, simultaneously, conclude that it is all over, it does not matter because you cannot do something better, because it is all frozen in standards?
METCALFE: There seems to be reckless innovation on almost all fronts except two, software engineering and the telco monopolies.
CLARK: Yet if we look at the Web, the fact is that we have a regrettable set of de facto standards in HTML and HTTP, both of which any technologist would love to hate. When you try to innovate by saying it would be better if URLs were different, the answer is, "Yes, well there are only 50 million of them outstanding, so go away." Therefore, I am not sure I believe your statement that there is rapid innovation everywhere, except for these two areas.
METCALFE: I go back to Butler Lampson's comments. Just last week there was rapid innovation in the Web.
CLARK: How about Windows 95?
METCALFE: Windows 95 is an endgame.
LUCKY: Dave, it is possible that if all the dreams of the Java advocates come true, this will permit innovation on top of a standard. It is one way to get at this problem. We do not know how it is going to work out, but at least this would be the theory.
CLARK: I actually believe it might be true. I think this is very interesting.
A tremendous engine exists down below that is really driving the field—the rate of at least performance innovation, if not cost reduction, in the silicon industry. This was the engine that drove us forward. I think that this is true, but I am not sure it's the only engine. I wonder if on our 20th anniversary we will say, "Well, yes, silicon is the thing that drove us forward"; or will there be other things? Is the World Wide Web a creation of silicon innovation?
SHAW: No, it is a creation of the frustration of people who did not feel like dealing with FTP and TELNET but still wanted to get to information.
CLARK: I think you just said that silicon and frustration are our drivers.
LUCKY: At the base, silicon has driven the whole thing. It has really made everything possible. This is undeniable, even though we spend most of our time, all of us, working on a different level. This is the engine in the basement that really is doing it.
METCALFE: The old adage: "Grove giveth and Gates taketh away."
CLARK: You know I am an academic researcher. I thought I would ask the panel a question about my future, because I am very concerned about this. We heard all sorts of things earlier in the symposium about the nature of the field and the relationships that exist among research activities. I will be somewhat parochial here in order to focus. For academic research, there is the model of the future of reckless innovation, combined with the alternative model of well, it is all over. Somebody said to me that you can build a better operating system, but it would not matter. You can make a better Web, but it would not matter. You can create a better computer architecture, but it would not matter. It is all over.
In some places, like the silicon industry, I heard that the vector is very clear. They can see all the way out to 2010, and they know the problems they have to solve. I cannot repeat their language because I do not speak their language, but they have to learn how to do bipolar implant polarization. It is advanced technology development. In that context, Howard Frank considers what the research community is doing as very narrow.
I agree. If, in fact, our agenda has been defined by the boundary conditions of Windows 95 and the insistence of the silicon industry on moving forward, then I think it is narrow, and there is no funding. If I have a good idea, I can bring one or two FTEs (full-time equivalents) to bear on it, while industry could bring 100 man-years to bear on it. Microsoft cranked out ActiveX in a year, right? How many man-years are in that? So what role can a poor academic play? I find myself asking, "If all of the academic researchers died, what impact would it have on the field in 10 years?"
REDDY: No students.
LUCKY: It is like the NBA (National Basketball Association) draft. Students are going to be leaving early, trying to be Marc Andreessen.
CLARK: This has happened to me. I cannot get them to stay. There is no doubt that it is a serious issue for me. So why does it matter?
METCALFE: I think it is true that, right now, industrial advancement in technology is outstripping the universities. I see this as a temporary problem that we need to fix. Some of us need to stop working on all these short-term projects in the universities and somehow leap out ahead of where the industry is now.
CLARK: I once described setting standards on the Internet as being chased by the four elephants of the apocalypse. The faster I ran, the faster they chased me, because the only thing between them and making a billion dollars was the fact that I had not worked this thing out. You cannot outrun them. If it is a hardware area, you can hallucinate something so improbable you just cannot build it today. Then, of course, you cannot build it in the lab, either. We used to try to have hardware that let us live 10 years in the future. Now I am hard-pressed to get a PC on my desk. Yet in the software area, there really is no such thing as a long-term answer. If you can conceive it, somebody can reduce it to practice. So I do not know what it means to be long term anymore.
HARTMANIS: I do not believe what was said earlier, that if you invent a better operating system or a better Web or computer architecture, it does not matter. I think it matters a lot. It is not that industry takes over directly what you have done, but the students who move into industry take those ideas with them, and they do show up in development agendas and products. I am convinced that the above assessment is far too pessimistic about the influence of academic research.
FEIGENBAUM: I want to make a couple of comments. On the question of long-term versus short-term research, the universities would say, and I would say, that university researchers attend to longer-range issues. At a Defense Advanced Research Projects Agency (DARPA) software conference last August here in Washington, Bill Joy gave the keynote speech. In the question-and-answer period they talked about this issue of short-range and long-range research. In the context of stressing that there is really a place for universities, Bill said that at Sun, 18 months was a long time. He said he would not entertain anything that is more than 24 months out.
Then I was at another DARPA meeting recently where they were talking about advances in parallel computer architectures. The project they were focusing on as being very advanced was the work of the Stanford Computer System Lab on the FLASH architecture. This project has been going on for more than a decade now. It evolved with several different related architectures. This kind of sustained effort is the role of the university.
LUCKY: I want to say, in support of academics, that we are all proud of what the Internet and the Web have done. They were created by a partnership between academia and the government. The industry had very little to do with it. The question for all of us is whether this is a model that can be repeated. Can government do something again as it did with ARPANET that will have the tremendous effects for all of us that this has had two decades later?
WILLIAM WULF: I think this long-term versus short-term language is a red herring. Let me remind you of the figure that was up on the screen earlier this morning that comes from the Brooks-Sutherland report (Figure 2.1 in this volume). It shows research going on in universities and industry labs, product development going on, and the point in time at which something becomes a billion-dollar industry. The thing that is so wonderful about that figure is that it bounces back and forth. It is not a linear translation from far-out basic academic research to short-term grubbing product development. There are interesting, deep academic problems that are spawned by short-term product development. If any group in the world ought not to be having this discussion, it seems to me it is this field because we have experienced that the linear model is just so much nonsense.
MICHAEL DERTOUZOS: All this negativism, I just do not like it. Just a few random reflections: When I try to use my computer, I have to wait for what perceptually is 17 hours of booting. I do not want to wait that long, especially since I have to do it 20 or 30 times a day because it always fails. I would like a machine and a system that do not crash every 6 hours.
CLARK: Do you use Windows?
DERTOUZOS: I am using everything under the sun, and they all crash. I would like to have a machine that is easy to use. When I say a machine, I mean the whole spectrum—software and hardware systems. We brag about the Web, and yes, Butler, it is a great thing, and 40 million or 10 million users—or whatever the number is—is great, but there are 700 million telephones and 7 billion people in the world. There are voiceless millions, and we are not pinging against the limits. To get there and to have utility from these machines, we will have to be able to use them easily. To me, this is a long-term project for 30 or 40 years ahead.
Somebody said that voice was going away. I think speech is the most natural thing. We have to learn how to use it to make our machines understand us and learn from us. I just do not see this bottoming out of the field. Maybe we are in a bit of a lull. I agree that it is hard to find specific problems out there, but I think that if you look at the whole picture, there is a great deal ahead. I would like to ask if the people on the panel could provide their list of things that they would like to see.
FEIGENBAUM: I would like to say something about a paradox or a dilemma in which university researchers find themselves. If you go around and look at what individual faculty people do, you find smallish things in a world that seems to demand more team and system activity. There is not much money around to fund anything more than small things, basically to supplement a university professor's salary and a graduate student or two, and perhaps run them through the summer.
Partly this is because of a general lack of money. Partly it is because we have a population explosion problem and all these mouths to feed. All the agencies that were feeding relatively few mouths 20 years ago are now feeding maybe 100 times as many assistant professors and young researchers, so the amounts of money going to each are very small. This means that, except for the occasional brilliant meteor that comes through once in a while, you have relatively small things being done. When they get turned into anything, it is because the individual faculty member or student—as Professor Hartmanis mentioned, some students take these ideas out into the
world—convinces an industry to spend more money on it. Subsequently, the world thinks that it came out of the industry.
ANITA BORG: I wanted to talk a little bit about the question of where you get innovation and where academics get ideas for problems to work on. This is something that I talk about every time I go, as an industry person, to talk to a university. It relates to what Bill was saying. If we keep training students to look inside their own heads and become professors, then we lose the path of innovation. If we train our students to look at what industry is doing and what customers and people out there using these things cannot do—not be terrorized by what they can do, but look at where they are running into walls—then our students start appreciating these as the sources of really hard problems. I think that this focus is lacking in academia to some extent and that looking outward at real problems gives you focus for research.
HARTMANIS: I fully agree. Students should be well aware of what industry is and is not doing, and I believe that many of them are well informed. Just as Michael Dertouzos complained about what is happening to his machine and what he wants to see done, students see problems with software and with the Internet. They go out and work summers in industry. They are not in any sense isolated; they know what is going on. Limited funding may not permit big university projects, but students are quite well informed about industrial activities.
SHAW: I side more with Anita. Earlier I mentioned three innovations that came from outside the computer science community—spreadsheets, text formatting, and the Web. I think they came about because people outside the community had something they needed to do and were not getting any help in doing. So we will get more leads by looking not only at the problems of computer scientists, but also at the problems of people who do not have the technical expertise to cope with these problems. I do not think the next innovation is particularly going to be an increment along the Web, or an increment on spreadsheets, or an increment on something else. What Anita is asking us to think about is, How are we going to be the originators of the next killer application, rather than waiting for somebody outside to show it to us?
FEIGENBAUM: I have talked to a lot of people abroad—academics and industry people in Japan and Europe—about our computer science situation, especially on the software side. We are the envy of the world in terms of the connectedness of our professors and our students to real-world problems. Talk about isolation—they think they are isolated relative to us.
I want to make a specific suggestion. There was a topic that came up in Joe Traub's talk about information warfare. There is, I think, a real-world context about which people ought to be concerned. I am giving you a perspective of 20 months with the Air Force and seeing the very real side of our academic discussions. We are in what I would call a pre-engineering phase with regard to handling the problems of information warfare that Joe spoke about. By pre-engineering I mean crafty and creative tinkering. If you actually go to the places where this work is done and watch the people at work, there is some scientific understanding, but not much. Indeed, there is no real engineering going on there, although the work is very innovative.
I think that the computer science academic world ought to pay attention. Len Kleinrock was making the case earlier that computer scientists and engineers should understand the nomadic computing world. He was telling us to understand this from the point of view of the "good guys" who want to give us functionality and ease of use. I would say we need to convince computer scientists and engineering researchers to understand the same world from the point of view of the "bad guys," and understand it at some depth. That is not the kind of thing that we academics usually pay attention to, but we must have good academic research focused on these issues.
STEWART PERSONICK: I want to add some data here. We have a research program at Bellcore; it is not enormous, but it amounts to about $35 million a year funded by our external customers, and we have some funding from the government as well. So we have a fair amount of money. In recent years, we have tried very hard to align our research to the needs of our customers to keep up the funding. I have advised universities that we funded at modest levels that I was not going to fund them as much as I used to. However, I indicated that I would be delighted to subcontract some of our research to them because this would be merely a transaction. I told them that it was not a gift. Bellcore has work to do, and we are prepared to subcontract it to them. We are talking potentially about millions of dollars. I have had no takers. People have been upset or discouraged by the fact that I have reduced the traditional funding; modest as it was, I have reduced it. I have not had anybody come back to me and express interest in subcontracting. What I hear people saying is, "Well, you know we don't do that."
This goes along with what I think Ed was saying. We do not have these enormous teams, but we do have teams working on big system problems that are very, very tough, and anyone who could solve these problems would be quite famous, in addition to making money for the customers. We are not seeing the academic community respond by saying it would love to subcontract this work. It seems as if it has not yet bought into this paradigm that we work together as a team on some problems that have real customers. These are not development things in some grubby sense. They are really, really tough computer science problems and system problems.
CLARK: Now it is time to give each of the panelists two or three minutes to tell us the thing about the future that matters the most to you.
REDDY: As Bob Lucky pointed out, there are different kinds of futures. If you go back 40 years, it was clear that certain things were going to have an impact on society—for example, communications satellites, predicted by Arthur C. Clarke; the invention of the computer; and the discovery of the structure of DNA. At the same time, none of us had any idea of semiconductor memories or integrated circuits. We did not conceive of the ARPANET. All of these came to have an impact.
So my hypothesis is that there are some things we now know that will have impact. One is digital libraries. The term digital library is a misnomer, the wrong metaphor. It ought to be called digital archive, bookstore, and library: it provides access to information at some price, including no price. In fact, the National Science Foundation (NSF) and DARPA have large projects on digital libraries, but they are mainly technology based—creating the technology to access information. Nobody is working on the other problem of content.
We have a Library of Congress with 30 million volumes; globally, the estimate is about 100 million volumes. The U.S. Government Printing Office produces 40,000 documents consisting of 6 million pages that are out of copyright. Creating a movement—because it is not going to be done by any one country or any one group, it must be done globally—to get all the content (to use Jefferson's phrase, all the authored works of mankind) on-line is critically important. I think this is one of the futures that will affect every man, woman, and child, and we can do it. At Carnegie Mellon University (CMU), we are doing two things to help. In collaboration with National Academy Press, we are beginning to scan, convert, correct, and put in HTML format all of its out-of-print books. There are already about 200 to 300 of them. By the end of the year, we expect to have all of them. The second thing CMU is doing is offering to put all authored works of CSTB members on the network.
METCALFE: I would like to speak briefly on behalf of efforts aimed at fixing the Internet. The Internet is one of our big success stories, and we should be proud of it, but it is broken and on the verge of collapse. It is suffering numerous brownouts and outages. Increasingly, the people I talk to, now numbering in the high 90 percent range, are generally dissatisfied with the performance and reliability of the Internet.
There is no greater proof of this than the proliferation of intranets. The good reason companies build them is to serve internal corporate data processing applications, as they always have. The bad reason is that the Internet offers inadequate security, performance, and reliability for those uses. So we now see this phenomenon inside companies. The universities, as I understand it, are currently approaching NSF to build another NSFnet for them. That is really a suggestion not to fix the Internet, but to build another network for us.
Of course, the Internet service providers are also tempted to build their own copies of the Internet for special customers and so on. I believe that this is the wrong fix, the wrong approach. We need to be working on fixing the Internet. Lest you be in doubt about what this would include, it would mean adding facilities to the Internet by which it can be managed. I claim that these facilities are not in the Internet because universities find management boring and do not work on it. Fixing the Internet also would include the addition of mechanisms for finance so that the infrastructure can be grown through normal communications between supply and demand in our open markets, and the addition of security; it is not the National Security Agency's fault that we do not have security in the Internet. It occurred because for years and years working on security has been boring, and no one has been doing it; now we finally have started.
We need to add money to the Internet—not the finance part I just talked about, but electronic money that will support electronic commerce on the Internet. We need to introduce the concept of zoning in the Internet. The Communications Decency Act is an effort, although lame, to bring this about. On the Internet, mechanisms supporting freedom of speech have to be matched by mechanisms supporting freedom not to listen.
We need progress on the development of residential networking. The telecommunications monopolies have been in the way for 30 or 40 years, and we need to break these monopolies and get competition working on our behalf.
SHAW: I think the future is going to be shaped, as the past has been, by changes in the relationship between the people who use computing and the computing that they use. We have talked a lot today about software, and we have talked a little about the World Wide Web, which is really a provider of information rather than of computation at this point. I believe we should not think about these two things separately, but rather about their fusion as information services, including computation and information, but also the hybrid of active information.
On the Web, we have lots of information available as a vast undifferentiated sea of bits. We have some search engines that find us individual points. We need mechanisms that will allow us to search more systematically and to retain the context of the search. To fundamentally change the relation between users and computing, we need to find ways to make computing genuinely widespread and affordable, private and symmetric, and intellectually accessible to a wider collection of people.
I thank Bob Metcalfe for saying most of what I was going to say about what needs to be done because the networks must become places to do real business, rather than places to exchange information among friends. In addition, we need to spend more time thinking about what you might call naive models, that is, ways for people who are specialists in something other than computing to understand the computing medium and what it will do for them, and to do this in their own terms so they can take personal control over their computing.
LUCKY: There are two things I know about the future. First, after the turn of the century, one billion people will be using the Internet. The second thing I know is that I do not have the foggiest idea what they are going to be using it for.
REDDY: Digital libraries.
LUCKY: Perhaps. I think it is fundamental that we do not know this. We have created something much bigger than ourselves, where biological rules seem more relevant than the paradigms we are used to, where Darwinism and self-adaptive organization may be the phenomena with which we have to deal. The question is, How do we design an infrastructure in the face of this total unknown? There are certain things that seem to be an unalloyed good that we can strive for. One of them is bandwidth. Getting bandwidth out all the way to the user is something we can do without loss of generality.
On the other side, it is hard to find other unalloyed goods. For example, intelligence is not necessarily a good thing. Recently there was a flurry of e-mail on the Internet when one of the router companies announced that it was going to put an "Exon box" in its router. An Exon box would check all packets going by to see if they are adult packets or not. There was a lot of protest on the Internet, not because of First Amendment and Communications Decency Act principles, but because people did not want anything put inside the network that exercises control, simply as an architectural paradigm, more than anything else.
So it is hard to find these unalloyed goods. Bandwidth is good, but anything else you do on the network may later come back to bite you because of profound uncertainty about what is happening.
HARTMANIS: I would like to talk more about the science part of computer science, namely, theoretical work in computer science, its relevance, and identifying some stubborn intellectual problems. For example, security and trust on the Internet are of utmost importance; yet all the methods we use for encryption are based on unproven principles. We have no idea how hard it is to factor large integers, but our security systems are largely based on the assumed difficulty of factoring. There are many more such unresolved problems about the complexity of computations that are of direct relevance to trust, security, and authentication, as well as to the grand challenge of understanding what is and is not feasibly computable. The notorious P = NP problem is probably the best known problem of this type, but by far not the only one. I consider these among the most important problems in theoretical computer science and sincerely hope that, during the next 10 years, some of them will be solved. I believe that deeper understanding of these problems will have a strong impact on computer science and beyond. Because of the universality of the computing paradigm, the quest to understand what is and is not feasibly computable is equivalent to understanding the limits of rational reasoning—a noble task indeed.
FEIGENBAUM: I would like to talk very briefly about artificial intelligence and the near future. If we look back 50 years—in fact, to the very beginning of computing—Turing was around to give us a vision of artificial intelligence and what it would be, beautifully explicated in the play about Turing's life, Breaking the Code.
Raj Reddy published a paper in the May 1996 Communications of the ACM, his Turing Award address, called "To Dream the Possible Dream." I, too, share that possible dream. However, I feel like the character in the William Steig cartoon who is tumbling through space saying, "I hope to find out what it is all about before it is out."
There is a kind of Edisonian analogue to this. Yes, we have invented the light bulb, and we have given people plans to build the generators. We have given them tools for constructing the generators. They have gone out and hand-crafted a few generators. There is one lamppost working here, or lights on one city block are working over there. A few places are illuminated, but most of the world is still dark. Yet the dream is to light up the world! Edison, of course, invented an electric company. So the vision is to find out what it is we must do—and I am going to tell you what I think it is—and then go out and build that electric company.
What we learned over the past 25 years is that the driver of the power of intelligent systems is the knowledge the systems have about their universe of discourse, not the sophistication of the reasoning process the systems employ. We have put together tiny amounts of knowledge in very narrow, specialized areas in programs called expert systems. These are the individual lampposts or, at most, the city block. What we need built is a large, distributed knowledge base. The way to build it is the way the data space of the World Wide Web came about—a large number of individuals contributing their data to the nodes of the Web. In the case I am talking about, people will be contributing their knowledge in machine-usable form. The knowledge would be presented in a neutral and general way—a way of building knowledge bases so they are reusable and extendible—so that the knowledge can be used in many different applications. A lot of basic work has been done to enable this kind of infrastructure growth. I think we just need the will to go down that road.
JEROME GLENN: As far as being timid about talking about the future, didn't we all get into computers because we wanted to focus global intelligence on the most difficult problems to solve? Things that we could not do alone? Are we going to create a global interface between human brains and problems and machines? Isn't that the direction? So what is this fear of talking about the future?
PETER FREEMAN: I see a fair amount of confusion between the development of products or technology and the development of concepts or understanding. Several of you touched on this in your comments about what goes on, or should go on, in university-based research. I quite agree that many of us in universities are too focused on the short term, but ultimately, if we are to get to that next generation of products and technology, we have to have some new concepts.
I would point out just one in an area that several of you identified, software engineering, which I agree is almost devoid of ideas. There are a few people—Mary Shaw is one of them—who are trying to develop ways to express the architecture of software systems. Without that kind of architectural representation and description, we will never be able to do the kinds of things that Edward Feigenbaum was asking about, for example.