
Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium (1987)

Chapter: Discussion: Comments on Expert Systems and Their Use

Suggested Citation:"Discussion: Comments on Expert Systems and Their Use." National Research Council. 1987. Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium. Washington, DC: The National Academies Press. doi: 10.17226/792.


DISCUSSION: COMMENTS ON EXPERT SYSTEMS AND THEIR USE

Allen Newell

Bruce Buchanan gave us a broad view of expert systems and showed a rather large collection of aspects across the whole field that need to be worried about to make the advances NASA needs. This leads to a point I want to make, which concerns my own concern about whether research is really needed on some parts of expert systems.

As preparation, Figure 1 shows my current favorite diagram to explain AI. You need to understand about AI that there are two dimensions in terms of which to talk about the performance of systems. The first is the amount of immediate knowledge that systems have stored up, that they can get access to. This can conveniently be measured by the number of rules. The second is the amount of knowledge that they obtain by exploring the problem. This can conveniently be measured by the number of situations examined before committing to a response. Thus, there are isobars of equal performance, with better performance increasing up towards the northeast. You can roughly locate different intelligent systems in this space. Expert systems are well up on the immediate-knowledge scale, without much search. The Hitech chess program, which has a little, but not very much, knowledge, lies far out on the search dimension. The human being is substantially above the expert systems on the knowledge dimension. Also, most expert systems do less search than humans do.

One notable point of this diagram is that, in the current era, expert systems are an attempt to explore what can be achieved without very much search and reasoning, but with a modest amount of immediately available knowledge. If you accept the characterization of expert systems in the figure, then even without all the research that Bruce was talking about, there exists an interesting class of programs, even though it is very limited in capability.
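The two dimensions of the diagram can be made concrete with a toy sketch (hypothetical; the task, rule table, and function names are illustrative, not from the symposium): the same diagnosis problem solved once by immediate knowledge (a stored rule lookup, zero situations examined) and once by search (trying candidate actions against a model and counting the situations examined).

```python
# Toy contrast between the two axes of Figure 1: immediate knowledge
# (stored rules) versus search knowledge (situations examined).
# All names here are illustrative, not from the symposium.

# Immediate knowledge: rules map a symptom pattern straight to a response.
RULES = {
    ("overheat", "low_pressure"): "shut_valve_A",
    ("overheat", "normal_pressure"): "throttle_pump",
    ("vibration", "low_pressure"): "inspect_seal",
}

def respond_by_knowledge(symptoms):
    """One lookup, zero situations examined: high on the knowledge axis."""
    return RULES.get(symptoms)

def respond_by_search(symptoms, actions, simulate):
    """Little stored knowledge: try candidate actions against a model,
    counting the situations examined (the search axis)."""
    examined = 0
    for action in actions:
        examined += 1
        if simulate(symptoms, action) == "nominal":
            return action, examined
    return None, examined

def simulate(symptoms, action):
    """Stand-in model of the plant, just for the sketch."""
    return "nominal" if RULES.get(symptoms) == action else "fault"

if __name__ == "__main__":
    s = ("overheat", "low_pressure")
    print(respond_by_knowledge(s))   # immediate lookup, no search
    print(respond_by_search(s, ["throttle_pump", "inspect_seal",
                                "shut_valve_A"], simulate))
```

Both paths reach the same response; they differ only in where the work is done, which is exactly the trade-off the isobars express.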
The expert systems of today constitute a class of programs that appears to be very useful if you limit the tasks to the right kinds. Bruce was helping to characterize that. We actually know a modest amount about this type of task. If you have the right knowledge assembled, then you know what to do and how to do it without very much involved reasoning. For such tasks and their expert systems, it is not clear that the big need is to do a lot more research. The big issue is to build lots of these systems for lots of these tasks. What is needed is more like a development effort, to find out which tasks can successfully be done with modest amounts of expertise.

[Figure 1: log-log plot of IMMEDIATE KNOWLEDGE in rules (10^0 to 10^10, vertical axis) versus SEARCH KNOWLEDGE in situations/tasks (10^0 to 10^10, horizontal axis), with equiperformance isobars; Human, Expert Systems, Hitech, and Early AI systems are located in the space.]

FIGURE 1 Immediate knowledge versus search knowledge trade-off.

The need is not to build any more expert-system shells, or to build more tools. The need is to pour all of the effort into finding out, in the plethora of space-station tasks, which are the ones for which the current level of technology really does provide interesting and useful solutions.

Tom Mitchell talked much more specifically than did Bruce about the fact that the space station is a physical system--that if you want to use expert systems and AI systems, they had better interact directly with physical devices. I agree absolutely that this is a major issue and a very important one for NASA to research. In particular, bringing control theory and symbolic reasoning together so that we understand them as a single field is important. What I would like to emphasize is how little we know about that. In some respects we do not even know the units to use to talk about it, or how such symbolic programs ought to interact with control systems.

To bring this point home, let me note that a lot of current effort in understanding the human motor system is directed toward exploring a kind of system which is not controlled in detail. A particular dynamic system that has the right properties is composed, and is sent off to do a motor action. A good example is Hollerbach's model of handwriting, in which the whole system is composed of simply-interacting dynamic subsystems, which continuously draw letter-like curves that are then modulated. These dynamic systems are not cast in concrete. They are created and torn down in seconds, in order to compose and recompose dynamically according to short-term task requirements. The motor units that the cognitive system interacts with are these composed dynamic systems. We know almost nothing about such systems. When we finally understand something about them, I suspect it will change our notion entirely of the interface between the symbolic system and the dynamic system. The point is that there is a lot of research to do before we even get a clear idea about how symbolic systems ought to interact with mechanical and dynamic systems.

Tom made a suggestion about emulating devices. If a device breaks, then the emulation can be plugged in. I think this is an intriguing idea and there may be a whole world of interesting research in it. You might counterargue that, if this is possible, then everything might as well be run in computer mode. But there is a real reason not to do that. Making the emulation work may take a lot of computing power. A principal reason for using real physical devices and not simulating everything is that your system runs faster if you do not simulate it. But that does not imply that, if one device breaks, you cannot bring to bear an overwhelming amount of computational capacity to try to compensate for it.
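The emulate-on-failure idea can be sketched as follows (a minimal hypothetical illustration, not a NASA design; the device names and fidelity numbers are invented): every physical device is paired with a costly software emulation that is activated only when the real device fails, so the system is prepared to emulate everywhere but pays the computing cost in only one or two places at a time.

```python
# Hypothetical sketch of Tom Mitchell's device-emulation suggestion:
# a real device is used while it works, and its software emulation is
# swapped in only on failure. All names and numbers are illustrative.

class Pump:
    def __init__(self):
        self.failed = False

    def flow(self, command):
        if self.failed:
            raise RuntimeError("pump hardware fault")
        return command * 0.98        # real hardware: fast, no compute cost

class PumpEmulator:
    def flow(self, command):
        # Stands in for an expensive numerical model; normally never run,
        # so its computing cost is paid only when a device breaks.
        return command * 0.95        # fidelity is lower than the real pump

class SupervisedDevice:
    """Prepared to emulate, but emulating only where needed."""
    def __init__(self, device, emulator):
        self.device, self.emulator = device, emulator
        self.emulating = False

    def flow(self, command):
        if not self.emulating:
            try:
                return self.device.flow(command)
            except RuntimeError:
                self.emulating = True    # degrade rather than shut down
        return self.emulator.flow(command)

if __name__ == "__main__":
    pump = Pump()
    node = SupervisedDevice(pump, PumpEmulator())
    print(node.flow(10.0))   # served by the real device
    pump.failed = True
    print(node.flow(10.0))   # emulation takes over as a backup
```

As Newell notes, the emulated path is never likely to be as good as the real device, but it beats shutting the whole system down.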
Thus, the system is prepared to emulate everywhere, but only has to do it in one or two places on any occasion. Emulation provides a backup capability. In fact, it is never likely to be as good, but at least it will be better than having to shut down the whole system. I think this is an interesting path of research, which could be pursued a long way. In particular, the feature that Tom mentioned about thinking of ways to construct systems so that they are decomposable and emulatable might yield many interesting possibilities.

Tom also raised the issue of sharing responsibility. However, he did not in fact tell us much about how tasks should be shared. Rather he described a particular aspect of the issue, which suggests that the machine ought to learn from the human, and then, quite properly, that the human ought to learn from the machine. I approve of both of these activities, but they beg the whole question of sharing. They do not elaborate ways of sharing, but both spend a fair amount of their time simply learning to be like each other, and confusing who really has the knowledge and who really knows how to do what. In fact, if one has machines with this kind of capability, the entire question of what it means to share may get transformed. It will become extremely difficult to quantify or be precise about who knows what, who ought to do what, and even who is doing what in the space station. There exists a kind of complementarity, in which the more you spread capabilities around in the system, so that there is a lot of redundancy, the less possible it will be to characterize the role of system components

effectively--to say for instance what the separate contributions are to the productivity of the total station. All I want to observe is that such systems are not clean, and learning and performance get confused. However, even though they are not clean, they may turn out to be the kind of system one has to build in order to get the margins of safety that are needed in space.

Finally, I want to talk about the issue of robustness, although it was not a major focus of either speaker. It is a fact, I believe, that there has been essentially no work on making expert systems robust. There is much attention, of course, to their giving explanations. But fundamentally expert systems are collections of rules, which are ultimately brittle and unforgiving. The lack of attention to robustness arises, in part, because there is a market for programs that are not very flexible or very robust. They can, nevertheless, be successful. They will be increasingly successful, especially if the problem is turned around by saying "I've got this hammer; where are interesting things to hit with it?" As a result, the expert systems field is not focused on solving the problem that I think NASA has to get solved, which is that it cannot use expert systems in space unless we understand how to build robust expert systems. A research program in robust expert systems could be fielded by NASA, and I would certainly recommend it. Given requirements on robustness, one could explore more redundant rule sets or the provision of greater backtracking and reasoning mechanisms. There are many approaches to robustness and reliability that have their analog in expert systems and could provide guidance.

However, I think something more heroic is at stake. What is really wrong here is the whole notion of laying down code--or rules, which play the role of code for existing expert systems. That is, as soon as you lay down code, it becomes an echo from the past, unadapted to the future.
You have become subject to a mechanism. Code is blind mechanism, complex perhaps, but blind. The important thing about a blind mechanism is that it does not care. A bullet does not care who it kills. A broken beam does not care on whom it falls. The horror stories about non-robust software almost invariably reflect the fact that code was laid down in the past, in a fantasy land of what was going to be, and something different happened at run time, for which the code was not adapted. The problem, I believe, is that the unit, the line of code, is wrong.

A clue for what might be right comes from the database world, with its adoption of transaction processing. It was concluded that the wrong thing to do was to take a line of code to be the unit. What had to be done was to package the specification of behavior in a hardened form called the transaction, for which some guarantees could be made. This has the right flavor of having changed the nature of the unit to something with guarantees, rather than just a little mechanism. Somehow, in the area of robustness, the smallest unit of action has got to be, if I can use a metaphor, a caring piece of action. It has to be an action which has a big enough context, even in its smallest unit, to react in terms of the global goals of the system, so it can care about safety and can care about the

consequences of what it is doing. Somehow we have to find out how to create units that have that property. The units cannot be rules or code and so forth, which are just mechanisms. I think NASA ought to go after that. It would be a great research project. It is my contribution to this symposium of a really basic research goal that has an exceedingly small chance of succeeding, but an immense payoff if it does.
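The transaction analogy from the database world can be sketched in miniature (hypothetical; the state variables, guard, and function names are invented for illustration): instead of a bare line of code that blindly mutates state, the unit of action carries enough context to check the system's global safety goals and commits only if they still hold.

```python
# Hypothetical sketch of the "caring unit" idea via the transaction
# analogy: an action is applied to a copy of the state and committed
# only if every global safety guard still holds. Names are illustrative.

import copy

def transact(state, action, guards):
    """Apply `action` to a copy of `state`; commit only if all guards
    (global safety predicates) hold on the result, else abort."""
    proposed = copy.deepcopy(state)
    action(proposed)
    if all(guard(proposed) for guard in guards):
        state.update(proposed)           # commit: the change lands atomically
        return True
    return False                         # abort: state is left untouched

if __name__ == "__main__":
    state = {"cabin_pressure": 101.0, "valve_open": False}
    # Global goal the unit "cares" about: keep cabin pressure safe.
    guards = [lambda s: s["cabin_pressure"] > 60.0]

    def vent(s):                         # a blindly written action
        s["valve_open"] = True
        s["cabin_pressure"] -= 50.0

    ok = transact(state, vent, guards)
    print(ok, state["cabin_pressure"])   # the unsafe change never commits
```

The guard check is what makes the unit more than "just a little mechanism": the smallest action sees the global goal, not only its own local effect.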
