Copyright © National Academy of Sciences. All rights reserved.
AI SYSTEMS IN THE SPACE STATION

Thomas M. Mitchell

INTRODUCTION

Among the technologies that will help shape life in the space station, Artificial Intelligence (AI) seems certain to play a major role. The striking complexity of the station, its life support systems, and the manufacturing and scientific apparatus that it will house require that a good share of its supervision, maintenance, and control be done by computer. At the same time, the need for intelligent communication and shared responsibility between such computer programs and space station residents poses a serious challenge to present interfaces between man and machine. Hence, the potential and hope for contributions from AI to the space station effort is great.

The purpose of this paper is to suggest areas in which support for new AI research might be expected to produce a significant impact on future space station technology. Given the breadth of this task, the approach here will be to sample a few such areas and to rely on the other symposium participants and other sources (e.g., Technical Report NASA-ASEE, 1983; Technical Report NASA, 1985) to fill in the picture. More specifically, we will address here (1) the use of knowledge-based systems for monitoring and controlling the space station, and (2) issues related to sharing and transferring responsibility between computers and space station residents.

Before focussing on the specifics of these two problem areas, it is useful to understand their significance to the development of the space station (and to other advanced projects such as development of a lunar base and interplanetary probes). In his keynote address to this symposium, Allen Newell provides an analysis of the general characteristics and constraints that define the space station effort. Those of particular relevance to this paper include the following:

The station is an extraordinarily complex system, with an extremely high premium to be placed on reliability, redundancy, and failsafe operation. In past space efforts, a large share of astronaut training has gone into acquiring the knowledge needed to supervise, control, and troubleshoot various spacecraft subsystems. The increased complexity of the space station argues for computer-based assistance in the supervision of many station subsystems, and it is no surprise that the history of the space program is a history of increasing automation and computer supervision. Furthermore, the high premium on failsafe operation places severe demands on the flexibility and adaptability of such computer-based supervisors. Such systems must be flexible enough to recognize and adapt to unanticipated events, and to communicate such unanticipated events clearly to the humans who help choose the response to these events. The flexibility demanded here goes well beyond that associated with present-day computer-based supervisory systems.

The space station is intended to be a highly evolutionary system, which will be continually reconfigured and upgraded over the course of its lifetime in space. The highly evolutionary nature of the station will make the task of crew training even more difficult than if the station were a static system. The problem of updating operating and troubleshooting procedures will be greatly exacerbated. In general, there will be greater demands on maintaining and updating the external documentation for the space station subsystems, and on prompt, thorough updating of procedures for monitoring, controlling, and troubleshooting the evolving space station. Computer-based methods for automatically updating such procedures, given updates to the description of the space station, would greatly enhance the ability to manage the evolving station.

The crew of the space station will possess differing levels of expertise regarding different space station subsystems, and will live in the station long enough that their expertise will change over the course of their stay aboard the station.
These differences in level of sophistication among various crew members (and between the same crew member at differing times) pose significant problems and opportunities for the computer systems with which they will interact. For naive users, computer systems that recommend given actions will have to provide a fairly detailed explanation of the reasoning behind the recommendation. For more expert users, less explanation may be needed. For advanced users, there will be an opportunity for the computer system to acquire new problem-solving tactics from the users. Furthermore, as a particular user becomes familiar with the competence and limitations of a particular computer-based supervisor, his willingness to allow the system to make various decisions without human approval may well change. The ability to interface effectively with a range of users, acting as a kind of tutor for some and acquiring new expertise from others, would allow the computer to act as the "corporate memory" for the particular aspect of the space station that is its domain and for which it will house a continually evolving set of expertise.

MONITORING, DIAGNOSING, AND CONTROLLING THE SPACE STATION

Given the above characteristics of the space station effort, it is clear that the use of computer-based assistants for supervising various space station subsystems could have a major impact on the overall reliability and cost of space station operations. In order to develop such computer-based supervisors, basic research is needed in a number of areas such as representing and reasoning about complex designed artifacts, inferring the behavior of such systems from schematics showing their structure, and automatic refinement of supervisory procedures based on empirical observation as well as the known system schematics. Since the space station will itself be a large, well-documented artifact, it is reasonable to expect a significant number of opportunities for applying computers to the task of supervising, controlling, and diagnosing the space station. For example, one might well expect that a computer could monitor various space station subsystems, such as the parts of the navigation system, to detect behavior outside their expected operating ranges, take remedial actions to contain the effects of observed errors, diagnose the likely causes of the observed symptoms, and reconfigure the system to eliminate the error. Of course, limited applications of computers to this kind of problem are fairly common in current-day space systems. But present methods for automated monitoring, diagnosis, and control are far from the levels of generality, robustness, maintainability, and competence that one would desire. AI offers a new approach to the problem of automated supervision. With appropriate research support, NASA might expect to significantly accelerate the development of AI methods for dealing with this class of problems, and thereby provide important new technology to support the space station.
A number of recent AI systems have addressed problems of monitoring, diagnosing, or controlling designed artifacts such as computer systems (Ennis et al., 1986), electromechanical systems (Pazzani, 1986), chemical processes (Scarl et al., 1985), and digital circuits (Davis, 1984; Genesereth, 1981). From this work, an initial set of techniques has emerged for building computer programs that embody a model (often in qualitative terms) of the behavior of the system under study, and which use this model to reason about the diagnosis, control, or reconfiguration of the system. While much remains to be understood, the initial approaches have shown clearly the potential for supervisory computer systems that combine judgemental heuristics with reasoning from a concrete model of the system under study.

An Example

As an example of an AI system that deals with monitoring and troubleshooting a designed artifact, consider Davis' circuit troubleshooting system (Davis, 1984). This system troubleshoots digital circuits, given a schematic of the misbehaving circuit together with observed discrepancies between predicted and observed signal values. Its organization is typical of several troubleshooting systems that have been developed for electronic, mechanical, and other types of systems. The basic idea behind this troubleshooting system is that it uses the schematic of the system, together with its knowledge of the expected behaviors of system components, in order to reason backward from observed incorrect output signals to those upstream circuit components that could have produced the observed error. This process is illustrated in Figure 1, taken from Davis (1984). In this figure, if the circuit inputs are given as shown, the system will infer the expected outputs as shown in round parentheses, based on its knowledge of the behaviors of multipliers and adders. If the two observed outputs are as shown in square parentheses, then a discrepancy is found between the expected and observed values for signal F. The system will then enumerate candidate fault hypotheses by considering that the error may be due to a failure in Add-1, or to incorrect values for one of its inputs (either X or Y). Each of these last two hypotheses might be explained further in terms of possible failures of the components or signals on which it, in turn, depends. Thus, candidate fault hypotheses are enumerated by examining the structure of the circuit as well as the known behaviors of its components. In addition to enumerating fault hypotheses in this fashion, the system can also prune these hypotheses by determining other anticipated consequences of presumed faults. For example, the hypothesis that the error in signal F is caused by an error in signal Y carries with it certain implications about the value of signal G. The value of 10 for signal F can be explained by a value of 4 for signal Y, but this would in turn lead to an expected value of 10 for signal G (which is observed to hold the value 12). Hence, this hypothesis may be pruned, as long as one assumes that the circuit contains only a single fault.
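The enumerate-and-prune reasoning just described can be sketched in a few lines of Python. This is an illustrative reconstruction of the idea, not Davis' implementation; the input values and component names (Mult-1, Mult-2, Add-1) follow the Figure 1 discussion, and a single fault is assumed.

```python
def diagnose(inputs, observed):
    """Enumerate single-fault candidates for a discrepancy at output F,
    pruning hypotheses whose predicted consequences contradict G."""
    A, B, C, D, E = inputs
    # Forward simulation from the schematic: expected signal values.
    X, Y, Z = A * C, B * D, C * E            # multiplier outputs
    expected_F, expected_G = X + Y, Y + Z    # adder outputs

    candidates = []
    if observed["F"] != expected_F:
        # A wrong F implicates Add-1 itself or either of its inputs, X or Y.
        for component, signal in [("Add-1", None), ("Mult-1", "X"), ("Mult-2", "Y")]:
            if signal == "Y":
                # A faulty Y also feeds Add-2: infer the Y value that would
                # explain F, then check its predicted consequence at G.
                implied_Y = observed["F"] - X
                if implied_Y + Z != observed["G"]:
                    continue  # contradicts observation of G: prune (single fault)
            candidates.append(component)
    return candidates
```

With the Figure 1 values, a discrepancy of F = 10 against G = 12 implicates Add-1 and Mult-1 but prunes Mult-2, exactly the argument made in the text.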
[Figure 1 shows the circuit: inputs feed multipliers Mult-1, Mult-2, and Mult-3, whose outputs X, Y, and Z feed adders Add-1 (output F) and Add-2 (output G); expected values appear in round parentheses, (12) at both F and G, and observed values in square brackets, [10] at F and [12] at G.]

FIGURE 1 Troubleshooting example. Source: Davis (1984).

The above example illustrates how a computer system can reason about possible causes of observed faults, by using knowledge of the schematic of the faulty system as well as a library describing the expected behaviors of its components. There are many subtleties that have been glossed over in this example, such as reasoning about the possibility of multiple system faults, interactions between faults, intermittent errors, utilizing statistical knowledge of likely faults and the resulting faulty behavior, scaling this approach to more complex systems, and the like. Basic research is still needed to develop more realistic diagnostic systems of this sort, and many of these issues are under study at this time. In addition, a good deal of research has been devoted to developing similar troubleshooting systems for artifacts other than digital circuits (e.g., mechanical, electromechanical, and chemical processes). The topic of reasoning about the expected behavior of designed artifacts of many types is an active research area within AI (see, for example, the recent special volume of Artificial Intelligence on qualitative reasoning about physical systems (North-Holland, 1984)).

Hands-On Supervisory Systems

The above example is meant to suggest how a program can utilize an internal model of the system it is monitoring in order to localize the cause of anomalous behavior. Since the space station will be heavily instrumented with sensors and with computer-controlled effectors, the real opportunity here lies in developing a technology for "hands-on" AI supervisory systems: systems that have the means to directly observe and control the behavior of the systems that they monitor, and that possess an explicit model of the system under supervision to guide their reasoning about monitoring, controlling, and troubleshooting this system. Figure 2 illustrates the general organization of such a hands-on supervisory system. One instantiation of the scenario characterized in the figure could be an electronically self-sensing, self-monitoring space station.
Here the system under supervision is the space station, sensors may observe the temperatures, pressures, and electrical behavior of various subsystems of the space station, and effectors may correspond to electrically controlled devices such as signal generators, heaters, compressors, and alarm systems. The goal of such an intelligent, self-monitoring space station would be to observe its behavior through its sensors, comparing these observations to the behavior anticipated by its internal model, and utilizing its effectors to maintain stable operation, reconfigure subsystems, and control the trajectory of states of the system. A number of observations are apparent about such a system:

To a limited degree it is already possible to build such partially self-monitoring systems. The theoretical possibilities for computer monitoring and control in such systems far exceed the capabilities of our present techniques. The effectiveness of such a system will depend on continuing fundamental research in AI, especially in areas such as qualitative reasoning, diagnosis, control, and learning. To allow for such a future, the initial design of the space station must allow for flexible introduction of new sensors and effectors in all subsystems of the space station, and over the entire life of the station.
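The observe-compare-correct cycle described above can be sketched as a toy closed loop. The drifting "plant," the tolerance, and the effector below are invented for illustration and stand in for any real station subsystem.

```python
def supervise(model_setpoint, read_sensor, actuate, tolerance=2.0, steps=10):
    """Compare sensed values against the internal model's expectation;
    use the effector to drive the system back toward expected behavior."""
    log = []
    for _ in range(steps):
        value = read_sensor()
        error = value - model_setpoint
        if abs(error) > tolerance:
            actuate(-error)                     # corrective effector action
            log.append(("correct", round(value, 2)))
        else:
            log.append(("nominal", round(value, 2)))
    return log

# A simulated subsystem whose temperature drifts upward until corrected.
state = {"temp": 20.0}

def read_sensor():
    state["temp"] += 1.5                        # unmodeled drift
    return state["temp"]

def actuate(delta):
    state["temp"] += delta

log = supervise(20.0, read_sensor, actuate)
```

Each pass through the loop compares a sensor reading with the model's expectation and intervenes only when the discrepancy exceeds the tolerance, leaving a log a human supervisor could review.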

[Figure 2 shows a hands-on supervisory system: a problem solver within the supervisory system, holding a model of the system under supervision, connected to that system through sensors and effectors.]

FIGURE 2 Hands-on supervisory system.

A very different instantiation of the scenario of Figure 2 is obtained by introducing mobility in the sensors and effectors of the computer monitor. In this case, the supervisor could take the form of a collection of mobile platforms whose sensors include cameras, range finders, touch sensors, and oscilloscope probes, and whose effectors include wheels, rocket engines, manipulators, signal generators, and arc welders. Such a system might be used to monitor the physical plant of the space station, checking for wear, and repairing the station as necessary, both interior and exterior. Several observations follow from considering this scenario: The leverage gained by adding mobility to sensors and effectors is large--especially in situations such as troubleshooting where the system parameters in question might not be directly observable or controllable by statically positioned sensors and effectors. A number of difficult issues arise in representing and reasoning about three-dimensional space, navigation, and the mechanics of physical systems. Given previous experience with robotics, it is clear that the difficulty of the technical problems can be considerably eased by designing a well-engineered work environment (e.g., by including easy grasping points on objects that are to be manipulated in the space station). In fact, we would like our supervisor to possess a combination of mobile and stationary sensors and effectors, including the union of those in the above scenarios.

Thus, these two scenarios illustrate different aspects of the class of hands-on supervisor problems summarized in Figure 2. The two scenarios suggest a number of common technical problems, including problems of integrating human judgement with computer judgement, planning a sequence of control operations based on only an incomplete model of the system under supervision, and utilizing sensory input to refine the model of the system under supervision. At the same time, each scenario carries its own technical problems which overlay those generic issues. For example, a mobile supervisor for monitoring and repairing the exterior surface of the space station must face issues such as representing and reasoning about three-dimensional space and navigation, interpreting a rich set of perceptual data taken from a changing (and incompletely known) vantage point, and using tools to manipulate the space station. Thus, NASA should consider supporting research on the generic problems of hands-on supervisory systems, as well as research on selected instances of the problem which it expects would yield significant practical gains.
Nature of the Problem

A fundamental defining characteristic of the system supervisor problem is uncertainty in the supervisor's knowledge of the system under study. A supervisor can almost never have complete and certain knowledge of the exact state of the system, of the rules that determine how one system state will give rise to the next, or of the exact effects of its control actions on the system. This characteristic alters dramatically the nature of diagnostic and control tasks. For example, given a perfect model of the system under study, a program might derive an open-loop control sequence to place the system in some desired state. However, in the absence of a perfect model, controlling the system requires interleaving effector actions with sensory observations to detect features of the system state.

The types and degrees of uncertainties faced in system supervision problems vary, of course, with the specific task. For instance, the task of monitoring a digital circuit might correspond to an extreme point in the spectrum of possibilities, since circuit schematics do, in fact, provide a very detailed model of the system, and since observing digital signal values is (by design) a relatively unambiguous task. It is probably no accident that several of the earliest attempts to construct AI troubleshooting aids were conducted in the domain of digital circuitry. However, that work showed that even in this domain it was very difficult to troubleshoot circuits based only on the knowledge available from the circuit schematic (Davis, 1984). The problem is that circuit behavior can depend on thermal effects, physical proximity of components, and other factors which are not typically reflected in a circuit schematic. Furthermore, it is precisely in troubleshooting situations that such effects become significant to determining the system's behavior. The problem of incomplete knowledge in modeling subsystem behaviors is even more difficult when one considers systems with combinations of electrical, mechanical, chemical, and biological subsystems.

In addition to uncertainty in modeling the expected behavior of the system under study, the difficulty of interpreting sensory input adds another kind of uncertainty in many domains. In the digital circuit world, it is fairly straightforward to observe the value of a desired signal, though it is rare that circuits are constructed so that every signal is brought outside the circuit for troubleshooting purposes. If the system under study is a chemical process rather than electrical, detecting relative concentrations of chemicals can often be a more complex task. In mechanical systems, detecting exact locations and forces is generally out of the question. If the system is the exterior of the space station and the sensors are video cameras, then the difficulty of sensing the exact location and physical condition of each subcomponent can itself become such an overwhelming task that the observations themselves must be treated as uncertain.

Yet another dimension of uncertainty arises from the effectors that are utilized by the supervisor to alter the system under study.
Again, in the circuit domain effectors such as signal generators are relatively reliable. But in the robotics domain, in which the system being supervised is the physical world, effectors such as artificial limbs may be fairly unreliable in executing actions such as grasping. In such cases, the problem of planning a sequence of actions to bring the system to a desired state must take into account nondeterminism in the effect of actions it performs.

In a sense, the ability to observe and affect the system under study and the ability to predict its behavior provide redundant sources of knowledge, so that one can be used to make up for uncertainty in the other. For instance, feedback control methods utilize sensory information to make up for an incomplete model of the next-state function. On the other hand, one can make do with observing only a small proportion of the signal values in a circuit and use the model of subcomponent behaviors to infer additional signal values upstream and downstream of observed signals. Given the various uncertainties that must be faced by a supervisory system, it is unlikely that purely algorithmic methods can be mapped out for dealing with all eventualities (although the vast NASA troubleshooting manuals indicate the degree to which this might be possible). A supervisory system will do best if it possesses redundancy to make up for the uncertainties that it must face: redundancy in the sensors that give it information about the world, in the effectors with which it controls the world, and in the behavioral models that it uses for reasoning about the system under study. While such redundancy can help reduce uncertainty, it will not be eliminated, and the supervisor must therefore employ problem solving methods designed to operate under incomplete information. All of these issues suggest the importance of combining heuristic methods with deductive methods for reasoning about the system under study.

Finally, these same problem characteristics that suggest the utility of employing AI methods (the need for flexibility in solving problems despite uncertainty) also suggest the importance of including humans in the problem-solving process. Even by optimistic estimates, it seems unlikely that AI systems will be able to completely replace human judgement in many supervisory tasks, though they may well augment it in many tasks. Thus, in many cases we envision cooperative problem solving involving computer systems and humans. Section "Sharing and Transferring Expertise in Man-Machine Problem Solving" discusses issues related to man-machine cooperation in this regard.

Research Recommendations

What research should be supported by NASA in order to maximize the future availability of hands-on supervisory systems of the kind described above? This section lists some areas that seem especially important, though the list is certainly not intended to be complete.2

Modeling system behavior at multiple levels of abstraction. At the heart of the ability to supervise a system lies the ability to model its behavior. Systems theory provides one body of (primarily quantitative) techniques for describing and reasoning about systems. AI has developed more symbolic methods for describing and reasoning about systems, given a description of their parts structure.
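The symbolic, structure-based style of modeling mentioned above can be illustrated with a small sketch that infers a system's behavior by propagating values through a description of its parts and connections. For concreteness it reuses the Figure 1 circuit; the representation itself is an invented simplification of such structural models.

```python
def simulate(components, inputs):
    """Infer output signal values from a structural description.
    `components` maps each output signal to (behavior, input signals)."""
    values = dict(inputs)
    progress = True
    while progress:                      # propagate until quiescent
        progress = False
        for out, (behavior, ins) in components.items():
            if out not in values and all(s in values for s in ins):
                values[out] = behavior(*[values[s] for s in ins])
                progress = True
    return values

# The Figure 1 circuit expressed as structure: three multipliers, two adders.
circuit = {
    "X": (lambda a, c: a * c, ["A", "C"]),
    "Y": (lambda b, d: b * d, ["B", "D"]),
    "Z": (lambda c, e: c * e, ["C", "E"]),
    "F": (lambda x, y: x + y, ["X", "Y"]),
    "G": (lambda y, z: y + z, ["Y", "Z"]),
}
values = simulate(circuit, {"A": 3, "B": 2, "C": 2, "D": 3, "E": 3})
```

The behavioral description (expected F and G) falls out of the structural one, which is the inference step the research recommendation below asks to generalize.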
A good deal of research is needed to further develop appropriate behavior representations for a variety of systems at a variety of levels of abstraction, and for inferring behavioral descriptions from structural descriptions. In addition, work is needed on automatically selecting from among a set of alternative models the one most appropriate for the task at hand. For example, one useful research task might be to develop a program which can be given a detailed schematic of a large system (e.g., a computer) as well as a particular diagnostic problem (e.g., the printer is producing no output), and which returns an abstract description of the system which is appropriate for troubleshooting this problem (e.g., an abstracted block diagram of the computer focussing on details relevant to this diagnostic task).

Planning with incomplete knowledge. The planning problem is the problem of determining a sequence of effector actions which will take the external system to a desired state. This problem has been studied intensely within AI, especially as it relates to planning robot actions in the physical world. However, current planning methods make unrealistic assumptions about the completeness of the robot's knowledge of its world, and of its knowledge of the effects of its own actions. New research is needed to develop planning methods that are robust with respect to uncertainties of the kinds discussed above. One useful research task here would be to develop methods that produce plans which include sensor operations to reduce anticipated uncertainties in the results of effector actions, and that include conditional branches in the plan to allow for "run-time" decisions based on sensory actions.

Integrating methods from control theory with symbolic control methods. Problems of system control, diagnosis (identification), and monitoring have been studied for some time in fields such as system control theory. Such studies typically assume a quantitative, mathematical model of the system under supervision, whereas AI methods model the system in a symbolic, logical formalism. Systems theory has developed various methods for using sensory feedback to make up for uncertainty in the model of the system under supervision, but these methods are difficult to apply to complex planning problems such as determining a sequence of robot operations to repair a failed door latch. Still, both fields are addressing the same abstract problems. Very little attention has been paid to integrating these two bodies of work, and research on both vertical and horizontal integration of these techniques should be supported.

Automatically refining the supervisor's theory of system behavior through experience. As discussed in the previous subsection, a major limitation on the effectiveness of a supervisor lies in its uncertain knowledge of the system under supervision. Therefore, methods for automatically refining the supervisor's knowledge of the system would be extremely useful.
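As a toy illustration of refining a supervisor's behavioral model from experience, the sketch below adjusts a single model parameter whenever predictions disagree with observed transitions. The plant, the gain parameter, and the update rule are invented for illustration and stand in for the far richer learning methods under discussion.

```python
def refine_gain(gain, observations, rate=0.5):
    """Adjust the model parameter `gain` (predicted effect of one unit
    of effector action) toward agreement with observed transitions."""
    for action, before, after in observations:
        predicted = before + gain * action     # model's prediction
        error = after - predicted              # discrepancy with observation
        gain += rate * error / action          # move gain to reduce the error
    return gain

# Transitions from a plant whose true gain is 2.0; the model starts at 1.0.
obs = [(1.0, 20.0, 22.0), (2.0, 22.0, 26.0), (1.0, 26.0, 28.0)]
learned = refine_gain(1.0, obs)
```

After three observed transitions the model's gain has moved most of the way from its initial guess toward the plant's true value, shrinking the prediction error the supervisor must otherwise absorb as uncertainty.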
In AI, research on machine learning and automated theory formation should be supported as it applies to this problem. The integration of this work with work in systems theory on model identification should also be explored. Possible research tasks in this area include developing robot systems that build up maps of their physical environment, and systems that begin with a general competence in some area (e.g., general-purpose methods for grasping tools) and which acquire with experience more special-purpose competence (e.g., special methods for most effectively manipulating individual tools).

Perception from multiple sensors. One method for reducing uncertainty in the supervisor's knowledge of the system's state is to allow it to use multiple, redundant sensors. Thus, a robot might use several video cameras with overlapping fields of view, placed at different vantage points, together with touch sensors, range finders, infrared sensors, etc. Or a supervisor for monitoring a power supply system might utilize a set of overlapping voltage and current sensors together with chemical sensors, heat sensors, etc. The benefit of using multiple sensors is clear: they provide more information. However, in order to make use of the increasing amounts of data available from multiple sensors, research is needed to develop more effective sensory interpretation/perception methods for individual sensors, and for fusing data from several sensors. An example research task here might be to develop a system that employs a number of video cameras, and which determines the correspondence between image features of the various images. A more ambitious project might try to predict image features likely to be found by one camera, based on information from other touch, video, and heat sensors.

Representing and reasoning about 3D geometric properties. For supervisors that possess mobile sensors or effectors, a variety of problems exist in reasoning about navigating through space, and in reasoning about 3D mechanical linkages such as those that couple a robot arm to a screw via a screwdriver. Research is needed on representing 3D objects (including empty space) in ways that allow for efficient computation of relations among objects, such as intersections (collisions) and unions. Since the use of tools involves constructing temporary mechanical linkages among objects (e.g., among a robot arm, screwdriver, screw, and wall), research is needed on efficiently representing and reasoning about such linkages so that effector commands can be planned that will achieve desired effects.
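As a small illustration of representing 3D objects so that relations such as intersection (collision) can be computed efficiently, the sketch below uses axis-aligned bounding boxes, a standard first-pass representation; the example volumes and their names are invented.

```python
def boxes_intersect(a, b):
    """Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Two boxes collide iff their extents overlap on every axis."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return all(a_lo[i] <= b_hi[i] and b_lo[i] <= a_hi[i] for i in range(3))

arm = ((0, 0, 0), (2, 1, 1))             # e.g., a robot arm's swept volume
panel = ((1.5, 0.5, 0.5), (3, 2, 2))     # e.g., a station panel it may strike
clear = ((5, 5, 5), (6, 6, 6))           # empty space elsewhere
```

A planner could run such a cheap test over many candidate motions before resorting to exact geometric reasoning, which is one way representation choice buys efficiency.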
While special-purpose robots operating in special-purpose environments can sometimes avoid using general methods for reasoning about 3D geometry, general-purpose systems expected to solve unanticipated problems will require this capability.

Designing systems to minimize difficulty in observing and controlling them. Given the great difficulties in the supervisory task that are introduced by uncertainty, one obvious reaction is to try to design the space station to reduce the uncertainties that automated supervisors will face. In short, the station should be designed to maximize the observability and controllability of those features which the supervisor will need to sense and effect. In the case of a supervisor with immobile sensors and effectors, such as a system to monitor the power supply, this requires that a broad and redundant set of sensors and control points be built into the power supply at design time. In the case of mobile supervisors, the observability of the station can be engineered, for example, by painting identifying marks on objects which will ease problems of object identification and of registering images obtained from multiple viewpoints. Similarly, the controllability of the physical space station can be enhanced, for example, by designing all its parts to present the same simple grasping point. While a good deal of anecdotal experience has been obtained on designing robot workstations to maximize their controllability and observability, little exists in the way of a science for designing such easily-supervised systems. Research in this area, if successful, could significantly reduce the number of technical problems that automated supervisors in the space station will face.

Feasibility of replacing hardware subsystems by software emulations. For immobile supervisors which monitor subsystems such as power supplies, navigation systems, etc., one intriguing possibility is that they might be able to substitute additional computation in place of failed hardware. For example, consider a subsystem, S, with a failed thermostat, T1. If S is being supervised by a computer system with a good model of the subcomponents of S, then this supervisor might be able to keep S working acceptably by substituting its own simulated output for that of T1. The degree to which this is possible will depend, of course, on (1) the veracity of the supervisor's model of S, (2) the access the supervisor has to other sensors in S (the more redundant, the better), and (3) the ability of the supervisor to control the point in S corresponding to the output of T1. While a software simulation might be slower and less accurate than a working thermostat, the advantage of substituting software for failed hardware is clear. Perhaps a small number of high-speed processors (such as parallel processors that have been developed for circuit simulations) could be included in the space station precisely for providing high-speed backup for a wide range of possible hardware failures.
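The thermostat scenario can be sketched as follows. The setpoint, sensor readings, and control logic are invented for illustration; a real emulation would rest on a far richer model of S and of its remaining healthy sensors.

```python
def thermostat_output(setpoint, readings):
    """Emulate a simple thermostat in software: signal the heater when the
    best available temperature estimate falls below the setpoint."""
    estimate = sum(readings) / len(readings)   # fuse redundant sensors in S
    return "heat_on" if estimate < setpoint else "heat_off"

def supervise_S(t1_failed, t1_reading, backup_readings, setpoint=21.0):
    """Use the hardware thermostat T1 while it is healthy; otherwise
    substitute the supervisor's computed estimate at the same control point."""
    if not t1_failed:
        return "heat_on" if t1_reading < setpoint else "heat_off"
    return thermostat_output(setpoint, backup_readings)
```

The emulation is only as good as the model and the redundant sensors behind it, which is exactly the dependence on conditions (1) through (3) noted above.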
While the feasibility of adding robustness to the space station by adding such computational power is unproven, the potential impact warrants research in this direction.

SHARING AND TRANSFERRING EXPERTISE IN MAN-MACHINE PROBLEM SOLVING

As noted in the previous section, the same problem characteristics that argue for flexibility and adaptability in automated supervisory systems also argue for allowing humans to participate in problem solving and decision making processes. As the complexity of computer support for the space station grows, the need for communication and shared responsibility between the computer and space station residents will grow as well. If ever we reach the stage of a fully automated, self-supporting space station, we are likely to first spend a significant period of time in which computer assistants will provide certain fully-automated services (e.g., simply monitoring station subsystems to watch for unexpected behavior), but will require interaction with their human counterparts in responding to many novel

events. Effective methods for such man-machine interaction will encourage the introduction of computer assistants for many more tasks than would be possible if totally automated operation were demanded. This section considers some of the research issues related to developing effective communication between AI systems and their users. Since several other symposium participants will address the issue of man-machine communication in general, I will try to focus this section on issues specific to sharing problem solving responsibilities and to transferring expertise from humans to their computer assistants.

Shared responsibility is a desirable characteristic whenever one is faced with a multifaceted task for which humans are best suited to some facets and machines to others. Humans use mechanical tools (e.g., wrenches) and computational tools (e.g., pocket calculators) for exactly such reasons. In the space station, we may find it desirable to share responsibility in motor tasks, as in a human controlling the mechanical robot arm in the space shuttle; in cognitive tasks, as in a human and computer system working jointly to troubleshoot a failed power supply; or in perceptual tasks, in which a human may assist the computer in finding corresponding points in multiple camera images so that the computer can then apply image analysis and enhancement procedures to the images. In each case, shared responsibility makes sense because the machine has certain advantages for some aspects of the task (e.g., physical strength and the ability to operate in adverse environments) while the human possesses advantages for other aspects (e.g., motor skills and flexibility in dealing with the unanticipated).

Sharing in the process of problem solving also raises the prospects for transfer of expertise. In many fields, humans learn a great deal by acting as apprentices to help a more advanced expert solve problems.
As the medical intern assists in various hospital procedures, he acquires the expertise that eventually allows him to solve the same problems as the doctor to whom he has apprenticed. One recent development in AI is a growing interest in constructing interactive problem solving systems that assist in solving problems, and that attempt to acquire new expertise by observing and analyzing the steps contributed by their users. This section argues that research toward such learning apprentice systems is an important area for NASA support.

An Example

In order to ground the discussion of shared responsibility and apprentices, we briefly summarize a particular knowledge-based consultant system designed to interact with its users to solve problems in the design of digital circuits. This system, called LEAP (Mitchell et al., 1985), is a prototype system which illustrates a number of difficulties and opportunities associated with shared responsibility for problem solving. LEAP helps to design digital circuits. Users begin a session by entering the definition of some input/output function that they would like a circuit to perform (e.g., multiply two numbers). LEAP provides

assistance in designing the desired circuit by utilizing a set of if-then rules which relate desired functional characteristics to classes of circuit implementations. For instance, one rule in this set dictates that "IF the desired function requires converting an input serial signal to an equivalent parallel signal, THEN one may use a shift register." LEAP utilizes these rules to suggest plausible refinements to the abstract circuit modules that characterize the partial design at any given stage.

Figure 3 depicts the interface to LEAP as seen by the user. The large window on the right contains the circuit abstraction which is presently being designed by the user/system. As shown in the figure, the circuit consists at this point of two abstract circuit modules. For each of these circuit modules, LEAP possesses a description of the function to be implemented. At any point during the design, the user selects one of the unimplemented circuit modules to be considered, and LEAP examines its rule set to determine whether any rules apply to this module (i.e., rules whose preconditions match the specifications of the circuit module). If LEAP determines that some of its rules apply to this situation, it presents the recommendations associated with these rules to the user. The user can then examine these options, select one if he wishes, and LEAP will refine the design accordingly. Figure 4 depicts the result of such an implementation step. Should the user decide that he does not want to follow the system's advice, but instead wishes to design this portion of the circuit manually, he can undo the rule-generated refinement and use LEAP as a simple, graphics-oriented circuit editor.

LEAP provides a simple example of shared problem solving between man and machine. The user directs the focus of attention by selecting which circuit module to refine next. LEAP suggests possible implementations of this module, and the user either approves the recommendations or replaces them with his own.
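The refinement cycle just described can be sketched in modern terms. The rules, the specification vocabulary, and the function names below are invented for illustration; LEAP's actual rule language is not reproduced here.

```python
# Sketch of LEAP-style rule application: if-then rules map a module's
# functional specification to candidate circuit implementations, and the
# user either approves a recommendation or designs the module manually.
# The rules and the spec vocabulary are illustrative assumptions.

REFINEMENT_RULES = [
    # (precondition on the module's specification, recommended implementation)
    (lambda spec: spec.get("convert") == ("serial", "parallel"), "shift register"),
    (lambda spec: spec.get("op") == "add" and spec.get("width", 0) > 1, "ripple-carry adder"),
    (lambda spec: spec.get("op") == "select", "multiplexer"),
]


def applicable_rules(spec):
    """Return the recommendations whose preconditions match this module."""
    return [impl for cond, impl in REFINEMENT_RULES if cond(spec)]


def refine_module(spec, choose):
    """Present matching recommendations; `choose` stands in for the user.

    `choose` receives the list of recommendations and returns one of
    them, or None to reject the advice and design the module manually.
    """
    options = applicable_rules(spec)
    picked = choose(options) if options else None
    return picked if picked is not None else "manual design"
```

Note how the division of responsibility falls out of the structure: the system proposes (pattern matching over its rule base), while the human disposes (the `choose` callback), and a module outside the rule base degrades gracefully to manual design.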
LEAP thus acts as an apprentice for design. For design problems to which its rule base is well-suited, it provides useful advice. For circuits completely outside the scope of its knowledge it reduces to a standard circuit editing package, leaving the bulk of the work to the human user. As the knowledge base of LEAP grows over time, one would expect it to gradually take on an increasing share of the responsibility for solving design problems.

LEAP also illustrates how such knowledge-based apprentices might learn from their users (Mitchell et al., 1985). In particular, LEAP has a primitive capability to infer new rules of design by observing and generalizing on the design steps contributed by its users. In those cases where the user rejects the system's advice and designs the circuit submodule himself, LEAP collects a training example of some new rule. That is, LEAP records the circuit function that was desired, along with the user-supplied circuit for implementing that function. LEAP can then analyze this circuit, verify that it correctly implements the desired function, and formulate a generalized rule that will allow it to recommend this circuit in similar subsequent situations. The key to LEAP's ability to learn general rules from specific examples lies in its starting knowledge of circuit operation. Although it may not

FIGURE 3 Interface to the LEAP system.

FIGURE 4 Circuit refined by LEAP.

initially have the expertise to generate a particular implementation of the desired function, it does have the ability to recognize, or verify, the correctness of many of its users' solutions. In general, it is easier to recognize a solution than to generate one. But once a solution can be recognized and explained, then LEAP can generalize on it by distinguishing that certain features of the example are critical (those mentioned in the verification), whereas others are not (those not mentioned in the verification).

LEAP is still a research prototype system, and has not yet been subjected to testing on a large user community. While there are no doubt many technical issues still to be solved, it serves as a suggestive example of how a knowledge-based consultant might be useful as an apprentice even before its knowledge base has been fully developed. It also suggests how its interaction with the user might lead it to extend its knowledge base automatically. The methods for collecting training examples and for formulating general rules appear generic enough that similar learning apprentice systems might be developed for many supervisory tasks of the kind discussed in the previous section. Other current research is exploring the feasibility of such learning apprentices in task domains such as signal interpretation (Smith et al., 1985), proving mathematics theorems (O'Rorke, 1984), and planning simple robot assembly steps (Segre and DeJong, 1985).

Nature of the Problem

The LEAP system suggests one kind of shared responsibility between computer and human, as well as a mechanism for the gradual accretion of knowledge by the system so that over time it can take on a progressively greater share of responsibility for problem solving. The ability to acquire new rules by generalizing from the users' actions follows from LEAP's starting knowledge of how circuits work. That is, it begins with enough knowledge of how circuits operate that it is
able to explain, or verify, the appropriateness of the users' actions once it observes them. Once it has verified that the user's circuit correctly implements the desired function, then it can generalize on this action by retaining only those features of the specific situation that are mentioned in this explanation. Similarly, if one tried to construct such a learning apprentice for troubleshooting power supply faults, one would want to include sufficient initial knowledge about the power supply (i.e., its schematic) that the system could verify (and thus generalize on) users' hypotheses about the causes of specific power supply malfunctions. Thus, in order for a system to learn from observing its users, it must begin with sufficient knowledge that it can justify what it observes the user do. It seems that for supervisory tasks of the kind discussed above, the primary knowledge required to construct such explanations is a description of the structure and operation of the system under supervision. Since AI has developed methods for

representing such knowledge, supervisory tasks seem like good targets for further research on learning apprentices. In addition to cognitive tasks such as monitoring, designing, and debugging, one might consider learning apprentices for robotics tasks such as using tools (see Segre and DeJong, 1985, for one example). Given a new tool for the robot to use, one way to train it might be to use a teleoperator to guide the robot through several uses of the tool. For example, given a new type of fastener, a user might guide the robot to grasp the fastener and use it to fasten two objects together. If the system could start with enough knowledge to explain which features of its trajectory and other motions were relevant to accomplishing the given task, then it might be able to generalize accordingly. Research on such robotic learning apprentices seems worthwhile and highly relevant to the goals of the space station program.

To understand the issues involved in sharing information and responsibility between human and machine, it is instructive to consider the issues involved in sharing responsibility strictly among humans. In both cases there are certain subproblems that are best dealt with by individual agents, and others where shared responsibility makes best sense. Successful interaction requires arriving at an agreement on which agent will perform which task. In LEAP, the user makes all such choices. But in more complex scenarios the user may not want to spend the time to approve every suggestion of the apprentice. In such cases, there must be ways to agree upon a policy to determine which decisions are worth having the human approve. Of course there are many other issues that follow from this analogy as well: the cooperating agents eventually need accurate models of their relative competence at various subtasks. And there will be questions of social and legal responsibilities for actions taken.
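The verify-then-generalize step that underlies these learning apprentices can be sketched as follows. The dictionary representation and the feature names are assumptions made for the sketch, not LEAP's actual representation; the point is only that the learned rule retains exactly the features mentioned in the verification.

```python
# Sketch of the generalization step described earlier: once a
# user-supplied solution is verified, only the features mentioned in
# the verification (the "explanation") are kept in the learned rule;
# the rest are dropped as incidental.  Feature names are illustrative.

def generalize(example_features, explanation_features):
    """Keep only the features the explanation marks as critical."""
    return {k: v for k, v in example_features.items()
            if k in explanation_features}


def make_rule(example_features, explanation_features, implementation):
    """Turn one verified training example into a general rule."""
    precondition = generalize(example_features, explanation_features)
    def rule(spec):
        # The rule fires on any situation sharing the critical features,
        # regardless of the incidental ones.
        if all(spec.get(k) == v for k, v in precondition.items()):
            return implementation
        return None
    return rule


# A training example: the user implemented serial-to-parallel conversion
# with a shift register; verification mentioned only the conversion
# requirement, not the incidental 8-bit signal width.
example = {"convert": ("serial", "parallel"), "width": 8}
learned = make_rule(example, {"convert"}, "shift register")
```

The learned rule then recommends a shift register for a 16-bit serial-to-parallel module as readily as for the original 8-bit one, because width was never part of the explanation.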
Here we have tried to suggest that one class of computer assistants on the space station be viewed as dynamic systems that interact with their users and work toward extending their knowledge and competence at the tasks they perform. Preliminary results from AI suggest that this is a worthwhile research task. The nature of the space station suggests that such self-refining systems are exactly what will be needed. The continually changing configuration of the station itself, the continually changing crews and types of operations that will be conducted aboard the space station, and the evolving technology that will be present all dictate that the computer assistants aboard must be able to adjust to new problems, new procedures, and new problem solving strategies over the life of the space station.

Research Recommendations

Here we suggest several areas in which NASA might support research toward advanced interfaces for interaction between humans and intelligent consultant systems.

Architectures that support graceful transfer of expertise and responsibility. Research toward developing learning apprentice systems for space station applications is encouraged, based on recent AI results and on the importance of such systems to the space station program. A prudent research strategy at this point would be to support development of a variety of learning apprentices in various task areas (e.g., for troubleshooting space station subsystems, for monitoring and controlling subsystems, for managing robot manipulation of its environment). Such a research strategy would lead to experimenting with alternative software architectures for learning apprentices, as well as an increased understanding of the feasibility of constructing learning apprentices for specific space station task areas.

Evolution of grainsize and initiative of interaction. As the expertise of the apprentice grows, and as the human becomes more familiar with the competence and communication capabilities of the computer, one expects that the optimal style of communication should shift. Changes may occur, for example, in who takes the initiative in controlling the direction of problem solving, and in the grainsize of the tasks (e.g., initially small subtasks will be discussed, but later it may be sufficient to focus only on larger grain subtasks). Research on interfaces that support these kinds of changes over time in the nature of the interaction, and which support explicit communication about such issues, should be encouraged. Such flexible interfaces are important whether the apprentice learns or not, since the user will certainly go through a learning period during which his understanding of the system's competence and foibles, and his willingness to trust in the system, will change.

Task-oriented studies of cooperative problem solving.
In order to understand the kinds of knowledge that must be communicated during shared problem solving, it may be worthwhile to conduct protocol studies in which a novice human apprentices with an expert to assist him and to acquire his expertise (e.g., at a task such as troubleshooting a piece of equipment). Data collected from such experiments should provide a more precise understanding of the types of knowledge communicated during shared problem solving, and of the knowledge acquisition process that the apprentice goes through.

Transferring knowledge from machine to man. Given the plans for a frequently changing crew, together with the likely task specialization of computer consultants, it is reasonable to assume that in some cases the computer consultant will possess more knowledge about a particular problem class than the human that it serves. In such cases, we would like the system to communicate its understanding of the problem to the interested but novice user. Certain work in AI has focused on using large

knowledge bases as a basis for teaching expertise to humans (e.g., Clancey and Letsinger, 1984). Research advances on this and other methods for communicating machine knowledge to humans would place NASA in a better position for crew training and for integrating intelligent machines into the human space station environment.

SUMMARY

This paper presents a sampling of recommended research directions which NASA may wish to support in order to accelerate the development of AI technology of particular relevance to the space station. We feel that recent AI research indicates the potential for a broad range of applications of AI to space station problems. In order for this potential to become reality, significant support for basic AI research is needed.

Research toward developing a wide range of "hands-on" supervisory systems for monitoring, controlling, troubleshooting, and maintaining space station subsystems is strongly recommended. Such research is important both because of its potential impact on reliability and safety of the space station and because the technical development of the field of AI is at a point where a push in this area may yield significant technical advances. Such hands-on supervisory systems could include both physically stationary supervisory systems that monitor electronic subsystems, power supplies, navigation subsystems, and the like, as well as physically mobile supervisors that monitor and repair the exterior and interior physical plant of the space station. Important technical challenges remain to be addressed in both areas.

In support of developing and deploying such knowledge-based supervisors, it is recommended that research be conducted leading toward interactive, self-extending knowledge-based systems. Such systems may initially serve as useful apprentices in monitoring and problem solving, but should have a capability to acquire additional knowledge through experience.
The evolutionary nature of the space station, together with the turnover of crew, assures that a continually changing set of problems will confront onboard computer systems. This feature of the space station, together with the need to continually extend the knowledge of problem solvers onboard, argues for the importance of research toward interactive, self-extending knowledge-based systems.

There are certainly additional areas of AI research which would also benefit the space station program. The goal of this paper is to point out a few such areas, in the hope of stimulating thought about these and other possible uses of AI in the space station.

ACKNOWLEDGEMENTS

My thanks go to Allen Newell and Oren Etzioni for providing useful comments on earlier drafts of this paper. This work was supported in part by NSF grant DCR-8351523.

NOTES

1. In fact, initial AI systems for troubleshooting and control have generally been restricted to dealing with typed-in observation inputs and to typing out their recommendations rather than exerting direct control over the system. However, there are exceptions to this, such as the YES/MVS system (Ennis et al., 1986), which directly monitors and controls operations of a large computer system.

2. The research recommendations listed here represent solely the opinion of the author, and should not necessarily be interpreted as recommendations from the symposium as a whole.

3. LEAP also utilizes knowledge about behaviors of individual circuit components, plus knowledge of how to symbolically simulate digital circuits.

4. Other relevant knowledge includes the goals of the user (e.g., a decision must be made to act within 15 seconds), and empirical data on the frequencies of various types of faults.

REFERENCES

1984 Artificial Intelligence, Special Volume on Qualitative Reasoning About Physical Systems. North-Holland.

Clancey, W., and Letsinger, R. 1984 NEOMYCIN: reconfiguring a rule-based expert system for application to teaching. Pp. 361-381 in Clancey and Shortliffe, eds., Readings in Medical Artificial Intelligence. Addison-Wesley.

Davis, R. 1984 Diagnostic reasoning based on structure and behavior. Artificial Intelligence 24:347-410.

Ennis, R. L., et al. 1986 A continuous real-time expert system for computer operations. IBM Journal of Research and Development 30:1.

Genesereth, M. 1981 The Use of Hierarchical Models in the Automated Diagnosis of Computer Systems. Technical report, Stanford HPP memo 81-20, Stanford University, Stanford, CA.

Mitchell, T. M., Mahadevan, S., and Steinberg, L. 1985 LEAP: a learning apprentice for VLSI design. Pp. 573-580 in Proceedings of the Ninth International Joint Conference on Artificial Intelligence. August.

National Aeronautics and Space Administration 1983 Autonomy and the Human Element in Space. Technical report, NASA-ASEE 1983 summer faculty program final report. National Aeronautics and Space Administration, Stanford, CA.

National Aeronautics and Space Administration 1985 Advancing Automation and Robotics Technology for the Space Station and for the U.S. Economy. Technical report, NASA technical memorandum 87566. National Aeronautics and Space Administration, Springfield, VA.

O'Rorke, P. 1984 Generalization for explanation-based schema acquisition. Pp. 260-263 in Proceedings of the AAAI. AAAI, Austin, TX.

Pazzani, M. J. 1986 Refining the knowledge base of a diagnostic expert system: an application of failure-driven learning. Pp. 1029-1035 in Proceedings of the Fifth National Conference on Artificial Intelligence. AAAI, August.

Segre, A. M., and DeJong, G. F. 1985 Explanation-based manipulator learning: acquisition of planning ability through observation. Pp. 555-556 in Proceedings of the IEEE Conference on Robotics and Automation. IEEE, St. Louis, MO.

Scarl, E. A., Jamieson, J. R., and Delaune, C. I. 1985 A fault-detection and isolation method applied to liquid oxygen loading for the space shuttle. Pp. 414-416 in Proceedings of the 1985 International Joint Conference on Artificial Intelligence. International Joint Conference on Artificial Intelligence, Los Angeles.

Smith, R. G., Winston, H. A., Mitchell, T. M., and Buchanan, B. G. 1985 Representation and use of explicit justifications for knowledge base refinements. Pp. 673-680 in Proceedings of the Ninth International Joint Conference on Artificial Intelligence. August.