
Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium (1987)

Chapter: Synopsis of General Audience Discussion

Suggested Citation:"Synopsis of General Audience Discussion." National Research Council. 1987. Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium. Washington, DC: The National Academies Press. doi: 10.17226/792.

SYNOPSIS OF GENERAL AUDIENCE DISCUSSION

Concerns of several varieties were expressed about the knowledge engineering aspects of expert systems. Members of the audience with direct experience in developing expert systems gave these remarks special cogency. Expert systems seem to work better where good, extensive formulations of the knowledge base already exist; attempting to develop a new knowledge base as part of an expert system effort often fails. The domains of expert systems are often exceedingly narrow, limited even to the particularity of the individual case. Given the dependence of the knowledge in expert systems upon the informants, there exists a real danger of poor systems if the human experts are full of erroneous and imperfect knowledge. There is no easy way to root out such bad knowledge. On this last point it was noted that the learning apprentice systems discussed in Mitchell's paper provide some protection. The human experts give advice for the systems to construct explanations of the prior experience, and what the systems learn permanently is only what these explanations support. Thus the explanations operate as a filter on incorrect or incomplete knowledge from the human experts.

Concern was expressed about when one could put trust in expert systems and what was required to validate them. This was seen as a major issue, especially as the communication from the system tended towards a clipped "Yes sir, will do." It was pointed out that the issue was of exactly the same complexity with humans as with machines: in either case one needs to accumulate broad-band experience with the system or the human before finally building up a sense of trust. Trust and validation are related to robustness in the sense used in Newell's discussion. It was pointed out that one path is to endow such machines with reasoning for validation at the moment of decision or action, when the context is available. This at least provides the right type of guarantee, namely that the system will consider some relevant issues before it acts. To make such an approach work requires providing additional global context to the machines, so that the information is available on which to make appropriate checks.

Finally, there was a discussion to clarify the immediate-knowledge versus search diagram that Newell used to describe the nature of expert systems. One can move along an isobar, trading off less immediate-knowledge for more search (moving down and to the right) or,

vice versa, more immediate-knowledge for less search (moving up and to the left). Or one can move toward systems of increased power (moving up across the isobars) by pumping in sufficient additional knowledge and/or search in some combination. The actual shape of the equal-performance isobars depends on the task domain being covered. They can behave like hyperbolic asymptotes, where further tradeoff is always possible at the cost of more and more knowledge (say) to reduce search by less and less. But task domains can also be absolutely finite, such that systems with zero search are possible, with all correct responses simply known. For these, there comes a point at which all relevant knowledge is available, and no further addition of knowledge increases performance.
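As a toy quantitative illustration of these two cases (a sketch only; Newell's diagram is qualitative, and the product form of the isobar used here is an assumption of this example, not something stated in the discussion), one can model equal performance as a fixed product of knowledge and search, giving the hyperbolic-asymptote case, and contrast it with a finite domain in which knowledge saturates:

```python
# Toy model of the immediate-knowledge vs. search isobar diagram.
# Assumption: on an equal-performance isobar, knowledge K and search S
# trade off so that K * S = P (the hyperbolic-asymptote case).

def search_needed(performance, knowledge):
    """Search required to stay on the isobar for a given performance
    level as immediate knowledge decreases."""
    return performance / knowledge

P = 100.0
# Moving down and to the right along one isobar: less knowledge, more search.
for k in (50.0, 25.0, 10.0):
    print(f"knowledge={k:5.1f}  search={search_needed(P, k):5.1f}")

# The absolutely finite case: once all relevant knowledge is present,
# further additions of knowledge no longer increase performance.
def finite_performance(knowledge, total_relevant=20.0):
    return min(knowledge, total_relevant)

print(finite_performance(15.0))  # 15.0 -- knowledge still helps
print(finite_performance(30.0))  # 20.0 -- saturated: no further gain
```

Moving "up across the isobars" in this toy model simply means raising P, which requires more knowledge, more search, or both in some combination.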

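Earlier in the discussion, the learning apprentice systems were said to filter expert advice through explanations of prior experience, learning permanently only what those explanations support. A minimal sketch of that filtering idea follows; all names and data here are hypothetical illustrations, not drawn from any actual system in Mitchell's paper:

```python
# Hypothetical sketch of the explanation-as-filter idea: a candidate
# rule offered by a human expert is learned permanently only if an
# explanation for it can be constructed from prior experience.

def explains(rule, case):
    """A rule 'explains' a prior case if its condition held there and
    its recommended action matches what was actually done."""
    return rule["condition"](case) and rule["action"] == case["action_taken"]

def filter_expert_advice(candidate_rules, prior_cases):
    """Keep only rules supported by an explanation of at least one
    prior case; unsupported (possibly erroneous) advice is dropped."""
    learned = []
    for rule in candidate_rules:
        if any(explains(rule, case) for case in prior_cases):
            learned.append(rule)  # explanation found: learn permanently
    return learned

# Toy example: one rule the prior experience supports, one it does not.
cases = [{"pressure": "high", "action_taken": "vent"}]
rules = [
    {"condition": lambda c: c["pressure"] == "high", "action": "vent"},
    {"condition": lambda c: c["pressure"] == "high", "action": "ignore"},
]
kept = filter_expert_advice(rules, cases)
print(len(kept))  # prints 1 -- the unsupported rule is filtered out
```

The point of the mechanism is exactly the one made in the discussion: the explanations, not the experts' assertions alone, determine what enters the knowledge base.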