7 Artificial Intelligence

Artificial intelligence (AI) looms large in the public's perception of the future of computer science and technology and has contributed much to the emergence of this field. In this chapter, we focus on what we consider to be particularly promising aspects of AI: sensory computing, expert systems, deeper cognitive systems, and robotics.

SENSORY COMPUTING

Understanding the workings of the human sensory apparatus and implementing comparable capabilities on machines, particularly speech and vision, is an important scientific challenge and a technological imperative. It is vital to the development of autonomous devices such as robots and for improved communication between machines and their human users.

In the area of speech understanding, it has proved to be more difficult than expected to get computers to recognize untrained human speech. At present, systems can recognize a limited number of words, they take a relatively long time to do it, and speakers must usually pause between words. Even this modest success requires the speaker to familiarize the computer with the unique qualities of his or her voice by reading aloud lists of all the words to be used. For such speaker-dependent systems, the machine, after training, can achieve word recognition of several thousand words with a success rate in
the upper 90 percent range. Speaker-independent systems that can understand continuous speech appear to be feasible but are at least 3 to 5 years away. Advances in natural language understanding and cognitive science, combined with the potential of multiprocessor systems to provide the huge processing power required, hold out a big promise but not a guarantee of expanded capabilities for speech comprehension via computer.

Machine vision represents another critical area in which, as in speech, significant progress is likely to depend on the combination of cognitive research with the evolution of massively parallel and most probably special-purpose multiprocessor systems. Machine vision is the process of deriving useful information about a scene from images, for example, the conversion of a huge list of numbers representing the light intensities of millions of minute dots, which make up an overall picture as perceived by a video camera, into a description of the pictured objects, their location, and spatial relationships. This description may be used, in turn, to control a manipulator that picks up an object or to guide a vehicle on a road.

Since as long ago as 1950, demonstrations of machine vision have included recognition of printed characters, medical image analysis (e.g., counting blood cells), some industrial vision (e.g., printed circuit board inspection), flexible assembly, and military target detection. Despite these successes, however, the capabilities of machine vision today are still largely limited to printed character recognition, medical image analysis, and some industrial inspection. This is due in part to the low computational power available and in part to the youth of current theoretical foundations and algorithms that address visual perception. Advances in machine vision are expected to have a significant impact over a large number of uses for the same reason that our own eyes are so important in everything that we do.
Autonomous systems, be they military vehicles or robots on the factory floor, will have vision with far greater capability and flexibility than today's repetitive-motion robots. Another important consequence of improved machine vision and better speech comprehension will be the evolution of more natural interfaces that span nearly all applications of computers and permit users to speak or show things to their machines as naturally as they do in interacting with people.
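The conversion described above, from a grid of light intensities to a symbolic description of the pictured objects, can be sketched in miniature. The following program is an illustrative toy, not an account of any actual vision system: the tiny image, the brightness threshold, and all names are invented, and real systems face millions of pixels, noise, and three-dimensional scenes.

```python
# Toy vision sketch: derive object descriptions from a grid of intensities.
from collections import deque

# An invented 5x8 "image" of light intensities containing three bright regions.
IMAGE = [
    [0, 0, 9, 9, 0, 0, 0, 0],
    [0, 0, 9, 9, 0, 0, 7, 7],
    [0, 0, 0, 0, 0, 0, 7, 7],
    [0, 8, 8, 8, 0, 0, 0, 0],
    [0, 8, 8, 8, 0, 0, 0, 0],
]
THRESHOLD = 5  # intensities above this are treated as "object"; an assumption

def find_objects(image, threshold):
    """Label connected bright regions and report each one's bounding box."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                # Flood-fill one connected region (4-neighbour adjacency).
                queue, cells = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [y for y, _ in cells]
                xs = [x for _, x in cells]
                objects.append({"top": min(ys), "left": min(xs),
                                "bottom": max(ys), "right": max(xs)})
    return objects

print(len(find_objects(IMAGE, THRESHOLD)))  # → 3
```

The output, a list of bounding boxes rather than raw numbers, is the kind of description that could in turn direct a manipulator or a vehicle.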
EXPERT SYSTEMS

Expert systems involve techniques for representing knowledge and methods by which that knowledge can be used by a machine to reason toward the solution of problems that are difficult enough to require significant human expertise for their solution. Every expert system consists of three principal parts: the knowledge base, the reasoning or inference methods, and their interface with the user. Knowledge bases contain factual knowledge and heuristic knowledge. The factual knowledge, like the knowledge in textbooks or journals, is widely shared and easily obtained. In contrast, the heuristic knowledge is rarely discussed and is largely in the private domain of experts. It is the knowledge of good practice, good judgment, and plausible reasoning in the field. It is the knowledge that underlies "the art of good guessing."

The inference methods used by expert systems are often based on propositional calculus or predicate logic. Most commonly used are "forward chaining" methods, which follow causal paths from conditions presented to the program to conclusions reached by the program (modus ponens applied repeatedly), and "backward chaining" methods, which proceed from goal statements to conditions (the same logic run backward). Probabilistic frameworks and some ad hoc frameworks are also used for inference.

As one would expect from a technology so broadly conceived, the span of applications is as wide as the world of professional and semiprofessional work. The earliest applications of expert systems were in such esoteric areas as the analysis of chemical data, medical diagnosis and therapy planning, the interpretation of data from oil well logging, and the defense-related interpretation of deep-ocean sound. As the applications of expert systems began to grow in the mid-1980s, other mainline commercial and industrial applications began to emerge.
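The forward-chaining method described above, modus ponens applied repeatedly until no new conclusions follow, can be sketched in a few lines. The rules and facts below are invented for illustration (a crude diagnostic flavor); real expert systems hold hundreds or thousands of such rules.

```python
# Minimal forward-chaining sketch: fire any rule whose premises all hold,
# and repeat until no new facts can be concluded (a fixed point).

# Each rule is (set of premises, conclusion); the content is invented.
RULES = [
    ({"engine cranks", "no spark"}, "ignition fault"),
    ({"ignition fault", "coil ok"}, "check distributor"),
    ({"engine cranks"}, "battery ok"),
]

def forward_chain(facts, rules):
    """Apply modus ponens repeatedly from given conditions to conclusions."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                changed = True
    return facts

derived = forward_chain({"engine cranks", "no spark", "coil ok"}, RULES)
print("check distributor" in derived)  # → True, reached via two chained rules
```

Backward chaining would run the same rules in the other direction: starting from the goal "check distributor" and recursively seeking rules whose conclusions establish its premises.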
Finally, in government, expert systems are used to assist government officials in interpreting health care management data and complex pension laws. In mid-1987, there were approximately 1,500 applications in use and several thousand under development (Feigenbaum et al., 1988). As expert systems continue to evolve, it is becoming apparent that two application areas, manufacturing (in particular, the white-collar aspects) and financial services, are beginning to dominate. In each of these, the sheer economic volume of goods and services means that even small enhancements to the average human professional's skill in decision making are leverage for great economic gain. Examples of
expert systems in manufacturing include: the design of a manufacturable configuration of subsystems, given a customer order for a minicomputer, and the design of an associated floor layout; real-time scheduling and rescheduling (due to a machine failure) of the progress of wafers-in-process in a huge microchip manufacturing facility; and planning the manufacturing process for jet fighter parts. In finance, expert systems are used to assist bank officers in deciding the creditworthiness of a loan applicant and to assist insurance underwriters in deciding price and terms for insurance contracts.

Probably more than half of today's expert systems are used for diagnostic purposes, such as assisting auto mechanics in diagnosing and repairing subsystems of automobiles and carrying out real-time remote diagnostic tests of massive steam turbine generators. Applications to diagnosis will continue to be widespread. Motivating this is the increasing complexity of devices and systems used throughout industry. Unassisted human abilities in problem solving, training, and retraining cannot keep pace with current and expected developments.

There are a number of key research issues in expert systems. (1) Knowledge representation: How shall the knowledge of a domain of human endeavor and the world in which it is situated be represented as data structures in the memory of a computer? (2) Knowledge utilization: How can this knowledge be used for problem solving? Essentially, this is the question of the design of inference (reasoning) procedures and frameworks. (3) Knowledge acquisition: How will it be possible to acquire the knowledge automatically (machine learning) or at least semiautomatically (transfer of expertise from humans, their texts, or their data)? (4) Large knowledge bases: The power of expert systems resides in the specific knowledge of the problem domain, and for systems to be powerful they must contain a large amount of high-quality knowledge.
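Research issue (1), representing a domain as data structures in the memory of a computer, can be illustrated with one classic device: the "frame," a structure of named slots in which a specific frame inherits slot values from a more general one unless it overrides them. The sketch below is a toy under invented assumptions; the frame names, slots, and medical-flavored content are hypothetical, not drawn from any actual system.

```python
# Toy frame system: frames are dicts of slots; "is_a" links a frame to a
# more general frame from which unfilled slots are inherited.

FRAMES = {
    "finding":        {"is_a": None, "urgency": "routine"},
    "tissue-finding": {"is_a": "finding", "specialty": "pathology"},
    "malignant-cell": {"is_a": "tissue-finding", "urgency": "high"},
}

def get_slot(frame_name, slot):
    """Look up a slot value, walking up the is_a chain for inherited defaults."""
    while frame_name is not None:
        frame = FRAMES[frame_name]
        if slot in frame:
            return frame[slot]
        frame_name = frame["is_a"]
    return None

print(get_slot("malignant-cell", "urgency"))    # → high (local override)
print(get_slot("malignant-cell", "specialty"))  # → pathology (inherited)
```

The same inheritance walk is what lets a large knowledge base state general facts once, at a high level, rather than repeating them in every specialized entry.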
Accordingly, an enormous knowledge infrastructure needs to be codified and represented for machine use and, as one would expect, this is and will continue to be a huge endeavor in which machines may participate, as in (3) above.

Applications of precise knowledge delivery are also of increasing importance. A knowledge delivery application is one in which the right knowledge, in the context of a problem or a service, is delivered at the right moment for a human professional to consider. For example, one commercially available knowledge delivery system advises clinical pathologists about tissue diseases and associated features. Such applications are motivated by the great complexity of human
systems and procedures that are now in place, a complexity that even the mind of a specialist cannot encompass. A knowledge delivery application is, in essence, a "living rulebook" or "textbook" that delivers knowledge in context.

As we look toward the future, the volume of expert systems is expected to grow and blend with the great stream of more conventional data processing and numeric applications. There is a certain inevitability at work here. As the cost of computers continues to fall during the coming two decades, many more of the practitioners of the world's professions will be persuaded to turn to information processing for assistance in managing the increasing complexity of their daily knowledge-related tasks. The computers that will act as intelligent assistants for these professionals will have to have reasoning capabilities and knowledge.

In time, we will undoubtedly achieve a broad reconceptualization of what is meant by an expert system. In the broader concept, the system will be conceived as a collegial relationship between an intelligent computer agent and an intelligent person (or persons). Each will perform tasks that it/he does best, and the intelligence of the system will be a result of the collaboration.
Issues and problems expected to dominate the agenda of future expert system researchers include: (1) the creation of more powerful, general, and easy-to-use programming systems that will liberate the user from knowledge engineering intermediaries; (2) new knowledge representation formalisms and techniques, adequate and effective for representing a broad body of general knowledge about the everyday world, the worlds of science and engineering knowledge, biological and medical knowledge, and so on; (3) new reasoning methods that escape the elegant but rigid bounds of propositional and predicate logic and reuse old knowledge for solving new problems; forerunners of such methods are now called reasoning-by-analogy, case-based reasoning, script-based reasoning, and chunking; and (4) new machine learning methods for acquiring knowledge based on analogies, on abstractions from internal problem-solving processes, on watching human expert problem solving, and on the automated reading of textual material from journals and textbooks.

We can envision that as society changes from industrial to postindustrial and as work becomes increasingly the work of professionals and knowledge workers, the power tools will be expert systems. The economic and social well-being of advanced societies increasingly will be the result of "working smarter" rather than "working harder," and
expert systems will be agents of that change. Knowledge is power in human affairs, and expert systems are amplifiers of human thought and action.

DEEPER COGNITIVE SYSTEMS

Another important focus in AI research involves the attempt to understand and model the deeper cognitive activities fundamental to intelligence, including learning, explaining, planning, and hypothesizing. Research in this area is an interdisciplinary enterprise, involving a synthesis of concepts from experimental psychology, linguistics, neuroscience, and computer science; advances hold the dual promise of increasing understanding of human cognitive processes and introducing more and more intelligence into the computer. Listed below are some of the promising current thrusts of this research.

1. The organization of memory. Work in the cognitive AI field has overturned the once-prevalent view that human memory could be viewed as a largely unorganized mental filing cabinet. AI researchers have developed several sophisticated and influential models of how humans organize their knowledge. Although these theories, which include semantic networks, frames, and scripts, involve different methods of representing the memory's organization, they share a common assertion that memory structure consists of a network of stored associations, with various types of information stored at each node of the network. Collectively these memory models have helped in the construction of knowledge-based systems that use contextual information to tackle specific problems.

2. Learning from practice. After a long lull caused by disappointments with early experiments on learning machines some 20 years ago, recent advances in the development of computer systems have given rise to programs that exhibit modest yet continuous learning from practice on the tasks that they perform, much as humans do.
These recent innovations have important implications for computer science in that they represent key steps toward the goal of making more intelligent machines. Multiprocessors add fuel to this promise with the substantially greater power they possess. Machines that could learn from practice, even at a modest scale, could relieve much of the burden of programming all the necessary intelligence at the outset and could help tailor generic programs to specific applications.

3. Connectionism. The last decade has witnessed progress in the development of systems and theories involving the connectionist
paradigm, which is often likened to the human nervous system. The result of an interdisciplinary effort by neuroscientists, psychologists, and computer scientists, connectionist work grows from the shared conviction that the computational architecture of human cognition is fashioned within the highly parallel dynamic architecture of the human brain. Connectionist systems involve interconnected networks of large numbers of elemental computing nodes that often simply add up the values of their inputs and check if the sum is above a preset threshold. These massively parallel systems, sometimes referred to as neural networks, operate by learning strategies that involve the modification of the elemental nodes (e.g., the thresholds) in response to what they experience. Recent advances in this area are related to new knowledge about what can be learned by such networks and improvements in VLSI circuits that make possible new complex architectures of many such interconnected cells. Progress has been constrained by the learning limitations of small experimental systems and by the absence of a theory sufficiently developed to address how large neural networks can become capable of substantial, predictable, and scalable learning.

Making systems more intelligent is a primary goal of AI research, and advances toward this objective will make computers both more useful and easier to use. The power and flexibility of today's machines are greatly inhibited by the amount of detailed knowledge that must be memorized by those who wish to use them effectively. Given advances in equipment and a deeper theoretical understanding of human cognitive processes, tomorrow's computers should have a greatly enhanced capacity to understand what unsophisticated users want them to do. More successful cognitive computer systems will enhance the usefulness and ease of use of computer systems in all areas of application.
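The elemental node described under connectionism above, one that sums weighted inputs, fires if the sum exceeds a threshold, and is modified in response to experience, can be sketched as a single threshold unit trained by the classic perceptron rule. The AND task, learning rate, and epoch count are illustrative assumptions; real connectionist systems network many such units.

```python
# One threshold unit with perceptron-style learning: after each error,
# nudge the weights and the threshold toward the correct response.

def train_threshold_unit(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]   # one weight per input line
    threshold = 0.0  # the node fires when the weighted sum exceeds this
    for _ in range(epochs):
        for (x1, x2), target in samples:
            fired = 1 if w[0] * x1 + w[1] * x2 > threshold else 0
            error = target - fired
            # Modify the elemental node in response to what it experiences.
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            threshold -= rate * error
    return w, threshold

# An invented training task: learn the logical AND of two inputs.
AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_threshold_unit(AND_SAMPLES)
print([1 if w[0] * a + w[1] * b > t else 0
       for (a, b), _ in AND_SAMPLES])  # → [0, 0, 0, 1]
```

The learning limitations noted in the text show up even here: a single unit of this kind can learn AND but can never learn exclusive-or, which is one reason theory for larger, multilayer networks matters.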
ROBOTICS

Robotics researchers strive to understand and build machines that are sufficiently intelligent to interact effectively with the physical world in the performance of designated tasks. Progress in robotics will continue to be a key to enhanced productivity in factories, with particular utility in the performance of repetitive and dangerous jobs or jobs that require sustained quality control. Moreover, extensive use of robotics and other computer technologies in design and manufacturing is expected to make possible the rapid prototyping of
products (or even factories), permitting the cost-effective manufacture of customized products at mass production costs. Autonomous systems with mobile and perceptual capabilities will also make possible the performance of otherwise-impossible tasks, such as planetary exploration over a long period of time. Listed below are some of the major promising research thrusts in robotics.

1. Sensors and perception. Sensors are the mechanisms that provide information about the robot's relation to the environment. Perception enables a machine to comprehend and adjust to its physical surroundings. Modes of sensing and perception include the visual (see Chapter 7), tactile, force, torque, speed, and even olfactory modes. Improvements in sensing and perception have the potential to augment the usefulness of virtually every type of machine by increasing its ability to adapt to the complexity and variability that characterize the physical world. An important research activity in this area is combining sensory transducers with computation for making smarter sensors. Machine vision represents the most important perceptual capability of future robotic systems.

2. Mechanisms. Progress in robotics depends on the design and manufacture of mechanisms capable of the subtle, strong, and precise motions required for useful activity. The need for precision, speed, light weight, and strength poses serious problems that cannot easily be met with conventional approaches. To date, most of the best work in this area has come from the intensive efforts of design teams relying on traditional engineering methods. We expect that extensive computer-assisted design will play an expanded role in this area through modeling and simulation of complete robotic mechanisms before construction.

3. Sensorimotor integration. To achieve smooth, flexible, efficient motions in robots, the sensor and motor controls must be integrated and coordinated.
Advances here call for research on visual, force, tactile, and torque feedback. A deeper understanding of this integrative process among robot sensors and actuators will broaden understanding of neuroscience and biomechanics as well.

4. Planning. The effectiveness of robotics depends heavily on a machine's ability to define necessary actions and specify their sequence in order to achieve a desired goal. Planning ranges from high-level task planning (e.g., to assemble a product) to low-level path planning (e.g., for obstacle avoidance). A difficult and important problem in planning involves the conversion of semantic or mission descriptions of a robot's goals to physical or machine-executable
functions. Planning must also account for the inherent uncertainty and partial knowledge that robots have of their physical environment. Some researchers believe that the best hope for progress in planning rests with the creation of more intelligent programs with a deeper knowledge of the physical world.
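The low-level path planning mentioned above, obstacle avoidance, can be sketched as a breadth-first search over a grid of free and blocked cells. This is a deliberately simplified model under invented assumptions: the map is a made-up example, and a real robot must plan under the uncertainty and partial knowledge the text emphasizes, rather than from a complete, correct map.

```python
# Toy obstacle-avoidance planner: breadth-first search on a grid map,
# where 0 is a free cell and 1 is an obstacle. BFS returns a shortest path.
from collections import deque

GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_path(grid, start, goal):
    """Return a shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the route by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable from start

route = plan_path(GRID, (0, 0), (0, 4))
print(len(route))  # 17 cells on the shortest obstacle-avoiding route
```

Higher-level task planning sits above this layer, choosing the goals; handling uncertainty would replace the fixed grid with a map the robot updates from its sensors as it moves.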