
Artificial Intelligence: Current Status and Future Potential (1985)


Suggested Citation:"Artificial Intelligence: Current Status and Future Potential." National Research Council. 1985. Artificial Intelligence: Current Status and Future Potential. Washington, DC: The National Academies Press. doi: 10.17226/18501.


ARTIFICIAL INTELLIGENCE: CURRENT STATUS AND FUTURE POTENTIAL

HERBERT A. SIMON
Carnegie-Mellon University

AT THE OUTSET, I need to clarify what I mean by the term artificial intelligence. In a military context, intelligence can have two quite distinct meanings: first, it can refer to information of significance to military operations and to the means for securing and analyzing it; second, it can refer to the faculty of the human mind and brain that enables us to think and learn. It is the second meaning that was intended by the inventors of the label artificial intelligence, and that will be my meaning in my talk today. I might prefer a different label. In our early work, Al Newell, Cliff Shaw, and I referred to what we were doing as "complex information processing." But since that's a rather bland phrase compared with the challenging claim that our field has something to do with intelligence, it is the AI label that has stuck.

A QUARTER CENTURY OF RESEARCH

AI has been very much in the public eye during the past two or three years. Since almost all weekly and monthly journals and many newspapers have done feature stories on it, I am sure that it has not escaped your notice. Some of you may be surprised, however, to learn that it is not at all a new thing. Research on artificial intelligence began in 1955 with a computer program called LT (Logic Theorist), which was capable of seeking and finding proofs of theorems by a process of heuristic, or selective, search. By 1958, computers were designing motors, transformers, and generators automatically in companies like Westinghouse, and a paper was published by Goodwin in that year titled "Digital computers tap out designs for large motors . . . fast." In 1960, Clarkson showed how a computer could choose stocks and bonds for an investment portfolio in much the same manner as that task was done by a bank investment officer.

DEFINING INTELLIGENCE

Why do I cite these programs, now a quarter century old, as examples of AI? To answer that, we have to agree on what intelligence is. We judge our fellow humans—and ourselves—to be intelligent when we can perform certain kinds of tasks: solving problems, making decisions, learning. Of course we also sometimes judge a person to be intelligent if we see a thoughtful frown, but that isn't always a reliable cue. It is safer to use some kind of intellectual task, perhaps of the kinds that are found on intelligence tests. If a human being did any of the tasks I mentioned earlier—proving a theorem, designing a motor, choosing a sensible trust portfolio—we would probably agree that intelligence had been exhibited. Similarly, we can say that artificial intelligence has been exhibited by a computer when it has done something that would have required intelligence in a man or woman. Artificial intelligence is the study of computers doing intelligent things—things that would require intelligence of people. The possibilities of AI have also long been of interest to military organizations. I remind you of the evolution of early warning systems for aircraft, which have gradually been automated over the past 30 years by incorporating more and more intelligence into the computer components of the systems. In the same way, more and more of the intelligence required for landing planes has been taken over by automatic systems. There are many other military applications, existing and planned, that illustrate the automation of intelligence. We are so accustomed to thinking of these as "automation" that we seldom consider the level of intelligence that would be required of a human being performing the same task. But there is no sharp line between automation and artificial intelligence: the one gradually merges into the other with the introduction of new and more powerful techniques of automation.
We tend to think of a computer application as automation if the task performed has a well-defined goal and structure and well-defined alternatives, and especially if it is performed in a carefully designed, simplified environment. Thus, automatic welding in a factory setting is automation and is not usually regarded as artificial intelligence; welding performed on pipes on an irregular sea bottom may require a device that is much more flexible in its responses, hence one that genuinely incorporates techniques of artificial intelligence. Automation deals with the domain of well-structured problems, AI with the domain of ill-structured problems, and there is no sharp boundary between the two domains.

IMPEDIMENTS TO GROWTH

If AI is already 30 years old, why are we just beginning to learn about it now? Why can we point to only a few real-world applications, and these mostly quite recent? Until the past few years, the growth of the field has certainly not been precipitous. What have been the limits on its development? I think there is no single explanation for the slow growth of applied AI, but several contributing factors. First, it has been paced by limits on the processing speed and memory size of computers. The expert systems that we are beginning to hear of today are large programs, with large associated data bases. Until megabyte memories were common and cheap, it was not realistic to think of large programs of this kind. Computer speed and size, and computer architecture, did not much impede the conceptual growth of the field—that has been limited mainly by the imaginations of the researchers. Computer hardware probably did limit the rate at which basic research ideas could be translated into practical application. The principle of investment in securities by computers could be demonstrated by Clarkson in 1960; however, there was no possibility, at that time, of building a system suitable for practical everyday use. Today we hear a great deal about prospective new computer architectures, so-called supercomputers and parallel computers. While it is always nice to have larger and faster machines, I do not think these developments are critical for the conceptual development of AI, except possibly in the domain of visual and auditory pattern recognition. Nor do we have to wait for new programming languages to advance AI. The principal programming tools have been available for a long time: list processing languages from the beginnings of AI, and so-called production systems for the past decade or more. One other impediment to the rapid growth of AI should be mentioned: skepticism.
People, including some members of the computer science community, have often been extremely skeptical that computers could exhibit anything that could reasonably be called intelligence or could compete with humans in the quality of their performance of professional-level tasks. Only concrete demonstration with running and debugged computer programs has been able, step by step, to gain ground against that skepticism. The boundary is redrawn each year as computers demonstrate their competence in new domains, but the skeptics simply retreat to the territory that has not yet been explored. Skepticism has impeded progress in a number of ways. It has made it more difficult to fund research in AI. It has made it difficult, until

quite recently, to attract large numbers of the ablest graduate students to the field. And it has diverted the energy of AI researchers themselves to debating with the skeptics about the possibility of AI, instead of simply writing programs that exhibit it. But I am getting ahead of my story. I should say something about why the skepticism is unwarranted—about how artificial intelligence is possible.

HOW IS AI POSSIBLE?

Artificial intelligence research rests on a hypothesis, sometimes called the physical symbol system hypothesis, which states: The necessary and sufficient condition for any system, biological or mechanical, to be capable of thought and intelligence is that the system be a physical symbol system: that is, that it be able to input (read) symbols, output (write) symbols, create structures of symbols related in various ways, store symbols and symbol structures in a memory, compare symbol structures for identity or difference, and branch (adapt its behavior) on the basis of the outcomes of such comparisons.

By a symbol is meant any pattern, whether built of ink, of chalk, of electricity and magnetism, or of neurons, that can be compared with other patterns. The electromagnetic fields stored in the memory of a computer are patterns, hence symbols. They may be used to represent numbers, words, or even pictures and diagrams. So the physical symbol system hypothesis claims that any system that can read, write, relate, store, and compare patterns and branch can be programmed to behave intelligently. The hypothesis has two immediate corollaries. If it is correct, then it follows that:

1. Computers can be programmed to think (since they obviously possess the symbol-processing capabilities listed above).

2. Humans use symbolic processes, implemented by the nervous system, to think.

TESTING AND APPLYING THE HYPOTHESIS

There is no need either to accept or to reject the physical symbol system hypothesis on faith. It is an empirical hypothesis to be accepted or rejected, like any other scientific theory, on the basis of factual evidence. Much of the research activity in AI, and in the allied domain called cognitive science, is aimed at performing such tests. To test the hypothesis that computers can be programmed to think, we construct computer programs to handle a variety of tasks of the kinds that require thinking in humans. To test the hypothesis that a thinking human being is a physical symbol system, we try to write computer programs that think in as humanoid a fashion as possible (taking the same steps, making the same mistakes as people) and then compare the behavior of these programs, in detail, with the behavior of people performing the same tasks.

From this description, you can see that research in AI has two distinct goals, which are complementary but directed at quite different applications. First is the goal of supplementing human brain power with intelligent machine power, of augmenting the amount of intelligence available for dealing with human problems and decisions. The first stage of the Industrial Revolution was concerned with supplementing human muscle with the power of machines; this second stage of the Industrial Revolution is concerned with supplementing human brains with the intelligence of machines. Artificial intelligence thereby becomes a major direction in which we seek to increase productivity in our society, so that we will have the resources to meet the human, social, and security needs of that society. The second goal of AI, or of the related field called cognitive science, is to understand and improve human thinking and learning processes, using the computer as a central tool to build models of those processes. This goal is a part of the task of experimental and theoretical psychology. As understanding is gained of human thinking and learning, it can be applied to improving our decision-making processes in organizations and our teaching and learning processes in the schools.
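The symbol-processing capabilities enumerated in the hypothesis (reading, writing, storing, and comparing symbols, and branching on the outcome of a comparison) can be made concrete with a toy sketch. The class and names below are invented for illustration only; they are not drawn from any actual program of the period.

```python
# A toy "physical symbol system" (illustrative only; the names are
# invented): it can read, write, store, compare symbols, and branch
# on the outcome of a comparison.

class ToySymbolSystem:
    def __init__(self):
        self.memory = {}             # symbol structures stored by name

    def write(self, name, pattern):
        self.memory[name] = pattern  # store a symbol structure

    def read(self, name):
        return self.memory[name]

    def compare(self, a, b):
        # compare two stored structures for identity or difference
        return self.read(a) == self.read(b)

    def branch(self, a, b, if_same, if_different):
        # adapt behavior on the basis of the comparison's outcome
        return if_same() if self.compare(a, b) else if_different()

s = ToySymbolSystem()
s.write("goal", ("rook", "open-file"))
s.write("seen", ("rook", "open-file"))
result = s.branch("goal", "seen",
                  if_same=lambda: "act",
                  if_different=lambda: "keep-searching")
print(result)  # -> act
```

Nothing in the sketch depends on the patterns being electrical rather than, say, neural, which is the point of the hypothesis: only the operations matter, not the medium.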
RESEARCH ON CHESS

Turning from a general characterization of the field, I should like now to illustrate my thesis with examples, the first being research on the game of chess. Chess may appear to be a trivial domain for serious research, but we should recall that some of the most important advances in genetics in this century were made using fruit flies as experimental organisms. Fruit flies are of only slight economic importance, and even less esthetic appeal. But they offered certain advantages (rapid breeding, simple care, large chromosomes) for experimentation, and a large body of knowledge accumulated about this domain.

Chess has also proved a valuable experimental domain, this time for the accumulation of knowledge about intelligence. The game has little regularity—few mathematical theorems can be proved about it. It is complex enough to engage bright people for a lifetime. It even has characteristics that are relevant to military competition. So chess has become the fruit fly of artificial intelligence. The first thing that was learned about the game is that it is impossible to find good moves by solving an optimization problem. The space is simply too large for computers present or prospective to explore systematically: some 10^120 possible games. It is a good environment in which to learn how people deal with complexity that forbids exhaustive search or exact algorithms. A study of human chess experts—grand masters—began to reveal the bases of their skill. The grand master, it turns out, is able to recognize any one of a vast number of features when that feature appears as a pattern on a chessboard during a game. A grand master's repertoire is at least 50,000 such patterns, and with each pattern is stored in memory information about what to do when the pattern is seen—for example, "If you see an open file, put a rook on it." Now it is not true that every time a grand master sees an open file he moves a rook to it. But it is true that he notices the features and thinks of moving the rook. If he did not think of it, in a matter of seconds, he would not be a grand master. A large part of the grand master's skill, then, is the skill of recognition, those 50,000 patterns or "chunks" as we call them in psychology. It is this recognition ability that allows him to play 50 games simultaneously with weaker opponents, at 10 seconds a move, waiting until the opponent makes an error that is recognizable as one of these features and exploiting the error in a standard fashion. The same knowledge exhibits itself in other ways.
Display a chess position to a grand master from some well-played game that is unknown to him. After 10 seconds remove it and ask him to reconstruct the position. He will replace correctly 23 or more of the 25 pieces on the board. A weaker player, even a good amateur, faced with the same task will replace only 6 or 7 correctly. Must you have some kind of special visual imagery to be a chess master? Put the same 25 pieces on the board but arrange them at random. Again, the amateur will succeed in replacing about 6—and so will the grand master! What is involved is not visual imagery but the 50,000 chunks. The game position is made up of familiar configurations, the random position is not. The same prowess can be demonstrated in expert bridge players, poker players, or devotees of any activity who have put in 10 years learning their trade.
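The recognition-based account of chess skill described above lends itself to a condition-action formulation: each chunk pairs a recognizable feature with stored advice. The following is a hypothetical illustration; the feature names and advice are invented, and a real grand master's repertoire would run to tens of thousands of such chunks.

```python
# A few invented "chunks": recognizable board features paired with
# the advice stored alongside each one.

CHUNKS = {
    "open-file":     "consider placing a rook on it",
    "doubled-pawns": "consider attacking the weak pawn",
    "exposed-king":  "consider opening lines toward the king",
}

def noticed_actions(features_on_board):
    """Recognition, not search: return the advice attached to every
    familiar pattern present in the position; unfamiliar features
    simply go unnoticed."""
    return [CHUNKS[f] for f in features_on_board if f in CHUNKS]

advice = noticed_actions(["open-file", "fianchetto", "doubled-pawns"])
print(advice)
# -> ['consider placing a rook on it', 'consider attacking the weak pawn']
```

Note that the lookup is immediate, which is why recognition scales to 50 simultaneous games at 10 seconds a move in a way that deep search could not.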

EXPERT SYSTEMS

That brings us to the topic of expert systems, the principal form that applied AI takes these days. An expert system is any system that can perform professional tasks in some domain at an expert level. The program may achieve its expert performance by borrowing some of the tricks that human professionals use in performing the same task, or it may rely largely on the brute force speed and memory size of modern computers. Most existing expert systems combine these two sources of power: they use heuristics, or rules of thumb, borrowed from people, in order to search selectively and intelligently, rather than depending on massive search by pure trial and error. However, they are usually prepared to carry out more extensive searches than humans can, to compensate for possibly incomplete knowledge and heuristics. For example, the very strongest current chess playing programs can perform at master level. They may examine as many as a million, or even several million, branches of the game tree before they make a move. Of course a search of 10^6 possibilities is still very selective when compared with the total search space of 10^120 possibilities. Nevertheless, human masters and grand masters do as well or better, although they seldom examine more than about 100 branches of the game tree. The chess programs are still trading off computer speed against their limits of chess knowledge. In the same way, medical diagnosis systems, ore prospecting systems, and other expert systems that have been developed search selectively, but not as selectively as human professionals, and compensate for their lack of selectivity with their speed. Described in general terms, most expert systems have a common architecture, consisting of a large data base, an "indexing" system for accessing the data base on the basis of the cues presented in the problem situation, and capabilities for making inferences from the knowledge they draw from the data base.
A medical diagnosis system like CADUCEUS or MYCIN, for example, contains a large store of medical information, consisting of disease entities (possible diagnoses) associated with symptoms (cues). Given an initial set of symptoms, the system recognizes one or more disease entities that are indexed by these symptoms. On the basis of its knowledge, it then requests additional information and tests to allow it to discriminate among the alternative diagnoses. It may also draw inferences from symptoms in one part of the body to originating causes in other parts of the body. As evidence accumulates, it is able gradually to assign a much larger weight to the probability of one diagnosis than to the others and to arrive at a final decision.
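The common architecture just described (a data base, an indexing scheme keyed to the presented cues, and a discrimination step) can be sketched in miniature. This is an invented toy, not the actual CADUCEUS or MYCIN code; the disease and symptom names are placeholders, and real systems weigh probabilistic evidence rather than exact matches.

```python
# A toy diagnostic "data base": disease entities indexed by the
# symptoms (cues) associated with each. All names are placeholders.

KNOWLEDGE = {
    "disease-A": {"fever", "cough", "rash"},
    "disease-B": {"fever", "cough", "fatigue"},
    "disease-C": {"headache", "rash"},
}

def candidates(symptoms):
    """Index into the data base: every entity consistent with all
    the cues presented so far."""
    return {d for d, cues in KNOWLEDGE.items() if symptoms <= cues}

def discriminating_test(symptoms):
    """Request additional information: find a symptom that would
    split the remaining candidate diagnoses."""
    cands = candidates(symptoms)
    for d in cands:
        for extra in KNOWLEDGE[d] - symptoms:
            if any(extra not in KNOWLEDGE[o] for o in cands if o != d):
                return extra
    return None                    # nothing left to discriminate on

seen = {"fever", "cough"}
print(candidates(seen))            # disease-A and disease-B both remain
print(discriminating_test(seen))   # asking about rash or fatigue would decide
```

The loop of indexing, requesting a discriminating test, and re-indexing is the accumulation of evidence the text describes, stripped to its skeleton.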

An expert program in chemistry for synthesizing organic reactions would be organized in a similar way, but it might make greater use of means-ends analysis in solving its problems. Given a set of initial reagents, their costs and their chemical reactivities, the system is asked to design a set of reactions that will produce a desired substance. Starting with the desired molecule, the system reasons backward to simpler substances from which it could be synthesized, and from these to substances available among the reagents. In choosing a desirable reaction path, it can take into account not only the characteristics of the chemical reactions, but also the costs of ingredients and energy.

I cannot give you a list of the expert systems that have been built and used up to the present time, both because the list is quite long and because new systems are being created almost daily. Neither can I claim a comprehensive knowledge of them or of the extent to which they have actually been applied to real-world situations. In addition to the examples I have already mentioned, there are systems for interpreting the data in mass spectrograms, systems for configuring complex computer systems to meet the requirements of particular customers, systems for scheduling large job shops, systems for interpreting oil well drilling logs, and others. And as I mentioned earlier, the boundary is very vague between the expert systems that we call artificial intelligence systems and other kinds of highly automatic control systems.

THE MOVING BOUNDARIES OF AI

The research strategy of artificial intelligence has followed the usual path of starting with the simplest kinds of situations—highly structured "toy" problems—and, when these have been mastered, moving on to situations that are more complex and less well structured. The field advances gradually from the well-structured to the ill-structured, and from data-poor to data-rich domains. That kind of progress is continuing, and it is useful to ask what lies ahead.

THE SPECIAL PROBLEMS OF ROBOTICS

Robotics is that special branch of artificial intelligence that is concerned with systems that perform physical actions on their environments. What is the difference between robotics and the forms of factory automation with which we have been familiar for many years? In factory automation, the focus is placed on simplifying tasks sufficiently and standardizing the environments in which they are performed, so that they can be done by machines without human intervention. A typical example might be the machines for boring cylinders in automobile engine blocks. The underlying analytic techniques for designing and building such systems have been drawn mainly from control theory.

Current robotics research aims at relaxing the limits on automation in two related directions: first, by providing the machines with sophisticated sensory organs—"eyes" and "ears"—so that they can locate and manipulate the materials they are dealing with; and second, by giving them flexible manipulative faculties—"hands" and "legs"—so that they can perform complex actions in continually varying circumstances. The human eye and ear and limbs evolved along with the older parts of the brain that we share with our mammalian cousins. Nature has had 400 million years to shape and perfect mammalian (including human) sensory and motor systems. They are very sophisticated and complex systems, capable (e.g., in the retina of the eye) of rapid parallel computation, and highly attuned to their tasks. By contrast, the new parts of the brain, which we are especially proud of because they distinguish us from other species of animals, have been evolving for only a couple of million years. The parts of the brain that we use for abstract, professional thinking are still rather simple and crude. The new brain is a slow, serial, one-thing-at-a-time system. There has not yet been time for fine tuning. It should not have been a surprise, therefore (although in fact it was), that progress in artificial intelligence research was easier and more rapid in imitating the functions of the new brain than it was in imitating sensory and motor functions.
The bottleneck in robotics research today, where progress is being made but relatively slowly, is in the design of sensory and motor organs, not in the design of systems capable of reasoning about sensory information and turning it into plans of action. It is easier to automate professors and naval planners than it is to automate bulldozer drivers. The reason, of course, why we want our new robots to have sophisticated sensory and motor capabilities is that we want them to be capable of operating in natural environments that have not been presimplified to facilitate automation. The problems are different both from those of traditional factory and office automation, in which the complexity of the environments could be controlled and reduced, and from planning tasks, in which the environment can be represented in a smoothed and simplified model. A robot vehicle, operating in a real environment (an unmanned and untethered submersible, for example), must be able to cope with the unexpected and the variable. It must be able to readjust its picture of the external world continually as its sensors provide it with new information.

These are some of the reasons why robotics is perhaps the most challenging—and sometimes frustrating—branch of artificial intelligence today, and why we should expect steady progress but not sudden miracles of accomplishment.

SYSTEMS FOR SCIENTIFIC DISCOVERY

To take a still deeper look into the future, I would like now to describe work that is still in the basic research stage and far from practical application. I refer to computer systems that, by examining empirical data, can discover new scientific laws. It is sometimes thought to be impossible for a computer to discover anything new, to do anything creative. After all, a computer can do only what it is programmed to do. But that perfectly true statement does not mean that a computer always does what its programmer thought it was going to do. Nor does it mean that a computer cannot be programmed to do things that its programmer would be unable to do. Nor does it mean that it cannot be programmed to do very general things: specifically, to explore a body of data looking for regularity (just as it might explore a chess position looking for an unsuspected pattern of moves). An example of a discovery system is BACON, a program developed by Patrick Langley, Gary Bradshaw, Jan Zytkow, and myself. BACON's expertise derives from a small set of rules of thumb that guide its searches through data. It looks for correlations among variables, it looks for invariant relations, it invents and introduces new concepts that may simplify the relations it has found, it looks for symmetries among variables and possible laws of conservation, and it is alert for integral ratios among variables, that is, for sets of values that are all integral multiples of some single number.
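The flavor of this kind of heuristic search for invariants can be suggested with a small sketch. This is not the BACON program itself, only an invented illustration of one of its rules of thumb: try small integral powers of two variables and look for a combination whose value is constant across the data. The planetary figures below are standard values (distance in astronomical units, period in years).

```python
# Invented sketch of a BACON-style heuristic: search small integer
# exponents (m, n) for an invariant relation d**m / T**n = constant.

from itertools import product

# (distance from the sun in AU, orbital period in years)
planets = [(0.387, 0.241), (0.723, 0.615), (1.000, 1.000), (1.524, 1.881)]

def find_invariant(data, max_power=3, tol=0.02):
    """Return the first (m, n) for which d**m / T**n is the same
    constant, within tolerance, for every data point."""
    for m, n in product(range(1, max_power + 1), repeat=2):
        ratios = [d**m / t**n for d, t in data]
        mean = sum(ratios) / len(ratios)
        if all(abs(r - mean) / mean < tol for r in ratios):
            return m, n
    return None

m, n = find_invariant(planets)
print(f"d^{m} / T^{n} is invariant")   # Kepler's Third Law: d^3 / T^2
```

For these data the search lands on d^3 / T^2, which is exactly the "periods vary as the 3/2 powers of the distances" relation discussed next; the program, of course, has no idea that the regularity has a famous name.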
When given the distances of the planets from the sun and their periods of revolution, BACON discovers (in less than a minute) that the periods vary as the 3/2 powers of the distances. That is Kepler's Third Law, an important discovery of the seventeenth century. When given data on the accelerations of blocks of different masses connected by a spring, BACON discovers the law of conservation of momentum and, in the process of doing so, invents and applies the concept of inertial mass. Up to the present time, we cannot claim that BACON has discovered anything truly new, because we have exercised it on historical data in order to calibrate it against major scientific discoveries. Of course, BACON has no reason to know that it is merely rediscovering laws that were already in the textbooks—and, because it has no way of knowing that, its rediscoveries are no less difficult and innovative than the original discoveries.

BACON is not the only discovery system that has been built. There are also the AM and EURISKO systems constructed by Douglas Lenat, which have labored mainly in the domain of mathematical discovery, and the META-DENDRAL system that discovered certain chemical laws relating to mass spectrogram analysis. Only the future will tell what potential these systems have for practical application. But they have already demonstrated conclusively that computers can be programmed to explore and invent.

REPRESENTATION

In human problem solving, success often depends on representing the problem correctly. There are many problems that are hard to reason about in words, but whose solution becomes easy to find when the proper diagram is drawn. An important task in basic AI research today is to discover why this is so—how diagrams are used in the process of solving problems. For example, I ask you to imagine a rectangle that is twice as wide as it is high. Now draw a diagonal from the NW to the SE corner, and a vertical line from the middle of the top to the middle of the bottom of the rectangle. Do the two lines intersect? Of course. How do you know? You can "see" it in your mind's eye. Now proving formally that those two lines intersect is a formidable mathematical task, requiring either analytic geometry or topology. Do you think you could construct such a proof? Yet that is, in effect, what your mind's eye did in just an instant. It is a powerful inference engine. A large fraction of the resources of our society is spent in educating ourselves (you must count the students' time as well as the teachers').
Insight into mental processes—those used, for example, in visual imagery—can lead us to greatly improved techniques for teaching and learning, because in AI there are two complementary routes to enhancing human intelligence. One is to build intelligent machines that can augment it. The other is to improve our skills in using our own minds for learning, thinking, solving problems, and making decisions. The latter route, the enhancement of human performance, may in fact be the more important one.
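The rectangle thought experiment above can also be checked with coordinates, which makes the contrast vivid: what the mind's eye sees in an instant reduces, analytically, to a short calculation. A minimal sketch:

```python
# The rectangle from the thought experiment: twice as wide as it is
# high, with the NW-SE diagonal and the vertical midline.

W, H = 2.0, 1.0

def diagonal_y(x):
    # the diagonal runs from the NW corner (0, H) to the SE corner (W, 0)
    return H - (H / W) * x

x_mid = W / 2                 # the vertical line x = W/2
y_cross = diagonal_y(x_mid)   # height at which the diagonal crosses it

# The segments intersect iff that height lies on the vertical segment,
# i.e. between 0 and H.
assert 0 <= y_cross <= H
print(f"intersection at ({x_mid}, {y_cross})")   # -> intersection at (1.0, 0.5)
```

The calculation is trivial once the coordinates are chosen; choosing the representation is the hard part, which is precisely the point of this section.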

RESEARCH ON LEARNING

This brings me to my final example of ongoing research in AI: research on learning. I have several times mentioned production systems, which are simply computer programs that use a particular kind of programming language. Production systems are attractive for AI applications because they are easily modified simply by deleting instructions (productions) or inserting new ones. This simplicity of structure suggests that they might be programmed to modify themselves in adaptive ways—to operate as adaptive production systems. This has now been accomplished in a number of laboratory applications.

The adaptive production system is exposed to a worked-out example of a problem solution—say the three or four steps that are involved in solving a simple linear equation in one variable. The system examines the example, comparing successive steps to see what changes have been made and what has motivated these changes. The reasons for the changes are detected by determining what progress the step contributed toward the final result. The program then constructs new instructions that, given the same cues, will take the same actions. These new instructions, when assembled and inserted (automatically) in the program, constitute an effective program for solving equations in algebra.

On the basis of experience in constructing and testing adaptive production systems of this kind, we can design learning experiences for human students that will allow them to acquire new skills (to build new productions in their minds). About a year ago, this idea was used to test whether secondary school students could learn to factor quadratics, simply by being exposed to a carefully planned sequence of worked-out examples. The experiment, which was carried out in a number of schools in the People's Republic of China, was highly successful. Almost all the students learned to factor quadratics in less than 20 minutes.
Similar experiments are now being carried out in the United States, and of course we will need much more experience with these ideas before we know how to use them effectively to improve the teaching of mathematics. Teaching and learning, as we do them today, are highly pragmatic activities, shaped by practical experience but with little underlying theory. Artificial intelligence has now reached the point at which it can tackle tasks at the level of complexity of school and university subjects. Out of research on teaching and learning, using the new AI technology, we may see a revolution in the educational process quite comparable to the revolution in medical services that followed the deepening of our knowledge of the biological bases of disease.

CONCLUSION

I should like to conclude by sharing with you my own vision of the future of artificial intelligence and cognitive science. I see no limits to the range of human thinking capabilities that can be embraced by these fields. I believe that, in time, the boundaries of AI will encompass all of human cognition, and I have tried to present here some of the evidence that leads me to this conclusion. The boundaries are going to be enlarged gradually, not suddenly. The pace of application will be even more gradual, for it will call for enormous amounts of capital investment in hardware and especially software. Expert systems, and other schemes for thinking and planning, will probably move faster than robotics. Building systems that can sense and act flexibly in complex natural environments is still the harder task. We should not suppose that the spread of automation implies that human resources must be unemployed. Our society has many needs that we do not feel are being met adequately. We are now engaged in a great national debate to determine how scarce resources are to be allocated among national security, health, consumer goods, investment, the environment, energy needs, all of these in the face of a growing national debt. If we have unemployment, it surely is not because we are too productive. And the correct measures to combat unemployment are surely not measures that will check the advance of our productivity. I do not mean that a society undergoing change does not create hardship for many of its members. We must do better than we have in the past to permit our society as a whole to absorb the transient costs of beneficial social change. But we must do this without making change and progress themselves impossible or difficult. I have proposed that there are two potential kinds of payoff from artificial intelligence and cognitive science.
In my personal judgment, the bigger of these lies in gaining a deeper knowledge of ourselves and in improving our own processes of thinking and learning. There are many problems in the world today that we often think of as technological in nature: overpopulation, the bomb, stress on the environment, scarcity of resources. At the most fundamental level, of course, these problems are not technological. They are the problems of ourselves. They will only be solved as we learn to think better, to plan better, to cooperate better. AI and cognitive science, pursued vigorously, can make an important contribution to improving the use we make of our own minds in addressing the world's critical problems.
