Designing Socially Intelligent Robots
Media Arts and Sciences
Massachusetts Institute of Technology
The emerging market for personal-service robots raises questions about the design of robots that can play a role in the daily lives of ordinary people. Beyond performing useful tasks, average consumers want personal robots to be natural, intuitive entities they can interact and communicate with, work with as partners, and teach new skills, knowledge, and tasks (see Fong et al., 2003, for a review). Human-robot interaction (HRI) and social robotics are emerging areas of inquiry in the field of autonomous robotics. In this paper, I argue that social and emotional intelligence will be fundamental to the design of personal-service robots (Breazeal, 2002). After all, personal robots should not only be useful to their human users; humans should also genuinely enjoy having their robots around.
The idea of creating lifelike robots has amused and fascinated people for thousands of years. Throughout history, humans have attempted to mimic the appearance, functionality, and longevity, as well as the cognitive and adaptive processes, of biological creatures. The idea of lifelike machines appears in Homer’s Iliad when Hephaistos, the god of metalsmiths, fashions mechanical helpers—strong, vocal, intelligent maidens of gold. The idea surfaces again in medieval times in the Jewish legend of the Golem, a robot-like servant made of clay brought to life by Rabbi Loew to save the Jews of Prague.
As technology advanced, people actually began to build such machines. The first technological breakthrough occurred in the 15th century with mechanical clocks. One hundred years later, clock makers began to build mechanical animals. There is even some evidence that as early as 1478, the young Leonardo da Vinci conceptualized a humanoid automaton controllable by a very crude but programmable analog computer composed of cogs and pulleys (Rosheim, 2000). Nearly 40 years later, in 1515, Leonardo built his famous self-propelled mechanical lion, commissioned by the Medici, which reportedly walked from its place in the room, opened its breast full of lilies, and presented them as a token of friendship from the Medici to Francis I, King of France. In response to the 18th-century craze for animated objects, Jacques de Vaucanson created his famous mechanical duck in 1738, which could flap its wings, eat, and digest grain (how it did so remains a mystery) (Doyon and Liaigre, 1966). In the 1830s or 1840s, Joseph Faber invented a mechanical talking head, called Euphonia, which reputedly could be made to speak in several European languages (Lindsay, 1997). These are just a few of many examples of historical mechanical automata; a more complete account can be found in Rosheim (1994).
The year 1946 marks the invention of the ENIAC computer, the first large-scale, general-purpose, electronic digital computer (McCartney, 1999). Just a few years later, in 1950, the famous British mathematician Alan Turing wrote a provocative paper called “Computing Machinery and Intelligence,” in which he discussed the possibility of building machines that can think and learn. Turing outlines a test (the “imitation game,” later known as the Turing Test) to determine if a machine can think (Turing, 1950). That same year, Grey Walter published his work on two robotic tortoises built of analog circuitry that could navigate toward a light source and interact with one another in simple ways (Walter, 1950). In the science-fiction arena, Isaac Asimov published his famous three laws of robotics (Asimov, 1942). Walt Disney, ever the visionary, applied robotic technology to entertainment with some of the earliest physically animated performers, such as the famous Abraham Lincoln audio-animatronic that debuted at the 1964 New York World’s Fair.
Today robotic technology is used in entertainment for many purposes. We are familiar with animatronics in theme parks and the use of sophisticated robotic puppets for special effects in films. With recent advances in low-cost electronics, robots can now interact with people in an entertaining, engaging, or anthropomorphic way. In fact, interacting with people has become an important aspect of a robot’s functionality. For instance, a new generation of robotic toys has emerged—many of them inexpensive, but some are more expensive and rather sophisticated, such as Sony’s robotic dog, Aibo. Robotic kits for
“edutainment,” such as Lego’s Mindstorms, allow kids and adults alike to create their own robotic inventions.
Location-based entertainment robotics, such as robotic museum tour guides, not only entertain visitors, but also provide them with information (Nourbakhsh et al., 1999). Health-related applications are being explored, such as robotic pet-therapy surrogates intended to provide the same health benefits as their living counterparts. Even robots for scientific purposes are beginning to have more socially interactive qualities. For instance, NASA Johnson Space Center’s humanoid robot, Robonaut, is ultimately envisioned to be a completely autonomous astronaut’s assistant that can work as a productive and cooperative member of a human-robot team (Bluethmann et al., 2003).
What about the science-fiction dream of having your very own Star Wars R2-D2 or C-3PO—an appealing robotic sidekick that helps you in your daily life? We are starting to see precursors of such futuristic visions in university and corporate research labs around the world, such as Honda’s humanoid robot, ASIMO. Toyota recently announced the Partner Robot Project, which has a stated goal of developing humanoid robots that function as personal assistants for humans. These robots shall “have human characteristics, such as being agile, warm and kind and also intelligent enough to skillfully operate a variety of devices in the area of personal assistance, care for the elderly, manufacturing and mobility.”
Robotic Trends magazine defines personal-service robots as “robots or robotic technology purchased by individual consumers that educate, entertain, assist, or protect in the home.” One of the strongest motivations for the development of personal robots is to provide domestic assistance and care for the elderly. The global demographic trend of rapidly aging societies, in which a smaller working-age population is responsible for supporting a larger retired population, has created an urgent need for robots that can be capable assistants for people in their homes and can supplement the workforce.
The International Monetary Fund predicts that Japan, in particular, will experience a dramatic change in the ratio of working-age people to retired-age people—from 4:1 today to 2:1 by 2025. In addition, the convergence of many technological developments in mobile computing, such as advances in microprocessor technology, wireless technology, image processing, speech recognition, motor-sensor technology, and embedded systems development tools, have made the development of personal robots increasingly feasible.
Although the service-robot market is still immature, the few quantitative studies that have been done indicate that the market for personal-service robots is on the verge of dramatic growth. Recent research by the Japan Robotics Association (JRA), the United Nations Economic Commission for Europe (UNECE), and the International Federation of Robotics (IFR) indicates that the service-robot market will experience exceptional growth, both in the near term (from $600 million in 2002 to
approximately $6 billion in 2009) and for the next few decades (reaching an estimated $60 billion by 2025) (UNECE and IFR, 2002). Of course, one must always take extrapolations from existing studies about the future of immature markets with a large grain of salt. Nevertheless, if these predictions are correct, personal robots will be a ubiquitous technology.
THE PSYCHOLOGY OF ROBOT DESIGN
The success of personal-service robots depends not only on their utility, but also on their ability to be responsive to and interact with ordinary people in a natural and intuitive way. Furthermore, because they may coexist with people on a daily basis, their long-term appeal will certainly affect our willingness to accept them into our lives. For instance, longitudinal studies on the adoption and impact of assistive technologies for the elderly have shown that functionality and need are only part of the design equation. Social and emotional factors also greatly affect an individual’s willingness to adopt the technology. Technologies that are stigmatizing (i.e., that make users feel feeble or vulnerable, or feel that they appear that way to others) are often rejected. Even worse, if stigmatizing technologies are adopted, they can contribute to self-imposed isolation or depression (Forlizzi et al., 2004). Thus, designing personal robots that support humans socially and emotionally will be just as important as designing them for their cognitive abilities.
According to The Design of Everyday Things, in order for people to interact with another entity, they must have a good conceptual model of how that entity operates, whether it is a device, a robot, or even another person (Norman, 1990). If they have such a model, people can explain and predict what an entity may do, understand the reasons for doing it, and know how to elicit desired behavior. The design of a technological artifact, whether it is a robot, a computer, or a teapot, can help a person form this model by “projecting an image of its operation,” either through visual cues or continual feedback. By adhering to natural signals and mappings (e.g., physical metaphors or social norms), the artifact becomes intuitively understandable to people.
Numerous HCI studies suggest that people apply a social model when observing and interacting with autonomous robots (Kiesler and Goetz, 2002). Studies by Reeves and Nass (1996) have shown that people treat even desktop computers as social entities and adhere to social norms in their interactions with them. In fact, studies demonstrate that it takes surprisingly few cues to elicit social behavior—a text interface alone is sufficient. Autonomous robots, of course, are quite different from desktop computers in their projected animacy. Like the behavior of living things, the behavior of autonomous robots is a product of their internal state, as well as physical laws. They perceive the world, make decisions, and perform coordinated actions to carry out tasks. If this self-directed, creature-like behavior can be augmented by an ability to communicate
with, cooperate with, and learn from people, then people will be encouraged to anthropomorphize them. This holds true even for simple vehicles, such as those described by Braitenberg (1984).
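Braitenberg’s point can be made concrete in a few lines of simulation. The sketch below is a minimal, invented model (the sensor geometry, gains, and light-intensity law are illustrative choices, not drawn from any cited work): a two-wheeled vehicle whose wheels are each driven by the opposite-side light sensor. The crossed excitatory wiring alone makes the vehicle turn toward and pursue a light source, behavior an observer readily describes as the vehicle “liking” the light.

```python
import math

def sense(sensor_pos, light_pos):
    """Light intensity falling off with squared distance (illustrative model)."""
    d2 = (sensor_pos[0] - light_pos[0]) ** 2 + (sensor_pos[1] - light_pos[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1, wheel_base=0.2):
    """One update of a Braitenberg-style vehicle with crossed excitatory
    wiring: each wheel is driven by the opposite-side light sensor, so the
    vehicle steers toward, and closes in on, the light."""
    offset = 0.5  # sensors sit left/right of the centerline
    left = (x + offset * math.cos(heading + math.pi / 2),
            y + offset * math.sin(heading + math.pi / 2))
    right = (x + offset * math.cos(heading - math.pi / 2),
             y + offset * math.sin(heading - math.pi / 2))
    right_speed = 5.0 * sense(left, light)   # left sensor drives right wheel
    left_speed = 5.0 * sense(right, light)   # right sensor drives left wheel
    heading += (right_speed - left_speed) / wheel_base * dt  # differential drive
    speed = (left_speed + right_speed) / 2.0
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# Starting away from the light, the vehicle steadily closes the gap.
x, y, heading = 0.0, 0.0, 0.0
light = (5.0, 5.0)
closest = start = math.dist((x, y), light)
for _ in range(1000):
    x, y, heading = step(x, y, heading, light)
    closest = min(closest, math.dist((x, y), light))
print(f"closest approach to the light: {closest:.2f}")
```

Nothing in the code represents a goal or a feeling, yet the resulting trajectory invites exactly the social reading the text describes.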
Social robots are a class of autonomous robots explicitly designed to encourage people to socially interact with and understand them. If social robots have personalities, people may be more likely to have a good mental model for them. According to Norman (2004), personality is a powerful design tool for helping people form a conceptual model that channels beliefs, behavior, and intentions in a cohesive and consistent set of behaviors. From a design perspective, the emotion system of a robot could implement the style and personality of the robot, encoding and conveying its attitudes and behavioral inclinations toward the events it encounters. The robot’s personality must be designed so that its behavior is understandable and predictable to people. Therefore, parameters of the personality must fall within recognizable human (or animal) norms; otherwise, the robot may appear to be mentally ill or completely alien. The science of natural behavior, as well as artistic insights from classical animation and character design (Thomas and Johnston, 1981), can be useful guides to design in this respect.
ROBOTS WITH SOCIAL AND EMOTIONAL INTELLIGENCE
As robot designers, we tend to emphasize the cognitive aspect of intelligence when designing robot architectures; we tend to view the social, and especially the emotional, aspects with skepticism (see Sloman and Croucher, 1980, for an exception). However, numerous scientific studies continue to reveal the reciprocal roles of cognition and emotion in intelligent decision making, planning, learning, attention, communication, social interaction, memory, and more (see Isen, 2000, for a review). Cognition and emotion are conceptually distinct, complementary information-processing systems that evolved in response to social and environmental pressures to ensure the health and optimal functioning of the creature (Damasio, 1994). As Norman et al. (2003) argue, the cognitive system is responsible for interpreting and making sense of the world; the emotional system is responsible for evaluating and judging events to assess their overall value with respect to the creature (e.g., positive or negative, desirable or undesirable, etc.).
Emotions play an important role in signaling the salience of things, directing attention toward what is important and away from distractions, thereby helping to prioritize concerns (Picard, 1997). Alice Isen (2000) has studied the beneficial effects of mild, positive affect on a variety of decision-making processes for medical diagnosis tasks (e.g., facilitating memory retrieval; promoting creativity and flexibility in problem solving; and improving efficiency, organization, and thoroughness in decision making). Negative affect allows us to think in a highly focused way under negative, high-stress situations. Positive affect allows us to think creatively and make broad associations in a relaxed positive state.
Furthermore, whereas too much emotion can hinder intelligent thought and behavior, too little emotion is even more problematic. The importance of emotion in intelligent decision making was demonstrated by Damasio in studies of patients with neurological damage that impaired their emotional systems (Damasio, 1994). Although these patients performed normally on standardized cognitive tasks, their ability to make rational and intelligent decisions in their daily lives was severely limited. For instance, they may have lost a lot of money in an investment, but, instead of becoming more cautious and curtailing investing, these emotionally impaired patients continued to invest. Because they did not seem to link bad feelings and dangerous choices, they continued to make the same choices again and again. The same pattern was repeated in relationships and social interactions, sometimes resulting in the loss of jobs, friends, and so on.
High-functioning individuals with autism also reveal the crucial role of emotion in normal social relations. They seem to understand the emotions of others the way a computer might: they memorize and follow rules to guide their behavior but lack an intuitive understanding of others. In short, they are socially handicapped because they cannot understand or interpret the social cues of others or respond in a socially appropriate way (Baron-Cohen, 1995).
Emotion-inspired mechanisms and capabilities will be essential to the success of autonomous robots. Many more examples could be given to illustrate the importance of social and emotion-inspired mechanisms and abilities to robots that must make decisions in complex and uncertain circumstances, either working alone or with other robots. Our primary interest, however, is how social and emotion-inspired mechanisms can improve the way robots function in the human environment and enable them to work effectively in partnership with people.
This does not imply that a robot’s emotion-based or cognition-based mechanisms and capabilities must be identical to those in natural systems. The question of whether or not robots can feel human emotions, for example, is irrelevant to our purposes. Furthermore, providing social-based and emotion-based mechanisms should not be glossed over as merely building “happy” or entertaining robots. To do so would be to miss an extremely important point. Just as they do in living creatures, social and emotion-inspired mechanisms can be used to modulate the cognitive systems of the robot to make it function better in a complex, unpredictable environment—enabling it to make better decisions, to learn more effectively, and to interact more appropriately with others than it could with its cognitive system alone. Therefore, by designing integrated systems for robots with internal mechanisms that complement and modulate their cognitive capabilities with the regulatory, signaling, biasing, and other attention, value assessment, and prioritization mechanisms associated with emotion systems in living creatures, we will effectively be giving robots a system that serves the same useful functions that emotions serve in us—no matter what we call it.
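One way to picture this modulation is the sketch below, offered purely as an illustration: the option names, the single valence scalar, and every number are invented here, not drawn from any cited architecture. An appraisal signal summarizes recent outcomes, and that signal widens or narrows the set of actions the decision layer considers, echoing the broadening effect of positive affect and the focusing effect of negative affect discussed above.

```python
import random

def appraise(affect, outcome_value, rate=0.3):
    """Simple valence appraisal: good outcomes raise affect, bad outcomes
    lower it. Affect here is one scalar in [-1, 1], a deliberate
    simplification of the multi-dimensional systems in living creatures."""
    return max(-1.0, min(1.0, affect + rate * outcome_value))

def choose(options, affect):
    """Affect modulates the breadth of deliberation: negative affect narrows
    attention to the highest-priority option, while positive affect broadens
    the consideration set, permitting more exploratory choices."""
    ranked = sorted(options, key=lambda o: o["expected_value"], reverse=True)
    # Map affect in [-1, 1] to a consideration-set size from 1 to len(options).
    breadth = 1 + round((affect + 1.0) / 2.0 * (len(ranked) - 1))
    return random.choice(ranked[:breadth])

# Hypothetical action repertoire with made-up utilities.
options = [
    {"name": "retreat-and-recharge", "expected_value": 0.9},
    {"name": "continue-task",        "expected_value": 0.6},
    {"name": "explore-new-area",     "expected_value": 0.2},
]

random.seed(0)
affect = 0.0
for outcome in (-1.0, -1.0, -1.0):   # a run of bad outcomes
    affect = appraise(affect, outcome)
stressed = choose(options, affect)   # affect is now -0.9: narrow focus

for outcome in (1.0,) * 6:           # a run of good outcomes
    affect = appraise(affect, outcome)
relaxed = choose(options, affect)    # affect is now 0.9: broad consideration
```

Under stress the decision layer always falls back on the single safest option, while in a relaxed state any of the three options may be tried; the cognitive machinery (ranking by expected value) is unchanged, and only its scope is biased by affect.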
The purpose of this short paper is to put forth an argument for social and emotional intelligence in the design of personal robots that assist and entertain their human users. (For an exploration of these issues from an artistic, scientific, and technological perspective, see Breazeal, 2002). Specific research projects in our laboratory are being conducted on how robots with social-emotive capabilities can assist human astronauts in space, perform opposite human actors in film, and serve as learning companions for children. For more information about this research, see <http://robotic.media.mit.edu/>.
Asimov, I. 1942 (reprinted 1991). I, Robot. New York: Bantam Books.
Baron-Cohen, S. 1995. Mindblindness. Cambridge, Mass.: MIT Press.
Bluethmann, W., R. Ambrose, M. Diftler, E. Huber, M. Goza, C. Lovchik, and D. Magruder. 2003. Robonaut: a robot designed to work with humans in space. Autonomous Robots 14(2-3): 179–207.
Braitenberg, V. 1984. Vehicles: Experiments in Synthetic Psychology. Cambridge, Mass.: MIT Press.
Breazeal, C. 2002. Designing Sociable Robots. Cambridge, Mass.: MIT Press.
Damasio, A. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam’s Sons.
Doyon, A., and L. Liaigre. 1966. Jacques Vaucanson, mécanicien de génie (Jacques Vaucanson, Genius Mechanic). Paris: Presses Universitaires de France.
Fong, T., I. Nourbakhsh, and K. Dautenhahn. 2003. A survey of socially interactive robots. Robotics and Autonomous Systems 42(3-4): 143–166.
Forlizzi, J., C. Di Salvo, and F. Gemperle. 2004. Assistive robotics and an ecology of elders living independently in their homes. Human-Computer Interaction 19: 25–59.
Isen, A. 2000. Positive Affect in Decision Making. Pp. 261–277 in Handbook of Emotions, 2nd ed., edited by M. Lewis and J. Haviland. New York: The Guilford Press.
Kiesler, S., and J. Goetz. 2002. Mental Models of Robotic Assistants. Pp. 576–577 in Proceedings of CHI 2002 Conference on Human Factors in Computing Systems. New York: ACM Press.
Lindsay, D. 1997. Talking head. American Heritage of Invention and Technology 13(1): 57–63.
McCartney, S. 1999. ENIAC: The Triumphs and Tragedies of the World’s First Computer. New York: Walker and Company.
Norman, D. 1990. The Design of Everyday Things. New York: Basic Books.
Norman, D. 2004. Emotional Design. New York: Basic Books.
Norman, D., A. Ortony, and D. Russell. 2003. Affect and machine design: lessons from the development of autonomous machines. IBM Systems Journal 41(1): 39–44.
Nourbakhsh, I., J. Bobenage, S. Grange, R. Lutz, R. Meyer, and A. Soto. 1999. An affective mobile robot with a full time job. Artificial Intelligence 114(1-2): 95–124.
Picard, R. 1997. Affective Computing. Cambridge, Mass.: MIT Press.
Reeves, B., and C. Nass. 1996. The Media Equation. Palo Alto, Calif.: CSLI Publications.
Rosheim, M. 1994. Robot Evolution: The Development of Anthrobotics. New York: John Wiley and Sons.
Rosheim, M. 2000. L’automa programmabile di Leonardo. XL Lettura Vinciana, 15 aprile 2000. Citta’ di Vinci, Biblioteca Comunale Leonardiana. Florence, Italy: Giunti Gruppo Editoriale.
Sloman, A., and M. Croucher. 1980. Why robots will have emotions. Pp. 197–202 in Proceedings of the 7th International Joint Conference on Artificial Intelligence. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.
Thomas, F., and O. Johnston. 1981. The Illusion of Life. New York: Hyperion.
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433–460.
UNECE and IFR (United Nations Economic Commission for Europe and the International Federation of Robotics). 2002. World Robotics 2002. New York and Geneva: United Nations Publications.
Walter, W.G. 1950. An imitation of life. Scientific American 182: 42–54.