scaling through these types of training was seen by some as likely to be domain dependent.

During these discussions, participants highlighted several common research areas necessary to advance IH-MC, including communication; flexibility and resilience; human-machine models; user experience and system design; testbeds; and data overload.


For many participants, effective communication represents a significant barrier to advances in human-machine collaboration. According to Kruijff, it would be useful if robots could better explain to humans what they can or cannot do and what they are actually doing. An inability to communicate this effectively, he added, makes it difficult for humans and machines to establish common ground. Beyond verbal communication, Sidner noted the challenges posed by nonverbal behavior—for example, how might a robot “notice” what a human notices?

Other participants commented on the benefit of further research in how humans and machines communicate and understand intent. Bradshaw emphasized that by improving machine observability or “apparency,” humans will be better able to understand a device’s intent. In contrast, Veloso observed that cases may exist in which a human does not necessarily need to monitor or understand what the robot is doing, as long as he or she trusts the robot to proactively ask for help when necessary. Oron-Gilad cautioned that effectively conveying intent between two human operators, let alone between humans and robots, is still a challenge. For example, if a software agent incorrectly “guesses” a human’s intent, it might unnecessarily automate a task—thus leading to dangerous and unintended consequences.

Padgham proposed the development of a “teaming compact” whereby humans and machines mutually communicate their capabilities, goals, and intentions. Perhaps what is required, she said, is one common and simple language that can be used by any system. Also necessary, Matthias Scheutz added, are feedback mechanisms and intelligent and tangible interfaces between humans and agents. Other participants commented that this feedback should be dynamic so that human-machine collaboration can change over time—for example, as a result of training or changes in familiarity or trust.

This prompted a discussion on whether new ways for robots to communicate with one another could reduce the number of humans needed in human-robot teams. In response, one participant suggested that effective robot-to-robot communication could prove simpler than its human counterpart because it carries none of the cultural baggage of human communication.
