After the scenario and panel discussions, workshop participants discussed common challenges in IH-MC, as well as breakthroughs to transform the ways in which humans and machines will collaborate in the future. Some of these breakthroughs, discussed below, were also touched upon during the scenario exercises.
Rethinking Roles for Humans and Machines
For some participants, achieving the desired breakthroughs begins with a reevaluation of the role humans play in IH-MC. According to Kruijff, this begins with recognizing that humans have a central role. Rather than focusing on the human as a part of the problem or only as a partial solution, he proposed that collaboration should be designed with and around humans and take into consideration the broader sociotechnological context.
From the machine side, Sidner observed that current robots have limited sensing, manipulation, and communication capabilities. However, as machine capabilities improve, machines will be able to play a larger and more complex role in IH-MC. For some observers, this could result not only in new roles for machines in human-machine collaboration, but also a shift in human-machine team dynamics. Beyond humans leading teams of machines, Hoffman hypothesized that one day computational devices could serve as mentors or trainers for human (and/or human-robot) teams. If that were to happen, he asked, how might humans be trained to work with machine partners or even machine mentors? Understanding these questions, according to Liz Sonenberg, will benefit from a better understanding of what makes a human a good member/leader of a human-machine or all-machine team.
Jeff Bradshaw suggested that intelligent human-machine systems may also have a role in virtual and real-world training activities. For example, by designing experiments that require individuals to adapt in a changing environment, both in the field and in the laboratory, researchers could study individual and team dynamics. There was a debate over how closely human-machine training could replicate human team training (given the elaborate physical, cognitive, and communications skills that humans bring to bear). The efficiency with which proficiency could be scaled through these types of training was seen by some as likely to be domain dependent.
During these discussions, participants highlighted several common research areas necessary to advance IH-MC, such as communication, flexibility and resilience, human-machine models, user experience and system design, testbeds, and data overload.
Communication

For many participants, the lack of effective communication represents a significant barrier to advances in human-machine collaboration. According to Kruijff, it would be useful if robots could better explain to humans what they can or cannot do and what they actually do. An inability to communicate this effectively, he added, makes it difficult for humans and machines to establish common ground. In addition to verbal communication, Sidner noted the challenges posed by nonverbal behavior—for example, how might a robot “notice” what a human notices?
Other participants commented on the benefit of further research in how humans and machines communicate and understand intent. Bradshaw emphasized that by improving machine observability or “apparency,” humans will be better able to understand a device’s intent. In contrast, Veloso observed that cases may exist in which a human does not necessarily need to monitor or understand what the robot is doing, as long as he or she trusts the robot to proactively ask for help when necessary. Oron-Gilad cautioned that effectively conveying intent between two human operators, let alone between humans and robots, is still a challenge. For example, if a software agent incorrectly “guesses” a human’s intent, it might unnecessarily automate a task—thus leading to dangerous and unintended consequences.
Padgham proposed the development of a “teaming compact” whereby humans and machines mutually communicate their capabilities, goals, and intentions. Perhaps what is required, she said, is one common and simple language that can be used by any system. Also necessary, Matthias Scheutz added, are feedback mechanisms and intelligent and tangible interfaces between humans and agents. Other participants commented that this feedback should be dynamic so that human-machine collaboration can change over time—for example, as a result of training or changes in familiarity or trust.
This prompted a discussion on whether new ways for robots to communicate with one another could reduce the number of humans in human-robot teams. In response, one participant suggested that challenges of effective robot-to-robot communication would be made simpler by removing the cultural baggage of human communication.
Flexibility and Resilience
Some participants commented that improved human-machine collaboration will require improved flexibility and resilience. For example, Hoffman observed that human-machine interfaces would benefit from engineering for resilience, that is, from being designed for unanticipated tasks. Such flexibility is particularly important, Jean Scholtz added, as the tasks people do today will not be identical to those done tomorrow or in five or ten years. Humans can rapidly adapt and apply their capabilities to new situations; how can this flexibility and learning be imparted to robots without significant programming?
Frank Dignum suggested that lessons may be learned from human adaptability—for example, the adaptation of human language to the widespread adoption of text messaging. Rather than wait for convergence in, for example, natural language between humans and robots, he proposed a deeper examination of situations in which humans, but not robots, are able to adapt.
This resilience, Neerincx noted, will require breakthroughs in context-driven adaptive autonomy. Both Hoffman and Sidner commented that such high levels of complex autonomy would first depend on significant breakthroughs in commonsense knowledge and practical manipulation tasks.
Human-Machine Models

Another common challenge discussed was the potential benefit of improved human, machine, and shared human-machine models. Goodrich spoke to the difficulty of developing such shared models by describing the dynamic asymmetries between humans and machines in experience, understanding, goals, and capabilities.
Although some participants emphasized the need to provide robots with better models of humans, Scheutz noted the challenges of helping humans build correct models of robots. Human models of robots, he said, need to be compatible with the ways humans will interact with them. For example, if a robot does not have good natural language or visual sensing capabilities, anthropomorphized robot mouths or eyes may mislead humans into overestimating the robot’s capabilities. While highly realistic Geminoid robots exist, Holly Yanco added that the “uncanny valley” factor should also be taken into consideration.
According to Dignum, new social reality models may allow machines to do things “with” humans, and not just “for” humans in a limited role. Many of the workshop participants remarked that it would help for humans to develop social understanding and acceptance of such sophisticated machine capabilities in order for this level of collaboration to occur.
Testbeds and Fielded Systems
Several participants suggested that more and improved large-scale dynamic testbeds (as well as their ongoing evaluation) would benefit many of the previously discussed research issues. Going beyond testbeds, Kruijff observed that some of the challenges the group discussed would best be studied using deployed or fielded systems. For example, some challenges, such as philosophical linguistics issues, are more likely to arise in the field as opposed to the laboratory. True collaboration under stressful circumstances, he noted, cannot be replicated in the laboratory.
Data Overload

Participants also highlighted the challenges posed by “data overload,” both in the context of improving human-machine interaction and teamwork and with regard to the value of using “big data” to solve large-scale problems.
Some suggested that new ways for teams to share concrete and dynamic information about their environment could provide novel perspectives that lead to interesting and new solutions. For Satoshi Tadokoro, this is particularly relevant in the context of supporting teams composed of one human and multiple robots. This type of coordination, he observed, requires significant amounts of and access to data. In the rescue domain, this means data about human-robot coordination and interaction, as well as the environment.
Morison proposed that significant opportunities exist for breakthroughs in the ways that large-scale robot/sensor data are used to expand the ways humans perceive the world. Systems could be designed, he said, for effective exploration so that relevant information can be quickly extracted.
Shared Resources for Shared Problems
Using IH-MC to solve highly complex problems, Hoffman noted, requires big research budgets, often in harsh economic climates. One path forward, he proposed, might be to choose a single problem large enough to require international funding efforts.
Padgham acknowledged that many fields associated with human-machine collaboration have not been as successful as others in disciplining themselves to combine resources in pursuit of solving larger-scale challenges. In part, this is because long-term funding to support such initiatives has not been as available in this field as it has been in others. As an example of a successful initiative, Sidner posited that the success of the physics research communities in effectively combining resources has, to some degree, been a result of 400 years of maturation within a set of unified fields. Perhaps, Padgham suggested, IH-MC efforts to manage a small disaster would be appropriate for international funding and combined large-scale research efforts.