
2

Scenario Exercises

Each group was allotted two hours to discuss its scenario and then prepare a PowerPoint presentation of its findings and proposed solutions. Among the questions posed to each group were: What kind of progress can be demonstrated within two years? What could be done in years three to five? What issues are raised by including both software agents and robots as team members? In what circumstances is this system likely to fail when deployed in a real-world environment?

Scenario A: Preparing For and Managing a Major Disaster
Moderator: Michael Goodrich
Group Members: Michael Goodrich, Geert-Jan (GJ) Kruijff, Alex Morison, Daniele Nardi, Lin Padgham, Satoshi Tadokoro

Description: Mexico City’s 18 million inhabitants live within 40 miles of Mount Popocatepetl, an active volcano that most recently erupted in 2000. Group A, a private enterprise dubbed “007 and Beyond,” was given two years to develop a prototype for human-machine collaboration that would prepare for, respond to, and help with rebuilding following a major eruption. In this scenario, the group sought to address the life cycle of activities that constitute disaster management—from prediction through evacuation, to disaster mitigation and eventual reconstruction. The aim of this scenario was to consider how humans, robots, and software agents could co-manage a disaster and its aftermath.

The group’s moderator, Michael Goodrich, summarized the group’s discussion. According to Goodrich, the group focused on what it determined to be the core human-machine collaboration challenge of managing a major disaster: a decision-support system that enables affected individuals, their families, individual responders, groups of responders, agencies, and centralized planners to give and receive needed, reliable (trustworthy), and timely information. The goal of such a system would be to enable individuals to make independent decisions in ways that ultimately support safety and survivability.

The group’s system design included three networked components: a centralized information repository; role-specific clients (e.g., the Red Cross or emergency food relief programs) that both “push” data into the repository and “pull” information to facilitate decision making; and a scenario simulator that could explore and evaluate the feasibility of various interventions. Robots that explore areas unreachable by (or unsafe for) humans following an eruption could also be system “clients.” By exploring numerous scenarios in advance, the simulator could be used before a disaster to help design evacuation and responder protocols and after a disaster to help plan and manage search and rescue operations in real time. The group also designed the “iVolcano app” to facilitate interactions between the information repository and the humans, agents, and robots that use it. Thus, through iVolcano, people who are affected by the eruption could obtain critical information, such as where to find food, medical supplies, shelter, and water, and where to charge their cell phones.
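To make the push/pull pattern concrete, the following is a minimal sketch of how such a repository and its role-specific clients might interact. It is illustrative only: the class, field, and role names (e.g., Report, DisasterRepository, "ground_robot") are assumptions made for the sketch and are not part of the group’s design.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Report:
    source_role: str          # e.g., "red_cross", "ground_robot", "civilian"
    topic: str                # e.g., "shelter", "water", "road_status"
    location: str
    payload: dict
    timestamp: datetime = field(default_factory=datetime.utcnow)

class DisasterRepository:
    """Central store that clients push observations into and pull views from."""

    def __init__(self):
        self._reports: list[Report] = []

    def push(self, report: Report) -> None:
        # Agencies, responders, robots, and affected individuals all push here.
        self._reports.append(report)

    def pull(self, topic: str, location: Optional[str] = None) -> list[Report]:
        # Clients pull only the reports relevant to their role and decision.
        return [r for r in self._reports
                if r.topic == topic and (location is None or r.location == location)]

# Example: a survey robot reports a blocked road; an evacuation planner pulls it.
repo = DisasterRepository()
repo.push(Report("ground_robot", "road_status", "sector_7",
                 {"passable": False, "ash_depth_cm": 30}))
print(repo.pull("road_status", location="sector_7"))
```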

Goodrich also indicated that, in a major disaster, too much information can sometimes be as dangerous as too little—for example, if hundreds of people learned at approximately the same time where food was available, a stampede could ensue. The group saw two other potential problems: (1) many people would not willingly “push” information to a centralized data repository, either because of interagency tensions or possible concerns over privacy or trust, and (2) the system would need a method for differentiating the meanings of critical words. For example, water means “fire suppressor” to a firefighter but something completely different to a nurse. Thus it would help if different word usages were mapped to a common ontology so that, in a time-critical situation, the person seeking information from the server is not overwhelmed by irrelevant information.
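As a rough illustration of the ontology mapping Goodrich described, the short sketch below maps a role-specific word usage onto a shared concept so that a query returns only the relevant sense. The role names and concept identifiers are invented for the example.

```python
# Hypothetical mapping of role-specific word senses onto a common ontology.
# Concept identifiers are illustrative, not taken from the workshop.
COMMON_ONTOLOGY = {
    ("firefighter", "water"): "resource.fire_suppression.water_supply",
    ("nurse", "water"): "resource.medical.potable_water",
    ("civilian", "water"): "resource.survival.drinking_water",
}

def resolve_term(role: str, term: str) -> str:
    # Fall back to the raw term when no role-specific sense is known.
    return COMMON_ONTOLOGY.get((role, term), term)

# A nurse asking the server about "water" is routed to the medical sense,
# so firefighting records do not clutter the response.
assert resolve_term("nurse", "water") == "resource.medical.potable_water"
```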

The group suggested that by year two it would be possible to put into operation a “thin” server capable of integrating much of the information that already exists. Although this achievement would not be “earthshaking” (no pun intended, said Goodrich), it could be useful. The group thought that in years three to five it would be possible to deploy the interactive planning simulator tool. Several of the client programs would likely take longer to develop.

Scenario B: Small-Lot Agile Manufacturing
Moderator: Matthias Scheutz
Group Members: Tal Oron-Gilad, Don Mottaz, Gopal Ramchurn, Matthias Scheutz, Lakmal Seneviratne, Brian Williams

Description: George owns a small furniture company that builds one-of-a-kind furniture for its customers. As such piecemeal work negates economies of scale, he needs another way to generate profits. George retains the developers of the Pengo9000 (the members of Group B) to create a coworker robot that will make it possible for him to triple his profits without adding manpower or major retooling costs.

The group’s moderator, Matthias Scheutz, summarized the group’s discussion. Scheutz began by commenting that the group found the exercise immensely challenging—so much so that fully solving this scenario would require solving all of AI. The group therefore decided to separate the “spirit of the exercise” from a prototype that could realistically be available in a two-year time frame. Aspirationally, a collaborative robot would have natural language capabilities and would be able to learn and generalize from its lessons to complete real-world tasks. The robot would have sensing and perception capabilities that would, for example, enable it to distinguish among different kinds of wood, drill-bit requirements, and so on. It would have commonsense knowledge in addition to domain knowledge, so that it could understand the everyday meaning of words. For example, when someone is told to “go to the kitchen and turn the stove on,” a human understands that he must go to the kitchen before he turns the stove on. A conventional robot might be expected to know that the word “and” can signal parallel or sequential (temporal) ordering, but it would not have the intuitive capability to infer which meaning is intended. The aspirational collaborative robot would be able to take directions from a combination of verbal and gestural cues. Finally, that robot would have perceptual and actuation capabilities that would enable it to find chairs in another room and then know which ones need to be drilled.

The group suggested that a robot could be developed within two years to fulfill certain tasks. It would have effectors for drilling and clamping and algorithms for planning and scheduling, as well as for detecting and targeting objects. The robot would also understand simple instructions, such as “drill a hole into the chair,” but it might not be able to do so repeatedly. In years three to five, the group posited that the robot would be able to pick up tools and learn how to use new tools. It would respond to more complex chained commands in combination with gestures and could detect errors. As a result, the robot would be a more active participant in the manufacturing process. Generally, though, the robot would still be very constrained in its capabilities, and after five years it would still not be a partner for the human furniture maker.
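A hedged sketch of what the two-year robot’s instruction handling might look like is given below: a chained verbal command is split and each verb is mapped to an effector primitive. The verb table and primitive names are assumptions made for illustration, and treating “and” as strictly sequential is exactly the simplification the group discussed.

```python
# Hypothetical verb-to-primitive table for a two-year coworker robot.
PRIMITIVES = {
    "drill": "drill_effector.drill_hole",
    "clamp": "clamp_effector.close",
    "release": "clamp_effector.open",
}

def parse_instruction(utterance: str) -> list[str]:
    """Split a chained command on 'and' and map each verb to a primitive.

    Treating 'and' as strictly sequential is the simplification noted above:
    the robot cannot infer parallel vs. temporal ordering on its own.
    """
    plan = []
    for clause in utterance.lower().split(" and "):
        verb = clause.split()[0]
        if verb in PRIMITIVES:
            plan.append(PRIMITIVES[verb])
        else:
            plan.append(f"ask_for_clarification({clause!r})")
    return plan

print(parse_instruction("clamp the chair and drill a hole into the seat"))
# ['clamp_effector.close', 'drill_effector.drill_hole']
```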

Scenario C: Hospital Service Robotics
Moderator: Candy Sidner
Group Members: Paul Maglio, Candy Sidner, Liz Sonenberg, Tom Wagner, Rong Xiong, Holly Yanco

Description: A large healthcare organization calculates the enormous sums spent in simply moving things—food, laundry, trash, wheelchairs, even patients—in a hospital environment. The aim of Group C was to design within two years a system for collaboration between hospital staff and patients, robots, and software agents.

The moderator, Candy Sidner, who spoke on behalf of the group, discussed how the group grappled with the complexity of designing an integrated system of humans, robots, and software agents that could significantly improve nonurgent hospital operations. In addition, many of the skills required to make such a system both useful and cost-effective—such as high-level language and locomotion skills and high-level human-behavior-recognition skills—are still many years away from real-time operation.

The group observed that many factors contribute to these complexities. First, many of the potential system “users” would be people with no expertise in robotics or software agent systems. Second, it is often difficult to separate the urgent from the nonurgent in hospital settings. For example, hospital staff would want even the simplest delivery robot to communicate to the appropriate staff person that it had come across a patient who had fallen on the floor. To achieve this, the robot would have to know that finding someone on the floor was an anomaly, know who was the right person to contact, and know that its request for help had been received and acted upon. Completion of these tasks by robots is currently infeasible. Third, both staff and patient populations are culturally and linguistically diverse (not to mention the differences between hospitals in urban versus rural settings or in advanced versus developing countries). Thus, some members of the group speculated, robots and software agents would best be programmed to address a variety of cultural norms regarding gender differences, the notion of personal space, and concerns about safety, to name just a few.
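The escalation behavior described above can be sketched as a simple notify-and-confirm loop. The contact names, anomaly labels, and acknowledgment mechanism below are illustrative assumptions, not part of the group’s design.

```python
# Hypothetical escalation chains: who to notify, in order, for each anomaly.
ESCALATION_CHAIN = {
    "person_on_floor": ["nearest_nurse_station", "charge_nurse", "security_desk"],
    "spill_in_corridor": ["housekeeping", "charge_nurse"],
}

def handle_anomaly(anomaly: str, send_alert, ack_timeout_s: float = 30.0) -> bool:
    """Notify contacts in order until one acknowledges; report whether anyone did."""
    for contact in ESCALATION_CHAIN.get(anomaly, ["charge_nurse"]):
        if send_alert(contact, anomaly, ack_timeout_s):
            return True    # the alert was received and acted upon
    return False           # escalation exhausted; the robot waits with the patient

# Stub transport for the example: pretend only the charge nurse acknowledges.
def send_alert_stub(contact, anomaly, timeout_s):
    print(f"ALERT to {contact}: {anomaly} (awaiting ack for {timeout_s}s)")
    return contact == "charge_nurse"

handle_anomaly("person_on_floor", send_alert_stub)
```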

According to Sidner, the group surmised that within two years it would be possible to deploy a “tug” robot capable of carrying things in a basket from Point A to Point B. The group also suggested that it would be possible to deploy a virtual “my hospital friend” capable of engaging in simple language communications with humans and of helping patients with tasks that are not medically critical, such as ordering meals or leaving the facility upon discharge. An optimal system—parts of which could take 20 years or more to realize—would require breakthroughs in numerous subdisciplines related to human-machine collaboration. Such a system would include the following attributes: robots that can safely lift and carry patients; robots and agents that can engage in natural speech with humans; a networked system of robots and agents that can effectively communicate with each other and with relevant hospital staff; agents and robots that can successfully negotiate task priorities with humans (and with each other); agents and robots that are capable of prioritizing and carrying out requests from multiple operators; and robots and virtual agents that can interact appropriately with patients of varying ages, cognitive abilities, emotional states, and medical conditions.

During the Q&A session, a workshop participant asked what a robot would have to do to convince a human that its own immediate priorities are more important than the human’s. Sidner answered that this presents a complex negotiation problem that has yet to be fully investigated by the research community. As an example, she pointed out that negotiation between humans and robots presumes advances in modeling, wherein researchers understand the cognitive model that the robot has of itself and of the person with whom it is communicating. These advances, she explained, have yet to occur.

Scenario D: Virtual Team Training
Moderator: Mark Neerincx
Group Members: Michael Beetz, Jeffrey Bradshaw, Frank Dignum, Michael Freed, Yukie Nagai, Mark Neerincx

Description: A U.S. ship will soon be passing through the Straits of Hormuz, an area of high risk for terrorist attack. The ship’s captain would like to have an on-board training system that will help crew members prepare for any possible encounter. Group D was charged with developing an agent-based system for virtual team training that mixes humans and software agents in ways that challenge and improve their team skills.

The moderator, Mark Neerincx, summarized the group’s discussions. As he explained, the group sought to design a virtual training system that would learn along with the trainees so that (1) the system’s feedback to the trainees would improve incrementally; (2) the lessons themselves would become more challenging as the trainees’ capabilities grew; and (3) the system would provide team as well as individual feedback. In effect, a successful system would result in the coevolution of the virtual instructor, the students, and the software agents. The ideal system, Neerincx suggested, must be sophisticated enough to develop models of the trainees that the virtual instructor can use to provide useful, ongoing feedback and to change the nature of the training as the trainees improve.

Within two years, the group suggested, it would be possible to establish a basic evolving framework if the following subtasks could be accomplished: (1) specifying ontologies that provide the basic foundation for how the virtual instructor will act and reason over time; (2) designing scenario-building tools; (3) developing templates for the use cases; (4) developing a taxonomy of feedback rules; (5) developing both a task model and a user model; (6) creating feedback types that are appropriate to the templates; and (7) developing a small set of behavior detectors to trigger specific types of feedback. It would also be useful for the learners to provide feedback to each other. Such data would be fed back into the system (which is capable of pattern finding) to improve future feedback and exercises. Neerincx suggested that adapting interactive language programs might help develop the feedback templates in the two-year time frame.
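As a rough illustration of subtasks (4) and (7) above, the sketch below pairs a small taxonomy of feedback rules with behavior detectors that trigger them. The detector names, thresholds, and feedback text are invented for the example.

```python
# Hypothetical feedback-rule taxonomy keyed by detected behavior.
FEEDBACK_RULES = {
    "slow_response": "Remind the trainee of the target reaction time for this drill.",
    "missed_callout": "Prompt the trainee to report contacts to the whole team.",
    "good_coordination": "Acknowledge the team's handoff; raise scenario difficulty.",
}

def detect_behaviors(trainee_log: dict) -> list[str]:
    """Map raw exercise measurements onto named behaviors."""
    events = []
    if trainee_log.get("response_time_s", 0) > 10:
        events.append("slow_response")
    if not trainee_log.get("reported_contact", True):
        events.append("missed_callout")
    if trainee_log.get("handoff_ok", False):
        events.append("good_coordination")
    return events

def feedback_for(trainee_log: dict) -> list[str]:
    # Each detected behavior triggers the corresponding feedback rule.
    return [FEEDBACK_RULES[e] for e in detect_behaviors(trainee_log)]

print(feedback_for({"response_time_s": 12, "reported_contact": False}))
```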

In truth, the group believed that just about all of these tasks would be difficult to accomplish within two years, or even in years three to five. Neerincx noted that work on serious games, as well as research on emotion modeling, might help in this area.

Scenario E: The Personal Satellite Assistant1
Moderator: Terry Fong
Group Members: Terry Fong, Robert Hoffman, Andreas Hofmann, Dirk Schulz, Jean Scholtz, Manuela Veloso

Description: The Enterprise on television’s Star Trek is a roomy place; in any actual spacecraft, however, space is at an extreme premium. This group’s task was to develop a Personal Satellite Assistant (PSA)—a flying spherical robot approximately three inches in diameter—capable of assessing hazards, monitoring conditions, and traveling within a spacecraft to places that an astronaut is too large to enter.

The moderator, Terry Fong, presented the group’s discussion. He explained that working on a spacecraft presents two unique challenges. First, astronauts’ time is precious and costly; they actually have little time to do “work,” as most of life on board is consumed by “housekeeping” and such human functions as sleeping, exercising, and eating. Second, the ship’s interior is extremely cramped, cluttered, and without a defined floor or ceiling. Thus it would be very helpful to have on board a small robot that could serve as an extra set of eyes and, perhaps, an extra brain. One prototype for such a PSA would be a spherical object about three inches in diameter. The PSA’s key capabilities would be mobile sensing; monitoring standard procedures to detect anomalies and possibly alert the astronaut when things go wrong; supporting normal procedures, such as providing astronauts with temporal cues (e.g., “The next step is this.”) and spatial cues (e.g., by asking, “Did you look at this thing?” and then shining a laser pointer on something); and providing reference data to a crew member who is carrying out a piece of work.

The group designed the PSA to include the following technologies: cameras with zoom, 3-D, and color capabilities; sensors for reading temperature, barcodes, RFID, etc.; microphones; avionics for independent navigation; and wireless communication. With these technologies, PSAs could potentially assist astronauts by executing checklist procedures, an otherwise time-consuming task. PSAs would also have the capacity to view areas of the spacecraft that are out of an astronaut’s line of sight, model the environment and show changes or abnormalities over time, confirm that procedural models are being followed, and provide timing alerts that anticipate what is needed next.

____________

1 Very limited PSA-like technologies have been tested in outer space. See, for example, http://psa.arc.nasa.gov/ and http://ssl.mit.edu/spheres/.

Fong added that the PSA would have a model of the particular human it is assigned to and would be tasked to learn the preferences and work-related idiosyncrasies (e.g., left-handedness) of “its” astronaut. The PSA would also have to “know” and compensate should its astronaut become less alert over time. It would likewise have to be sufficiently resilient to adapt to a revised plan if its astronaut changed the sequence of a task for good reason. To achieve this, humans and their PSAs would undertake joint training prior to their mission.
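A minimal sketch of the checklist-monitoring and adaptation behavior described above is given below. The procedure steps, profile fields, and dialogue strings are assumptions made for illustration only.

```python
# Hypothetical spacecraft procedure the PSA monitors step by step.
PROCEDURE = ["close_valve_A", "vent_line_3", "verify_pressure", "log_reading"]

class ChecklistMonitor:
    def __init__(self, procedure, astronaut_profile):
        self.procedure = procedure
        self.step = 0
        self.profile = astronaut_profile   # e.g., {"handedness": "left"}

    def next_cue(self) -> str:
        # Temporal cue: tell the astronaut what comes next.
        if self.step >= len(self.procedure):
            return "Procedure complete."
        return f"The next step is: {self.procedure[self.step]}"

    def observe(self, performed_step: str) -> str:
        """Compare what the astronaut did against the expected step."""
        expected = self.procedure[self.step]
        if performed_step == expected:
            self.step += 1
            return self.next_cue()
        # A deviation may be a deliberate re-ordering or an error; ask first.
        return f"Expected {expected!r} but observed {performed_step!r} -- confirm?"

psa = ChecklistMonitor(PROCEDURE, {"handedness": "left"})
print(psa.observe("close_valve_A"))
print(psa.observe("verify_pressure"))
```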

The advantage of having such an assistant is that PSAs do not criticize or take offense. The disadvantages follow from the advantages: robots are incapable of exhibiting human behavior, and the “uncanny valley” problem is likely to arise. The group speculated that fixed-task execution and mobility would be achievable within two years. The PSA’s ability to update its models of the environment, the task at hand, and so on, and to observe and engage in unanticipated tasks, could be realized in years three to five.

Participants were asked prior to the meeting to give examples of successful intelligent human-machine collaboration currently in use. The most popular responses were:

  • Robotic surgery
  • Google Search/search engines
  • Siri
  • Production systems where humans and robots work together (e.g., Kiva Warehouse Robotics)
  • Flight management and navigation systems on commercial aircraft
  • Intelligent vehicles (e.g., Google’s unmanned vehicle)
  • I have seen no successful examples
