Appendix C

Presentation Abstracts

Session 1: Sociocognitive Issues

Yukie Nagai, Osaka University
Title: Robots that learn to communicate with humans

Abstract: How can robots learn to communicate with humans? How can they acquire the ability to read the intentions of humans? In order to collaborate with human partners, robots need to understand what the goal of the partner’s action is. Inspired by studies in developmental psychology and neuroscience, our lab has been developing robots that learn to communicate with others based on the mirror neuron system (MNS). The MNS plays a central role in understanding the goal of another’s actions and in imitating them. We have hypothesized that the MNS emerges through sensorimotor learning accompanied by perceptual development; immature perception in the early stages of development enables robots, as well as infants, to find the correspondence between the self and the other (an important property of the MNS). My talk will present the results of a robotics experiment conducted to verify this hypothesis, as well as the results of an additional experiment that analyzes the microscopic structure of caregiver-infant interaction in order to better understand the developmental mechanism of infants. In this talk, I emphasize the importance of perceptual and motor immaturity in leading to further- and better-organized cognitive development.

Alex Morison, Ohio State University
Title: Expanding human perception and attention to new spatial-temporal scales through networks of sensor systems

Abstract: Ubiquitous sensing capabilities create the potential to expand human reach to new spatial-temporal scales, but to date the potential is unrealized. Models of how human perceptual systems function successfully to manage multiple data streams and directly apprehend the world have inspired new technologies and visualizations to overcome data overload and release the power of new human-sensor systems.








Candy Sidner, Worcester Polytechnic Institute
Title: Agents for long-term relationships with isolated older adults

Abstract: We are exploring the development of virtual agents who "live" in the homes of socially isolated older adults for extended periods of time. Our agent reasons about activities that are appropriate to undertake with the adult as its relationship changes, from stranger to something one might call "companion," in the course of daily interactions. In this talk, I will discuss the relationship manager, which reasons about the relationship and plans activities, and the real-time collaboration manager, which puts those plans into effect while also reasoning about time and the time available to complete those plans. I will also discuss experiments with older adults in their homes, who use prototype agents to help us discover what the agent can best be doing with adults.

Frank Dignum, Utrecht University
Title: Interaction in context

Abstract: When people interact, they use context to both express and interpret the meaning of the information they want to exchange. Unfortunately, many overlapping contexts may be active at the same time. Choosing the right context to generate or interpret a message is thus a complex but very important issue for human-machine collaboration, especially for human, agent, and robot teams.

Session 2: Challenging Applications

Lakmal Seneviratne, Khalifa University, Abu Dhabi, UAE, and King’s College London, UK
Title: Force feedback and haptic interfaces during robot-assisted surgical interventions

Abstract: In recent years there have been significant advances in robot-assisted minimally invasive surgical (MIS) procedures. However, although robot-assisted MIS represents a significant improvement over traditional MIS, it does not provide the surgeon with a sense of touch from the operating interface. Many robotic surgical applications require active interactions with complex dynamic environments such as soft tissue. A fundamental understanding of the interaction dynamics between the surgical system and the environment is an essential element in intelligent surgeon-robot collaboration. The sensing of forces at the robot-tissue interface is a very challenging research problem. In this presentation we survey a number of force and stiffness sensors developed for surgical robotic systems, including force and stiffness sensors based on fiber-optic and pneumatic technologies. We explore finite element (FE) modeling of the robot-tissue interface, including inverse FE models for identifying tissue properties for diagnosis. The use of haptic interfaces at the surgeon-master interface is also investigated.

Rong Xiong, Zhejiang University, China
Title: A study on humanoid robots playing table tennis

Abstract: Over the past twenty years, research on humanoid robots has rapidly advanced, and various humanoid robots have been developed. They can walk, run, dance, play Taiji, etc. The ongoing research on humanoids is moving toward performing complex tasks in different environments, such as providing domestic service in a home environment or collaborating with human beings to move heavy objects. We take table tennis playing as an entry point to explore related technologies, because both intelligent interaction and dynamic response, which are fundamental factors for future service robots, are required but challenging issues in such a task. We have proposed algorithms for fast visual recognition and accurate trajectory prediction of a Ping-Pong ball and for coordinative motion planning and balance maintenance of the humanoid robot, and we have developed a real-time field bus to meet the requirements for quick response. Now the two 165-cm-tall humanoid robots we developed, “Wu” and “Kong,” can play table tennis continuously with each other and with amateur human players. This research topic also provides an interesting point of view for studies on autonomous cooperative or competitive interaction between robots or between a human and a robot. For example, how should the robot learn play motions and play strategies from human players? How should the robot vary its play motions and strategies depending on its real-time perception?

Session 3: Learning and Adaptation in Dynamic Settings

Michael Freed, SRI International
Title: A virtual assistant for e-mail overload

Abstract: E-mail client software is widely used for personal task management, a purpose for which it was not designed and is poorly suited. Past attempts to remedy the problem have focused on adding task management features to the client user interface. RADAR uses an alternative approach modeled on a trusted human assistant who reads mail, identifies task-relevant message content, and helps manage and execute tasks. This talk describes the integration of diverse AI technologies and presents results from human evaluation studies comparing the performance of RADAR users to that of unaided commercial-off-the-shelf tool users and of users partnered with a human assistant. Because machine learning plays a central role in many system components, we also compare versions of RADAR with and without learning. Our tests show a clear advantage for learning-enabled RADAR over all other test conditions.

Satoshi Tadokoro, Tohoku University
Title: The disaster response robot “Quince” and lessons from the Fukushima-Daiichi nuclear power plant accident

Abstract: The accident at the Fukushima-Daiichi power plant, caused by the tsunami on March 11, 2011, resulted in a meltdown of nuclear fuel and in the hydrogen explosion of nuclear reactor buildings. Several robotic systems were applied to stabilize the situation there. A disaster response robot, Quince, developed by the presenter's group, was used for surveillance of the 2nd through 5th floors of the nuclear reactor buildings and contributed to their cold shutdown. It was a typical human-machine collaboration task. Both the researchers and engineers and the users learned many things in applying the robotic system to the unknown environment. This talk gives an overview of this mission and the lessons learned.

Michael Goodrich, Brigham Young University
Title: Autonomy, interaction, and collaboration: A WiSAR perspective

Abstract: Based on discussions at the workshop, an operational definition of "collaboration" was created. Collaboration is a multi-agent problem that emerges when agents have asymmetric information, asymmetric goals, and asymmetric capabilities. These asymmetries enable agents to share resources to solve a problem that the agents couldn't solve independently, but they also lead to potential conflicts of interest or points of confusion. This definition of collaboration sheds light on how a technical search team can use an unmanned aerial vehicle to support wilderness search and rescue. Technologies developed to support wilderness search and rescue teams can benefit by supporting the collaborative nature of the team. Importantly, collaboration can be seen as the (re)unification of two threads of research that were both present in Sheridan and Verplank's classic report, which is known for defining levels of autonomy but split the discussion into research on those levels and research on interaction design.

Session 4: Human-Machine Interaction and Teaming

Holly Yanco, University of Massachusetts Lowell
Title: Human-in-the-loop control of robot systems

Abstract: Robots navigating in difficult and dynamic environments often need assistance from human operators or supervisors, either in the form of teleoperation or of occasional interventions when the robot cannot handle the current situation autonomously. Even in office environments, robots may need to ask for directions in unknown buildings. In this presentation, I will discuss my lab's research on best practices for controlling both individual robots and groups of robots, in applications ranging from assistive technology to telepresence to search and rescue. A number of methods for this type of human-robot interaction (HRI), including large and small multi-touch devices, software-based operator control units (softOCUs), haptics, and natural language, will be presented. I will also discuss how we can improve HRI by modeling a user's current level of trust in a robot system.