system is too little and too late, and it tends to focus more on the designer’s theory than the user’s needs.
To address these problems, the panel offered a wish list of changes to current practice. The process should: (1) base models on knowledge and meaning, not just on data; (2) include hypotheses in cognitive models to make them less rigid and more adaptive; (3) create the role of “modeler,” who can bridge the worlds of the user and the engineer-builder; and (4) colocate testbeds with deployed systems so that user involvement can be rich from the start. Moreover, (5) engineers should not only train but also mentor the users of the system so that those users can get the most out of it. In addition, product deployment should be not the end of the relationship between the user and the vendor but, rather, the beginning of a second stage of empirical study by the vendor to deal with the unintended consequences of the system (both positive and negative) once it is in place. This second stage will improve the usability of the system at that particular site while offering lessons to the vendor for the next generation of the system.
During the Q&A portion of this panel discussion, Lin Padgham suggested that a looser funding model that focuses on the end product as opposed to item-by-item accounting could result in a cocreative process that more accurately reflects the vendor’s capabilities and the user’s needs.
Panel Two: Intent Recognition, Execution Monitoring, and Planning
Moderator: Andreas Hofmann
Group Members: Michael Beetz, Tal Oron-Gilad, Andreas Hofmann, Paul Maglio, Dirk Shulz, Lakmal Seneviratne, Liz Sonenberg, Satoshi Tadokoro
The moderator, Andreas Hofmann, spoke on behalf of the panel. As he explained, the panel focused on the challenges of intent recognition, execution monitoring, and planning that are associated with the sense-deliberate-act loop (also known as the robotics paradigm). He explained that most sensor-based data is “noisy” and requires filtering for quick and correct evaluation. The panel suggested that more sophisticated algorithms based on plan context might be able to filter out the “noise” related to visual and tactile sensors. This notion led the panel to consider the planning phase of the robotics paradigm: How would the agent(s), robot(s), or mixed teams assess the success of the plan itself?
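The panel did not specify an algorithm, but the idea of using plan context to filter sensor noise can be sketched simply: the plan supplies an expected sensor value and a tolerance, and readings far outside that band are discarded before the survivors are smoothed. The function name, the exponential-smoothing step, and all values below are illustrative assumptions, not the panel's proposal.

```python
# Illustrative sketch (assumed, not from the panel): gate noisy sensor
# readings against a plan-derived expectation, then smooth the survivors.

def filter_with_plan_context(readings, predicted, tolerance, alpha=0.5):
    """The plan context supplies `predicted` (expected sensor value) and
    `tolerance` (acceptable deviation). Readings outside that band are
    treated as noise; in-band readings update an exponential moving average."""
    estimate = predicted  # start from the plan's expectation
    for r in readings:
        if abs(r - predicted) <= tolerance:  # plan-context gate
            estimate = alpha * r + (1 - alpha) * estimate
        # out-of-band readings are ignored as sensor glitches
    return estimate

# Example: suppose a plan expects the gripper to be ~10 cm from the object.
readings = [9.8, 10.4, 57.0, 10.1, 9.9]  # 57.0 is a sensor glitch
print(filter_with_plan_context(readings, predicted=10.0, tolerance=2.0))
```

The point of the sketch is that the gate itself comes from the plan, not from the sensor stream: without the plan-context expectation, the glitch reading would drag the estimate far off.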
Hofmann noted that it is unrealistic to define a plan’s success in terms of the specific assumptions made going in. A more realistic strategy is to evaluate the plan’s success continually. Here the panel suggested that execution should include an evaluation capability that produces a probabilistic estimate of the plan’s success. If that estimate falls below a certain threshold, a human operator would be called in to re-plan or otherwise alter the original plan. The challenge, according to Hofmann, is to do this sooner rather than later in the course of the plan’s execution. Another challenge is that it may be difficult to