The program is doing best-in-class work at reducing sensor size and weight. Off-ramps for diverting developed technology into existing platforms should receive more emphasis. Scalability across robot platforms also appears to be a major concern; scalability challenges need to be addressed with respect to capabilities such as sensor fidelity, range, and detection.

The selection of sensors to accomplish the perception element of an autonomous system should be driven by the task and mission to be accomplished and by the objects, activities, and events that one is trying to find and characterize. Bounding the problem in this way leads to the identification of what can be observed and to the selection of a sensor suite and the measurements associated with that set of observables. There was little discussion of the observables needed or of the justification for the selection of sensors and associated processing.

Micro Autonomous Systems and Technology

The goals of the MAST program include reducing sensor size, weight, and power by a factor of 100. The hair inertial sensor is projected to occupy 1.5 mm × 1.5 mm, approximately one-fourth the footprint of a conventional microelectromechanical system (MEMS) accelerometer.
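The arithmetic behind the factor-of-4 claim can be made explicit. A minimal sketch follows, assuming the comparison refers to die footprint and that the conventional MEMS accelerometer die is roughly 3 mm × 3 mm (a value back-derived from the stated factor, not reported in the presentation):

```python
# Illustrative footprint comparison; the conventional die size is an
# assumption implied by the stated factor-of-4 reduction.
hair_sensor_area = 1.5 * 1.5   # mm^2, projected hair inertial sensor
conventional_area = 3.0 * 3.0  # mm^2, assumed conventional MEMS accelerometer
print(conventional_area / hair_sensor_area)  # -> 4.0, the stated reduction
```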

The millimeter wave (MMW) radar work provides an approximately 100-fold reduction in size, weight, and power with improved performance; the research behind this revolutionary technology development is of the highest caliber. A remaining opportunity, and challenge, is to characterize and measure the device's effective range for military utility; once that range is known, the device's impact can be accurately assessed. It is also important to illustrate more clearly how the work fits into an overall roadmap. The necessary next step in the maturation of the work is a technology demonstration, as part of the future joint experiment, to validate the utility of this technology. The work also needs a comparison against the state of the art.
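Characterizing effective range would typically start from the monostatic radar range equation. The sketch below shows the calculation; every parameter value (transmit power, antenna gain, 94 GHz carrier, target cross section, receiver sensitivity) is an illustrative assumption, not a measured property of the ARL device:

```python
# A minimal sketch of detection-range estimation from the standard
# monostatic radar range equation. All parameter values are assumed.
import math

def max_range_m(p_t_w, gain, wavelength_m, rcs_m2, p_min_w):
    """Maximum detection range R = (Pt G^2 lambda^2 sigma / ((4*pi)^3 Pmin))^(1/4)."""
    return ((p_t_w * gain**2 * wavelength_m**2 * rcs_m2)
            / ((4 * math.pi)**3 * p_min_w)) ** 0.25

# Assumed example: 10 mW transmit power, 20 dBi antenna (gain = 100),
# 94 GHz carrier (wavelength ~3.2 mm), 1 m^2 target, -90 dBm sensitivity.
print(max_range_m(10e-3, 100.0, 3.2e-3, 1.0, 1e-12))  # ~27 m
```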

Signal Processing

The presentation on super-resolution processing described a technique for super-resolving three-dimensional (3D) range images acquired with a flash LADAR sensor for robotic applications in an urban environment. The work, done in collaboration with Carnegie Mellon University, used the SwissRanger SR-3000 flash LADAR, which has a 176 × 144 pixel array and 850 nm illumination modulated at 20 MHz. The system acquires range images at a maximum rate of 50 frames/second over a 47.5 × 39.6 degree field of view. ARL's super-resolution algorithm was applied to this commercial off-the-shelf device to illustrate the benefit of super-resolution for flash LADAR imagery. For this work the algorithm ran in post-processing mode, although it could potentially be implemented to run in near-real time on a robotics platform. The illustrative example of the algorithm's operation, however, was overly simple.
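To illustrate how multi-frame super-resolution of range imagery works in general, the sketch below fuses several subpixel-shifted low-resolution frames onto a finer grid ("shift-and-add"). The 2× upsampling factor, the assumption of known shifts, and nearest-neighbor averaging are simplifications for illustration; this is not a reproduction of ARL's algorithm:

```python
# A minimal shift-and-add super-resolution sketch for low-resolution range
# frames such as the SR-3000's 176 x 144 images. Shifts are assumed known.
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Fuse shifted low-res frames onto a grid `factor` times finer.

    frames: list of (H, W) range images; shifts: per-frame (dy, dx)
    subpixel offsets in low-res pixel units.
    """
    h, w = frames[0].shape
    hi_sum = np.zeros((h * factor, w * factor))
    hi_cnt = np.zeros_like(hi_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample at its nearest high-res grid cell.
        ys = np.clip(np.round((np.arange(h) + dy) * factor).astype(int),
                     0, h * factor - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * factor).astype(int),
                     0, w * factor - 1)
        np.add.at(hi_sum, (ys[:, None], xs[None, :]), frame)
        np.add.at(hi_cnt, (ys[:, None], xs[None, :]), 1.0)
    hi_cnt[hi_cnt == 0] = 1.0  # avoid dividing by zero in empty cells
    return hi_sum / hi_cnt     # averaged high-resolution range image

# Example: four SR-3000-sized frames offset by half a low-res pixel,
# which exactly interleave on the 2x grid.
base = np.random.default_rng(0).random((144, 176))
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
hires = shift_and_add([base] * 4, shifts)
print(hires.shape)  # (288, 352)
```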

The purpose of perception tasks is to reason about what is in the environment and what the things in the environment are doing. Although the team has made progress in these areas, a clearer articulation of the state of the art would help clarify the extent of that progress. Semantic understanding of static scenes, particularly terrain and object classification, works very well over large data sets. Semantic understanding of dynamic scenes (e.g., activity recognition) works reasonably well on small data sets and on a restricted set of activities. Work on distributed and collaborative perception (multiple robots, robots and people) is in progress.
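To make the static-scene case concrete, the sketch below labels small point-cloud patches as ground or obstacle by fitting a local plane. The two-class setup, the geometric features, and the thresholds are illustrative assumptions, not the team's actual classifier:

```python
# A minimal sketch of terrain classification from LADAR point-cloud
# patches; feature choices and thresholds are assumed for illustration.
import numpy as np

def classify_patch(points, slope_thresh=0.3, height_thresh=0.25):
    """Label an (N, 3) patch of points as 'ground' or 'obstacle'."""
    centered = points - points.mean(axis=0)
    # Plane normal = direction of least variance (last right-singular vector).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    slope = 1.0 - abs(normal[2])          # ~0 for flat ground, ~1 for walls
    height_spread = np.ptp(points[:, 2])  # vertical extent of the patch
    if slope < slope_thresh and height_spread < height_thresh:
        return "ground"
    return "obstacle"

# Example: a flat horizontal patch versus a vertical surface.
flat = np.column_stack([np.random.rand(100), np.random.rand(100),
                        0.01 * np.random.rand(100)])
wall = np.column_stack([np.random.rand(100), 0.01 * np.random.rand(100),
                        np.random.rand(100)])
print(classify_patch(flat), classify_patch(wall))  # ground obstacle
```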


