New Research Directions for the National Geospatial-Intelligence Agency: Workshop Report
In addition, there is rapid development of mobile mapping systems equipped with synchronized navigation and imaging sensors, such as digital cameras installed in stereoscopic pairs, LiDAR, or both. Low-cost photogrammetric systems using calibrated consumer-grade digital SLR (single lens reflex) cameras and inexpensive software are becoming more available and accessible, even to the non-photogrammetrist. In some respects, Google Earth, NearMap, and PhotoSynth also might be considered photogrammetry as these software systems provide two- and three-dimensional representations that are derived from imagery. However, according to Dr. Frasier, these software systems lack the metric integrity inherent in photogrammetry, raising issues of calibration, sensor modeling, georeferencing, and rigorous data fusion.
Dr. Frasier identified several key research challenges for photogrammetry and geomatics, including sensor modeling and georeferencing, feature extraction, and above all, increased automation of the spatial information generation process. Several photogrammetric operations are routinely automated, including interior and exterior orientation; control point recognition; elevation extraction with user input for refinement, checking, and correction; aerial triangulation with user input for ground control; orthophoto generation; some aspects of display and visualization; and three-dimensional scene generation in mobile mapping (Xiong and Zhang, 2010). However, the automated extraction of information (i.e., vector data) from photo-textured three-dimensional point clouds (such as those generated by terrestrial LiDAR scans) is still an area of ongoing research (e.g., Chen et al., 2007). Other areas of emerging research include calibration of complex multi-sensor cameras and the alignment of cameras to inertial measurement units (IMUs) and LiDAR; sensor orientation modeling using rigorous sensor modeling versus rational polynomial functions for multi-scene processing of high-resolution satellite imagery; automatic feature extraction, particularly for building extraction, topographic mapping, and utility mapping; monoplotting in the absence of stereo for close range three-dimensional object reconstruction via single images and a digital elevation model; forensic measurement with consumer-grade cameras (van den Hout and Alberink, 2010); image sequence processing and analysis; enhanced object modeling and classification via full waveform LiDAR; biomass estimation via radar and LiDAR technologies (Kellndorfer et al., 2010); and enhanced classification for feature extraction. Dr. Frasier also noted the need for research in data fusion, which is discussed in Chapter 3.
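To illustrate the rational polynomial function sensor model mentioned above, the sketch below evaluates one image coordinate as a ratio of two cubic polynomials in normalized ground coordinates, following the common 20-term NITF coefficient ordering. The coefficients, offsets, and scales shown are illustrative placeholders, not values from any real sensor, and a production model would also rescale the result from normalized units back to pixels.

```python
# Minimal sketch of rational polynomial coefficient (RPC) evaluation, as used
# to map ground coordinates to image coordinates for high-resolution satellite
# imagery. Term ordering follows the common 20-term NITF convention; the
# sample coefficients below are illustrative only, not from a real sensor.

def rpc_terms(P, L, H):
    """The 20 cubic polynomial terms in normalized lat (P), lon (L), height (H)."""
    return [
        1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H,
        P * L * H, L ** 3, L * P * P, L * H * H, L * L * P,
        P ** 3, P * H * H, L * L * H, P * P * H, H ** 3,
    ]

def rpc_project(lat, lon, h, num_coef, den_coef, offsets, scales):
    """Evaluate one image coordinate (line or sample) as a ratio of two
    cubic polynomials in normalized ground coordinates."""
    # Normalize ground coordinates using the sensor-supplied offsets/scales.
    P = (lat - offsets["lat"]) / scales["lat"]
    L = (lon - offsets["lon"]) / scales["lon"]
    H = (h - offsets["h"]) / scales["h"]
    terms = rpc_terms(P, L, H)
    num = sum(c * t for c, t in zip(num_coef, terms))
    den = sum(c * t for c, t in zip(den_coef, terms))
    return num / den  # still in normalized units; a real model rescales to pixels

# Toy check: numerator = the longitude term L, denominator = 1, so the
# projected coordinate equals the normalized longitude.
num = [0.0] * 20; num[1] = 1.0
den = [0.0] * 20; den[0] = 1.0
offsets = {"lat": 0.0, "lon": 10.0, "h": 500.0}
scales = {"lat": 1.0, "lon": 2.0, "h": 1000.0}
print(rpc_project(0.0, 12.0, 500.0, num, den, offsets, scales))  # → 1.0
```

This replacement-model form is what makes multi-scene processing of high-resolution satellite imagery tractable when the rigorous physical sensor model is unavailable or proprietary.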
To summarize Dr. Frasier’s presentation, the principal challenge of photogrammetry and geomatics is the automated generation of spatial information from multiple sources of imagery and ranging data produced by ubiquitous, integrated multi-sensor systems. While the discussion of automating “feature extraction” highlights the challenges of automation, it is only a starting point, and the solution is likely to involve a combination of research from traditional remote sensing and from newer cross-cutting disciplines. Higher spatial and temporal resolutions will be required to support a range of functions, such as change detection, monitoring, and GIS database updating. The research needed to support these functions ranges from metric processing of remotely sensed multi-sensor data to feature extraction and modeling.
In the second presentation, Dr. Crawford focused on the state of the art in optical remote sensing technologies. Remote sensing is the science of acquiring imagery and information about an object or phenomenon using sensors that are not physically connected to the object, such as sensors on airborne or spaceborne platforms. Remote sensing technologies include high resolution panchromatic and multispectral sensors; hyperspectral sensors, which collect tens to hundreds of narrow spectral bands continuously across the electromagnetic spectrum; and LiDAR, which includes full waveform systems and photon counting techniques. Common