4 Human Interaction with Geospatial Information

The emphasis in earlier chapters has been on methods and technologies to acquire, manage, and take more complete advantage of geospatial information. This chapter focuses on the human users of these technologies. Too often, the domain experts who have the knowledge necessary to take advantage of this information cannot find or use it effectively without specialized training. Current geospatial technologies are even less suited for citizens at large, who have problems and questions that require geospatial information but who are not experts in the technology or in all aspects of the problem domain. For example, a couple interested in buying a piece of property to create a commercial horse farm and riding stable should be able to pose what-if scenarios (What if the zoning regulations change? What if a new sports stadium is built a mile away?) to identify any environmental, legal, or other limitations that might interfere with their proposed business.

For centuries, visual displays in the form of maps and images provided a critical interface to information about the world. Now, however, emerging technologies create the potential for multimodal interfaces—involving not just sight but also other senses, such as hearing, touch, gestures, gaze, and other body movements—that would allow humans to interact with geospatial information in more immediate and “natural” ways. One focus of this chapter is how recent advances in visualization and virtual/augmented environment technologies can be extended to facilitate work with geospatial information. The chapter outlines the issues associated with interaction styles and devices ranging from high-density, large-screen displays and immersive virtual environments to mobile PDAs and wearable
computers. It also discusses representation and interaction technologies specifically designed to support group collaboration.

To date, most research on human interaction with geospatial data has roots in one of three domains: visualization (including computer graphics, cartography, information visualization, and exploratory data analysis), human-computer interaction, and computer-supported cooperative work. There has been only limited integration across these domains.1 The committee notes that although continued research in each remains important, an integrated perspective will be essential for coping with the problem contexts envisioned, such as crisis management, urban and regional planning, and interactions between humans and the environment.

TECHNOLOGIES AND TRENDS

Most advances in computing and information technology affect some aspect of human interaction with geospatial information. Four in particular are driving forces, with the potential to enable richer, more productive interactions:

Display and interface technologies. As noted above, human interaction with geospatial information has been linked to visual display for centuries (e.g., the use of paper maps to represent geographic space in the world). Recent advances include developments in immersive virtual environments, large and very high-resolution panel displays, flexible (roll-up) displays, multimodal interfaces, and new architectures supporting usability.

Distributed system technologies. Technologies that support remote access to information and remote collaboration also are having a dramatic impact on how people interact with information of all kinds. Among these are high-bandwidth networking, wireless networks and communication, digital library technologies, and interactive television.

Mobile, wearable, and embedded technologies. Until recently, most human interaction with computerized or displayed geospatial information required desktop visual displays.
Particularly relevant new technologies include wireless personal digital assistants (PDAs) that support both data collection and information dissemination; augmented reality devices that support the matching of physical objects with virtual data objects; distributed sensor fusion techniques to support multivariate visualization in the field; and pervasive computing infrastructures (e.g., intelligent highways and other infrastructures) that can interact with mobile humans or computational agents to inform them about local context.

Agent-based technologies. Software agents now are being applied in a wide array of contexts. As these technologies mature, there will be considerable potential to extend them for facilitating human interaction with geospatial information. Among the agent-based technologies that show promise are intelligent assistants for information retrieval, agent support for cooperative work and virtual organizations, and computational pattern-finding agents.

The geospatial information science community is working on how these technologies can be made easier to use. Attention is being paid to new technologies for representing geospatial data (to the eye as well as to other senses, particularly touch and hearing2) and to increasing the usability and usefulness of interfaces for individuals and for collaborating groups (Jankowski and Nyerges, 2001; MacEachren and Kraak, 2001; Mark et al., 1999). The remainder of this section provides an overview of the current state of the art in three domains: visualization and virtual environments, human-computer interaction, and computer-supported cooperative work.

1 Two notable exceptions are National Center for Geographic Information and Analysis research initiative 13, “User Interfaces for Geographic Information Systems,” and the ACM SIGGRAPH Carto Project, a 3-year collaboration with the International Cartographic Association focused on geovisualization. More information is available at <http://www.siggraph.org/project-grants/carto/cartosurv.html>.

Visualization and Virtual Environments

Developments in scientific visualization and virtual environments have been closely coupled with advances in computer graphics. The primary impact of scientific visualization research on geospatial information has been the ability to obtain realistic terrain representations, zoom across scales, and create fly-through animations.
These methods are now relatively common and are included both in commercial products that run on desktop computers and in immersive virtual environments (e.g., CAVE Automatic Virtual Environments). Complementary research has enabled the creation of movie-quality time-series map animations. Both research thrusts have exploited dramatic advances in computer processing power and in parallel algorithms for dealing with very large data sets. They also take advantage of—and in some cases drive the development of—increasingly high-resolution displays (e.g., multiprojector “powerwalls”). These allow users to see critical details in complex dynamic phenomena, such as subtle eddies that are critical to understanding global ocean circulation models. Researchers at the Air Force Research Laboratory have even developed a portable multipanel display for use in field command-and-control situations. The display requires interaction with a stream of information arriving via wireless connections from a diverse collection of distributed sensors (Jedrysik et al., 2000).

Much of the research in scientific visualization has emphasized spectacular visual renderings rather than mechanisms for human interaction. In situations where data sets are very large, interactivity often is sacrificed—in ways that vary from a slower frame rate to offline rendering and later playback—so that scarce computing resources can be devoted to ever more detailed rendering. This trade-off, although perhaps reasonable when the focus is on single variables (e.g., ocean temperatures), does not support the kinds of creative exploration demanded by geospatial information, in which problems are ill defined and highly multivariate and the relative importance of variables is not known a priori. Nor are highly detailed renderings of predetermined scenes sufficient for geospatial applications in which dynamic exploration and interaction are critical capabilities (such as in the Digital Earth scenario described in Chapter 1). This situation has stimulated research in geovisualization, which integrates approaches from scientific visualization, cartography, information visualization, exploratory data analysis, and image analysis (MacEachren and Kraak, 2001). The results include methods enabling flexible, highly interactive exploration of multivariate geospatial data.

2 There is a base from which to address these issues in work on data sonification (the representation of information through abstract sonic means) within GIScience (see Fisher, 1994, and Krygier, 1994) and within information “sensualization” (Levkowitz et al., 1995; Lodha et al., 1996; and Ogi and Hirose, 1997). Similarly, recent work with haptic interfaces offers another place to start (Asghar and Barner, 2001).
Recent efforts have focused on linking geovisualization more effectively to computational methods for extracting patterns and relationships from complex geospatial data.3

Research in virtual environments, however, has emphasized support for human interaction, both with the display and with other human actors. A key objective has been the development of interaction methods that are more natural than using a keyboard or mouse and that take advantage of three-dimensional, often stereoscopic displays. Another objective has been the leveraging of high-performance computing to support real-time interaction as well as remote collaboration. Here, too, a new area of research has emerged in response to the specialized characteristics of geospatial information. GeoVirtual environments extend virtual interaction technologies to support both geospatial data analysis and decision support activities.

3 These topics were addressed recently at the European Science Foundation Conference “Geovisualisation—EuroConference on Methods to Define Geovisualisation Contents for Users Needs,” held in Albufeira, Portugal, March 2002. See <http://www.esf.org/euresco/02/lc02179> for more information.

Human-Computer Interaction

New technologies—such as very large, high-resolution displays that can enable same-place collaborative work and smaller, lighter devices that generate or use georeferenced information anywhere and that are linked to wireless communications—now make it possible for everyone to have access to geospatial information, everywhere. A substantial amount of research has been conducted in the general area of human-computer interaction (HCI), providing an initial foundation for understanding how humans interact with geospatial information. Much of that work, however, has focused on how humans interact with the technology itself rather than with the concepts being represented through the use of technology. Although the new technologies pose an initial hurdle for users of geospatial applications, the fundamental challenge is how to support human interaction with the geospatial information itself. In other words, the challenge is in moving beyond HCI to human-information interaction. Here, there are few research results upon which to build new methods and technologies. Cartographers have been concerned with the empirical assessment of map usability since the 1950s, but their emphasis was on extracting information from static visual representations rather than from interactive representations and analysis (Slocum et al., 2001). Similarly, although the HCI community has begun to examine the effectiveness of information visualization methods and tools, most studies have centered on information displays rather than on mechanisms for interacting with them (Chen and Yu, 2000).

Computer-Supported Cooperative Work

Most real-world, scientific geospatial applications involve cooperative work by two or more persons.
To date, however, most geospatial technologies have been designed to support just one user at a time. A meeting of the National Center for Geographic Information and Analysis (NCGIA)4 prompted initial work on how traditional geospatial information systems could be extended to support group decision-making processes, but the work is still very preliminary. Fortunately, more general research on computer-supported collaborative work has yielded a substantial body of literature as well as a growing set of commercial and noncommercial tools. The technology has begun to mature to the point of inclusion in off-the-shelf office computing software. For instance, change tracking and other asynchronous collaboration features are now standard in document processing software, and same-time/different-place meeting tools allow the sharing of video, audio, text, and graphics. Nevertheless, significant barriers remain before this technology can contribute meaningfully to geospatial applications. There has been no attention to how the new collaborative features might be integrated with geospatial analysis activities and only limited attention to the role of interactive visualizations in facilitating cooperative work.

4 The NCGIA is an independent research consortium dedicated to basic research and education in geographic information science and its related technologies. The scientific report of the meeting is available online at <http://www.ncgia.ucsb.edu/research/i17/spec_report.html>.

RESEARCH CHALLENGES

This section considers the system and human-user components of four interrelated issues, each of which is central to human interaction with geospatial information: (1) taking full advantage of increasingly rich sources of geospatial data in support of both science and decision making, (2) making geospatial information accessible and usable for everyone, (3) making geospatial information accessible and usable everywhere, and (4) enabling collaborative work with geospatial information. The focus on these issues (and the associated challenges and opportunities) resulted from the combination of preliminary work by the committee, contributions by workshop participants through working papers and during the workshop, solicited input from other experts, and post-workshop analysis by the committee. Each issue is discussed below separately.
Harnessing Information Volume and Complexity

The exponential growth of geospatial data will provide us with opportunities to enable more productive environmental and social science, better business decisions, more effective urban and regional planning and environmental management, and better-informed policy making at local to global scales. Across all application domains, however, the volume and complexity of the geospatial information required to answer hard scientific questions and to inform difficult policy decisions create a paradox—whereas the necessary information is more likely to be available, its sheer volume will make it increasingly hard to use effectively.
Harnessing and Extending Advances in the Visual Representation of Knowledge

As geospatial repositories grow in size and complexity, users will need more help to sift through the data. Specifically needed are tools that can exploit advances in visualization and computational methods to trigger human perceptual-cognitive processing power. Three of the key needs are outlined below.

First, there is a critical need for software agents to automate the selection of data-to-display mappings. Although recent advances in visualization methods and technologies have considerable potential to help meet this goal, they lack mechanisms for matching representation forms to the data being represented in ways that take full advantage of human perceptual-cognitive abilities (and that avoid potentially misleading representations). The real challenge is to develop context-sensitive computational agents that automate the choice of data-to-display mappings, freeing the user to concentrate on data exploration. In this sense, “context” must encompass not just the nature of the information being interacted with and the display/representation environment being used but also the characteristics of the problem domain.

A second, complementary need is for dynamic, intelligent category-representation tools. These would enable flexible exploration and modification of conceptual categories by human users and would facilitate the interoperability of different geospatial systems. Of particular importance is how methods and technologies can support the different conceptual categories brought by individual users to a given data analysis task. A simple example is the category “forest,” which connotes harvestable timber and high densities of relatively large trees (perhaps 75 percent canopy) to a forester but connotes cover for troops (with a much lower percentage of canopy required to be in the category) to a military commander.
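The “forest” example can be made concrete with a small sketch of a user-adaptable category definition. The role names and threshold values below are illustrative assumptions (the text gives only the forester’s rough 75 percent canopy figure), not drawn from any real ontology:

```python
# Sketch of a user-adaptable conceptual category. The same land-cover
# cell falls in or out of "forest" depending on whose definition is
# applied. Thresholds are illustrative, not from a real ontology.

CATEGORY_DEFINITIONS = {
    # minimum canopy fraction for a cell to count as "forest"
    "forester": 0.75,           # harvestable timber: dense canopy
    "military_commander": 0.30, # cover for troops: sparser canopy suffices
}

def is_forest(canopy_fraction: float, user_role: str) -> bool:
    """Apply the role-specific definition of the 'forest' category."""
    return canopy_fraction >= CATEGORY_DEFINITIONS[user_role]

cell_canopy = 0.40  # a cell with 40 percent canopy cover
print(is_forest(cell_canopy, "forester"))            # False
print(is_forest(cell_canopy, "military_commander"))  # True
```

A category-representation tool of the kind envisioned here would let each user community register and edit such definitions, and would translate between them when data move across systems.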
The representation of conceptual categories is an important tool in developing the formal ontologies discussed in Chapter 3. Formalized ontological frameworks can define the differences in ontology among different disciplines and manage multiple definitions of a concept (such as the “forest” category noted above). One part of a solution is to develop visualization (and perceptualization) methods and tools that support navigation of the ontologies created, explanation and demonstration of the resulting conceptual structures and complex transformations carried out on the highly processed data, and integration of the results directly into the scientific process (by providing standard ways to manipulate geospatial data across applications). Hence, categories developed through analysis of highly multivariate data—e.g., aggregation of data from remote sensing, population and agricultural censuses, zoning, and other sources—in which the
definition is place- and context-specific (e.g., “rural” land) pose difficult challenges to current technologies.

Methods for representing uncertainty constitute a third need. Users cannot make sense of data retrieved from a large, complex geospatial repository without understanding the uncertainties involved. Some research efforts have addressed the visualization of geospatial data quality and uncertainty (see Box 4.1 and Figure 4.1), but existing methods do not scale well to data that are very large in volume or highly multivariate (a problem also identified in Chapter 3 in connection with current data mining approaches and geospatial algorithms). Nor has sufficient attention been directed to helping analysts use uncertainty representations in hypothesis development or decision-making applications. Achieving real progress will require advances in modeling the components of uncertainty and in representing the uncertainties in ways that are meaningful and useful.

BOX 4.1 Coping with Uncertainty in a Geospatial Field

Data from many applications can be represented as a two-dimensional (2D) field in which each data point is a distribution. One example is data from the Earth Observing System, in which one treats the spectra at each pixel as a distribution of data values. A critical challenge with these data is to develop methods for coping with their uncertainty. Conditional simulation, also called stochastic interpolation, is one way to model uncertainty about predicted values in such a geospatial field (Dungan, 1999). It is a process by which spatially consistent Monte Carlo simulations are constructed, given some data and the assumption that spatial correlation exists. Conditional simulation algorithms yield not one but several maps, each of which is an equally likely outcome from the algorithm; each equally likely map is called a realization. Furthermore, these realizations have the same spatial statistics as the input data. In Figure 4.1, each individual realization is a possible scenario given the same set of ground measurements and satellite imagery. Taken jointly, these realizations describe the uncertainty space about the map. That is, the density estimate (from, for example, a histogram) of the data values at a pixel is a representation of the uncertainty at that pixel.

The visualization task, then, is to facilitate the understanding of uncertainty over the domain. One way is to simply plot the histogram of the distribution for every pixel. The obvious drawbacks to this approach are the screen resolution requirements and the potentially very cluttered presentation. Another approach, shown in part (a), is to summarize each distribution into a smaller set of meaningful values that are representative of the distribution. Parametric statistics (e.g., mean, standard deviation, kurtosis, and skewness) are collected about each distribution. This forms an n-tuple of values for each pixel that then can be visualized in layers. However, there are drawbacks to this approach as well—namely, the limited number of parameters that can be displayed, the loss of information about the shape of the distributions, and the poor representations if the distribution cannot be described by a set of parametric statistics. Clearly, alternative nonparametric methods need to be pursued. Methods illustrated in part (b) allow the user to view parts of the 2D distribution data as a color-mapped histogram. Here, the frequency of each bin in a histogram is mapped to color, thereby representing each histogram as a multicolored line segment. A 3D histogram cube then represents a 2D distribution of data. Interactivity helps in understanding the rest of the field, but there is still the need (as yet unrealized) to be able to “see” the distribution over the entire 2D field at once.

A more subtle problem is capturing the spatial correlation of uncertainty over the domain. Using distributions of values aggregated from multiple realizations may be a good representation of the probabilities of values at a particular pixel, but that representation does not take into account any spatial correlation that may exist among the values in the vicinity of that pixel. Hence, another challenge is a richer representation of uncertainty that incorporates spatial correlation, and the visualization of such data sets.

SOURCE: Adapted from a white paper, “Visualizing Uncertainty in Geospatial Data,” prepared for the committee’s workshop by Alex Pang; for more detail, see Kao et al. (2001).
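The per-pixel summaries described in Box 4.1—parametric statistics as in part (a), binned histograms as in part (b)—can be sketched as follows. The ensemble here is synthetic random data standing in for real conditional-simulation realizations; the sizes, bin count, and value range are illustrative assumptions:

```python
# Per-pixel uncertainty summaries over an ensemble of "realizations",
# as in Box 4.1. The ensemble is synthetic; a real workflow would
# generate it with a conditional (stochastic) simulation algorithm.
import random
import statistics

random.seed(42)
n_realizations, width, height = 50, 4, 3

# ensemble[r][y][x]: value at pixel (x, y) in realization r
ensemble = [[[random.gauss(10.0, 2.0) for _ in range(width)]
             for _ in range(height)]
            for _ in range(n_realizations)]

def pixel_summary(x, y):
    """Parametric summary (cf. part (a)) for one pixel."""
    values = [ensemble[r][y][x] for r in range(n_realizations)]
    return {"mean": statistics.mean(values),
            "stdev": statistics.stdev(values),
            "median": statistics.median(values)}

def pixel_histogram(x, y, bins=5, lo=0.0, hi=20.0):
    """Binned distribution (cf. part (b)) for one pixel."""
    counts = [0] * bins
    for r in range(n_realizations):
        v = ensemble[r][y][x]
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[max(i, 0)] += 1  # clamp out-of-range values to edge bins
    return counts

summary = pixel_summary(0, 0)
hist = pixel_histogram(0, 0)
```

Mapping each pixel’s `hist` to a colored line segment, stacked into a volume, gives the histogram-cube idea of part (b); the n-tuple in `summary` corresponds to the layered parametric display of part (a).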
The situation is complicated by the fact that many aspects of uncertainty relevant to human interaction with geospatial information are not amenable to modeling.

Geospatial Interaction Technologies

Increases in data resolution, volume, and complexity—i.e., the number of attributes collected for each place—can overwhelm human capacities to process information using traditional visual display and interface devices. Recent advances in display and interaction technologies promise to enhance our ability to explore and utilize geospatial data from extremely large repositories. However, current desktop-based Geographic
Information Systems (GISs) and geovisualization tools do not take effective advantage of human information processing capabilities, nor (as noted above) do they scale to analyses of very large or highly multivariate data sets. Methods are needed that support dynamic manipulation (e.g., zooming, querying, filtering, and labeling) on the fly, for millions of items. Considerable research investments will be required to realize the potential offered by the new technologies.

The first challenge is the development of inexpensive, large-screen, high-resolution display devices. Currently, the resolution of display technology remains nearly an order of magnitude less than that of print technology (i.e., a 20-inch monitor at UXGA resolution will display about 1.9 million pixels vs. about 69.1 million on a printed page). Higher resolutions could give the needed detail, whereas large size would take the geographic context of problems into account more effectively (particularly in support of collaborative work). Note that the large-screen, high-resolution technology must be affordable for classrooms, science laboratories, libraries, urban or regional planning offices, and similar settings for those communities to benefit from them.

Just as traditional display technologies limit the representation of geospatial information, so, too, do traditional interfaces. First, the interaction devices themselves are too restrictive: a keyboard and mouse are not flexible or expressive enough to navigate effectively through large

FIGURE 4.1 The data set highlighted here was generated using both ground measurements (forest cover from 150 locations throughout a region) and coincident satellite imagery (Landsat image of a spectral vegetation index). In (a) the bottom plane is the mean field colored from nonforest (cyan) to closed forest (red). The upper plane is generated from three fields: the bumps on the surface are from the standard deviation field, colored by the interquartile range; the heights of the vertical bars denote the absolute value of the difference between the mean and median fields, colored according to the mean field on the lower plane. To reduce clutter, only difference values exceeding 3 are displayed as bars. A histogram cube is depicted in (b). The two slices through the volume depict the histograms of each point along two lines crossing the 2D field. The distributions are mostly unimodal and skewed toward lower values. SOURCE: Reprinted from Kao et al. (2001) and Djurcilov and Pang (2000) by permission of IEEE.
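The display-versus-print comparison quoted earlier can be checked with simple arithmetic. The sketch below assumes UXGA is 1600 x 1200 pixels and reads the 69.1 million figure as roughly a 6 x 8 inch printed image area at 1200 dots per inch; the print assumptions are one plausible reconstruction, not stated in the text:

```python
# Order-of-magnitude comparison of display vs. print resolution.
# Assumptions (not stated in the text): UXGA is 1600 x 1200 pixels;
# the printed page is taken as a 6 x 8 inch image area at 1200 dpi.

uxga_pixels = 1600 * 1200        # ~1.9 million addressable pixels
print_dpi = 1200
print_area_in2 = 6 * 8           # square inches of printed image area
print_pixels = print_dpi ** 2 * print_area_in2  # ~69.1 million dots

print(f"UXGA monitor: {uxga_pixels / 1e6:.1f} M pixels")
print(f"Printed page: {print_pixels / 1e6:.1f} M dots")
print(f"Ratio: {print_pixels / uxga_pixels:.0f}x")
```

Under these assumptions the printed page carries roughly 36 times as many addressable dots as the monitor, which is the gap the text describes.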
around them. Imagine, for example, the ability to call up place-specific information about nearby medical services, to plan emergency evacuation routes during a crisis, or to coordinate the field collection of data on vector-borne disease.9 This section complements Chapter 2 (where the underlying technologies that support location-aware computing are considered) but focuses on two of the most intriguing aspects of ubiquity from the perspective of human users: facilitating the use of geospatial data from outside office or home settings and using geospatial information to enhance human perceptual capabilities.

Mobile Access to Geospatial Information

Underlying the goal of “geospatial everywhere” is the ability to obtain information on demand, wherever the user happens to be. This will necessitate the development of technologies and methods specifically accommodating user mobility. Traditional visual representation methods, developed for desktop (or larger) displays, are not effective in most mobile situations, where display screens are small and local storage and bandwidth capacities are severely limited. Research is needed to develop context-sensitive representations of geospatial information and to accommodate data subject to continual updating from multiple sources. These issues differ from the perceptualization issues already discussed in connection with the need for small, lightweight, and mobile technologies that can be used in public spaces. Although the available technologies provide limited visual representations of geospatial information in field settings, visual display remains the most efficient and effective method of geospatial access for sighted users. Accordingly, it makes sense to invest in the development of portable, lightweight display technologies, such as electronic paper, foldable displays, handheld projectors (which can be pointed at any convenient surface), and augmented reality glasses of the sort discussed in the next section.
To exploit these technologies, we also must invest in appropriate interaction paradigms, such as voice- and gesture-based interfaces applied to PDA-like devices. Because the geographical context will be somewhat constrained, it may be possible to devise more “natural” interfaces. For instance, because the system will know where the user is located when a request is made, the spatial language of gestures or sketching movements may be interpreted more literally. Integrating two-dimensional (or three-dimensional)10 mobile displays, which support natural mechanisms for interacting with maplike representations, and augmented reality methods and technologies (detailed below) poses a range of technology and HCI challenges.

Supporting the acquisition and use of geoinformation from the field also will require attention to interaction issues associated with database access and knowledge discovery. Both efficient rendering and efficient transmission of geospatial representations are essential. A long history of research on map generalization provides an important conceptual base for meeting this challenge,11 but that research does not deal with real-time generation of dynamically changing representations. Rather, coordinated research drawing on both computer science (efficient algorithms) and cartography (understanding of the geospatial information abstraction process) is required. Intelligent mechanisms for transmitting data, such as context-sensitive data organization and caching, also must be developed (see also the challenges posed by the management of location-aware resources, discussed in Chapter 2).

9 See “Developing Digital Neural Networks for Worldwide Disease Tracking and Prevention,” a white paper written by Eric R. Conrad of the Pennsylvania Department of Environmental Protection for the committee’s workshop.

Mobile Enhancement of Human Perception

Mobile augmented reality technologies use virtual information representations (visual, aural, or other) to enhance human perception. Surveillance camera images that make crime perpetrators more recognizable are a simple nonmobile example. Mobile augmented reality (see Box 4.4) does this dynamically while the user moves through an environment. Heads-up displays, for instance, have been used to help jet-fighter pilots find their targets and to assist civilian drivers in seeing objects in the road ahead when visibility is poor. Because mobile augmented reality requires both detailed geospatial databases describing the “fixed” world and location-aware computing support to match the location of the user with that description, it is a classic example of a spatiotemporal application of geospatial information.
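One classical building block behind the map generalization research noted earlier in this section is line simplification. The sketch below shows the well-known Douglas-Peucker algorithm; the coordinates and tolerance are illustrative, and a production system would additionally need the real-time, context-sensitive behavior the text calls for:

```python
# Minimal Douglas-Peucker line simplification: drop points that lie
# within `tolerance` of the chord between retained points. Inputs are
# illustrative (x, y) tuples.

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping points that deviate > tolerance."""
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord between the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right  # merge, avoiding the duplicated split point

coastline = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
simplified = douglas_peucker(coastline, tolerance=0.5)
print(simplified)  # [(0, 0), (4, 0)]
```

With a coarse tolerance the nearly straight line collapses to its endpoints, which is exactly the kind of data reduction a small, bandwidth-limited mobile display needs; a tighter tolerance preserves every vertex.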
As the geodata infrastructure expands, such applications will become increasingly important. Consider, for example, what it might mean in terms of human life if firefighters could look at a burning building and see (as a transparent layer superimposed over the building) a representation of the activities on each floor (retail space on the first floor, a fitness center on the second, offices for the next five, and apartments above).

10 A research group at the Fraunhofer Institute for Computer Graphics in Germany has developed prototype methods for 3D display of geospatial information on mobile, handheld devices (Coors, in press).

11 The International Cartographic Association has played an important role in this research. See Weibel and Jones (1998) and <http://www.lsgi.polyu.edu.hk/WorkshopICA/CfP_Hongkong_2001_v32.pdf>.

BOX 4.4 Mobile Augmented Reality

Mobile augmented reality (MAR) combines computational models, location and head-orientation tracking, and algorithms for information filtering and display to enhance human perceptual capabilities. In this example, the user wears a see-through, head-mounted display; his position and head orientation are tracked as he moves. With the use of a model of the immediate environment that is stored on the wearable computer, computer graphics and text are generated and projected onto the real world using the heads-up display. The generated information is displayed in such a way as to correctly register (i.e., align) on the real world, thereby augmenting the user’s own view of the environment. Combining advanced research in MAR-specific algorithms for the user interface with recent developments in wearable computer, display, and tracking hardware has made it possible to construct mobile augmented reality systems using commercial, off-the-shelf components.

Among the most challenging geospatial applications of MAR is that of providing situational awareness to military personnel in the so-called “urban canyon.” Urban environments are complex, dynamic, and inherently three-dimensional. MAR can provide information such as the names of streets (street signs may be missing), building names, alternative routes, and detailed information such as the location of electrical power cutoffs. The locations of potential threats—such as hidden tunnels, mines, or gunfire—can be provided, and routes can be modified on the basis of this information. Note that this information is displayed in a hands-off manner that does not block the user’s view of the real world, so he or she is able to focus attention on the task at hand. When linked by a network, these systems can enable the coordination of isolated ground forces. MAR usage could be helped along not only by continuing the MAR-specific research in interface/display and tracking/registration algorithms but also by developing methods to provide very high-resolution, correctly georegistered databases and new geographic information systems that can readily adapt to dynamic changes in the urban environment.

SOURCE: Adapted from a white paper, “Geospatial Requirements for Mobile Augmented Reality Systems,” prepared for the committee’s workshop by Lawrence Rosenblum.

Mobile augmented reality imposes constraints on interaction and display that go well beyond those already discussed. One issue is how the system should determine which aspects of reality to augment with which components of information. Real-world point-and-click (originally described in Chapter 2) offers one approach. Building on the desktop graphical user interface (GUI) metaphor, it allows users to interact with objects (in this case, integrated real/virtual objects) using a pointer device such as a gyro mouse or a laser pointer.
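Box 4.4 notes that generated information must "correctly register (i.e., align) on the real world." As a minimal illustration of what registration involves, the sketch below maps a georeferenced landmark to a horizontal screen position from the user's tracked position and heading. This is an assumption-laden 2D simplification of my own, not the report's method; a real MAR system uses full 3D pose, camera intrinsics, and lens-distortion correction.

```python
import math

def screen_x(user_xy, heading_deg, landmark_xy, fov_deg=60.0, width_px=800):
    """Project a world landmark to a horizontal pixel position on a
    see-through display, given the user's tracked position and heading.
    Returns None when the landmark lies outside the field of view."""
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))             # 0 deg = north (+y)
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(rel) > fov_deg / 2:
        return None                                        # not visible
    # linear mapping: -fov/2 -> left edge, +fov/2 -> right edge
    return (rel / fov_deg + 0.5) * width_px

# A user at the origin facing north sees a landmark due north at screen center.
center = screen_x((0.0, 0.0), 0.0, (0.0, 100.0))
```

Even in this toy form, the two failure modes the box's research agenda targets are visible: an error in the tracked `heading_deg` shifts every label sideways, and an error in the geospatial database shifts individual labels off their real-world objects.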
An alternative metaphor, real-world gesture-and-ask, combines voice, gestures, and other information (such as the direction of the user’s gaze) so that the user can interact with data sources without a handheld pointer.

To make mobile augmented reality useful for emergency management, military deployment, and related rapid-response situations, systems must be able to cope with rapid changes, not only changes in the position of the observer but also ongoing changes in the observer’s environment. This means that information about the environment must be collected at sufficient spatial
and temporal resolution, and at sufficiently frequent intervals, to support real-time behavior. Ultimately, this will require the integrated exchange of information among many devices, including distributed repositories of geodata, embedded information collection devices, temporary autonomous devices for collecting information, and mobile receivers providing users with updated information.

The examples of mobile augmented reality described above all deal with enhancing human vision. Research here could also yield significant benefits for sight-impaired individuals, helping them overcome many obstacles to freedom of movement. High-resolution geospatial data could deliver key information about the immediate environment to mobile users through sounds or tactile feedback. Similar techniques could be used to augment human hearing. Research investments in this area not only could make it possible for users to hear sounds outside their normal perceptual range or to mitigate hearing deficiencies but also could provide added sensory input in situations where vision already is fully engaged. The test bed proposed in Chapter 2 could be used to conduct an in-depth evaluation and refinement of the techniques proposed in this section.

Collaborative Work with Geospatial Information

Most of the science and decision making involved in geoinformation is the product of collaborative teams. Current geospatial technologies are a limiting factor because they do not provide any direct support for group efforts. Collaborative methods and technologies could bring improvements in many geospatial contexts.
They could enable teams of scientists to cooperatively build integrated global-regional models of environmental processes and their drivers; allow group-based site selection for key facilities (e.g., a brownfield development or a nuclear waste disposal site); support homeland security activities such as identifying potential targets, patterns of activity, or space-time relationships in intercepted messages; and enable collaborative learning experiences that incorporate synchronous and asynchronous interactions among distributed students, teachers, and domain experts. The core challenge is to support effective geocollaboration by developing technologies such as group-enabled GISs, team-based decision support systems, and collaborative geovisualization.

Understanding Collaborative Interactions with Geoinformation

In spite of the large body of research in computer-supported collaborative work and HCI, we know relatively little about technology-enabled collaborative human interaction with geospatial information. A systematic program of research is needed that focuses on group work with geospatial data and on the technologies that can enable and mediate that work. Currently, the only practical way for teams to collaborate on geospatial applications is to gather in a single place and interact with analysis tools by having a single person “drive” the software on behalf of the group. Fundamental changes in geospatial interfaces will be needed to support two or more users at once. Although some of these changes relate to low-level system issues (e.g., the Windows operating system acknowledges only one mouse cursor), the focus in this report is on extending geospatial methods and tools to support group development and assessment activities.

In general, collaborative work can be characterized by its spatial and temporal components. That is, the locations of participating individuals may be the same or different (i.e., face-to-face vs. distributed), and the individuals may interact at the same time or at different times (synchronous vs. asynchronous). Technologically, it is the spatial distinction that is most important, because radically different kinds of technologies are needed to facilitate distributed work, particularly when it is conducted synchronously. Fundamental HCI research is needed to understand the implications of space and time for the design and use of tools for geocollaboration. It is not clear, for instance, to what extent different interfaces and representations are needed for each of the four cases. Current HCI research on geospatial collaborative work centers on engineering goals—that is, on how to make tools that function effectively in distributed or asynchronous environments. Research investments also are needed at the more fundamental level of design principles for geocollaboration that can generalize more readily to new collaborative contexts and technologies.
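The space-time characterization above is the classic two-by-two matrix from computer-supported cooperative work research. A toy encoding makes the four cases explicit; the setting names are hypothetical placeholders of my own, not examples drawn from the report.

```python
from enum import Enum

class Place(Enum):
    SAME = "face-to-face"
    DIFFERENT = "distributed"

class Time(Enum):
    SAME = "synchronous"
    DIFFERENT = "asynchronous"

def quadrant(place: Place, time: Time) -> str:
    """Name the space-time quadrant a collaboration setting falls into."""
    return f"{place.value} / {time.value}"

# Hypothetical geocollaboration settings mapped onto the four cases.
settings = {
    "war-room map table":      (Place.SAME, Time.SAME),
    "shared annotation layer": (Place.SAME, Time.DIFFERENT),
    "joint telepresence view": (Place.DIFFERENT, Time.SAME),
    "geodata review queue":    (Place.DIFFERENT, Time.DIFFERENT),
}
matrix = {name: quadrant(p, t) for name, (p, t) in settings.items()}
```

The open research question posed in the text is precisely whether each of these four quadrants demands its own interfaces and representations, or whether common design principles can span them.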
Collaborative Geospatial Decision Making

Decision-making activities that use geodata as core input are a particularly important application domain requiring advances in both collaborative technologies and understanding of their use. Examples of such activities include urban and regional planning, environmental management, the selection of locations for businesses, emergency preparedness and response, and the deployment of military personnel. Geospatial decision making is now usually a same-place activity, but that could change dramatically as technology begins to support geocollaboration.

A key challenge in geospatial decision making is to support group exploration of what-if scenarios. One possible solution is to extend and integrate existing technologies for the simulation of geographic processes
(both human and natural), access to distributed geodata repositories, and facilitation of group consensus building. An alternative solution would be to develop, from the ground up, methods and tools specifically intended to enable collaborative exploration of what-if scenarios. In either case, attention must be given not just to the technologies that support human interaction with dynamic geospatial models but also to interactions among team participants as they work with the models.

Collaborative work in problem domains such as crisis management or situational awareness will require technologies for viewing and responding to geospatial information in real time and for sharing diverse perspectives on the information and the problem to which it is being applied. In addition, research will be needed into techniques for measuring uncertainty in data for strategic assessment and decision-making activities, as well as into mechanisms for identifying and compensating for collaborators who have access to only pieces of the group’s information. The latter is a particularly difficult, pervasive problem for real-time geocollaboration. Participants often have access to different sources of information, each of which may be context sensitive, limited in scope, incomplete, and of variable quality (consider, for example, a disaster management scenario involving individuals in the field and in the command center). Limits on sharing information may be imposed by technological limitations of broadcasting or display capabilities, privacy and security concerns, time factors (crisis decisions often must be made immediately), and the fact that participants may not have the breadth of expertise to interpret all the relevant geodata.

Finally, current efforts center on the use of technology to make distributed collaboration work as much like same-place work as possible rather than on enhancing the process of collaboration itself.
Additional research is needed to identify how collaborative efforts could take better advantage of what different participants bring to the process. This will be particularly important for decision-making scenarios (such as those already outlined) in which information access and expertise vary widely from one team member to another.

Teleimmersion

Teleimmersion12 can be considered a unifying grand challenge for multidisciplinary research at the intersection of geospatial information science and information technology. It has been defined as the use of immersive, distributed virtual environments in which information is processed remotely from the users’ display environments (DeFanti and Stevens, 1999). The goal of teleimmersion is to provide natural virtual environments within which participants can meet and interact in complex ways. Because these environments become human-scale “spaces” and the collaboration often will deal with geographic-scale problems, a coordinated approach to human interaction with geoinformation and to teleimmersion is likely to have many payoffs.

12 The committee thanks Marc Armstrong for his assistance in developing this section. See “The Four-Way Intersection of Geospatial Information and Information Technology,” a white paper written by Dr. Armstrong for the committee’s workshop.

Achieving this goal will require focused research in at least five separate, but linked, domains:

High-performance computing. Significant computation is needed to process the massive volumes of data and complex models and to render scenes realistically—all in near real time. If decision makers have to wait hours to compute and render results for a summit meeting that will last only minutes, the number of scenarios they can consider is obviously limited. Research is needed to determine when and how geographical problems should be decomposed for distributed computing environments such as cluster computers or the computational grid.

High-performance networking. Teleimmersion requires moving large data sets and, even more importantly, overcoming the latency and jitter problems introduced by remote, synchronous interactions. Indeed, latency can render a teleimmersive computing environment unusable because of the disorientation that occurs whenever there is a long lag between a user’s physical movement and the virtual representation of that movement. One way to overcome such problems is to establish quality-of-service guarantees (Bhatti and Crowcroft, 2000).

Human-computer interaction. Open issues include appropriate interface metaphors and support for gestural interaction.
For example, it is not clear what level of realism is appropriate for avatars (virtual personae) in multiuser systems. Face-to-face communication relies on gestures and facial expressions, and some researchers believe that realistic avatars facilitate more open communication among participants (Oviatt and Cohen, 2000).

Visualization. To fully exploit the potential of teleimmersion, new research on the visualization of high-dimensional, virtual geographies is needed. Key issues include determining what level of geographical realism is appropriate in a virtual, geoinformation-based world and the role of animation in teleimmersive environments.

Collaborative decision support. The migration from more traditional computer-supported cooperative work to collaborative virtual environments presents a number of significant research challenges (Benford et al.,
2001, present a comprehensive outline). Even if all of them can be addressed successfully, research investments will need to be made in issues specific to geocollaboration, such as those outlined earlier in this chapter.

Although discussed here in the context of teleimmersion, these are all cross-cutting domains at the intersection of geospatial information and information technology that have appeared at multiple points in this report. Each will assume increased importance as geospatial applications become increasingly prominent in our daily lives.

REFERENCES

Armstrong, M.P. 1994. “Requirements for the Development of GIS-Based Group Decision-Support Systems.” Journal of the American Society for Information Science, 45(9):669-677.

Asghar, M.W., and K.E. Barner. 2001. “Nonlinear Multiresolution Techniques with Applications to Scientific Visualization in a Haptic Environment.” IEEE Transactions on Visualization and Computer Graphics, 7(1):76-93.

Benford, S., C. Greenhalgh, T. Rodden, and J. Pycock. 2001. “Collaborative Virtual Environments.” Communications of the ACM, 44(7):79-85.

Berners-Lee, T., J. Hendler, and O. Lassila. 2001. “The Semantic Web.” Scientific American, May.

Bhatti, S.N., and J. Crowcroft. 2000. “QoS-Sensitive Flows: Issues in IP Packet Handling.” IEEE Internet Computing, 4(4):48-57.

Blades, M. 1991. “Wayfinding Theory and Research: The Need for a New Approach.” In D.M. Mark and A.U. Frank (eds.), Cognitive and Linguistic Aspects of Geographic Space, pp. 137-165. Dordrecht, Netherlands: Kluwer Academic Publishers.

Chang, K. 2001. “From 5,000 Feet Up, Mapping Terrain for Ground Zero Workers.” New York Times, September 23.

Chen, C., and Y. Yu. 2000. “Empirical Studies of Information Visualization: A Meta-Analysis.” International Journal of Human-Computer Studies, 53:851-866.

Computer Science and Telecommunications Board (CSTB), National Research Council. 1997. Modeling and Simulation: Linking Entertainment and Defense. Washington, D.C.: National Academy Press.

Computer Science and Telecommunications Board (CSTB), National Research Council. 1999. Information Technology Research for Crisis Management. Washington, D.C.: National Academy Press.

Coors, V. In press. “3D Maps for Boat Tourists.” In J. Dykes, A.M. MacEachren, and M.-J. Kraak (eds.), Exploring Geovisualization. Amsterdam: Elsevier Science.

Cutmore, T.R.H., T.J. Hine, K.J. Maberly, N.M. Langford, and G. Hawgood. 2000. “Cognitive and Gender Factors Influencing Navigation in a Virtual Environment.” International Journal of Human-Computer Studies, 53(2):223-249.

Darken, R.P., T. Allard, and L.B. Achille. 1999. “Spatial Orientation and Wayfinding in Large-Scale Virtual Spaces II: Guest Editor’s Introduction.” Presence: Teleoperators & Virtual Environments, 8(6):3-6.

Davis, D., W. Ribarsky, T.Y. Jiang, N. Faust, and S. Ho. 1999. “Real-Time Visualization of Scalably Large Collections of Heterogeneous Objects.” IEEE Visualization, pp. 437-440.

DeFanti, T., and R. Stevens. 1999. “Teleimmersion.” In I. Foster and C. Kesselman (eds.), The Grid: Blueprint for a New Computing Infrastructure, pp. 131-155. San Francisco, Calif.: Morgan Kaufmann Publishers.

Djurcilov, S., and A. Pang. 2000. “Visualizing Sparse Gridded Datasets.” IEEE Computer Graphics and Applications, 20(5):52-57.

Döllner, J., K. Baumann, K. Hinrichs, and T. Ertl. 2000. “Texturing Techniques for Terrain Visualization.” In Proceedings of IEEE Visualization 00, pp. 227-234.

Dungan, J.L. 1999. “Conditional Simulation: An Alternative to Estimation for Achieving Mapping Objectives.” In F. van der Meer, A. Stein, and B. Gorte (eds.), Spatial Statistics for Remote Sensing, pp. 135-152. Dordrecht, Netherlands: Kluwer Academic Publishers.

Elvins, T.T., D.R. Nadeau, R. Schul, and D. Kirsh. 2001. “Worldlets: 3-D Thumbnails for Wayfinding in Large Virtual Worlds.” Presence: Teleoperators and Virtual Environments, 10(6):565-582.

Fisher, P. 1994. “Hearing the Reliability in Classified Remotely Sensed Images.” Cartography and Geographic Information Systems, 21(1):31-36.

Früh, C., and A. Zakhor. 2001. “Fast 3D Model Generation in Urban Environments.” International Conference on Multisensor Fusion and Integration for Intelligent Systems 2001, Baden-Baden, Germany, pp. 165-170.

Golledge, R.G. 1992. “Place Recognition and Wayfinding: Making Sense of Space.” Geoforum, 23(2):199-214.

Jankowski, P., and T. Nyerges. 2001. Geographic Information Systems for Group Decision Making: Towards a Participatory Geographic Information Science. New York: Taylor & Francis.

Jedrysik, P.A., J.A. Moore, T.A. Stedman, and R.H. Sweed. 2000. “Interactive Displays for Command and Control.” Aerospace Conference Proceedings, IEEE, Big Sky, Mont., pp. 341-351.

Kao, D., J. Dungan, and A. Pang. 2001. “Visualizing 2D Probability Distributions from EOS Satellite Image-Derived Data Sets: A Case Study.” Proceedings of Visualization 01, IEEE, San Diego, Calif.

Krygier, J. 1994. “Sound and Geographic Visualization.” In A.M. MacEachren and D.R.F. Taylor (eds.), Visualization in Modern Cartography, pp. 149-166. Oxford, UK: Pergamon.

Levkowitz, H., R.M. Pickett, S. Smith, and M. Torpey. 1995. “An Environment and Studies for Exploring Auditory Representations of Multidimensional Data.” In G. Grinstein and H. Levkowitz (eds.), Perceptive Issues in Visualization, pp. 47-58. New York: Springer.

Lodha, S.K., C.M. Wilson, and R.E. Sheehan. 1996. “LISTEN: Sounding Uncertainty Visualization.” Visualization 96, pp. 189-195. IEEE, San Francisco, Calif.

MacEachren, A.M., and M.-J. Kraak. 2001. “Research Challenges in Geovisualization.” Cartography and Geographic Information Science, 28(1):3-12.

Mark, D.M., C. Freksa, S.C. Hirtle, R. Lloyd, and B. Tversky. 1999. “Cognitive Models of Geographical Space.” International Journal of Geographical Information Science, 13(8):747-774.

Ogi, T., and M. Hirose. 1997. “Usage of Multisensory Information in Scientific Data Sensualization.” Multimedia Systems, 5:86-92.

Oviatt, S., and P. Cohen. 2000. “Multimodal Interfaces That Process What Comes Naturally.” Communications of the ACM, 43(3):45-53.

Parish, Y., and P. Müller. 2001. “Procedural Modeling of Cities.” In Proceedings of SIGGRAPH 01, pp. 301-308. New York: ACM Press.

Passini, R. 1984. “Spatial Representations: A Wayfinding Perspective.” Journal of Environmental Psychology, 4:153-164.

Slocum, T.A., C. Blok, B. Jiang, A. Koussoulakou, D.R. Montello, S. Fuhrmann, and N.R. Hedley. 2001. “Cognitive and Usability Issues in Geovisualization.” Cartography and Geographic Information Science, 28(1):61-75.

Weibel, R., and C.B. Jones (eds.). 1998. “Computational Perspectives on Map Generalization.” Special Issue on Map Generalization, GeoInformatica, 2(4):307-314.