Human Interaction with Geospatial Information
The emphasis in earlier chapters has been on methods and technologies to acquire, manage, and take more complete advantage of geospatial information. This chapter focuses on the human users of these technologies. Too often, the domain experts who have the knowledge necessary to take advantage of this information cannot find or use it effectively without specialized training. Current geospatial technologies are even less suited for citizens at large, who have problems and questions that require geospatial information but who are not experts in the technology or in all aspects of the problem domain. For example, a couple interested in buying a piece of property to create a commercial horse farm and riding stable should be able to pose what-if scenarios (What if the zoning regulations change? What if a new sports stadium is built a mile away?) to identify any environmental, legal, or other limitations that might interfere with their proposed business.
For centuries, visual displays in the form of maps and images provided a critical interface to information about the world. Now, however, emerging technologies create the potential for multimodal interfaces—involving not just sight but also other senses, such as hearing, touch, gestures, gaze, and other body movements—that would allow humans to interact with geospatial information in more immediate and “natural” ways. One focus of this chapter is how recent advances in visualization and virtual/augmented environment technologies can be extended to facilitate work with geospatial information. The chapter outlines the issues associated with interaction styles and devices ranging from high-density, large-screen displays and immersive virtual environments to mobile PDAs and wearable
computers. It also discusses representation and interaction technologies specifically designed to support group collaboration.
To date, most research on human interaction with geospatial data has roots in one of three domains: visualization (including computer graphics, cartography, information visualization, and exploratory data analysis), human-computer interaction, and computer-supported cooperative work. There has been only limited integration across these domains.1 The committee notes that although continued research in each remains important, an integrated perspective will be essential for coping with the problem contexts envisioned, such as crisis management, urban and regional planning, and interactions between humans and the environment.
TECHNOLOGIES AND TRENDS
Most advances in computing and information technology affect some aspect of human interaction with geospatial information. Four in particular are driving forces, with the potential to enable richer, more productive interactions:
Display and interface technologies. As noted above, human interaction with geospatial information has been linked to visual display for centuries (e.g., the use of paper maps to represent geographic space in the world). Recent advances include developments in immersive virtual environments, large and very high-resolution panel displays, flexible (roll-up) displays, multimodal interfaces, and new architectures supporting usability.
Distributed system technologies. Technologies that support remote access to information and remote collaboration also are having a dramatic impact on how people interact with information of all kinds. Among these are high-bandwidth networking, wireless networks and communication, digital library technologies, and interactive television.
Mobile, wearable, and embedded technologies. Until recently, most human interaction with computerized or displayed geospatial information required desktop visual displays. Particularly relevant new technologies include wireless personal digital assistants (PDAs) that support both data collection and information dissemination; augmented reality devices that support the matching of physical objects with virtual data objects; distributed sensor fusion techniques to support multivariate visualization in the field; and pervasive computing infrastructures (e.g., intelligent highways and other infrastructures) that can interact with mobile humans or computational agents to inform them about local context.
1. Two notable exceptions are National Center for Geographic Information and Analysis research initiative 13, “User Interfaces for Geographic Information Systems,” and the ACM SIGGRAPH Carto Project, a 3-year collaboration with the International Cartographic Association focused on geovisualization. More information is available at <http://www.siggraph.org/project-grants/carto/cartosurv.html>.
Agent-based technologies. Software agents now are being applied in a wide array of contexts. As these technologies mature, there will be considerable potential to extend them for facilitating human interaction with geospatial information. Among the agent-based technologies that show promise are intelligent assistants for information retrieval, agent support for cooperative work and virtual organizations, and computational pattern-finding agents.
The geospatial information science community is working on how these technologies can be made easier to use. Attention is being paid to new technologies for representing geospatial data (to the eye as well as to other senses, particularly touch and hearing2) and to increasing the usability and usefulness of interfaces for individuals and for collaborating groups (Jankowski and Nyerges, 2001; MacEachren and Kraak, 2001; Mark et al., 1999). The remainder of this section provides an overview of the current state of the art in three domains: visualization and virtual environments, human-computer interaction, and computer-supported cooperative work.
Visualization and Virtual Environments
Developments in scientific visualization and virtual environments have been closely coupled with advances in computer graphics. The primary impact of scientific visualization research on geospatial information has been the ability to obtain realistic terrain representations, zoom across scales, and create fly-through animations. These methods are now relatively common and included both in commercial products that run on desktop computers and in immersive virtual environments (e.g., CAVE Automatic Virtual Environments). Complementary research has enabled the creation of movie-quality time-series map animations. Both research thrusts have exploited dramatic advances in computer processing power
and in parallel algorithms for dealing with very large data sets. They also take advantage of—and in some cases drive the development of—increasingly high-resolution displays (e.g., multiprojector “powerwalls”). These allow users to see critical details in complex dynamic phenomena, such as subtle eddies that are critical to understanding global ocean circulation models. Researchers at the Air Force Research Laboratory have even developed a portable multipanel display for use in field command-and-control situations. The display requires interaction with a stream of information arriving via wireless connections from a diverse collection of distributed sensors (Jedrysik et al., 2000).
Much of the research in scientific visualization has emphasized spectacular visual renderings rather than mechanisms for human interaction. In situations where data sets are very large, interactivity often is sacrificed—in ways that vary from a slower frame rate to offline rendering and later playback—so that scarce computing resources can be devoted to ever more detailed rendering. This trade-off, although perhaps reasonable when the focus is on single variables (e.g., ocean temperatures), does not support the kinds of creative exploration demanded by geospatial information, in which problems are ill defined and highly multivariate and the relative importance of variables is not known a priori. Nor are highly detailed renderings of predetermined scenes sufficient for geospatial applications in which dynamic exploration and interaction are critical capabilities (such as in the Digital Earth scenario described in Chapter 1). This situation has stimulated research in geovisualization, which integrates approaches from scientific visualization, cartography, information visualization, exploratory data analysis, and image analysis (MacEachren and Kraak, 2001). The results include methods enabling flexible, highly interactive exploration of multivariate geospatial data. Recent efforts have focused on linking geovisualization more effectively to computational methods for extracting patterns and relationships from complex geospatial data.3
Research in virtual environments, however, has emphasized support for human interaction, both with the display and with other human actors. A key objective has been the development of interaction methods that are more natural than using a keyboard or mouse and take advantage of three-dimensional, often stereoscopic displays. Another objective has been the leveraging of high-performance computing to support real-time interaction as well as remote collaboration. Here, too, a new area of research has emerged in response to the specialized characteristics of geospatial information. GeoVirtual environments extend virtual interaction technologies to support both geospatial data analysis and decision support activities.
3. These topics were addressed recently at the European Science Foundation Conference “Geovisualisation—EuroConference on Methods to Define Geovisualisation Contents for Users Needs,” held in Albufeira, Portugal, March 2002. See <http://www.esf.org/euresco/02/lc02179> for more information.
New technologies—such as very large, high-resolution displays that can enable same-place collaborative work and smaller, lighter devices that generate or use georeferenced information anywhere and that are linked to wireless communications—now make it possible for everyone to have access to geospatial information, everywhere.
Human-Computer Interaction
A substantial amount of research has been conducted in the general area of human-computer interaction (HCI), providing an initial foundation for understanding how humans interact with geospatial information. Much of that work, however, has focused on how humans interact with the technology itself rather than with the concepts being represented through the use of technology. Although the new technologies pose an initial hurdle for users of geospatial applications, the fundamental challenge is how to support human interaction with the geospatial information itself. In other words, the challenge is in moving beyond HCI to human-information interaction. Here, there are few research results upon which to build new methods and technologies. Cartographers have been concerned with the empirical assessment of map usability since the 1950s, but their emphasis was on extracting information from static visual representations rather than interactive representations and analysis (Slocum et al., 2001). Similarly, although the HCI community has begun to examine the effectiveness of information visualization methods and tools, most studies have centered on information displays rather than on mechanisms for interacting with them (Chen and Yu, 2000).
Computer-Supported Cooperative Work
Most real-world, scientific geospatial applications involve cooperative work by two or more persons. To date, however, most geospatial technologies have been designed to support just one user at a time. A meeting of the National Center for Geographic Information and Analysis (NCGIA)4 prompted initial work on how traditional geospatial information systems could be extended to support group decision-making processes, but the work is still very preliminary.
4. The NCGIA is an independent research consortium dedicated to basic research and education in geographic information science and its related technologies. The scientific report of the meeting is available online at <http://www.ncgia.ucsb.edu/research/i17/spec_report.html>.
Fortunately, more general research on computer-supported cooperative work has yielded a substantial body of literature as well as a growing set of commercial and noncommercial tools. The technology has begun to mature to the point of inclusion in off-the-shelf office computing software. For instance, change tracking and other asynchronous collaboration features are now standard in document processing software, and same-time/different-place meeting tools allow the sharing of video, audio, text, and graphics. Nevertheless, significant barriers remain before this technology can contribute meaningfully to geospatial applications. There has been no attention to how the new collaborative features might be integrated with geospatial analysis activities and only limited attention to the role of interactive visualizations in facilitating cooperative work.
This section considers the system and human-user components of four interrelated issues, each of which is central to human interaction with geospatial information: (1) taking full advantage of increasingly rich sources of geospatial data in support of both science and decision making, (2) making geospatial information accessible and usable for everyone, (3) making geospatial information accessible and usable everywhere, and (4) enabling collaborative work with geospatial information. The focus on these issues (and the associated challenges and opportunities) resulted from the combination of preliminary work by the committee, contributions by workshop participants through working papers and during the workshop, solicited input from other experts, and post-workshop analysis by the committee. Each issue is discussed below separately.
Harnessing Information Volume and Complexity
The exponential growth of geospatial data will provide opportunities for more productive environmental and social science, better business decisions, more effective urban and regional planning and environmental management, and better-informed policy making at local to global scales. Across all application domains, however, the volume and complexity of the geospatial information required to answer hard scientific questions and to inform difficult policy decisions create a paradox: although the necessary information is more likely to be available, its sheer volume will make it increasingly hard to use effectively.
Harnessing and Extending Advances in the Visual Representation of Knowledge
As geospatial repositories grow in size and complexity, users will need more help to sift through the data. Specifically needed are tools that exploit advances in visualization and computational methods to engage human perceptual-cognitive processing power. Three of the key needs are outlined below.
First, there is a critical need for software agents to automate the selection of data-to-display mappings. Although recent advances in visualization methods and technologies have considerable potential to help meet this goal, they lack mechanisms for matching representation forms to the data being represented in ways that take full advantage of human perceptual-cognitive abilities (and that avoid potentially misleading representations). The real challenge is to develop context-sensitive computational agents that automate the choice of data-to-display mappings, freeing the user to concentrate on data exploration. In this sense, “context” must encompass not just the nature of the information being interacted with and the display/representation environment being used but also the characteristics of the problem domain.
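As a rough sketch of what such a mapping agent might look like, the following Python fragment chooses a visual variable for each data attribute from its measurement level, in the spirit of Bertin-style cartographic guidelines. The rule table, function names, and attribute records are invented for this illustration and are not drawn from any existing system:

```python
# Hypothetical rule-based "mapping agent": assigns visual variables to data
# attributes by measurement level. The rule table below is an illustrative
# assumption (loosely Bertin-style), not a standard from any toolkit.

MEASUREMENT_RULES = {
    # measurement level -> candidate visual variables, best first
    "nominal": ["hue", "shape", "texture"],
    "ordinal": ["lightness", "size"],
    "quantitative": ["position", "size", "lightness"],
}

def suggest_encoding(attribute, used_variables=()):
    """Suggest a visual variable for one attribute, skipping variables
    already claimed by other attributes in the same display."""
    for variable in MEASUREMENT_RULES[attribute["level"]]:
        if variable not in used_variables:
            return {"attribute": attribute["name"], "variable": variable}
    raise ValueError("no free visual variable for " + attribute["name"])

def suggest_display(attributes):
    """Assign each attribute a distinct visual variable, most important
    attribute first, so key variables get the strongest encodings."""
    used, plan = [], []
    for attr in sorted(attributes, key=lambda a: -a["importance"]):
        choice = suggest_encoding(attr, used)
        used.append(choice["variable"])
        plan.append(choice)
    return plan

plan = suggest_display([
    {"name": "land_cover", "level": "nominal", "importance": 2},
    {"name": "elevation", "level": "quantitative", "importance": 3},
    {"name": "risk_class", "level": "ordinal", "importance": 1},
])
```

A context-sensitive agent of the kind envisioned here would extend such rules with knowledge of the display environment and the problem domain, rather than relying on measurement level alone.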
A second, complementary need is for dynamic, intelligent category-representation tools. These would enable flexible exploration and modification of conceptual categories by human users and would facilitate the interoperability of different geospatial systems. Of particular importance is how methods and technologies can support the different conceptual categories brought by individual users to a given data analysis task. A simple example is the category “forest,” which connotes harvestable timber and high densities of relatively large trees (perhaps 75 percent canopy) to a forester but connotes cover for troops (with a much lower percentage of canopy required to be in the category) to a military commander. The representation of conceptual categories is an important tool in developing the formal ontologies discussed in Chapter 3. Formalized ontological frameworks can define the differences in ontology among different disciplines and manage multiple definitions of a concept (such as the “forest” category noted above). One part of a solution is to develop visualization (and perceptualization) methods and tools that support navigation of the ontologies created, explanation and demonstration of the resulting conceptual structures and complex transformations carried out on the highly processed data, and integration of the results directly into the scientific process (by providing standard ways to manipulate geospatial data across applications). Hence, categories developed through analysis of highly multivariate data—e.g., aggregation of data from remote sensing, population and agricultural censuses, zoning, and other sources—in which the definition is place- and context-specific (e.g., “rural” land) pose difficult challenges to current technologies.
Methods for representing uncertainty constitute a third need. Users cannot make sense of data retrieved from a large, complex geospatial repository without understanding the uncertainties involved. Some research efforts have addressed the visualization of geospatial data quality and uncertainty (see Box 4.1 and Figure 4.1), but existing methods do not scale well to data that are very large in volume or highly multivariate (a problem also identified in Chapter 3 in connection with current data mining approaches and geospatial algorithms). Nor has sufficient attention been directed to helping analysts use uncertainty representations in hypothesis development or decision-making applications. Achieving real progress will require advances in modeling the components of uncertainty and in representing the uncertainties in ways that are meaningful and useful. The situation is complicated by the fact that many aspects of uncertainty relevant to human interaction with geospatial information are not amenable to modeling.
BOX 4.1 Coping with Uncertainty in a Geospatial Field
Data from many applications can be represented as a two-dimensional (2D) field in which each data point is a distribution. One example is data from the Earth Observing System, in which one treats the spectra at each pixel as a distribution of data values. A critical challenge with these data is to develop methods for coping with their uncertainty.
Conditional simulation, also called stochastic interpolation, is one way to model uncertainty about predicted values in such a geospatial field (Dungan, 1999). It is a process by which spatially consistent Monte Carlo simulations are constructed, given some data and the assumption that spatial correlation exists. Conditional simulation algorithms yield not one but several maps, each of which is an equally likely outcome from the algorithm; each equally likely map is called a realization. Furthermore, these realizations have the same spatial statistics as the input data. In Figure 4.1 (pp. 82-83), each individual realization is a possible scenario given the same set of ground measurements and satellite imagery. Taken jointly, these realizations describe the uncertainty space about the map. That is, the density estimate (from, for example, a histogram) of the data values at a pixel is a representation of the uncertainty at that pixel.
The visualization task, then, is to facilitate the understanding of uncertainty over the domain. One way is simply to plot the histogram of the distribution for every pixel. The obvious drawbacks to this approach are the screen resolution requirements and the potentially very cluttered presentation. Another approach, shown in part (a), is to summarize each distribution into a smaller set of meaningful values that are representative of the distribution. Parametric statistics (e.g., mean, standard deviation, kurtosis, and skewness) are collected about each distribution. This forms an n-tuple of values for each pixel that then can be visualized in layers. However, there are drawbacks to this approach as well—namely, the limited number of parameters that can be displayed, the loss of information about the shape of the distributions, and poor representations if the distribution cannot be described by a set of parametric statistics. Clearly, alternative nonparametric methods need to be pursued.
Methods illustrated in part (b) allow the user to view parts of the 2D distribution data as a color-mapped histogram. Here, the frequency of each bin in a histogram is mapped to color, thereby representing each histogram as a multicolored line segment. A 3D histogram cube then represents a 2D distribution of data. Interactivity helps in understanding the rest of the field, but there is still the need (as yet unrealized) to be able to “see” the distribution over the entire 2D field at once.
A more subtle problem is capturing the spatial correlation of uncertainty over the domain. Using distributions of values aggregated from multiple realizations may be a good representation of the probabilities of values at a particular pixel, but that representation does not take into account any spatial correlation that may exist among the values in the vicinity of that pixel. Hence, another challenge is a richer representation of uncertainty that incorporates spatial correlation, and the visualization of such data sets.
SOURCE: Adapted from a white paper, “Visualizing Uncertainty in Geospatial Data,” prepared for the committee’s workshop by Alex Pang; for more detail, see Kao et al. (2001).
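To make the realization-based view of uncertainty in Box 4.1 concrete, the following sketch substitutes a far simpler model than the geostatistical algorithms cited there: a one-dimensional random walk conditioned on two "ground measurements" (a Brownian bridge). Every realization honors the data exactly at the measured locations, realizations diverge in between, and per-pixel uncertainty is summarized as the spread across realizations:

```python
import random
import statistics

def brownian_bridge(x0, x1, n, sigma, rng):
    """One realization of a 1D field conditioned on its two endpoint
    measurements: a random walk 'pinned' to the data at both ends."""
    walk = [0.0]
    for _ in range(n - 1):
        walk.append(walk[-1] + rng.gauss(0.0, sigma))
    out = []
    for i, w in enumerate(walk):
        t = i / (n - 1)
        # Subtract the walk's drift so the ends match the data exactly.
        out.append(x0 + t * (x1 - x0) + (w - t * walk[-1]))
    return out

rng = random.Random(42)
# 200 equally likely realizations, all honoring the measurements 2.0 and 5.0.
realizations = [brownian_bridge(2.0, 5.0, 51, 0.3, rng) for _ in range(200)]

# Per-pixel uncertainty: the spread of values across realizations.
stdevs = [statistics.pstdev(r[i] for r in realizations) for i in range(51)]
```

The spread is zero at the conditioning points and largest midway between them, mirroring how uncertainty concentrates where ground measurements are sparse.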
Geospatial Interaction Technologies
Increases in data resolution, volume, and complexity—i.e., the number of attributes collected for each place—can overwhelm human capacities to process information using traditional visual display and interface devices. Recent advances in display and interaction technologies promise to enhance our ability to explore and utilize geospatial data from extremely large repositories. However, current desktop-based Geographic
Information Systems (GISs) and geovisualization tools do not take effective advantage of human information processing capabilities, nor (as noted above) do they scale to analyses of very large or highly multivariate data sets. Methods are needed that support dynamic manipulation (e.g., zooming, querying, filtering, and labeling) on the fly, for millions of items. Considerable research investments will be required to realize the potential offered by the new technologies.
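The kind of on-the-fly manipulation described above can be sketched as a "dynamic query": every movement of a zoom or filter control recomputes the visible subset. The Python fragment below is deliberately naive (a linear scan over synthetic points); a real system handling millions of items interactively would rely on spatial indexing and out-of-core techniques:

```python
import random

rng = random.Random(7)
# Synthetic data set: (x, y, attribute value) per feature, in a unit square.
points = [(rng.random(), rng.random(), rng.gauss(50.0, 10.0))
          for _ in range(200_000)]

def dynamic_query(points, bbox, value_range):
    """One 'dynamic query' pass: a spatial zoom (bounding box) combined
    with an attribute filter, recomputed on each control movement."""
    (x0, y0, x1, y1), (lo, hi) = bbox, value_range
    return [p for p in points
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and lo <= p[2] <= hi]

# Zoom to the central quarter of the map, keep attribute values in [40, 60].
visible = dynamic_query(points, (0.25, 0.25, 0.75, 0.75), (40.0, 60.0))
```

Even this brute-force pass illustrates the interaction contract: the display is always a pure function of the current query state, so sliders and brushes can be moved freely without an explicit "run query" step.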
The first challenge is the development of inexpensive, large-screen, high-resolution display devices. Currently, the resolution of display technology remains nearly an order of magnitude less than that of print technology (i.e., a 20-inch monitor at UXGA resolution will display about 1.9
million pixels vs. about 69.1 million on a printed page). Higher resolutions could give the needed detail, whereas large size would take the geographic context of problems into account more effectively (particularly in support of collaborative work). Note that the large-screen, high-resolution technology must be affordable for classrooms, science laboratories, libraries, urban or regional planning offices, and similar settings for those communities to benefit from it.
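The pixel counts cited above can be reproduced with simple arithmetic, under assumptions the chapter does not state explicitly: UXGA is taken as 1600 x 1200 pixels, and the printed comparison as an 8 x 6 inch image area at 1200 dots per inch (one reading that matches the 69.1 million figure):

```python
# Assumed figures (not stated in the chapter): UXGA = 1600 x 1200 pixels;
# printed image area = 8 x 6 inches at 1200 dpi.
uxga_pixels = 1600 * 1200                # 1,920,000, i.e., about 1.9 million
printed_dots = (8 * 1200) * (6 * 1200)   # 69,120,000, i.e., about 69.1 million

# The raw pixel-count gap is about 36x; in linear resolution the gap is
# roughly an order of magnitude (about 100 pixels/inch on a 20-inch-class
# monitor versus 1200 dots/inch in print), matching the statement above.
monitor_ppi = 1600 / 16                  # assuming a panel about 16 inches wide
area_ratio = printed_dots / uxga_pixels
```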
Just as traditional display technologies limit the representation of geospatial information, so, too, do traditional interfaces. First, the interaction devices themselves are too restrictive: a keyboard and mouse are not flexible or expressive enough to navigate effectively through large
data spaces. Although there have been advances in alternative styles of interaction, such as voice- and gesture-based manipulation, their capabilities are still extremely rudimentary. Significant work is needed to determine if, and how, alternative styles of interaction might facilitate geospatial applications. Second, geospatial displays that use stereo and/or animation (e.g., to portray a third spatial dimension and time) introduce technological challenges associated with interaction tools for manipulating three- or four-dimensional scenes. Third, there are conceptual challenges in devising interface metaphors to support interaction with dynamic geographic spaces, which typically cover more territory than the user can see or interact with from a single vantage point. Possible alternatives would be to support human interaction with geospatial information from within a fully immersive virtual environment or to adopt a fish-tank metaphor, in which the information space is presented as a scale model manipulated from a perspective outside the virtual environment (see Box 4.2 for a discussion of one such effort and Figure 4.2 for an illustration). Again, the initial studies are promising, but a substantial research investment is needed to bring these techniques to maturity in geospatial applications.
Another promising approach to harnessing the scale and complexity of geospatial information is to explore the use of senses other than vision. These multimodal interfaces present a host of research challenges. Not only must devices and methods be developed and tested for efficacy in geospatial contexts, but basic research must also address the larger issue of information perceptualization, or how to represent complex information using combinations of haptic (tactual and kinesthetic), sound, and visual variables. It is not even clear what the appropriate balance might be between realism and abstraction in depicting highly complex, multivariate, multiscale, time-varying geospatial information.
Finally, navigation through the real world is challenging, and for centuries a large industry has developed and provided navigational aids. Navigation in virtual geographic spaces—particularly abstract spaces that represent the nonvisible world—is even more difficult. To date, research efforts in virtual environments, particularly those depicting geospatial information, have centered on the creation of the environments themselves; attention is now needed to determine how to enable navigation through virtual spaces. One promising approach is to build on the long history of research on wayfinding in physical environments.5 Wayfinding is defined as the process of developing and executing plans
for travel through the environment; it involves cognitive activities associated with several sub-components of this process, such as mental representations of geographic space, route planning, and distance estimation (Golledge, 1992; Elvins et al., 2001). In applying wayfinding support to virtual environments, it will be necessary to invest in research that addresses an array of open questions, such as the effect of individual differences (e.g., age, gender, and cognitive ability) on success rates for particular navigational technologies; the potential role of virtual wayfinding aids modeled on aids used in the real world, such as maps and GPS; and the extent to which wayfinding strategies learned in the real world transfer to abstract virtual worlds, and vice versa.
Enabling Work with Heterogeneous, Urban Representations
The world is becoming increasingly urban. Representing and interacting with geospatial information from urban areas pose special challenges, related to the complex, three-dimensional structure of cities as well as their highly dynamic nature. This was never clearer than on September 11, 2001. Although much of the work needed for urban geospatial applications centers on developing technologies suitable for acquiring, organizing, and managing these special types of information (see Box 4.3 and Figures 4.3 and 4.4), research also is needed to address two human interaction problems.
BOX 4.2 Virtual Reality for Personal GIS
The Haptic Fish Tank Virtual Reality effort at the University of New Hampshire’s Data Visualization Research Lab is developing a personal workspace that supports a high level of user interaction with geospatial information. Its haptically (sense of touch) enabled environment employs a mirror and head-tracking mechanisms to create a small but high-quality virtual reality model that also allows the user to insert a hand into the workspace. The hand remains invisible, but the object it holds is represented visually. This arrangement can be augmented with force feedback devices (such as the Phantom) that allow the user to feel constraints on the objects being viewed and manipulated. This approach offers several advantages.
SOURCE: Colin Ware, Data Visualization Research Lab, University of New Hampshire.
The entertainment industry has created the expectation that we can visually zoom through geospatial information at scales that range from the entire planet to rooms in a building. To achieve this capability for real-world situations, we must solve several fundamental problems.6 Although some problems are similar to those for supporting interaction with any 3D data space, the need for realistic appearance in urban representations creates rendering challenges. Representation methods are needed that balance realistic appearance—so that users can identify the places, buildings, and objects they are seeing—against the ability to move smoothly through the environment and across scales. At the same time, these methods must accommodate capabilities such as virtual x-ray vision, allowing the user to see both the outside of built structures and activities taking place on the inside. For example, crisis management applications would benefit from such new representation methods by allowing firefighters to visualize building occupancy by floor, plan escape routes, and so on. Supporting this goal will require methods and technologies to fuse information about the external environment with information on the internal environment of constructed space—information that, when available at all, currently is captured and stored in very different information systems and explored using different software tools. Moreover, what many users will want to “see” is not just the urban landscape or architectural depictions of building interiors, but abstract information about urban places and spaces. This might include depictions of telephone traffic,
flows of capital into and out of businesses, the average distribution of people at different times of day, or categories of “space use” across the city. Heterogeneous applications like these will require ways to create ad hoc visualizations that can be combined and tightly coupled. Whereas views on the same screen at the same time might be helpful, views that are dynamically linked—so that changes to one result in changes to the other—or linked at a semantic level below the visual display would be more powerful. Dynamically linking on-demand visualizations that differ in type but share conceptual structures and address common applications is a general problem that, if solved, will have an impact well beyond urban visualization applications. An example of paired representations that share an underlying conceptual structure (information organized by floors) but differ in type would be a realistic rendering of a building, based on an architectural model, that can be sliced through at any floor to see room layout and a graphic that depicts the activities on that floor.
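The notion of dynamically linked views sharing a conceptual structure can be sketched with a simple coordinator object. In this hypothetical Python fragment (all class and method names are invented), two views are organized around the same structure, building floors, and a selection made in one propagates to the other:

```python
# Minimal sketch of semantically linked views: the coordinator holds the
# shared conceptual structure (a floor selection), and each registered view
# reacts to changes made in any other view.

class LinkedViews:
    """Tiny coordinator: views register themselves; selecting a floor in
    any one view notifies every other view of the shared selection."""
    def __init__(self):
        self.views = []
        self.selected_floor = None

    def register(self, view):
        self.views.append(view)

    def select_floor(self, floor, source):
        self.selected_floor = floor
        for view in self.views:
            if view is not source:
                view.on_selection(floor)

class View:
    def __init__(self, name, coordinator):
        self.name, self.shown = name, None
        coordinator.register(self)

    def on_selection(self, floor):
        self.shown = floor  # e.g., redraw the cutaway or activity graphic

hub = LinkedViews()
cutaway = View("3D building cutaway", hub)
activity = View("floor activity chart", hub)

hub.select_floor(12, source=cutaway)  # user slices the model at floor 12
```

Linking at this semantic level, rather than at the pixel level, is what would allow an architectural cutaway and an abstract activity graphic to stay synchronized.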
Another key challenge posed by urban environments is that they are extremely dynamic. To support human interaction with urban geospatial information in contexts such as dealing with a terrorist attack, an earthquake or hurricane, or a debilitating power outage, it will be necessary to integrate information updates on the fly.7 This will require new technologies for capturing change information at relevant time intervals or for recognizing change events, organizing and transmitting that information wherever it is needed, and facilitating user interactions with dynamic representations.
BOX 4.3 Toward Multiresolution Visualization of Urban Environments
Researchers in the Graphics, Visualization and Usability Center at the Georgia Institute of Technology have developed a global geospatial hierarchy for terrain, buildings, and atmospheric effects, including weather, and view-dependent, continuous level-of-detail (LOD) methods for displaying all of these features while retaining good visual quality. Both the global hierarchy and the rendering method are important. An appropriate global hierarchy provides a scalable structure and an efficient, georeferenced querying mechanism. The view-dependent, continuous LOD method provides a means of managing what could be an overwhelming amount of detail, while ensuring that visually important items are displayed clearly. A side benefit is that data can be quickly retrieved and transmitted in chunks of varying resolution (important because huge models are too large to reside in main memory and must be moved in and out piecewise as needed). Figure 4.3 shows some of the results of using this approach (Davis et al., 1999).
In complementary research efforts, dynamic textures have been applied to terrain to significantly increase the detail of urban features and create high-resolution animations of changing detail, such as flood patterns (Dollner et al., 2000). Ultimately, it would be desirable to display scenes based on both acquired data and procedural models. An example of a procedurally generated city, shown in Figure 4.4 (Parish and Muller, 2001), demonstrates the amount of detail that can be generated and displayed with modern 3D graphics. However, the scene depicted is not interactive, and even though it is complex, it does not have accurate textures or 3D details.
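As a rough illustration of the view-dependent, continuous LOD selection described in Box 4.3, the sketch below refines a quadtree only where a node's geometric error, projected to screen space, exceeds a pixel tolerance, so terrain near the viewer is rendered in more detail than distant terrain. The error model, thresholds, and quadtree layout are simplified assumptions for illustration, not the actual Georgia Tech method.

```python
import math

# Simplified view-dependent LOD over a quadtree terrain hierarchy. A node is
# refined only while its geometric error, projected to screen space, exceeds
# a pixel tolerance; nearby terrain therefore gets more detail than distant
# terrain. The error halving per level and the tolerance are illustrative.

class QuadNode:
    def __init__(self, x, y, size, error, depth, max_depth):
        self.x, self.y, self.size, self.error = x, y, size, error
        self.children = []
        if depth < max_depth:
            half = size / 2
            for dx in (0, half):
                for dy in (0, half):
                    self.children.append(
                        QuadNode(x + dx, y + dy, half, error / 2,
                                 depth + 1, max_depth))

def select_lod(node, viewer, pixel_tolerance, out):
    cx = node.x + node.size / 2
    cy = node.y + node.size / 2
    dist = max(math.hypot(cx - viewer[0], cy - viewer[1]), 1e-6)
    # Projected screen-space error shrinks with distance from the viewer.
    screen_error = node.error / dist
    if screen_error <= pixel_tolerance or not node.children:
        out.append(node)               # coarse enough: render this block
    else:
        for child in node.children:    # too coarse: refine
            select_lod(child, viewer, pixel_tolerance, out)

root = QuadNode(0, 0, 1024, error=64.0, depth=0, max_depth=4)
visible = []
select_lod(root, viewer=(0, 0), pixel_tolerance=0.05, out=visible)
# Blocks near the viewer come from deeper levels (smaller size) than far ones.
near = min(visible, key=lambda n: math.hypot(n.x, n.y))
far = max(visible, key=lambda n: math.hypot(n.x, n.y))
print(near.size, far.size)
```

The same traversal doubles as a retrieval schedule: only the selected blocks need to be resident in memory or fetched over the network, which is what makes the chunked, varying-resolution transmission mentioned in Box 4.3 possible.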
The ability to acquire data on the fly will give geospatial databases new richness. Technologies such as lidar (laser imaging detection and ranging) permit an airborne sensor to map large sections of a city with height resolution of an inch or two and lateral resolution of a foot, while satellite imagery often has a resolution of 1 meter or finer. New methods can collect and automatically process data at ground level as one moves through urban areas. For example, Früh and Zakhor (2001) use a calibrated laser range finder and camera system mounted on a truck that is driven up and down city streets; through a set of clever analyses, they obtain accurate absolute and relative positioning of streetscapes over several blocks. Techniques such as these produce impressive and potentially very large 3D urban scenes.
Complete urban models will require combining all these sources of information. Research shows that accurate detail can be collected and automatically processed and that such techniques, when perfected, will provide an avalanche of urban detail. Among other things, they will change how we think about urban data sets, because they can be continuously updated as the urban scene changes. (This point is made with shocking force by lidar data of the World Trade Center complex that were collected after September 11, showing the immediate aftermath and the changing piles of rubble (Chang, 2001); these data were used to plan recovery and salvage efforts.)
SOURCE: Adapted from a white paper, “Towards the Visual Earth,” prepared for the committee’s workshop by William Ribarsky.
Geospatial for Everyone—Universal Access and Usability
The preceding section described challenges in making very large geospatial information repositories productive for scientists, resource managers, and decision makers. As geodata become widely available, they will engender an even greater challenge. The new technologies, which were developed for specialists, must be adapted to the needs of ordinary citizens who vary greatly in age, interests, familiarity with computers and databases, and physical capabilities (vision, manual dexterity, etc.). Making geoinformation more accessible will stimulate the market, bringing new business opportunities. Even more important, giving the
average citizen access to the vast geospatial resources being assembled by government and private organizations could mean a much better informed citizenry and more equitable public policies. The discussion that follows is organized around three interrelated aspects of generalized access to geospatial information: simplifying the retrieval of data, developing interaction styles and representations for broader audiences, and understanding patterns of use and usability.
Expanding Geospatial Data Retrieval to New Audiences
Enabling a wider range of users to retrieve geodata from repositories of growing size and complexity will require techniques that help users not just to formulate appropriate queries but also to determine what kinds of data are available in the first place. A goal is to help the user find the desired geospatial information (map, image, data, description) by replacing hard-to-use query languages with expressive visual and interactive methods. This is, of course, not just a human interaction problem; it will require substantial advances in database interoperability, semantic representations, and the ability to support scale- and context-appropriate queries and representations of the information retrieved, as discussed previously. In addition, new techniques will be essential to expose the availability, purpose, limitations, and representations of data (maps, images, diagrams, tables, audio descriptions) to people who are unfamiliar with even the most basic concepts of metadata and database operation. Dialogue-based systems that iteratively help users refine retrieval requests are one promising approach. Another long-term solution is to develop semantic webs (Berners-Lee, Hendler, and Lassila, 2001) for geoinformation, such as the Digital Earth scenario described in Chapter 1. The location specifications inherent in geospatial data provide a natural organizing structure that may actually facilitate the implementation of such webs. How to generate comprehensive metadata that will be useful for general access and how to present them most effectively are open research questions, however, that call for test beds (as suggested in Chapter 2) whose use can be monitored, analyzed, and improved interactively.
Methods for identifying appropriate search criteria and narrowing the scope of queries must be made more natural. Current technologies require a substantial amount of knowledge and training to retrieve geodata effectively. Significant research investments will be required to address all dimensions of this problem. Natural-language, visual, sketch-based, and gesture-based methods for geospatial queries must be developed, as well as geospatial query agents capable of translating imprecise, poorly sequenced human questions into the formalisms needed to service queries with appropriate retrievals.
Simplified Interactions for General Audiences
Current representations of geospatial information rely almost exclusively on our visual capabilities. It has become increasingly urgent to move beyond simple visualization to perceptualization, both to enable understanding by individuals with limited sensory or motor abilities and to support richer portrayals of complex geoinformation spaces for general audiences.
Multimodal interfaces, intended to exploit the full range of human sensory processing, clearly could support access tailored to audiences with special needs. Our traditional reliance on maps and earth metaphors means that a significant research effort will be needed to identify suitable nonvisual paradigms; likely alternatives include aural, tactile, and/or haptic representations. It also will be necessary to develop methods for automatically converting visual representations to nonvisual ones.
The mechanisms for interaction are just one part of the problem in geospatial representations, however. Representations must be capable of making inherently complex information understandable to general audiences. More user-friendly interfaces will be needed that can support individuals who have no training in GIS; who may not have the cognitive abilities to understand complex information, interfaces, or computer systems; and who may be relatively unskilled in the use of keyboard and mouse. One goal is to develop technologies that will facilitate exploration and navigation by nonexperts in geoinformation spaces of increasing complexity. As discussed above, to support experts’ exploration of very large and complex data sets, we can build upon our growing understanding of wayfinding in the real world and on the ability of representations and information devices to support those wayfinding activities. This in turn raises concerns about how general audiences might be encouraged to utilize geospatial data safely and accurately. Research will be needed in techniques for supplementing geodata portrayals with metainformation—such as how and why data were collected, uncertainty ratings, and caveats—that addresses appropriate use.
At the same time, efforts should be invested in intelligent interfaces that provide levels and types of expertise to complement the knowledge and skills of different users. For example, agents that “know” about spaces and places could track how different information sources are related and then anticipate common patterns of cascaded searches. The goal is to develop intelligent interfaces that adapt themselves to user needs, remember how to find information when it is needed again, and become smarter over time at seeking and presenting geoinformation.
Understanding Patterns of Use and Usability
To date, virtually nothing is known about the usability of geospatial technologies. Even less is understood about the extent to which those technologies can be matched to human conceptualizations of geographic phenomena or about the use to which the information will be put. It will be necessary to develop new tools to track how individuals and groups work with geospatial technologies, to assess which approaches are most fruitful, and to identify the usability impediments imposed by the technologies. Such understanding will be vital for tailoring user-centered design and other usability engineering methods to the needs of general audiences working with geoinformation. In information retrieval research, the creation of benchmark data sets that can be used to compare both algorithm performance and the performance of users conducting benchmark tasks has been very successful. This approach is advocated here as part of a strategy for understanding and improving the use and usability of geospatial technologies.
In particular, it will be important to establish which techniques can measurably improve how effectively and productively geoinformation is used by the general public, students, and other nonspecialist audiences. As noted previously, current HCI research methodologies8 look at people’s interaction with technology rather than at how technology is applied to support people’s interaction with information. Cognitive and usability assessment techniques do not address visually enabled technologies or ones intended for application to ill-structured problems (such as those posed in the example scenarios at the beginning of this report). Research investments will be required to develop empirical paradigms for studying the interaction of nonspecialists with dynamic, complex information from disparate, domain-specific sources.
Geospatial Everywhere—Mobile Information Acquisition, Access, and Use
As noted in Chapter 2, the world and its inhabitants are increasingly “wired”—individuals traveling through and between places have real-time access to an increasing variety of information, much of it geospatial in nature. Freeing users from desktop computers and physical network connections will bring geospatial information into a full range of real-world contexts, revolutionizing how humans interact with the world
around them. Imagine, for example, the ability to call up place-specific information about nearby medical services, to plan emergency evacuation routes during a crisis, or to coordinate the field collection of data on vector-borne disease.9 This section complements Chapter 2 (where the underlying technologies that support location-aware computing are considered) but focuses on two of the most intriguing aspects of ubiquity from the perspective of human users: facilitating the use of geospatial data from outside office or home settings and using geospatial information to enhance human perceptual capabilities.
Mobile Access to Geospatial Information
Underlying the goal of “geospatial everywhere” is the ability to obtain information on demand, wherever the user happens to be. This will necessitate the development of technologies and methods specifically accommodating user mobility. Traditional visual representation methods, developed for desktop (or larger) displays, are not effective in most mobile situations, where display screens are small and local storage and bandwidth capacities are severely limited. Research is needed to develop context-sensitive representations of geospatial information and to accommodate data subject to continual updating from multiple sources. These issues differ from the perceptualization issues already discussed because of the need for small, lightweight, and mobile technologies that can be used in public spaces.
Although the available technologies provide limited visual representations of geospatial information in field settings, visual display remains the most efficient and effective method of geospatial access for sighted users. Accordingly, it makes sense to invest in the development of portable, lightweight display technologies, such as electronic paper, foldable displays, handheld projectors (which can be pointed at any convenient surface), and augmented reality glasses of the sort discussed in the next section. To exploit these technologies, we also must invest in appropriate interaction paradigms, such as voice- and gesture-based interfaces applied to PDA-like devices. Because the geographical context will be somewhat constrained, it may be possible to devise more “natural” interfaces. For instance, because the system will know where the user is located when a request is made, the spatial language of gestures or sketching movements may be interpreted more literally. Integrating two-dimensional (or three-dimensional)10 mobile displays, which support natural mechanisms for interacting with maplike representations, with the augmented reality methods and technologies detailed below poses a range of technology and HCI challenges.
Supporting the acquisition and use of geoinformation from the field also will require attention to interaction issues associated with database access and knowledge discovery. Both efficient rendering and efficient transmission of geospatial representations are essential. A long history of research on map generalization provides an important conceptual base for meeting this challenge,11 but that research does not deal with real-time generation of dynamically changing representations. Rather, coordinated research drawing on both computer science (efficient algorithms) and cartography (understanding of the geospatial information abstraction process) is required. Intelligent mechanisms for transmitting data, such as context-sensitive data organization and caching, also must be developed (see also the challenges posed by the management of location-aware resources, discussed in Chapter 2).
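One form the intelligent transmission mechanisms mentioned above could take is sketched below: a small least-recently-used (LRU) tile cache that is warmed by prefetching around the user's predicted position, so that movement-driven requests are mostly served locally. The tile scheme, cache policy, and prediction rule are illustrative assumptions, not a description of any deployed system.

```python
from collections import OrderedDict

# Context-sensitive caching for mobile geospatial access: tiles around the
# user's current and predicted positions are prefetched into a small LRU
# cache, so movement-driven requests are mostly served without the network.
# Tile size, cache capacity, and the prediction rule are all illustrative.

TILE = 100.0  # map units per tile

def tile_of(x, y):
    return (int(x // TILE), int(y // TILE))

class TileCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()
        self.misses = 0

    def fetch(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)      # LRU bookkeeping
        else:
            self.misses += 1                 # would hit the network here
            self.tiles[key] = f"tile-data{key}"
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)
        return self.tiles[key]

    def prefetch_around(self, x, y, vx, vy):
        # Predict the next position from the current heading and warm the
        # cache for the 3x3 tile neighborhood around it.
        px, py = x + vx, y + vy
        cx, cy = tile_of(px, py)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                self.fetch((cx + dx, cy + dy))

cache = TileCache(capacity=32)
x, y, vx, vy = 0.0, 0.0, 120.0, 0.0     # user moving east
cache.prefetch_around(x, y, vx, vy)
baseline = cache.misses
x, y = x + vx, y + vy                    # user actually moves as predicted
cache.fetch(tile_of(x, y))               # on-demand request is a cache hit
print(cache.misses == baseline)          # no new miss
```

A fuller version would vary tile resolution with context (coarse tiles along the predicted route, fine tiles at the destination), which is where the map-generalization research cited above becomes relevant.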
Mobile Enhancement of Human Perception
Mobile augmented reality technologies use virtual information representations (visual, aural, or other) to enhance human perception. Enhancing surveillance camera images to make crime perpetrators more recognizable is a simple nonmobile example. Mobile augmented reality (see Box 4.4) does this dynamically while the user moves through an environment. Heads-up displays, for instance, have been used to help jet-fighter pilots find their targets and to help civilian drivers see objects in the road ahead when visibility is poor.
Because mobile augmented reality requires both detailed geospatial databases describing the “fixed” world and location-aware computing support to match the location of the user with that description, it is a classic example of a spatiotemporal application of geospatial information. As the geodata infrastructure expands, such applications will become increasingly important. Consider, for example, what it might mean in terms of human life if firefighters could look at a burning building and see (as a transparent layer superimposed over the building) a representation of the activities on each floor (retail space on the first floor, a fitness center on the second, offices for the next five, and apartments above).
A research group at the Fraunhofer Institute for Computer Graphics in Germany has developed prototype methods for 3D display of geospatial information on mobile, handheld devices (Coors, in press).
The International Cartographic Association has played an important role in this research. See Weibel and Jones (1998) and <http://www.lsgi.polyu.edu.hk/WorkshopICA/CfP_Hongkong_2001_v32.pdf>.
BOX 4.4 Mobile Augmented Reality
Mobile augmented reality (MAR) combines computational models, location and head-orientation tracking, and algorithms for information filtering and display to enhance human perceptual capabilities. In this example, the user wears a see-through, head-mounted display; his position and head orientation are tracked as he moves. With the use of a model of the immediate environment that is stored on the wearable computer, computer graphics and text are generated and projected onto the real world using the heads-up display. The generated information is displayed in such a way as to correctly register (i.e., align) on the real world, thereby augmenting the user’s own view of the environment. Combining advanced research in MAR-specific algorithms for the user interface with recent developments in wearable computer, display, and tracking hardware has made it possible to construct mobile augmented reality systems using commercial, off-the-shelf components.
Among the most challenging geospatial applications of MAR is that of providing situational awareness to military personnel in the so-called “urban canyon.” Urban environments are complex, dynamic, and inherently three-dimensional. MAR can provide information such as the names of streets (street signs may be missing), building names, alternative routes, and detailed information such as the location of electrical power cutoffs. The location of potential threats—such as hidden tunnels, mines, or gunfire—can be provided, and routes can be modified on the basis of this information. Note that this information is displayed in a hands-off manner that does not block the user’s view of the real world, so he or she is able to focus attention on the task at hand. When linked by a network, these systems can enable the coordination of isolated ground forces. MAR usage could be helped along not only by continuing the MAR-specific research in interface/display and tracking/registration algorithms but also by developing methods to provide very high-resolution, correctly georegistered databases and new geographic information systems that can readily adapt to dynamic changes in the urban environment.
SOURCE: Adapted from a white paper, “Geospatial Requirements for Mobile Augmented Reality Systems,” prepared for the committee’s workshop by Lawrence Rosenblum.
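The registration step at the heart of mobile augmented reality (Box 4.4) can be illustrated with a deliberately simplified sketch: given the tracked user position and head yaw, a georeferenced annotation is mapped to a screen coordinate so that the label aligns with the real object. A working system would use full 3D pose (pitch and roll as well as yaw) and a calibrated display model; the field of view, screen size, and flat-earth geometry below are assumptions for illustration only.

```python
import math

# Simplified MAR registration: project a georeferenced annotation to a
# horizontal screen coordinate on a see-through display, given tracked user
# position and head yaw. A real system would use full 3D pose and a
# calibrated camera model; this flat-earth, yaw-only version is illustrative.

SCREEN_W = 800          # display width in pixels (assumed)
FOV = math.radians(60)  # horizontal field of view (assumed)

def project(user_xy, head_yaw, target_xy):
    """Return the screen x of target_xy, or None if outside the FOV."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing = math.atan2(dx, dy)             # angle from north, clockwise
    # Wrap the view-relative angle into (-pi, pi].
    rel = (bearing - head_yaw + math.pi) % (2 * math.pi) - math.pi
    if abs(rel) > FOV / 2:
        return None                          # annotation not in view
    # Map [-FOV/2, +FOV/2] linearly onto [0, SCREEN_W].
    return SCREEN_W * (0.5 + rel / FOV)

user = (100.0, 100.0)
# Looking due north; a building entrance 50 m north is dead ahead.
center = project(user, head_yaw=0.0, target_xy=(100.0, 150.0))
# The same entrance drifts right on screen as the user turns left.
turned = project(user, head_yaw=math.radians(-20), target_xy=(100.0, 150.0))
print(center, turned)
```

The sketch also makes the box's accuracy requirements concrete: any error in the tracked position or yaw shifts `rel` directly, which is why high-resolution, correctly georegistered databases and precise tracking are prerequisites for usable MAR.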
Mobile augmented reality imposes constraints on interaction and display that go well beyond those already discussed. One issue is how the system should determine which aspects of reality to augment with which components of information. Real-world point-and-click (originally described in Chapter 2) offers one approach. Building on the desktop graphical user interface (GUI) metaphor, it allows users to interact with objects (in this case, integrated real/virtual objects) using a pointer device such as a gyro mouse or a laser pointer. An alternative metaphor, real-world gesture-and-ask, combines voice, gestures, and other information (such as the direction of the user’s gaze) so the user can interact with data sources without a handheld pointer.
To make mobile augmented reality useful for emergency management, military deployment, and related rapid-response situations, systems must be able to cope with rapid changes, not only at the position of the observer but ongoing in the observer’s environment. This means that information about the environment must be collected at sufficient spatial
and temporal resolution, and at sufficiently quick intervals, to support real-time behavior. Ultimately, it will require the integrated exchange of information among many devices, including distributed repositories of geodata, embedded information collection devices, temporary autonomous devices for collecting information, and mobile receivers providing users with updated information.
The examples of mobile augmented reality described above all deal with enhancing human vision. Research here could also yield significant benefits for sight-impaired individuals, helping them overcome many obstacles to freedom of movement. High-resolution geospatial data could deliver key information about the immediate environment to mobile users, through sounds or tactile feedback. Similar techniques could be used to augment human hearing. Research investments in this area not only could make it possible for users to hear sounds outside their normal perceptual range or to mitigate hearing deficiencies but also could provide added sensory input in situations where vision already is fully engaged. The test bed proposed in Chapter 2 could be used to conduct an in-depth evaluation and refinement of the techniques proposed in this section.
Collaborative Work with Geospatial Information
Most of the science and decision making involved in geoinformation is the product of collaborative teams. Current geospatial technologies are a limiting factor because they do not provide any direct support for group efforts. Collaborative methods and technologies could bring improvements in many geospatial contexts. They could enable teams of scientists to cooperatively build integrated global-regional models of environmental processes and their drivers; allow group-based site selection for key facilities (e.g., brownfield development or a nuclear waste disposal site); support homeland security activities such as identifying potential targets, patterns of activity, or space-time relationships in intercepted messages; and enable collaborative learning experiences that incorporate synchronous and asynchronous interactions among distributed students, teachers, and domain experts. The core challenge is to support effective geocollaboration by developing technologies such as group-enabled GIS, team-based decision support systems, and collaborative geovisualization.
Understanding Collaborative Interactions with Geoinformation
In spite of the large body of research in computer-supported collaborative work and HCI, we know relatively little about technology-enabled collaborative human interaction with geospatial information. A systematic program of research is needed that focuses on group work with geospatial data and on the technologies that can enable and mediate that work.
Currently, the only practical way for teams to collaborate on geospatial applications is to gather in a single place and interact with analysis tools by having a single person “drive” the software on behalf of the group. Fundamental changes in geospatial interfaces will be needed to support two or more users at once. Although some of these relate to low-level system issues (e.g., the Windows operating system acknowledges only one mouse cursor), the focus in this report is on extending geospatial methods and tools to support group development and assessment activities.
In general, collaborative work can be characterized by its spatial and temporal components. That is, the location of participating individuals may be the same or different (i.e., face-to-face vs. distributed), and the individuals may interact at the same time or different times (synchronous vs. asynchronous). Technologically, it is the spatial distinction that is most important, because radically different kinds of technologies are needed to facilitate distributed work, particularly when it is conducted synchronously. Fundamental HCI research is needed to understand the implications of space and time for the design and use of tools for geocollaboration. It is not clear, for instance, to what extent different interfaces and representations are needed for each of the four cases.
Current HCI research on geospatial collaborative work centers on engineering goals—that is, on how to make tools that function effectively in distributed or asynchronous environments. Research investments also are needed at the more fundamental level of design principles for geocollaboration that can generalize more readily to new collaborative contexts and technologies.
Collaborative Geospatial Decision Making
Decision-making activities that use geodata as core input are a particularly important application domain requiring advances in collaborative technologies and understanding of their use. Examples of such activities include urban and regional planning, environmental management, the selection of locations for businesses, emergency preparedness and response, and the deployment of military personnel. Geospatial decision making is now usually a same-place activity, but that could change dramatically as technology begins to support geocollaboration.
A key challenge in geospatial decision making is to support group explorations of what-if scenarios. One possible solution is to extend and integrate existing technologies for the simulation of geographic processes
(both human and natural), access to distributed geodata repositories, and facilitation of group consensus building. An alternative solution would be to develop, from the ground up, methods and tools specifically intended to enable collaborative exploration of what-if scenarios. In either case, attention must be given not just to the technologies that support human interaction with dynamic geospatial models but also to interactions among team participants as they work with the models.
Collaborative work in problem domains such as crisis management or situational awareness will require technologies for viewing and responding to geospatial information in real time and for sharing diverse perspectives on the information and the problem it is being applied to. In addition, research will be needed into techniques for measuring uncertainty in data for strategic assessment and decision-making activities, as well as into mechanisms for identifying and compensating for collaborators with access to just pieces of the group’s information. The latter is a particularly difficult, pervasive problem for real-time geocollaboration. Participants often have access to different sources of information, each of which may be context sensitive, limited in scope, incomplete, and of variable quality (consider, for example, a disaster management scenario involving individuals in the field and in the command center). Limits on sharing information may be imposed by technological limitations of broadcasting or display capabilities, privacy and security concerns, time factors (crisis decisions often must be made immediately), and the fact that participants may not have the breadth of expertise to interpret all the relevant geodata.
Finally, current efforts center on the use of technology to make distributed collaboration work as much like same-place work as possible rather than on enhancing the process of collaboration itself. Additional research is needed to identify how collaborative efforts could take better advantage of what different participants bring to the process. This will be particularly important for decision-making scenarios (such as those already outlined) in which information access and expertise vary widely from one team member to another.
Teleimmersion12 can be considered a unifying grand challenge for multidisciplinary research at the intersection of geospatial information
science and information technology. It has been defined as the use of immersive, distributed virtual environments in which information is processed remotely from the users’ display environments (DeFanti and Stevens, 1999). The goal of teleimmersion is to provide natural virtual environments within which participants can meet and interact in complex ways. Because these environments become human-scale “spaces” and the collaboration often will deal with geographic-scale problems, a coordinated approach to human interaction with geoinformation and to teleimmersion is likely to have many payoffs. Achieving this goal will require focused research in at least five separate, but linked, domains:
High-performance computing. Significant computation is needed to process the massive volumes of data and complex models and to render scenes realistically—all in near real time. However, if decision makers have to wait for hours to compute and render results for a summit meeting that will last only minutes, the number of scenarios they can consider is obviously limited. Research is needed to determine when and how geographical problems should be decomposed for distributed computing environments such as cluster computers or the computational grid.
High-performance networking. Teleimmersion requires moving large data sets and, even more importantly, overcoming the latency and jitter problems introduced by remote, synchronous interactions. Indeed, latency can render a teleimmersive computing environment unusable because of the disorientation that occurs whenever there is a long lag between a user’s physical movement and the virtual representation of that movement. One way to overcome such problems is to establish quality-of-service guarantees (Bhatti and Crowcroft, 2000).
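Quality-of-service guarantees attack latency in the network itself; a complementary, application-level technique long used in distributed virtual environments is dead reckoning, in which each site extrapolates remote participants' motion from their last reported position and velocity. The sketch below uses illustrative numbers and is not drawn from the sources cited here; it simply shows how extrapolation reduces the displayed lag error while motion is steady.

```python
# Latency between a user's movement and its remote representation is the core
# problem described above. A standard application-level mitigation in
# distributed virtual environments (complementary to network QoS) is dead
# reckoning: the remote site extrapolates each participant's last reported
# position and velocity forward, so the displayed avatar lags less.
# All numbers here are illustrative.

def dead_reckon(last_pos, last_vel, latency):
    """Linear extrapolation of a remote participant's position."""
    return tuple(p + v * latency for p, v in zip(last_pos, last_vel))

# A participant walks east at 1.5 m/s; the last state update is 200 ms old.
true_pos = (10.0 + 1.5 * 0.2, 5.0)           # where they actually are now
stale_pos = (10.0, 5.0)                      # naive display: last update
predicted = dead_reckon((10.0, 5.0), (1.5, 0.0), latency=0.2)

err_naive = abs(true_pos[0] - stale_pos[0])  # ~0.3 m of lag error
err_dr = abs(true_pos[0] - predicted[0])     # ~0 while motion is steady
print(err_naive, err_dr)
```

Extrapolation mispredicts when a participant changes direction abruptly, so real systems bound the prediction error and send a corrective update when it is exceeded; the trade-off between update rate and prediction error is itself a bandwidth/latency design decision.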
Human-computer interaction. Open issues include appropriate interface metaphors and support for gestural interaction. For example, it is not clear what level of realism is appropriate for avatars (virtual personae) in multiuser systems. Face-to-face communication relies on gestures and facial expressions, and some researchers believe that realistic avatars facilitate more open communications among participants (Oviatt and Cohen, 2000).
Visualization. To fully exploit the potential of teleimmersion, new research on the visualization of high-dimensional, virtual geographies is needed. Key issues include determining what level of geographical realism is appropriate in a virtual, geoinformation-based world and the role of animation in teleimmersive environments.
Collaborative decision support. The migration from more traditional computer-supported cooperative work to collaborative virtual environments presents a number of significant research challenges (Benford et al.,
2001, present a comprehensive outline). Even if all of them can be addressed successfully, research investments will need to be made in issues specific to geocollaboration, such as those outlined earlier in this chapter.
Although discussed here in the context of teleimmersion, these are all cross-cutting domains at the intersection of geospatial information and information technology that have appeared at multiple points in this report. Each will assume increased importance as geospatial applications become increasingly prominent in our daily lives.
Armstrong, M.P. 1994. “Requirements for the Development of GIS-Based Group Decision-Support Systems.” Journal of the American Society for Information Science, 45(9):669-677.
Asghar, M.W., and K.E. Barner. 2001. “Nonlinear Multiresolution Techniques with Applications to Scientific Visualization in a Haptic Environment.” IEEE Transactions on Visualization and Computer Graphics, 7(1):76-93.
Benford, S., C. Greenhalgh, T. Rodden, and J. Pycock. 2001. “Collaborative Virtual Environments.” Communications of the ACM, 44(7):79-85.
Berners-Lee, T., J. Hendler, and O. Lassila. 2001. “The Semantic Web.” Scientific American, May.
Bhatti, S.N., and J. Crowcroft. 2000. “QoS-Sensitive Flows: Issues in IP Packet Handling.” IEEE Internet Computing, 4(4):48-57.
Blades, M. 1991. “Wayfinding Theory and Research: The Need for a New Approach.” In D. M. Mark and A.U. Frank (eds.), Cognitive and Linguistic Aspects of Geographic Space, pp. 137-165. Dordrecht, Netherlands: Kluwer Academic Publishers.
Chang, K. 2001. “From 5,000 Feet Up, Mapping Terrain for Ground Zero Workers.” New York Times, September 23.
Chen, C., and Y. Yu. 2000. “Empirical Studies of Information Visualization: A Meta-Analysis.” International Journal of Human-Computer Studies, 53:851-866.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1997. Modeling and Simulation: Linking Entertainment and Defense. Washington, D.C.: National Academy Press.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1999. Information Technology Research for Crisis Management. Washington, D.C.: National Academy Press.
Coors, V. In press. “3D Maps for Boat Tourists.” In J. Dykes, A.M. MacEachren, and M.-J. Kraak (eds.), Exploring Geovisualization. Amsterdam: Elsevier Science.
Cutmore, T.R.H., T.J. Hine, K.J. Maberly, N.M. Langford, and G. Hawgood. 2000. “Cognitive and Gender Factors Influencing Navigation in a Virtual Environment.” International Journal of Human-Computer Studies, 53(2):223-249.
Darken, R.P., T. Allard, and L.B. Achille. 1999. “Spatial Orientation and Wayfinding in Large-Scale Virtual Spaces II: Guest Editors’ Introduction.” Presence: Teleoperators & Virtual Environments, 8(6):3-6.
Davis, D., W. Ribarsky, T.Y. Jiang, N. Faust, and S. Ho. 1999. “Real-Time Visualization of Scalably Large Collections of Heterogeneous Objects.” In Proceedings of IEEE Visualization 99, pp. 437-440.
DeFanti, T., and R. Stevens. 1999. “Teleimmersion.” In I. Foster and C. Kesselman (eds.), The Grid: Blueprint for a New Computing Infrastructure, pp. 131-155. San Francisco, Calif.: Morgan Kaufmann Publishers.
Djurcilov, S., and A. Pang. 2000. “Visualizing Sparse Gridded Datasets.” IEEE Computer Graphics and Applications, 20(5):52-57.
Döllner, J., K. Baumann, K. Hinrichs, and T. Ertl. 2000. “Texturing Techniques for Terrain Visualization.” In Proceedings of the IEEE Visualization 00, pp. 227-234.
Dungan, J.L. 1999. “Conditional Simulation: An Alternative to Estimation for Achieving Mapping Objectives.” In F. van der Meer, A. Stein, and B. Gorte (eds.), Spatial Statistics for Remote Sensing, pp. 135-152. Dordrecht, Netherlands: Kluwer Academic Publishers.
Elvins, T.T., D.R. Nadeau, R. Schul, and D. Kirsh. 2001. “Worldlets: 3-D Thumbnails for Wayfinding in Large Virtual Worlds.” Presence: Teleoperators and Virtual Environments, 10(6):565-582.
Fisher, P. 1994. “Hearing the Reliability in Classified Remotely Sensed Images.” Cartography and Geographic Information Systems, 21(1):31-36.
Früh, C., and A. Zakhor. 2001. “Fast 3D Model Generation in Urban Environments.” International Conference on Multisensor Fusion and Integration for Intelligent Systems 2001, Baden-Baden, Germany, pp. 165-170.
Golledge, R.G. 1992. “Place Recognition and Wayfinding: Making Sense of Space.” Geoforum, 23(2):199-214.
Jankowski, P., and T. Nyerges. 2001. Geographic Information Systems for Group Decision Making: Towards a Participatory, Geographic Information Science. New York: Taylor & Francis.
Jedrysik, P.A., J.A. Moore, T.A. Stedman, and R.H. Sweed. 2000. “Interactive Displays for Command and Control.” Aerospace Conference Proceedings, IEEE, Big Sky, Mont., pp. 341-351.
Kao, D., J. Dungan, and A. Pang. 2001. “Visualizing 2D Probability Distributions from EOS Satellite Image-Derived Data Sets: A Case Study.” Proceedings of Visualization 01, IEEE, San Diego, Calif.
Krygier, J. 1994. “Sound and Geographic Visualization.” In A.M. MacEachren and D.R.F. Taylor (eds.), Visualization in Modern Cartography, pp. 149-166. Oxford, UK: Pergamon.
Levkowitz, H., R.M. Pickett, S. Smith, and M. Torpey. 1995. “An Environment and Studies for Exploring Auditory Representations of Multidimensional Data.” In G. Grinstein and H. Levkowitz (eds.), Perceptive Issues in Visualization, pp. 47-58. New York: Springer.
Lodha, S.K., C.M. Wilson, and R.E. Sheehan. 1996. “LISTEN: Sounding Uncertainty Visualization.” Visualization 96, pp. 189-195. IEEE, San Francisco, Calif.
MacEachren, A.M., and M.-J. Kraak. 2001. “Research Challenges in Geovisualization.” Cartography and Geographic Information Science, 28(1):3-12.
Mark, D.M., C. Freksa, S.C. Hirtle, R. Lloyd, and B. Tversky. 1999. “Cognitive Models of Geographical Space.” International Journal of Geographical Information Science, 13(8):747-774.
Ogi, T., and M. Hirose. 1997. “Usage of Multisensory Information in Scientific Data Sensualization.” Multimedia Systems, 5:86-92.
Oviatt, S., and P. Cohen. 2000. “Multimodal Interfaces That Process What Comes Naturally.” Communications of the ACM, 43(3):45-53.
Parish, Y., and P. Müller. 2001. “Procedural Modeling of Cities.” In Proceedings of SIGGRAPH 01, pp. 301-308. New York: ACM Press.
Passini, R. 1984. “Spatial Representations: A Wayfinding Perspective.” Journal of Environmental Psychology, 4:153-164.
Slocum, T.A., C. Blok, B. Jiang, A. Koussoulakou, D.R. Montello, S. Fuhrmann, and N.R. Hedley. 2001. “Cognitive and Usability Issues in Geovisualization.” Cartography and Geographic Information Science, 28(1):61-75.
Weibel, R., and C.B. Jones (eds.). 1998. “Computational Perspectives on Map Generalization.” Special Issue on Map Generalization, GeoInformatica, 2(4):307-314.