2  Location-Aware Computing
Mobile computing is commonly associated with small form-factor devices such as PDAs and untethered (wireless) connectivity, which, in turn, depend on a computing infrastructure that can be used to determine location. Such devices provide access to information processing and communication capabilities but do not necessarily have any awareness of the context in which they operate. “Context-aware computing” refers to the special capability of an information infrastructure to recognize and react to real-world context. Context, in this sense, includes any number of factors, such as user identity, current physical location, weather conditions, time of day, date or season, and whether the user is asleep or awake, driving or walking. Perhaps the most critical aspects of context are location and identity. Location-aware computing systems respond to a user’s location, either spontaneously (e.g., warning of a nearby hazard) or when activated by a user request (e.g., is it going to rain in the next hour?). Such a system also might utilize location information without the user being aware of it (taking advantage of a nearby compute server to carry out a demanding task).
Location-aware computing and location-based services are extremely active areas of research that have important implications for future availability of, and access to, geospatial information. Location sensing could be used in coastal areas, for instance, to pinpoint relevant information on meteorological and wave conditions for commercial fishermen, shipboard researchers, or recreational boaters. Other examples include the delivery of location-specific information, such as notifications of traffic congestion, warnings of severe weather conditions, announcements of nearby educational or cultural events, and the three scenarios posed in Chapter 1. Location-aware computing is a special case of broader distributed systems. The challenges intrinsic to distributed systems apply to location-aware computing as well. In addition, location-aware systems face constraints imposed by wireless communications and by the need to operate with limited computational and power resources.
This chapter explores the current state of research and key future challenges in these areas. Because the committee’s resources were limited, the discussion of current technologies focuses on the rapidly growing areas of data acquisition and delivery, which are being fueled by advances in location and orientation sensing, wireless communication, and mobile computing. Advances in these technologies could have a great effect on how geospatial data are acquired, how and with what quality they can be delivered on demand, and how mobile and geographically distributed systems are designed.
TECHNOLOGY AND TRENDS
Location-aware computing is made possible by the convergence of three distinct technical capabilities: location and orientation sensing, wireless communication, and mobile computing systems. This section summarizes the current state of these capabilities and provides some guidance on their probable future evolution.
Location and Orientation Sensing
The Global Positioning System (GPS) is the most widely known location-sensing system today. Using time-of-flight information derived from radio signals broadcast by a constellation of satellites in earth orbit, GPS makes it possible for a relatively cheap receiver (on the order of $100 today) to deduce its latitude, longitude, and altitude to an accuracy of a few meters. The expensive satellite infrastructure is maintained by the U.S. Department of Defense,1 but many civilian users benefit from the investment. Indeed, there has been a veritable explosion of GPS-based services for the consumer market over the past few years.
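The lateration principle behind GPS can be illustrated in two dimensions. The sketch below is illustrative only: it uses plane geometry with hypothetical anchor coordinates and exact ranges, not real GPS pseudorange processing, which must also solve for the receiver's clock bias and work in three dimensions.

```python
def trilaterate(anchors, ranges):
    """Solve a 2-D position from three or more (x, y) anchors and
    measured distances to each.

    Subtracting the first range equation from the others linearizes the
    system, which is then solved via the normal equations (least squares).
    """
    (x0, y0), r0 = anchors[0], ranges[0]
    # Build the linear system A p = b from pairs (anchor i, anchor 0).
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations for the 2-unknown least-squares problem.
    ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
    atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y
```

With three anchors and noise-free ranges the result is exact; with more anchors and noisy ranges, the least-squares solution averages out some of the error, which is the same reason GPS accuracy improves when more satellites are visible.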
Although certainly important, GPS is not a universally applicable location-sensing mechanism, for several reasons. It does not work indoors, particularly in steel-framed buildings, and its resolution of a few meters is not adequate for all applications. GPS uses an absolute coordinate system, whereas many applications (e.g., guidance systems for robotic equipment) require coordinates relative to specific objects. The specialized components needed for GPS impose weight, cost, and energy consumption requirements that are problematic for mobile hardware. Consequently, a number of other mechanisms for location sensing have been developed, and this continues to be an active area of research.

1. The European Union plans to launch Galileo, a purely civilian equivalent of the U.S. GPS satellite network, by 2008. See <http://www.computerworld.com/mobiletopics/mobile/story/0,10801,69580,00.html>.
A recent survey article by Hightower and Borriello (2001) offers a good summary of the current state of the art in location sensing. In Table 2.1, adapted from that article, contemporary location-sensing technologies are compared feature by feature. A total of 15 distinct techniques are represented. A majority of them express location information in terms of physical units, such as meters or latitude and longitude (called “Physical”); the others use abstract terms (“Symbolic”) that typically have meaning to the user, such as “at 316 Gladstone Road.” In addition, location can be specified with respect to a shared reference grid (“Absolute”), such as latitude and longitude, or in terms of some relative frame of reference (“Relative”) such as the main entry to a building. “Use exposes location” means that the device must identify itself or its user to the infrastructure (as with cellular phone usage) before current location can be determined. As the table clearly shows, today’s technologies vary significantly in their capabilities and infrastructure requirements. Accuracy ranges from a millimeter to approximately 300 meters, with scales of operation ranging from within a single room to around the world. System costs vary as well, reflecting different trade-offs among device portability, device expense, and infrastructure requirements.
For applications involving mobile objects, orientation sensing is also important. One example of recent research in this area is the description by Priyantha et al. (2001) of a mobile compass. Using fixed, active beacons and carefully placed, passive ultrasonic sensors, they show how to calculate compass orientation to within a few degrees, using precise, subcentimeter differences in distance estimates from a beacon to each sensor on the compass. Their algorithm includes analysis of signal arrival times to produce a robust estimate of the device’s orientation.
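The geometric idea can be sketched simply. Assuming far-field conditions and a hypothetical two-sensor device, the bearing to a beacon follows from the difference between the two distance estimates; this is a deliberate simplification of the approach of Priyantha et al., which uses additional sensors and arrival-time analysis for robustness.

```python
import math

def heading_from_beacon(d_left, d_right, sensor_separation):
    """Estimate the bearing to a fixed beacon from the difference between
    the distances measured at two sensors a known distance apart.

    Far-field approximation: sin(theta) ~= (d_left - d_right) / separation,
    where theta is measured from the perpendicular to the sensor axis.
    Returns degrees.
    """
    ratio = (d_left - d_right) / sensor_separation
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))
```

For example, a 10 cm distance difference across a 20 cm sensor baseline corresponds to a 30-degree bearing; subcentimeter distance accuracy is what makes few-degree orientation estimates possible at this scale.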
Wireless Communication
The past decade has seen explosive growth in the deployment of wireless communication technologies. Although voice communication (cell phones) has been the primary driver, there also has been substantial growth in data communication technologies. The IEEE 802.11 family of wireless LAN technologies (IEEE, 1997) is now widely embraced, with many vendors offering hardware that supports it. The slowest member of
TABLE 2.1 Characteristics of Location-Sensing Technologies

| Technique | Method | Attributes | Accuracy (Precision) | Scale | Cost | Limitations |
|---|---|---|---|---|---|---|
| GPS | Radio time-of-flight lateration | Physical, absolute | 1-5 m (95-99%) | 24 satellites worldwide | Expensive infrastructure, $100 receivers | Not indoors |
| Active Badges | Diffuse infrared cellular proximity | Symbolic, absolute, use exposes location | Room size | One base per room, badge per base per 10 sec | Administration costs, cheap tags and bases | Sunlight and fluorescent light interfere with infrared |
| Active Bats | Ultrasound time-of-flight lateration | Physical, absolute, use exposes location | 9 cm (95%) | One base per 10 sq m, 25 computations per room per sec | Administration costs, cheap tags and sensors | Required ceiling sensor grid |
| MotionStar | Scene analysis, lateration | Physical, absolute, use exposes location | 1 mm, 1 ms, 0.1° (≈100%) | Controller per scene, 108 sensors per scene | Controlled scenes, expensive hardware | Control unit tether, precise installation |
| VHF Omni-Directional Ranging | Angulation | Physical, absolute | 1° radial (≈100%) | Several transmitters per metropolitan area | Expensive infrastructure, inexpensive aircraft receivers | 30-140 nautical miles, line of sight |
| Cricket | Proximity lateration | Symbolic, absolute/relative | 4 × 4 ft regions (≈100%) | ≈1 beacon per 16 sq ft | $10 beacons and receivers | No central management, receiver computation |
| MSR RADAR | 802.11 RF scene analysis and triangulation | Physical, absolute, use exposes location | 3-4.3 m (50%) | Three bases per floor | 802.11 network installation, ≈$100 wireless NICs | Wireless NICs required |
| PinPoint 3D-iD | RF lateration | Physical, absolute, use exposes location | 1-3 m | Several bases per building | Infrastructure installation, expensive hardware | Proprietary, 802.11 interference |
| Avalanche transceivers | Radio signal strength proximity | Physical, relative | Variable, 60-80 m range | One transceiver per person | ≈$200 per receiver | Short radio range, unwanted signal attenuation |
| Easy Living | Vision, triangulation | Symbolic, absolute, use exposes location | Variable | Three cameras per small room | Processing power, installation of cameras | Ubiquitous public cameras |
| Smart Floor | Physical contact proximity | Physical, absolute, use exposes location | Spacing of pressure sensors (100%) | Complete sensor grid per floor | Installation of sensor grid, creation of footfall training data set | Recognition may not scale to large populations |
| Automatic ID Systems | Proximity | Symbolic, absolute/relative, use exposes location | Range of sensing phenomenon (RFID <1 m) | Sensor per location | Installation, variable hardware costs | Must know sensor locations |
| Wireless Andrew | 802.11 proximity | Symbolic, absolute, use exposes location | 802.11 cell size (≈100 m indoor, 1 km free space) | Many bases per campus | 802.11 deployment, ≈$100 wireless NICs | Wireless NICs required, RF cell geometries |
| E911 | Triangulation | Physical, absolute, use exposes location | 150-300 m (95%) | Density of cellular infrastructure | Upgrading phone hardware or cell infrastructure | Only where cell coverage exists |
| SpotON | Ad hoc lateration | Physical, relative, use exposes location | Depends on cluster size | Cluster of at least two tags | $30 per tag, no infrastructure | Attenuation less accurate than time of flight |

SOURCE: Adapted from Table 1 of Hightower and Borriello (2001).
this family provides a bandwidth of 2 Mb/s over a range of a few hundred feet in open air; faster members of the family offer 11 Mb/s and 54 Mb/s over much smaller ranges. Bluetooth (Haartsen, 2000) is a standard that is backed by many hardware and software vendors. Although it offers no bandwidth advantage relative to 802.11, it has been designed to be cheap to produce and frugal in its power demands.
Infrared wireless communication (Infrared Data Association, IrDA) (Williams, 2000) is the lowest-cost wireless technology available today, primarily because it is the mass-market technology used in TV remote controls. Most laptops, many handheld computers, and some peripheral devices such as printers are manufactured today with built-in support for IrDA. These devices typically support a low-bandwidth version at 115 Kb/s and a higher bandwidth version at 4 Mb/s. Infrared wireless communication must be by line of sight, with range limited to a few feet. Infrared wireless communication is adversely affected by high levels of ambient light, such as prevail outdoors during daylight hours.
Although it is difficult to foresee what new wireless technologies will emerge in the future, it is clear that advances will be constrained by trade-offs among four factors: frequency, bandwidth, range, and density of wired infrastructure (Rappaport, 1996; CSTB, 1997). Devices operating at a higher frequency could have greater bandwidth but would require major advances in high-frequency VLSI design. Advances also will be constrained by policy decisions on frequency usage (spectrum allocation) by the Federal Communications Commission. Range is fundamentally related to transmission power, but generating high power at high frequency always has been a difficult technical challenge. Further, propagation at higher frequencies typically relies on line of sight, because common objects such as walls are not transparent to radio waves.2 The standard solution to limited range and frequency allocation, coupled with line-of-sight constraints, is to use a wired infrastructure with base stations that define cells of wireless coverage around them. This is the basis of the now-widespread cell phone technology and wireless LAN technologies such as 802.11. Wired infrastructures impose significant costs for conditioning the environment, with density (and hence cost) increasing with bandwidth. Cheap, high-bandwidth, low-power, and ubiquitous wireless coverage will not be attained easily, so location-aware computing systems will have to be designed to cope with these realities. This is not a short-term annoyance but a core, long-term requirement of successful system architectures.
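The frequency-range trade-off can be made concrete with the standard free-space (Friis) path-loss formula. The sketch below ignores walls, antenna gains, and multipath, so real indoor losses are substantially higher; it shows only the idealized baseline.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB between isotropic antennas:
    20*log10(4 * pi * d * f / c). No obstructions or multipath."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# At a fixed range, moving to a higher carrier frequency costs extra
# link budget: 20*log10(f2/f1) dB, independent of distance.
loss_24 = free_space_path_loss_db(100, 2.4e9)  # ~80 dB at 100 m, 2.4 GHz
loss_58 = free_space_path_loss_db(100, 5.8e9)  # ~7.7 dB worse at 5.8 GHz
```

Higher frequency thus costs range (or transmit power) even before wall penetration is considered, which is one reason higher-bandwidth systems end up with smaller cells and denser wired infrastructure.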
Mobile Computing Systems
Hardware for mobile computing has made impressive strides over the past 5 years. Lightweight and compact laptops and handheld computers are now extensively used by the general public. Although less widespread, wearable computers are beginning to make an impact in specialized applications. Where progress has been slow is in the integration of mobile hardware into systems that seamlessly bridge a user’s desktop, his or her activities while mobile, and the Internet computing world. Four fundamental issues (Satyanarayanan, 1996) complicate the design and implementation of such systems:
- Mobile elements are resource-poor relative to static elements. For a given cost and level of technology, considerations of weight, power, size, and ergonomics will exact a penalty in computational resources such as processor speed, memory size, and disk capacity. Although mobile elements will improve in absolute ability, they typically will operate at lower resource levels than static elements.
- Mobility is inherently vulnerable. In 2001, nearly 591,000 laptops were stolen in the United States, an increase of 53 percent from 2000.3 A laptop or handheld machine carried by a mobile user is more vulnerable to theft than a desktop in a locked office is. Portable computers are also more prone to accidental loss or physical damage. The vulnerability of mobile systems extends to the privacy and confidentiality of the data that may be stored on or accessible through them.
- Wireless connectivity is highly variable in performance and reliability. Some buildings offer reliable, high-bandwidth wireless connectivity; others may support only inconsistent or low levels of bandwidth. The situation is particularly problematic in outdoor locations, where a mobile client may have to rely on a low-bandwidth wireless network with significant gaps in coverage.
- Mobile elements rely on a limited energy source. Attention to power consumption must span many levels of hardware and software to be fully effective (NRC, 1997). Despite the fact that the demands on mobile computers continue to grow, battery technology is improving only slowly. Wireless transmission is one of the large users of energy, a situation that is not likely to diminish over time.
It is important to note that these issues are not artifacts of current technology but are intrinsic to mobility. Collectively, they complicate the design of mobile computing systems. Consequently, although significant research progress has been made, the design and implementation of mobile computing systems remain problematic. The limited commercial deployment of mobile computing systems restricts options available to scientists for experimentation.

3. Data from <http://www.safeware.com/losscharts.htm>.
RESEARCH CHALLENGES
This section outlines the key research challenges in location-aware computing that were raised at the committee’s workshop. The first set of challenges comprises the obstacles that must be overcome to ensure effective deployment of a location-sensing infrastructure. The second set of challenges involves the transient nature of location information and the resource constraints of mobile devices. These constraints complicate the use of location information in real-world applications. The third set of challenges lies in the arenas of privacy and security. The final set of challenges pertains to the creation of novel applications that exploit location awareness.
Effective Infrastructure Deployment
The deployment of location-sensing technologies will grow over time as service providers take advantage of the commercial opportunities they offer. Although commercial applications will absorb many of the costs associated with the deployment of location-sensing infrastructure, several key issues can be addressed effectively only through publicly funded research. Those issues are outlined below.
Technology-Independent API for Sensing Location
No single location-sensing technology is likely to become dominant; there are simply too many dimensions along which location-sensing mechanisms can vary (Hightower and Borriello, 2001). Examples include indoor versus outdoor use, accuracy, precision, energy usage, and the extent to which there is potential loss of privacy for users of the technology. As a result, the choice of location-sensing technology is likely to depend on the usage context, and various technologies are likely to coexist well into the future.
Unfortunately, this fragmentation of the location-sensing technology market has negative implications for location-aware computing software. First, it will necessitate a significant amount of technology-specific code. When a new technology is introduced, individual applications will have to be modified to take advantage of it, making long-term software maintenance problematic. This is likely to slow the adoption of new technology, and it may even stifle innovation because of the perceived difficulty of gaining acceptance. A second consequence of market fragmentation is that it makes it very difficult to develop applications that can be used in a variety of location-sensing contexts.
These considerations argue for research into the creation of a technology-independent, high-level software application programming interface (API) for location sensing.4 The operating system interface is the most obvious level for this API, although the middleware level also might be feasible. By freeing application writers from the specifics of location-sensing technologies, an API will encourage the creation of long-lived applications. It also can encourage the creation of new location-aware applications by helping to amortize development efforts. Further, by lowering the barriers to adoption, it can stimulate new location-sensing technologies.
Although the design and validation of such an API remain an open research problem, certain attributes are already clear. The committee believes that the API must be an open standard rather than proprietary to one company or a consortium of companies. It must mask technology-dependent attributes of the underlying technology. It should allow specification of desired accuracy and discovery of actual accuracy. It should be capable of dynamically combining location information from multiple sources in a manner that is transparent to applications. Although details are sketchy, there have been reports of early work on standardization in this arena (Peterson, 2001).
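As a rough illustration of what such an API might look like, the sketch below defines a hypothetical provider interface and a fusion layer above it. All names and the selection policy are invented for illustration and are not drawn from any proposed standard; the point is that applications see only the technology-independent facade, which can combine fixes from multiple underlying technologies.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LocationFix:
    latitude: float
    longitude: float
    accuracy_m: float   # accuracy reported with this fix
    source: str         # which underlying technology produced it

class LocationProvider(ABC):
    """One concrete subclass per sensing technology (GPS, 802.11, badges...)."""
    @abstractmethod
    def current_fix(self) -> Optional[LocationFix]: ...

class FusedLocationAPI:
    """Technology-independent facade: applications program against this
    interface only, never against a specific sensing technology."""
    def __init__(self, providers: List[LocationProvider]):
        self.providers = providers

    def locate(self, desired_accuracy_m: float) -> Optional[LocationFix]:
        """Return the most accurate available fix that meets the
        application's requested accuracy, or None if none qualifies."""
        fixes = [f for p in self.providers if (f := p.current_fix())]
        fixes = [f for f in fixes if f.accuracy_m <= desired_accuracy_m]
        return min(fixes, key=lambda f: f.accuracy_m, default=None)
```

Note how the design reflects the attributes listed above: applications specify desired accuracy, can discover the actual accuracy of the fix they get, and never see which technology supplied it.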
Timing will play a crucial role in the long-term success of this endeavor. Because a standard effectively “freezes” some aspects of a technology, it is important that it not be defined too soon or too late. Premature standardization can result in a technically inferior standard because adequate experience and research results are not available at the point of definition. On the other hand, excessive delay can lead to a proliferation of commercial implementations and may deter eventual convergence. As David Clark, a leading networking researcher at the Massachusetts Institute of Technology, articulated, the optimal point for standards definition is after the research community has gained experience with one or more prototypes but before heavy investments have been made by industry. In this case, first adopters are likely to gain a significant market advantage, so there will be a real need to guard against premature standardization. A standard should be proposed only after adequate research has been conducted and validated through reference implementations.
Cost-Effective Deployment Strategies
Location-sensing technologies are expensive to deploy today. The share of system costs incurred by the infrastructure vs. that borne by end users varies significantly in current technologies. With GPS, for example, the end-user cost is relatively small but the cost of the satellite infrastructure is enormous, whereas the split is more balanced in an active badge system (see Table 2.1). Moreover, although hardware costs are likely to decline as volume increases, many technologies incur hidden costs that are much harder to reduce. An approach based on signal strength maps, for instance, requires creating the maps in every location where the system is deployed. The maps must be re-created whenever the physical topology of the location is modified in any significant way (e.g., when a store makes changes to a large merchandise display). Another example of a hidden (albeit necessary) cost is the need to monitor and audit the release of location information to guard against privacy lawsuits.
The growth of location-aware computing will be hindered as long as the costs of deploying and managing location-sensing systems remain high. Fundamental research on techniques to rapidly calibrate an environment for specific location-sensing technologies would reduce these costs. Two very different approaches are conceivable. One approach is to develop modeling and analysis techniques, predictive algorithms, tools for optimizing the deployment of infrastructure, and self-configuring technologies that could eliminate or minimize the need for human intervention and calibration. Some of the existing research on modeling the propagation of wireless signals may be relevant, but it will need to be substantially extended and refined. A different approach is to retain physical calibration but to develop automation techniques to speed the process. An intriguing possibility is the use of mobile robots for calibration. For example, rather than having a human engineer sample signal strengths, it might be possible to program a robot to construct a signal strength map. To further speed the process, multiple mobile robots might exploit parallelism. The results of robotic research in planning and team coordination are relevant here.
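A minimal sketch of robot-assisted calibration and fingerprint lookup follows. The function names are hypothetical, and a simulated radio survey stands in for the robot's actual measurements; a real system would also handle noise, averaging, and map maintenance as the environment changes.

```python
def build_signal_map(grid_points, measure_rssi):
    """Drive a (simulated) robot to each grid point and record the signal
    strength it observes from every base station there.

    measure_rssi(point) -> dict mapping base-station id to dBm; in a real
    deployment this call is the robot's radio survey at that point.
    """
    return {point: measure_rssi(point) for point in grid_points}

def locate(signal_map, observed):
    """Nearest-fingerprint lookup: return the calibrated point whose
    recorded signature is closest (squared difference, in dB) to the
    signature a device observes."""
    def distance(signature):
        return sum((signature[b] - observed[b]) ** 2 for b in observed)
    return min(signal_map, key=lambda p: distance(signal_map[p]))
```

Multiple robots could split the grid to exploit parallelism, and the same survey loop could be rerun automatically whenever the physical topology changes.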
Opportunistic Acquisition of Sensor Data
The committee foresees a growing number of entities that combine location-sensing technologies with other types of sensors. Many cars today, for example, are equipped with both GPS and an antilock brake system (ABS). Adding wireless transmission would complete the elements necessary for the automated collection of road surface conditions. For example, during a snowstorm, road maintenance personnel might wish
to monitor road conditions to determine how best to allocate their resources. Every time an ABS detected the onset of wheel lockup (e.g., due to icy conditions), its GPS coordinates could be transmitted to a regional data collection site. Many ABS activations over a short period of time might signal icy conditions on a segment of road. Road maintenance personnel, using data mining and visualization software, could identify the problem locations and direct salt trucks to the needed areas. Real-time deployment of resources to needed areas could prevent accidents, conserve labor, reduce the use of salt, and so on. The key sensing capability (in this case, antilock brakes) is of value in and of itself, but adding locational and wireless communication capabilities amplifies the value of the primary capability. This is referred to as “opportunistic data acquisition.”
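One way the regional collection site might flag trouble spots is to bin the reported coordinates into coarse grid cells and count activations per cell over the reporting window. The cell size and threshold below are illustrative assumptions; a production system would bin by road segment rather than by raw grid cell.

```python
from collections import Counter

def icy_segments(reports, cell_size_deg=0.001, threshold=5):
    """Flag grid cells with many ABS activations in the reporting window.

    reports: iterable of (lat, lon) coordinates transmitted by vehicles.
    Coordinates are snapped to a coarse grid (0.001 degrees is roughly
    100 m at mid-latitudes); cells whose activation count reaches the
    threshold are returned as candidate trouble spots.
    """
    counts = Counter(
        (round(lat / cell_size_deg), round(lon / cell_size_deg))
        for lat, lon in reports
    )
    return [
        (cell[0] * cell_size_deg, cell[1] * cell_size_deg)
        for cell, n in counts.items()
        if n >= threshold
    ]
```

The output cells would feed the data mining and visualization software mentioned above, which in turn directs the salt trucks.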
If we are to take advantage of such opportunities in the future, an investment must be made in research that explores appropriate techniques for data acquisition and redistribution. Some of the challenges to be addressed are scalability, mobile sensor sources, appropriate information-sharing policies, and mechanisms for preserving privacy5 without sacrificing functionality. Research also will be needed on how location-sensing systems might be designed to reasonably exploit new data acquisition opportunities as they arise.6
Adaptive Resource Management
Mobility exacerbates the tension between autonomy and interdependence that is characteristic of all distributed systems. The relative resource poverty of mobile elements, as well as their lower levels of security and robustness, argues for reliance on static servers. At the same time, the need to cope with unreliable, low-performance networks and to be sensitive to power consumption argues for self-reliance. Any viable approach to mobile computing must strike a balance between these competing concerns. The balance cannot be static; as the circumstances of a mobile client change, the system must be able to react, dynamically reassigning the responsibilities of client and server. In other words, mobile clients must be adaptive. This may occur in an application-transparent manner that is compatible with existing software, or in an application-aware manner involving a collaborative relationship between applications and the operating system (Noble et al., 1997). The need for adaptation complicates many fundamental aspects of mobile computing systems.
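An application-aware adaptation policy might look like the following sketch, in which a hypothetical video application picks a fidelity level from the bandwidth and battery state the system reports. The thresholds and fidelity names are invented for illustration; the substantive point is that the application, not the operating system alone, decides how to degrade.

```python
def choose_fidelity(bandwidth_kbps, battery_fraction):
    """Pick a presentation fidelity for a hypothetical video application
    from the resource levels the operating system currently reports.
    Thresholds are illustrative only."""
    if bandwidth_kbps >= 2000 and battery_fraction > 0.5:
        return "full-color, full-rate"
    if bandwidth_kbps >= 500:
        return "reduced frame rate"
    if bandwidth_kbps >= 64:
        return "grayscale thumbnails"
    return "text summary only"
```

In an application-transparent design, by contrast, the system would make an equivalent decision (e.g., by transcoding at a proxy) without the application's involvement.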
Transient Location Information Management
The ability to manage information about the availability of devices based on their location is an enabling technology for many of the applications discussed in this report. In the case of mobile devices, the situation is complicated by the fact that location information is transient. Box 2.1 describes the issues involved in tracking and predicting transient locations. Considerable progress has been made in this arena, but much more research will be required before that information can be applied in real-world situations, particularly when resources such as power and communications bandwidth are constrained.

BOX 2.1 Location Management

The management of transient location information is a fundamental component of many of the examples discussed in this report, including the three scenarios introduced in Chapter 1. Although tracking the location of moving objects appears straightforward, techniques for doing so have been developed only recently. Suppose an object starts at the corner of 57th Street and Eighth Avenue in New York City at 7:00 AM and heads for the intersection of Oak and State streets in Chicago. A trajectory can be constructed using an electronic map geocoded with distance and travel-time information for every road section. Given that the trip has a starting time and assuming that speed is constant, we can compute the time at which the object will arrive at the beginning of each straight-line segment on the path. This trajectory information gives the route of the moving object, along with the time at which it will be at each point on the route. It is only an approximation of the object's expected motion, because the object does not necessarily move in straight lines or at constant speed. The trajectory information is stored on a remote server and revised according to location updates from the moving object and real-time traffic conditions obtained from traffic Web sites. The server can compute the expected location of the moving object at any point in time; for example, if it is known that the object is at location (x5, y5) at 5:00 PM and at location (x6, y6) at 6:00 PM, and it moves in a straight line at constant speed between the two locations, then its location at 5:16 PM can be computed, before or after that time occurs.

Uncertainty

The location of a moving object is inherently imprecise, because the database location (i.e., the object location stored in the database) cannot always be identical to the actual location of the object. Assuming that one can control the amount of uncertainty in the system, how should it be determined? Obviously, lowering the uncertainty would come at a cost. For example, if a moving object transmits its location to a location database every x minutes or every x miles, then reducing x would decrease the uncertainty in the system but increase bandwidth consumption and location-update processing cost.

Predicting Future Locations

Determining whether a trajectory will be affected by a traffic incident is not a simple matter—it requires prediction capabilities. For example, suppose that according to the current information, Jane's van is scheduled to pass through a particular highway section 20 minutes from now, and suppose that a Web site currently reports a traffic jam in that area. Will Jane's expected arrival time at her destination be affected? Clearly it depends on whether the jam has resolved itself by the time she arrives. If sufficient historical information is available, a traffic simulation may be able to predict the likelihood of this happening. In other cases, the system may not have a priori information about the future motion of an object. For example, customers of a mobile commerce application do not normally divulge their planned routes to merchants. If it were known, however, that at 9:00 AM the Daleys were going to be close to a store where a sale matches their customer profile, the system could transmit a coupon to them, allowing them to plan a stop. (Other kinds of applications also benefit from location prediction; in wireless systems, for example, it enables optimized allocation of bandwidth to cells.) Recently developed methods can predict the location of a moving object at future times, based on the fact that objects often have some degree of regularity in their motion. A typical example is the home-office-home pattern. If this pattern can be detected, location prediction is relatively easy for rush hour. Note that patterns are only partially periodic (i.e., sometimes only part of the motion repeats). For example, Joe usually may travel from home to work along a fixed route between 7:00 and 8:00 every workday morning and back home between 5:00 and 6:00 PM, but he may do other things and go other places during the rest of the day. Further, the patterns are not necessarily repeated perfectly; the home-to-work trajectory may be different from one day to the next because of different traffic conditions. Lastly, the motion can have multiple periodic cycles (Joe may go fishing every Saturday and every other Sunday). In location prediction, the goal is to detect motion patterns that can be partially periodic, not perfectly repeated, and that have multiple periodic cycles and to use this information to estimate where the object is most likely to be.

SOURCE: Adapted from a white paper, "The Opportunities and Challenges of Location Information Management," prepared for the committee's workshop by Ouri Wolfson.
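The server-side computation described in Box 2.1 (expected location between two timestamped fixes, under the straight-line, constant-speed assumption) reduces to linear interpolation. The sketch below uses abstract map coordinates and minutes past an arbitrary epoch; real trajectory servers interpolate along road segments, not raw straight lines.

```python
def expected_location(fix_a, fix_b, t):
    """Interpolate the object's expected (x, y) position at time t
    between two timestamped fixes, assuming straight-line motion at
    constant speed (the approximation described in Box 2.1).

    Each fix is a (t, x, y) tuple with t in minutes past some epoch.
    The formula extrapolates as written if t lies outside [ta, tb].
    """
    (ta, xa, ya), (tb, xb, yb) = fix_a, fix_b
    frac = (t - ta) / (tb - ta)
    return xa + frac * (xb - xa), ya + frac * (yb - ya)
```

For the box's example, a fix at 5:00 PM and a fix at 6:00 PM yield the 5:16 PM position at 16/60 of the way along the segment between them.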
Location-Aware Resource Management
Location sensing can provide the basis for novel techniques of resource management, potentially improving the user experience. For example, Satyanarayanan (2001) suggested using location awareness to guide a mobile user from a bandwidth-impoverished to a bandwidth-rich environment. This is an example of an emerging technique, “cyber foraging,” that temporarily extends the resources of a mobile computer by pointing to remote resources that are found opportunistically. For instance, suppose a user waiting at a busy airport gate needs to send several important documents before boarding the plane. Given the large number of users at this gate surfing the Web, the system knows that the amount of bandwidth available is insufficient to send all of the documents before the plane leaves. The system notifies the user (based on current flight schedules) that sufficient bandwidth is available at a gate only a few minutes away. The user walks to the designated gate area, sends the important documents, and returns to the original gate in time to board the flight. This example (Satyanarayanan, 2001) illustrates how cyber foraging can enhance the necessarily limited capabilities of portable or wearable computers by guiding users to locations where additional resources (power, bandwidth, etc.) are available.
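The gate-selection decision in this scenario is, at bottom, a feasibility test: can the documents be transmitted, plus the walk there and back, within the time remaining? A sketch with hypothetical inputs (gate names, measured bandwidths, and walk times are all invented):

```python
def pick_gate(gates, total_bytes, minutes_to_boarding):
    """Choose a gate from which queued documents can be sent in time.

    gates: list of (name, bandwidth_bps, walk_minutes) tuples, where
    walk_minutes is the one-way walk from the user's own gate.
    Returns the name of the closest feasible gate, or None.
    """
    feasible = []
    for name, bandwidth_bps, walk_minutes in gates:
        send_minutes = total_bytes * 8 / bandwidth_bps / 60
        # Walk there, transmit, and walk back -- all before boarding.
        if 2 * walk_minutes + send_minutes <= minutes_to_boarding:
            feasible.append((walk_minutes, name))
    return min(feasible)[1] if feasible else None
```

A real system would additionally need live bandwidth measurements per location and the current flight schedule, which is exactly the location-aware resource information the text argues for.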
Researchers at Rutgers University (Goodman et al., 1997; Frenkiel et al., 2000) have proposed the use of “infostations” to provide high-bandwidth connections for mobile devices. Frenkiel et al. envision a network of frequent, short-range infostations that would provide low-cost, low-power access to information services such as large files (e.g., books, videos), location-dependent information (e.g., maps), and remote information (e.g., for military personnel in the field). They further suggest that the design should allow users to choose among several delivery options: immediate delivery at higher cost and battery drain; delayed delivery at lower cost and battery drain; or delivery to a fixed location (such as a home computer) at zero cost and greater delay (Frenkiel et al., 2000).
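The delivery options Frenkiel et al. describe amount to a cost/deadline trade-off. The sketch below picks the cheapest option that meets the user’s delay tolerance; the option names, costs, drains, and delays are invented for illustration and do not come from the paper.

```python
# Illustrative options: (name, monetary cost, battery drain, delay in minutes)
OPTIONS = [
    ("immediate", 3.0, 0.20, 1),
    ("delayed", 1.0, 0.05, 60),
    ("to-home-pc", 0.0, 0.00, 240),
]

def choose_delivery(max_delay_minutes, options=OPTIONS):
    """Pick the lowest-cost delivery option that meets the delay tolerance."""
    feasible = [o for o in options if o[3] <= max_delay_minutes]
    # Break cost ties by preferring lower battery drain.
    return min(feasible, key=lambda o: (o[1], o[2]))[0] if feasible else None
```

A user who can wait an hour gets the cheaper delayed delivery; one who cannot wait at all gets no feasible option and must pay for immediate service elsewhere.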
Many research problems must be addressed before the use of surrogates (infostations or the compute- and data-staging servers used in cyber foraging) can be accomplished invisibly and seamlessly. Mechanisms are needed for discovering and selecting surrogates and negotiating their use. The computational, bandwidth, and power requirements of applications
must be characterized in platform-independent ways. Techniques must be developed to ensure and verify an adequate level of trust in a surrogate, and practical boundaries must be established for what constitutes useful levels of trust. In addition, the shared use of surrogates leads directly to questions of load balancing and scalability. For example, it is not clear if admission control, best effort, or some new approach is best for surrogate allocation, or what the implications of these alternatives are for scalability or for the provisioning of fixed infrastructures to avoid overloads during peak demand. Additional investigation is needed to establish reliable and cost-effective techniques for monitoring mobile resources, discovering resources as they come in and out of service, partitioning and off-loading computation, and staging data. Because energy is a particularly critical resource in mobile computing, the research investment should include location- and orientation-sensing techniques that adapt to battery state. The nature of the connections—they are brief, intermittent, uncertain, and unpredictable—requires new strategies and algorithms for caching and prefetching data. For example, when an intensive computation that accesses a large volume of data needs to be performed, the mobile computer can ship the computation to a surrogate. If a user is stationary, it might be possible to complete the session with one surrogate. However, if a user is passing quickly through an area (in a vehicle, for instance), the session may have to span multiple surrogates (with data cached at multiple locations). In addition, it may be desirable to have the surrogate stage data ahead of time in anticipation of the user’s arrival in a given location. The long-term research goal is to develop the design principles and implementation techniques needed for well-engineered mobile computing environments, such as those suggested by cyber foraging and infostations.
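One of the research questions above, when to partition and off-load computation, can be illustrated with a deliberately simple decision rule: ship the work to a surrogate only when transferring the input plus executing remotely beats local execution. The units and parameters here are assumptions; a real system would also model energy, trust, and the risk of losing the surrogate mid-session.

```python
def should_offload(cycles, input_mb, local_mips, remote_mips, bandwidth_mbps):
    """Decide whether to ship a computation to a surrogate.

    cycles: work in millions of instructions; local_mips/remote_mips:
    millions of instructions per second on each machine; input_mb: data
    that must be transferred; bandwidth_mbps: link speed in megabits/s.
    Offload when shipping the input plus remote execution is faster.
    """
    local_time = cycles / local_mips
    remote_time = (input_mb * 8) / bandwidth_mbps + cycles / remote_mips
    return remote_time < local_time
```

Even this crude rule captures the key trade-off: compute-heavy, data-light tasks favor the surrogate, while data-heavy, compute-light tasks favor local execution.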
Security and Privacy
Questions arise about whether location-aware computing technology will be able to ensure that information is available generally, equitably, and with sufficient attention to societal issues such as privacy7 and security. This section outlines key topics in security and interoperability.
7 The National Science Foundation-funded Center for Spatially Integrated Social Science and the University Consortium for Geographic Information Science held a specialist meeting on location-based services in December 2001 that explored privacy issues associated with location-based services data. More information is available at <http://csiss.ncgia.ucsb.edu/events/meetings/location-based/index.htm>.
Privacy Controls and Location Authentication
Some location technologies, such as cell-based location sensing, expose the location of the user to the sensing infrastructure. The situation is no different today, when cell-phone providers know where you are to within the resolution of a cell. There is already some concern about this loss of privacy, and the concern will grow in intensity as the use of location-aware computing grows. There is an inherent tension between privacy and the transparent use of location information: the more seamless and easy to use a location-aware application is, the fewer the cues that remind users that their locations are being monitored. Historical location information can be analyzed to obtain insights into a user’s typical movements (e.g., by the location prediction methods discussed in Box 2.1). At the same time, the authentication of locations is difficult, because it requires that both the identity of the user and his or her current location be established. Spoofing of user identity or of current location is hard to guard against, making it difficult to verify that a particular user was at a particular location at a particular time.
Credible solutions to these problems are required if independent location-service providers (LSPs) are to be commercially viable. The committee envisions LSPs playing the same role in location services that Internet service providers have played for network connectivity—that is, they will provide users with location information and location-based services on a subscription or fee-for-service basis. Their viability will depend on how well we can solve the problems of security and privacy.
A number of research topics follow from these observations. There is a need for system design techniques capable of providing end-to-end control of location information, including research on system layering to control the exposure of location information, as well as efficient auditing mechanisms for recording such exposure. Fine-grained access control mechanisms—permitting the precise release of location information to just the right parties under the right circumstances—are required. The use of location information to enforce security policies (e.g., a user’s laptop should not operate outside of the building) also should be explored. Research is also needed in protocols and mechanisms for authenticating and certifying the location of an individual at any given time. Finally, user interface techniques must be developed that remind users that their locations are being monitored and alert them when the trustworthiness of the entity performing that monitoring changes.
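As a concrete illustration of fine-grained release of location information, the sketch below checks an invented policy table and coarsens coordinates before release. The parties, hours, and rounding scheme are hypothetical; a real system would use proper geodetic blurring, requester authentication, and an audit log of every exposure.

```python
# Illustrative policy: requester -> (max precision in meters, allowed hours)
POLICY = {
    "spouse": (10, range(0, 24)),
    "employer": (500, range(8, 18)),
}

def release_location(requester, exact_location, hour):
    """Release a (possibly coarsened) location only to authorized parties.

    exact_location: (lat, lon) in decimal degrees.  Coarsening here simply
    rounds coordinates (roughly 10 m at 4 decimals); a deny returns None,
    and a real system would record the attempt for auditing.
    """
    rule = POLICY.get(requester)
    if rule is None or hour not in rule[1]:
        return None  # deny
    precision_m, _ = rule
    digits = 4 if precision_m <= 10 else 2
    lat, lon = exact_location
    return (round(lat, digits), round(lon, digits))
```

The employer sees only a city-block-scale position during work hours and nothing at night, while a fully trusted party sees near-exact coordinates at any hour.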
End-to-End Location Sensing
Computer system architectures often can be analyzed in terms of functional layers. Location sensing is end to end if the cooperation of intervening layers is not necessary to the proper operation of the mechanism. Typically, the layer of a system architecture that possesses location awareness is distinct from any layers that rely on that knowledge. For example, in a location-sensing system based on wireless signal strength, location awareness is created in the device-driver layer, but use of that information occurs at the application layer. This distinction can have practical importance if the layers also differ in terms of the business and/or product that provides the functionality. In that case—as opposed to a vertically integrated system, in which one business provides the functions at all layers—issues will arise if there are inadequate incentives for cooperation and information sharing. For example, location sensing may be an inherent component of a wireless Internet service—it is needed to support the wireless service—but the associated information might not be usable by a separately provided health-care application, such as the monitoring of patients after open-heart surgery. There may, for instance, be legal inhibitors to the LSP supplying location information to the health-care service provider, from protections on personally identifiable information to the desire to avoid liability for providing incorrect location information. Even if such inhibitors are not present or significant, arrangements would need to be made, which might or might not be deemed worthwhile by the LSP. This problem is not limited to commercial uses of location data; one government agency might provide location sensing for use by another. And even if the LSP is willing and able to provide location information, its pricing will affect application developers’ willingness and ability to use the information. In addition, absent appropriate standards setting, which will take time, interoperability problems could significantly hinder cooperative, multiparty applications.
Such circumstances may result in single-vendor, vertically integrated solutions dominating the market, and the high cost of such solutions may limit their use.
A more general and flexible future, akin to the manner in which the Internet evolved, may argue for end-to-end location-sensing techniques that allow each layer of a system to be self-reliant. In other words, a given layer needs to be able to discover its location by tunneling information from some other layer even when intervening layers do not cooperate. Those methods are not currently available, and their development will require significant research. Although technical innovations, and corresponding standard setting, may stimulate the market in new ways, complementary incentives for sharing location information—with privacy controls—should be explored if it is determined that broader generation of location information would be too expensive for the effort to be replicated by multiple parties. Here, too, research investment will be required to determine what kinds of controls are needed, how the information-receiving applications are authenticated, and so on, as well as research into the legal and economic dimensions of location sensing.
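To make the layering discussion concrete: in a signal-strength-based system, the device-driver layer might convert a received signal strength indication into a distance estimate using the standard log-distance path-loss model, while applications see only the result. The calibration constant and path-loss exponent below are typical assumed values, not measurements from any particular hardware.

```python
def distance_from_rssi(rssi_dbm, tx_power_dbm=-40, path_loss_exponent=2.0):
    """Estimate distance (meters) from received signal strength.

    Uses the log-distance path-loss model: tx_power_dbm is the RSSI
    expected at 1 m (a per-device calibration constant), and the
    path-loss exponent is ~2 in free space, higher indoors.  This is
    the kind of computation that lives in the device-driver layer,
    while applications consume only the resulting distance or position.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

An end-to-end design would let an application obtain such estimates even when the intervening network layers, possibly owned by a different business, do not expose them.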
BOX 2.2 Sensor Networks

Recent developments in wireless communication and sensor technology have enabled the development of sensor networks, collections of small sensing devices distributed spatially throughout an environment. Sensor networks already are being applied in areas as diverse as environmental monitoring, condition-based maintenance, surveillance, computer-augmented reality (“smart spaces”), and inventory tracking. Such networks differ from traditional sensor systems in their dependence on dense sensor deployment and physical co-location with their targets. Dense deployment implies the use of hundreds or thousands of sensor nodes within a small area, enabled by low-cost sensor devices. This allows redundant use of devices to ensure reliability. Physical co-location, which is enabled by the availability of inexpensive, short-range wireless communication, couples sensors tightly with their environment; they may be attached to packages being tracked or deployed a few meters apart to cover an intersection or field. Co-location simplifies signal processing problems and reduces communication costs.

Spatial location is central to sensor network operation. The purpose of these networks is often to answer spatial queries such as, “What is moving down the road and how quickly?” or, “How many animals are in the northwest field?” Sensor networks also make use of spatial information to facilitate self-organization and configuration. The deployment of the sensors requires localization data to determine the quality of coverage and to constrain communications to the particular physical area being monitored. Collaborative signal processing techniques, such as beam-forming and information-based approaches, are used to combine the results of multiple sensors, thereby providing a collective result that is stronger than any individual sensor’s result.
At an operational level, spatial information can be used to conserve energy through load balancing and to control the use of the communications network.
SOURCE: Adapted from a white paper, “Using Geospatial Information in Sensor Networks,” prepared for the committee’s workshop by John Heidemann and Nirupama Bulusu.
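The collective-estimate idea in Box 2.2 can be illustrated with inverse-variance weighting, one elementary form of collaborative fusion (the beam-forming and information-based techniques the white paper mentions are more sophisticated):

```python
def fuse_estimates(readings):
    """Combine redundant sensor readings by inverse-variance weighting.

    readings: list of (value, variance) pairs from co-located sensors.
    Densely deployed, redundant sensors let the network report a fused
    estimate whose variance is lower than any single sensor's, which is
    why dense deployment buys reliability.
    """
    weights = [1.0 / var for _, var in readings]
    value = sum(w * v for (v, _), w in zip(readings, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance
```

Two sensors with unit variance fuse to an estimate with half the variance of either alone, and noisier sensors are automatically down-weighted rather than discarded.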
Applications
The availability of location-sensing information has begun to stimulate innovative applications, such as sensor networks (see Box 2.2) and many of the examples already noted. These applications are still at the stage of research prototypes. A more solid foundation is needed to systematize and firmly ground them, and long-term research investments in that foundation will engender other new applications.
Real-World Point-and-Click
The powerful point-and-click metaphor is now widely embraced because it is intuitive and easy to use. Today, users can point and click at on-screen icons to activate a software program or at an entertainment center to control media devices. The committee predicts that one of the radical changes that will be enabled by location-aware computing is the extension of this metaphor to physical objects. Imagine, for example, being able to point a wand at an object (e.g., a building, tree, or bus stop) and obtain information about that object (the building floor plan, the botanical and common names of the tree, or the time of arrival of the next bus).8 Real-world point-and-click would effectively blur the distinction between physical and virtual worlds.
Many research problems must be solved before this capability becomes reality. The “stab line” indicated by the wand has to be very precise, which means that the orientation as well as the location of the wand must be determined. Identifying the precise object of interest will, in most cases, require a fusion of information from several sources, including a comprehensive 3D model of the locale, dynamic information from sensors if the object pointed to is mobile, and information derived from analysis of the image captured by the camera at the wand’s tip. The human interface must not only help the user specify what type of information is desired, but it also must help him or her to disambiguate the target of the request. For example, if the wand is pointed at a building, the user must be guided to clarify which portion of the building is of interest (general information about the building, the floor plan for the second floor, the building address, etc.). If the wand is pointed at a hospital, the user might wish to see a listing of laboratory services, the names of available physicians, or the room number where a relative can be visited.9
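The stab-line resolution described above is, at its core, a ray-picking problem. The sketch below intersects the wand’s ray with axis-aligned bounding boxes and returns the nearest hit; a deployed system would intersect a full 3D model of the locale and fuse sensor and image data, as the text notes, and the object names here are invented.

```python
def pick_object(origin, direction, objects):
    """Return the nearest object whose bounding box the wand's ray hits.

    origin: (x, y, z) of the wand; direction: vector along the stab line,
    derived from the wand's location and orientation sensors; objects:
    dict mapping name -> (min_corner, max_corner) axis-aligned boxes.
    Uses the standard slab method for ray-box intersection.
    """
    best = (float("inf"), None)
    for name, (lo, hi) in objects.items():
        tmin, tmax = 0.0, float("inf")
        for o, d, l, h in zip(origin, direction, lo, hi):
            if abs(d) < 1e-12:
                if not (l <= o <= h):
                    break  # ray parallel to this slab and outside it
            else:
                t1, t2 = (l - o) / d, (h - o) / d
                tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
                if tmin > tmax:
                    break  # slabs do not overlap: no intersection
        else:
            if tmin < best[0]:
                best = (tmin, name)  # nearest hit so far
    return best[1]
```

The precision demands are evident even in this toy: a small error in the direction vector, i.e., in sensed orientation, selects the wrong box entirely once targets are distant or closely spaced.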
This vision has enormous potential that could justify substantial research investment. Real-world point-and-click can be viewed as a driving application for location-aware computing in general. In addition to technologies supporting energy-efficient, precision location and orientation
sensing, a host of supporting capabilities will be required. Examples of key research topics include the advances in mobile computing capabilities already discussed, plus image recognition, creation of world and object models, semantic filtering for disambiguation, and context sensitivity.
National Test Bed
A complex relationship exists between commercial support for location-aware computing and the deployment of the research needed to realize it. In the very long term, it is clear that location-aware computing should be a commercial activity. However, to reach that point, a considerable amount of research, deployment, and experience will be necessary. It is not sufficient for the research community to develop the concepts, algorithms, and architectures and then leave it to industry to pick up and carry on.10 Rather, the research community—both computer scientists and scientific users of information technology—has the opportunity to lead by example. There is value in the research community being engaged in the initial phases of actual use of location-aware computing, so that it can explore new application paradigms enabled by this technology and conduct the high-risk, high-payoff experiments in the use of the technology.11
A key obstacle to research progress in location-aware computing is the lack of adequate large-scale experimental infrastructure. Such infrastructure is essential for empirical validation of concepts, techniques, and architectures for location-aware computing. The government can act as a catalyst by funding the creation and maintenance of a national test bed for experimental research in location-aware computing.12
Broad access to a national test bed could stimulate research in the development of a rich, open, location-aware computing infrastructure (e.g., standard protocols, APIs, platform-independent capability descriptions, scalability, reconciliation of conflicting information from different network nodes, adaptive resource management, static-mobile load balancing, and mediation of requests). Where feasible and cost-effective, such a test bed should leverage the physical infrastructure of the Internet. One strategy would be to use the Internet for low-level data transport but to overlay experimental location sensing and routing functionality on top of it. Part of this effort can include the creation and dissemination of benchmarks and testing methodologies for location-aware systems and applications. These artifacts can become part of the discourse of the research community and help forge a common basis for evaluation of ideas in the field. The test bed also can serve as the focal point for standardization efforts, through collaboration between researchers and industry groups. Because it can encourage location-aware applications, a large-scale test bed could speed up the commercialization of research results.
REFERENCES
Brown, Barry. 2001. “Warning! Your Trip May Be Tracked.” MSNBC, July. Available online at <http://www.msnbc.com/news/596601.asp?cp1=1>.
Computer Science and Telecommunications Board (CSTB). 1997. The Evolution of Untethered Communications. Washington, D.C.: National Academy Press.
Computer Science and Telecommunications Board (CSTB). 2002. Information Technology, Research, Innovation, and E-Government. Washington, D.C.: National Academy Press.
Frenkiel, Richard, B.R. Badrinath, Joan Borras, and Roy Yates. 2000. “The Infostations Challenge: Balancing Cost and Ubiquity in Delivering Wireless Data.” IEEE Personal Communications, April, pp. 66-71.
Golledge, Reginald G., Roberta L. Klatzky, Jack M. Loomis, Jon Speigle, and Jerome Tietz. 1998. “A Geographical Information System for a GPS-Based Personal Guidance System.” International Journal of Geographical Information Science, 12(7):727-750.
Goodman, D.J., J. Borras, N.B. Mandayam, and R.D. Yates. 1997. “Infostations: A New System Model for Data and Messaging Services.” In Proceedings of the IEEE Vehicular Technology Conference, 97(2):969-973.
Haartsen, J.C. 2000. “The Bluetooth Radio System.” IEEE Personal Communications, 7(1), February, pp. 28-36.
Hightower, Jeffrey, and Gaetano Borriello. 2001. “Location Systems for Ubiquitous Computing.” IEEE Computer, 34(8):57-66.
Institute of Electrical and Electronics Engineers, Inc. (IEEE). 1997. “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications.” IEEE Std 802.11-1997.
National Research Council (NRC). 1997. Energy-Efficient Technologies for the Dismounted Soldier. Washington, D.C.: National Academy Press.
Noble, B., M. Satyanarayanan, D. Narayanan, E. Tilton, J. Flinn, and K. Walker. 1997. “Agile Application-Aware Adaptation for Mobile Computing.” In Proceedings of the 16th ACM Symposium on Operating Systems Principles, October.
Peterson, Shane. 2001. “A Standard for Location?” Mbusiness Daily, January. Available online at <http://www.mbizcentral.com/m-business_story/standard-4-location>.
Priyantha, N.B., A.K.L. Miu, H. Balakrishnan, and S. Teller. 2001. “The Cricket Compass for Context-Aware Mobile Applications.” In Proceedings of the Seventh Annual International Conference on Mobile Computing and Networking, July.
Rappaport, Theodore S. 1996. Wireless Communications: Principles and Practice. Upper Saddle River, N.J.: Prentice Hall.
Satyanarayanan, M. 1996. “Fundamental Challenges of Mobile Computing.” In Proceedings of the 15th ACM Symposium on Principles of Distributed Computing, February.
Satyanarayanan, M. 2001. “Pervasive Computing: Vision and Challenges.” IEEE Personal Communications, 8(4), August.
Williams, S. 2000. “IrDA: Past, Present and Future.” IEEE Personal Communications, 7(1), February, pp. 11-19.