2  Technology: Research Problems Motivated by Application Needs

INTRODUCTION

Chapter 1 identifies opportunities to meet significant needs of crisis management and other national-scale application areas through advances in computing and communications technology. This chapter examines the fundamental research and development challenges those opportunities imply. Few of these challenges are entirely new; researchers and technologists have been working for years to advance computing and communications theory and technology, investigating problems ranging from maximizing the power of computation and communications capabilities to designing information applications that use those capabilities. What this discussion offers is a contemporary calibration, with implications for possible focusing of ongoing or future efforts, based on the inputs of technologists at the three workshops as well as a diverse sampling of other resources.

This chapter surveys the range of research directions motivated by opportunities for more effective use of technology in crisis management and other domains, following the same framework of technology areas—networking, computation, information management, and user-centered systems—developed in Chapter 1. Some of the directions address relatively targeted approaches toward making immediate progress in overcoming barriers to effective use of computing and communications, such as technologies to display information more naturally to people or to translate information more easily from one format to another. Others aim at gaining an understanding of coherent architectures and services that, when broadly deployed, could lead eventually to eliminating these barriers
in a less ad hoc, more comprehensive fashion. Research on modeling the behavior of software systems composed from heterogeneous parts, for example, fits this category.

NETWORKING: THE NEED FOR ADAPTIVITY

Because of inherently unpredictable conditions, the communications support needed in a crisis must be adaptable; the steering committee characterizes the required capability as "adaptivity." Adaptivity involves making the best use of the available network capacity (including setting priorities for traffic according to needs and blocking out lower-priority traffic), as well as adding capacity by deploying and integrating new facilities. The network also must support different kinds of services with fundamentally different technical demands, and to do so efficiently requires adaptivity. This section addresses specific areas for research in adaptive networks and describes the implications of a requirement for adaptivity; the importance of adaptivity at levels of information infrastructure above the network is discussed in other sections of this chapter. Box 2.1 provides a sampling of networking research priorities discussed in the workshops.

Although problems of networking that arise in national-scale applications are not entirely new, they require rethinking and redefinition because the boundaries of the problem domains are changing. Three issues that influence the scope of networking research problems are (1) scale, (2) interoperability, and (3) usability.

Scale. High-performance networking is often thought of in terms of speed and bandwidth. Speed is limited, of course, by the speed of light in the transmission medium (copper, fiber, or air), and individual data bits cannot move over networks any faster. However, the overall speed of networks can be increased by raising the bandwidth (making the pipes wider and/or using more pipes in parallel) and by reducing delays at bottlenecks in the network.
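The relationship between bandwidth and delay can be made concrete with a back-of-the-envelope model. This is a sketch only; the function name, link speeds, and delay figures below are illustrative assumptions, not figures from this report.

```python
# Minimal model of end-to-end transfer time: a fixed propagation delay
# (bounded below by the speed of light in the medium) plus serialization
# time, which is the part that "wider pipes" actually shrink.

def transfer_time(size_bits: float, bandwidth_bps: float, delay_s: float) -> float:
    """Seconds to move `size_bits` over one link, ignoring protocol overhead."""
    return delay_s + size_bits / bandwidth_bps

# A 10-megabit file over a 45 Mb/s link with 30 ms one-way delay:
t_45mbps = transfer_time(10e6, 45e6, 0.030)
# The same file over a tenfold-faster link: serialization time shrinks,
# but the 30 ms propagation delay (a bottleneck that added bandwidth
# cannot remove) remains.
t_450mbps = transfer_time(10e6, 450e6, 0.030)
```

Under this toy model the faster link cuts total time from roughly a quarter second to about a twentieth of a second, but no increase in bandwidth pushes it below the 30-millisecond delay floor.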
High-speed networks (which include both high-bandwidth conduits or "pipes" and high-speed switching and routing) allow larger streams of data to traverse the network from point A to point B in a given amount of time. This makes possible the transmission of longer individual messages (such as data files), wider signals (such as full-motion video), and greater numbers of messages (such as data integrated from large numbers of distributed sensors) over a given path at the same time. Research challenges related to the operation of high-speed networks include high-speed switching, buffering, error control, and similar needs; these were investigated with significant progress in the Defense Advanced Research Projects Agency's (DARPA's) gigabit network testbeds.

Speed and bandwidth are not the only performance challenges related to scale; national-scale applications must also scale in size. The number of information sources involved in applications may meet or even far exceed the size of the
BOX 2.1
Selected Networking Research Priorities Suggested by Workshop Participants

Daniel Duchamp, Columbia University:
- High-bandwidth and/or frequent upstream communication
- Bandwidth allocation that permits sudden, very large reallocations
- Self-organizing routing structures
- End point identification by attributes rather than just by name
- Ease of use
- Elimination of the concept of administrator

Rajeev Jain, University of California, Los Angeles:
- Portable high-bandwidth radio modems that interface with portable computers and can be powered off the computer battery—these should be adaptable and interoperable with different frequency bands, channel conditions, and capacity requirements.
- Peer-to-peer distributed network protocols for setting up networks in the absence of wireline backbones.
- Bandwidth-efficient transmissions that allow increased capacity—for example, in many crises, such as the Northridge earthquake or Hurricane Andrew, even people in the same neighborhood were cut off owing to the breakdown of telephone service. A portable, bandwidth-efficient, battery-operated peer-to-peer network technology would allow information systems to be set up to provide important support to communities in a crisis.

nation's or world's population. In theory, every information producer may be an information consumer and vice versa. Consequently, there is a need not only to reduce the time required to move quantities of bits but, even at the limits of technology in increasing that speed, to transport more bits to more places. The set of people, workstations, databases, and computation platforms on networks is growing rapidly.
Sensors are a potential source of even faster growth in the number of end points; as crisis management applications illustrate, networks may have to route bits to and from environmental sensors, seismometers, structural sensors on buildings and bridges, security cameras in stores and automated teller machines, and perhaps relief workers wearing cameras and other sensors on their clothes, rendering them what Vinton Cerf, of MCI Telecommunications Corporation, called "mobile multimodal sensor nets." Medical sensors distributed at people's homes, doctors' offices, crisis aid stations, and other locations may enable health care delivery in a new, more physically distributed fashion, but only if networks can manage the increased number of end points. In
response, the communications infrastructure must be prepared to transport orders of magnitude more data and information and to handle orders of magnitude more separate addresses. A particular case, such as a response to a single disaster, may not involve linking simultaneously to millions or billions of end points, but because the specific points that will be linked are not known in advance, the networking infrastructure must be able to accommodate the full number of names and addresses. The numbering plan of the public switched telecommunications network provides for this capability for point-to-point (voice circuit) calling under normal circumstances. In the broader context of all data, voice, and video communications, the Internet's distributed Domain Name Servers manage the numerical addresses that identify end points and the names associated with those addresses. The explosive growth in Internet usage has motivated a change in the standard, to Internet Protocol version 6, to accommodate more addresses.1

Interoperability. The need to communicate successfully across boundaries in heterogeneous, long-lived, and evolving environments cannot be ignored. In crisis management, voice communications are necessary but not sufficient; response managers and field workers must be able to mobilize data inputs and more fully developed information (knowledge) from an enormous breadth of existing sources—some of them years old—in many forms. Telemedicine similarly requires a mix of communications modes, although not always over as unpredictable an infrastructure as crises present. Interoperation is more than merely passing waveforms and bits successfully; interoperation among the supporting services for communications, such as security and access priority, is highly complex when heterogeneous networks interconnect.

Usability. The information and communications infrastructure is there to provide support to people, not just computers.
In national-scale applications, nonexperts are increasingly important users of communications, making usability a crucial issue. What is needed are ways for people to use technology more effectively to communicate, not only with computers and other information sources and tools, but also with other people. Collaboration among people includes many modes of telecommunication: speech, video, passing data files to one another, sharing a consensus view of a document or a map. In crises, for example, the ability to manage the flow of communications among the people and machines involved is central to the enterprise and cannot be reserved solely to highly specialized technicians. Users of networks must be able to configure their communications to fit their organizational demands, not the reverse. This requirement implies far more than easy-to-use human-computer interfaces for network management software; the network itself must be able to adapt actively to its users and whatever information or other resources they need to draw upon.

For networks to be adaptive, they must be able to function during, or recover quickly from, unusual and challenging circumstances. The unpredictable damage
and disruption caused by a crisis constitute challenging circumstances for which no specific preparations can be made. Unpredicted changes in a financial or medical network, such as movement of customers or a changing business alliance among insurers and hospitals that exchange clinical records, may also require an adaptive response. Mobility—of users, devices, information, and other objects in a network—is a particular kind of challenge that is relevant not only to crisis response, but also to electronic commerce with portable devices, telemedicine, and wireless inventory systems in manufacturing, among others. Whenever the nodes, links, inputs, and outputs on a network move, that network must be able to adapt to change.

Randy Katz, of the University of California, Berkeley, has illustrated the demands for adaptivity of wireless (or, more generally, tetherless) networks for mobile computing in the face of highly diverse requirements with the example of a multimedia terminal for a firefighter (Katz, 1995). The device might be used in many ways: to access maps and plan routes to a fire; examine building blueprints for tactical planning; access databases locating local fire hydrants and nearby fire hazards such as chemical plants; communicate with and display the locations of other fire and rescue teams; and provide a location signal to a central headquarters so that the firefighting team can be tracked for broader operational planning. Not all of the data can be stored on the device (especially because some data may have to be updated during the operation), so real-time access to centrally located data is necessary. The applications require different data rates and different trade-offs between low delay (latency) and freedom from transmission errors. Voice communications, for example, must be real time but can tolerate noisy signals; users can wait a few seconds to receive a map or blueprint, but errors may make it unusable.
Some applications, such as voice conversation, require symmetrical bandwidth; others, such as data access and location signaling, are primarily one way (the former toward the mobile device, the latter away from it).

Research issues in network adaptivity fall into a number of categories, discussed in this section: self-organizing networks, network management, security, resource discovery, and virtual subnetworks. For networks to be adaptive, they must be easily reconfigurable, either to meet different requirements from those for which they were originally deployed or to work around partial failures. In many cases of partial failure, self-configuring networks might discover, analyze, work around, and perhaps report failures, thereby achieving some degree of fault tolerance in the network. Over short periods, such as the hours after a disaster strikes, an adaptive network should restore services in a way that best utilizes the surviving infrastructure, enables additional resources to be integrated as they become available, and gives priority to the most pressing emergency needs.

Daniel Duchamp, of Columbia University, observed, "Especially if the crisis is some form of disaster, there may be little or no infrastructure (e.g., electrical and telephone lines, cellular base stations) for two-way communication in the vicinity of an action site. That which exists may be overloaded. There are two
approaches to such a problem: add capacity and/or shed load. Adding capacity is desirable but may be difficult; therefore, a mechanism for load shedding is desirable. Some notion of priority is typically a prerequisite for load shedding."

Networks can be adaptive not only to sharp discontinuities such as crises, but also to rapid, continuous evolution over a longer time scale, one appropriate to the pattern of growth of new services and industries in electronic commerce or digital libraries. The Internet's ability to adapt to and integrate new technologies, such as frame relay, asynchronous transfer mode (ATM), and new wireless data services, among many others, is one example.

Self-Organization

Self-organizing networks facilitate adaptation when the physical configuration or the requirements for network resources have changed. Daniel Duchamp cast the problem in terms of an alternative to static operation:

Most industry efforts are targeted to the commercial market and so are focused on providing a communications infrastructure whose underlying organization is static (e.g., certain sites are routers and certain sites are hosts, always). Statically organized systems ease the tasks of providing security and handling accounting/billing. Most communication systems are also pre-optimized to accommodate certain traffic patterns; the patterns are in large part predictable as a function of intra- and inter-business organization. It may be difficult or impossible to establish and maintain a static routing and/or connection establishment structure, because (1) hosts may move relative to each other, and (2) hosts, communication links, or the propagation environment may be inherently unstable. Therefore, a dynamically "self-organizing" routing and/or connection establishment structure is desirable.
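A dynamically self-organizing routing structure of the sort Duchamp describes can be illustrated with a toy model. The function and node names below are hypothetical; a real protocol must also contend with radio interference, mobility during discovery, and the overhead of flooding.

```python
from collections import deque

# Toy self-organizing routing: no site is designated a router in advance.
# Any reachable peer forwards traffic, and routes are discovered on demand
# by flooding over whatever radio links exist at this moment.

def discover_route(links: dict, source: str, dest: str):
    """Breadth-first flood over the current link table; returns the hop
    list for a shortest route, or None if no peer chain reaches `dest`."""
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dest:
            return path
        for neighbor in links.get(path[-1], ()):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Peers currently within radio range of one another:
links = {"medic-1": ["truck-A"],
         "truck-A": ["medic-1", "cmd-post"],
         "cmd-post": ["truck-A"]}
route = discover_route(links, "medic-1", "cmd-post")
```

Because the link table is rebuilt as peers move, the "routing structure" is simply whatever connectivity exists at the moment of discovery, which is the essence of self-organization.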
Crisis management provides a compelling case for the need for networks to be self-organizing in order to create rapidly an infrastructure that supports communication and information sharing among workers and managers operating in the field. Police, fire, citizens band, and amateur radio communications are commonly available in crises and could be used to set up a broadcast network, but they provide little support for managing peer-to-peer communications and making efficient use of the available spectrum. Portable, bandwidth-efficient peer-to-peer network technologies would allow information systems to be set up to support communications for relief workers. The issues of hardware development, peer-to-peer networking, and multimedia support are not limited to crisis management; they may be equally important to such fields as medicine and manufacturing (e.g., in networking of people, computers, and machine tools within a factory). Thus, research and development on self-organizing networks may be useful in the latter fields as well.

Rajeev Jain, of the University of California, Los Angeles, suggested two main deficiencies in terms of communications or networking technologies in a
situation where relief officials arrive carrying laptop computers: (1) portable computing technology is not as well integrated with wireless communications technology as it should be, and (2) wireless communications systems still often rely on a wireline backbone for networking.2 These factors imply that portable computers cannot currently be used to set up a peer-to-peer network if the backbone fails; radio modem technology has not yet advanced to a point where it can provide an alternative.3 In mobile situations, people using portable computers need access to a wireline infrastructure to set up data links with another computer, even if the two machines are in close proximity. In addition, portable cellular phones cannot communicate with each other if the infrastructure breaks down. Jain concluded that both of these problems must be solved by developing technologies that better integrate portable computers with radio modems and allow peer-to-peer networks to be set up without wireline backbones, using bandwidth-efficient transmission technologies.

Peer-to-peer networking techniques involve network configuration, multiple access protocols, and bandwidth management protocols. Better protocols need to be developed in conjunction with an understanding of the wireless communications technology so that bandwidth is utilized efficiently and the overhead of self-organization does not reduce the usable bandwidth drastically (the current situation in packet radio networks). Bandwidth is at a premium because of the large volume of information required in a crisis and because, although data and voice networks can be deployed using portable wireless technology, higher and/or more flexibly usable bandwidths are needed to support video communication. For example, images can convey vital information much more quickly than words, which can be important in crises or remote telemedicine.
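A rough calculation shows why links in the tens-of-kilobits-per-second range constrain image and video traffic. The payload sizes and link rate below are our own illustrative assumptions, not measurements from the report.

```python
# Time to push a payload through a narrow wireless link.

def seconds_to_send(size_bytes: float, link_kbps: float) -> float:
    """Serialization time only; ignores errors, retransmission, and overhead."""
    return (size_bytes * 8) / (link_kbps * 1000)

LINK_KBPS = 28.8  # a portable radio modem in the tens-of-kilobits range

# A compressed still image of perhaps 100 kilobytes is slow but feasible:
still_image_s = seconds_to_send(100_000, LINK_KBPS)
# One second of even modestly compressed video (say, 1.5 megabytes) is not:
video_second_s = seconds_to_send(1_500_000, LINK_KBPS)
```

Under these assumptions the still image takes under half a minute, while each second of video would take several minutes to transmit, which is why such links can carry still pictures but not full-motion video.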
If paramedics need to communicate a diagnostic image of a patient (such as an electrocardiogram or x-ray) to a physician at a remote site and receive medical instructions, the amount of data that must be sent exceeds the capabilities of most wireless data communications technologies for portable computers. Technologies are now emerging that support data transmission rates in the tens of kilobits per second, which is sufficient for still pictures but not for full-motion video of more than minimal quality. A somewhat higher bandwidth capability could support a choice between moderate-quality full-motion video and high-quality images at a relatively low image or frame rate (resulting in jerky apparent motion).

Another example relates to the usefulness of broadcasting certain kinds of data, such as full-motion video images of disaster conditions sent from a helicopter to workers in the field; traffic helicopters of local television stations often serve this function. However, if terrestrial broadcast capabilities are disabled, it could be valuable to use a deployable peer-to-peer network capability to disseminate such pictures to many recipients, potentially by using multicast technology.

The statement of James Beauchamp, of the U.S. Commander in Chief, Pacific Command, quoted in Chapter 1, underscored the low probability that all individuals or organizations involved in a crisis response will have interoperable
radios (voice or data), especially in an international operation or one in which groups are brought together that have not trained or planned together before. Self-organizing networks that allowed smooth interoperation would be very useful in civilian and military crisis management and thus could have a high payoff for research. The lack of such technologies may be due partly to the absence of commercial applications requiring rapid configuration of wireless communications among many diverse technologies.

One purpose of the Department of Defense's (DOD's) Joint Warrior Interoperability Demonstrations (JWIDs; discussed in Chapter 1) is to test new technologies for bridging gaps in interoperability of communications equipment. The SpeakEasy technology developed at Rome Laboratory, for example, is scheduled to be tested in an operational exercise in the summer of 1996 during JWID '96.4 SpeakEasy is an effort sponsored by DARPA and the National Security Agency to produce a radio that can emulate a multitude of existing commercial and military radios by implementing previously hardware-based waveform-generation technologies in software. Such a device should be able to act as if it were a high-frequency (HF) long-range radio, a very high frequency (VHF) air-to-ground radio, or a civilian police radio. Managing a peer-to-peer network of radios that use different protocols, some of which can emulate more than one protocol, is a complex problem for network research that could yield valuable results in the relatively near term.

Network Management

Network management helps deliver communications capacity to whoever may need it, when it is needed. This may range from more effective sharing of network resources to priority overrides (blocking all other users) as needed.
Network management schemes must support making decisions and setting priorities; it is possible that not all needs will be met if there simply are not enough resources, but allocations must be made on some basis of priority and need. Experimentation is necessary to understand better the architectural requirements with respect to such aspects as reliability, availability, security, throughput, connectivity, and configurability.

A network manager responding to a crisis must determine the state of the communications infrastructure. This means identifying what is working, what is not, and what is needed and can be provided, taking into account characteristics of the network that can and should be maintained. For example, the existing infrastructure may provide some level of security. Then it must be determined whether it is both feasible and reasonable to continue to provide that level of security. Fault tolerance and priorities for activities are other characteristics of the network that must similarly be resolved.

In addition to network management tools for assessing an existing situation, tools are needed to incorporate new requirements into the existing structure. For
example, there may be great variability in the direction of data flow into and out of an area in which a crisis has occurred—say, between command posts and field units. During some phases, remote units may be used for data collection to be transmitted to centralized or command facilities, which in turn will need only lower communication bandwidth to the mobile units.

Adaptive network management can help increase the capability of the network elements, for example, by enabling the communications and computation to run efficiently with respect to power consumption. Randy Katz has observed that wireless communication removes only one of the tethers on mobile computing; the other tether is electrical power (Katz, 1995). Advances in lightweight, long-lived battery technology and in hardware technologies, such as low-power circuits, displays, and storage devices, would improve the performance of portable computers in a mobile setting.

A possibility related directly to network management is the development of schemes that adapt to specific kinds of communications needs and incorporate broadcast and asymmetric communications to reduce the number and length of power-consuming transmissions by portable devices. For example, Katz observes that if a mobile device's request for a particular piece of information need not be satisfied immediately, the request can be transmitted at low power and low bandwidth. The response can be combined with the responses to other mobile devices' requests, which are broadcast periodically to all of the units together at high power and bandwidth from the base stations. If a particular piece of information, such as weather data, is requested repeatedly by many users, it can be rebroadcast frequently to eliminate the need for remote units to transmit requests.

Priority policy is a critical issue in many applications; the need for rapid deployment and change in crisis management illustrates the issue especially clearly.
Priority policy is the set of procedures and management principles implemented in a network to allocate resources (e.g., access to scarce communications bandwidth) according to the priority of various demands for those resources. Priority policy may be a function of the situation, the role of each participant, participants' locations, the content being transmitted, and many other factors. The dynamic nature of some crises may be reflected in the need for dynamic reassignment of such priorities. The problem is that one may have to change the determination of which applications (such as life-critical medical sensor data streams) or users (such as search and rescue workers) have priority in using the communications facilities. Borrowing resources in a crisis may require reconfiguring communications facilities designed for another use, such as local police radio.

A collection of priority management issues must be addressed: Who has the authority to make a determination about priorities? How are priorities determined? How are priorities configured? (Configuration needs to be secure, but also user friendly, because the people performing it may not be network or communications experts.) How are such priorities provided by the network and related resources? How will the network perform under the priority conditions assigned?

The last is a particularly difficult problem for network management. Michael Zyda, of the Naval Postgraduate School, identified predictive modeling of network latency as a difficult research challenge for distributed virtual environments, for which realistic simulation experiences set relatively strict limits on the latency that can be tolerated, implying a need to give priority to those data streams.

One suggestion arising in the workshops was a priority server within a client-server architecture to centralize and manage evolving priorities. This approach might allow for the development of a multilevel availability policy analogous to a multilevel security policy. A dynamically configurable mechanism for allocating scarce bandwidth on a priority basis could enable creation of the "emergency lane" over the communications infrastructure that crisis managers at the workshops identified as a high-priority need. If such mechanisms were available, they could be of great use in managing priority allocation in other domains such as medicine, manufacturing, and banking. In situations that are not crises, however, one might be able to plan ahead for changes in priority, and network and communications expertise is likely to be more readily available.

Victor Frost, of the University of Kansas, discussed the challenges of meeting diverse priority requirements within a network that integrates voice with other services:

Some current networks use multilevel precedence (MLP) to ensure that important users have priority access to communications services.
The general idea for MLP-like capabilities is that during normal operations the network satisfies the performance requirements of all users, but when the network is stressed, higher-priority users get preferential treatment. For voice networks, MLP decisions are straightforward: accept, deny, or cut off connections. However, as crisis management starts to use integrated services (i.e., voice, data, video, and multimedia), MLP decisions become more complex. For example, in today's systems an option is to drop low-precedence calls. In a multimedia network, not all calls are created equal. For example, dropping a low-precedence voice call would not necessarily allow for the connection of a high-precedence data call. MLP-like services should be available in future integrated networks. Open issues include initially allocating and then reallocating network resources in response to rapidly changing conditions in an MLP context. In addition, the infrastructure must be capable of transmitting MLP-like control information (signaling) that can be processed along with other network signaling messages. There is a need to develop MLP-like services that match the characteristics of integrated networks.
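Frost's point, that dropping one low-precedence call does not necessarily free enough capacity for a high-precedence call of a different type, can be sketched as a toy admission controller. The class name, capacities, and precedence levels below are hypothetical, not from any deployed MLP system.

```python
class MLPLink:
    """Toy multilevel-precedence admission control for an integrated link."""

    def __init__(self, capacity_kbps: int):
        self.capacity = capacity_kbps
        self.calls = []  # (precedence, kbps, label) tuples for active calls

    def admit(self, precedence: int, kbps: int, label: str) -> bool:
        """Admit the call, preempting strictly lower-precedence calls
        (lowest first) until it fits; return False if it never fits."""
        used = sum(c[1] for c in self.calls)
        if used + kbps <= self.capacity:
            self.calls.append((precedence, kbps, label))
            return True
        victims, freed = [], 0
        for call in sorted(self.calls):      # lowest precedence considered first
            if call[0] >= precedence:
                break                        # may preempt only lower levels
            victims.append(call)
            freed += call[1]
            if used - freed + kbps <= self.capacity:
                for victim in victims:
                    self.calls.remove(victim)
                self.calls.append((precedence, kbps, label))
                return True
        return False

link = MLPLink(capacity_kbps=128)
link.admit(1, 64, "routine voice A")
link.admit(1, 64, "routine voice B")
# A 100 kb/s high-precedence data call: dropping one 64 kb/s voice call
# would not free enough capacity, so the controller must preempt both.
ok = link.admit(5, 100, "priority data")
```

In a voice-only network the decision reduces to accept, deny, or cut off one call; the integrated-services version must reason about heterogeneous bandwidths, which is the added complexity Frost describes.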
An ability to configure priorities, however, will require a much better understanding of what users actually need. Victor Frost also observed:

Unfortunately, defining application-level performance objectives may be elusive. For example, users would always want to download a map or image instantaneously, but would they accept a [slower] response? A 10-minute response time would clearly be unacceptable for users directly connected to a high-speed network; but is this still true for users connected via performance-disadvantaged wireless links? . . . Performance-related deficiencies of currently available computing and communications capabilities are difficult to define without user-level performance specifications.

Security

Security is essential to national-scale applications such as health care, manufacturing, and electronic commerce. It also is important to crisis management, particularly in situations where an active adversary is involved or sensitive information must be communicated. Many traditional ideas of network security must be reconsidered for these applications in light of the greater scale and diversity of the infrastructure and the increased role of nonexperts.

To begin with, the nature of security policies may evolve. Longer-term research on new models of composability of policies will be needed as people begin to communicate more frequently with other people whom they do not know and may not fully trust. On a more short-term basis, new security models are needed to handle the new degree of mobility of users and possibly of organizations. The usability or user acceptability of security mechanisms will assume new importance, especially for mechanisms that inconvenience legitimate use too severely. New perspectives may be required on setting the boundaries of security policies on bases other than physical location.
Composability of Security Policies

As organizations and individuals form and re-form themselves into new and different groupings, their security policies must also be adapted to the changes. Three reorganization models—partitioning, subsumption, and federation—may be used, and each may engender changes in security policies. The following are simplistic descriptions, but they capture the general nature of the changes that may occur.

Partitioning involves a divergence of activity where unanimity or cooperation previously existed. In terms of security, partitioning does not appear to introduce a new paradigm or new problems. In contrast, subsumption and federation both involve some form of merging or aligning of activities and policies. Subsumption implies that one entity plays a primary role, while at least one other assumes a secondary role. Federation, on the other hand, implies an equal partnering or relationship. Both subsumption and federation may require that
OCR for page 88
--> in an organized manner. With improvements, for example, in schema description techniques, this could make the information integration problem more approachable as well. Information location also relates to the distributed computing issues raised above, since one approach involves dispatching not just passive queries to information sources, but active information "agents" that monitor and interact with information stores on an ongoing basis. Information agents may also deploy other information agents, increasing the challenges (both to the initial dispatcher of the agents and to the various willing hosts) of monitoring and managing large numbers of deployed agents. Meta-Data and Types Information is becoming more complex, is interpreted to a greater extent, and supports a much wider range of issues. Evidence of the increase in complexity is found in (1) the growing demand for enriched data models, such as enhancements to the relational model for objects and types; (2) the adoption of various schemes for network-based sharing and integration of objects, such as CORBA; (3) the development of databases that more fully interpret objects, such as deductive databases; (4) the rapid growth in commercial standards and repository technology for structured and multimedia objects; and (5) the integration of small software components, such as applets, into structured documents. One important approach to managing this increased complexity is the use of explicit meta-data and type information. William Arms, of the Corporation for National Research Initiatives, observed, "Very simple, basic information about information is, first of all, a wonderfully important building block and [second,] . . . a much more difficult question than anybody really likes to admit." Multimedia databases, for example, typically maintain separate stores for the encoded multimedia material and the supporting meta-data. 
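The separation of encoded content from its meta-data described above might look like the following sketch; the identifiers, method names, and attribute fields are hypothetical, chosen only to illustrate the two-store design.

```python
# Illustrative sketch: a store that keeps encoded multimedia content
# apart from its meta-data, so that searches touch only the (small)
# meta-data store and never the (large) content store.

class MediaStore:
    def __init__(self):
        self._content = {}   # object id -> raw encoded bytes
        self._metadata = {}  # object id -> descriptive attributes

    def put(self, obj_id, blob, **meta):
        self._content[obj_id] = blob
        self._metadata[obj_id] = dict(meta)

    def find(self, **criteria):
        """Search on meta-data alone, without touching the content store."""
        return [oid for oid, m in self._metadata.items()
                if all(m.get(k) == v for k, v in criteria.items())]

store = MediaStore()
store.put("clip-17", b"...encoded video...",
          scene="bridge collapse", source="field camera",
          reliability="unverified")
print(store.find(scene="bridge collapse"))  # ['clip-17']
```

Note that the `reliability` attribute is carried as ordinary meta-data, anticipating the quality-tagging role discussed next.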
Meta-data provide additional information about an object, beyond the content that is the object itself. Any attribute can be managed as meta-data. For example, in a multimedia database, meta-data could include index tags, information about the beginnings and endings of scenes, and so on. Meta-data can also include quality information. In crisis management applications, this is crucial, since there are some cases where many of the raw data (40 percent, in David Kehrlein's commercial GIS example discussed in Chapter 1) are inaccurate in some respect. As David Austin, of Edgewater, Maryland, noted, "Often, data are merged and summarized to such an extent that differences attributable to sources of varying validity are lost." Separately distinguishable meta-data about the reliability of sources can help users identify and manage around poor-quality data.

Types are a kind of meta-data that provide information on how objects can be interpreted. In this regard, type information is like the more usual database schema. Types, however, can be task specific and ad hoc. Task specificity
means, for example, that the particular consensus types in the Multipurpose Internet Mail Extensions (MIME) hierarchy are a small subset of the types that could be developed for a particular application. Because of this task specificity, the evolution of types presents major challenges. For example, the type a user may adopt for a structured document typically evolves over a period of months or years as a result of migration from one desktop publishing system to the next. Either the user resists migration and falls behind technology developments, or the user must somehow manage a set of objects with similar, but not identical, types. One approach to this problem is to create a separate set of type servers that serve up type information and related capabilities (e.g., conversion mechanisms that allow objects to be transformed from one type to another).

A related issue is the evolution of structured objects to contain software components. The distinction between structured documents and assemblies of software components has been blurring for some time, and this trend will further complicate the effective management of structured objects. For example, because a structured object can contain computation, it is no longer benign from the standpoint of security. An information object could threaten confidentiality by embodying a communications channel back to another host, or it could threaten integrity or service access through computations it makes while within a protected environment. Many concepts are being developed to address these problems, but their interplay with broader information management issues remains to be worked out. This issue also reinforces the increasing convergence between concepts of information management and concepts of software and computation.

Production and Value

National-scale applications provide many more opportunities for information producers to participate in an increasingly rich and complex information marketplace.
Every educator, health care professional, and crisis management decision maker creates information, and that information has a particular audience. Technology to support the efficient production of information and, more generally, the creation of value in an information value chain is becoming increasingly important in many application areas and on the Internet in general. The World Wide Web, even in its present early state of development, provides evidence of the wide range of kinds of value that can be provided beyond what are normally thought of as original content. For example, among the most popular Web services are sites that catalog and index other sites. Many sites are popular because they assess and evaluate other sites. There are services emerging for brokering of information, either locating sites in response to queries or locating likely consumers of produced specialty information. Because of the speed of the electronic network, many steps can be made very efficiently along the way from initial producer to end consumer of information.
Related to these concepts of information value are new information services. For example, there are several candidate services that support commerce in information objects. Because information objects can be delivered rapidly and reliably, they can support commerce models that are very different from models for physical objects. In addition, services are emerging to support information retrieval, serving of complex multimedia objects, and the like. The profusion of information producers on the Web also creates a need for technology that enables successful small-scale services to scale up to larger-scale and possibly institutional-level services. National-scale applications such as crisis management complicate this picture because they demand attention to quality and timeliness. Thus the capability of an information retrieval system, for example, may be measured in terms of functions ranging from resource availability (for meeting a deadline) to precision and recall.

Distribution and Relocation

As noted above, distributed information resources may have to be applied, in the aggregate, to support national-scale applications. In these applications, there can be considerable diversity that must be managed. The distributed information resources can be public or private, with varying access control, security, and payment provisions. They can include traditional databases, wide-area file systems, digital libraries, object databases, multimedia databases, and miscellaneous ad hoc information resources. They can be available on a major network, on storage media, or in some other form. They also can include a broad range of kinds of data, such as structured text, images, audio, video, multimedia, and application-specific structured types. For many applications, these issues can interact in numerous ways. For example, when network links are of low capacity or are intermittent, in many cases it may be acceptable to degrade quality.
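Graceful degradation over constrained links, as just described, might be sketched as choosing the richest representation that fits the link's capacity within a deadline. The formats, sizes, and thresholds below are illustrative assumptions, not measurements.

```python
# Candidate representations of the same situation data, richest first.
# Sizes are rough, invented figures for illustration.
FORMATS = [
    ("gis-dataset", 50_000),   # full geographic information system file, KB
    ("map-image", 800),        # static rendered map, KB
    ("text-summary", 4),       # plain-text description, KB
]

def choose_format(link_kbps: float, deadline_s: float) -> str:
    """Return the richest format deliverable within the deadline."""
    budget_kb = link_kbps * deadline_s / 8  # kilobits/s -> kilobytes
    for name, size_kb in FORMATS:
        if size_kb <= budget_kb:
            return name
    return FORMATS[-1][0]  # fall back to the smallest form

print(choose_format(link_kbps=9.6, deadline_s=60))     # low-rate wireless link
print(choose_format(link_kbps=10_000, deadline_s=60))  # high-speed network
```

The same selection logic could run at a server deciding what to push, or at a client deciding what to request, which is exactly the placement question taken up next.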
Alternatively, relative availability, distribution, and quality of communications and computing resources may determine the extent to which data and computation migrate over the distributed network. For example, low-capacity links and limited computing resources at the user's location may suggest that query processing is best done at the server; but when clients have significant computing resources and network capacity is adequate, then query processing, if it is complex, could be done at the client site. When multiple distributed databases cooperate in responding to queries, producing aggregated responses, this resource-balancing problem can become more complex; when atomicity and replication issues are taken into account, it can become even more difficult.

In crisis management, resource management and availability issues take on new dimensions. In a crisis, complex information integration problems may yield results that go into public information kiosks. When communications are intermittent or resource constrained, caching and replication techniques must
respond to levels of demand that are unanticipated or are changing rapidly. Can data replicate and migrate effectively without direct manual guidance and intervention? This is more difficult when there are data quality problems or when kiosks support direct interaction and creation of new information.

USER-CENTERED SYSTEMS: DESIGNING APPLICATIONS TO WORK WITH PEOPLE

Research on natural, intuitive user interface technologies has been under way for many years. Although significant progress has been made, workshop participants indicated that a more comprehensive view of the human-computer interface as part of larger systems must be developed in order for these technologies to yield the greatest benefit. Allen Sears observed, "The fact that humans make . . . errors, the fact that humans are impatient, the fact that humans forget—these are the kinds of issues that we need to deal with in integrating humans into the process. The flip side of that . . . is that humans, compared to computers, have orders-of-magnitude more domain-specific knowledge, general knowledge, common sense, and ability to deal with uncertainty."

System designs should focus on integrating humans into the system, not just on providing convenient human-computer interfaces. The term "system" today commonly refers to the distributed, heterogeneous networks, computers, and information that users interact with to build and run applications and to accomplish other tasks. A more useful and accurate view of the user-system relationship is of users as an integral part of the total system and solution space. Among other advantages, this view highlights the need for research integrating computing and communications science and engineering with advances in the understanding of user and organizational characteristics from the social sciences.
Human-centered Systems and Interfaces

Traditional human-computer interface research embraces a wide array of technologies, such as speech synthesis, visualization and virtual reality, recognition of multiple input modes (e.g., speech, gesture, handwriting), language understanding, and many others.12 All applications can benefit from easy and natural interfaces, but these are relative characteristics that vary for different users and settings. A basic principle is that the presentation should be as natural to use as possible, to minimize demands on those with no time or attention to spare for learning how to use an application. This does not necessarily imply simplicity; an interface that is too simple may lack capabilities the user needs and thus lead to frustration. In addition, designers of interfaces in large-scale applications with diverse users cannot depend on the presence of a particular set of computing and communications resources, so the interfaces must be adaptable to what is available. The
network-distributed nature of many applications requires attention to the scaling of user interfaces across a range of available platforms, with constraints that are diverse and—especially in crises—unpredictable. Constraints include power consumption in portable computers and communications bandwidth. For example, it is important that user interfaces and similar services for accessing a remote computing resource be usable, given the fidelity and quality of service available to the user. An additional focus for research in making interface technologies usable in national-scale applications is reducing their cost.

Crisis management, however, highlights the need to adapt not only to available hardware and software, but also to the user. Variations in training and skills affect what users can do with applications and how they can best interact with them. As David Austin observed:

Training is also critical; people with the proper skill mix are often in short supply. We have not leveraged the technology sufficiently to deliver short bursts of training to help a person gain sufficient proficiency to perform the task of the moment. . . . [What is needed is] a system that optimizes both the human element and the information technology element using ideas from the object technology world. In such a system, a person's skills would be considered an object; as the person gained and lost skill proficiency over his career, he would be trained and given different jobs [so that he could be part of] a high-performance work force able to match any in the world. The approach involves matching a person with a job and at the same time understanding the skill shortfalls, training in short bursts, and/or tutoring to obtain greater proficiency. As shortfalls are understood by the person, he or she can task the infrastructure to provide just-in-time, just-enough training at the time and place the learner wants and needs it.
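The skill-shortfall matching Austin describes can be sketched as a simple comparison of the proficiencies a job requires with those a person currently holds; the skill names and proficiency levels below are invented for illustration.

```python
def skill_shortfall(required: dict, held: dict) -> dict:
    """Map each under-proficient skill to the gap that short-burst,
    just-in-time training would need to close."""
    return {skill: level - held.get(skill, 0)
            for skill, level in required.items()
            if held.get(skill, 0) < level}

# Hypothetical job profile and personnel record.
job = {"gis-overlay": 3, "triage-protocols": 2, "radio-ops": 1}
person = {"gis-overlay": 3, "radio-ops": 0}

print(skill_shortfall(job, person))  # {'triage-protocols': 2, 'radio-ops': 1}
```

The output identifies exactly which skills to target, which is the information a training infrastructure would need to deliver "just-enough" instruction for the task of the moment.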
In addition, because conditions such as stress and information overload can vary rapidly during a crisis, there would also be value in an ability to monitor the user's performance (e.g., through changes in response time or dexterity) and adapt in real time to the changing capabilities of users under stress. By using this information, applications such as a "crisis manager's electronic aide" could adjust filtering and prioritization to reduce the flood of information given to the user. Improvements in techniques for data fusion in real time among sensors and other inputs would enhance the quality of this filtering. Applications could also be designed to alter their presentation to provide assistance, such as warnings, reminders, or step-by-step menus, if the user appears to be making increasing numbers of errors.

The focus of these opportunities is inherently multidisciplinary. To achieve significant advances in the usability of applications, improvements in particular interface techniques can be augmented by integrating multiple, complementary technologies. Recent research in multimodal interfaces has proceeded from the recognition that no single technique is always the best for even a single user, much less for all users, all the time, and that a combination of techniques can be
more effective than any single one. Learning how to optimize the interface mode for any given situation requires experimentation, as well as building on social science research in areas such as human factors and organizational behavior.

Recognizing that the ideal for presentation of information to the user is in a form and context that is understandable, workshop participants noted that in some applications a visual presentation is called for. Given adequate performance, an immersive virtual reality environment could benefit applications such as crisis management training, telemedicine, and manufacturing design. In crisis management training especially, a realistic recreation of operational conditions (such as the appearance of damaged structures, the noise and smoke of fires and storms, the sound of explosions) can help reproduce—and therefore train for—the stress-inducing sensations that prevail in the field. Because response to a crisis is inherently a collaborative activity, simulations should synthesize a single, consistent, evolving situation that can be observed from many distinct points of view by the team members.13

Don Eddington identified a common perception of the crisis situation as a feature that is essential to effective collaboration. A depiction of the geographic neighborhood of a crisis can provide an organizing frame of reference. Photographs and locations of important or damaged facilities, visual renderings of simulation results, logs of team activity, locations of other team members, notes—all can attach to points on a map. Given adequate bandwidth and computing capacity, another way to provide this common perception might be through synthetic virtual environments, displaying a visualization of the situation that could be shared among many crisis managers.
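The map-as-organizing-frame idea above, with photographs, notes, and team locations all attached to geographic points, can be sketched as follows; the coordinates, artifact kinds, and class interface are illustrative assumptions.

```python
from collections import defaultdict

class SituationMap:
    """Minimal sketch of a shared map: artifacts of different kinds
    attach to geographic points, giving all team members one
    consistent frame of reference."""

    def __init__(self):
        self._points = defaultdict(list)  # (lat, lon) -> attached artifacts

    def attach(self, lat, lon, kind, payload):
        self._points[(lat, lon)].append({"kind": kind, "payload": payload})

    def at(self, lat, lon, kind=None):
        """Everything attached to a point, optionally filtered by kind."""
        items = self._points[(lat, lon)]
        return [a for a in items if kind is None or a["kind"] == kind]

sit = SituationMap()
sit.attach(34.05, -118.24, "photo", "collapsed-overpass.jpg")
sit.attach(34.05, -118.24, "note", "structural team en route")
sit.attach(34.05, -118.24, "team-member", "engineer-unit-3")

print(len(sit.at(34.05, -118.24)))          # 3
print(sit.at(34.05, -118.24, kind="note"))  # the note only
```

In a real deployment the store would be replicated across sites and updated concurrently; this sketch shows only the organizing data model.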
(The Crisis 2005 scenario presented in Box 1.3 suggests a long-range goal for implementing this concept such that a crisis manager could be projected into a virtual world optimized to represent the problem at hand in a way that enhances the user's intuition.) Research challenges underlying such visualizations include ways to integrate and display information from diverse sources, including real observations (e.g., from field reports or sensors) and simulations. Variation in both the performance of equipment and the skills of different users may prevent displaying precisely the same information to all users; presumably, some minimal common elements are necessary to enable collaboration. Determining precisely what information and display features should be common to all collaborators is an example of the need for technology design to be complemented with multidisciplinary research in areas such as cognition and organizational behavior.

Collaboration and Virtual Organizations

Because people work in groups, collaboration support that helps them communicate and share information and resources can be of great benefit. Crisis management has a particularly challenging need: an instant bureaucracy to respond effectively to a crisis. In a crisis, there is little prior knowledge of who will
be involved or what resources will be available; nevertheless, a way must be found to enable them to work together to get their jobs done. This implies assembling resources and groups of people into organized systems that no one could know ahead of time would have to work together. Multiple existing bureaucracies, infrastructures, and individuals must be assembled and formed into an effective virtual organization. The instant bureaucracy of a crisis response organization is an even more unpredictable, horizontal, and heterogeneous structure than is implied by traditional command and control models of military organizations in warfare—themselves a complex collaboration challenge. Crisis management collaboration must accommodate this sort of team building rapidly; thus, it provides requirements for developing and opportunities for testing collaboration technologies that are rapidly configurable and support complex interactions.

One relatively near-term opportunity is to develop and use the concept of anchor desks (discussed above, in the section "Distributed Computing"). The concept has been tested in technology demonstrations such as JWID (see Chapter 1); field deployment in civilian crises could be used to stress the underlying concepts and identify research needs. Anchor desks can provide a resource for efficient, collaborative use of information, particularly where multiple organizations must be coordinated. They represent a hybrid between decentralized and centralized information management. Each anchor desk could support a particular functional need, such as logistics or weather forecasting.
A crisis management anchor desk would presumably be located outside the crisis zone, for readier access to worldwide information sources and expertise; however, it would require sufficient communication with people working at the scene of the crisis to be useful to them, as well as the ability to deliver information in scalable forms appropriate to the recipient's available storage and display capabilities (e.g., a geographic information system data file representing the disaster scene for one, a static map image for another, a text file for a third).

An anchor desk could not only integrate data from multiple sources, but also link them with planning aids, such as optimized allocation of beds and medicines and prediction of optimal evacuation routes implemented as electronic overlays on geographic information systems, with tools involving a range of artificial intelligence, information retrieval, integration, and simulation technologies. An anchor desk could also house a concentration of information analysts and subject matter experts (e.g., chemists, as envisioned in the Crisis 2005 scenario); computing resources for modeling, simulation, data fusion, and decision support; information repositories; and others. Anchor desks could provide services to support cross-organizational collaboration, such as tools for rapidly translating data files, images, and perhaps even human languages into forms usable by different groups of people.

Furthermore, the anchor desk might not be physically at one place; a logically combined, but physically separated, collection of networked resources could perform the
same function, opening the possibility for multiple ways of incorporating the capability into the architecture of the crisis response organization. The set of technologies implied by this sort of anchor desk could serve to push research not only in each technology, but also in tools and architectures for integrating these capabilities, such as whiteboards and video-conferencing systems that scale for different users' capacities and can correctly integrate multiple security levels in one system. Nevertheless, information must be integrated not only at remote locations such as command centers and anchor desks, but also at field sites. David Kehrlein, of the Office of Emergency Services, State of California, noted, "Solutions require development of on-site information systems and an integration of those with the central systems. If you don't have on-site intelligence, you don't know a lot."

Judgment Support

The most powerful component of any system for making decisions in a crisis is a person with knowledge and training. However, crisis decision making is marked by underuse of information and overreliance on personal expertise in an environment that is turbulent and rich in information flows. The expert, under conditions of information overload, acts as if he or she has no information at all. Providing access to information is not enough. The ability to evaluate, filter, and integrate information is the key to its being used. Filtering and integrating could be done separately for each person on that person's individual workstation. However, a more useful approach for any collaborative activity would be to integrate and allocate information within groups of users.
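Group-level filtering of this kind might be sketched as ranking incoming items against a group's interest profile and passing only the best few, culling the rest. The keyword-overlap scoring below is a deliberately naive stand-in for a real relevance measure; the group profile and messages are invented.

```python
def filter_for_group(items, interests, top_k=2):
    """Keep only the top_k items most relevant to the group's interests,
    so the group sees the best information rather than the whole flood."""
    def score(item):
        words = set(item.lower().split())
        return len(words & interests)  # crude relevance: shared keywords
    ranked = sorted(items, key=score, reverse=True)
    return ranked[:top_k]

# Hypothetical interest profile for a logistics team.
logistics_interests = {"fuel", "trucks", "routes", "supplies"}
incoming = [
    "press briefing scheduled at noon",
    "fuel trucks rerouted around flooded routes",
    "supplies low at shelter 4",
]

print(filter_for_group(incoming, logistics_interests))
```

Such a filter would naturally sit at the boundary of a group's virtual subnet, admitting items into the group's shared view rather than into each individual's inbox.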
(In fact, information filtering at the boundary of a linked group of users could be one of the most important services performed by the virtual subnets discussed above in the section "Networking"; filters could help individuals and groups avoid information-poor decision making in an information-rich environment.) Information integration techniques such as those discussed in the section "Information Management" are generally presented in terms of finding the best information from diverse sources to meet the user's needs. The flip side of this coin is the advantage of being able to cull the second-best and third-best information, reducing the unmanageable flood.

A set of special needs of crisis management, which may have significant utility in other application areas as well, can be captured in the concept of judgment support. A crisis manager often makes intuitive judgments in real time that correspond to previously undefined problems without complete contingency plans. This should be contrasted with traditional notions of decision support, which are associated with a more methodical, rule-based approach to previously defined and studied problems. Judgment support for crisis management could
rely on rule-based expert systems to some extent, but the previously defined problems used to train these systems will necessarily be somewhat different from any given crisis. Workshop participants suggested a need for automated support comparing current situations with known past cases. To achieve this automation, however, much better techniques are required for abstractly representing problems, possible solutions, and the sensitivity of predicted outcomes to variations, gaps, and uncertain quality in available information.

The last point is particularly important for crises, because it is inevitable that some of the information the judgment maker relies on will be of low quality. Two examples are the poor quality of maps that crisis management experts remarked on in the workshops and the rapid rate of change in some crises that continually renders knowledge about the situation obsolete. The technology for representing problem spaces and running computations on them must therefore be able to account for the degree of uncertainty about information. Moreover, data may not always vary in a statistically predictable way (e.g., a Gaussian distribution). In some kinds of crises, data points may be skewed unpredictably by an active adversary (e.g., a terrorist or criminal), by someone attempting to hide negligence after an accident, or by unexpected failure modes in a sensor network.

Another reason the challenge of representing problems may be particularly difficult in crisis management is that the judgments needed are often multidimensional in ways that are inherently difficult to represent. James Beauchamp's call for tools to help optimize not only the operational and logistical dimensions of a foreign disaster relief operation, but also the political consequences of various courses of action, illustrates the complexity of the problem.
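The requirement noted above to account for uncertain, possibly skewed data can be sketched by carrying an explicit reliability tag with each observation, so that downstream computation can weight or discard inputs rather than treat all data as equally trustworthy. The fields, weighting scheme, and threshold below are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    value: float
    reliability: float  # 0.0 (untrusted) .. 1.0 (verified); illustrative scale
    source: str

def weighted_estimate(obs, min_reliability=0.2):
    """Reliability-weighted mean, discarding near-untrusted points outright
    (a crude guard against skewed or adversarial inputs)."""
    usable = [o for o in obs if o.reliability >= min_reliability]
    if not usable:
        return None
    total_w = sum(o.reliability for o in usable)
    return sum(o.value * o.reliability for o in usable) / total_w

# Hypothetical reports of the same quantity from sources of varying validity.
reports = [
    Observation(120.0, 0.9, "sensor-net"),
    Observation(115.0, 0.6, "field report"),
    Observation(400.0, 0.05, "anonymous tip"),  # culled as near-untrusted
]
print(round(weighted_estimate(reports), 1))  # 118.0
```

A weighted mean assumes errors are at least roughly symmetric; as the text notes, adversarially skewed data would need stronger outlier detection than this threshold provides.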
Even presenting the variables in a way that represents and could allow balancing among all dimensions of the problem is not possible with current techniques. By contrast, the multidimensional problem discussed in Chapter 1 (see the section "Manufacturing")—simulating and optimizing trade-offs among such facets as product performance parameters, material costs, manufacturability, and full product life-cycle costs—although extremely complex computationally, is perhaps more feasible to define in terms with which computer models can work.

If a problem can be represented adequately, a judgment support system should be able to assist the judgment maker by giving context and consequences from a multidimensional exploration of the undefined problem represented by the current crisis. This context construction requires automated detection and classification of issues and anomalies, identifying outlier data points (which could represent errors, but could also indicate emerging new developments), and recognizing relationships between the current situation and previously known cases that may have been missed by or unknown to the crisis manager.

Because judgments are ultimately made by people, not computers, technologies intended to support making judgments must be designed for ease of use and with an ability to understand and take into account the capabilities and needs of the user. To a great extent, of course, it is up to the user to ask for the information
he or she needs, but a model of what knowledge that individual already has could be used to alter the system's information integration and presentation approaches dynamically. Another special application for crisis management is monitoring the decision maker, because of the stress and fatigue factors that come into play. Performance monitors could detect when the user's performance is slipping, by detecting slowed reaction time and the onset of errors. This information could guide a dynamic alteration in the degree of information filtering, along with variations in the user interface (such as simpler menu options). These capabilities could be of more general value. For example, they could assist in assessing the effectiveness of multimedia training and education tools in schools and continuing-education applications.

Of course, to be useful, a monitoring capability would have to be integrated properly with the way users actually use systems. For example, users will ignore a system that instructs them to get some rest when rest is not an option. Instead, it might be valuable for a system to switch to a standard-operating-procedures-oriented, step-by-step interface when the user shows signs of tiring. Human factors research provides useful insights, including some that are of generic usefulness. However, needs will always vary with the context of specific applications, implying the strong necessity for researchers and application users to interact during testing and deployment of systems and design of new research programs (Drabek, 1991).

NOTES

1. Partridge, Craig, and Frank Kastenholz, "Technical Criteria for Choosing IP the Next Generation (IPng)," Internet Request for Comments 1726, December 1994. Available on line from http://www.cis.ohio-state.edu/hypertext/information/rfc.html.

2. Services and technologies are now emerging that may meet this need, such as cellular digital packet data and digital spread-spectrum.
Portable terminals that can be used to communicate via satellite uplink are an additional exception; however, such systems are not yet portable or affordable enough that many relief workers in a crisis could carry one for general use.

3. Noncommercial, amateur packet radio is a counterexample; however, commercial service offerings are lacking. Part of the problem is the lack of methods of accounting for use of the spectrum in peer-to-peer packet radio networks, without which there is a potential problem of overuse of the spectrum—a tragedy of the commons.

4. A description of the proposed demonstration is available on line at the JWID '96 home page, http://www.spawar.navy.mil.

5. Many telephone carriers now provide frame-relay virtual subnets that are intended to support the isolation discussed here. One serious drawback at present is that their establishment is on a custom basis and is both labor intensive and time-consuming. Telephone carriers are likely to adopt a more automated order fulfillment process as demand grows, but it remains technically infeasible to requisition and establish these services in the heat of a crisis to solve an immediate problem.

6. Given the current costliness of access to high-performance computation and high-speed network services, achieving this gain will require political and economic decisions about making resources available, perhaps based on building a case that this investment could yield a positive payoff by lowering the eventual cost of responding to crises.

7. In addition, the coarser-grained simulation can be used to provide dynamically consistent
boundary conditions around the areas examined in finer detail. The model, called the Advanced Regional Prediction System, is written in Fortran and designed for scalability. See Droegemeier (1993) and Xue et al. (1996). See also "The Advanced Regional Prediction System," available on line at http://wwwcaps.uoknor.edu/ARPS.

8. A CAPS technical paper explains that "although no meteorological prediction or simulation codes we know of today were designed with massive parallelism in mind, we believe it is now possible to construct models that take full advantage of such architecture." See "The Advanced Regional Prediction System: Model Design Philosophy and Rationale," available on line at http://wwwcaps.uoknor.edu/ARPS.

9. The ability to effectively handle time as a resource is an issue not only for integrating real-time data, but for distributed computing systems in general. Formal representation of temporal events and temporal constraints, and scheduling and monitoring distributed computing processes with hard real-time requirements, are fundamental research challenges. Some research progress has been made in verifying limited classes of real-time computable applications and implementing prototype distributed real-time operating systems.

10. Details about I-WAY are available on line at http://www.iway.org.

11. One key data fusion challenge involves data alignment and registration, where data from different sources are aligned to different norms.

12. Some key challenges underlying communication between people and machines relate to information representation and understanding. These are addressed primarily in the section "Information Management," but it should be understood that without semantic understanding of, for example, a user's requests, no interface technology will produce a good result.

13.
This concept is currently used for military training in instances when high-performance computation is available; trainees' computers are linked to the high-performance systems that generate the simulation, and the trainees see a more or less realistic virtual crisis (OTA, 1995). Nonmilitary access to such simulations likely requires lower-cost computing resources.