2 The Potential to Enhance Disaster Management: Key IT-Based Capabilities
How could better application of information technology (IT) to disaster management reduce the human and economic costs of catastrophic events? This chapter outlines a vision for IT-enhanced disaster management in terms of six areas of IT-based capabilities. Three scenarios developed by the committee (presented in Appendix A) describe specific fictional disasters to help place those capabilities in the context of existing IT use in disaster management and to highlight how progress would have tangible positive impacts.
Reducing the impact of disasters requires a complex mix of technical and social endeavors, and no single prescription or discipline can provide all the answers.1 Indeed, disaster researchers have frequently expressed concerns that technology not be viewed as a panacea.2 The committee shares the view that enhancing disaster management requires attention to
1. See, for example, National Research Council, A Safer Future: Reducing the Impacts of Natural Disasters, National Academy Press, Washington, D.C., 1991.
2. See, for example, E.L. Quarantelli, “Problematical Aspects of the Information/Communication Revolution for Disaster Planning and Research: Ten Non-Technical Issues and Questions,” Disaster Prevention and Management 6(1), 1997. See also Sharon S. Dawes, Thomas Birkland, Giri Kumar Tayi, and Carrie A. Schneider, Information, Technology, and Coordination: Lessons from the World Trade Center Response, Center for Technology in Government, University at Albany, State University of New York, 2004; available at http://www.ctg.albany.edu/publications/reports/wtc_lessons/wtc_lessons.pdf.
technological, organizational, and social factors and depends on a solid understanding of disaster management as well as the technologies.3
Nonetheless, IT represents an important point of leverage for enhancing disaster management.4 Briefings to the committee suggested that progress continues to be made toward ever more effective use of information technology in disaster management. Better preparation and training of public safety officials and the public, improved mitigation and prevention measures, more efficient and effective response, and more rapid recovery are all possible.5 Furthermore, the public, which sees technology deployed ever more deeply in other societal systems, increasingly expects it to be applied to improve the handling of disasters.
This chapter discusses six key IT-based capabilities selected by the committee because they (1) have the potential to address major problem areas in current disaster management practice and (2) represent areas with significant potential for advancing the current state of the art. These capabilities span hazard mitigation, disaster preparedness, disaster response, and disaster recovery. They also aim to address the requirements of practitioners at all levels—first responders, local or regional emergency managers, and national emergency managers. Improvements in these capabilities apply to natural, accidental, and terrorist-induced disasters alike, though some capabilities may be more specific to one type.
Table 2.1 lists these capabilities together with examples of near-term, mid-term, and long-term opportunities for technology development that were identified by the committee. Some of these technology areas are already the focus of significant federal research and development investment. For example, self-managing and repairing networks are the focus of a Defense Advanced Research Projects Agency (DARPA) research program. Some technologies are the focus of considerable research and development investment from the private sector, such as wireless mesh
TABLE 2.1 Key IT-Based Capabilities for Disaster Management and Related Promising Technologies

More robust, interoperable, and priority-sensitive communications
  Near term: Cellular; wireless networking; redundant and resilient infrastructure; Internet/IP-based networking
  Midterm: Mobile cellular infrastructure; intelligent spectrum sharing; multiple input/multiple output wireless systems; integrated voice and data; policy-based access control mechanisms
  Long term: Software-defined radios; delay-tolerant networking; passive and active embedded conductors and relays for enhanced communication in buildings, rubble, and underground; policy-based routing and congestion management; self-managing and repairing (autonomous and adaptive) networks

Improved situational awareness and a common operating picture
  Near term: Radio-frequency identification for resource tracking and logistics
  Midterm: Embedded, networked sensors; routine information fusion; publish/subscribe information dissemination; user-centered situational awareness information presentation
  Long term: Semantic routing; data mining across diverse information sources; calibrated information confidence tools; deployable sensor networks; automated information fusion from diverse sources; network and information security; augmented cognition

Improved decision support and resource tracking and allocation
  Near term: Online resource directories; commercial collaboration software and file sharing
  Midterm: Dynamic responsibility charting; intelligent adaptive planning tools
  Long term: Decision sentinels; distributed emergency operation centers; resource use modeling; coordinated transportation, communication, and decision support; computer-assisted decision-making tools

Greater organizational agility for disaster management
  Near term: Computer-mediated exercises; portable unmanned aerial vehicles and robots
  Midterm: Event-replay tools; online repositories of lessons learned; integrated ad hoc data-collection tools (blogs/wikis)
  Long term: Continuous learning tools; computer-assisted disaster simulation training; distributed, scalable, survivable data logging; dynamic capability profiling and credentialing; collective sensemaking

Better engagement of the public
  Near term: Automated, multimodal public notification and resource contact systems; multimodal public reporting capabilities; validated online information sources; reverse 911 capability (i.e., two-way emergency reporting)
  Midterm: Volunteer mobilization systems; distributed, dynamic private resource directories; enhanced two-way communication with the public
  Long term: Automated public reporting tools; optimized data formatting for differing presentation devices

Enhanced infrastructure survivability and continuity of societal functions
  Near term: Mobile power generators; redundant radio systems; dynamic stockpiled supply management
  Midterm: Network redundancy; renewable power sources; embedded sensors for nondestructive asset evaluation
  Long term: Risk management tools with uncertainty modeling; resilient “smart” materials and structures for infrastructure survivability
networks and collaboration software. Different actions are likely to be required to move technologies toward deployment in the field depending on where they are in the technology pipeline (discussed in Chapter 3), the degree of specialized technological adaptation required for use in disaster management, and the need for organizational changes.
The sections that follow examine each of the six key capabilities in more detail.
MORE ROBUST, INTEROPERABLE, AND PRIORITY-SENSITIVE COMMUNICATIONS
During a disaster, both commercial and public safety communication infrastructure—telephone lines, radio towers, communication switches, network operation centers, and the requisite power—is often degraded or damaged. Simultaneously, demand for communication increases from the public and from first responders. Mobile communication demand is especially acute. New users from external public safety jurisdictions enter the disaster region. The public may have to be mobilized for evacuations. Moreover, environments confronted by first responders, such as collapsed buildings, place unusual requirements on communications capabilities. And, finally, hostile action may compromise even those resources that survive the initial disaster.
Simple availability of communications is thus a critical starting point for disaster response. Communication robustness can be improved by applying well-understood availability techniques: hardening infrastructure, improving network resilience and adaptability, providing redundancy and diversity, improving component robustness, and optimizing recovery speed. Notably, many communications problems are caused by the destruction of communication devices or lines, by the loss of power to radio towers, switches, and other infrastructure, and by the inability to recharge handsets and other mobile devices. These are often the result of damage to physical structures (buildings, cell towers). Although it is not economically feasible to harden all relevant equipment for worst-case scenarios, improvements are certainly possible.
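The value of redundancy and diversity can be illustrated with a simple availability calculation. The sketch below is not from the report; it only shows the standard reasoning that with independent, diverse links, communications fail only when every link fails.

```python
# Illustrative (not from the report): how redundancy and diversity raise
# availability. With independent links, each available with probability a_i,
# at least one link survives with probability 1 - product(1 - a_i).

def combined_availability(link_availabilities):
    """Availability of a communication path backed by redundant, diverse links."""
    p_all_fail = 1.0
    for a in link_availabilities:
        p_all_fail *= (1.0 - a)
    return 1.0 - p_all_fail

# A single 99% dedicated link vs. the same link plus a 95% commercial fallback.
print(round(combined_availability([0.99]), 4))        # 0.99
print(round(combined_availability([0.99, 0.95]), 4))  # 0.9995
```

The caveat, of course, is independence: links that share a tower, a power source, or a backhaul fail together, which is why diversity of technology base matters as much as redundancy.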
Commercial services could provide a valuable complement to dedicated disaster management communications systems by providing redundant infrastructure for voice and data communications. Generally implemented on a different technology base than that of dedicated systems, they also add diversity. Commercial infrastructure, such as the cellular network, is also, by design, highly interoperable. (Mobile callers can readily communicate with a subscriber to their carrier, any other carrier, or a landline subscriber.) The participation of wireless Internet service
providers in Hurricane Katrina response and relief efforts hints at the possibilities of embracing and integrating another type of commercial technology into disaster management practice. Of course, one reason for deploying separate public safety radio systems is that they are designed to be more resilient than commercial infrastructure. And public safety radios are designed to much more stringent operational standards than mobile phones. Yet both commercial and public safety communications infrastructures have suffered breakdowns in recent disasters.
One critical issue that arises in considering commercial services is priority access. When using commercial services, priority could be provided in a variety of ways. For example, it is possible to set up a cellular network in a spectrum band that is dedicated to disaster management, though cellular handsets would have to be modified (at a significant additional cost) to support this (and the spectrum allocated). Alternatively, a commercial service can be used, with arrangements made (preferably prior to an incident) to reserve a certain fraction of the capacity for prioritized disaster management traffic. Of course, priority access to commercial services requires a contractual and regulatory framework as well as technology adaptation.
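The second arrangement above, reserving a fraction of commercial capacity for prioritized traffic, can be sketched as a simple admission-control rule. The class, channel counts, and reservation fraction below are illustrative assumptions, not a description of any carrier's actual mechanism.

```python
# Hypothetical sketch of priority access on a commercial network: a fraction
# of a cell sector's capacity is held back for disaster-management traffic.
# All names and numbers here are invented for illustration.

class CellSector:
    def __init__(self, channels, reserved_fraction):
        self.channels = channels
        self.reserved = int(channels * reserved_fraction)  # held for priority calls
        self.in_use = 0

    def admit(self, priority: bool) -> bool:
        """Admit a call if capacity allows; ordinary calls may not use the reserve."""
        limit = self.channels if priority else self.channels - self.reserved
        if self.in_use < limit:
            self.in_use += 1
            return True
        return False

sector = CellSector(channels=100, reserved_fraction=0.10)
admitted = sum(sector.admit(priority=False) for _ in range(120))
print(admitted)                      # 90 -- ordinary calls stop at the reserve line
print(sector.admit(priority=True))   # True -- a responder call uses the reserve
```

In practice such a scheme requires exactly the contractual and regulatory framework the text notes, since it deliberately blocks some paying customers during a surge.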
Commercial technology (and adaptations of commercial technology), whether used as part of commercial services or incorporated into dedicated disaster communications systems, could be used to more quickly leverage the latest technology advances and capabilities. Commercial technologies provide a range of capabilities supporting voice and data at rapidly increasing transfer rates. Two examples of commercial technologies widely embraced by the public that hold potential for disaster management are cellular telephony and 802.11 standards-based wireless networking. Cell phones are now used by a large majority of people on a daily basis, so it is natural to continue to use them in a disaster. Besides traditional voice calls, cell phones often also support push-to-talk capabilities, text messaging, and Web access, all of which could be usefully employed. Wireless networking is also becoming ubiquitous in many areas and is supported on laptops and handheld devices. More generally, Internet Protocol (IP)-based communications networks could allow support of emerging IP-based multimedia services, high-data-rate access, and mission-critical tactical group voice and interoperable communications during emergency responses.6 All these technologies are widely used and relatively inexpensive. Their familiarity addresses another issue related to communications robustness—the tendency to fall back on technology used routinely.7
Although some commercial communication technologies have been successfully adopted by some first responders, there is certainly additional scope for using these technologies, often with minimal adaptation, to improve disaster response. Rapid deployment of wireless, cellular, and satellite infrastructure to replace infrastructure damaged in a disaster is one such opportunity. Mobile, rapidly deployable infrastructure has the advantage of leveraging commercial technology without necessarily relying on commercial service providers. Mobile infrastructure can be brought in after an event to quickly reestablish communications, (partially) replacing infrastructure lost during the event. Mobile cellular technology might also be used to bring cellular infrastructure to rural or other areas lacking pre-deployed cellular service just as it has already been demonstrated to bring such service to areas where existing cellular infrastructure was destroyed.8
There are, however, important communications problems that arise in disaster management, especially in response activities, that are unlikely to be addressed in the commercial market. One notable example is meeting communications needs inside buildings or other enclosed spaces, including damaged structures. Possible approaches include mandating the deployment of hardened repeaters in buildings, developing and deploying special low-frequency radios, and developing and deploying low-cost “bread crumb” repeaters for first responders. Another example of such a communications problem is the congestion (both local and systemwide) that arises when and where communication is most needed, resulting in a spike in communications traffic, especially in the area where infrastructure is likely to be most damaged. Handling surging demand that coincides with reduced capability will require new approaches.
Not surprisingly, given the number of organizations that must come together to cope with a major disaster, the interoperability of communications and other IT systems is often cited as a major concern. The overall evolution of communications systems (and, indeed, IT more broadly) in disaster management has been characterized by local, agency-level acquisition and deployment driven by local budgets from local taxing bodies
and by local priorities. This has led to the creation of a heterogeneous mixture of voice and (more recently) data systems across the United States. The result is that different public safety agencies (e.g., police, fire, and emergency medical), even in the same community, are often unable to communicate or share information with each other. Interoperation is not typically considered when IT is acquired. Thus, it is not surprising that limited technical interoperability exists.
Concerns about public safety communications interoperability are not new, though they have received increasing attention in recent years. For example, the Public Safety Wireless Advisory Committee (PSWAC), in a 1996 report to the Federal Communications Commission and the National Telecommunications and Information Administration (NTIA), concluded that “unless immediate measures are taken to alleviate spectrum shortfall and promote interoperability, public safety will not be able to adequately discharge their obligation to protect life and property in a safe, efficient, and cost-effective manner.”9 A 1997 National Institute of Justice (NIJ) study that surveyed state and local law enforcement agencies confirmed and quantified a number of issues identified in the PSWAC report.10
Additional emphasis has been placed on interoperability and associated issues in the wake of the 9/11 attacks, the debate on the transition to digital television, and the response to Katrina. The 9/11 Commission, for example, cited first responder voice communications interoperability as a considerable issue in the response to the attacks and concluded that the highest-priority remedy was assignment of additional radio-frequency spectrum as a way of achieving interoperability. It recommended that Congress enact legislation providing for the “expedited and increased assignment of radio spectrum for public safety purposes.”11 The 9/11 Public Discourse Project’s report-card-like “Final Report on 9/11 Commission Recommendations” gave an “F” to progress on providing adequate radio spectrum for first responders, citing lack of progress freeing up the analog television broadcast spectrum and reserving some of it for public
9. Public Safety Wireless Advisory Committee (PSWAC), Final Report of the Public Safety Wireless Advisory Committee, presented to the FCC and NTIA September 11, 1996, p. 2; available at http://ntiacsd.ntia.doc.gov/pubsafe/publications/PSWAC_AL.PDF.
10. Mary J. Taylor, Robert C. Epper, and Thomas K. Tolman, “Wireless Communications and Interoperability Among State and Local Law Enforcement Agencies,” NCJ 168945, National Institute of Justice, Washington, D.C., January 1998.
11. National Commission on Terrorist Attacks Upon the United States (also known as the 9/11 Commission), The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States, Government Printing Office, Washington, D.C., 2004, p. 397; available at http://www.9-11commission.gov.
safety purposes.12 More recently, legislation was enacted that calls for a handover in 2009.13
First responder interoperability is often cited as a major problem in responding to disasters, and recommendations aimed at addressing interoperability frequently appear prominently in after-action reports on major disasters. Indeed, improved first responder communications would have important benefits in terms of enabling communications across jurisdictions and among fire, police, and medical services. The issue has deservedly received attention from the public, government officials, and lawmakers. However, to place this issue in context, it is worth noting that interoperability is only one of many significant communications issues facing first responders, and first responder communications are not the only technology-related disaster management need (see Box 2.1).
Furthermore, interoperability issues arise in many guises, from compatibility of waveforms and message formats (technical), to terminology and definitions (semantic), to practices and procedures (organizational). Much of the public attention has focused on voice communications, but within the public safety community, data communications interoperability is an increasingly critical part of any communications system. Data communications interoperability, while sharing some issues with voice communications, raises a number of specific issues that arise when sharing information from different sources. The 1997 NIJ study noted the trend toward increasing reliance on information sharing and the importance of data communications interoperability in enabling it. (For an overview of interoperability initiatives, see Appendix B.)
A number of efforts are underway to increase technical standardization, and a number of technical solutions have been developed that allow systems such as first responder radios to be “patched” together. However, interoperability should not be viewed as solely a technical problem. The harder problem is deciding when the various users across these interoperable systems should talk to each other, the protocol for doing so, who can make those decisions, and how teams get formed and dissolved.
Although information could, in principle, flow arbitrarily in distributed networks, some sort of structure is needed in order to act. Most information is hierarchically organized, but there are many different possible hierarchies, reflecting the needs and points of view of their creators.
BOX 2.1 Interoperability in Context

Interoperability is only one of many significant communications issues facing first responders. Consider the following, for example:

First responder communications are far from the only technology-related disaster management need. Consider the following:
There are numerous examples where elements are covered in different hierarchies, and for good reasons. Efforts to determine a priori a “correct” information hierarchy are often ineffective. Thus, simply making communications and systems technically capable of interacting may create more problems than it solves, unless the deeper meaning of interoperation is understood and addressed.
IMPROVED SITUATIONAL AWARENESS AND A COMMON OPERATING PICTURE
Situational awareness capabilities, like communications capabilities, have received considerable attention recently, especially in the aftermath of Hurricane Katrina. Involving much more than “having all the information,” situational awareness is an achieved mental state; it does not begin or end with the presentation of data on a display. It is the degree to which one’s perception of the situation reflects reality. Improving situational awareness capabilities must include advancing and integrating technology that assists disaster managers in building an accurate and complete mental model, in addition to improving the amount and quality of the information available. Reflecting the difficulties of achieving situational awareness, one researcher has compiled a list of the “demons of situational awareness” (Box 2.2). Just as solving technical interoperability is not enough to achieve interoperation among organizations, solving technical situational awareness is not enough either. IT could enable implementation of solutions to situational awareness demons, but the solutions must come from an understanding of the human dimensions of those demons, and the IT systems must incorporate that understanding.
As described in Chapter 1, an increasing amount of information can potentially be brought to bear in disaster management. More information about a disaster may initially seem like a good thing. Yet, data from disparate sources can be difficult to assimilate into useful information because of a multitude of formats, the difficulty of placing sensors where they are needed, and the difficulty of communicating sensor data to those who need it. Moreover, without filtering by human or automated mediators, those receiving the information are likely to become overloaded and to ignore excessive inputs as distractions or to devote already-scarce resources to monitoring or processing them.
Research to identify leverage points for IT to augment and amplify the human ability to make sense of data may improve the effectiveness of disaster management. “Sensemaking” is “the process of searching for a representation and encoding data in that representation to answer task-
BOX 2.2 Eight Major Demons of Situation Awareness
specific questions.”14 There are a number of implications for the development of IT to aid human sensemaking, including integrated design of human interfaces, representational tools, and information retrieval systems. Sensemaking systems have both front ends (visualization) and back ends (content analysis and reasoning) that could aid human ability to skim, power read, recognize patterns, take notes, summarize, drill for details, and flag biases.15
Another factor that can color a person’s understanding of a situation is organizational affiliation. In the early stages, affiliation drives decision making more than situational needs do. This reflects the need to fall back on ingrained and trusted culture and training when uncertainty is high and trusted information is scarce. Over time, if uncertainty decreases and information on the situation increases, affiliation-driven goals dissipate and situational needs come to drive decisions. However, if there is a perception that the situation is out of control, then affiliation factors again drive the decision making. Situational awareness tools that help build a shared reality can reduce affiliation-related and culturally related misperceptions.
Given the number and variety of relevant parties, a detailed, accurate, and shared picture of both the disaster area and the status of the response is an obvious requirement for effective management of the disaster, and it is an area where information technology can be of considerable benefit. Technologies that can help in the near term include improved resource tracking (e.g., using radio-frequency identification tags [RFIDs]), information fusion from a priori known sources, reconciling data collection with privacy concerns, and publish-subscribe systems to deliver the information to the appropriate people. Longer-term research is needed to develop large-scale embedded sensor networks, automatic calibration of data confidence, automatic information fusion and data mining of diverse resources, routing of information to users based on semantics, filtering of false alarms, and effective presentation of information to users. Another relevant area is augmented cognition. DARPA is currently funding the Improved Warfighter Information Intake Under Stress program, which seeks to enhance human performance in diverse, stressful, operational environments by developing a closed-loop computational system in which the computer adapts to the state of the human to determine information presentation.
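The publish-subscribe delivery mentioned above can be sketched in a few lines: subscribers register interest filters so that responders receive only relevant reports rather than the full, overwhelming feed. The broker class, topics, and severity threshold below are invented for illustration only.

```python
# Minimal publish/subscribe sketch (illustrative, not from the report):
# subscribers declare what they care about, and the broker delivers only
# matching events, reducing information overload.

from collections import defaultdict

class PubSubBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of (predicate, inbox)

    def subscribe(self, topic, predicate, inbox):
        self.subscribers[topic].append((predicate, inbox))

    def publish(self, topic, event):
        for predicate, inbox in self.subscribers[topic]:
            if predicate(event):
                inbox.append(event)

broker = PubSubBroker()
ems_inbox = []
# A hypothetical EMS unit only wants casualty reports of severity 3 or above.
broker.subscribe("casualty", lambda e: e["severity"] >= 3, ems_inbox)

broker.publish("casualty", {"location": "sector 4", "severity": 2})
broker.publish("casualty", {"location": "sector 7", "severity": 4})
print(len(ems_inbox))  # 1 -- only the high-severity report is delivered
```

Real systems layer reliability, security, and semantics on top of this core idea, but the filtering principle is the same.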
Data monitoring about situational variables, ranging from long-term
14. See D.M. Russell, M.J. Stefik, P. Pirolli, and S.K. Card, “The Cost Structure of Sensemaking,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Association for Computing Machinery Press, Amsterdam, The Netherlands, 1993, pp. 269-276.
15. Mark Stefik, “The New Sensemakers: The Next Thing Beyond Search Is Sensemaking,” Innovation Pipeline (a Red Herring newsletter), 2(10):13, December 2004.
variables (e.g., demographics) to midterm (e.g., status and location of stockpiles) to immediate (e.g., road and traffic conditions, weather), may be used to continuously update and adapt both mitigation and response plans to reflect local issues.
Finally, continuing advances in unmanned or robotic search and rescue capabilities could improve the ability to remotely obtain detailed information about casualties, infrastructure damage, and other matters critical to response in places that responders cannot reach quickly or at all. Advances in this area hold promise for improving the safety of responders, as well as the timeliness and effectiveness of response and recovery.
IMPROVED DECISION SUPPORT AND RESOURCE TRACKING AND ALLOCATION
Whereas situational awareness focuses on providing operators and decision makers with information relevant to their tasks and goals, decision support focuses on assisting them in formulating prospective actions, primarily by helping them understand and assess characteristics and consequences of alternative courses of action. Decision support is about explicitly recording candidate course(s) of action and generating, analyzing, and evaluating those alternatives. It also provides the means to monitor the effectiveness and progress of response activities. Lines between situational awareness and decision support tend to blur—decision support is dependent on understanding the situation, and decisions affect the subsequent situation, thus setting up a continuous feedback loop.
Some specific examples of how decision support systems might aid responders include the following: recommending on-the-fly decision evaluation, triggering “nagging” for decisions to be made and executed, tracking down the next alternate decision maker (an assistant chief, for example), tracking the data and underlying simulation models used to make decisions against the actual situation and presenting warnings when deviations from the model appear, and updating the response plan. Further examples include providing early-warning triggers to notify decision makers that time is running out to exercise an option and raising red flags when decisions need to be made or when execution lags.
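The early-warning and red-flag behaviors described above can be sketched as a simple watcher over a queue of pending decisions. The class, deadlines, and warning margin below are hypothetical, chosen only to make the idea concrete.

```python
# Hedged sketch of a decision "sentinel": flag undecided items whose deadline
# is approaching or past. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class PendingDecision:
    name: str
    deadline: float      # seconds from incident start
    decided: bool = False

def sentinel_check(decisions, now, warn_margin=300.0):
    """Return warnings for undecided items whose deadline is near or past."""
    warnings = []
    for d in decisions:
        if d.decided:
            continue
        remaining = d.deadline - now
        if remaining < 0:
            warnings.append(f"OVERDUE: {d.name}")
        elif remaining <= warn_margin:
            warnings.append(f"DECIDE SOON ({remaining:.0f}s left): {d.name}")
    return warnings

queue = [
    PendingDecision("order evacuation of zone B", deadline=3600),
    PendingDecision("request mutual aid", deadline=1800, decided=True),
]
print(sentinel_check(queue, now=3400))
# ['DECIDE SOON (200s left): order evacuation of zone B']
```

A deployed system would of course attach such checks to live incident data and escalate to alternate decision makers, but the core monitoring loop is this simple.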
One open research challenge is how to support decisions in the face of significant uncertainty. Example research topics include computer-assisted decision-making tools, resource use modeling, risk management in the face of uncertain data and outcomes, sentinel processes to automatically monitor processes, and technologies that support distributed emergency operations centers. Progress on uncertainty management could yield significant gains—perhaps eventually making it possible to take actions that come well within 90 percent of perfect hindsight.16
An especially sensitive aspect of decision support tools concerns taking humans out of the decision loop. Some decisions may be appropriately made using a rule-based decision-making process. This could offload routine decisions so that human decision makers can focus more attention on decisions requiring policy-based judgment. Determining which decisions can be rule-based and which policy-based, and under what circumstances, will require both research and an adoption strategy that allows disaster managers time to build trust and confidence in these systems.
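The split between rule-based and policy-based decisions can be sketched as follows: explicit rules handle routine requests automatically, and anything they do not cover is escalated to a human. The rule contents and request fields are invented for illustration only.

```python
# Hypothetical sketch: automate routine decisions with explicit rules and
# escalate judgment calls to a human. Rules and fields are invented examples.

def decide(request):
    """Return an automated decision, or None to escalate to a human."""
    rules = [
        # (predicate, automated outcome)
        (lambda r: r["type"] == "restock" and r["qty"] <= 100, "approve"),
        (lambda r: r.get("severity") == "minor" and r["type"] == "road_closure",
         "approve"),
    ]
    for predicate, outcome in rules:
        if predicate(request):
            return outcome
    return None  # policy/judgment required: route to a human decision maker

print(decide({"type": "restock", "qty": 40}))           # approve
print(decide({"type": "evacuation", "scope": "city"}))  # None -> escalate
```

The adoption question the text raises is exactly which predicates belong in such a rule table, and that answer must be earned through research, exercises, and operational trust rather than assumed.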
Decision support technology could also facilitate access to and collaboration with other organizations, such as mass media, the private business sector, the Army Corps of Engineers, public health and public works, as well as traditional first responder organizations.17 Indeed, the global character of current information and communications technology means that the decision support could come from qualified sources anywhere in the world.
The issue of scalability requires particular attention for decision support systems to be truly effective in a large-scale disaster. For disaster management, scalability of these systems is intertwined with the issue of the dynamics of the situation. IT has been implemented on a massive scale, resulting in impressive gains in efficiency and productivity. But the inherent chaos that arises in a disaster creates unique problems for realizing these gains in disaster management practice.
Closely linked to decision support is the topic of logistics management. The ability to monitor movements of personnel, goods, services, and victims; to recognize mismatches; and to trigger adaptive action (whether it be redirecting the patients or augmenting the facility) would greatly enhance disaster response management. Often, retrospective analyses of disaster responses effectively say, “Regardless of resource limitations, we didn’t make the best use of the resources that we did have.”
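The monitor-recognize-trigger loop described above can be made concrete with a small sketch. The facility names, capacities, and the redirect action below are invented assumptions, not data from any response.

```python
# Illustrative sketch (names and numbers invented): monitor movements against
# capacity, recognize mismatches, and trigger adaptive action such as
# redirecting patients away from an overloaded facility.

def check_mismatches(facilities):
    """Return adaptive actions where actual load exceeds planned capacity."""
    actions = []
    for name, info in facilities.items():
        overload = info["inbound"] - info["capacity"]
        if overload > 0:
            actions.append(f"redirect {overload} patients away from {name}")
    return actions

status = {
    "Memorial Hospital": {"capacity": 40, "inbound": 55},
    "County General":    {"capacity": 60, "inbound": 30},
}
print(check_mismatches(status))
# ['redirect 15 patients away from Memorial Hospital']
```

The hard parts in practice are the inputs, keeping capacity and movement data current during a chaotic event, rather than the comparison itself.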
Routing systems for trucking and airline industries are examples of
16. Laboratory results are already beginning to show early promise toward this possibility; e.g., see R.T. Maheswaran, C.M. Rogers, R. Sanchez, and P. Szekely, “Reward Estimation by Communicating Aggregated Profiles (RECAP): Distributed Coordination in Uncertain Multi-Agent Systems,” submitted to 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007.
17. For more perspective on collaboration and associated tools, see C.A. Bolstad and M.R. Endsley, “Choosing Team Collaboration Tools: Lessons Learned from Disaster Recovery Efforts,” Ergonomics in Design 13(4):7-13, 2005.
decision support systems that manage the reallocation of resources and give human operators feedback on potential options. These systems know the resources available and make adjustments based on changes in the status of those resources. They handle massive amounts of resources and processes in a highly efficient manner—they scale extremely well—as long as resources are added slowly and in an orderly manner and processes are predictable. They are optimized for bounded, stable situations. They handle problems by planning for a degree of reserve capacity, and they quickly become overwhelmed when that reserve capacity is exceeded. The problem of decision support for disaster management is that a disaster is inherently an out-of-bounds, unstable situation. (See Box 2.3 for a discussion of the limits of commercial logistics operational models.)
BOX 2.3 The Limits of Commercial Logistics Operational Models for Disaster Management A frequently expressed frustration with recent disaster management efforts concerns the inefficiency of logistics operations deploying resources (e.g., ice, water, trailers, medical supplies, generators) to affected areas. “If FedEx and UPS can do it, why can’t disaster managers?” is a common refrain. As performed today in disaster management practice, logistics activities tend to make very limited use of information technology (despite the models provided by leading-edge distribution companies). Recent investments by the Federal Emergency Management Agency have begun to bring its practices up to date with those of commercial systems, adding some tracking capabilities, though it appears that much more can be done.1 Expanding these investments will certainly improve logistics operations. Ideally, technologies like radio-frequency identification (RFID) tagging, Global Positioning System (GPS), and others used to manage massive supply chains like those managed by Wal-Mart and others could be pervasively implemented for disaster management. However, there are critical differences between commercial logistics operations and those required for disasters. These differences mean that even if state-of-the-art commercial logistics technologies are adopted, they will go only so far in addressing the logistics and resource management needs in a disaster. The importance of techniques for recognizing impending problems is not generally understood. Thus, the history of military and emergency supply chain management contains periodic calls for blindly adopting commercial methods. Rapid shipping companies such as FedEx are quite rightly lauded for the speed and efficiency of their operations. However, the underlying algorithms are designed not to ensure that all packages are delivered on time, or even that important packages are delivered on time. 
Priorities are handled through channeling into faster shipping methods, but the same algorithms apply. The underlying commercial algorithms determine the schedules and transportation resources needed to ensure that most packages will be delivered in timely fashion if the depots and distribution centers do not change and the situation stays within predicted bounds. The underlying algorithms further assume that priorities of packages are undifferentiated within broad categories (i.e., if you cannot deliver all packages in a priority class, it is |
acceptable to allow circumstances to determine which ones get through). They also tend to assume that packages’ priorities do not change once in the system and that destinations of packages do not change once in the system. If any of these assumptions is violated, commercial shipping organizations do the cost-effective thing: they apologize, remind customers of disclaimers in the shipping agreement, and perhaps offer a refund. The techniques used by commercial organizations are highly adapted to their purpose. They have achieved excellence by developing specialized logistics solutions, but to problems that are qualitatively different from those present during disasters. Ensuring delivery of critical items in disaster situations is a related problem—but it is not the same problem. Because it is a related problem, techniques that have been applied to the commercial problem are relevant—but recognizing that they are relevant is not the same as having actually adapted and applied them. Nor, because the problem is related but different, are those techniques sufficient. When lives depend on a shipment—deaths ensue if that shipment is late— the function, and thus the underlying algorithms, must be different. Disasters are by definition unstable, “out-of-bounds” situations. Logistics systems developed for them must be designed to handle a complex and evolving set of problems. Once problems with a plan or schedule are detected, the critical question is how to revise it to contain the ripple effects—which is essential to minimizing costs and time delays in adapting to the changed situation. Thus, a number of technical challenges differentiate the commercial problem from the disaster management problem:
Because of the inherent complexity of the number of issues and constraints that must be considered, humans are limited in their ability to handle these challenges unaided by technology. Instead, for ordinary situations, most approaches have focused on maintaining reserve capacity as a buffer. A large amount of work in operations research and other fields has focused on techniques for calculating what a safe reserve capacity should be. Disasters, by their very definition, exceed available resources and overtax the process of effectively allocating them. Thus, computing and maintaining safe reserve capacities, although highly necessary, is not sufficient. Intelligent adaptive planning technology that addresses the five challenges listed above is still largely in the research stages. Nevertheless, it is a critical capability. |
sion of differences between commercial logistics operations and logistics and resource tracking for disaster management.)
As performed today in disaster management practice, logistics, resource tracking, and allocation activities tend to make very limited use of information technology (despite the models provided by leading distribution companies). Consequently, any notification of problems is likely to be reactive rather than proactive (e.g., “We’re at the bridge but it’s down. … Weren’t they supposed to be here by now? … Why are there only 500 child-sized crutches when we asked for 5,000?”).
Problem notification and consequent situational awareness tend to be sequential and therefore subject to propagation delay during which parties can get out of synchronization, rather than having immediate and shared situational awareness. (An all-too-familiar conversation frequently starts with Destination asking Dispatch, “When are they coming?” and Dispatch replying, “They’re not there? I’ll find out what’s going on and get back to you.”). Problem resolution processes are unlikely to identify and simultaneously involve all affected parties, thus producing local instead of global solutions.
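The difference between sequential relay (Destination asks Dispatch, Dispatch asks the convoy) and immediate shared situational awareness can be sketched as a simple publish-subscribe pattern. The following Python fragment is purely illustrative; the channel, the parties, and the message are invented for the sketch.

```python
# Toy publish-subscribe channel: one publish notifies all interested
# parties at once, instead of relaying status reports sequentially.
class StatusChannel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, update):
        # Fan the update out to every subscriber simultaneously.
        for callback in self.subscribers:
            callback(update)

received = []
channel = StatusChannel()
# Destination and Dispatch both subscribe to the same stream of updates.
channel.subscribe(lambda u: received.append(("destination", u)))
channel.subscribe(lambda u: received.append(("dispatch", u)))

channel.publish("convoy 7 delayed: bridge out at mile 12")
```

Because both parties see the same update at the same time, the familiar “They’re not there? I’ll find out what’s going on” exchange never needs to occur.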
Existing technology makes it quite possible to do much better. With RFID tags, every item, box, pallet, and truck can know and report its relationship to the shipping manifest. With the Global Positioning System (GPS) and satellite connections, continuous position reporting is possible. Thus, knowing what you have, exactly where it is, and where it is going (even if the destination itself is a moving target) is all possible. Geographic information systems make it possible to record and display that status information, while simultaneously recording and displaying other status information such as weather conditions and transportation infrastructure condition.
It is easy to set alarms when planned directions of travel and rates of motion are not met, and it is feasible for users monitoring these systems to notice when displays indicate potential barriers to planned movements. When human operators notice problems, route-planning software can greatly facilitate rerouting the movements. Peer-to-peer file-sharing tools can ensure that all participants automatically receive updates of the information they need. Many such commercially available tools have encryption built in, securing shipment information in extreme situations where theft or diversion is a concern. Exercises such as Strong Angel have demonstrated the effectiveness of relatively crude measures to provide synchronization, even in situations where conventional communications have broken down (e.g., jeeps or unmanned aerial vehicles [UAVs] surveying an area and receiving and re-broadcasting update information on wireless network frequencies).18
18 |
See Strong Angel III at http://www.strongangel3.net. |
Thus, solutions exist today that would reduce error rates, promote proactive action in recognizing problems, and provide higher levels of shared situational awareness regarding those problems. Functions that remain problematic, however, include initial decisions about resource allocations, recognition of many problems in the execution of plans and schedules, and development of good repairs to the plans.
If these problematic issues are addressed, it is possible to do even better. There are nascent technologies in the research community that could make a significant impact. If nurtured and transitioned, these would produce far better initial resource allocations, identify a wider range of problems proactively (with significantly greater lead time for resolution), and help develop more effective and globally beneficial solutions to problems as they arise. Effective determination of needs requires combining pushes and pulls—interrelating, rather than stovepiping, intelligent forecasting of requirements with rapid aggregation of requests.
Simulation systems provide one useful tool for decision makers to test potential resource allocation and planning options in a virtual environment. They can provide a vehicle to promote understanding and dialogue on actions and issues related to the development of an effective preparedness and response plan, and serve as a forum and basis for mutual understanding between agencies and disaster management practitioners. Further advances in simulation environments promise to provide comprehensive modeling frameworks that integrate both inverse and forward points of view, applicable at multiple levels of analysis in diverse fields of study, in a structured manner. Computational architecture that is flexible, scalable, and adaptable promises the ability to create persistent virtual worlds for continuous replication, verification, validation, uncertainty quantification, and margin-of-error estimations.
Alternative recovery plans could be developed and tested in the context of simulations and risk models that allow plan effectiveness to be tested and that enable continuous adaptation of plans in light of available resources and past experience. Instrumentation and data collection are critical elements for learning from one disaster to amend management practices for future events. Data collected during response and recovery operations can also be used in post-disaster analyses to feed future mitigation efforts and to validate and improve models and simulations. Learning from instrumentation and post-incident analysis has proven invaluable in other fields, such as health (with today’s emphasis on evidence-based medicine), defense (where the military collects reports and other data from exercises and operations to develop lessons learned that are fed back into the development of both doctrine and weapons systems), and air transportation (where the National Transportation Safety Board investigates every civil aviation accident in the United States in order to improve the reliability and operational safety of airplane systems). Indeed, in most of these fields the incorporation of lessons learned is highly systematized to ensure timely implementation of changes in practices and systems.
Advances in high-performance computing are now demonstrating the ability to execute, log, and analyze discrete event simulations with literally millions of separate ongoing computational processes. These have been integrated into wargames extending over weeks, involving tens and hundreds of thousands of human personnel. This means that it is becoming technically feasible to run situation analysis systems for disasters that continuously operate on “best available understanding”—filling in missing information with simulations, models, and forecasts when necessary, replacing them with sensor data, situation reports, and incoming supply requests when available.
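The core of a discrete event simulation of the kind described above is a priority queue of future events ordered by simulated time. The following miniature sketch shows that core with a handful of invented events; a production system would, as the text notes, run millions of such processes concurrently.

```python
# Minimal discrete-event simulation core: events are processed in order
# of simulated time, regardless of the order in which they were scheduled.
# Event times and descriptions here are invented for illustration.
import heapq

def run_simulation(events):
    """events: iterable of (time, description) tuples.
    Returns the events in the order the simulation processes them."""
    queue = list(events)
    heapq.heapify(queue)          # order the pending events by time
    log = []
    while queue:
        time, description = heapq.heappop(queue)
        log.append((time, description))
    return log

timeline = run_simulation([
    (5.0, "shelter reaches capacity"),
    (1.5, "levee sensor reports breach"),
    (3.2, "first supply convoy departs"),
])
```

Real engines extend this loop so that handling one event can schedule new future events, which is what lets a simulation of a disaster unfold dynamically from an initial situation.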
Advances in computing power, as well as the development of new algorithms, also create the prospect of adaptive planning, scheduling, and resource allocation processes that continually fine-tune logistical support plans to the evolving situation. These algorithms are increasingly able to reason about managing the risk of a plan in the face of various kinds of uncertainty. One type of uncertainty concerns information quality. (How sure are we that the roads are passable?) Another type concerns the likelihood of success for potential courses of action. (Since we don’t know whether a convoy from the north will get through quickly, would it make sense to also route some of the supplies from the south or to send excess shipments that will be redirected if the others get through?)
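The convoy question in the preceding paragraph can be framed as a small expected-cost comparison. The probabilities and costs below are invented solely to make the reasoning concrete; real planners would estimate them from the evolving situation.

```python
# Illustrative expected-cost comparison: rely on the northern convoy
# alone, or hedge by also routing supplies from the south?
# All numbers are assumptions made up for this sketch.

def expected_cost(p_north_arrives, option):
    shortage_cost = 100.0   # assumed cost if supplies arrive too late
    extra_shipment = 20.0   # assumed cost of a redundant southern shipment
    if option == "north_only":
        # We pay the shortage cost only if the northern convoy fails.
        return (1 - p_north_arrives) * shortage_cost
    if option == "hedge":
        # Southern route assumed reliable; we always pay for the extra run.
        return extra_shipment
    raise ValueError(option)

p = 0.7  # assumed chance the northern convoy gets through quickly
best = min(["north_only", "hedge"], key=lambda o: expected_cost(p, o))
```

With these assumed numbers, hedging costs 20 against an expected shortage cost of 30, so the redundant shipment is worth sending; a small change in the probability would flip the answer, which is exactly why such reasoning must be continually recomputed.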
In addition, advances both in power and accessibility of high-performance computing combined with new algorithms are pointing the way toward automated solutions of increasingly larger planning and scheduling problems. Not only are the algorithms more efficient, but techniques such as “backdoors” and hybrid problem solvers have shown how large problems can be transformed into simpler and more manageable ones. “Backdoors” is a technique for identifying critical decisions within a large set of alternatives that, once made, reduce the number of other options that have to be considered without significantly reducing the likelihood that the best overall choices are still made. Hybrid problem solving is a related technique in which fast but “suboptimal” techniques are used to produce rough sketches of plans, which are examined for characteristics that can be used to restrict the options considered by slower, higher-quality algorithms.
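Hybrid problem solving can be illustrated in miniature with a toy routing problem: a fast greedy pass produces a rough plan whose cost then restricts the options a slower exhaustive pass must consider. The depot, sites, and travel times below are invented for the sketch.

```python
# Hybrid solving in miniature: greedy sketch first, exact search second,
# with the greedy cost used to prune the exact search. Travel times are
# invented for illustration; depot is node 0, delivery sites are 1-3.
from itertools import permutations

travel = {
    (0, 1): 4, (0, 2): 2, (0, 3): 7,
    (1, 2): 3, (1, 3): 1, (2, 3): 5,
}

def t(a, b):
    return travel[(min(a, b), max(a, b))]

def route_cost(order):
    stops = (0,) + tuple(order)
    return sum(t(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

def greedy():
    """Fast, possibly suboptimal pass: always visit the nearest site."""
    order, here, left = [], 0, {1, 2, 3}
    while left:
        here = min(left, key=lambda s: t(here, s))
        order.append(here)
        left.remove(here)
    return tuple(order)

bound = route_cost(greedy())
# Slower exact pass, considering only options no worse than the sketch.
best = min((o for o in permutations((1, 2, 3)) if route_cost(o) <= bound),
           key=route_cost)
```

On three sites the pruning is trivial, but the same pattern (cheap bound from a rough plan, then restricted high-quality search) is what makes large planning problems tractable.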
Taken together, these techniques offer a future in which much more efficient plans are generated on the basis of much more meaningful information. This ensures that responses are initiated with the best-tuned courses of action available at the time. The next challenge is to keep those responses tuned when the situation changes during the course of executing them.
In order to do better, the challenge is to provide individuation of shipped items. The first element of this, tracking the items, is increasingly easy (see above). The next challenge is to use the tracking information to trigger warnings that a shipping plan is going awry. As noted above, the technology is basically in place to do this by asking, “Is it where it should be at this time?” However, it is possible to do better than that. If a bridge on the trucking route is down, rerouting should start as soon as any part of the system knows that fact—rather than waiting for an overtaxed human to notice, much less waiting until the trucks get to the bridge.
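The “is it where it should be at this time?” check amounts to comparing a reported position against an interpolated schedule. The following sketch uses an invented waypoint plan and tolerance; a real system would draw both from the logistics plan and GPS feed.

```python
# Sketch of a schedule-deviation alarm: interpolate where a shipment
# should be at time t and flag it if the reported position lags too far.
# Waypoints, times, and the tolerance are invented for this sketch.

PLAN = [  # (hours since departure, expected mile marker)
    (0.0, 0.0), (1.0, 50.0), (2.0, 100.0),
]

def expected_mile(t):
    """Linear interpolation along the planned schedule."""
    for (t0, m0), (t1, m1) in zip(PLAN, PLAN[1:]):
        if t0 <= t <= t1:
            return m0 + (m1 - m0) * (t - t0) / (t1 - t0)
    return PLAN[-1][1]

def behind_schedule(t, reported_mile, tolerance=10.0):
    return expected_mile(t) - reported_mile > tolerance

alarm = behind_schedule(1.5, 40.0)  # expected ~75 miles, reported 40
```

The point of the surrounding discussion is that even this check is reactive: a truly proactive system would also consume road-status reports so that rerouting begins when the bridge goes down, not when the trucks fall behind.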
DARPA has funded related research in the areas of plan sentinels and mathematical approaches for estimating progress and the probability of success on goal structures. Plan sentinels extend a range of existing planning techniques that use explicit descriptions of goals and of methods for achieving those goals. Plan sentinels focus on capturing the assumptions underlying those methods and generate software that explicitly monitors data streams for evidence that the assumptions have been violated. For example, driving a truck from point A to point B is a method for reaching a location that assumes roads and bridges are intact, so plan sentinel software would generate code for monitoring data streams such as road-status and traffic reports. In contrast to plan sentinels, the mathematical approaches adapt techniques such as nonlinear filtering methods (previously used to track progress in geometric spaces) to track progress in symbolic spaces. These techniques generate probability distributions that can be used to assess the likely “speed and direction” of future progress on the plans.
Symbolic approaches like plan sentinels and mathematical techniques like those just described can make complementary contributions to the ability to predict problems proactively and to begin responding to them as early as possible.
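The plan-sentinel idea (each plan step carries explicit assumptions, and incoming reports are checked against them) can be sketched very simply. The plan steps, assumption identifiers, and report below are invented for illustration.

```python
# Toy plan sentinel: each step records the assumptions it depends on,
# so an incoming report of a violated assumption immediately identifies
# which steps need replanning. All names here are illustrative.

plan = [
    {"step": "drive route 12 to staging area",
     "assumes": {"route-12-bridge-intact", "route-12-passable"}},
    {"step": "distribute water at staging area",
     "assumes": {"staging-area-accessible"}},
]

def violated_steps(plan, report):
    """report: set of assumption IDs that a data stream says are now false."""
    return [s["step"] for s in plan if s["assumes"] & report]

# A traffic report arrives saying the bridge on route 12 is down.
needs_replanning = violated_steps(plan, {"route-12-bridge-intact"})
```

Replanning can thus begin the moment the report arrives, rather than when the trucks reach the bridge; the research systems described above generate such monitors automatically from the plan representation.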
GREATER ORGANIZATIONAL AGILITY FOR DISASTER MANAGEMENT
Disasters are varied, and no single organizational structure is necessarily best for dealing with the range of possible incidents. During disaster response, organizations must be able to form quickly and work well together across space and time, and they must be able to adapt and resize easily as the disaster develops. The 9/11 disaster expanded in scope and scale in a matter of hours even as the primary emergency operations center was lost within the World Trade Center complex. The National Response Plan and the National Incident Management System provide frameworks within which dynamic organizations can be formed, but they do not address the diversity of technology in different organizations, the lack of rapport, or the ability of organizations to quickly integrate operations. They also articulate an elaborate set of command and control systems that are inflexible and that work against the idea that agility and flexibility are at least as important as command and control hierarchy.
Simulation systems offer one avenue for training disaster management professionals, from first responders to disaster managers, to better anticipate problems and to become more flexible and adaptable to working across organizational boundaries, integrating operations, and adapting to different technologies. Simulation systems can simultaneously interface with and drive both planning systems and training systems.19 This would enable preparatory work to ensure the robustness of plans against multiple scenarios. It could also support training according to the plans and seamless transition into systems for response execution. Using detailed analysis of the response to past disasters, these simulations would be solidly grounded in reality, and their accuracy would increase with each incident.
Capturing lessons learned and making them available in a form useful to the broad community of disaster management is another area of potential for IT to support organizational learning. Learning from past experiences is an important element supporting the ongoing improvement of disaster management. Lessons learned can cover a wide range of topics—from experience with specific processes and organizational structures to the effectiveness of specific communication technologies or software systems. The Department of Homeland Security has established Lessons Learned Information Sharing (http://www.llis.gov) as a mechanism for disseminating lessons learned.20 It provides an illustration of how the Internet can be used to support dissemination of lessons learned. Nevertheless, lessons learned from previous disasters have not typically propagated quickly through the disaster management community.
Another organizational issue is management of the flow of personnel in and out of an incident area. Improved IT infrastructure for credentialing and identification checks—verifying who people are and whether they have the capabilities they claim—could improve the efficiency of response and recovery operations, such as dispatching medical workers, repair technicians, and other appropriate people into a disaster area. In the aftermath of Hurricane Katrina, much damage to information and communications
19 |
Several areas of potential opportunity to improve disaster management practice using simulation technology are drawn from a briefing to the committee by Alok Chaturvedi, director of Purdue University Homeland Security Institute. Presentation on September 20, 2005. |
20 |
Lessons Learned Information Sharing is a national network for emergency response providers and homeland security officials to share lessons learned and best practices. For more information, see https://www.llis.gov. |
infrastructure was fixable, or there were backups and alternatives. However, the lack of access because of the security ring around New Orleans and limited means for authorities to quickly validate people’s credentials meant that repairs could not be effected; generators ran out of fuel and alternatives could not be switched in. Similar problems involving physical access were cited during the response to 9/11.21 Recent credentialing efforts for emergency responders, including databases to keep track of volunteers, have been advancing and expanding to include telecommunications specialists, utilities workers, and other private-sector disaster response workers.
Authentication and credentialing constitute a complex topic, involving many technical and non-technical issues.22 A few example areas where further IT research might bear fruit include voice-print analysis in the network, fingerprint sensors on push-to-talk radio buttons,23 RFID tags (badges), and verbal authorization codes issued to first responders and other authorized personnel (e.g., city workers and volunteers) that are linked to the radios. In addition, back-end database and architectural considerations regarding the identity and credentialing management system as a whole merit attention. Currently, as with Internet purchase orders, identity authentication is accomplished through the verification of several independent pieces of information, not a single code or password. In cases of false negatives, lost tags, mismatches between tag and radio, and so on, one mitigation model is for the system to transfer the transaction to a dispatcher, who could decide whether sufficient evidence exists to authenticate the person. The entire realm of credentialing and identity management is clearly a fruitful research area.
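The idea of combining several independent pieces of evidence, with a handoff to a human dispatcher in borderline cases, can be sketched as a simple scoring scheme. The evidence types, weights, and thresholds below are invented; a real system would derive them from policy and measured error rates.

```python
# Sketch of multi-evidence authentication with a human-in-the-loop
# fallback. Evidence categories, weights, and thresholds are assumptions
# made up for this illustration.

WEIGHTS = {"rfid_badge": 2, "voice_print": 2, "verbal_code": 1, "roster_match": 1}

def authenticate(evidence, accept_at=4, refer_at=2):
    """evidence: set of independent checks that passed.
    Returns 'accept', 'refer-to-dispatcher', or 'reject'."""
    score = sum(WEIGHTS[e] for e in evidence)
    if score >= accept_at:
        return "accept"
    if score >= refer_at:
        # Borderline evidence: a dispatcher decides, as described above.
        return "refer-to-dispatcher"
    return "reject"

decision = authenticate({"rfid_badge", "verbal_code"})  # partial evidence
```

The design choice worth noting is that no single credential is decisive: a lost badge or a failed fingerprint read degrades the score rather than locking out a legitimate responder outright.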
Further advances are needed to improve understanding of how communication structures map onto organizational structure requirements. Dynamic authority mapping is one potentially useful tool.
21 |
See, for example, National Research Council, The Internet Under Crisis Conditions: Learning from September 11, The National Academies Press, Washington, D.C., 2003. |
22 |
More information about the technical, architectural, and policy challenges associated with large-scale identity and credentialing systems can be found in a report from the Computer Science and Telecommunications Board; see National Research Council, IDs—Not That Easy: Questions About Nationwide Identity Systems, National Academy Press, Washington, D.C., 2002. |
23 |
One potential drawback is that such a radio could not be used by others if the designated operator was disabled or otherwise not available. |
In the short term, information technology is likely to be applied chiefly to automate and accelerate traditional disaster management processes and practices. In the midterm to long term, however, increases in information-processing capacity offer the potential to transform those very processes by enabling innovation in organizational practices. Such IT-driven shifts have occurred in many sectors, with major organizational implications.24 Although these transformations are not entirely predictable, the empirical evidence suggests a number of possibilities.
The military’s hierarchical command chain is one well-known organizational model for managing extremely complex, distributed activities. It reflects the development over many years of a clear sense of what to centralize and what to decentralize. In recent years, the military has undertaken significant, IT-enabled revisions to doctrine based on the idea of providing more information to individual units or warfighters and enabling increasingly distributed network-centric operations. The committee believes that a similar analysis and evolution of doctrine that takes into account the unique characteristics of disaster management (such as diverse actors and jurisdictions in a federal system and the important role of private-sector organizations) as well as new technological capabilities are also needed.
One such possible shift would be from information-centered architectures updated in batches (e.g., “reports”) toward distributed processing of continual messaging-streams fed by pervasive sensors providing real-time situational awareness data, with different users detecting trends and transitions according to their local requirements. There could be a corresponding shift from specialized management systems that are activated for disasters and deactivated afterward to an “always-on” state of activation that varies more in scale than in nature as events occur. A move should also be possible from command models of resource management toward negotiated “brokerage” approaches working with current models of the best actions that can be taken with available resources. Another possibility is the ability to reach a mature compromise between the dual extremes of rigid bureaucracy and all-to-all interoperability toward a “managed ad-hoc-racy” of disaster management and responder organizations that can evolve seamlessly and continuously over the entire course of a disaster. Finally, a role-based concept of individual and unit identity could reduce the significance of jurisdictional, disciplinary, and even official/civilian distinctions. The cumulative effect of these changes could be a shift from a mechanical focus on preserving and restoring the status quo ante toward a flexible strategy of resilience and adaptability to the dynamics and inherent complexities of disasters.
A common organizational theme is the strong tension between central authorities, which want to assert hierarchical control over disasters, and the distributed nature of most disasters. Authorities may want to be seen as “in charge,” even though most of the actual work in disasters results from the many less coordinated and distributed actions of individuals. Responders typically bring tremendous energy to the scene. One response is to put someone in charge to channel (and bind) this energy; another is to let the energy emerge and then harvest it. Organizations have a hierarchical comfort zone that has driven them to the former response, but disasters are also accompanied by the rapid development of emergent multiorganizational networks.25
These networks form the locus for collective sensemaking and organizational learning under conditions where ambiguity and uncertainty are an inherent part of the environment.26 Information technology could support emergent networks by helping them deal more effectively with unpredictable information sources, lowering barriers to information flow, making organizational boundaries more permeable, and easing coordination between diverse and distributed actors. For example, a number of emergent groups, existing in entirely virtual space and formed using the Internet and technology such as blogs and wikis, performed important services during Hurricane Katrina. Research to find more systematic ways to leverage such technology may yield new means for supporting emergent networks.
While this report is primarily limited to the IT aspect of disaster management and did not specifically look at problems from an organizational theory or management theory point of view, the potential contributions that management and organizational science can make to better understanding the situation in command centers and to improving other aspects of disaster management are undoubtedly significant. Moreover, as research progresses in these areas, analysis both of the impacts that greater use of IT may have and of how IT can help ameliorate other problems will be useful. For instance, organizational research on collective sensemaking could help focus IT research on how to address confusion that stems from ambiguous information rather than simply finding better ways to reduce ignorance arising out of uncertainty by increasing the quantity and quality of information.27
25 |
Kathleen Tierney and Joseph Trainor, “Networks and Resilience in the World Trade Center Disaster,” Research Progress and Accomplishments 2003–2004, Multidisciplinary Center for Earthquake Engineering Research, University at Buffalo, State University of New York, May 2004, pp. 157-172. |
26 |
K.E. Weick, Sensemaking in Organizations, Sage Publications, Newbury Park, Calif., 1995. |
27 |
Ibid., pp. 185-187. |
Another common theme, and one that came up frequently in testimony to the committee, is the issue of building trust among the various actors involved in responding to a disaster. In a disaster situation, cooperation is often required between people who are strangers with no existing trust relationship. Thus, approaches and supporting technologies are needed that aid coordination and structure across people and organizations that have little mutual trust but some common goals. IT’s ability to enhance organizational agility in disaster management may be limited by its inability to overcome the barriers to working in an environment of limited trust. Applying existing understanding of the relationship between trust and technology, and extending that knowledge through further research, will be critical to advancing organizational agility.
BETTER ENGAGEMENT OF THE PUBLIC
Better engagement of the public through improved use of IT involves two distinct aspects: (1) the use of warning systems and broadcast alerts to inform the public of actions that they should take to protect themselves and their property, and (2) the ability to leverage the public as providers of information and sources of valuable technology tools. The potential to improve the use of IT in both areas is substantial, although the second will also require considerable shifts in culture among public safety and emergency management professionals.
Alerting and Warning Systems
Improving warning systems for various types of disasters has received considerable attention, especially in the aftermath of the 2004 Indian Ocean tsunami.28 Effective warnings save lives, reduce damage, and speed recovery. Warnings are most effective under the following circumstances:
- They are accurate and result in appropriate action.
- Any probabilistic aspects (e.g., likely hurricane landfall probabilities) are clearly communicated.
- They are standard, consistent, and easily understood.
- They are delivered to just the people at risk and in a timely manner.
- They are delivered through a variety of mechanisms to achieve maximal reach.
Technology has greatly improved forecasters’ ability to make accurate predictions about natural disasters. Public education has improved the actions that people take in response to warnings, and experience and policy changes have made authorities better at communicating consistent, clear messages. Further improvements are possible in all of these areas, especially through broader deployment of sensor systems and further advances in sensor technology. But there is now a significant gap between how warnings are delivered and what is possible with existing technology.
Forecasting and sensing technology has made it possible for siren-based warnings to be issued minutes to seconds before the onset of a disaster where previously no warning was possible, and future advances may improve detection and lengthen warning times (earthquake detection is one such promising area). Where the time for delivering alerts is still very short, sirens can be highly effective because they can be activated rapidly and heard broadly. Sirens require people to know what action to take (e.g., for a tornado, find shelter, underground if possible); public education and drills are used to instill this knowledge. State-of-the-art siren technology can now include a public address capability, allowing more specific information to be communicated. However, sirens are inherently outdoor systems. They are valuable because they may reach people who would otherwise receive no warning. They will continue to play an important role in alerting the public, and technology advances can make them even more effective.
Warning systems using information and communications networks can be significantly upgraded using existing and emerging technologies. The Congressional Research Service report Emergency Communications: The Emergency Alert System (EAS) and All-Hazard Warnings describes a number of government efforts to develop a digital warning system, including the ongoing pilot projects of the Federal Emergency Management Agency, the Information Analysis and Infrastructure Protection directorate at DHS, and the Association of Public Television Stations to develop an integrated public alert and warning system.29
A Presidential Executive Order of June 26, 2006, aims to establish an integrated alert and warning system and to “establish or adopt, as appropriate, common alerting and warning protocols.”30 Revamping the Emergency Alert System (EAS) was also a finding and recommendation in the Federal Communications Commission (FCC) independent study of communications during Hurricane Katrina31 and in the White House report on Katrina.32 These reports note the potential of new technologies (satellite, cellular, pagers, Internet, wireless) to send more targeted messages. As the FCC review of EAS notes, “wireless products are becoming an equal to television and radio as an avenue to reach the American public quickly and efficiently.”33
29 |
Linda Moore and Shawn Reese, Emergency Communications: The Emergency Alert System (EAS) and All-Hazard Warnings, Congressional Research Service (CRS) Report for Congress (RL32527), CRS, Washington, D.C., 2005. |
30 |
George W. Bush, Executive Order: Public Alert and Warning System, 2006; available at http://www.whitehouse.gov/news/releases/2006/06/20060626.html. |
One common technological denominator in recent efforts has been the Common Alerting Protocol (CAP), a warning format standard developed by emergency managers, promoted by the Partnership for Public Warning, and codified by the Organization for the Advancement of Structured Information Standards (OASIS) standards organization. CAP has been used in most of the major warning system prototypes in recent years and features prominently in the FCC proceedings on the future of EAS.
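To make the standard concrete, the following is a minimal sketch of constructing a CAP 1.2 message using only Python's standard library. The element names (identifier, sender, msgType, scope, info, area, areaDesc, and so on) and the namespace come from the OASIS CAP 1.2 specification; all of the field values shown are illustrative, not drawn from any real alert.

```python
# Sketch: building a minimal Common Alerting Protocol (CAP) 1.2 alert.
# Element names follow the OASIS CAP 1.2 schema; values are illustrative.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def q(tag: str) -> str:
    """Qualify a tag name with the CAP 1.2 namespace."""
    return f"{{{CAP_NS}}}{tag}"

def build_alert(identifier, sender, sent, event, headline, area_desc):
    ET.register_namespace("", CAP_NS)  # serialize without a prefix
    alert = ET.Element(q("alert"))
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        ET.SubElement(alert, q(tag)).text = text
    info = ET.SubElement(alert, q("info"))
    for tag, text in [("category", "Met"), ("event", event),
                      ("urgency", "Immediate"), ("severity", "Severe"),
                      ("certainty", "Observed"), ("headline", headline)]:
        ET.SubElement(info, q(tag)).text = text
    area = ET.SubElement(info, q("area"))
    ET.SubElement(area, q("areaDesc")).text = area_desc
    return ET.tostring(alert, encoding="unicode")

xml_text = build_alert("example-001", "alerts@example.gov",
                       "2006-06-26T12:00:00-05:00", "Tornado Warning",
                       "Tornado warning for Example County",
                       "Example County")
```

Because the format is a simple, open XML schema, the same message can be generated once and fanned out over radio, television, cellular, and Internet channels—the interoperability that makes CAP attractive for an integrated warning system.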
Communication with the Entire Affected Population
The importance and challenge of reaching the entire affected population—including all social and socioeconomic groups, people with disabilities, the elderly, and other special-needs groups—before, during, and after a disaster strikes were highlighted by Hurricane Katrina. More generally, reaching these groups is among the most significant issues in engaging the public to ensure its own survival and recovery during disasters. Hurricane Katrina tragically demonstrated the error of assuming that better communications alone guarantee effective public engagement.34 It also served as a reminder that access to and familiarity with information technology are not universal. Further, the ability to act on available information, even when it is accessible, may be limited.
31 |
See Federal Communications Commission (FCC), “Review of the Emergency Alert System, First Report and Order and Further Notice of Proposed Rulemaking,” FCC 05-191, Washington, D.C., November 2005; and FCC, Recommendations of the Independent Panel Reviewing the Impact of Hurricane Katrina on Communications Networks, FCC 06-83, Washington, D.C., June 2006. |
32 |
See The White House, The Federal Response to Hurricane Katrina: Lessons Learned, February 2006, pp. 83, 109-110. |
33 |
FCC, “Review of the Emergency Alert System: First Report and Order and Further Notice of Proposed Rulemaking” (EB Docket No. 04-296), FCC, Washington, D.C., p. 29; available at http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-05-191A1.pdf. |
34 |
National Research Council, Facing Hazards and Disasters: Understanding Human Dimensions, The National Academies Press, Washington, D.C., 2006, p. 68. |
Risk Communication
Understanding what information should be provided to the public before, during, and after a disaster falls under the general topic of risk communications, which an earlier National Research Council report defines as “an interactive process of exchange of information and opinions among individuals, groups, and institutions [and] a dialog involving multiple messages that express concerns, opinions, or reactions to risk messages or to legal and institutional arrangements for risk management.”35 Insights from such work can be used to inform future efforts to apply IT for improved risk communication.
The Public as an Information and Technology Resource
While warning and alerting systems are important, their perspective is that of the center talking out to the masses. The committee finds vast potential to further engage the public by changing this perspective to embrace two-way communication between authorities and the public. Interactions with the public are an important part of disaster management, yet they have received relatively little attention. Changes in the technology available to the public mean that there are not only new ways to reach people with warnings but also new ways for the public to gather and communicate information. One typically thinks of the public playing the passive role of information receiver in a disaster; thus far, people have been engaged only marginally and conditionally as sources of information, chiefly those with valuable technology in critical or otherwise unreachable locations (e.g., certified amateur radio operators). The now-ubiquitous 911 emergency calling system—a simple mechanism for allowing the public to report information—is one example of how responders can obtain useful information from the public.
Yet there is potential for the public to play a much larger role during a disaster, and information technology is increasingly making it possible to engage the public in a variety of ways. Civilians—people on the street—are nearly always the very first on the scene of a disaster, especially in situations with little or no warning. Collectively they have a richer view of at least a small portion of a disaster situation than is available from within an emergency operations center. Even 10 years ago most people carried little or no technology with them. Today, much of the public has sophisticated mobile communication and sensing capability. Camera phones, wikis, the Web, and text messaging are all increasingly available to people on the scene. Harnessing these sources of valuable information holds great promise for providing critical information to disaster managers, especially in the initial response stages.
Victims can be transformed into actively engaged responders if given meaningful and appropriate means to participate. They can be recruited to assist in disaster response and kept informed about how and when to act appropriately. Furthermore, they can offer critical redundant IT resources when traditional resources are impaired by a disaster. IT mechanisms that link disaster response agencies' information systems to interactive public communications channels (e.g., the Internet, wireless communication) could support both information gathering and dissemination, easing the overload that agencies face from affected populations seeking situational information while also allowing those populations to contribute local information.
The redundancy of this approach would also improve the reliability of communications, with attendant gains in performance and in the public's understanding of appropriate actions. Valid concerns about the trustworthiness of information have inhibited major steps toward incorporating these types of changes more fully. Yet the growing amount and quality of technology carried by individuals, together with continuing advances in filtering, qualifying, and analyzing information of uncertain quality, mean that a major opportunity to make better use of IT may be missed—especially given the limited resources available to public officials.
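One simple way to qualify information of uncertain quality is probabilistic aggregation: weight each public report by a prior reliability for its source, and let independent corroborating reports raise combined confidence. The sketch below is purely hypothetical—the location keys, reliability values, and threshold are invented for illustration, and a fielded system would need far more sophisticated source modeling.

```python
# Hypothetical sketch: scoring public reports by source reliability and
# corroboration before surfacing them to responders. All values invented.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    location: str        # coarse location key (e.g., a grid cell)
    observation: str     # normalized event type, e.g., "flooding"
    reliability: float   # prior trust in the source, in [0.0, 1.0]

def score_reports(reports, threshold=0.8):
    """Flag (location, observation) pairs whose combined evidence from
    independent reports meets or exceeds the threshold."""
    evidence = defaultdict(float)
    for r in reports:
        # Treat reports as independent: combined confidence after each
        # new report p is 1 - (1 - prior) * (1 - p).
        key = (r.location, r.observation)
        evidence[key] = 1 - (1 - evidence[key]) * (1 - r.reliability)
    return {k: v for k, v in evidence.items() if v >= threshold}

reports = [
    Report("cell-14", "flooding", 0.6),
    Report("cell-14", "flooding", 0.7),
    Report("cell-22", "fire", 0.3),
]
# Two moderately reliable flooding reports corroborate each other
# (combined evidence about 0.88); the lone low-reliability fire report
# stays below the threshold.
confirmed = score_reports(reports)
```

The design choice here—requiring corroboration before low-trust reports are surfaced—reflects the tension described above: it guards against untrustworthy input without discarding the public's collective view entirely.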
The Strong Angel exercises have explored “techniques and technologies that support the principle of resilience within a community” with the objective of effectively tapping “the expertise and creativity within an affected community, including through public-private partnerships. A second overarching objective is the development of social tools and techniques that encourage collaborative cooperation between responders and the population they serve during post-disaster reconstruction.”36
Cybercitizen networks also hold promise as an important new element of response similar to, but on a much larger scale than, the role filled by amateur radio operators. IT can help facilitate these interactions. In the near term it is possible to use or adapt existing technologies to create validated online information sources, to deploy multimodal public reporting sites, and to build directories of private resources to facilitate
36 |
See Strong Angel III at http://www.strongangel3.net. |
deployment. Research is needed into technologies for dynamic capability profiling and credentialing, semantic routing, and filtering of multimodal public input, as well as into optimizing data formatting for diverse terminal devices.
ENHANCED INFRASTRUCTURE SURVIVABILITY AND CONTINUITY OF SOCIETAL FUNCTIONS
Disasters inherently cause disparate communities, infrastructures, and organizations to interact (and fail to interact) in unanticipated ways. Hurricane Katrina is an obvious example, having displaced much of an entire metropolitan region and dispersed residents across the country. But smaller-scale disasters also disrupt societal functions. Families and friends are separated. Families need access to housing, schools for children, and social services. People lose access to medical facilities and medical records. Jobs are lost. Important cultural institutions are disrupted. There is considerable opportunity to use IT better to reconnect people, to provide a temporary bridge that maintains relationships and interactions, and to speed the restoration of societal functions. Hurricane Katrina demonstrated that people can come together, using the Internet and other information and communications technologies, to apply knowledge, skills, and technology to have a positive impact on the lives of those affected by a disaster.37
Another well-publicized example of an emergent use of IT was the creation of wikis38 that enabled volunteers to connect with victims. Some of the functions wikis served in the aftermath of Katrina were listing helpline numbers, posting offers of temporary shelter, identifying where and how to make donations, serving as a clearinghouse for identifying government resources, offering health and safety information, sharing advice and experience on relocation, publicizing fund-raising events, providing information about lost and found pets, reconnecting families, and posting help-needed notifications. The emergent behavior exhibited by these wikis is one of their great strengths—rising to meet an unanticipated and highly diffuse need. They are highly flexible and adaptable and
37 |
For example, see Keith Axline, “Craigslist Versus Katrina,” Wired News; available at http://www.wired.com/news/planet/0,2782,68720,00.html. |
38 |
According to Wikipedia, “a wiki is a type of website that allows users to easily add, remove, or otherwise edit and change some available content, sometimes without the need for registration. This ease of interaction and operation makes a wiki an effective tool for collaborative authoring. The term wiki can also refer to the collaborative software itself (wiki engine) that facilitates the operation of such a website.” See http://en.wikipedia.org/wiki/wiki. |
demonstrate just one way that the emergence and ongoing advance of collaborative tools could contribute to disaster management. The challenge for disaster managers is to leverage the power of these emergent uses of IT—and support research and development to advance their application to disaster management—without destroying their inherent flexibility and adaptability.
In addition to helping people directly, IT can be used to improve the survivability of critical infrastructure—another major factor in speeding recovery and restoring societal functions. The committee heard numerous possibilities for advancing IT and extending its applications to improve the resilience and management of critical infrastructure systems, such as the electric grid, water, transportation, housing, and health care. Because these systems are usually designed and operated by independent entities over long periods of time, the interdependencies among them are often not well understood and are rarely engineered proactively for resilience. The structural couplings between these systems may also manifest themselves across a wide range of spatial and temporal scales, making them difficult to quantify. Solving these problems requires different jurisdictions—cities, counties, states—to work closely with each other and with federal agencies. The restoration of New Orleans, for example, is widely understood to require concerted rebuilding across government agencies, public safety organizations, businesses, and public utilities. Simply clearing damage, processing insurance claims and other compensation, and rebuilding residential, government, and commercial infrastructure constitute a very complex, multifaceted process that is likely to take years to complete.
Advances in IT can revolutionize other technical disciplines, with direct and indirect implications for advances in disaster management. A salient example is the entirely new class of monitoring and control capabilities made available to civil and mechanical engineers by small sensors, microprocessors, and wireless communication devices. Many applications require deploying sensors on a wide scale—a capability that is starting to emerge from research into distributed sensor networks.
When terrorists attacked the World Trade Center in New York in September 2001, thousands of occupants of those structures had an hour to escape, but delays in assessing the structures' integrity stole crucial minutes from the evacuation and contributed to the loss of thousands of lives. Today it is possible to instrument such structures with sensors and wireless connections so that the changing forces within them can be recognized and evaluated almost instantly.
On a larger scale, ubiquitous monitoring of the condition and utilization of highways could trim minutes or even hours from responders' travel times by routing them around damaged or clogged routes. Supervisory control and data acquisition (SCADA) systems for water, gas, and other utilities are being enhanced to provide detailed analysis of damage due to earthquakes or explosions, enabling system operators to speed restoration and minimize peripheral disruptions of service.
The benefits of comprehensive monitoring and management of engineered systems can extend beyond those systems' own boundaries—for example, in managing interactions between the power grid and the communications networks that rely on it. This underscores the importance not only of collecting system-specific data but also of normalizing and exchanging real-time assessment data between systems.
Buildings, roads, and other constructed infrastructure exhibit significant resilience and robustness in the face of disaster. However, infrastructure that appears to be intact may in fact have been severely damaged in ways that are not readily apparent. For example, in the wake of an earthquake, a building might be on the verge of collapse or a bridge might be ready to fail after even the smallest aftershock. By making hidden damage more apparent, sensors combined with information technology can enhance response and recovery operations by reducing uncertainty about the state of infrastructure.
Continuous monitoring and analysis of critical infrastructure could be enabled by developing new instrumentation capabilities. These would allow sensor information from buildings, bridges, and infrastructure systems—for example, roads and water, gas, sewer, communications, and power systems—to be routed to monitoring locations, providing responders with information about the robustness and safety of the affected infrastructure. As in other areas, power supplies independent of the electric grid are a critical issue that must be addressed to extend sensor capabilities.
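As a purely illustrative sketch of the kind of analysis such instrumentation could support—the sensor names, baseline frequencies, and 5 percent threshold below are all hypothetical—a monitoring service might compare a structure's measured vibration frequencies against a pre-disaster baseline, since a sustained drop in natural frequency can indicate a loss of stiffness and thus hidden damage:

```python
# Hypothetical sketch: flagging possible hidden structural damage by
# comparing measured natural (vibration) frequencies to a pre-disaster
# baseline. Sensor IDs, values, and the 5% threshold are illustrative.
def assess(baseline_hz, measured_hz, drop_threshold=0.05):
    """Return 'inspect' for any sensor whose natural frequency has
    dropped by at least drop_threshold relative to baseline, else 'ok'."""
    flags = {}
    for sensor, base in baseline_hz.items():
        drop = (base - measured_hz[sensor]) / base
        flags[sensor] = "inspect" if drop >= drop_threshold else "ok"
    return flags

baseline = {"span-1": 2.40, "span-2": 2.35}  # Hz, from pre-event surveys
measured = {"span-1": 2.38, "span-2": 2.10}  # Hz, after an earthquake
flags = assess(baseline, measured)
```

Routing such flags to an emergency operations center would let responders prioritize inspection of structures whose damage is not visible from the outside, directly reducing the uncertainty described above.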