Improving Acquisition and Adoption of IT for Disaster Management
This chapter focuses on information technology (IT) acquisition and adoption issues confronting the various federal, state, and local agencies and private organizations (hereinafter called disaster management organizations) that have official responsibility for disaster management. It does not explore the complex issues of IT acquisition or adoption by individuals or private firms for use in disasters; however, it does briefly consider opportunities for leveraging IT systems and services of private-sector firms, citizens, and non-governmental organizations. The chapter starts by considering some of the key barriers to more effective use of IT in disaster management. It then discusses some best practices and design principles that would help address these barriers. It concludes with a discussion of roadmapping as a technique for guiding overall investment in research and development and a discussion of multidisciplinary centers as a way of better coupling IT research and practice.
OVERVIEW OF NON-TECHNICAL BARRIERS
Many sectors, such as banking, manufacturing, and services, have been able to adopt new information technologies routinely and aggressively. Some disaster management organizations have also been quite effective in integrating state-of-the-art IT into their day-to-day operations (e.g., the use of Internet Protocol [IP]-based emergency management tools, the use of cell phones to listen in on first responder land mobile radio traffic, and the use of laptops and wireless local area networks). However,
in the committee’s view, the disaster management community has not been nearly as broadly successful.
The following are among the complicating factors:
Disaster management organizations often lack the resources to acquire valuable capabilities. Responsibility for disaster management is widely distributed among agencies and organizations at all levels of government— with resources and operational responsibilities mainly concentrated at the local level. These organizations have vastly different technologies and capabilities. These characteristics lead to highly scattered adoption and lengthy adoption cycles and a highly fragmented market for disaster management IT. Moreover, many of the organizations are small and have very constrained budgets for IT. Most acquisition resources are focused on capabilities to improve day-to-day operations, whereas disaster management is, by definition, not a routine activity. Some of what agencies do acquire specifically for disaster incidents nonetheless becomes “shelf-ware”—unused even when the need for which it was acquired arises.
Both the development and the deployment of many promising technologies are risky and costly compared with the opportunity presented by the commercial market for these technologies today. For example, there are sensors that would be very useful for assessing in real time the status of the built environment. However, developing and manufacturing such sensors for the uncertain and highly cost-constrained disaster management market do not constitute an attractive commercial opportunity at this time.
In most agencies with disaster management responsibilities, there is no one who is charged specifically with tracking IT technology, identifying promising technologies, integrating them into operations, or interacting with IT vendors to make sure that needs are addressed. Many organizations are too small to grow and support significant in-house expertise, and they naturally look to vendors to provide turn-key solutions, which may mean that the organization’s long-term, broad needs are not fully met. Long intervals occur between acquisitions, with the result that any institutional learning that does occur is likely lost in the interim. The acquisition dynamics created by this situation tend to limit the potential market, leading IT vendors to adapt IT technologies only slowly for use in disaster management. There is no focal point for addressing these issues at the federal level, further contributing to the problem. Finally, the complexity of IT systems and the organizational changes that they introduce are often met with resistance and ambivalence by both managers and users, especially in the absence of a technology “champion.”
Decisions regarding IT tend to be made independently by local organizations that must work together in disasters. Organizations with disaster management responsibilities are typically highly independent and have limited regular contact with one another. However, these organizations find themselves having to collaborate in disasters, giving rise to interoperability issues at many levels. State and federal organizations charged with disaster management face similar coordination challenges, further complicating collaboration in responding to a disaster. Acquisition managers concerned about collaboration typically have no place to go to determine whether the technologies they are acquiring will interoperate with those of their peers. Further, no mechanism exists for them to synchronize technology acquisitions in order to make them compatible. The recent establishment of regional groups to address IT and related disaster management issues is a promising development.
Disaster management is concerned with environments that are intrinsically uncertain and unstable. This contrasts with the typical IT acquisition environment, where development, deployment, operation, and maintenance take place in fairly well understood and stable environments and where requirements are better understood.
Important sources of funds typically become available only once a disaster has been declared and must also be spent in a short window of time. Funds tend to become available in much greater quantities in the period immediately following a disaster declaration. Experienced emergency managers are well aware of this recurrent "window of opportunity" effect, and many of them keep IT and communications projects in draft, ready to proceed as soon as a disaster redirects attention and money to their needs. However, these purchases are naturally driven by immediate concerns rather than longer-term considerations.
One conclusion (overly pessimistic in the committee’s view) given these barriers would be that advanced IT solutions are impractical for most local governments and emergency management agencies. Such a view assumes that the existing problems are insurmountable, whereas the committee believes that many of these problems can be mitigated if best practices and principles are followed and if appropriate mechanisms are put in place to support their adoption, such as the research centers that couple technology advancement with practice and community-wide technology roadmapping.
Another related potential misreading of the challenge is that technology that is “advanced” or “leading-edge” is necessarily more complex— and is thus unsuited for organizations without considerable in-house technology expertise. In fact, some trends in information technology are in exactly the opposite direction, with advances aimed at reduced complexity from the standpoint of those acquiring, managing, or using the technology. A reflexive avoidance of advanced technology and new
developments could thus counterproductively translate into a failure to adopt systems that are more robust, reliable, and usable.
BEST PRACTICES FOR ACQUISITION
Best practices for acquisition include an emphasis on iterative development; increased opportunities to test and evaluate technology in practice, together with realistic concepts of operations; and design and evaluation processes that allow for strong coupling among practitioners, researchers, and industry.
From Waterfall Acquisition to Iterative Development
Historically, as in many other areas, the introduction of technology in disaster management has been characterized by a series of major deployments, occurring at intervals sometimes measured in years or even decades. These long cycle times reflect in part the traditional “waterfall” acquisition process. This acquisition model presumes a linear development process that proceeds in stages from development of a comprehensive requirements specification to design, then to implementation followed by integration, next to testing, then to installation, and finally to maintenance. Modified versions of the model acknowledge some role for feedback between each of these stages and preceding ones.1 They also mirror the typical capital planning cycles of federal, state, and local government and agencies, which have traditionally made periodic, large investments in new systems and capabilities.
Long acquisition cycles are well known to make it hard to incorporate rapid technological change. The doubling of various measures of computing performance every 1 to 2 years places an obvious premium on processes that can more rapidly incorporate new technology. Moreover, this linear process that periodically seeks to produce the solution often fails to deliver the expected capabilities. Requirements creep may end up making the ultimate design overly cumbersome, complex, or costly to implement, leading to cost overruns, delays, and even program cancellation. Users, who only have input to the front end of the process, may find that the delivered capabilities do not meet their needs.
Also, new capabilities and technology opportunities that arise after
the system development leaves the initial requirements stage are difficult and expensive to incorporate. The reason is that many artifacts of a system grow organically. The practical reality is that large systems emerge from incremental additions in ways entirely unanticipated by the designers of the original system. If the original system is successful, users will almost certainly want to add new functionality. The new functionality desired is by definition unanticipated—if the designers had known it would be useful, they would have included it in the first place.
Indeed, it is essentially impossible in practice for even the most operationally experienced IT systems developers to anticipate in detail and in advance all of a system's requirements and specifications. Often users change their minds about the features they want, or (even more difficult to deal with) they want contradictory features. And, of course, it is difficult indeed to anticipate all potential uses. Thus, system requirements and specifications are inherently incomplete, even though they underlie and drive the relationships among various components of the system. Put differently, the paradox is that successful system development requires non-trivial understanding of the entire system in its ultimate form before the system can be successfully developed. System designers need experience to understand the implications of their design choices. But experience can be gained only by making mistakes, learning from them, and having a mechanism to modify and evolve systems over time as the understanding of both user and designer grows and as requirements and technology evolve.
For these reasons, development methodologies have been developed that presume an iterative approach to building systems. An iterative process uses multiple, short acquisition cycles, which over time deliver and improve on system capabilities. Such a process encourages feedback from users and allows them to play a constructive and central role in a system’s evolution. An iterative process requires, among other things, mechanisms for users to provide feedback to technology innovators and providers. (The committee discusses some possible mechanisms for supporting this process later in this chapter.)
With iterative development, systems that initially include limited functionality are often introduced. As users adopt the technology, they have a mechanism for identifying improvements to that functionality and for identifying desirable new features that technology providers can incorporate into the new product versions. The progression of mobile phone functionality to incrementally include increasingly greater performance and a wider range of features is a familiar example of this process.
An iterative acquisition process has other advantages. Often requirements thought to be essential turn out to be relatively unimportant or little used once deployed. The functionality supporting those require-
ments can be dropped from future product versions, helping minimize complexity creep. Essential features frequently go unidentified until the system begins to be widely used. These features can be added in a more orderly fashion, evolving the system with continuing feedback from users. Incremental introduction of technology also allows one to exploit the current technology “sweet spot”—where the costs of components such as microprocessors are lowest—keeping down costs and making more frequent acquisition cycles possible.
In disaster management a tension inevitably arises between the natural desire to fully meet demanding or seemingly unique requirements and the cost and speed of development and deployment. Disaster management professionals often say that they must be able to depend "absolutely" on the technology they employ, noting the life-or-death nature of their work. An iterative process gives users time to build trust in the system's ability to deliver on those critical requirements, along with a mechanism for providing feedback to request (or demand) changes as needed. It also provides an opportunity to minimize initial demands for unique requirements involving specialized equipment and to maximize the opportunity to incorporate "commodity" components, thus minimizing cost and delays.
As the saying goes, one can only manage what one can measure. The resources available for disaster management are limited, and decision making always involves tradeoffs. To motivate the IT expenditures needed to provide adequately for disaster management, there must be an understanding of the benefits that are obtainable. Weighing the available benefits from particular IT investments against the returns on other sorts of investment is challenging. When considering the effects of disasters, these tradeoffs can easily be driven by emotions, even more than in many other sectors. Having metrics allows an analytical assessment to be made, comparing the costs of preventive and mitigating investments with the likely impacts of disasters, and with other potential investments. The sections that follow briefly discuss several aspects of metrics-based decision making. The development of suitable metrics to guide investment in IT for disaster management is a topic for further research and something that a roadmapping effort (described later in this chapter) might address.
Estimating the risks of infrequent events is hard, but failing to consider risks explicitly cripples any rational decision-making process. Gath-
ering the necessary information will necessarily be an iterative process, with initial information providing a basis for further discussion, expansion, and revision. An additional benefit of systematizing this process is the potentially useful feedback on needs and opportunities that it can provide to the technology research and development community.
Costs and Benefits
The economic model needed to assess the trade-off between the costs and the benefits of investing in technology for disaster mitigation differs from business investment models. Typical business investments are tied to a steady income stream, not to a variety of infrequent future costs and benefits. An economic model for disaster management must combine initial investments, ongoing costs, and infrequent events. An investment decision is based on net present value, computed by applying a discount rate to all of those components. Discount rates incorporate the expected lifetime of the assets and the risks associated with deriving income from those assets over that period. For business IT investments, rates typically range from 12 to 20 percent. Communication infrastructure has historically used much lower discount rates, but the convergence of communications and computing technologies is forcing the rates for those investments upward.
Any technology deployment has initial costs, ongoing maintenance and training costs, and a finite life. This long horizon requires discounting both future benefits and ongoing costs. Since the occurrence, magnitude, and timing of future disasters are uncertain, appropriate discount rates may have to be quite high if results are to reflect the intuition of the participants in and funders of disaster mitigation expenses. While economic cost estimates of disaster impacts are never precise, they do provide the order-of-magnitude estimates needed to rank projects and proposals.
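The discounting logic described above can be made concrete with a short sketch. All figures below (system cost, upkeep, disaster probability, avoided losses, discount rate, lifetime) are purely hypothetical assumptions chosen for illustration; none come from this report:

```python
# Illustrative expected-net-present-value calculation for a disaster
# mitigation investment. All figures are hypothetical assumptions.

def expected_npv(initial_cost, annual_cost, benefit_if_disaster,
                 p_disaster_per_year, discount_rate, lifetime_years):
    """Expected NPV over the asset's finite life.

    Ongoing costs are certain; benefits accrue only in years when a
    disaster occurs, so they are weighted by the annual probability.
    """
    npv = -initial_cost
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        expected_benefit = p_disaster_per_year * benefit_if_disaster
        npv += (expected_benefit - annual_cost) / discount
    return npv

# A $1M system with $150K/year upkeep (15% of the initial investment),
# a 1-in-20 annual chance of a disaster in which it averts $10M in
# losses, discounted at 15% over a 10-year life.
npv = expected_npv(1_000_000, 150_000, 10_000_000, 0.05, 0.15, 10)
print(f"Expected NPV: ${npv:,.0f}")
```

With these assumed inputs the expected NPV is positive, but the result is sensitive to the disaster probability and the discount rate, which is precisely why the high rates and uncertain event timing discussed above matter so much to the ranking of proposals.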
Estimating the savings from the reduced impact of disasters due to mitigation efforts is particularly difficult. Many of IT's benefits stem from enabling a more rapid response. Developing models of how faster response can reduce eventual costs is a substantial but rewarding task. An actual economic quantification of the costs of disaster mitigation versus the benefits obtained could be a fruitful area for research.
Use of a Cost-Benefit Model
Any recommendation for new and increased outlays must be accompanied by a quantification of the benefits. While costs are relatively easy to quantify, the benefits of disaster mitigation are not; a reasonable attempt is nonetheless required. It is expected that the costs of improving the technology available for disaster mitigation will be offset by substantial benefits accruing to the country. The most important of these benefits cannot be directly quantified, since they represent the human dimension: reduction of suffering, preservation of family stability, and prevention of losses of items of purely personal value. Other benefits of disaster mitigation can and should be quantified.
The low frequency of major disasters greatly reduces the priority that local planners, faced with many short-term needs, actually assign to accumulating and maintaining resources adequate for dealing with disasters. While some supplies can be stockpiled for decades, IT becomes obsolescent much faster and requires an ongoing infusion of funds. In the commercial world, annual spending of about 15 percent of the initial and upgrade investment is expected. Costs are reduced when obsolete systems are taken out of service. For many systems, the military tends to spend less annually but is then faced with huge, wholesale replacement costs every 12 years and has an inadequate system for more than half of that period.
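The lifecycle tradeoff described above, steady annual refresh versus infrequent wholesale replacement, can be illustrated with a back-of-the-envelope comparison. The figures below (system cost, upkeep rates, discount rate, horizon) are purely illustrative assumptions, not numbers from this report:

```python
# Hypothetical comparison of two refresh strategies for a $1M system
# over a 24-year horizon, discounted at 5%. Figures are illustrative.

def discounted_total(outlays_by_year, rate):
    """Present value of a schedule of {year: outlay} payments."""
    return sum(cost / (1 + rate) ** year
               for year, cost in outlays_by_year.items())

HORIZON, RATE = 24, 0.05

# Strategy A: steady refresh at 15% of the initial investment per year.
steady = {0: 1_000_000}
for y in range(1, HORIZON):
    steady[y] = 150_000

# Strategy B: minimal upkeep (say 3% per year) with wholesale
# replacement every 12 years.
wholesale = {0: 1_000_000}
for y in range(1, HORIZON):
    wholesale[y] = 30_000
    if y % 12 == 0:
        wholesale[y] += 1_000_000

print(f"Steady refresh:        ${discounted_total(steady, RATE):,.0f}")
print(f"Wholesale replacement: ${discounted_total(wholesale, RATE):,.0f}")
```

Note that this sketch captures only discounted outlays; it does not model the obsolescence penalty noted above, in which the wholesale strategy leaves an organization with an inadequate system for much of each replacement cycle.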
Readiness for mitigating disasters requires a modest but steady investment in technology. The total benefits come from cost reductions that occur at unpredictable times and are of unpredictable magnitude. Major, quantifiable benefits are due to infrequent events. For an individual county, investment in disaster mitigation technologies appears as an instantaneous expense, sometimes aided by state or federal grants. Maintenance costs sit in different budgets and are hard to assess.
The decentralized nature of disaster management, spread across thousands of agencies—from the smallest volunteer fire department, to sophisticated urban police departments, to state, regional, and federal agencies—presents particular problems for effective technology evaluation and diffusion. Today, many managers responsible for the acquisition of technology for public safety and emergency management are, quite understandably, unable to keep up to date with the volume of technology and choices available. Managers often rely on vendors to tell them what they need and must base decisions largely on the often-conflicting "advice" of various vendors.
Professional conferences, workshops, and other meetings held by public safety and emergency management associations are one mechanism for facilitating diffusion of the latest technology. IT capabilities, most notably the Internet, may prove useful as well by providing a conduit for sharing and discussing information about what works.
More systematic approaches to evaluation would likely yield deeper and broader technology adoption over the long term. One option is to make use of formal mechanisms for providing unbiased evaluations and guidance—a sort of Consumer Reports for disaster managers. The military's experience with technology demonstrations, described in the next section, may provide one model for this type of "clearinghouse" approach. Important differences exist between the defense and disaster management contexts regarding technology evaluation. For instance, the military has a fairly well defined acquisition chain that flows from initial ideas to deployment; in civilian disaster management, development and procurement are far more decentralized. Decentralization introduces hand-off issues for successfully demonstrated technology. Still, adapting the lessons of military technology transfer to civilian disaster management yields at least two insights.
The first insight is that technology demonstration will be successful to the extent that more knowledgeable technology adopters are available for experimentation. The next two sections—on processes that bring technology developers and practitioners together and on building capacity at the intersection of IT and disaster management—discuss mechanisms and examples for growing the capacity of practitioners as knowledgeable technology adopters.
A second insight is the importance of an honest broker serving as a neutral technology clearinghouse that can help provide the expertise to identify and evaluate technology. In a few cases, from which others have successfully learned, local and state agencies have taken the lead in demonstrating the viability of a technology. Some states (e.g., South Dakota and Indiana2) have taken on this role, identifying and evaluating technology, infrastructure, and services, and in several cases providing one or more of these to local agencies.
Private-sector integration centers aimed at bringing together diverse technologies can have value in getting vendors to make their products work with those of other vendors. But they are necessarily designed to promote both their particular partners' products and their own consulting and integration services. Such vendor-driven efforts will likely fall short of being truly neutral.
One option for achieving the neutrality of an honest broker to vet technology intended for disaster management, used for similar reasons in other government mission areas, is the Federally Funded Research and Development Center (FFRDC) model. FFRDCs are independent, non-profit entities sponsored and funded by the U.S. government to meet specific long-term technical needs. FFRDCs typically assist government agencies with scientific research and analysis, systems development, and systems acquisition. They draw together expertise and perspectives from government, industry, and academia to address complex technical issues.
An FFRDC for disaster management would not have any operational responsibilities. Rather, it would serve the disaster management community by identifying, developing, and assessing technologies and concepts of operation for using those technologies.
Processes That Bring Technology Developers and Practitioners Together
An iterative development process goes hand in hand with an acquisition process that assumes that technologies and organizational processes will co-evolve. Coordinating technological advances and organizational process changes requires new knowledge and skills on the part of both practitioners and technology developers, and new relationships between them. Such coordination depends on practitioners and developers gaining a better understanding of one another's methods and on mechanisms that maintain dialogue between them in order to identify promising technologies, define appropriate uses for them, and evaluate and disseminate the outcomes.
The Department of Defense has a broad set of programs aimed at bringing together technology developers and users, speeding innovation, and transitioning it into use in the field. One notable model with considerable applicability to disaster management is the Advanced Concept Technology Demonstration (ACTD) model,3 which also pays particular attention to the interplay between technology and organization. An ACTD is used at the phase where promising technologies have been developed together with a vision of how they could be used. An ACTD provides a framework in which to assemble a group that is willing to be an early adopter and a context into which the technology can be inserted and evaluated. A well-run ACTD includes a phase where the system, organization, and technology are all analyzed together, and modifications to each are identified and implemented.
3A description of the goals of an ACTD is available at http://www.acq.osd.mil/actd/transit.htm.

A particular strength of the ACTD approach is that it recognizes that it is not enough to build the technology. One also needs to analyze the organization and examine how its processes will change as a result of having the technology. The idea is to develop requirements for organizational change and corresponding adaptations of the technology simultaneously.
To use the ACTD approach effectively, patience is required. Otherwise, the design phase may be overly compressed to the detriment of the ultimate product. Promising innovations may wither while waiting to be adopted, or problems identified during the acquisition phase may result in a technology’s being abandoned before researchers are able to find solutions.
Building Capacity at the Intersection of IT and Disaster Management
The committee heard from state and local agencies that one of the major barriers to advancing practice and adoption of technology was a lack of resources to allow staff time for ongoing development of technical expertise. Yet, without the in-house development of technology expertise able to draw on external resources (such as centers of excellence), adoption of technology will continue to lag and is unlikely to be optimally implemented when adopted.
The interdependence of technology and practice means that developing a cadre of experts at the intersection of disaster management and IT is likely to yield significant payoffs. Expanding the human assets available involves promoting both cross-fertilization between the technology and practitioner communities and a culture of innovation in each. Such a cadre will be more astute at translating user requirements into technical needs and will serve as a self-reinforcing feedback mechanism between technology advances and disaster management practices.
A number of mechanisms could contribute to increasing human capital along these lines. These include both mechanisms for fostering innovative environments wherever possible and mechanisms for disseminating their results elsewhere. For example, programs could be established to support fellowships, field tests and other experiments, and training and educational activities. Also, programs that incorporate both disaster and IT expertise could be funded to analyze the performance of systems after a disaster.
Federal grants could support creation of expertise within state and local agencies by, for instance, sending people from public safety agencies to regional centers for training and to interact with technology experts and other practitioners to stay abreast of the latest developments in both practice and technology.
Exploiting Open, Practitioner-Driven Processes
The rapid pace of technological change, the growing complexity of technology, and perceived economies of scale are driving many disaster managers to move from being owner-operators of their IT and communications to being customers of contractors and service providers. Even where public safety and emergency management agencies retain ownership of their IT and communications assets, they increasingly have become reliant on vendors for information on what is technically possible and worthwhile.
Formal acquisition and management processes are generally designed to enforce appropriate procurement standards. However, they also run the risk of filtering out many useful, original ideas and possible contributions from outsiders and can present insurmountable obstacles to the acceptance of new technologies, unless and until they have been converted into profitable commercial products. Even when emerging capabilities are tracked and assessed, traditional design and acquisition methods are ill-suited to keep pace with accelerating shifts in technology.4 As discussed below, there are a number of opportunities to broaden the set of users that participate in the shaping, development, and evolution of IT systems.
The open source software and open standards movement is one recent expression of this participatory impulse. (See Box 3.1 for background on open source software and open standards.)
Although some of the resistance to the use of open source software in disaster management might ultimately be traceable to active opposition by commercial vendors, there are also structural obstacles to non-commercial innovation within disaster agencies. This is regrettable, since many technical innovations begin as non-commercial experiments, though relatively few survive the road to commercialization. Despite this resistance, there are some examples of how the open source software/open standards development model has been applied to disaster management. (See Box 3.2 for one such example.)
Several strategies are available for extracting the most valuable items from the non-commercial offerings of the volunteer and open source communities:
Organized experimentation and evaluation of non-commercial and pre-commercial technology. (The city of San Diego and the state of California, among others, have ongoing programs under which non-commercial innovators can test and demonstrate their ideas in parallel with disaster exercises.)

Assistance in refining useful non-commercial technologies. Such assistance might be provided in a number of forms, including coordinating small-business loans and developmental microgrants, brokering introductions to commercial implementers, and offering user feedback and review to non-commercial development projects.

Use of selected non-commercial advisers as independent sources of information and as reviewers of IT and communication plans under the auspices of an advisory organization.

Judged on the merits of their results, open source software/open standards developers could become valuable members of the disaster

BOX 3.1
Open Source Software and Open Standards

Open source software and open standards are two aspects of a cooperative approach to information technology that has deep roots in the Internet. Open source refers to the internal programming—the source code—of an application. In traditional proprietary software, the secrecy of the source code is the foundation of its commercial value. Access to source code is restricted, and details of its operation are disclosed only in general terms. Open source software, on the other hand, is freely published and generally visible. While some financial value can be recovered in fees for ancillary services (e.g., consulting and sales of reference books), the chief reward for the creation of open source software accrues to the reputation of the programmer or programmers who contribute to its creation.

Although much open source software is the work of individual programmers or small, discrete teams, the "reputation economy" of open source software development lends itself to the formation of large ad hoc collaborations, often involving developers widely distributed geographically and interacting via the Internet, competing for recognition of the quality of their contributions instead of for commercial equity.

The open source approach has produced some of the world's most popular software, such as the widely used Apache Web server program and the Linux computer operating system. Proponents of open source software argue that it tends to be more secure (since security weaknesses cannot be hidden within proprietary programming) and more flexible (since it can be readily customized) at lower cost than commercial software products. They suggest that the lack of financial constraints leads to software that is "problem-oriented rather than profit-oriented" and that the lack of a commercial incentive to lock in customers to a particular program helps preserve market efficiencies over time.

Critics of open source software dispute the cost-effectiveness of such "free" software, countering that it demands more of IT staff and cannot be proven to be any less expensive to maintain than commercial software (for which 15 percent of the purchase price annually is a common estimate of maintenance costs). They also warn that the design of open source software may reflect the interests of the developers more than the needs of the users.

Open standards, by contrast, are blind to the internal composition or economics of any particular program or device. Open standards specify certain external behaviors of a program or device in order to ensure interoperability among various implementations. Open standards may describe file formats, communication protocols, or equipment configurations. They are deemed "open" to the extent that they are published so that products and applications can implement them without licensing or other costs.

Arguably the most successful open standards effort to date has been the Internet itself. By publishing a collection of non-proprietary technical standards (the Internet protocols) for unencumbered use, the creators of the Internet enabled data exchange among diverse computer systems and software packages. As a result, the data and functionality available on any Internet-connected computer now vastly exceed the usefulness of that same computer standing alone.
The open standards movement and the open source community share an emphasis on collaboration and cooperation as sources of value. This “network economics” view sees value as the product of connectivity among numerous entities, as opposed to the more traditional view that value is a function of scarcity. (The classic illustration of the former line of thought is that the value of the first fax machine was nothing until there was a second one. And the more fax machines there were in the world, the more valuable each of them became.)
Open source software and open standards are not so much challenges to traditional commercial approaches as they are extensions and supplements that take advantage of voluntary collaboration and cooperation, especially in low-frequency, high-risk, and high-uncertainty applications where commercial incentives alone may not yield needed results.
management community. Successful examples of applying this model in other areas range from widely used programming languages and operating systems to the Internet. They have a few key common elements important to their success:
Commitment to open source and open standards processes, with supported organizations for managing them, to harness the energy and skills of individuals and small groups that best understand the needs of the community;
Small but very-high-leverage investments in accelerating technology dissemination by funding organizations to provide “supported shareware”—reference implementations of technology robust enough for users to use and evaluate, while open to all members of the community (users and technologists alike) to refine and improve and extend to others. This is a highly proven model for transitioning to robust, commer-
Common Alerting Protocol (CAP)—An Example of Open Standards Development Applied to Disaster Management
One recent example of open-standards-style development in support of disaster management is the Common Alerting Protocol (CAP). This international data standard for alerting and warning messages was initially developed by an ad hoc group and subsequently attracted the attention of two non-profit organizations (the ComCARE Alliance and the Partnership for Public Warning). Later, the CAP standard was adopted by the international Organization for the Advancement of Structured Information Standards (OASIS) standards body and by a number of federal agencies (including Department of Homeland Security, Department of Defense, National Oceanic and Atmospheric Administration, and U.S. Geological Survey), and state and local organizations began to use CAP in a variety of emergency alerting and notification applications.1 Ultimately, the international OASIS standards group partnered with U.S. and international disaster response programs to formalize CAP and an associated family of Emergency Data eXchange Language (EDXL) standards2 for disaster management.
cially supported products without locking government to limitations resulting from single sources of supply;5 and
Support for the development and exchange of information about best practices through Internet support for forums of information dissemination and discussion, for example via blogs and wikis.
Full exploitation of the open source software/open standards development model requires organizations and institutions that take on new responsibilities to guide and carry out activities. Many different types of
organizations could fill this role. However, there is a tremendous community-building opportunity if the role is filled by an organization that has an educational component, has systems development and support skills, and is rooted in the practitioner communities. The ideal structure would be one where training of both IT developers and technology-savvy practioners is performed in an environment that includes testbed technology innovations and where the practitioner community and IT development community could learn from each other, providing a positive feedback loop for the development of IT solutions grounded in practical realities.
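The CAP standard discussed in this section is an XML data format. As a concrete illustration, the following Python sketch assembles a minimal CAP-1.2-style alert using only the standard library. The element names and their order follow the published OASIS CAP 1.2 schema; the identifier, sender, and event details are invented for illustration.

```python
# Build a minimal CAP-1.2-style alert document. Element names follow the
# OASIS CAP 1.2 schema; identifier, sender, and event are hypothetical.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_alert(identifier, sender, sent, headline):
    alert = ET.Element(ET.QName(CAP_NS, "alert"))
    # Required top-level elements, in schema order.
    for tag, text in [
        ("identifier", identifier),
        ("sender", sender),
        ("sent", sent),
        ("status", "Actual"),
        ("msgType", "Alert"),
        ("scope", "Public"),
    ]:
        ET.SubElement(alert, ET.QName(CAP_NS, tag)).text = text
    # One <info> block describing the event itself.
    info = ET.SubElement(alert, ET.QName(CAP_NS, "info"))
    for tag, text in [
        ("category", "Met"),
        ("event", "Flash Flood Warning"),
        ("urgency", "Immediate"),
        ("severity", "Severe"),
        ("certainty", "Observed"),
        ("headline", headline),
    ]:
        ET.SubElement(info, ET.QName(CAP_NS, tag)).text = text
    return ET.tostring(alert, encoding="unicode")

xml_text = build_alert("example-0001", "ops@example.gov",
                       "2006-08-29T09:30:00-05:00",
                       "Flash flooding reported in low-lying areas")
print(xml_text[:80])
```

Because the payload is plain XML against a published schema, any alerting system that implements the standard can parse such a message regardless of vendor, which is precisely the interoperability argument made for open standards above.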
TRAINING AND THE IMPORTANCE OF ROUTINE USE
Disaster managers emphasize the importance of technology being used on a routine basis by practitioners if it is to be used effectively, or at all, during a disaster event. Understandably, practitioners turn to the things they trust and are most familiar with, especially in the high-stress situation of disaster response. Training is important, but it often occurs only once, shortly after a technology is introduced. Technology that sits idle until the onset of an event will likely remain unused. The committee heard of instances where responders did not know or remember where the technology was or how to access it when it was eventually needed. Frequent training is also necessary because the technology itself often changes rapidly.
In each of the areas of disaster management activity, most of the organizations and personnel (e.g., law enforcement, firefighting, emergency medical treatment, sea patrol and rescue) involved have regular day-to-day responsibilities and activities different in kind and in magnitude from major disaster operations. Training and equipment provided for disaster responsibilities may not be optimal for day-to-day activities, so compromises or separate parallel training and equipment may be necessary. The cost, complexity, and other issues associated with this problem make implementation difficult.
Training programs are expensive and take critical resources out of service for the duration of the training. It is often logistically difficult to arrange training for huge public safety organizations, with training sometimes spread over many months. The less relevant the training is to practitioners’ daily tasks, the less likely they are to retain its lessons. Yet, IT developed for disaster management is frequently designed for use only during major events, and using it effectively requires special knowledge or familiarity based on experience (which rarely exists). The result is that much IT goes unused or underused when an event occurs. Building adaptable tools that people use every day is critical to those tools being used in a disaster.6 Indeed, routine use is more important than specialized training in building the competence and confidence required to use a technological capability successfully, especially in the high-stress situation of disasters.
One partial solution is to seek technology that has the broadest possible application in daily operations, with a smooth transition (from a user’s perspective) to functionality required only in handling disaster events. The switchover to disaster operations should be as seamless as possible. As an example of what to avoid, the committee saw an operations center during a site visit where two separate software applications with overlapping functionality coexisted; one application was used during routine operations, the other only in disaster situations. The application used on a daily basis was widely and effectively used. The application with additional specialized disaster functionality was ill understood by all but a few personnel.
LEVERAGING “NON-OFFICIAL” INFORMATION TECHNOLOGY
Although disaster management is typically associated with government response agencies, the true boundaries of the disaster management community are difficult to draw. Non-governmental organizations of various kinds, government agencies not normally concerned with disasters or public safety, trained volunteers and emergent ones (especially within the victim population itself)—all these are active participants in disaster management. Private voluntary organizations play important roles in disaster response and recovery activities, including the distribution of food, water, and other supplies and the provision of shelter. Amateur radio operators have long played a role in disaster communications and have been formally incorporated into disaster communications planning and procedures. Moreover, in a disaster, many individuals and organizations will step forward as volunteers. Some will be those who happen to be present at the scene of a disaster. (For example, a number of live rescues after an earthquake are performed by bystanders, not official responders.)
Many volunteers prove to be crucial contributors to a disaster response and possess resources or expertise not otherwise available. Others may consume more of official responders’ time than their contributions are worth. Some create confusion for the response system: Are they responders or victims? Are their motives altruistic, selfish, or even criminal? Are they actually who they say they are? Nor are the priorities of volunteers necessarily well aligned with those of officials. As a result, volunteers and their contributions to official agencies are generally filtered through coordinators and programs that act to limit their impact on the agencies’ official programs.
This is one of the conclusions of Sharon Dawes, Thomas Birkland, Giri Kumar Tayi, and Carrie A. Schneider, Information, Technology, and Coordination: Lessons from the World Trade Center Response, Center for Technology in Government, University at Albany, State University of New York, June 2004; available at http://www.ctg.albany.edu/publications/reports/wtc_lessons/wtc_lessons.pdf.
Recent IT advances offer new ways for volunteers to contribute to disaster response and recovery efforts. For example, the following are just a few of the many volunteer activities in the response to Hurricane Katrina:
Web-based information aggregation systems at http://katrinalist.net and http://www.disastersearch.org. These are systems for posting and aggregating information about people affected by Hurricane Katrina and resources available for those directly affected by the disaster.
An online repository at http://www.hurricanearchive.org of user-contributed information about the hurricane experience. This repository has a “map browser” page that incorporates the use of Google maps. A similar map display was used on another site during the immediate aftermath of Hurricane Katrina to provide information about damage at specific sites throughout the city of New Orleans.
A toll-free locator service established by MCI in the aftermath of Hurricane Katrina that provided toll-free numbers where volunteers could register themselves (877-HELP-KAT) and could search for missing family and friends (866-601-FIND).
Exceptional circumstances require the rapid construction of working relationships among groups and individuals with very different backgrounds and priorities. Network technology has created new opportunities, as the above examples demonstrate, for rapid establishment of working relationships, but much remains to be learned about the practical management of these crucial “adhocracies.” The large number of private companies providing IT-related services, wide access to information systems, the natural support that the Internet provides for “virtual communities” of users and developers, and the relative ease with which new methods of communications can be tried out have led to the creation of many novel communications mechanisms that have applications for disaster management. The success of technologies like Google’s search engine, Wikipedia, blogs, and Google maps is evidence that information technologies that provide user-driven information access are powerful enablers.
These tools are revolutionary in that the information providers are not forced into pre-authorized pre-formatted reporting and because users are presented with multiple sources providing data on any query. At the same time, users are forced to sift, evaluate, and decide what information is helpful. And old problems, such as conflicting and redundant efforts, unclear motivations, lack of accountability, inaccurate information, and differing priorities are all present.
Problems or not, it can be safely assumed that future disasters will bring about the spontaneous creation of similar non-official uses of IT— and most likely completely novel ones as well. The obvious potential value of these initiatives to various aspects of disaster response and recovery activities coupled with the potential for problems should drive disaster managers to seek constructive ways to incorporate them into disaster management practice whenever feasible and appropriate. Careful study should be undertaken with the goal of making iterative improvements in leveraging these non-official, emergent activities so that they eventually become part of the standard inventory of techniques and procedures used to deal with disasters.
EFFECTIVELY USING COMMERCIAL COMMUNICATIONS SERVICES
Commercial communications services offer potentially large cost savings over dedicated systems but raise questions as to whether stringent capacity and reliability requirements can be met. Striking the right balance in the face of market forces that drive commercial service providers will require careful and ongoing evaluation by public safety agencies.
Public safety agencies maintain extensive infrastructure independent of the commercial infrastructure available to the general public. There are a number of important reasons for this. The National Task Force on Interoperability (NTFI) guide for public safety officials “Why Can’t We Talk?” asks the question, “Why can’t they just use cell phones?”7 It answers the question by noting that responding to disaster situations, where every second counts, requires reliable, dedicated equipment. Public safety officials cannot depend on commercial systems that can be overloaded and unavailable; experience has shown that these systems are often the most unreliable during critical incidents when public demand overwhelms the systems. The NTFI guide summarizes the “unique and demanding communications requirements” for optimal public safety communication systems as including dedicated channels with priority access, reliable one-to-many broadcast capability, highly reliable and redundant networks, the best possible coverage within a geographic area, and equipment designed for quick response—in short, something officials can control and count on.
National Task Force on Interoperability, “Why Can’t We Talk?: Working Together to Bridge the Communications Gap to Save Lives: A Guide for Public Officials,” February 2003, p. 11; available at http://www.ncjrs.gov/pdffiles1/nij/204348.pdf.
And yet, the use of commercial infrastructure by first responders and public safety officials is increasing. This is happening for several reasons. First, commercial infrastructure is much less expensive than dedicated, purpose-built public safety communications infrastructure. Second, it is often what is at hand during an incident. Third, it helps achieve resilience by providing redundancy and diversity, potentially at relatively low cost. (See the detailed discussion on redundancy and diversity to achieve resilience later in this chapter.) Efforts, both formal and informal, are under way to make additional use of commercial services. Commercial infrastructure used today for disaster management includes the following:
Cellular voice networks,
Cellular data networks for mobile data terminals (such as those in police cars and ambulances),
Cellular data networks for text messaging and e-mail services,
Satellite phone networks,
Ad hoc wireless networks,
Municipal wireless networks (some of these are dedicated networks for public safety personnel),
Traffic management systems,
Aerial photo systems,
Radio over IP systems,
Cable/public television broadcasting systems,
Video monitoring and conferencing systems,
Internet services, including (but not limited to) Voice over IP,
Remote hosted incident management services (e.g., eTeam, WebEOC), and
Commercial imagery satellites and image databases.
Commercial hardware, software, and network services are subject to intense competitive price pressure. Maintaining profitability entails a vigorous effort by the providers to minimize costs. But many of the steps available to control costs—reduction of component count, use of the least expensive components available, avoidance of unused “overhead” capacity, and the like—can directly reduce the resilience of the product when it is used in unusual circumstances.
For example, unused capacity is expensive for shared telecommunications systems such as the public telephone network, cellular phones, and satellite systems. System operators conduct careful statistical studies to estimate the level of capacity needed to satisfy most of their customers under most circumstances. Any capacity in excess of that is, from a business perspective, wasted, and operators have a fiduciary duty to avoid such waste. Unfortunately, this means that during exceptional surges in traffic, as occur during disasters and even on holidays and during certain sporting and cultural events, the available network capacity is exceeded and service becomes unreliable. (This economic phenomenon is sometimes obscured by memories of successful use during previous disasters, when the system in question was new and not yet fully loaded to profitability.)
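The capacity engineering described above can be made concrete with the Erlang B formula, the classic telephone-traffic result for the probability that an arriving call finds all channels busy. The report does not cite this formula; the sketch below uses a standard textbook recursion for it, with illustrative numbers, to show how blocking rises sharply when offered traffic surges past the engineered load.

```python
# Erlang B blocking probability: the chance an arriving call finds all
# channels busy, given offered traffic (in erlangs) and a channel count.
def erlang_b(erlangs: float, channels: int) -> float:
    b = 1.0  # blocking probability with zero channels
    for m in range(1, channels + 1):
        b = (erlangs * b) / (m + erlangs * b)
    return b

# A hypothetical cell sector sized for a few percent blocking at a
# normal offered load of 30 erlangs.
channels = 38
normal = erlang_b(30.0, channels)
# A disaster-driven surge to four times the engineered load.
surge = erlang_b(120.0, channels)
print(f"normal load: {normal:.1%} blocked; surge: {surge:.1%} blocked")
```

Adding enough channels to keep blocking low under surge conditions would leave most of that capacity idle in normal times, which is exactly the unused capacity that operators are under pressure to avoid.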
A high degree of reliance on turn-key systems has meant that disaster management organizations have not had to pay a great deal of attention to the underlying design issues that ultimately affect the functionality of their IT systems. Today, pervasive networking and the increasing importance of off-the-shelf technologies make it possible for organizations to assemble a greater portion of their disaster management systems from components provided by different vendors. Doing so successfully will require that those organizations take on more responsibility for the design of those systems. In the course of its work, the committee has identified four system design principles that have particular importance for disaster management systems:
Build emergency management systems for effective scaling from routine to disaster operation. System designs that feature wrenching and risk-laden transitions between daily operational mode and “disaster” mode can make bad situations worse. All system components should be used regularly, even if not at full scale, by the people who will need them in a disaster. Familiarity smoothes transitions from routine to disaster operations. Indeed, people tend to avoid or are ineffective with the unfamiliar.
Exploit redundancy and diversity to achieve resilience. Improving overall resilience can be achieved using many different techniques in combination. One approach is to “harden” systems, designing them to higher operational standards than those typically applied to commercial services and technology. A valuable alternative, which offers both the promise of improved performance and lower cost, is to exploit redundancy and diversity to improve the overall resilience of an IT system.
Design systems with flexibility, composability, and interoperability as core guiding principles. Each of these characteristics has important, wide-ranging implications for system design. Systems that place a premium on flexibility and agility lend themselves to the sorts of ad hoc use that often is needed in a disaster. Systems composed of standard components will improve the ability to evolve them incrementally and reduce reliance on any single vendor. Systems with designed-in interoperability points greatly facilitate interoperation with other systems.
Distinguish between the user interface and the underlying technologies used to deliver a capability. A particular user experience and the particular technologies used to deliver it need not be the same. Consider, for example, that first responder radios encapsulate multiple attributes—push-to-talk, the form factor of the handset and microphone, and push-to-talk communications within defined groups—that could be unbundled and repackaged for more effective use. Similarly, future handheld communications devices might employ cell phone technology (not commercial cell phones themselves) but with appropriate adaptations (in durability, form factor, or frequency) that meet disaster management requirements yet leverage lower-cost commercial technology.
Each of these principles is discussed in more detail below.
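As a sketch of the fourth principle, the following Python fragment separates a push-to-talk user experience from the transport technology that delivers it. The class and method names are invented for illustration and do not correspond to any real radio or cellular API.

```python
# Illustrative separation of a push-to-talk user experience from the
# underlying delivery technology. All names here are hypothetical.
from abc import ABC, abstractmethod

class VoiceTransport(ABC):
    """Any technology able to deliver one-to-many voice traffic."""
    @abstractmethod
    def send(self, talkgroup: str, audio: bytes) -> str: ...

class LandMobileRadio(VoiceTransport):
    def send(self, talkgroup, audio):
        return f"LMR broadcast to {talkgroup} ({len(audio)} bytes)"

class CellularPushToTalk(VoiceTransport):
    def send(self, talkgroup, audio):
        return f"cellular PTT session to {talkgroup} ({len(audio)} bytes)"

class PushToTalkHandset:
    """The user-facing experience: one button, one talkgroup."""
    def __init__(self, talkgroup: str, transport: VoiceTransport):
        self.talkgroup = talkgroup
        self.transport = transport  # swappable without changing the UI

    def press_and_talk(self, audio: bytes) -> str:
        return self.transport.send(self.talkgroup, audio)

# The same handset behavior rides on either underlying technology.
handset = PushToTalkHandset("engine-7", LandMobileRadio())
print(handset.press_and_talk(b"status report"))
handset.transport = CellularPushToTalk()
print(handset.press_and_talk(b"status report"))
```

The point of the design is that the user's experience (press, talk, defined group) is fixed, while the delivery technology can be chosen, or changed, on cost and resilience grounds.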
Effective Scaling from Routine to Disaster Operation
Disasters are sufficiently different from day-to-day life events to be perceived as a completely separate mode of operations involving different organizational structures and priorities, different attitudes and skills, different methods, and different technologies. This impression is reinforced by a collection of binary words and concepts: disasters are “declared,” emergency plans and facilities are “activated,” resources are “mobilized” and then later “demobilized.”
Yet this clear-cut distinction between “disaster mode” and normal operations is in some sense a matter of administrative convenience. Actual disasters vary enormously both in scale and quality. Some, such as earthquakes, terrorist explosions, and electrical blackouts, come on suddenly and with a minimum of warning. Others, such as riverine floods, hurricanes, and food shortages, come on relatively gradually. Some, such as infectious disease epidemics and bioterrorism attacks, might actually be in progress for some time before being recognized. By the same token, some disasters conclude quickly, allowing a rapid shift from response into recovery, while others persist for longer and sometimes unpredictable periods. And while the generalized incident command functions of command, operations, plans, logistics, and finance are generic and applicable to all hazards, the specific makeup and activities of each function can vary considerably depending on the nature, scope, and phase of the particular event.
In addition to the challenge of matching the response to the situation, these broad ranges of possibilities create challenges for managing the transition into and back out of “disaster mode.” Many problems in disaster response may be artifacts of these transition challenges, especially as regards interpersonal and interorganizational communication and coordination. Differences in situational awareness and assessment and resulting differences in the timing and degree of “activation” between organizations and individuals can lead to procedural and perceptual disconnects and to falsely calibrated expectations. For example, in 1991 California regional emergency management personnel near a major urban wildfire experienced difficulty in mobilizing their headquarters to respond to resource requests. The more distant headquarters had not yet received tactical and media reports of the extent and rapid growth of the blaze and thus had not yet fully activated.
This tendency toward representation errors and “mode binding” may be exacerbated by the tendency to focus on worst-case scenarios in planning and exercises. The tacit assumption that a worst-case scenario will exercise all emergency management functions overlooks the problems inherent in transitions and the ambiguities of borderline situations that may or may not turn out to be disasters or that are evaluated differently by different response entities.
Thoughtful use of IT could do a great deal to mitigate the problem of transitions, largely by reframing occasional disaster activities as continual processes that can be scaled up or down without creating discontinuities. At least four potential benefits could be realized.
First, the interconnection of emergency management applications using shared data standards could permit activities such as situation reporting and resource management to be turned into continual “flow” processes rather than “batch” activities based on paper-based document-oriented systems. This conversion could reduce the time delays and the unintended synchronization of reports, requests, and orders inherent in fixed reporting cycles. It could also permit more rapid and precise detection of trends and conditions by allowing observations to be shared at a rate suited to the phenomena observed rather than one dictated by a fixed reporting schedule.
Second, a continually updated shared knowledge base, in the form of a “common operating picture,” could reduce various difficulties that can result from different entities having different editions of situation and resource reports at a single point in time.
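A minimal sketch of such a continually merged common operating picture, with invented field names rather than any standard schema, might look like this:

```python
# Sketch of a "flow" model for a common operating picture: each report
# is a small timestamped update merged into one shared state, rather
# than a periodic batch document. Field names are hypothetical.
def merge_report(picture: dict, report: dict) -> dict:
    """Keep the newest report for each resource, whatever its source."""
    key = report["resource"]
    current = picture.get(key)
    if current is None or report["time"] > current["time"]:
        picture[key] = report
    return picture

picture = {}
updates = [  # reports arrive continually, from different entities
    {"resource": "shelter-12", "time": 1, "status": "open", "source": "EOC"},
    {"resource": "engine-7", "time": 2, "status": "en route", "source": "dispatch"},
    {"resource": "shelter-12", "time": 3, "status": "full", "source": "Red Cross"},
]
for r in updates:
    merge_report(picture, r)

# Every participant reading `picture` sees the same current edition.
print(picture["shelter-12"]["status"])
```

Because each update is merged as it arrives, there are no competing "editions" of the report: all participants querying the shared state see the same current picture.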
Third, the greater “virtualization” of emergency management processes using network technologies could reduce the impact of disasters on day-to-day operations by allowing many organizations and individuals to participate in the emergency information system from their regular workplaces. It also allows for more phased relocation to emergency facilities, thus reducing the vulnerability of the emergency management system that can result when all participants are in transit at the same time.
Finally, in addition to reducing the necessary size of physical emergency operation centers, greater virtualization might also permit a more gradual activation of emergency procedures, one that can be more closely matched to the particular “curve” of a particular disaster. This could reduce both the disruption and missteps associated with all-at-once activations and the tardy activations that sometimes result from hesitance to take the disruptive and expensive step of dislocating large numbers of key personnel.
One key to obtaining each of these benefits would be to abandon contingent “in case of disaster, break glass” systems and procedures in favor of an “always on” approach enabled by modern IT and communications. It is also critical to understand the degree to which IT can effectively complement, support, and substitute for face-to-face communications.
Redundancy and Diversity to Achieve Resilience
One of the fundamental dynamics of disaster management is the tradeoff between efficiency and resilience. At a technical level, communication can be disrupted by many factors, such as the following: physical destruction of equipment or infrastructure (e.g., towers, cables, and substations), loss of power, interoperability problems, and environmental factors (e.g., obstacles for wireless). Hardening devices and infrastructure is an obvious way to reduce failures of equipment and infrastructure and reduce the chances of power loss. While it is not economically feasible to harden all critical equipment, improvements are certainly possible. For example, it should be possible to develop tools to analyze the robustness of the communication infrastructure in certain disaster scenarios and to apply resources optimally to harden those at greatest risk.
While hardening equipment and infrastructure continues to offer potential for improvements, other means of achieving resilience in IT systems for disaster management are often discounted or overlooked. Redundancy and diversity are two well-known techniques for building resilient (reliable, robust) IT systems.
Redundancy can be provided inside a network that uses a specific technology—for example, by building additional relay stations or access points into the infrastructure. This would make it possible to route traffic over alternate paths if some components fail. The ability to establish parallel redundant networks quickly can reduce down time. Two examples are (1) parallel wireless networks based on commercial technology and (2) cellular-on-wheels (COW)—movable cellular sites with satellite backhaul to establish cellular bubbles using dedicated spectrum.8 It is necessary to plan for such deployments in advance. Equipment must be available, spectrum must be allocated, and people must be trained. Also, organizational support must be created (e.g., directory services).
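The alternate-path routing described here can be sketched with a simple breadth-first search over a small relay network. The node names are invented, and a real network would use routing protocols rather than this toy search; the point is only that redundant links let traffic survive a component failure.

```python
# Sketch of routing around a failed link in a small relay network.
from collections import deque

def find_path(links, start, goal, failed=frozenset()):
    """Breadth-first search over undirected links, skipping failed ones."""
    adjacency = {}
    for a, b in links:
        if frozenset((a, b)) in failed:
            continue  # this link is out of service
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

# Two parallel routes from an operations center to a field unit.
links = [("EOC", "tower-A"), ("tower-A", "field"),
         ("EOC", "tower-B"), ("tower-B", "field")]
print(find_path(links, "EOC", "field"))
print(find_path(links, "EOC", "field",
                failed={frozenset(("tower-A", "field"))}))
```

With the tower-A link failed, traffic still reaches the field unit through tower-B; remove both redundant links and no route survives, which is the failure mode redundancy is meant to prevent.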
Technological diversity in the basic infrastructure for all organizations (including large and small businesses, government agencies, and community organizations) would reduce the impact of disasters.9 A related risk is that of a technology monoculture, in which an attack on a single operating system or other widely used piece of software could cripple response activities across the board. Indeed, integrating disaster planning efforts across private and public IT systems that embrace diversity and redundancy could lead to significant improvements in the overall reliability of information and communications capabilities in the event of a disaster.
As discussed earlier in this chapter, one important opportunity for achieving better resilience is by using commercial technology and services. Commercial technologies offer significant opportunities for both better redundancy and increased diversity. Two examples of this opportunity are cellular and wireless networking technologies. Nearly everyone uses cell phones on a daily basis, so they will naturally continue to use them during a disaster. Besides traditional voice calls, cell phones also increasingly support other communications capabilities—for example, push-to-talk, text messaging, Web access, and instant messaging—all of which may be useful and familiar. Wireless networking is also becoming ubiquitous in populated areas and is supported on multiple mobile devices such as laptops and handheld devices. These technologies are commercially supported and widely used, and the equipment is relatively inexpensive. These characteristics naturally lead to IT capabilities that are both redundant and diverse.
COWs have been deployed for wilderness firefighting, and Qualcomm deployed an improvised system that included movable switches (switches-on-wheels, or SOWs) with satellite backhaul to create cellular bubbles in the aftermath of Hurricane Katrina.
Sharon Dawes, Thomas Birkland, Giri Kumar Tayi, and Carrie A. Schneider, Information, Technology, and Coordination: Lessons from the World Trade Center Response, Center for Technology in Government, University at Albany, State University of New York, June 2004; available at http://www.ctg.albany.edu/publications/reports/wtc_lessons/wtc_lessons.pdf.
Flexibility, Composability, and Interoperability as Core Guiding Principles
Systems designed around flexibility, composability, and interoperability as core guiding principles can produce many long-term improvements with wide-ranging implications for the effectiveness of IT use in disaster management. Each of these characteristics has important implications for system design.
Almost by definition, one cannot anticipate every contingency that must be dealt with in a disaster. In disaster management, where information channels can be interrupted and informal channels established without official sanction, developing and encouraging these characteristics are especially important, though often counter to the urge for control.
This observation suggests favoring composed systems consisting of diverse components over turn-key, integrated solutions. Composing systems from diverse (standard) components allows selection of best-of-breed solutions, enables federated solutions made up of diverse technologies with carefully chosen operational boundaries at which interoperability can be managed, and provides greater overall robustness through diversity.
Another important design principle is to build systems assuming future needs for interoperation with other systems. By constructing systems using layered architectures with potential points of interoperation designed in from the beginning (e.g., by designing to established standards), systems can be made interoperable without requiring technological monocultures.
Composability offers further advantages. The short-term economies attributed to lockstep standardization can be substantially offset by the improved marketplace power of users who are not locked in to a particular vendor’s wares but instead obtain the benefits of continued competition for their business. The costs of technological diversity are real, but so are the benefits. This is true in non-disaster systems and even more so in disaster management systems where resilience is more important than efficiency. The diverse agencies responsible for responding to disasters and for acquiring technology to do so should embrace the natural tendency toward heterogeneity that this organizational reality engenders. The challenge is to work with other agencies to standardize the right things.
Another driver of purpose-built proprietary systems is the current effort to eliminate technological stovepipes and single-agency systems that greatly hamper effective, interoperable communication on the scale necessary during a disaster. This is an important goal given the significant and ongoing interoperability problems already discussed in Chapters 1 and 2. Purpose-built systems do improve interoperability, but they generally do so by moving the stovepipes to a higher organizational level (e.g., multiple police departments are integrated into a state-wide police communication system, or municipal police and firefighting communication systems are integrated within a municipality or region); boundaries still remain (e.g., police and firefighters cannot communicate, or public safety agencies across jurisdictions cannot communicate). Federated solutions made up of diverse technologies and platforms could provide interoperability without the technological and operational risks associated with purpose-built, integrated solutions, as long as interoperability points are designed into the system.
Mobile phone companies have managed a major technology generation change about once every 10 years. Is this an argument for a monolithic system or a federated system? Mobile networks generally consist of monolithic and non-interoperable technology at the wireless interface. However, mobile phones have a clear interoperability point: the telephone network (and more recently the Internet). Operators may make technology changes (large and small) internal to their networks while continuing to maintain an interface to the interoperability point. This is, in fact, why cellular networks built on Global System for Mobile Communications (GSM) technology can interoperate with Code Division Multiple Access (CDMA)-based networks. In the same way, different agencies might have different and incompatible networks but maintain an interoperability point with other agencies’ networks.
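The idea of a designed-in interoperability point can be sketched in a few lines of code. The sketch below is purely illustrative; the agency names, internal message formats, and field names are hypothetical and not drawn from any actual system. Two networks with incompatible internal representations exchange traffic by translating to and from a shared interchange format at the boundary, so neither needs to know anything about the other's internal technology.

```python
from dataclasses import dataclass

# Common interchange format agreed on at the interoperability point.
# All names here are hypothetical, for illustration only.
@dataclass
class InteropMessage:
    sender: str
    channel: str
    body: str

class PoliceNetwork:
    """Hypothetical network whose internal format is pipe-delimited strings."""
    def export(self, raw: str) -> InteropMessage:
        unit, talkgroup, text = raw.split("|")
        return InteropMessage(sender=unit, channel=talkgroup, body=text)

class FireNetwork:
    """Hypothetical network whose internal format is agency-specific dictionaries."""
    def ingest(self, msg: InteropMessage) -> dict:
        return {"from": msg.sender, "grp": msg.channel, "msg": msg.body}

# Each agency converts to or from the shared format only at the boundary.
police, fire = PoliceNetwork(), FireNetwork()
delivered = fire.ingest(police.export("Unit12|TAC-2|Road blocked at 5th"))
print(delivered)  # {'from': 'Unit12', 'grp': 'TAC-2', 'msg': 'Road blocked at 5th'}
```

Either agency can replace its internal technology at any time, so long as its translation to the interchange format is maintained, which is the essence of the interoperability-point argument above.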
Even within integrated, all-in-one systems, substantial value from technological diversity can be achieved by allowing for integration of diverse components built with standard interfaces rather than building end-to-end proprietary systems. Systems that have taken the trouble to meet the challenges of a diverse technology base at any one point in time are also “future-proofed” against technological change. New components become just additional instances of diversity, allowing incremental system upgrades that can leverage the latest technology advances, rather than introducing dilemmas requiring fundamental and potentially disruptive “forklift upgrades” or lost opportunities for improving effectiveness. That same flexibility makes diverse systems more agile in the face of sudden change. Mergers, relocations, and major changes in scale are characteristic of day-to-day operations as much as of disasters, and a diversity-tolerant approach to technology can facilitate them all.
The User Interface and the Underlying Technologies to Deliver a Capability
Many familiar information and communication technologies are best known by the various controls, displays, sounds, or other sensory experiences they present to the user. The familiar package of a public safety responder’s portable two-way radio stands, in many people’s minds, for the whole infrastructure of frequencies, repeaters, and towers of which the radios are the outward and visible sign. First responder radios encapsulate multiple attributes—push-to-talk capability, the form factor of the handset and microphone, and communications within defined groups— that could be unbundled and repackaged for more effective use. Handheld communications devices might employ cell phone technology but with adaptations (in durability, form factor, or frequency).
But the association of the “user interface” with the underlying infrastructure is not nearly as fixed as in the past, and grows less so every year. Push-to-talk operation has, for example, been added to cell phone systems. Similarly, the Internet connects two-way radio networks and telephone calls, while telephones send e-mails and images to computers over the Internet. Television transmitters broadcast data, while TV programs are “streamed” over data connections. The long-heralded “digital convergence” of various communications and computing platforms has arrived, and in arriving has demolished many of the traditional connections between how a communications capability is embodied in a user interface and the underlying infrastructure used to deliver it. This is creating no small degree of confusion in procurement, regulation, and management of IT systems.
Too often, would-be innovators have been hamstrung by regulations or procedures that confuse the user interface with the underlying infrastructure. However, the opportunities created by tearing down the walls between systems are much greater than the costs. Single devices can be used to replace multiple devices, thus cutting costs and simplifying the work of first responders and emergency managers alike. Different systems can be interconnected in unforeseen and expedient ways, creating new capabilities such as circumventing damaged infrastructure.
The conflation of interface device (e.g., computer, phone) and infrastructure (e.g., the network hardware and networking protocols) also leads to the mistaken presumption that a diversity of communication systems must present huge challenges for the users to learn and effectively operate in a diverse technology environment. However, a variety of familiar as well as novel interface devices could coexist while still enabling interoperation and communication between systems—even where the underlying networks are vastly different. Likewise, users could be
provided with a single, consistent interface that nonetheless utilizes more than one discovery, routing, and transport regime. The underlying infrastructure can hide the technological “glue” necessary to communicate between systems. Either way, interface device choices should be driven by how best to ease training and promote familiarity through routine use, and should be largely independent of decisions about the underlying infrastructure.
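The notion of a single, consistent interface running over more than one underlying transport regime can be illustrated with a toy sketch. All class names below are hypothetical, and the transports simulate success or failure rather than sending anything; the point is only that transport selection and failover are hidden beneath one user-facing send operation.

```python
class Transport:
    """Minimal transport interface; send() reports whether delivery succeeded."""
    def send(self, dest: str, payload: str) -> bool:
        raise NotImplementedError

class CellularTransport(Transport):
    def __init__(self, up: bool = True):
        self.up = up  # simulated network health
    def send(self, dest: str, payload: str) -> bool:
        return self.up  # pretend delivery succeeds while the network is up

class SatelliteTransport(CellularTransport):
    pass  # same simulated behavior, different underlying regime

class Messenger:
    """One user-facing send() call; the transport actually used is hidden."""
    def __init__(self, transports):
        self.transports = transports
    def send(self, dest: str, payload: str) -> str:
        for t in self.transports:
            if t.send(dest, payload):
                return type(t).__name__  # which transport carried the message
        raise RuntimeError("all transports failed")

# Cellular is damaged; the same interface transparently falls back to satellite.
m = Messenger([CellularTransport(up=False), SatelliteTransport(up=True)])
print(m.send("EOC", "shelter at capacity"))  # SatelliteTransport
```

The user's interaction is identical in both cases, which is why interface choices can be driven by training and familiarity rather than by infrastructure decisions.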
Awareness of the difference between these two distinct layers of technology, and a thorough understanding of how they can be most effectively coupled with each other depending on system requirements, are critical skills for builders and managers of disaster management IT systems.
THE TECHNOLOGY PIPELINE
The current state of the art in technology and the level of technology adoption differ considerably across the capabilities discussed in Chapters 2 and 4. These differences in readiness can be usefully thought of in terms of a technology pipeline. Where a given capability sits in the pipeline affects the type of investment needed to advance it toward use in practice and the amount of time before a payoff can be expected. A technology investment process intended to produce a steady stream of improvements in the use of IT for disaster management should include a detailed assessment of where technologies are in the pipeline. Investments are most likely to provide continuous improvements when balanced across the spectrum of possibilities—from adoption, adaptation, and development to applied research and general research. A roadmapping process, described in the following section, is a tool that could help guide such investments.
One way of dividing up the stages in the technology pipeline is the following:
Adoption—technology available today requiring efforts to overcome adoption barriers. Some IT technology is already available but has not been widely adopted (though it may have been partially adopted) by the public safety or disaster management communities that could benefit from it. Such technology does not require significant further adaptation, development, or research. There are many reasons that such technology has not been adopted, including cost, lack of awareness or training, and resistance to change. Nonetheless, adoption of these technologies can have an immediate impact on attaining important goals for the use of IT for disaster management. The time to realize benefits from these technologies is limited chiefly by the time necessary to overcome the barriers to adoption, which will vary
by organization and the nature of the barriers to be overcome. These barriers may include the time to secure the investment and to make the appropriate connection between that investment and the fully discounted net present value of the savings that the investment will accrue. Advances in general data processing, storage, communication, display, and software technologies will continue to make components of IT systems less expensive and more capable with time—aiding the adoption process. (Examples of technologies in this category and described in Chapter 4 include radio-frequency identification (RFID) for resource tracking and logistics; computer-mediated exercises; reverse 911 capability, i.e., two-way emergency reporting; and portable unmanned aerial vehicles and robots.)
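The reference above to the discounted net present value of an investment's savings can be made concrete with a short, hypothetical calculation. The figures below are invented for the example, not drawn from any agency budget: an acquisition is worthwhile in these terms when the discounted stream of annual savings exceeds the upfront cost.

```python
def npv(upfront_cost: float, annual_saving: float, years: int, rate: float) -> float:
    """Net present value: discounted equal annual savings minus upfront cost."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1)) - upfront_cost

# Hypothetical case: a $50,000 system saving $15,000/year for 5 years,
# discounted at 7 percent per year.
value = npv(50_000, 15_000, 5, 0.07)
print(round(value, 2))  # a positive NPV (about 11,500) favors the investment
```

Connecting an investment to a figure like this, per organization, is part of what makes the adoption barrier hard: the savings accrue only in disasters that may be infrequent, which the simple calculation above does not capture.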
Adaptation—technology on the horizon and ready for transfer to disaster management practice. Effective systems to serve disaster management may be assembled by combining and adapting available commodity components. IT systems are mainly built using hardware that is available off the shelf and adapted using off-the-shelf software components. Some hardware components—such as personal computers and disk drives—have become commodities. Commodity hardware benefits from significant economies of scale, so that the hardware cost of a cell phone is roughly one-tenth that of a police handheld radio. Part of the cost differential is due to specialized functionality and ruggedness requirements, but a significant portion of the difference comes from much higher production volumes. Adapting commodity hardware to disaster management (in contrast to developing wholly novel hardware) could significantly lower the cost premium now paid for specialized requirements. Software exhibits even more flexibility. Many software and Web-based applications are expressly designed to allow customization for specific uses. Some useful software has been developed in an open and freely shared environment that lends itself to adaptation and customization. In such an environment, investment can be distributed, often close to the end users, making it possible for users and vendors to adapt many existing IT technologies to disaster management readily and rapidly. Useful results can be obtained on time scales measured in months. (Examples of technologies in this category and described in Chapter 4 are commercial collaboration software and file sharing, online resource directories, multiple input/multiple output wireless systems, integrated ad hoc data collection tools (blogs/wikis), and mobile cellular infrastructure.)
Development—technology on the horizon and development needed for use in disaster management. For some requirements the technology and design principles are fairly well understood, but existing technology is simply not adequate for disaster management; a concerted effort is required to develop the software, hardware, and organizational structures needed to meet those requirements. In this case, a request-for-proposals
process can be used to solicit capable organizations to deliver a product that implements the desired technology. Development time depends on project complexity, but useful results can often be obtained within a year. (Examples of technologies in this category and described in Chapter 4 are volunteer mobilization systems, event-replay tools, and intelligent adaptive planning tools.)
Applied research—issues requiring disaster-management-specific research. There remain some difficult issues in disaster management for which solutions are not at hand—for example, reliable radio communications inside buildings or rubble. Research aimed specifically at improving disaster management could be conducted by university, commercial, and government laboratories, and even volunteer relief agencies such as the Red Cross. This type of activity is managed and directed within the government by agencies such as the National Science Foundation, the Department of Homeland Security (DHS), the National Institutes of Health, and by defense-related organizations such as the Defense Advanced Research Projects Agency and the service research laboratories. Support for smaller companies is given through Small Business Innovation Research and Small Business Technology Transfer programs administered by many of the above agencies. Because of the nature of disaster management and the types of challenges that the community faces, disaster-management-related IT research is becoming increasingly broad and interdisciplinary (see Box 3.3 for a discussion of the challenges of interdisciplinary research), involving contributions from multiple technical and social science fields. Fully realizing potential gains will often involve the fostering and management of collaborative research. Even so-called short-term research is typically a multiyear proposition and requires validation in the field at real disasters, as well as simulations. (Examples of technologies in this category and described in Chapter 4 are software-defined radios, tools for data mining across diverse information sources, decision sentinels, deployable sensor networks, and computer-assisted disaster simulation training tools.)
General research—issues requiring research followed by adaptation to disaster management. Some problem areas in disaster management overlap general needs in IT management. IT is a broad, active area of research; relevant research not aimed specifically at disaster management is performed at university, commercial, and government laboratories and is sponsored by the same constellation of agencies. Many of these labs are engaged in broad areas of research that have the potential to develop new IT capabilities which, though not directed specifically to that end, could be harnessed for disaster management. As with applied research, this research is typically a multiyear proposition. Further development or adaptation may also be needed for effective utilization in disaster management, which may add more time. This report identifies topics that require general research, but it is not expected that funding specific to disaster management will be employed for these topics. (Examples of technologies in this category and described in Chapter 4 are delay-tolerant networking, automated information fusion from diverse sources, and calibrated information confidence tools.)

BOX 3.3
Interdisciplinary Research for Enhancing Disaster Management

Interdisciplinary approaches to disaster management have been discussed for quite some time. As noted in the recent NRC report Facing Hazards and Disasters, interdisciplinary research (i.e., research that blends researchers, expertise, and tools from a variety of disciplines to address compelling and crosscutting problems) has been gaining prominence in almost every field of scientific endeavor, including disaster management research.1 Indeed, the report cites earlier NRC work that describes four factors promoting the growth of interdisciplinary research: (1) the complexity of nature and society, (2) the desire to address scientific problems that cross disciplines, (3) the need to solve society’s problems, and (4) the power of new technologies.2

The benefits of interdisciplinary research for disaster management can be substantial. For example, Facing Hazards and Disasters describes a number of “exemplars” of interdisciplinary research in disaster management—from infrastructure failures and urban economics to casualty analysis through a common framework to decision making for risk protection. However, Facing Hazards and Disasters also goes on to describe how interdisciplinary research can be particularly challenging when it combines the social sciences with the natural sciences (a frequent situation in disaster management research).

According to the same report, interdisciplinary research for disaster management faces a number of significant challenges of its own (in addition to the normal challenges for such research, such as lack of funding and academic incentives) if it is to prove successful. For example, the report notes that some issues often stem from “the failure of a research team to function collaboratively” owing to such things as difficulties in spanning culture gaps between the disciplines or the devaluation or undervaluation by one discipline of the work of another. Another challenge cited revolves around how disaster management research is most often viewed as applied research rather than basic research geared toward advancing overall knowledge in a given area.

Facing Hazards and Disasters surveys the available literature in the area and also suggests a number of factors contributing to the success of interdisciplinary research (pp. 186-187). First, it notes that problem-oriented research is probably best suited for interdisciplinary work in disaster management. Second, it notes that the particular characteristics and abilities of researchers—including such things as interpersonal skills—are very important for such interdisciplinary research. Third, it describes how studies that keep research teams relatively small and have stable membership appear to be more successful at integration and research.
Any research agenda aimed at improving the long-term effectiveness of IT use in disaster management must be placed in the context of the technology pipeline and must prioritize the items in the agenda against each other, in particular noting where progress in one area may be dependent on progress in other areas or on organizational advances. An efficient approach to investment requires a clear vision of the path to improvement and a detailed understanding of the individual pieces of the problem and their interrelationships, together with a mechanism to measure progress.
Disaster management is, ultimately, a system-level problem; improving IT use in disaster management thus requires a system-level approach. The research agenda is likely to have the most impact if it conforms to a clear vision of the path to improvement, defined in a fully articulated roadmap. Establishing a process for making improvements allows currently unimagined concerns to be addressed efficiently as they arise and as both technology and practice evolve.
A technology roadmap is a planning tool that can provide information to make better technology investment decisions by identifying critical technologies, technology gaps, and interdependencies between technologies that dictate coordination of research and development cycles. It can also help uncover interconnections between technologies and adoption issues related to organizational or human behavior characteristics. Perhaps most importantly, it can serve as a mechanism through which diverse participants, often with conflicting priorities but with common goals (i.e., saving lives and reducing economic and other impacts), can cooperate to address a larger problem of common interest—in this case the most effective handling of disasters possible.
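One small, concrete piece of such roadmap analysis, finding a development sequence consistent with interdependencies among technologies, can be sketched as a dependency graph. The capability names below are invented purely for illustration; the point is that a roadmap's interdependencies can be represented explicitly and checked mechanically for a feasible investment ordering.

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies: each capability maps to what it builds on.
deps = {
    "interoperable incident network": {"software-defined radios", "common data standard"},
    "cross-agency data mining":       {"common data standard"},
    "software-defined radios":        set(),
    "common data standard":           set(),
}

# A topological order is one valid investment sequence respecting dependencies:
# prerequisite technologies appear before the capabilities built on them.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

A real roadmap would, of course, layer costs, timelines, and organizational readiness on top of such a dependency structure, but even this toy form makes gaps and coordination points visible.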
Until relatively recently, the technology choices facing most disaster management organizations were comparatively few, with much of the investment focused on building specialized communications systems in close partnership with a small set of vendors. Today, there is a much wider set of technology options available. There is also an increasing need for the diverse organizations with public safety and homeland security responsibilities to be able to cooperate during large-scale disasters. In
response, efforts have been made to identify appropriate technologies (such as DHS’s Select Equipment List).
An effective, useful roadmap is driven by a clear set of user-driven (not technology-driven) goals and needs to evolve continuously as a living document in consultation with the full range of stakeholders. Some pieces of a roadmap are in place (e.g., the National Incident Management System), but an overarching strategic vision of how IT can best be evolved and applied to disaster management is missing.
A roadmap can serve as an anchor for a strategic vision and help policy makers avoid lurching from one priority to the next, driven by the most recent major disaster. Unfortunately, in the absence of a roadmap, more or less haphazard, reactive IT investment is occurring and is likely to continue.10 New incidents (like a major hurricane) should trigger a reevaluation of the existing roadmap, potentially leading to some adjustments in priorities within the strategic framework, but the continuity of investments should result in continuous and more predictable improvements in the application of IT to disaster management.
A roadmap can also assist policy makers and planners in balancing investments across different technologies appropriate at different points in the disaster management life cycle—that is, mitigation, preparedness, response, and recovery. It can also make explicit the investment choices concerning tradeoffs among competing priorities and among tensions, such as security versus openness, identified previously.
Finally, a roadmapping process provides an opportunity to consider the interrelationships between technology and organizational models and technology and policy. Successful technology development and deployment are widely understood to require active consideration of the organizational context in which they will be introduced. Similarly, potential policy barriers must be considered when developing new technologies and organizational approaches.
Critical to the success of a roadmap activity is the inclusion of a broad array of stakeholders and an institutional home to get started and remain viable. All participants must make a long-term commitment to the resulting roadmap and to its continuing evolution as technological advances and organizational innovations are made.
The fiscal year 2007 Senate Appropriations Bill for the Department of Homeland Security shows evidence of this type of planning, focusing on hurricanes and immigration. See Michael Arnone, “DHS Bill Slashes Research Funds,” Federal Computer Week, July 17, 2006, p. 11; available at http://www.fcw.com/article95287-07-17-06-Print.
Examples of Successful Technology Roadmapping
Roadmapping is a technique frequently used by firms to plan future research and development activities. The U.S. military, for example, uses a roadmapping approach in its Quadrennial Defense Review report11 to drive plans for incorporating technology advances into its future capabilities. The International Electronics Manufacturing Initiative has developed a sensors technology roadmap that examines technology capabilities and applications in a variety of sectors, including transportation, health care, and consumer electronics.12
Perhaps the most familiar application of roadmapping is the semiconductor industry’s roadmap. In the late 1980s, it became clear that the integrated circuit industry was not only a rapidly growing part of the global economy but also critically important to the economy and national security of the United States. Unfortunately, concerns grew that the United States, after an initial leading role, had fallen behind in technology leadership relative to other countries, especially those in Asia. A 1990 National Research Council report outlined the consequences of not maintaining a commercial and technological lead in this area.13 The Semiconductor Industry Association (SIA) would take on the role of pulling together a long-term technology roadmap for the industry based both on end-user needs and technology trends.
This was not an industrial plan in the usual sense, but rather an agreed-on, coordinated vision that would help each organization plan development and investment strategies to bring together, at the right time and the right level of development, the thousands of pieces of technology needed to make an integrated circuit. A technology piece developed too early would be prohibitively expensive; developed too late, it would not be profitable. Market forces would ensure that vendors and suppliers tooled up to meet a particular need at the right time, and even researchers understood the targets for conventional technology and could choose areas for research that, if successful, would have the most impact. Unlike other attempts at planning, this was not directed at a specific technology goal, but rather at the process of continuously improving a key technology over a span of decades. Each step along the way would have important economic and strategic value and would form the foundation for the next important advance.

The 2006 Quadrennial Defense Review Report notes in its preface that the ideas and proposals in the report are provided as a roadmap for change. The report is available at http://www.comw.org/qdr/qdr2006.pdf.

Charles E. Richardson et al., “Sensor Technology Roadmapping Efforts at iNEMI,” IEEE Transactions on Components and Packaging Technologies 28(2): 372-375, June 2005.

Computer Science and Telecommunications Board, National Research Council, Keeping the U.S. Computer Industry Competitive: Defining the Agenda, National Academy Press, Washington, D.C., 1990.
Built into the creation of the roadmap was a process that drove continuous updates and refinements, making it a living document with continued relevance and ensuring that it was up to date with advances in science, technology, and market needs. By the end of the 1990s, after three major updates and the associated strengthening of the U.S. semiconductor industry, there was a push to expand the roadmap process to a global scale so that the vision would match the expanding scale of the industry. Today it is a joint effort of industry, government, and academic representatives from the United States, Europe, Korea, Japan, and Taiwan. It is the critical common view driving ongoing investment in research, development, and manufacturing in one of the largest and most complex components of the global economy.
The roadmap for disaster management would be quite different from that of the semiconductor industry. The SIA roadmap became possible when an entire industry needed to plan for future generations of fabrication equipment and realized that this highly capital-intensive equipment was beyond the means of any one industry participant; it required cooperation with other participants. This created an environment where cooperation within a specific framework, embodied by the SIA roadmap, was possible, while allowing continued competition in areas outside of that framework.
In contrast, disaster management organizations and the associations that represent them would necessarily drive the roadmapping process envisioned here. Yet, the key element remains—the need to create a framework within which cooperation can happen in order to address common goals that are otherwise unattainable or suboptimal. By joining together to develop a roadmap, they would have a forum for speaking with a common and consistent voice to the vendor community about technology needs.
There are also similarities from a process perspective: stakeholders create a living document that explicitly lays out a vision for continuous progress based on balancing value and cost, as well as careful consideration of technical and organizational feasibility. Investment from all sectors can then be committed to track this vision as it evolves. A successful roadmapping process would ultimately result in full and active participation of the vendor community, just as the SIA roadmap process eventually came to include the entire worldwide semiconductor industry, including the perceived “adversaries” who spurred its development.
RESEARCH CENTERS: COUPLING TECHNOLOGY RESEARCH WITH PRACTICE
Successful development, adoption, and utilization of IT for disaster management require that several different communities be in regular and close contact with one another. Researchers tend to look for overarching themes, but experience in the field of disaster management has demonstrated the importance of starting with real problems faced by real practitioners and working back from there to overarching themes; starting with overarching research themes is likely to lead to dead ends and unimplementable technology.14 Practitioners must help define needs for new technology, thereby serving as inspiration for researchers and developers. They must interact with developers and vendors throughout the prototyping and development process to ensure that their needs are indeed addressed. IT researchers must have opportunities to expose practitioners to novel concepts in order to generate an understanding of potential new capabilities and how they might fit into current and future operations. Public administrators, social scientists, and IT researchers all play important roles in ensuring that IT innovations are introduced with the organizational changes necessary for new devices and systems to be smoothly integrated into practice.
Forging organizational ties is harder in disaster management than it is in sectors like defense because the vast majority of practitioners are distributed across local agencies that are normally fairly isolated from one another and from the research community. Nevertheless, integrating the experiences and needs of these different agencies is crucial, because when a sufficiently severe disaster strikes, they will inevitably have to work together. Some regional groups of organizations that have already experienced the need to work together have successfully begun forging such ties, suggesting that a bottom-up approach is likely to be most effective.
Moreover, successful IT development is iterative. It is important to provide practitioners with initial prototypes to bootstrap the iterative process. Testbeds and exercises are particularly critical in the area of disaster management because they provide opportunities for feedback from actual users about critical requirements of responders that may not otherwise be apparent. In some cases, large-scale testbeds are required to understand issues that only emerge at scale. Simulations present opportunities not only for training but also for observation and assessment of IT capabilities such as decision support tools. Operational facilities that permit instrumentation, experimentation, and iteration are needed.
Collaborative research centers could, therefore, play a highly useful role in advancing the effective application of IT to disaster management. The major goals of such centers would be sixfold—(1) to develop a shared understanding of the experiences and challenges in all phases of disaster management from both a technological and organizational perspective, (2) to evaluate the application of technology advances to disaster management practice, (3) to develop a culture and processes for transitioning knowledge and technology to the operational communities on a sustained basis, (4) to build human capital at the intersection of information technology and disaster management, (5) to serve as repositories for data and for lessons learned from past disasters and disaster management efforts, and (6) to provide forward-looking analysis to inform the development of technology capabilities, associated organizational processes, and roadmap development.
The research conducted by these centers would be multidisciplinary, combining the efforts of information scientists, engineers, and social scientists. Participants would be charged with collecting knowledge and experience from past disasters and using it to build a core body of knowledge that would inform the development of technology capabilities and associated organizational processes to enhance the management of future events. The centers would partner closely with federal, state, and local agencies responsible for disaster management. Indeed, experienced and capable emergency management officials and operational units from disaster management organizations should be deeply involved in the work of these centers. One approach for engaging these government agencies could be to provide them with incremental funds specifically for working with researchers and developing next-generation technologies.
To ensure that the work of the centers is informed by and responsive to the needs of disaster management, centers would bring in disaster management professionals from all levels of government as visiting fellows. To inform additional researchers about the problems of disaster management, university faculty and students would be offered internships and fellowships. Finally, to help encourage development of technology based on the research results, the involvement of relevant industry would be promoted through informational activities and the sharing of expertise and results.
Multiple centers for research would have several advantages over a single research center. They would enable healthy intellectual competition and cross-fertilization of ideas and allow for specialization in specific types of disasters, specific technology capabilities, or the comprehensive needs of particular geographical areas. Certain research centers could, for example, specialize in disasters common to their locations, in order to benefit from expertise residing in local emergency response organizations and other local government agencies. For instance, a center near known earthquake-prone areas might focus on technology for improving earthquake-specific disaster management. Different centers could specialize in practical and response-oriented work, combining core as well as geography-specific expertise. Close coordination and sharing of information and expertise among centers would help avoid unnecessary duplication.
A major goal of these centers would be to develop a culture of continuously transitioning knowledge and IT between researchers and operational communities. This is very different from the usual academic model of licensing technology to a third party or creating a start-up. What must be encouraged instead is a continuous process of reviewing user requirements, generating knowledge, collaborating, validating, gaining acceptance, implementing, and incorporating new user needs.
Field research—working on large problems outside the labs—appears to be particularly valuable to making progress on using IT for disaster management. It pushes researchers in new directions. It also stresses the technology under the extreme conditions inherent in disaster situations, exposing issues unlikely to be discovered in a laboratory setting. Practitioners' participation in such research gives them an opportunity to see the potential of new information technologies and a chance to influence their direction. The goal is to close the gap between researchers and practitioners and create a unified core community that can speed up the delivery of research results of immediate relevance to disaster management. Panelists at the workshop held by the committee cited the Disaster Management Interoperability Services (DMIS) Program and the Biological Warning and Incident Characterization (BWIC) projects as successful examples of programs carrying out field research that involved the public safety community.15 The Strong Angel exercises mentioned in Chapter 2 are another example of how technologies still in the development stage can be tested in the field and can begin to gain acceptance in the practitioner community that is ultimately indispensable to adoption, while also providing researchers with feedback on the proper direction for further research and development.
Finally, as the use of advanced sensors, communication technology, and similar IT increases, it becomes ever easier to collect data about the
process of dealing with a disaster in a completely unobtrusive manner. Such data ought to form the basis for studies that will ultimately improve the disaster management process, and they should also be used to help evaluate newly proposed technologies and methodologies. Centers should serve as repositories for these data.
Several research centers devoted to certain aspects of disaster management already exist. Some well-known centers are the Natural Hazards Center at the University of Colorado at Boulder; the Disaster Research Center at the University of Delaware, which investigates the social science aspects of disasters; the Hazard Reduction and Recovery Center at Texas A&M University; and Dartmouth College’s Institute for Security Technology Studies.16
Such centers could provide the basis for a network of research centers where IT researchers, hazard and disaster researchers, and disaster management practitioners can collaborate to study and evaluate the use of IT for disaster management from both a technological and an organizational perspective; transition knowledge and technology to those who practice disaster management; build human capital at the intersection of IT and disaster management; and develop future IT capabilities.
Texas A&M University provides a Web site at http://archone.tamu.edu/hrrc/related-sites/Centers.html#Domestic with links to domestic and international disaster research centers. The Natural Hazards Center at the University of Colorado at Boulder provides links to Web sites of U.S. and international organizations dealing with hazards and disasters, including academic research centers; see http://www.colorado.edu/hazards/resources/centers.