The task statement for this study charges the committee to develop a roadmap built on the goals and objectives of the 2008 NEHRP Strategic Plan. In this context, a roadmap is a plan that describes the actions and activities that will be needed to achieve the plan’s objectives. Further, the charge requires an estimate of costs, recognizing that some activity costs can be specified fairly accurately (e.g., based on previous detailed studies), whereas others can only be estimated roughly. Also, some activities are scalable, that is, they can be conducted at varying levels of effort or units.
At the outset of its work, the committee was briefed on the NEHRP Strategic Plan and subsequently reviewed the plan, with supporting documents, in detail. The committee then considered the steps that would be required to make the nation and its communities more resilient to the impacts of earthquakes, based on the collective expertise of committee members and taking into account the substantial input from a community workshop (see Appendix D), but without constraining its thinking to the specifics of the Strategic Plan. In the end, 18 broad, integrated tasks, or focused activities, were identified as the elements of a roadmap to achieve earthquake resilience. These tasks are focused on specific outcomes that could be achieved in a 20-year timeframe, with substantial progress realizable within 5 years. We consider these tasks to be critical to achieving a nation of more earthquake-resilient communities.
Although the committee did not set out to explicitly ratify the elements of the Strategic Plan, in the end it embraced and supported them. The goals address loss reduction by expanding knowledge,
developing enabling technologies, and applying them in vulnerable communities. The objectives identify the logical elements in fulfilling these goals.
The committee endorses the 2008 NEHRP Strategic Plan, and identifies 18 specific task elements required to implement that plan and materially improve national earthquake resilience.
The tasks identified are:
1. Physics of Earthquake Processes
2. Advanced National Seismic System
3. Earthquake Early Warning
4. National Seismic Hazard Model
5. Operational Earthquake Forecasting
6. Earthquake Scenarios
7. Earthquake Risk Assessments and Applications
8. Post-earthquake Social Science Response and Recovery Research
9. Post-earthquake Information Management
10. Socioeconomic Research on Hazard Mitigation and Recovery
11. Observatory Network on Community Resilience and Vulnerability
12. Physics-based Simulations of Earthquake Damage and Loss
13. Techniques for Evaluation and Retrofit of Existing Buildings
14. Performance-based Earthquake Engineering for Buildings
15. Guidelines for Earthquake-Resilient Lifeline Systems
16. Next Generation Sustainable Materials, Components, and Systems
17. Knowledge, Tools, and Technology Transfer to/from the Private Sector
18. Earthquake-Resilient Community and Regional Demonstration Projects
The tasks generally cut across the goals and objectives described in the 2008 NEHRP Strategic Plan because they are formulated as coherent activities that span from knowledge building to implementation. The linkage between the goals and objectives, on the one hand, and the tasks, on the other, is shown in the following matrix (Table 3.1). The matrix is richly populated, illustrating the cross-cutting nature of the tasks.
Each of the 18 tasks is described below under a series of subheadings: proposed activity and actions, existing knowledge and current capabilities, enabling requirements, and implementation issues.
Goal A of the 2008 NEHRP Strategic Plan is to “improve understanding of earthquake processes and impacts.” Earthquake processes are difficult to observe; they involve complex, multi-scale interactions of matter and energy within active fault systems that are buried in the solid, opaque earth. These processes are also very difficult to predict. In any particular region, the seismicity can be quiescent for hundreds or even thousands of years and then suddenly erupt as energetic, chaotic cascades that rattle through the natural and built environments. In the face of this complexity, research on the basic physics of earthquake processes and impacts offers the best strategy for gaining new knowledge that can be implemented in mitigating risk and building resiliency (NRC, 2003).
The motivation for such research is clear. Earthquake processes involve the unusual physics of how matter and energy interact during the extreme conditions of rock failure. No theory adequately describes the basic features of dynamic rupture and seismic energy generation, nor is one available that fully explains the dynamical interactions within networks of faults. Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. Even in regions where we know a big earthquake will eventually strike, its impacts are difficult to anticipate. The hazard posed by the southernmost segment of the San Andreas Fault is recognized to be high, for example—more than 300 years have passed since its last major earthquake, which is longer than a typical interseismic interval on this particular fault. Physics-based numerical simulations show that, if the fault ruptures from the southeast to the northwest—toward Los Angeles—the ground motions in the city will be larger and of longer duration, and the damage will be much worse, than if the rupture propagates in the other direction (Figure 3.1). Earthquake scientists cannot yet predict which way the fault will eventually go, but credible theories suggest that such predictions might be possible from a better understanding of the rupture process. Clearly, basic research in earthquake physics will continue to extend the practical understanding of seismic hazards.
To move further toward NEHRP Goal A and improve the predictive capabilities of earthquake science, the National Science Foundation (NSF) and the U.S. Geological Survey (USGS) should strengthen their current research programs on the physics of earthquake processes. Bolstering research in this area will “advance the understanding of earthquake phenomena and generation processes,” which is Objective 1 of the 2008 NEHRP Strategic Plan. Many of the outstanding problems can be grouped into four general research areas:
TABLE 3.1 Matrix Showing Mapping of the 18 Tasks Identified in This Report Against the 14 Objectives in the NEHRP Strategic Plan (NIST, 2008)
1. Advance understanding of earthquake phenomena and generation processes
2. Advance understanding of earthquake effects on the built environment
3. Advance understanding of the social, behavioral, and economic factors linked to implementing risk reduction and mitigation strategies in the public and private sectors
4. Improve post-earthquake information acquisition and management
5. Assess earthquake hazards for research and practical application
6. Develop advanced loss estimation and risk assessment tools
7. Develop tools that improve the seismic performance of buildings and other structures
8. Develop tools that improve the seismic performance of critical infrastructure
9. Improve the accuracy, timeliness, and content of earthquake information products
10. Develop comprehensive earthquake risk scenarios and risk assessments
11. Support development of seismic standards and building codes and advocate their adoption and enforcement
12. Promote the implementation of earthquake-resilient measures in professional practice and in private and public policies
13. Increase public awareness of earthquake hazards and risks
14. Develop the nation’s human resource base in earthquake safety fields
Goals: A. Improved Understanding—Processes and Impacts; B. Develop Cost-Effective Measures to Reduce Impacts; C. Improve Community Resilience
1. Physics of Earthquake Processes
2. Advanced National Seismic System
3. Earthquake Early Warning
4. National Seismic Hazard Model
5. Operational Earthquake Forecasting
6. Earthquake Scenarios
7. Earthquake Risk Assessment and Applications
8. Post-Earthquake Social Science Response and Recovery Research
9. Post-Earthquake Information Management
10. Socioeconomic Research on Hazard Mitigation and Recovery
11. Observatory Network on Community Resilience and Vulnerability
12. Physics-Based Simulations of Earthquake Damage and Loss
13. Techniques for Evaluation and Retrofit of Existing Buildings
14. Performance-Based Earthquake Engineering for Buildings
15. Guidelines for Earthquake-Resilient Lifeline Systems
16. Next Generation Sustainable Materials, Components, and Systems
17. Knowledge, Tools, and Technology Transfer to/ from the Private Sector
18. Earthquake-Resilient Community and Regional Demonstration Projects
FIGURE 3.1 Maps of Southern California showing the ground motions predicted for a magnitude-7.7 earthquake on the southern San Andreas Fault; high values of shaking are purple to red, low values blue to black. The left panel shows faulting that begins at the southeast end and ruptures to the northwest. The right panel shows faulting that begins at the northwest end and ruptures to the southeast. The ground motions predicted in the Los Angeles region are much more intense and have longer duration in the former case. SOURCE: Courtesy of K. Olsen and T.H. Jordan.
• Fault system dynamics: how tectonic forces evolve within complex fault networks over the long term to generate sequences of earthquakes. The tectonic forces that drive earthquakes are still poorly understood. They cannot be directly measured and are influenced by unknown heterogeneities within the seismogenic upper crust as well as by slow deformation processes. The latter include intriguing new discoveries—aseismic transients such as “silent earthquakes,” as well as newly discovered classes of episodic tremor and slip. How these slow phenomena are coupled to the earthquake cycle is currently unknown; a better understanding could potentially provide new types of data for improving time-dependent earthquake forecasting. A major issue is how the distribution of large earthquakes depends on the geometrical complexities of fault systems, such as fault bends, step-overs, branches, and intersections. In many cases, fault segmentation and other geometrical irregularities appear to control the lengths of fault ruptures (and thus earthquake magnitude), but large ruptures often break across segment boundaries and branch to and from subsidiary faults. For example, the magnitude-7.9 Denali earthquake in Alaska initiated as a rupture on
the Susitna Glacier Thrust Fault; the rupture branched onto the main strand of the Denali Fault, and then branched again onto the subsidiary Totschunda Fault. A key objective is to develop numerical models of a brittle fault system that can simulate earthquakes over many cycles of stress accumulation and fault rupture for the purpose of constraining the earthquake probabilities used in time-dependent forecasts (see Task 5). An example of a sequence of earthquakes on the San Andreas Fault system from such an “earthquake simulator” is shown in Figure 3.2.
• Earthquake rupture dynamics: how forces produce fault ruptures and generate seismic waves during an earthquake. The nucleation, propagation, and arrest of fault ruptures depend on the stress response of rocks approaching and participating in failure. In these regimes, rock behavior can be highly nonlinear, strongly dependent on temperature, and sensitive to minor constituents such as water. A major problem is to understand how the microscopic processes of fault weakening control the dynamics of rupture. Are mature faults statically weak because of their compositions and elevated pore pressures, or are they statically strong but slip at low average shear stress because of dynamic weakening during rupture? Many potential weakening mechanisms have been identified—
FIGURE 3.2 Example output from an earthquake simulator showing a sequence of large earthquakes during a 4-month period on the southern San Andreas Fault. There were 72 aftershocks in the 2-day interval between the magnitude-7.8 and magnitude-7.5 events, and 183 aftershocks during the 100-day interval before the magnitude-7.6 event. The three snapshots displayed here were part of a longer simulation that included 227 earthquakes greater than magnitude-7. Of these, 137 were isolated by at least 4 years; 34 were pairs, and 5 were triplets such as this one. SOURCE: Courtesy of J. Dieterich.
the thermal pressurization of pore fluids, thermal decomposition, flash heating of contact asperities, partial melting, elasto-hydrodynamic lubrication, silica gel formation, and normal-stress changes due to bimaterial effects—but the physics of these processes, and their interactions, remains poorly understood. A combination of better laboratory experiments, field observation of exhumed faults, and numerical models will be required, including studies of how ruptures propagate along geometrically complex faults with distributed damage zones and off-fault plastic deformation. A priority is to validate models for application in ground motion forecasting (see Tasks 4 and 5).
• Ground motion dynamics: how seismic waves propagate from the rupture volume and cause shaking at sites distributed over a strongly heterogeneous crust. Seismic hazard analysis currently relies on empirical attenuation relationships to account for event magnitude, fault geometry, path effects, and site response. These generic relationships do not adequately represent the physical processes that control ground shaking: rupture complexity and directivity, basin effects, the role of small-scale crustal heterogeneity, and the nonlinear response of the surface layers (such as soft soils). Physics-based numerical simulations of the generation and propagation of seismic radiation have now advanced to the point where they are becoming useful in predicting the strong ground motions from anticipated earthquake sources (e.g., Figure 3.1). The physics needs to account for the complexities of rupture propagation along the fault, wave propagation through the heterogeneous crust, response of the surface rocks and soils, and response of the buildings embedded in those soils. An important objective is to couple numerical models of these physical processes in end-to-end (“rupture-to-rafters”) earthquake simulations (see Task 12).
• Earthquake predictability: the degree to which the future occurrence of earthquakes can be determined from the observable behavior of earthquake systems. Because earthquakes cannot be deterministically predicted, forecasting requires a probabilistic (i.e., statistical) characterization of earthquake sources in terms of space, time, and magnitude (Jordan et al., 2009). Long-term earthquake forecasting is the basis for seismic hazard analysis. Current forecasts, such as those used in all three iterations of the National Seismic Hazard Maps (Frankel et al., 1996, 2002; Petersen et al., 2008), are time-independent; i.e., they assume earthquakes occur randomly in time and are independent of past seismic activity. This assumption is known to be false—almost all large earthquakes have many aftershocks, some of which can be damaging, and they often occur in clustered sequences. For example, the three largest earthquakes in the historical record of the central United States—each magnitude-7.5 or larger—occurred in the New Madrid region between mid-December, 1811, and mid-February, 1812, within a period of just 2 months.
Time-dependent forecasts that account for the occurrence of past earthquakes using stress renewal models have been developed for California (see Figure 3.10 under Task 5). However, according to these long-term models, large earthquakes on major faults decrease the probability of additional events on that fault, and they therefore cannot adequately represent the increased probability of event sequences, such as New Madrid or the hypothetical sequence illustrated in Figure 3.2. The goal of research on earthquake predictability is to develop a consistent set of probabilistic models that span the full range of forecasting timescales: long-term (centuries to decades), medium-term (years to months), and short-term (weeks to hours). Bridging the current gap between long-term renewal models, such as the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF2; see Task 5), and short-term models based on triggering and clustering statistics, such as the USGS Short-Term Earthquake Probability (STEP) forecast for California (Gerstenberger et al., 2007; see Figure 3.11 under Task 5), will require a better understanding of how earthquake probabilities depend on the quasi-static stress transfer caused by permanent fault slip and related relaxation of the crust and mantle, as well as the dynamic stress triggering caused by the passage of seismic waves.
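The contrast between time-independent and time-dependent (renewal) forecasting can be illustrated with a toy calculation. The numbers and the lognormal recurrence model below are illustrative assumptions only, not values from UCERF2 or any published forecast; operational models use distributions such as the Brownian passage time:

```python
import math

def poisson_prob(rate_per_year, window_years):
    """Time-independent forecast: events occur randomly in time (Poisson),
    so the probability of at least one event in the window depends only
    on the long-term rate, not on when the last earthquake occurred."""
    return 1.0 - math.exp(-rate_per_year * window_years)

def renewal_conditional_prob(elapsed, window, mean_interval, sigma=0.5):
    """Time-dependent renewal forecast: the probability of the next event
    depends on the time elapsed since the last one. Here the recurrence
    interval is modeled as lognormal with log-standard-deviation `sigma`
    (an illustrative choice)."""
    mu = math.log(mean_interval)
    def cdf(t):  # lognormal cumulative distribution function
        return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))
    # P(next event within `window` years | no event in first `elapsed` years)
    return (cdf(elapsed + window) - cdf(elapsed)) / (1.0 - cdf(elapsed))

# A hypothetical fault with a ~200-year mean recurrence interval,
# evaluated over a 30-year forecast window:
p_poisson = poisson_prob(1.0 / 200.0, 30.0)            # ~0.14, regardless of history
p_early = renewal_conditional_prob(50.0, 30.0, 200.0)  # soon after the last event
p_late = renewal_conditional_prob(300.0, 30.0, 200.0)  # long after the last event
```

The renewal probability is well below the Poisson value early in the earthquake cycle and well above it once the elapsed time exceeds the mean interval, which is why an interval of more than 300 years without a major southern San Andreas earthquake is cause for concern.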
Many of the potential advances in earthquake forecasting, seismic hazard characterization, and dissemination of post-earthquake information will depend on harnessing the predictive power of earthquake physics.
• Physics-based earthquake simulations can be used as tools to improve the rapid delivery of post-earthquake information for emergency management and to enable the new technology of earthquake early warning (Task 3).
• Ground motion dynamics can be used to transform long-term seismic hazard analysis into a physics-based science that can characterize earthquake hazard and risk with better accuracy and geographic resolution (Task 4).
• Research on earthquake predictability can yield better models for operational earthquake forecasting, which can help communities live with natural seismicity and prepare for potentially destructive earthquakes (Task 5).
Taken together, the technologies of Tasks 3-5 can deliver timely information needed to improve societal resilience during all phases of the earthquake cascade (Figure 3.3).
Research on earthquake physics can also contribute directly to four other NEHRP objectives. Better dynamical models of earthquake ruptures
FIGURE 3.3 A diagram of the earthquake cascade showing the time domains of four geotechnologies that can improve earthquake resilience (as described under Tasks 3, 4, 5, and 8). A better understanding of earthquake physics will be needed to implement and improve these technologies. SOURCE: Courtesy Southern California Earthquake Center.
and seismic wave propagation can advance the understanding of earthquake effects on the built environment (Objective 2). A better understanding of earthquake predictability can guide the development of improved forecasting models needed for assessing earthquake hazards for research and practical application (Objective 5). Physics-based models capable of tracking earthquake cascades in real time can be used to improve the accuracy, timeliness, and content of earthquake information products (Objective 9). And more accurate earthquake simulations can provide a physical basis for developing comprehensive earthquake risk scenarios and risk assessments (Objective 10).
Existing Knowledge and Current Capabilities
Since its inception in 1977, NEHRP has been organized to gain new knowledge about earthquake hazards and risks and to implement
this knowledge through effective risk mitigation and rapid earthquake response. Some of the advances in understanding earthquake processes have been highlighted in the 2008 NEHRP Strategic Plan (see Chapter 1 and Appendix A), as well as in the Earthquake Engineering Research Institute (EERI) (2003b) report (see Appendix B). The National Research Council (NRC) (2003) report, Living on an Active Earth: Perspectives on Earthquake Science, provides one of the most expansive treatments of earthquake science and its rise under NEHRP.
The science of earthquakes, like the study of many other complex natural systems, is still in its juvenile stages of exploration and discovery. Until recently, research was focused on two primary problems: (a) earthquake complexity and how it arises from the brittle response of the lithosphere to deep-seated forces and (b) the forecasting of earthquakes and their site-specific effects. Investigations of the first problem began with attempts to place earthquake occurrence in a global framework and contributed to the discovery of plate tectonics, while work on the second addressed the needs of earthquake engineering and led to the development of seismic hazard analysis. The historical separation between these two lines of inquiry has been narrowed by progress on dynamical models of earthquake occurrence, fault ruptures, and strong ground motions. This research has transformed the field from a haphazard collection of disciplinary activities into a more coordinated “earthquake system science” that seeks to describe seismic activity not just in terms of individual events, but also as an evolutionary process involving dynamical interactions within networks of interconnected faults. Such a system-level approach recognizes that earthquakes are emergent phenomena that depend on a wide range of interactions, from the microscopic inner scale (frictional contact asperities breaking over microseconds) to the fault-system outer scale (regional tectonic loading and relaxation over hundreds of kilometers and thousands of years).
Much has been learned from multidisciplinary investigations coordinated in the aftermath of large earthquakes, and this experience makes clear the importance of standardized instrumental data and geologic field work. Research has been accelerated through the development of new observational and computational technologies. Subsurface imaging can now be applied with sufficient resolution to delineate the deep architecture of fault systems and the three-dimensional earth structure that controls the propagation of seismic waves. In well-studied regions of the western United States, neotectonic research has improved constraints on fault geometries and long-term slip rates, and paleoseismology has furnished an extended record of past earthquakes (McCalpin, 2009), providing evidence for the clustering of large events in “seismic storms.” The Global Positioning System (GPS) and Interferometric Synthetic Aperture
Radar (InSAR) satellites are mapping, with unprecedented resolution, the crustal deformations associated with individual earthquakes, long-term tectonic loading, and the stress interactions among nearby faults. Networks of broadband seismometers have been deployed to record earthquake ground motions faithfully at all frequencies and amplitudes (see Figure 3.4 under Task 2). By using high-performance computing and communications, scientists now have the means to process massive streams of observations in real time and, through numerical simulation, to quantify the many aspects of earthquake physics that have been resistant to standard analysis. New discoveries include slow slip transients that propagate at velocities systematically lower than ordinary fault ruptures.
Large earthquakes can be forecast on timescales of decades to centuries by combining the information from the geological record with data from seismic and geodetic monitoring (see Figure 3.10 under Task 5). Earthquake scientists have begun to understand how geological complexity controls the strong ground motion during large earthquakes (Figure 3.1) and, working with engineers, how to predict the site-specific response of buildings, lifelines, and critical facilities to seismic excitation. The long-term expectations for potentially destructive shaking have been quantified in the form of seismic hazard maps, which display estimates of the maximum shaking intensities expected at each locality in the United States (see Figure 3.8 under Task 4). Once a large earthquake has occurred, automated systems can rapidly and accurately compute hypocenter location, fault-plane orientation, and other source parameters. Predicted distributions of the extent of strong ground motions can be broadcast in near real time, helping to anticipate damage and guide emergency response (e.g., Figure 3.1). In the case of distant, sub-oceanic earthquakes, post-event predictions of the earthquake-generated sea waves (tsunamis) can warn coastal communities with sufficient lead times to permit evacuations. All of these advances have benefited from NEHRP-sponsored research in earthquake physics.
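Seismic hazard maps of this kind are commonly expressed as the shaking level with a stated probability of exceedance over a time window. Under the time-independent (Poisson) assumption, that probability and the corresponding mean return period are related by a simple formula, sketched below; the widely used “2% in 50 years” map level corresponds to a return period of roughly 2,475 years:

```python
import math

def exceedance_probability(window_years, return_period_years):
    """Probability of at least one exceedance within the window,
    assuming Poisson (time-independent) occurrence."""
    return 1.0 - math.exp(-window_years / return_period_years)

def return_period(window_years, probability):
    """Inverse relation: mean return period implied by a stated
    probability of exceedance in the window."""
    return -window_years / math.log(1.0 - probability)

rp = return_period(50.0, 0.02)             # ~2,475 years
p = exceedance_probability(50.0, 2475.0)   # ~0.02
```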
Knowledge of earthquake processes is highly data limited, and there is an urgent and continuing need for better observations of earthquakes, especially through remote sensing of deformation and seismicity, and detailed field-based studies of fault rupture processes. Essential observations are provided by seismology, tectonic geodesy, and earthquake geology. The general objectives recommended in NRC (2003) have not yet been achieved:
• An Advanced National Seismic System (ANSS) capable of recording all earthquakes down to moment magnitude-3 and up to the largest anticipated magnitude with fidelity across the entire seismic bandwidth and with sufficient density to determine the source parameters of these events. The location threshold for regional networks should reach down to magnitude-1.5 in areas of high seismic risk. Full implementation of the current ANSS plan (see Task 2) would provide this capability.
• Geodetic instrumentation for observing crustal deformation within active fault systems with high enough spatial and temporal resolution to measure all significant motions, including aseismic events and the transients before, during, and after large earthquakes. Critical new data on earthquakes are coming from the denser networks of GPS receivers and strainmeters that have been deployed since 2005 in the Plate Boundary Observatory of NSF’s EarthScope Program (EarthScope, 2007). Spatial imaging of differential motions by satellite-based InSAR has demonstrated its potential for the study of fault deformation (e.g., Helz, 2005; Pritchard, 2006). However, an InSAR satellite for collecting crustal deformation data, proposed in the original EarthScope plan (NRC, 2001), has still not been launched by the United States, and as a result researchers remain dependent on data from European and Japanese satellites (Williams et al., 2010). This reinforces the importance of the planned NASA DESDynI (Deformation, Ecosystems Structure, and Dynamics of Ice) mission, proposed to launch in 2018, to provide a dedicated InSAR platform optimized for studying hazards and global environmental change.
• Programs of geologic field study to locate active faults, quantify fault slip rates, and determine the history of fault rupture over many earthquake cycles. Light Detection and Ranging (LiDAR) techniques are now capable of high-resolution topographic imaging of fault-controlled surface morphology. For example, airborne LiDAR mapping has been used to reduce by 40% the slip along the Carrizo section of the San Andreas Fault previously ascribed to the 1857 Fort Tejon earthquake (magnitude-7.9), implying a higher medium-term probability that another large earthquake will occur on this section of the fault (Zielke et al., 2010). However, LiDAR data have been collected on a synoptic scale for only a few major faults. The methods for dating rock on neotectonic timescales of hundreds to thousands of years have been greatly improved in the past decade, but well-constrained geologic slip rates are still not available for many of the faults known to be active in the United States. Again, only a few have been studied with the paleoseismic techniques needed to resolve the slip history of fault ruptures over many earthquake cycles.
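The logic connecting a revised per-event slip estimate to a higher earthquake probability can be made explicit with simple arithmetic: if a fault releases its long-term slip budget in similar-sized earthquakes, the mean recurrence interval is the per-event slip divided by the geologic slip rate. The numbers below are illustrative assumptions, not the published Carrizo values; only the 40% revision comes from the study cited above:

```python
def mean_recurrence_years(slip_per_event_m, slip_rate_mm_per_yr):
    """Mean recurrence interval for characteristic earthquakes:
    per-event slip divided by the long-term geologic slip rate."""
    return slip_per_event_m * 1000.0 / slip_rate_mm_per_yr

# Hypothetical fault section: 34 mm/yr slip rate, 9 m slip per event.
t_old = mean_recurrence_years(9.0, 34.0)        # ~265 years
# Revising per-event slip downward by 40%, with the slip rate unchanged,
# shortens the implied recurrence interval proportionally:
t_new = mean_recurrence_years(9.0 * 0.6, 34.0)  # ~159 years
```

A shorter recurrence interval, with the date of the last rupture fixed, translates directly into a higher medium-term probability in a renewal-type forecast.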
Large earthquakes are rare events, and the strong motion data from them are sparse. Numerical simulations of large earthquakes in well-studied, seismically active areas are important tools for basic earthquake science, because they provide a quantitative basis for comparing hypotheses about earthquake behavior with the limited observations. Simulations are playing an increasingly crucial role in our understanding of regional earthquake hazard and risk. This convergence of basic and applied science is comparable to the situation in climate studies, where the largest, most complex general circulation models are being used to predict the hazards and risks of anthropogenic global change. Considerable computational power will be needed to fully realize this scientific transformation and put it to practical use. Earthquakes are among the most complex terrestrial phenomena, and modeling of earthquake dynamics is one of the most difficult computational problems in science. Taken from end to end, the problem comprises the loading and eventual failure of tectonic faults, the generation and propagation of seismic waves, the response of surface sites, and—in its application to seismic risk—the damage caused by earthquakes to the built environment (see Task 12). This chain of physical processes involves a wide variety of interactions, some highly nonlinear and multi-scale. For example, long-term fault dynamics is coupled to short-term rupture dynamics through the nonlinear processes of brittle and ductile deformation, which requires earthquake simulators that can span this range of scales (see Figure 3.2).
The implementation of physics-based ground-motion prediction using numerical simulations requires estimates of the three-dimensional structure of the fault network and the material properties—seismic velocities, attenuation parameters, and density distribution—within the tectonic blocks. These structures are interrelated, because material property contrasts are often governed by fault displacements. Therefore, the development of unified structural representations requires cross-disciplinary collaboration between geologists and seismologists.
The key research issues of earthquake science are true system-level problems: they require an interdisciplinary approach to understand the nonlinear interactions among many fault-system components, which themselves are often complex subsystems. Because the behavior of each fault system is contingent on its structure, earthquake studies are necessarily conducted in a system-specific context (e.g., the Cascadia subduction zone or the San Andreas transform-fault system). Therefore, a generic understanding of earthquake processes requires a synthesis of the knowledge obtained from different regions. International collaborations can promote such a synthesis by bringing together data from many fault systems around the world.
NSF and USGS already have well-developed research programs in earthquake physics, and strengthening those programs along the lines described here poses no major implementation issues. That said, these agencies will have to work together more closely to foster highly integrated collaborations that are (1) coordinated across scientific disciplines and research institutions, (2) enabled by high-performance computing and advanced information technology, (3) capable of assimilating new theories and data into system-level models, and (4) positioned to partner with earthquake engineering and risk-management organizations in delivering practical knowledge to society. An additional implementation issue, of course, is the need for information from major earthquakes that can only be provided by the monitoring systems described in Task 2.
Seismic monitoring is vital for meeting the nation’s needs for timely and accurate information about earthquakes, tsunamis, and volcanic eruptions—information to determine their locations and magnitudes and estimate their potential effects. As well as guiding response efforts, this information also provides the basis for research on the causes and effects of earthquakes. ANSS is the USGS initiative to broadly improve the monitoring and reporting of earthquakes in the United States by integrating and modernizing the prior patchwork of state, local, and academic regional seismic networks, and coupling the seismological data with a modern earthquake information center. Begun in 2000, ANSS is modernizing and expanding capabilities nationally by establishing an integrated national system of 7,100 sensors providing data to national and regional centers. ANSS provides real-time information on the distribution and intensity of earthquake shaking to emergency responders so that they can rapidly assess the full impact of an earthquake and speed disaster relief to the most heavily affected areas. ANSS also provides engineers and designers with the information they need to improve building design standards and engineering practices to mitigate the impact of earthquakes, and provides scientists with high-quality data to understand earthquake processes and solid earth structure and dynamics. After analyzing the economic benefits of seismic monitoring, NRC (2006b; p. 8) concluded that
Full deployment of the ANSS offers the potential to substantially reduce earthquake losses and their consequences by providing critical information for land-use planning, building design, insurance, warnings, and emergency preparedness and response. In the committee’s judgment, the potential benefits far exceed the costs—annualized buildings and building-related earthquake losses alone are estimated to be about $5.6
billion, whereas the annualized cost of the improved seismic monitoring is about $96 million, less than 2 percent of the estimated losses. It is reasonable to conclude that mitigation actions—based on improved information and the consequent reduction of uncertainty—would yield benefits amounting to several times the cost of improved seismic monitoring.
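The figures quoted above can be checked with simple arithmetic; the 5 percent loss-reduction scenario in the sketch below is purely illustrative and is not a figure from NRC (2006b):

```python
# Back-of-envelope check of the NRC (2006b) cost-benefit figures quoted above.
annual_losses = 5.6e9       # annualized building-related earthquake losses ($)
annual_monitoring = 96e6    # annualized cost of improved seismic monitoring ($)

cost_fraction = annual_monitoring / annual_losses
print(f"Monitoring cost as a share of losses: {cost_fraction:.1%}")  # 1.7%, i.e., less than 2 percent

# Illustrative scenario (not from the report): if mitigation informed by better
# monitoring averted even 5% of annual losses, the benefit would be ~3x the cost.
averted = 0.05 * annual_losses
print(f"Benefit/cost ratio at a 5% loss reduction: {averted / annual_monitoring:.1f}")
```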
The rate at which ANSS was deployed was relatively modest between 2000 and 2008, but because of substantially increased investment as part of the ARRA (American Recovery and Reinvestment Act) economic stimulus, ANSS will be about 25 percent complete by the end of 2011 (Figure 3.4). By that time, ANSS will consist of more than 1,500 modern digital seismic stations, upgraded regional seismic networks, and a National Earthquake Information Center that is operated 24×7 and delivers information for emergency response to state and local officials, operators of lifeline facilities, the Federal Emergency Management Agency (FEMA), and other critical users.
Deployment of the remaining 75 percent of ANSS is a critical requirement for national resilience, as reflected in the many tasks listed in this chapter that require full ANSS deployment. One important component of ANSS that is still needed is expanded building instrumentation to provide crucial information on how common buildings respond to earthquake shaking.
Existing Knowledge and Current Capabilities
The ANSS plans were developed through a broad consultative process that resulted in a comprehensive description of the infrastructure elements and a detailed deployment strategy (USGS, 1999). Implementation of the plan has been approved through the USGS appropriation process, with availability of funding being the only impediment to full deployment. Because the system is already partially deployed, the technical and scientific knowledge base for ANSS is fully developed and tested.
As part of its monitoring activities, ANSS includes:
• A national “backbone” network of seismological stations.
• The National Earthquake Information Center (NEIC), the central focus for analysis and dissemination of earthquake information.
• The National Strong Motion Project, to monitor and understand the effects of earthquakes on man-made structures in densely urbanized areas to improve public earthquake safety.
• Fifteen regional seismic networks operated by USGS and its partners.
The range of products produced by the USGS Earthquake Hazards Program3 derived from the ANSS network has grown steadily over recent years, as the network elements have been deployed, and these products now serve a diverse scientific, emergency management, and community base:
• Descriptions of Recent Earthquakes. Automatic maps and event information are available online from the Earthquake Hazards Program website within minutes of an earthquake occurring.
• Did You Feel It? Maps and Reports. Presents a compilation of community reports of shaking in the form of a Community Internet Intensity Map (CIIM) that summarizes the questionnaire responses provided by Internet users.
• ShakeMaps. Provides near-real-time maps of ground motion and shaking intensity following significant earthquakes, for use by federal, state, and local organizations, both public and private, for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning.
• ShakeCast. Critical users, e.g., lifeline utilities, can receive automatic notifications within minutes of an earthquake, indicating the level of shaking and the likelihood of impact to their own facilities.
• Hazard Maps. National Seismic Hazard Maps show earthquake ground motions for various probability levels across the United States, for application in the seismic provisions of building codes, insurance rate structures, risk assessments, and other public policy uses (see Task 4).
• PAGER Earthquake Notification. Automated notifications of earthquakes through e-mail, pager, or cell phone, providing rapid information and updates to first responders as well as resources for media and local government.
A broad range of additional data and resources is also available: information about earthquake hazards, historical seismicity, faults, and related topics, organized by state; an online searchable earthquake catalog providing downloadable information and technical data; QuickTime movies created from the recordings of fully instrumented structures during earthquakes; and real-time waveforms and spectrograms.
To be fully functional, the ANSS will require the following additional components:
• Structural instrumentation. ANSS requirements call for extensive instrumentation of buildings, bridges, and other structures in areas of high earthquake risk. This is the least developed component of ANSS; 9,000 data channels are needed, but fewer than 1,000 have been installed to date.
• Expanded urban monitoring. ARRA funding is targeted for the modernization of existing seismic stations, but not for an expansion of the networks. To meet the ANSS requirements, an additional 1,700 stations are needed for deployment in the highest risk urban areas.
• Data management. Currently, a large proportion of the data management needs of the system are being accommodated through the IRIS Data Management System, funded by NSF. At full implementation, USGS needs to assume this funding responsibility, as well as the task of developing seamless data and product access for ANSS.
Full implementation of ANSS simply requires additional funding; there are no technical issues.
ANSS, when fully implemented, will provide the infrastructure necessary for development of earthquake early warning (EEW) systems. The goal of network-based EEW is to detect earthquakes in the early stages of fault rupture, rapidly predict the intensity of the future ground motions, and warn people before they experience the intense shaking that might cause damage. The most damaging shaking is usually caused by seismic shear and surface waves, which travel at only half the speed of the fastest seismic waves, and much slower than an electronic warning message. EEW systems detect strong shaking near an earthquake's epicenter and transmit alerts ahead of the damaging earthquake waves.
Potential warning times depend primarily on the distance between the user and the earthquake epicenter. There is a “blind zone” near an earthquake epicenter where early warning is not feasible, but at more distant sites, warnings can be issued from a few seconds up to about 1 minute prior to the strong ground shaking (Figure 3.5). Such warnings can be used to reduce the harm to people and infrastructure during earthquakes. Potential applications include alerting people to “drop, cover, and hold on,” move to safer locations, or otherwise prepare for shaking (e.g., surgeons in operating rooms), as well as many types of automated actions: stopping elevators at the nearest floor, opening firehouse doors, slowing rapid-transit vehicles and high-speed trains to avoid accidents, shutting down pipelines and gas lines to minimize fire hazards, shutting down manufacturing operations to decrease potential damage to equipment, saving vital computer information to avoid losses of data, and controlling structures by active and semi-active systems to reduce building damage.
FIGURE 3.5 Snapshot of the seismic waves produced by a simulated magnitude-7.8 earthquake on the southern San Andreas Fault (dashed white line) with an epicenter at the southeastern end of the fault (yellow point). The snapshot is taken 85 seconds after the earthquake origin time, just as strong surface waves are arriving in downtown Los Angeles. In this scenario, an EEW system deployed along the southern San Andreas Fault could provide up to a minute of warning at sites in the most urbanized regions of Los Angeles. This particular earthquake simulation was used to define the hazard for the 2008 Great Southern California ShakeOut.
SOURCE: Courtesy of R. Graves, G. Ely, and T.H. Jordan.
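The warning-time geometry described above can be sketched quantitatively. The shear-wave speed below is a representative crustal value, and the 10-second alert delay (P-wave travel time to nearby stations plus processing and telemetry) is an assumption for illustration, not an ANSS specification:

```python
# Illustrative estimate of EEW warning time versus epicentral distance.
VS = 3.5          # shear-wave speed, km/s (carries the damaging shaking); representative value
ALERT_DELAY = 10  # seconds after origin time until the alert is issued (assumed)

def warning_time(distance_km):
    """Seconds of warning before strong (S-wave) shaking arrives; 0 inside the blind zone."""
    return max(0.0, distance_km / VS - ALERT_DELAY)

blind_zone_km = VS * ALERT_DELAY  # radius within which no warning is possible
print(f"Blind zone radius: {blind_zone_km:.0f} km")
for d in (35, 100, 250):
    print(f"{d:4d} km: {warning_time(d):5.1f} s of warning")
```

Under these assumptions, a site 100 km from the epicenter receives roughly 19 seconds of warning, and a site 250 km away about a minute, consistent with the Figure 3.5 scenario.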
Operational EEW systems have been deployed in at least five countries—Japan, Mexico, Romania, Taiwan, and Turkey. Japan is the only country with a nationwide system that provides public alerts. The Japan Meteorological Agency uses a national seismic network of about 1,000 seismological stations to detect earthquakes and issue warnings, which are transmitted via the Internet, satellite, and wireless networks to cell phone users, to desktop computers, and to automated control systems that stop trains, place sensitive equipment in a safe mode, and isolate hazards while the public takes cover (Figure 3.6). Mexico City and Istanbul also have public warning systems.
EEW has been identified as an ANSS objective (USGS, 1999), and it will be an important outcome of ANSS implementation. The NEHRP 2008 Strategic Plan recommended the evaluation and testing of EEW systems as part of Objective 9, to “Improve the accuracy, timeliness, and content of earthquake information products.” Some activities are under way in California, where a USGS-sponsored demonstration project is testing several EEW algorithms using real-time data from the California Integrated Seismic Network (CISN), a component of ANSS. ARRA stimulus funding is being used to upgrade many of the older seismic instruments throughout the CISN and reduce the time delays in gathering data and issuing alerts. When completed, this prototype system, called the CISN ShakeAlert System, will provide warnings to a small group of test users including emergency response groups, utilities, and transportation agencies (USGS, 2009). While in the testing phase, the system will not provide public alerts. If these tests are successful, high priority should be given to the development and deployment of an ANSS-based operational earthquake early warning system that can issue public alerts through various types of public media. The most suitable location for the first fully operational deployment of EEW would be the San Andreas Fault system, where the risk level is high and early detection of large strike-slip ruptures can provide up to a minute of early warning (e.g., Figure 3.5). If sufficient funding is made available, upgrading the prototype CISN ShakeAlert System to a fully operational, public system should be possible within 5 years.
FIGURE 3.6 Portion of a leaflet prepared by the Japan Meteorological Agency describing simple instructions on how to react when an EEW alert is received. SOURCE: Japan Meteorological Agency. Available at www.jma.go.jp/jma/en/Activities/EEWLeaflet.pdf.
Planning should also begin for an EEW system for the Cascadia region of the northwestern United States. Earthquakes with magnitudes greater than 8 (and as large as 9) are anticipated on the offshore megathrust of the Cascadia subduction zone. In favorable situations, EEW could provide more than a minute of warning for urban centers such as Seattle and Portland. For example, the megathrust faulting that caused the great Sumatra-Andaman Islands earthquake of December 26, 2004 (magnitude-9.2), had a total rupture duration that exceeded 1,200 seconds (Shearer and Bürgmann, 2010). Moreover, an EEW capability would complement and improve the accuracy of the tsunami warning systems already operated by the National Oceanic and Atmospheric Administration (NOAA) and USGS (see NRC, 2010).
EEW systems should include the capability for enhanced alerts during periods of aftershock activity following major earthquakes, which can warn rescue personnel operating in dangerous and unstable conditions. The enhancements could be based on existing dense urban seismic networks with directed annunciation of the warning to the exposed individuals, or on fully mobile aftershock monitoring networks that can be rapidly installed in sparsely monitored locales.
Current EEW systems are based on earthquake detection and forecasting by seismometer networks such as the CISN. However, as described in the following section, continuously recording GPS networks can also provide real-time information on large fault displacements that is potentially valuable for EEW, especially in subduction zones such as Cascadia (Hammond et al., 2010). Additional research and development is needed to facilitate the rapid integration of GPS network data with seismometer network data.
Existing Knowledge and Current Capabilities
Three basic seismographic strategies have been developed for earthquake early warning (Allen et al., 2009):
• on-site or single-station warning: predicting the peak shaking from the P wave recorded at the site,
• front detection: detecting strong ground shaking at one location and transmitting a warning ahead of the seismic energy, and
• network-based warnings: using seismic networks to locate and estimate the size of a growing fault rupture.
Research indicates that dense seismometer arrays in the vicinity of shallow hypocenters can determine whether an event will grow into a large earthquake (magnitude > 6) using only several seconds of recorded P-wave data (Allen and Kanamori, 2003; Lancieri and Zollo, 2008). However, whether such measurements saturate above magnitude-7 is an unresolved problem that is related to fundamental issues of earthquake predictability.
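A minimal sketch of the on-site strategy follows. The regression has the general shape of published P-wave displacement (Pd) scaling relations, but the coefficients are placeholders chosen for illustration, not values from Allen and Kanamori (2003) or any other published study:

```python
import math

# Sketch of "on-site" EEW: estimate magnitude from the peak displacement (Pd)
# of the first few seconds of P wave, corrected for hypocentral distance.
# A, B, C are HYPOTHETICAL coefficients, for illustration only.
A, B, C = 5.9, 1.0, 1.3

def magnitude_from_pd(pd_cm, dist_km):
    """Rapid magnitude estimate from the first seconds of P-wave peak displacement."""
    return A + B * math.log10(pd_cm) + C * math.log10(dist_km / 10.0)

def alert(pd_cm, dist_km, threshold=6.0):
    """Issue an alert if the rapid estimate suggests a large earthquake."""
    return magnitude_from_pd(pd_cm, dist_km) >= threshold
```

With these placeholder coefficients, for example, a peak P-wave displacement of 2 cm at 10 km hypocentral distance yields an estimate near magnitude 6.2 and would trigger an alert, whereas 0.1 cm at the same distance would not.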
Operational EEW systems have been deployed in at least five countries—Japan, Mexico, Romania, Taiwan, and Turkey (see review by Allen et al., 2009). The most highly developed systems are in Japan. Japan Railways began using alarm-seismometers in the 1960s and then front-detection EEW systems in 1982 to shut off power to the Shinkansen bullet trains. An onsite system (Urgent Earthquake Detection and Alarm System, UrEDAS) began operation along the Shinkansen lines in 1992 and was improved after the 1995 Kobe earthquake. The system demonstrated its effectiveness during the magnitude-6.6 Niigata Ken Chuetsu earthquake of 2004, when it issued an alert that stopped a Shinkansen train. Although the train derailed, all but one car remained on the tracks. Japan has also developed a technology for network-based EEW that now provides public alerts (Kamigaichi et al., 2009). The Japan Meteorological Agency (JMA) employs a network of 1,000 seismic instruments to detect earthquakes and predict the intensity of the resulting ground motions. Warnings are sent via TV and radio and go out over public address systems in schools, some shopping malls, and train stations. Alerts of impending shaking are also transmitted via the Internet, satellite, and wireless networks to cell phone users, to desktop computers, and to automated control systems that stop trains, place sensitive equipment in a safe mode, and isolate hazards while the public takes cover (Figure 3.6). Mexico, Taiwan, Istanbul, and Bucharest have active systems providing warning to one or more users.
The finite bandwidth and dynamic range of current seismometers limit their accuracy in measuring ground displacements near large earthquake ruptures. Complementary information can be obtained from geodetic observations using GPS networks. Continuously monitoring GPS stations can provide total displacement waveforms at sampling intervals on the order of 1 second, which can be used directly to estimate earthquake source parameters (Crowell et al., 2009). Although this sampling rate is lower than that of seismic observations and the noise levels of the GPS data are higher, GPS measures ground displacement directly and does not saturate during large events. Therefore, an integrated network of seismometers and GPS receivers can provide better performance for EEW than either type of instrument alone.
Full implementation of ANSS, as recommended in Task 2, will provide the instrumental platform for the development of EEW systems. As noted
above, development of the CISN ShakeAlert prototype is already under way. A fully operational, end-to-end system will require the densification of seismic networks in the likely epicentral regions of large earthquakes, such as along California’s San Andreas Fault, which can be guided by the long-term earthquake rupture forecasts discussed under Task 4. Upgrades to the equipment currently used to record, transmit, and process seismic signals will be necessary to reduce the latencies in the automated broadcasting of alerts. The robustness of ANSS components such as the CISN will need to be improved through redundant telecommunication paths and software enhancements.
Substantial research and development will be needed on the algorithms used to detect earthquakes in the early stages of fault rupture, to predict future ground motions, and to automatically issue alerts. The basic science requirements are described under Task 1. Particularly important is a better understanding of the earthquake rupture physics, including the processes that govern the nucleation, propagation, and arrest of seismic ruptures. Short-term earthquake rupture forecasts can improve the efficacy of EEW algorithms by adjusting the a priori rupture probabilities to reflect current seismic activity (see Task 5). Automated algorithms will have to recognize and map finite-fault sources, including multi-fault ruptures, in real time.
Large uncertainties in EEW alerts of prospective shaking can arise from uncertainties in the ground motion prediction equations. The ground motion predictions will have to account for three-dimensional geologic structures, particularly near-surface heterogeneities such as sedimentary basins, and to account for rupture propagation effects such as directivity and slip complexity. Physics-based numerical simulations of strong ground motions have the potential for substantially improving these predictions (see Tasks 1 and 12).
EEW algorithms will have to be verified and validated by extensive field-testing, such as that now under way in California. This testing will need to evaluate the quality and consistency of the ground motion predictions, as well as the costs and benefits to potential users. Because of the latter requirement, the design, operation, and testing of EEW systems will have to involve end users.
Private-sector service providers will be needed to adapt EEW information for utilization in automated control and response systems. In Japan, private providers offer a variety of services ranging from simple translation of the JMA information into a site-specific predicted intensity and warning time to more sophisticated systems that incorporate local
seismometers to provide additional on-site warning. Engineering and construction companies are also using the warning systems to provide both enhanced building performance during earthquakes and to protect construction workers. An effective public-private partnership will be necessary in developing “best practices” for EEW users.
Although there have been limited studies addressing the social science context of earthquake early warning (e.g., Bourque, 2001; Tierney, 2001), implementation of EEW will require additional research to determine optimal ways to interact with the public and a broad education campaign to inform the public about the availability and use of earthquake alerts. The experience of the Japanese (e.g., Figure 3.6) will be useful in this regard.
The National Seismic Hazard Maps produced by USGS are the authoritative reference for earthquake ground motion hazard in the United States. These maps are the basis of the probabilistic portion of the NEHRP Recommended Provisions, are a resource for the model building codes, and are used in seismic retrofit guidelines, earthquake insurance, land-use planning, and the design of highway bridges, dams, and landfills. They are also used in nationwide earthquake risk and loss assessment and development of credible earthquake scenarios for planning and emergency preparedness.
Improved mapping of seismic hazard, at both local and national scales, can reduce the uncertainty in earthquake probabilities and ground motion values and provide a more scientifically credible basis for engineering and policy decision-making. Seismic hazard mapping directly benefits from the advances in earthquake science described in Tasks 1, 2, and 3. Continued interaction between NEHRP researchers and the user-community will also serve to identify new earthquake hazard and risk information products of value to the community.
• Continue the development of National Seismic Hazard Maps. Three focus areas for the next generation of National Seismic Hazard Maps are (a) the improved characterization of faults capable of producing magnitude-6.5 to 7 earthquakes (Category B faults) using field investigations and seismic monitoring, (b) the development of improved ground motion attenuation models for the eastern and central United States, and (c) the development of, and improvements to, numerical ground motion simulations.
• Create hazard maps for urban areas. Expansion of the Urban Seismic Hazard Mapping program, with the goal of mapping all of the major U.S. urban areas at risk over the next 20 years. Providing greater detail about the geographic distribution of strong ground motion, geologic site conditions, and potential ground failure (fault rupture, landslides, and liquefaction) is a critical component to the earthquake risk applications discussed in Tasks 6 and 7 as well as the building and lifeline guidelines discussed in Tasks 13, 14, and 15. The development of urban seismic hazard maps involves partnerships between state and local agencies, local government, universities, and the NEHRP agencies. Integration of enhanced local hazard information with the national-scale engineering design guidance provided in the National Seismic Hazard Maps will need to be addressed by NEHRP as well as by the standards and code developing organizations. Both the San Francisco, CA, and Evansville, IN, examples described in Chapter 2 provide valuable case histories describing how such partnerships can be established.
Existing Knowledge and Current Capabilities
The current knowledge of earthquakes, active faults, crustal deformation, and seismic wave generation/propagation must be integrated and translated into a form that others can use if it is to be effective in reducing earthquake losses. The National Seismic Hazard Maps and related information products produced by USGS accomplish this critical information transfer.
Seismic Hazard Maps
During the past 60 years, the National Seismic Hazard Maps have evolved from a series of broad zones depicting four damage levels (none, minor, moderate, and major) based on the Modified Mercalli Intensity, a qualitative measure of earthquake shaking (see Figure 3.7; Roberts and Ulrich, 1950; Algermissen, 1969), to the current series of USGS maps that provide earthquake engineering-based parameters such as spectral acceleration (Sa) at multiple periods (0.1, 0.2, 0.3, 0.5, and 1.0 sec) and Peak Ground Acceleration (PGA) for ~150,000 sites across the country (see Figure 3.8). The current USGS hazard maps are based on a combination of state-of-the-art probabilistic methodology, ANSS earthquake monitoring, and the latest NEHRP research findings that provide a long-term geologic perspective for earthquake activity (Crone and Wheeler, 2000). These hazard maps have been developed through a scientifically defensible and repeatable process that involves input and peer review at both regional and national levels by expert and user communities (Petersen et al., 2008).
FIGURE 3.7 Seismic probability map of the United States in 1950. SOURCE: Roberts and Ulrich (1950). © Seismological Society of America.
FIGURE 3.8 U.S. National Seismic Hazard Map showing Peak Ground Acceleration (PGA) with a 2 percent chance of exceedance in 50 years (or a 2,475-year return period). SOURCE: USGS (2008).
The USGS National Seismic Hazard Maps are the basis of the probabilistic portion of the NEHRP Recommended Provisions, a resource for the model building codes developed by the Building Seismic Safety Council and published by FEMA (FEMA, 2009b). These design maps are adopted by the International Building Code and national consensus standards such as ASCE-7 Minimum Design Loads for Buildings and Other Structures, ASCE-31 Seismic Evaluation of Existing Buildings, ASCE-41 Seismic Rehabilitation of Existing Buildings, and the NFPA 5000 Building Construction and Safety Code. Through these codes and standards, the National Seismic Hazard Maps affect billions of dollars of construction and represent one of the principal economic benefits of seismic monitoring in the United States (NRC, 2006b). In addition to new construction, they are used in seismic retrofit guidelines, earthquake insurance, land-use planning, and the design of highway bridges (AASHTO, 2009), dams, and landfills. The national maps were used in a nationwide Hazards U.S. (HAZUS) earthquake risk assessment by FEMA (2001, 2008), and provide the basis for developing credible earthquake scenarios for planning and emergency preparedness and for earthquake risk and loss assessments in the United States.
Continued NEHRP research has resulted in a new generation of earthquake hazard and risk maps that provide more specific information to support community decision-making. Urban Seismic Hazard Maps address strong ground shaking and ground failure at the community level. Seismic Risk Maps address the earthquake hazard to specific building types.
Urban Seismic Hazard Maps
Urban seismic hazard maps provide the foundation for developing realistic earthquake loss and damage estimates. By incorporating the effects of local geology, probabilistic and scenario earthquake maps provide a credible basis for community stakeholders to identify and prioritize community mitigation activities. Site and soil conditions vary geographically, and regional or local seismic hazard maps are needed to provide a higher spatial resolution to account for these differences and more accurately estimate strong ground motion effects.
A number of successful pilot programs around the United States have demonstrated the value of the NEHRP Urban Seismic Hazard Mapping Program. USGS initiated a program to develop urban seismic hazard maps in 1998 for three pilot areas (San Francisco Bay region; Seattle, WA; Memphis, TN) and has since expanded the program to the central United States (greater St. Louis area; Evansville, IN) and to southern California. These urban seismic hazard mapping programs involve state geological surveys and emergency management organizations, as well as local universities and consulting firms. In southern California, USGS is partnered with the Southern California Earthquake Center.
FIGURE 3.9 Seattle Urban Seismic Hazard Map, showing ground motions with a 10 percent chance of exceedance in 50 years (a 475-year return period). SOURCE: Hearst Corporation. Available at seattlepi.com/U.S.G.S.
Seismic hazard maps for Seattle, WA, were improved following the magnitude-6.8 Nisqually, WA, earthquake in 2001. These maps provide a high-resolution view of potential ground shaking, which is particularly important because much of Seattle is sited on a sedimentary basin that strongly affects patterns of ground shaking and damage (see Figure 3.9). In the Nisqually earthquake, unreinforced masonry (URM) damage was disproportionately large compared with other building types, with the greatest damage occurring in areas of soft soils. Improved earthquake information for the Seattle area guided elected officials toward policy decisions about the need to mitigate hazards from URM buildings. Seattle is currently considering a URM retrofit ordinance4 that would be the first mandatory retrofit program outside California.
Seismic Risk Maps
Earthquake risk, expressed as a level of building damage or economic loss, is dependent on both the type of building or structure and the geographic location of the structure with respect to strong ground shaking. Mapping uniform earthquake ground motions (e.g., 2 percent in 50 years, or 1 chance in 2,475 (0.04%) of exceedance in any year) does not necessarily result in identifying a uniform earthquake risk. A new series of earthquake risk maps combine hazard information from the National Seismic Hazard Maps with building fragility curves from FEMA’s HAZUS-Multi-Hazard earthquake loss estimation model to show mean annual frequencies of exceeding different structural damage states (Luco and Karaca, 2007). This type of information is fundamental to seismic risk assessment (see Task 7) and can be used by communities to make risk-informed decisions and identify performance targets for specific building types based on local hazards and local building practices (e.g., 1 percent annual likelihood of collapse). Additionally, integration of this risk-map approach with USGS ShakeMaps would provide emergency responders with accurate “damage maps” for use following an earthquake impact to a risk-mapped urban area.
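The equivalence quoted above between a probability of exceedance over a time window and an annual rate or return period follows from the Poisson (time-independent) occurrence model used for the national maps; a short sketch:

```python
# Convert "P percent chance of exceedance in t years" to an annual exceedance
# probability and return period, assuming Poisson (memoryless) occurrence.
def annual_probability(p_in_t, t_years):
    """Annual exceedance probability p such that 1 - (1 - p)**t = p_in_t."""
    return 1.0 - (1.0 - p_in_t) ** (1.0 / t_years)

def return_period(p_in_t, t_years):
    """Mean return period in years for the given exceedance probability."""
    return 1.0 / annual_probability(p_in_t, t_years)

print(f"{return_period(0.02, 50):.0f} years")  # 2% in 50 yr -> ~2,475-year return period
print(f"{return_period(0.10, 50):.0f} years")  # 10% in 50 yr -> ~475-year return period
```

These are the conversions behind the 2,475-year (2 percent in 50 years) and 475-year (10 percent in 50 years) ground motion levels cited throughout this section.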
The National Seismic Hazard Maps integrate knowledge of historical seismicity, active faults, crustal deformation, and seismic wave generation/propagation. The availability of online design and analysis tools has enabled engineers and earth science professionals to determine ground motion values for specific building codes as well as to create customized hazard maps. The scientific credibility of these maps rests on basic geologic and seismologic research that includes the following:
The National Seismic Hazard Maps use the basic earthquake data collected by ANSS; as discussed above under Task 2, ANSS is the “backbone” of seismology research in the NEHRP program.
NEHRP-supported paleoseismic research has provided the necessary long-term geologic constraints on earthquake activity to validate probabilistic seismic hazard assessments. Paleoseismic information for major fault systems capable of producing earthquakes with magnitude > 7 (Category A faults such as the San Andreas, San Jacinto, Elsinore, Imperial, and
Rodgers Creek) has been well developed during the past 30 years. These techniques need to be extended to other faults lacking sufficient paleoseismic data to constrain their recurrence intervals (defined as Category B faults). Recent examples of destructive earthquakes occurring on Category B faults include the 1971 San Fernando, CA, (magnitude-6.7) and 1994 Northridge, CA (magnitude-6.7) events. In areas where time-dependent models of earthquake activity may be more appropriate, paleoseismic research on the variability of inter-event times can help identify aleatory uncertainties and help improve the overall resolution of earthquake hazard estimates.
Better ground motion attenuation models help improve structural design and construction. The introduction of the Next Generation Attenuation (NGA) models into the 2008 hazard maps (Petersen et al., 2008) modified ground motion values in many areas of the United States, significantly impacting earthquake damage and loss estimates. Continued improvement of attenuation relations for the central and eastern United States through the use of physics-based numerical simulations (see discussion in Task 1) can advance understanding of earthquake effects to the built environment and help reduce uncertainties in areas of infrequent seismicity. Significant improvements to the empirical attenuation relations may be possible through the use of numerical simulations of ground motions that incorporate realistic models of source dynamics and three-dimensional geological structure (see Figure 3.1).
Active geotechnical research and mapping programs by federal, state, and local agencies, universities, and consultants continue to improve our knowledge of subsurface and geologic site effects at the community scale. The COSMOS Geotechnical Virtual Data Center5 (Swift et al., 2004), for example, provides a distributed system for archiving and web dissemination of geotechnical data collected and stored by various agencies and organizations.
Seismic hazard products developed by the states and university groups need to be coordinated with national maps through national and
regional peer review processes to provide nationally consistent information to users. One example is the coordination of the UCERF2 seismic hazard study maps for California with the USGS National Seismic Hazard Mapping Program (WGCEP, 2008).
As discussed in Tasks 14 and 15, predictive models of ground shaking and deformation are required for performance-based earthquake engineering. Yet, in many areas, these types of models still exhibit large uncertainties. In those regions of our nation where earthquake data are sparse or nonexistent, earthquake-physics simulations should be used to build or augment the dataset. Continued deployment of ANSS in urban environments to collect strong motion recordings and site response information is essential to validate these simulation models. Systematic expansion of hazard mapping products and the development of national- and local-scale hazard maps for liquefaction (including lateral spreading and settlement), surface fault rupture, and landslide potential are needed to complement the maps already available for ground shaking.
Although the adoption of the USGS National Seismic Hazard Maps into the model building codes is a major NEHRP success story, the actual implementation and enforcement of these codes remains a community choice. A clearer understanding by community policy-makers and stakeholders of the role that both the National Seismic Hazard Maps and the building codes play in community safety is essential for the development of earthquake-resilient communities.
With the current state of scientific knowledge, individual large earthquakes cannot be reliably predicted in future intervals of years or less; i.e., “deterministic” earthquake prediction is not yet possible. Nevertheless, the public needs up-to-date information about the likelihood of future events, especially following widely felt earthquakes, even if the probabilities of a strong earthquake are too small to warrant high-cost preparedness actions such as mass evacuations. The goal of operational earthquake forecasting is to provide communities with authoritative information on how seismic hazards change with time, including a consistent set of earthquake forecasts that range from the long term (centuries to decades) to the short term (hours to weeks) (Jordan et al., 2009; Jordan and Jones, 2010).
Seismic hazards are known to change on short timescales, because earthquake occurrences suddenly alter the conditions within the fault system that lead to future earthquakes. One earthquake can trigger others
nearby; the probability of such triggering increases with the initial shock’s magnitude and decays with elapsed time according to simple (and nearly universal) scaling laws. Statistical models of earthquake triggering can explain much of the observed spatio-temporal clustering in seismicity catalogs, such as aftershocks, and the models can be used to construct forecasts that estimate future earthquake probabilities based on prior seismic activity. These short-term models have demonstrated significant skill in forecasting future earthquakes—the probability gain factors achieved in several-day intervals can range up to 100-1,000 relative to the long-term forecasts typically used in hazard estimation described under Task 4. However, although these gain factors can be high, the forecasting probabilities for large earthquakes usually remain low in an absolute sense, rarely reaching more than a few percent for intervals of a week or less.
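The decay described by these scaling laws is commonly represented by the modified Omori law. The sketch below uses assumed, uncalibrated parameter values purely to illustrate the power-law decay of triggered seismicity that short-term forecasting models exploit.

```python
import math

# Modified Omori law: aftershock rate r(t) = K / (t + c)**p, where the
# productivity K grows with mainshock magnitude (all values here assumed).
K, c, p = 50.0, 0.05, 1.1   # illustrative parameters, not region-specific

def expected_aftershocks(t1, t2):
    """Expected number of aftershocks between t1 and t2 days after the mainshock."""
    if p == 1.0:
        return K * (math.log(t2 + c) - math.log(t1 + c))
    return K / (1.0 - p) * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p))

# Expected counts in the first day versus the 31st day: rapid power-law decay.
day1 = expected_aftershocks(0.0, 1.0)
day31 = expected_aftershocks(30.0, 31.0)
print(f"day 1: {day1:.1f}, day 31: {day31:.2f}, ratio: {day1 / day31:.0f}")
```

Because the triggered rate in the first hours and days can exceed the long-term background rate by orders of magnitude, probability gains of the size quoted above arise naturally from this kind of model.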
Nevertheless, short-term forecasts, properly applied, can be used to improve resilience. Authoritative statements about the increase in seismic hazard following a significant earthquake allow emergency management agencies, as well as the population at large, to anticipate aftershocks. Such advisories also fulfill the public’s need for current information during periods of anomalous seismic activity, which can help counter amateur predictions and rumors that inflate the perceived hazard.
Under the Stafford Act (P.L. 93-288), USGS has the federal responsibility for earthquake monitoring and forecasting. Its National Earthquake Prediction Evaluation Council (NEPEC) provides advice and recommendations on earthquake forecasts and related scientific research to the USGS director, in support of the director’s delegated responsibility to issue timely warnings of potential geologic disasters. Thus far, USGS and NEPEC have not established protocols for operational forecasting on a national level.
USGS should develop a national plan, coordinated with state and local agencies, for the implementation of operational earthquake forecasting. In formulating the plan, USGS should consider the following elements:
• Support for research. Through its internal research program and external grants program, USGS should continue to support research on the scientific understanding of earthquakes and earthquake predictability.
• Coordination of earthquake information. USGS should continue to coordinate across federal and state agencies to improve the flow of earthquake information, particularly the real-time processing of seismic and geodetic data and the timely production of high-quality earthquake catalogs. Full support of ANSS operations will allow substantial improvements in the real-time seismic information needed for short-term forecasting.
• Development of operational systems. USGS should support the development of earthquake forecasting methods—based on seismicity changes detected by ANSS—to quantify short-term probability variations, and it should deploy the infrastructure and expertise needed to utilize this probabilistic information for operational purposes. Working with local agencies, USGS should provide the public with authoritative, scientific information about the short-term probabilities of future earthquakes. The source of this information needs to properly convey the epistemic uncertainties in these forecasts.
• Operational qualification of forecasts. All operational procedures involved with the creation, delivery, and utility of forecasts should be rigorously reviewed by experts. Earthquake forecasting procedures should be qualified for usage according to the three criteria commonly applied in weather forecasting (Jordan and Jones, 2010): they should display quality, a good correspondence between the forecasts and actual earthquake behavior; consistency, compatibility among procedures used at different spatial or temporal scales; and value, realizable benefits (relative to costs incurred) by individuals or organizations who use the forecasts to guide their choices among alternative courses of action.
o Operational forecasts should incorporate the results of validated short-term seismicity models that are consistent with the authoritative long-term forecasts and demonstrate reliability (correspondence to observations collected over many trials) and skill (performance relative to the long-term forecast).
o Verification of reliability and skill requires objective evaluation of how well the forecasting model corresponds to data collected after the forecast has been made (prospective testing), as well as checks against data previously recorded (retrospective testing). All operational models should be subject to continuous prospective testing against established long-term forecasts and a wide variety of alternative, time-dependent models.
o Experience has shown that such evaluations are most diagnostic when the testing procedures conform to rigorous standards, and the prospective testing is blind (Field et al., 2007). In this regard, advantage can be taken of the Collaboratory for the Study of Earthquake Predictability (CSEP),6 which has begun to establish standards and an international infrastructure for the comparative, prospective testing of earthquake forecasting models (Zechar et al., 2010). Regional experiments are now under way in California, New Zealand, Japan, and Italy, and will soon be started in China; a program for global testing has also been initiated.
o Continuous testing in a variety of tectonic environments will be critical for demonstrating the reliability and skill of the operational forecasts, and for quantifying their uncertainties. At present, seismicity-based forecasts can display order-of-magnitude differences in probability gain, depending on the methodology, and there remain substantial issues about how to assimilate the data from ongoing seismic sequences into the models.
• Assessment of forecast utility. Most previous work on the public utility of earthquake forecasts has anticipated that they would deliver high probabilities of large earthquakes; i.e., that deterministic predictions would be possible. This expectation has not been realized. Current forecasting policies need to be adapted to applications in a “low-probability environment”—one in which earthquake forecasting probabilities can vary by several orders of magnitude, but remain low in an absolute sense (< 10 percent in the short term).
The implementation of operational earthquake forecasting will enable cost-effective measures to reduce earthquake impacts on individuals, the built environment, and society-at-large—Goal B in NIST (2008). A national plan for operational forecasting will address NEHRP Objective 5 (assess earthquake hazards for research and practical application), and it will provide new information tools for Goal C (improve the earthquake resilience of communities nationwide), particularly for Objective 9 (improve the accuracy, timeliness, and content of earthquake information products).
Existing Knowledge and Current Capabilities
An up-to-date overview of existing knowledge and capabilities in earthquake forecasting and prediction is the subject of an extensive recent review by the International Commission on Earthquake Forecasting (ICEF), which was convened by the Italian government following the magnitude-6.3 L’Aquila earthquake of April 6, 2009 (Jordan et al., 2009). The statements in this section are based on this overview.
Given the current state of scientific knowledge, individual large earthquakes cannot be reliably predicted in future intervals of years or less; i.e., reliable and skillful deterministic earthquake prediction is not yet possible. In particular, the search for diagnostic precursors—signals observed before earthquakes that reliably indicate the location, time, and magnitude of an impending event—has not yet produced a successful short-term prediction scheme.
Any information about the future occurrence of earthquakes contains large uncertainties and therefore needs to be expressed in terms of probabilities. Probabilistic earthquake forecasting is a rapidly evolving field of earthquake science. Long-term forecasts provide probabilistic estimates of where earthquakes will occur, how large they might be, and how often they will happen, averaged over time intervals of decades to centuries. This information is essential for seismic hazard mapping, and it is the foundation on which operational earthquake forecasting is built (see Task 4).
Earthquakes tend to cluster in space and time. Large earthquakes produce aftershocks by stress triggering, and sequences of earthquakes clustered in space and time are common. Aftershock excitation and decay, as well as other aspects of earthquake clustering, show statistical regularities on timescales of hours to weeks that can be captured in short-term earthquake forecasts. Additional information on earthquake probabilities over the medium-term (months to years) can be obtained from the disturbance of the tectonic forces acting on faults caused by previous large earthquakes.
Although this type of seismicity-based forecasting can provide substantial probability gains relative to long-term forecasts, the absolute probabilities remain low. Consider the southernmost segment of California’s San Andreas Fault, which has a fairly high long-term probability; according to the UCERF2 model (Figure 3.10), there is a 1-in-4 chance of a magnitude ≥ 7 earthquake occurring on this fault during the next 30 years. Over a 3-day period, however, the probability of such an event is very small, about 10⁻⁴. In March 2009, a swarm of more than 50 small earthquakes occurred within a few kilometers of the southern end of this fault, near Bombay Beach, California, including a magnitude-4.8 event on March 24. Using a methodology developed for assessing foreshocks on the San Andreas Fault, the California Earthquake Prediction Evaluation Council (the state equivalent of NEPEC) estimated that the swarm increased the 3-day probability of a major earthquake on the San Andreas to about 1-5 percent, corresponding to a gain factor of about 100-500 relative to UCERF2.
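The arithmetic behind these numbers follows directly from the figures in the paragraph, assuming the long-term forecast can be treated as a constant-rate (Poisson) process for the purpose of rescaling the time window:

```python
import math

# Long-term forecast: ~1-in-4 chance of M >= 7 on this segment in 30 years.
p_30yr = 0.25
rate = -math.log(1.0 - p_30yr) / 30.0        # equivalent annual rate (Poisson)

# Background 3-day probability implied by the long-term rate.
p_3day = 1.0 - math.exp(-rate * 3.0 / 365.0)
print(f"background 3-day probability: {p_3day:.1e}")

# The swarm-based estimate of 1-5 percent over 3 days implies gain factors
# of roughly a hundred to several hundred relative to the background.
for p_short in (0.01, 0.05):
    print(f"gain at {p_short:.0%}: {p_short / p_3day:.0f}")
```

This reproduces the order of magnitude quoted in the text: a background 3-day probability near 10⁻⁴, and gains on the order of 100-500 for short-term probabilities of a few percent.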
Foreshocks cannot be discriminated a priori from background seismicity. Worldwide, less than 10 percent of earthquakes are followed by a larger earthquake within 10 kilometers and 3 days; less than half of the large earthquakes have such foreshocks. Many earthquakes strike without warning; for example, no foreshocks or other short-term precursors have been reported for the magnitude-7 Haiti earthquake of January 12, 2010, the fifth-deadliest seismic disaster in recorded history.
Protocols for issuing advisories are best developed in California, where the dissemination of forecasting products is becoming more automated (Jordan and Jones, 2010). For every earthquake recorded above magnitude-5, the California Integrated Seismic Network, a component of the ANSS, now automatically posts the probability of a magnitude ≥ 5 aftershock and the number of magnitude ≥ 3 aftershocks expected in the next week. Authoritative short-term forecasts are also becoming more widely used in other regions. For instance, beginning on the morning after the damaging L’Aquila earthquake of April 6, 2009, the Italian authorities began to post 24-hour forecasts of aftershock activity.

FIGURE 3.10 Uniform California Earthquake Rupture Forecast. SOURCE: Field et al. (2007); U.S. Geological Survey.
One operational system is the Short-Term Earthquake Probability (STEP) model, an aftershock forecasting web service provided for California by USGS since 2005 (Gerstenberger et al., 2007). STEP uses aftershock statistics to make hourly revisions of the probabilities of strong ground motions (Modified Mercalli Intensity ≥ VI) on a 10-km, statewide grid (Figure 3.11).

FIGURE 3.11 Short-Term Earthquake Probability (STEP) map. SOURCE: U.S. Geological Survey; continuously available on-line at earthquake.usgs.gov/earthquakes/step/.
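Forecast products of this kind rest on Reasenberg-Jones-type aftershock statistics, which combine Omori decay with a Gutenberg-Richter magnitude distribution. The sketch below uses parameter values loosely in the range published for generic California sequences; they are assumptions for illustration, not the operational values used by CISN or STEP.

```python
import math

# Generic Reasenberg-Jones-style aftershock parameters (assumed values,
# roughly in the range quoted for California; time t is in days).
a, b, p, c = -1.67, 0.91, 1.08, 0.05

def expected_count(m_main, m_min, t1, t2):
    """Expected number of aftershocks with magnitude >= m_min in [t1, t2] days."""
    k = 10.0 ** (a + b * (m_main - m_min))
    integral = ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
    return k * integral

m_main = 6.0
n_m3 = expected_count(m_main, 3.0, 0.0, 7.0)                    # expected M>=3 in a week
p_m5 = 1.0 - math.exp(-expected_count(m_main, 5.0, 0.0, 7.0))   # P(M>=5), Poisson
print(f"expected M>=3 aftershocks in 7 days: {n_m3:.1f}")
print(f"probability of an M>=5 aftershock in 7 days: {p_m5:.1%}")
```

An operational system recomputes these quantities as the sequence unfolds, updating the parameters from the observed aftershocks rather than relying on generic values.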
Data other than seismicity have been considered in earthquake forecasting (e.g., geodetic measurements and geoelectrical signals), but so far, studies of non-seismic precursors have not quantified short-term probability gain, and they therefore cannot be incorporated into operational forecasting methodologies (Jordan et al., 2009).
A fundamental source of uncertainty in earthquake forecasting is the short sampling interval afforded by instrumental seismicity catalogs and historical records, which is reflected in the large epistemic uncertainty in earthquake recurrence statistics. These uncertainties can be reduced by better instrumental catalogs, improved geodetic monitoring, and geologic field work to identify active faults, their slip rates, and recurrence times. ANSS implementation is an enabling requirement.
Increasing the (low) probability gains afforded by existing forecasting models will require a much improved understanding of earthquake predictability. This is an important goal of the NEHRP basic science program described under Task 1. A particular gap is our limited knowledge of the state of stress in active fault systems and of how this stress evolves over time.
Current models used for aftershock forecasting can be improved by incorporating more information about main shock deformation patterns and geological settings, such as more detailed descriptions of local fault systems. In the STEP prototype system, for example, the probability change calculated to result from a particular earthquake does not depend on the proximity of that earthquake to major faults. In this regard, short-term forecasting models that incorporate earthquake clustering and triggering need to be integrated with long-term, fault-based models, such as UCERF. A new Working Group on California Earthquake Probabilities plans to incorporate short-term forecasting into the next version of the fault-based UCERF3, which is due to be submitted to the California Earthquake Authority in mid-2012.
Forecasting models considered for operational purposes should demonstrate reliability and skill with respect to established reference forecasts, such as long-term, time-independent models. Verification of reliability and skill requires objective evaluation of how well the forecasting model corresponds to data collected after the forecast has been made (prospective testing), as well as checks against data previously
recorded (retrospective testing). CSEP is setting up an infrastructure for this purpose (Zechar et al., 2009). The adaptation of CSEP to the testing of operational forecasts faces a number of conceptual and organizational issues. For example, fault-based models will need to be reformulated to permit rigorous testing—a considerable challenge for the development of UCERF3 and more advanced versions of this time-dependent California model.
CSEP evaluations are currently based on comparisons of earthquake forecasts with seismicity data. However, from an operational perspective, forecasting value can be better represented in terms of the strong ground motions that constitute the primary seismic hazard. This approach has been applied in the STEP model, which forecasts ground motion exceedance probabilities at a fixed shaking intensity, and it should be considered in the future formulation and testing of operational models. The coupling of physics-based ground motion models with earthquake forecasting models offers new possibilities for developing ground motion forecasts.
The utilization of earthquake forecasts for risk mitigation and earthquake preparedness requires two basic components—scientific advisories expressed in terms of probabilities of threatening events, and protocols that establish how probabilities can be translated into mitigation and preparedness actions. Although some experience has been gained in California (Jones et al., 1991; Jordan and Jones, 2010), there is no formal national approach for converting earthquake probabilities into mitigation and preparedness actions. One strategy that can assist decision-making is the setting of earthquake probability thresholds for such actions. These thresholds should be supported by objective analysis, for instance cost/benefit analysis, to justify the actions taken.
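A cost/benefit probability threshold of the kind mentioned above reduces to a simple comparison. The dollar figures below are entirely hypothetical; the point is the structure of the decision rule, not the numbers.

```python
# Cost/benefit threshold for a mitigation action (hypothetical numbers):
# take the action when p * L > C, i.e., when p exceeds C / L.
cost = 50_000.0               # cost of the preparedness action ($)
loss_averted = 20_000_000.0   # loss the action would avert if the event occurs ($)

threshold = cost / loss_averted
print(f"act when the event probability exceeds {threshold:.2%}")

forecast_p = 0.01             # e.g., a 1 percent short-term probability
print("action justified" if forecast_p > threshold else "action not justified")
```

Low-cost, high-benefit actions (securing equipment, staging supplies) have low thresholds and can be justified even in a low-probability environment, whereas high-cost actions such as evacuation require probabilities that short-term forecasts rarely deliver.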
Providing probabilistic forecasts to the public in a coordinated way is an important operational capability. Good information keeps the population aware of the current state of hazard, decreases the impact of ungrounded information, and contributes to reducing risk and improving preparedness. The principles of effective public communication have been established by social science research and should be applied in communicating seismic hazard information.
Earthquake risk studies can take the form of deterministic or scenario studies where the effects of a single earthquake are modeled, or
probabilistic studies that weigh the effects from a number of different earthquake scenarios by their annual likelihood or frequency of occurrence. Task 6 addresses the role of the individual scenario in community planning, and Task 7 addresses the earthquake risk assessment and loss estimation methodologies themselves. Earthquake scenarios integrate earth science, engineering, and social science information into a format that enables communities to visualize the impacts from earthquakes without actually having the event occur. Using scenarios, communities can evaluate potential local and regional disruptions to the built environment and society, as well as their capabilities to respond to, and recover from, earthquakes, and they can start to identify the necessary steps to reduce such impacts in the future.
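The frequency weighting that distinguishes a probabilistic study from a single scenario can be sketched as a sum over a scenario set. The scenario losses and annual frequencies below are hypothetical, chosen only to show how an annualized earthquake loss (AEL) of the kind reported in Table 3.2 is assembled.

```python
# Annualized earthquake loss (AEL) as a frequency-weighted sum over scenarios
# (hypothetical scenario losses and annual occurrence frequencies).
scenarios = [
    # (description, loss in $ millions, annual frequency of occurrence)
    ("M6.5 on nearby fault",    8_000.0, 1.0 / 300.0),
    ("M7.0 on nearby fault",   25_000.0, 1.0 / 1_500.0),
    ("M7.8 on regional fault", 90_000.0, 1.0 / 5_000.0),
]

ael = sum(loss * freq for _, loss, freq in scenarios)
print(f"Annualized earthquake loss: ${ael:.1f} million/year")
```

A full HAZUS-MH probabilistic run performs the same weighting over a much denser set of hazard levels, but the interpretation is identical: the expected loss per year, averaged over all modeled events.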
The development of realistic earthquake scenario maps involves the linking of scientifically credible earthquake and ground motion maps with high-resolution urban geology and cultural inventory information in a GIS platform. Many of the issues associated with establishing credible earthquakes, ground motion, and local site condition maps are discussed under Task 5. Guidelines for the development and conduct of earthquake scenarios using NEHRP products were proposed in EERI (2006) to provide communities with information about the level of detail and effort required:
• Development of additional scenario ShakeMaps for high-risk communities. Currently, only a few of the 18 states with high or very high seismicity have ShakeMap scenarios readily available on the web for use in scenario and exercise development.7 Producing a comprehensive series of ShakeMaps for all of the 43 high-risk communities identified in either USGS Circular 1188 (USGS, 1999) or FEMA 366 (FEMA, 2008) (see Table 3.2) should be undertaken by the NEHRP program during the next 5 years. ShakeMap guidelines for earthquake scenarios (Wald et al., 2001) provide technical information to assist with scenario development. Over the next 20 years, NEHRP should continue to update this information by incorporating the latest developments in both the National and Urban Seismic Hazard and Risk Maps.
• Local data collection. There is a widely recognized need to increase the level of detail of building and inventory data at the local level. Locally coordinated data collection can increase the resolution and
7See www.earthquake.usgs.gov/earthquakes/shakemap/list.php?y=2011 (accessed November 30, 2010).
TABLE 3.2 HAZUS-MH Annualized Earthquake Loss (AEL) and Annualized Earthquake Loss Ratios (AELR) for 43 High-Risk (AEL greater than $10 million) Metropolitan Areas
| Rank | Metropolitan Area | AEL ($ million) | Rank | Metropolitan Area | AELR |
| --- | --- | --- | --- | --- | --- |
| 1 | Los Angeles-Long Beach-Santa Ana, CA | 1,312.3 | 1 | San Francisco-Oakland-Fremont, CA | 2,049.44 |
| 2 | San Francisco-Oakland-Fremont, CA | 781.0 | 2 | Riverside-San Bernardino-Ontario, CA | 2,021.57 |
| 3 | Riverside-San Bernardino-Ontario, CA | 396.5 | 3 | El Centro, CA | 1,973.77 |
| 4 | San Jose-Sunnyvale-Santa Clara, CA | 276.7 | 4 | Oxnard-Thousand Oaks-Ventura, CA | 1,963.00 |
| 5 | Seattle-Tacoma-Bellevue, WA | 243.9 | 5 | San Jose-Sunnyvale-Santa Clara, CA | 1,837.58 |
| 6 | San Diego-Carlsbad-San Marcos, CA | 155.2 | 6 | Santa Rosa-Petaluma, CA | 1,662.57 |
| 7 | Portland-Vancouver-Beaverton, OR-WA | 137.1 | 7 | Santa Cruz-Watsonville, CA | 1,580.97 |
| 8 | Oxnard-Thousand Oaks-Ventura, CA | 111.0 | 8 | Los Angeles-Long Beach-Santa Ana, CA | 1,574.85 |
| 9 | Santa Rosa-Petaluma, CA | 68.6 | 9 | Napa, CA | 1,398.18 |
| 10 | St. Louis, MO-IL | 58.5 | 10 | Vallejo-Fairfield, CA | 1,375.94 |
| 11 | Salt Lake City, UT | 52.3 | 11 | Anchorage, AK | 1,238.56 |
| 12 | Sacramento-Arden-Arcade-Roseville, CA | 52.0 | 12 | Santa Barbara-Santa Maria-Goleta, CA | 1,207.93 |
| 13 | Vallejo-Fairfield, CA | 39.8 | 13 | Reno-Sparks, NV | 1,150.40 |
| 14 | Memphis, TN-MS-AR | 38.2 | 14 | Bremerton-Silverdale, WA | 1,110.13 |
| 15 | Santa Cruz-Watsonville, CA | 36.2 | 15 | Salinas, CA | 1,075.54 |
| 16 | Anchorage, AK | 34.8 | 16 | Seattle-Tacoma-Bellevue, WA | 1,052.43 |
| 17 | Santa Barbara-Santa Maria-Goleta, CA | 34.4 | 17 | Salt Lake City, UT | 984.61 |
| 18 | Las Vegas-Paradise, NV | 33.1 | 18 | Olympia, WA | 969.50 |
| 19 | Honolulu, HI | 32.0 | 19 | Portland-Vancouver-Beaverton, OR-WA | 942.62 |
| 20 | Bakersfield, CA | 30.3 | 20 | Bakersfield, CA | 870.43 |
| 21 | New York-Northern New Jersey-Long Island, NY-NJ-PA | 29.9 | 21 | San Luis Obispo-Paso Robles, CA | 848.65 |
| 22 | Salinas, CA | 29.2 | 22 | Ogden-Clearfield, UT | 826.52 |
| 23 | Reno-Sparks, NV | 29.0 | 23 | Salem, OR | 797.50 |
| 24 | Charleston-North Charleston, SC | 22.3 | 24 | San Diego-Carlsbad-San Marcos, CA | 770.20 |
| 25 | Columbia, SC | 21.6 | 25 | Charleston-North Charleston, SC | 766.01 |
| 26 | Stockton, CA | 20.9 | 26 | Eugene-Springfield, OR | 701.95 |
| 27 | Atlanta-Sandy Springs-Marietta, GA | 19.1 | 27 | Provo-Orem, UT | 683.30 |
| 28 | Bremerton-Silverdale, WA | 17.7 | 28 | Stockton, CA | 597.79 |
| 29 | Ogden-Clearfield, UT | 17.5 | 29 | Memphis, TN-MS-AR | 509.13 |
| 30 | Salem, OR | 17.4 | 30 | Evansville, IN-KY | 485.60 |
| 31 | Eugene-Springfield, OR | 16.5 | 31 | Columbia, SC | 478.05 |
| 32 | Napa, CA | 15.9 | 32 | Modesto, CA | 473.60 |
| 33 | San Luis Obispo-Paso Robles, CA | 15.7 | 33 | Las Vegas-Paradise, NV | 390.28 |
| 34 | Nashville-Davidson-Murfreesboro, TN | 15.4 | 34 | Sacramento-Arden-Arcade-Roseville, CA | 374.73 |
| 35 | Albuquerque, NM | 14.7 | 35 | St. Louis, MO-IL | 337.23 |
| 36 | Olympia, WA | 13.7 | 36 | Albuquerque, NM | 322.20 |
| 37 | Modesto, CA | 13.0 | 37 | Honolulu, HI | 311.12 |
| 38 | Fresno, CA | 12.6 | 38 | Fresno, CA | 283.13 |
| 39 | Evansville, IN-KY | 11.7 | 39 | Little Rock-North Little Rock, AR | 248.74 |
| 40 | Birmingham-Hoover, AL | 11.3 | 40 | Nashville-Davidson-Murfreesboro, TN | 167.26 |
| 41 | El Centro, CA | 10.7 | 41 | Birmingham-Hoover, AL | 115.54 |
| 42 | Little Rock-North Little Rock, AR | 10.5 | 42 | Atlanta-Sandy Springs-Marietta, GA | 65.39 |
| 43 | Provo-Orem, UT | 10.4 | 43 | New York-Northern New Jersey-Long Island, NY-NJ-PA | 20.90 |
SOURCE: FEMA (2008).
reduce the uncertainty in earthquake scenario results. This includes using local assessor databases or specialized inventories (ImageCat, Inc. and ABS Consulting, 2006) and updating those inventories using tools such as the HAZUS Comprehensive Data Management System (CDMS)8 and the Rapid Observation of Vulnerability and Estimation of Risk (ROVER) (Porter et al., 2010) to produce the upgraded data necessary for more site-specific analyses.
• Community earthquake exercises. Community earthquake exercises provide the opportunity for communities to assemble hazard studies, collect inventories, and stimulate community involvement, to better understand and prepare for an eventual earthquake. The success of the 2008 Great ShakeOut earthquake exercise in southern California has led to the establishment of yearly statewide ShakeOut drills throughout California.9 This success has led other states to adopt the ShakeOut model, including Nevada in 201010 and those in the central United States for the New Madrid earthquake bicentennial in 2011.11
Existing Knowledge and Current Capabilities
Earthquake scenarios provide opportunities to examine alternative outcomes and stimulate creative thinking about the need for new policies and programs. Incorporating the latest scientific, engineering, and societal knowledge about a region’s seismic hazard, local soil characteristics, building types, lifelines, and population characteristics, a scenario can create a compelling picture that members of the local community can recognize and relate to. Scenarios show communities the potential levels of disruption of their daily life and how long the disruption may last, providing a motivation to perform the necessary actions to reduce impacts. Not only can such scenarios stimulate new policies and programs, but also the process of scenario development itself often results in greater understanding and improved trust and communication between members of the scientific, engineering, emergency management, and policy communities, resulting in a “new community” dedicated to seismic risk reduction.12
Earthquake scenarios have been developed for a number of fault zones in the United States, and are available from the EERI Developing Earthquake Scenarios website.13 The earthquake scenarios that have been developed for California include the Hayward and San Andreas Faults in
the San Francisco Bay region (CGS, 1982, 1987; EERI, 1996, 2005; Kircher et al., 2006) and the San Andreas, San Jacinto, and Newport-Inglewood Faults in southern California (CGS, 1982, 1988, 1993; Jones et al., 2008; Perry et al., 2008). Both the Bay Area and southern California scenarios impact some of the largest population centers in the United States, with damage estimates ranging between $100 and $200 billion and with thousands of fatalities and tens of thousands of injuries. Similarly, scenario indications that earthquake-induced levee failures in the Sacramento-San Joaquin River delta would disrupt drinking water supplies to more than 22 million Californians as well as irrigation water to delta and state agricultural lands14 provide a powerful motivation for community awareness programs and mitigation activities.
In the Pacific Northwest, scenarios for a great Cascadia earthquake (CGS, 1995; CREW, 2005), as well as a magnitude-6.7 earthquake on the Seattle Fault (EERI, 2005), have been developed based on NEHRP research. Both the Cascadia Region Earthquake Workgroup (CREW) and the EERI reports were developed through collaborative public-private efforts involving local public- and private-sector organizations including the American Society of Civil Engineers (ASCE), Structural Engineers Association of Washington (SEAW), USGS, University of Washington, and Washington State Emergency Management (Ballantyne, 2007). The Cascadia scenario drew examples from recent great subduction zone earthquakes, such as the 1964 Alaska and 2004 Sumatra events, to illustrate some of the effects that these events would have on local communities. In contrast to the scenarios for large urban areas in California, the magnitude-6.7 Seattle Fault scenario provided a small city perspective (Figure 3.12). Damage to modern and older construction was estimated at $33 billion, with 1,600 fatalities. Focus on heavily impacted areas, such as Pioneer Square—which was badly damaged in the 2001 Nisqually earthquake—provided additional realism and credibility to the scenario.
The Oregon Department of Geology and Mineral Industries (DOGAMI) worked with Oregon Emergency Management and the University of Oregon to develop countywide earthquake and landslide hazard maps as well as earthquake damage and loss estimates as part of its natural hazard mitigation plans. Based on improved information, one Cascadia earthquake scenario estimates more than $11 billion in building damages for the mid- and southern Willamette Valley (Burns et al., 2008).
In the central United States, the FEMA New Madrid Catastrophic Planning Initiative is being developed for the 200th anniversary of the 1811/1812 New Madrid earthquakes, involving 4 FEMA regions and detailed assessments in 161 counties across 8 states (Alabama, Arkansas, Illinois, Indiana, Kentucky, Mississippi, Missouri, and Tennessee). The project created new, regionally comprehensive soil characterization maps, new ground motion maps for scenario events, updated transportation and utility network models for Memphis, TN, and St. Louis, MO, and methods to quantify the uncertainty in various impact model results. Initial results for a 2:00 a.m. scenario include 3,500 fatalities, 86,000 injured, ~$300 billion in direct economic loss, ~715,000 damaged houses, and ~2.6 million households without electrical power (Elnashai et al., 2009; see Box 2.1). In the eastern United States, an earthquake loss estimation for the metropolitan New York–New Jersey–Connecticut area showed that even a moderate earthquake would significantly impact the region’s large population (18.5 million) and predominantly unreinforced masonry building stock (Tantala et al., 2003). South Carolina recently completed a comprehensive risk assessment for a repeat of the 1886 magnitude-7.3 Charleston earthquake, producing an estimate of $20 billion in direct losses (URS et al., 2001).
Earthquake scenarios also provide situational awareness for emergency managers. Hawaii has developed a web-based catalog, the Hawaii HAZUS Atlas,15 of 20 “plausible” hypothetical earthquakes based on historic events that have happened in and around Maui and Hawaii counties. During an actual earthquake, emergency managers would be able to quickly assess the situation using a scenario event with similar location and size, while HAZUS modelers at the Pacific Disaster Center analyze the actual earthquake in near-real time and issue event-specific information.
Scientifically credible earthquake scenarios and ground motions are based on NEHRP products such as the National Seismic Hazard Maps and ShakeMaps. Disaggregation of the national hazard maps to produce scenario ground motion maps allows communities to examine the local seismic hazard from individual earthquakes. The maps are usually produced for peak ground acceleration, peak velocity, and acceleration at various periods, which would affect structures of different heights or lengths. Urban Seismic Hazard Maps integrate the necessary information about geologic hazards and characteristics at the community level. High-resolution local information is used to refine bedrock ground motion inputs and ground failure models. Levels of shaking at the ground surface depend on the thickness and nature of the soils resting on the bedrock. These types of data are captured through urban hazard mapping projects such as the pilot programs in the eastern San Francisco Bay area; Seattle,
WA; Memphis, TN; Evansville, IN; and the Greater St. Louis area. Urban Risk Maps, coupled with HAZUS-MH loss estimation software and databases, enable economic and social loss estimates as well as physical damage estimates. All these parameters combine to provide a realistic picture of the various types of impacts earthquakes can have on a local community.
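The dependence of surface shaking on local soils can be illustrated with a toy calculation: scale a bedrock peak ground acceleration (PGA) by a site-class amplification factor. The class labels below follow the NEHRP site-class convention, but the factor values are hypothetical placeholders chosen for demonstration, not values from any published map (real NEHRP site coefficients also vary with shaking intensity).

```python
# Hypothetical site-class amplification factors (illustrative only).
SITE_AMPLIFICATION = {
    "B": 1.0,  # rock (reference condition)
    "C": 1.2,  # very dense soil or soft rock
    "D": 1.5,  # stiff soil
    "E": 2.0,  # soft clay
}

def surface_pga(bedrock_pga_g: float, site_class: str) -> float:
    """Scale a bedrock peak ground acceleration (in g) by the
    amplification factor for the mapped site class."""
    return bedrock_pga_g * SITE_AMPLIFICATION[site_class]
```

On these assumed factors, a 0.3 g bedrock motion becomes 0.45 g on stiff soil (class D), which is why high-resolution soil mapping changes community-level damage estimates.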
Federal, state, and local emergency management organizations provide the framework to conduct and organize community exercises. Although pilot studies have demonstrated the value of earthquake scenarios for increasing public awareness, these are ultimately community-level programs where success is dependent on the extent of community involvement. Communities need to feel that they “own” the scenarios and the results that stem from these exercises, and increasing local capacity by providing support and training for staff helps to establish that ownership. FEMA-sponsored HAZUS training, coupled with guideline development and networking support for scenario developers through professional organizations like EERI, provides communities with the tools and the capabilities to develop their own scenarios. Scenarios can also allow stakeholders to perform “what if” types of analyses (i.e., if we mitigate x what is the benefit to y?) to help identify cost-effective mitigation and loss avoidance strategies.
While national seismic hazard maps and earthquake scenarios contribute to understanding earthquake hazards, there is an increased recognition among policy-makers, researchers, and practitioners of the need to analyze and map earthquake risk in the United States. As urban development continues in earthquake-prone regions, there is a growing concern about the exposure of buildings, lifelines, and people to the potential effects of destructive earthquakes. Earthquake risk assessments and loss estimations build on the scenario earthquakes described in Task 6 by integrating engineering and social science information in a GIS-based loss estimation methodology. Although publicly available risk assessment methodologies, data, and results have been developed and used by states and local communities, much has been based on simplified analysis modules and the use of estimated parameters or data. This has reduced the granularity of the analyses, creating uncertainty and limiting the ability to identify and act on specific hazard and risk issues. Many of these uncertainties can be addressed and reduced through NEHRP activities.
The primary source of uncertainty in loss estimation models is the lack of accurate input data. This includes not only the data used by the models—such as information about seismic source characterization, strong ground motion attenuation, local soil conditions, and inventories of the built environment—but also data used to develop the models themselves. Different parameters used in different loss estimation models can change the level of uncertainty of the mean by a factor of five or more. A sensitivity study conducted by Porter et al. (2002) showed that the parameters that most impact the damage estimate for a particular building are related to earthquake ground motion and include the fragility curves (which provide an estimate of damage to various building components as a function of ground motion), the spectral acceleration of the ground motion used in the analysis, and the ground motion record or time history used in the analysis. For example, one AEL estimate for California based on improved soils classification is ~30 percent less than an estimate based on a single default soil type throughout the state (Rowshandel et al., 2003; FEMA, 2008). Similarly, California building-related losses based on Next Generation Attenuation (NGA) models are 28 percent to 63 percent lower than those based on earlier ground motion prediction equations (Chen et al., 2009). Modifications to the HAZUS building damage modules in the HAZUS-MH4 release have been shown in earthquake simulations in Washington State to reduce estimates of deaths, injuries, and the number of people requiring shelter by as much as 30 percent (Terra et al., 2010). These types of reductions reflect improvements in the ability to characterize both hazard and risk. Continued improvements to earthquake risk and loss estimation methodologies and the development of community risk models are two activities that have been identified as NEHRP focus areas:
1. Promote the continued development and enhancement of earthquake risk assessment and loss estimation methodologies and databases. EERI (2003b) identified five areas of emphasis for system level simulation and loss assessment:
• Validation studies to calibrate accuracy of loss estimation models, incorporating the full range of physical and societal impacts and losses
• National models for seismic hazards, building and lifeline inventories, and exposed populations with application to other natural and man-made hazards
• Improved damage and fragility models for buildings (structural and nonstructural) and lifelines
• Improved indirect economic loss estimation models
• Development of system-level simulation and loss assessment tools that address lifeline interdependency issues (also addressed in Tasks 12 and 15)
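A fragility curve of the kind referenced above is commonly represented as a lognormal cumulative distribution: the probability of reaching a damage state rises with spectral acceleration, and shifting the assumed median capacity or dispersion directly shifts the damage, and hence loss, estimate. The sketch below uses this standard functional form with illustrative parameter values; it is not drawn from any published fragility set or from the specific models cited in the text.

```python
import math

def damage_probability(sa_g: float, median_g: float, beta: float) -> float:
    """Lognormal fragility curve: probability that a component reaches a
    given damage state at spectral acceleration sa_g (in g), where
    median_g is the median capacity and beta is the logarithmic
    standard deviation. Parameter values are illustrative only."""
    z = math.log(sa_g / median_g) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At the median capacity the probability is 0.5 by construction; lowering the assumed median (a more fragile building class) raises the damage probability at every shaking level, which is exactly how input-parameter choices propagate into the final loss estimate.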
The use of “default” data or simplified data contributes to the uncertainty in earthquake risk estimation. The aggregation of building inventory data to the census tract or census block level and the use of model building types with simplified fragility curves and model-driven databases may misrepresent the actual characteristics of building inventories. Comparisons with assessors’ databases indicate that the HAZUS database tends to underestimate nonresidential exposure in large urban counties and overestimate exposure (square footage) in smaller, less urban counties (Seligson, 2007). Improvements to HAZUS data management tools—such as the HAZUS Comprehensive Data Management System16 (CDMS)—help to address and reduce some of these uncertainties by allowing communities to import higher resolution data.
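The effect of tools like CDMS can be pictured as a simple data substitution: wherever higher-resolution local data exist, they override the model-driven defaults. The tract identifiers and square-footage figures below are hypothetical, used only to show the merge pattern.

```python
# Default (model-driven) nonresidential exposure by census tract, in sq. ft.
default_sqft = {"tract_01": 120_000, "tract_02": 80_000, "tract_03": 95_000}

# Locally collected assessor data, available only for some tracts.
assessor_sqft = {"tract_01": 150_000, "tract_03": 70_000}

# Prefer local data where available; fall back to the default elsewhere.
merged_sqft = {tract: assessor_sqft.get(tract, sqft)
               for tract, sqft in default_sqft.items()}
```

Tracts with local data take the assessor values (which may be higher or lower than the defaults), while uncovered tracts keep the model-driven estimate, so the inventory improves incrementally as communities contribute data.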
2. Promote a “living” community risk model. Because our communities are changing all the time, community resilience is a dynamic concept. Optimum decision-making, at all levels of society, depends on the availability of current information. Risk assessment needs to be made real to the community by being open and accessible to users. Defining the currently acceptable level of disruption at the local community level requires flexibility to incorporate:
• Local inventory data from various sources.
• New information and data (i.e., new attenuation models, building fragility curves, demographics, lifeline performance models, network interdependencies, indirect economic losses).
• New software or improvements to existing software, such as front-end and back-end modules (e.g., programs that can address lifeline network disruptions and network interdependencies).
In addition to the basic risk metrics already available (e.g., direct/indirect economic loss, casualties, debris), the development of new analysis techniques or new metrics that may be specific to an individual or to the needs of a community should be supported.
Establishing community risk models for Earthquake-Resilient Community and Regional Demonstration Projects would be one means of
showcasing how risk assessment can be used to inform risk reduction activities.
Existing Knowledge and Current Capabilities
The ability to compare risk across states and regions is critical to the management of NEHRP. Loss assessment tools provide uniform engineering-based approaches to measure damages and economic impacts from earthquakes. Many of these models are contained in commercial software packages that have been developed by firms specializing in the development and marketing of proprietary models to end users (e.g., the insurance industry), and include those developed by AIR Worldwide, EQECAT, Risk Management Solutions, and URS. In addition to these proprietary earthquake loss estimation programs, there are currently two publicly available loss estimation or risk assessment programs—FEMA’s HAZUS, and the Mid-America Earthquake Center’s MAEviz program:
• FEMA developed HAZUS in cooperation with the National Institute of Building Sciences (NIBS), and by 2010 had released two generations of software. The first release, HAZUS-99, only addressed earthquakes, whereas the HAZUS-MH releases address flood and wind as well as earthquakes.17
• MAEviz is a joint effort between the Mid-America Earthquake Center (MAE) and the National Center for Supercomputer Applications (NCSA) to develop open-source seismic risk assessment software based on a Consequence-based Risk Management methodology. Open-source architecture helps to reduce the time lag between discovery by researchers and implementation by end users. New research findings, software, improved methodologies, and data can be added to the system using a plug-in system. As a result, MAEviz is constantly changing and evolving with daily builds posted on the web.18
Another model, an international open-source code program called the Global Earthquake Model, is currently under development and is scheduled for release by the end of 2013.19
Uses of Risk Assessment and Loss Estimation Modeling
Risk assessment and loss estimation modeling has been successfully used at both national and community scales to promote awareness of
earthquake risks. USGS Circular 1188 (USGS, 1999) multiplied earthquake hazard (10 percent chance of exceedance in 50 years) by population size to create a risk factor that was used to identify the number of urban seismic stations needed as part of ANSS (see Task 2). FEMA 366 (FEMA, 2008) provides a national estimate of the long-term average annual earthquake loss to the general building stock (see Box 1.1) based on the HAZUS methodology. The current AEL for the United States, based on the 2000 Census, is $5.3 billion (2005$). As seen in Table 3.2, 43 metropolitan areas—led by Los Angeles and San Francisco—account for the majority (82 percent) of the earthquake risk in the United States. Outside of California, at-risk communities including Seattle, WA; Portland, OR; Salt Lake City, UT; and Memphis, TN, show that earthquakes are not just a California problem.
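The Circular 1188 calculation is conceptually simple: multiply a hazard measure by the exposed population and rank areas by the resulting factor. The sketch below illustrates the idea; the metropolitan-area names, hazard values, and populations are hypothetical stand-ins, not figures from the Circular.

```python
# (name, PGA with 10% chance of exceedance in 50 years in g,
#  population in millions) -- all values hypothetical.
metros = [
    ("Metro A", 0.60, 10.0),
    ("Metro B", 0.30, 5.0),
    ("Metro C", 0.15, 8.0),
]

# Risk factor = hazard level x exposed population; rank areas by it.
ranked = sorted(((name, pga * pop) for name, pga, pop in metros),
                key=lambda pair: pair[1], reverse=True)
```

Metro A's combination of high hazard and large population puts it first, which is the kind of relative ranking used to apportion urban seismic stations among candidate cities.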
Loss estimates can also be used to gauge the effectiveness of various mitigation strategies such as building retrofitting or the transfer of risk through the sale of property or the purchase of earthquake insurance. FEMA (1997b), for example, estimated that the direct economic losses (building and contents damage, and income losses) in an event similar to the 1994 Northridge earthquake would have been reduced by 40 percent ($16.6 billion compared to $27.9 billion) if all buildings had been built to current high seismic design standards prior to the earthquake. Had no seismic standards been in place, losses were estimated to have been 60 percent higher than those for the baseline 1994 scenario ($45 billion versus $27.9 billion). A 2001 FEMA report, based on the HAZUS-99 earthquake loss estimation methodology, examined the impact of seismic rehabilitation in reducing the economic and social losses from magnitude-7 earthquakes on the Newport-Inglewood Fault in southern California and the Hayward Fault in northern California (Feinstein, 2001). In both cases, the HAZUS modeling indicated that a comprehensive rehabilitation program could reduce building and contents damage losses by more than 25 percent and business interruption losses by more than 60 percent. These types of retrospective loss avoidance studies show how future losses can be recognized and avoided through simulation modeling and proactive community mitigation programs.
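The loss-avoidance arithmetic behind these comparisons is straightforward; the sketch below reproduces the FEMA (1997b) Northridge comparison cited above (figures in billions of dollars).

```python
def loss_avoided_pct(baseline_loss: float, mitigated_loss: float) -> float:
    """Percentage reduction in losses relative to a baseline scenario."""
    return 100.0 * (baseline_loss - mitigated_loss) / baseline_loss

# FEMA (1997b): 1994 Northridge baseline losses vs. losses with all
# buildings built to current high seismic design standards ($ billions).
northridge_reduction = loss_avoided_pct(27.9, 16.6)  # roughly 40 percent
```

The same function applied to the no-code counterfactual ($45 billion baseline, $27.9 billion with 1994-era standards) recovers the roughly 40 percent avoidance attributed to the standards that were in place.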
Loss estimates are affected by uncertainty—uncertainty in estimating the likelihood and intensity of strong ground motion, uncertainty in actual community building and infrastructure inventories, uncertainty concerning the levels of damage to the built environment, and uncertainty in the social and economic losses associated with the predicted damage. These uncertainties also impact estimates of financial risk and the premiums that insurers charge for earthquake insurance (NRC, 2006b). The high cost of earthquake insurance, resulting in part from the uncertainty in estimates of seismic risk, limits the amount of earthquake insurance purchased. Analyses conducted as part of the 140th anniversary of the 1868 Hayward, CA, earthquake indicate that only 6 to 10 percent of total residential losses and 15 to 20 percent of commercial losses would be covered by insurance following a repeat of the magnitude-6.8 to 7.0 earthquake (RMS, 2008). In contrast, approximately 53 percent of the economic losses to homes and businesses following Hurricane Katrina were covered by insurance, including payouts from the National Flood Insurance Program (Figure 3.13).

FIGURE 3.13 Comparison of insured and economic losses from recent U.S. natural disasters (in 2008$). Insured losses for Hurricanes Andrew and Katrina include National Flood Insurance Program (NFIP) policies. SOURCE: Risk Management Solutions; Zoback and Grossi (2010).
To continue the progress already made in community earthquake risk assessment, continued NEHRP funding for the development of nationally consistent datasets—such as the National and Urban Seismic Hazard Maps discussed in Tasks 4 and 6, and improved fragility curves for model building types that account for regional differences in construction practices, code levels, and structural condition—is essential. Support for an
open cyber environment that supports the continual update and improvement of risk assessment software, and the continued development of new basic physical models (e.g., fire following earthquake) is also necessary.
In addition to these types of national-scale products, the NEHRP agencies also work at the community level providing expertise and data for knowledgeable risk management activities. FEMA has distributed HAZUS throughout the United States and has been instrumental in establishing local HAZUS User Groups (HUGs) that provide local GIS support and expertise to communities. USGS works with state and local agencies to improve urban earthquake hazard maps and provides guidelines and procedures for collecting site condition information at the community level. Mapping local geology in three dimensions and incorporating more detailed grids into maps of site response, liquefaction, and landslide potential increase the granularity of these data, which then improve the resolution of the community earthquake risk assessments.
Open- Versus Closed-Source Software
In addition to informing risk assessment and mitigation activities, the HAZUS loss estimation software—which is “closed-source” software (i.e., the source code is not available to the community)—is also used as a decision support tool for emergency management (e.g., for requests for Presidential Disaster declarations and the development of State Mitigation Grants). Although a standard source code is necessary for consistent national decision-making, an open version where developers can test new data and develop algorithms should be supported as well. Increasingly, the scientific and engineering community is making use of “open-source” software, to create a cyber environment where new data, concepts, and applications can be developed and tested. Community model environments like MAEviz and the Open System for Earthquake Engineering Simulation (OpenSees)20 provide a software framework for regional- or community-based scenario development and for simulating impacts and the seismic response of structural and geotechnical systems. Linking the open-source environment to the risk and loss assessment development process would enable faster application and implementation of research results. Once these new models and concepts have been appropriately vetted, they can be incorporated into a more standardized platform for use by the NEHRP agencies.
It would be useful for the NEHRP agencies to implement a series of coordinated activities tied to the Earthquake-Resilient Community and Regional Demonstration Projects, discussed in Task 18. These activities would provide the basic data and mapping (through the USGS Urban Hazard Mapping projects) and building and infrastructure inventories (through FEMA-supported HAZUS activities) to support community risk management activities.
Many stakeholders, especially those in areas of critical infrastructure, are reluctant or, because of provisions in the Homeland Security Act of 2002, are unable to release inventory information beyond their organizations. These restrictions impact the ability of communities to recognize and plan for service disruptions during disasters. Public-private partnerships should be encouraged, where individual utilities and lifeline organizations conduct their own internal risk assessments—using standardized methodologies and earthquake scenarios—and then share the results with their counterparts and other stakeholders to address inter-utility interdependencies and community impacts from the loss of utility service. These types of partnerships would permit more informed disaster planning within the community.
As summarized most recently in NRC (2006a), social science research under NEHRP, from its earliest studies to more recent work, has highlighted, on the one hand, major obstacles to achieving anything more than modest levels of pre-disaster mitigation and preparedness practices at household, organizational, community, and regional levels, and on the other hand, the often extraordinary resilience of human response at all these levels during and after actual earthquakes and other events (Kreps and Drabek, 1996; Kreps, 2001; Drabek, 2010). Over the decades, this research has contradicted misconceptions that during a disaster panic will be widespread, those expected to respond will abandon their roles, social institutions will break down, and anti-social behaviors will become rampant. The more important research questions have become how and why communities and regions are able to leverage expected (and perhaps planned) and improvised emergency response and recovery activities in both the public and private sectors.
There are major demands and considerable public pressure in the
immediate post-disaster environment to return to normalcy as quickly as possible (Kreps, 2001; Tierney et al., 2001; Tierney, 2007; Johnson, 2009). That is why social science studies of expected and improvised emergency response activities under NEHRP’s legislative research mandate continue to be important. Yet, social science research has suggested also that the post-disaster environment provides one of the most opportune times for disaster recovery activities to support hazard mitigation—to rebuild stronger, change land-use patterns, and reduce development in hazardous areas, and also to reshape those negative social, political, and economic conditions that existed pre-event (NHC, 2006; NRC, 2006a; Olshansky et al., 2006). Thus, just like emergency response activities, disaster recovery activities need to be prepared to the extent possible, and then executed appropriately to reduce future risks. The NEHRP agencies, most notably FEMA, have responsibility for many of the federal programs that provide funding to communities and regions for emergency response and recovery. Thus, the social science research under NEHRP proposed here aims to ensure that its related mission to enhance community and regional resilience can be more fully realized.
Fundamental social science studies are needed of post-disaster practices that increase the resilience of communities and regions following large-scale earthquakes and other major disasters (see also Tasks 10 and 11). Such studies will document and model the mix of expected (and perhaps planned) and improvised emergency response and recovery activities and outcomes at community and regional levels, as they are supported in varying degrees by the federal government. The primary research targets on emergency response and recovery activities are governmental, medical, and educational organizations, social services agencies, public utilities, and industrial and commercial organizations. The disaster demands to which these entities must respond include mobilizing emergency personnel and resources, evacuation and other types of protective action, search and rescue, care of victims, damage assessment, restoration of lifelines and basic services, reconstruction of the built environment, and maintaining continuity of the economy and government. The studies we propose will contribute to both NEHRP’s legislative research mandate and its related mission to enhance community and regional resilience (see also Task 18).
Existing Knowledge and Current Capabilities
While the clear majority of pre- and post-disaster practices at community and regional levels are expected and sometimes planned, improvisation is an essential complement to pre-determined activities. To date, social science studies under NEHRP have given primary attention to emergency response practices and, to some extent, to the pre-disaster preparedness practices related to them. These studies have documented the mix of expected and improvised activities of emergency management personnel, the public and private organizations of which they are members, and the multi-organizational networks within which these individual and organizational activities are nested (e.g., Kreps and Bosworth, 2006; NRC, 2006a; Mendonca, 2007).
Little research has focused on pre- and post-disaster recovery practices (expected or improvised) in either the public or private sectors (NRC, 2006a). However, the outcomes of these practices are increasingly being given focused attention by social scientists (e.g., NRC, 2006a; Rose, 2007; Alesch et al., 2009; Olshansky and Chang, 2009; Zhang and Peacock, 2010). Accordingly, the proposed research has three primary aims: first, to build on existing knowledge of emergency response and related preparedness practices; second, to expand knowledge about disaster recovery and related preparedness practices; and third, to develop models and decision support tools that are increasingly grounded in social science knowledge about disaster response and recovery. The use of such models and tools, we believe, will enhance community and regional resilience before, during, and after earthquakes and other disasters.
The emergency response improvisations that have been documented systematically for a broad range of disasters include the following:
• At the individual level, the spontaneous adoption of important post-disaster roles by individuals who, based on their pre-disaster positions, would not be expected to do so. In effect, such individuals rise to the occasion because they happen to be in the right place at the right time when there is a compelling demand for action and often leadership.
• At the individual level, the spontaneous development of new as opposed to pre-existing relationships among individuals performing important post-disaster roles. New relationships are forged because they facilitate the performance of roles by either or both partners in the relationship.
• At the individual level, the unconventional performance of post-disaster roles regardless of whether they are pre-determined or spontaneously adopted, and regardless of whether they are facilitated by pre-existing or new relationships. The improvisations can range from procedural or equipment changes related to how the roles are enacted, to changes in the usual locations of the role enactments, to taking on activities that are not authorized, to the issuing of orders to others over whom there is no pre-existing authority, and to the commandeering of supplies and equipment without prior approval.
• At the individual level, the primary reasons for the above improvisations are human and material resource needs, operational issues, time pressures to get things done, and frequently mixes among these kinds of problems and opportunities at either intra- or inter-organizational levels of response.
• At the organizational level, the timing and location of core activities may be changed, human and material resources may be reconfigured, some core tasks may be suspended while others are expanded or newly created, and, in some cases, relatively complete short-term reorganizations of pre-disaster routines may occur.
• At the multi-organizational response network level, the most frequent types of improvisation relate to unconventional exchanges of human and material resources, newly coordinated activities, or unconventional exchanges of resources in association with newly coordinated activities. More elaborate arrangements that involve changes of authority patterns or one or more organizations being absorbed within more inclusive entities are not common. They are not unknown, however: completely new organizations of various forms and sizes have sometimes arisen during large-scale disasters and proven to be consequential.
The individual, organizational, and multi-organizational improvisations that have been documented by previous and ongoing social science studies relate primarily to immediate post-disaster demands for emergency services as opposed to short- and longer-term demands for reconstruction and recovery. An important difference between emergency response and recovery is that the key players in the former (e.g., police, fire, emergency medical services, public utilities, local emergency management offices) and the latter (e.g., community development agencies, land-use boards, real estate companies, banks, insurance companies, local businesses) generally do not interact routinely and have different organizational cultures. However, the data collection tools that have been used to codify the mix of expected and improvised activities by emergency response personnel in the public sector can be applied across the board. But if data collection is to become more comprehensive, standardized research protocols on expected and improvised activities within and between the public and private sectors must be developed, and new arrangements for data archiving, data management, and data sharing must be created.
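One way to picture the standardized protocols called for here is as a common record format for coding activities across actor levels, sectors, and phases. The schema below is a hypothetical illustration of such a format, not an established research instrument; all field names and example entries are invented.

```python
from dataclasses import dataclass

@dataclass
class ActivityRecord:
    """A coded post-disaster activity (hypothetical schema)."""
    actor_level: str   # "individual", "organization", or "network"
    sector: str        # "public" or "private"
    phase: str         # "response" or "recovery"
    improvised: bool   # True if improvised rather than planned/expected
    description: str

# Invented example entries illustrating the coding scheme.
records = [
    ActivityRecord("individual", "public", "response", True,
                   "bystander directs traffic near a collapsed overpass"),
    ActivityRecord("organization", "private", "recovery", False,
                   "bank activates a pre-arranged continuity plan"),
]

# A shared format makes cross-study queries trivial, e.g., all
# improvised activities during the response phase.
improvised_response = [r for r in records
                       if r.improvised and r.phase == "response"]
```

Because every study would populate the same fields, archived datasets from different events and research teams could be pooled and compared directly, which is the point of standardization.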
Social scientists studying earthquakes and other hazards have used a myriad of research methods. They have employed both quantitative and qualitative data collection strategies. They have conducted pre-, trans-,
and post-event field studies of individuals, households, groups, and organizations. These studies have relied on open-ended to highly structured surveys and face-to-face interviews. They have used public access data such as census materials and other historical records from public and private sources to document community and regional vulnerabilities to earthquakes and other hazards. They have employed spatial-temporal data and related statistical models to document these vulnerabilities as well. They have engaged in archival studies of previous events when data from the original studies have been stored and made accessible. They have run disaster simulations and gaming experiments in laboratory and field settings. The social science research methods heretofore used have been enabled by both “off the shelf” and cutting-edge technologies (NRC, 2006a, 2007).
Three key enabling requirements relate directly to the post-disaster response and recovery research proposed above: standardized data collection, improved data management, and sustained model building. Meeting these requirements will support the development and use of management support tools by those actively engaged in emergency response and recovery at local and regional levels.
• Standardized Data Collection: Post-disaster data collection by social scientists historically has been undertaken under very difficult conditions (see also Task 9). The timing and location of field observations have been heavily constrained by the circumstances of the events themselves, as have the possibilities to make audio and video recordings of response activities. There have been special constraints and difficulties in sampling and collecting data on emergency response personnel, their organizations, and social networks of responding organizations within the public and private sectors. Unobtrusive data such as meeting minutes, formal action statements, communications logs, memoranda of understanding, telephone messages, and email exchanges are difficult and sometimes impossible to obtain. The proposed pre-selection of community and regional demonstration projects (see also Task 18) and collection of data on a cross-section of communities prior to and following disasters (see also Task 11) are therefore very important for reducing the inherent ad hoc quality of most previous post-disaster studies. The cooperation of key organizations in the public and private sectors can be secured and obviously will become essential if or when actual events occur. With that cooperation, sample frames of those engaged in emergency response and recovery activities can be pre-determined to a much larger extent than has been possible in the past. Standardized data protocols on expected and improvised activities and their determinants can be developed and made ready on a standby basis. Methods of data storage and agreements on
data sharing can be set up before rather than after the fact. Previous attempts to standardize social science data on earthquakes and other hazards have been intermittent and not well coordinated among individual researchers or teams working on the same or related topics. But the potential for standardization in future studies is enormous. Simply put, social scientists now know what to look for in studies of post-disaster response and recovery. State-of-the-art computing and communications technologies can be used to implement data protocols more efficiently and effectively than in the past.
• Sustained Model Building: Modeling is the sine qua non of science (see also Tasks 6, 7, 10, 12, and 17). Its goal is to help researchers and practitioners alike to better understand how the world works. Technological advances in computing have enabled the development of complex models that can be used to describe and explain phenomena in both physical and social systems, from the smallest to the most inclusive imaginable. These advances have also contributed greatly to the development of interdisciplinary research. An important use of computing in the natural sciences, social sciences, and engineering remains statistical models of existing data. These statistical models range from relatively simple to highly complex configurations of variables. They are being used increasingly to create structural models of post-disaster response and recovery practices and outcomes. Over time, expanding computing capacity has enabled the development of decision-making models as well. Decision-making models often rely on simulations and other forms of field or laboratory experimentation that place subjects (e.g., emergency response and recovery practitioners) in hypothetical situations (e.g., disaster circumstances) to see how decisions are made and actions taken (see also Task 6). It is important to emphasize that these decision-making models are theoretically driven, and their power is enhanced to the extent that they are empirically based (NRC, 2006a). In combination, structural and decision-making models can serve as a key foundation for the development and use of preparedness and training tools at community and regional levels (see also Task 18).
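As a concrete illustration of the standby data protocols described above, the sketch below defines a minimal standardized record for one observed response activity, with a simple pre-event validation rule. All field names and the validation logic are hypothetical assumptions for illustration, not an established NEHRP or NSF schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical record format for a standardized post-disaster response
# observation; field names are illustrative only.
@dataclass
class ResponseObservation:
    event_id: str              # e.g., an earthquake event identifier
    organization: str          # responding organization observed
    activity: str              # expected or improvised activity observed
    improvised: bool           # True if the activity was not pre-planned
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def validate(record: ResponseObservation) -> bool:
    """Minimal standby validation rule: required text fields are non-empty."""
    return all([record.event_id, record.organization, record.activity])

obs = ResponseObservation("us2008xyz", "County EOC", "shelter activation", False)
assert validate(obs)
print(asdict(obs)["activity"])  # -> shelter activation
```

Agreeing on such a record format before an event means field teams can begin structured collection immediately, rather than improvising a schema under emergency conditions.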
Three central implementation issues, and their possible resolution, merit serious consideration: lack of predictability about when large-scale earthquakes and other major disasters will occur; current lack of standardized research protocols on post-disaster response and recovery activities and related pre-disaster preparedness practices; and lack of standby research facilities and capabilities to implement standardized research protocols and manage data resulting from their use.
• Predictability of large-scale earthquakes and other major disasters: At community and regional levels, disasters are low-probability events and, as such, very difficult to predict. Accordingly, research sites for post-disaster studies are largely ad hoc, there are major difficulties in mobilizing field research teams quickly, and emergency contexts present serious difficulties for data collection. Despite these historical research barriers, social scientists have been able to collect a wide range of ephemeral data on emergency response and recovery activities. The pre-selection of pilot communities and regions during the next 5-20 years of NEHRP's Strategic Plan (see Tasks 11 and 18) will facilitate social science research greatly: first, because research plans can be developed in advance; second, because buy-ins by local and regional officials in the public and private sectors will be more likely; and third, because the likelihood of one or more events occurring in the pilot research sites during that period will be greater (NRC, 2006a).
• Lack of standardized data collection protocols: Much of the groundwork has already been established; thus, the potential for highly structured
research designs and replicable datasets across multiple disasters can now be realized. The key requirement is to have standardized data collection protocols on emergency response and recovery activities already in place before specific events occur (NRC, 2006a). To that end, we suggest that NEHRP agencies fund as soon as possible, under the auspices of the National Science Foundation, a specific initiative on the development of these research protocols. The competition should attract existing or new research teams interested in related methodological issues. The budget to fund 2-4 projects during the next 2 years, excluding the cost of actual data collection, should be on the order of $1.5 million.
• Lack of standby research facilities and capabilities to implement research protocols and address related data management issues: The development of standardized research protocols needs to be matched by the existence of standby research capabilities and facilities to collect, manage, and disseminate the resulting data. Existing university-based social science research centers could serve this purpose in the near term through newly designated funding. But ultimately, as recommended in NRC (2006a), a National Center for Social Science Research on Earthquakes and Other Disasters is needed. Such a center would include a distributed consortium of investigators and research units nationally and internationally. Similar to the Network for Earthquake Engineering Simulation (NEES), it would take advantage of telecommunications technology to link spatially distributed data repositories, facilities, and researchers. It would provide an institutionalized, integrative forum for social science research on hazards and disasters, much as the Southern California Earthquake Center (SCEC) does for the earthquake earth sciences community. We suggest that the NEHRP agencies provide funding for the new social science center, under the auspices of the National Science Foundation, for the next 5 years at a budget of $2 million per year. Such funding would be consistent with previous NSF funding of earthquake research centers.
Although catastrophic earthquakes are rare, damaging earthquakes occur more frequently. Capturing, distilling, and disseminating lessons about the geological, structural, institutional, and socioeconomic impacts of earthquakes, as well as post-disaster responses, are critical requirements for advancing knowledge and more effectively reducing earthquake losses. The 2008 NEHRP Strategic Plan identifies the creation and maintenance of a repository of important post-earthquake reconnaissance data as a strategic priority to improve understanding of earthquake processes and impacts (NEHRP, 2007). This task aims to ensure that NEHRP's activities are more effective in the post-disaster period by improving post-earthquake information acquisition and management.
This task proposes to construct and maintain a national post-earthquake information management system to capture, distill, and disseminate lessons from damaging earthquakes. Building the system will itself be a significant engineering effort, and it will require sustained multi-year funding to implement and maintain so that data are cost-effectively preserved over time and remain accessible and usable for future infrastructure design and for mitigation and disaster management efforts. It will help ensure that NEHRP's mission—to develop, disseminate, and promote knowledge, tools, and practices for earthquake risk reduction in the pre-disaster environment—can also be successful in the post-disaster environment.
Existing Knowledge and Current Capabilities
It has long been recognized that any national effort to reduce economic losses and social disruption resulting from severe natural disasters requires a mechanism to capture and preserve engineering, scientific, and social performance data in a comprehensive and coherent system that will contribute to our learning from each disaster event that occurs (EERI, 2003a). Such a resource would play a vital role in efforts to enhance infrastructure design and to optimize mitigation, disaster planning, and response and recovery efforts. Despite this recognition, no mechanism is currently in place across the United States to ensure that necessary data are systematically collected and archived for future use. Further, those data that are gathered often are lost relatively soon after they have been collected, instead of being organized and maintained to enable study, analysis, and comparison with subsequent severe natural disasters that may not occur for many years or even decades (NRC, 2006a).
There are many agencies and professional organizations that currently support or are involved in post-disaster information acquisition and management. They include NSF's funding of EERI's Learning from Earthquakes program and the Geotechnical Extreme Events Reconnaissance (GEER) Association; both are working on more systematic approaches to conducting the NSF-sponsored reconnaissance efforts of the effects of extreme events. USGS is also very active in post-disaster reconnaissance, both in
the United States and internationally, and it has developed a plan to help coordinate NEHRP Post-Earthquake Investigations (USGS, 2007).
Recently, FEMA funded some initial scoping of the requirements for such a system under the auspices of the Multihazard Mitigation Council's American Lifelines Alliance (ALA). The goal of the ALA effort was to identify both infrastructure requirements (e.g., data system architecture, technological needs and issues), and implementation requirements (e.g., facilities, expertise, policies, and funding) for a Post-earthquake Information Management System (PIMS). A PIMS would provide users with the ability to query data in an intuitive and interactive manner to investigate the past performance of the built environment during earthquakes.
The ALA held a Workshop on Unified Data Collection in Washington, DC, on October 11-12, 2006 (NIBS, 2007). The workshop served as a forum for open and candid discussion of common needs of the utility and transportation systems (lifelines) community and possible opportunities for cooperation and collaboration in addressing those needs. The findings from the workshop contributed to the identification of “Improve post-earthquake information acquisition and management” as an objective and “Develop a national post-earthquake information management system” as a strategic priority in the 2008 NEHRP Strategic Plan (NIST, 2008). The workshop participants recognized that an integrated PIMS needed to include all aspects of the built environment and could potentially be expanded in scope to address all types of natural hazards.
In December 2007, the ALA, with funding from FEMA, tasked a team of researchers from the University of Illinois to conduct a 10-month scoping study to assess user needs and system requirements, challenges, and system-level issues for implementing a PIMS, and the design strategy needed to overcome the challenges and satisfy user needs (PIMS Project Team, 2008). As a follow-on project, researchers at the University of Illinois utilized "wiki" technology to collect summaries of information needs and applications, so that anyone can review and edit existing summaries or add a new summary. Funding and implementation of the ALA project did not occur, however, nor has the effort been integrated with GEER, EERI, USGS, and other more recent efforts.
Building a more earthquake-resilient nation will require better systems to capture, distill, and disseminate lessons from damaging earthquakes. Development of a PIMS would be a significant engineering effort and would require sustained multi-year funding to implement a system capable of cost-effectively preserving data for 50 to 100 years. There are
both user needs and system requirements/issues for a PIMS (PIMS Project Team, 2008):
• User Interfaces: the interfaces that users would prefer to use for the discovery and retrieval of PIMS data.
• Information Needs: the types of information that users want to obtain from PIMS, including a range of general information, hazard data, building data, bridge data, lifeline data, critical structures data, historical data, loss/socioeconomic data, and pre-event inventory data.
• Data Access, Privacy, and Security Issues: a PIMS would need to respect data privacy, consistent with state and federal laws, by removing personal information from data, creating aggregated datasets, and restricting access to certain types of data.
• Direct Ingestion of Data: a PIMS would need to provide the capability to directly upload data into the system.
• Harvesting and Exchange of Data: a PIMS would need to be able to harvest and exchange data with a variety of existing electronic databases.
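To make the ingestion, harvesting, and retrieval requirements above concrete, here is a minimal sketch of a PIMS-style store. The class name, record fields, and JSON export format are assumptions for illustration; a real PIMS would harvest from a variety of existing electronic databases, such as those maintained by USGS, EERI, or GEER.

```python
import json

# Minimal sketch of a post-earthquake information store supporting
# direct ingestion, harvesting from an external export, and retrieval.
class PostEarthquakeStore:
    def __init__(self):
        self._records = []

    def ingest(self, record: dict) -> None:
        """Direct ingestion: upload one record into the store."""
        if "event_id" not in record:
            raise ValueError("record must identify its earthquake event")
        self._records.append(record)

    def harvest(self, external_json: str) -> int:
        """Harvest records from an external database export (JSON text)."""
        records = json.loads(external_json)
        for r in records:
            self.ingest(r)
        return len(records)

    def query(self, event_id: str) -> list:
        """Simple discovery/retrieval interface: filter records by event."""
        return [r for r in self._records if r["event_id"] == event_id]

store = PostEarthquakeStore()
store.ingest({"event_id": "nc1989", "type": "bridge", "damage": "collapse"})
n = store.harvest('[{"event_id": "nc1989", "type": "building", "damage": "minor"}]')
print(n, len(store.query("nc1989")))  # -> 1 2
```

The full system-level issues (curation, privacy, 50- to 100-year preservation) sit well beyond such a sketch, but the same three interfaces — ingest, harvest, query — anchor the user needs listed above.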
In addition to serving the direct needs of users and other stakeholders, a PIMS must address their implicit assumptions about how the system's scope should align with their goals. A PIMS also must address issues related to the cultural, political, technological, and organizational context in which it will operate. System requirements and system-level issues have been identified that relate to data collection, organization, and storage; data curation and quality assurance; information presentation, discovery, and retrieval; privacy and security; and long-term data preservation.
A PIMS is similar in scale to NSF’s national environmental observatory efforts, and the overall timescale from project initiation to mature operational capability is 5 to 10 years; however, development could occur in two phases (PIMS Project Team, 2008):
• An initial PIMS capability (PIMS Phase 1) could be accomplished in 2 years and would include development of an initial PIMS capable of harvesting data from a few key sources, basic ingestion and archiving capability for hazards events in the near future, and a simple interface to provide for data discovery and retrieval.
• Phase 2 would take from 5 to 10 years and would involve development of a more advanced, “full-function” PIMS capable of harvesting data from a wide variety of sources, providing advanced tools for ingesting and archiving and offering sophisticated user interfaces for data discovery
and retrieval. Phase 2 would involve about seven to nine pilot projects, each with both a development phase and an implementation phase. Operations costs would continue beyond the development period of Phase 2.
Social science research complements research in other fields of earthquake resilience. For example:
• Hazard loss estimation, including its extension to macroeconomics, helps us establish an understanding of the severity of the earthquake problem to society.
• Psychology helps us understand how people perceive the earthquake threat and the need to address it.
• Decision science and behavioral economics assess the motivations and prudence of individual decision-making with respect to this threat.
• Organizational behavior analyzes group decisions in the realms of business, government, and nonprofit organizations.
• Sociology emphasizes how individuals and groups interact under stress in the aftermath of an event.
• Economics and finance help provide guidance on the allocation of funding to projects and policies, including insurance.
• Planning examines how the built environment can be modified by structural and nonstructural approaches in a cohesive fashion.
• Political science indicates how research, resource availabilities, and debate are translated into laws and regulations by several levels of government.
In the post-disaster environment, governments—particularly local governments—face considerable public pressure to provide a quick return to normalcy. Yet, research has consistently shown that the post-disaster environment provides one of the most opportune times for mitigation—to rebuild stronger, change land-use patterns, reduce development in hazardous areas, and reshape preexisting social, political, and economic conditions—and thereby helps to break the repetitive loss cycle (Berke et al., 1993; Schwab, 1998; Mileti, 1999; NHC, 2006; NRC, 2006a). Long-term recovery needs time to be accomplished thoughtfully and to allow for proper deliberation and public discourse on how to achieve risk reduction and betterments. The NEHRP agencies, most notably FEMA, have responsibility for many of the federal programs that provide funding to communities to recover from an earthquake or other damaging
disasters. This action aims to ensure that NEHRP’s mission can be more effective in the post-disaster period by promoting support to increase the earthquake resilience of affected communities, including mitigation ahead of the next disaster.
Basic and applied research in the social sciences, as well as such related fields as business and planning, is needed to evaluate mitigation and recovery (both short-term business and household continuity and long-term economic and community viability). Such studies would examine individual and organizational motivations to promote resilience, the feasibility and cost of resilience actions, and the removal of barriers to successful implementation. They should focus on the appropriate roles for both the private and public sectors, and look toward partnerships that avoid one sector undercutting the appropriate role of the other. Improved data and models are needed at the basic research level that will promote a sounder basis for policy prescriptions. Key hypotheses should be tested to give the models much needed behavioral content. Benefit-cost and other evaluative studies of pre-disaster mitigation and post-disaster resilience are encouraged to improve the management of our nation’s resources.
Task 8 complements this section by addressing emergency response and related short-term recovery. Task 11 recommends the creation of an Observatory Network that would help promote these goals in part, especially with respect to ongoing data collection and analysis. However, this section covers a broader range of activities. Also, continued sponsorship of individual researchers and of established and new research centers is encouraged to promote innovation, practical tools, and policy advice on resilience to earthquakes.
Existing Knowledge and Current Capabilities
Pre-Disaster Loss Prevention
On the mitigation side, research is well advanced in the social sciences. Studies have found that FEMA hazard mitigation grants yield a benefit-cost ratio of 4 to 1 overall, and 1.5 to 1 for earthquakes (MMC, 2005; Rose et al., 2007). The ratio for earthquakes is lower than for other threats because earthquake mitigation has focused much more on life saving than on property damage, and because there are more easily implemented mitigation actions (e.g., buy-outs of homes in flood plains) for these other threats. This study of mitigation projects, however, was heavily weighted toward government initiatives, and more research is needed on private-sector efforts. It is important to resist the temptation to assume that the profit motive will guide business decision-makers in the marketplace to the optimal level of mitigation. Externalities of individual decisions, i.e., impacts of one decision-maker that affect others in some positive or negative way, are prevalent in this area. An excellent example of the extent of this issue is the work on “contagion effects” by Heal and Kunreuther (2007), which points out the limitations of one person undertaking protective measures if neighbors do not, such as in the case of the fire threat following earthquakes.
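For readers unfamiliar with the metric, the benefit-cost ratios cited above follow from a simple calculation: discounted benefits of mitigation divided by its cost. The dollar figures in this sketch are invented purely to illustrate the arithmetic.

```python
# Benefit-cost ratio (BCR) arithmetic behind figures such as the 4:1
# overall and 1.5:1 earthquake ratios cited above.
def benefit_cost_ratio(present_value_benefits: float, cost: float) -> float:
    """BCR = present value of avoided losses / cost of mitigation."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return present_value_benefits / cost

# Hypothetical grant portfolio: $14M in avoided losses from $3.5M spent.
print(round(benefit_cost_ratio(14.0e6, 3.5e6), 1))  # -> 4.0
```

A BCR above 1.0 indicates that expected avoided losses exceed the mitigation expenditure; the discounting of future benefits, omitted here, is what makes real estimates contentious.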
The Multihazard Mitigation Council (MMC) study only scratched the surface in understanding the effectiveness of individual process grants and broader mitigation strategies, as well as general resilience and capacity-building grants. The former refer to funding for activities such as earthquake mapping and monitoring systems. The latter refer to broader block grants such as Project Impact. These are difficult to assess because they are fewer in number and their effectiveness is difficult to measure (Rose et al., 2007).
One of the greatest research needs in the pre-event phase remains, after many years, understanding why individuals and businesses fail to make rational decisions about self-protection and insurance (Ehrlich and Becker, 1972; Jackson, 2005). Some excellent studies have identified limitations of the decision-making process under the category of “bounded rationality” (Gigerenzer, 2004). This work includes the classic study by Kunreuther et al. (1978) on understanding the failure of people to purchase adequate amounts of earthquake insurance. More research is needed on how to counter this problem, including overcoming myopia and other perception issues, dealing with moral hazard, and determining how government policy can inspire individual motivations rather than undercut them. An excellent start in the general areas of hazards and terrorism has been provided by Smith et al. (2008) and Kunreuther (2007).
The research on community resilience has made substantial progress in just a few years (e.g., Norris et al., 2008). This research still faces challenges and can also yield many valuable spin-offs to researchers pursuing interdisciplinary and comprehensive approaches to resilience.
Predictive models of individual and community resilience are also valuable. Some initial attempts, analogous to Cutter’s design of a vulnerability index (Cutter et al., 2003), are being developed (Schmidtlein et al., 2008; CARRI, 2010; Cutter et al., 2010; Sherrieb et al., 2010).
Disaster Recovery and Reconstruction
Most post-disaster resilience activities are intended to reduce business interruption. Nearly all property damage occurs during the ground
shaking, but business interruption begins at that point and continues until recovery is complete. The analysis is complicated by the fact that business interruption has extensive behavioral and policy connotations. For example, it is a function of the length of time it takes to recover, which is not fixed but highly dependent on the mix of individual motivations and government policies.
Operational metrics that can be applied to measure post-disaster resilience have been developed and applied effectively (Chang and Shinozuka, 2004; Haimes, 2009). Studies have examined the relative contribution of various types of resilience in reducing losses from natural disasters and terrorism (e.g., Rose and Liao, 2005; Rose et al., 2007). However, only a few studies have actually evaluated the costs of these various resilience strategies (e.g., Vugrin et al., 2009). A priori, one would expect these strategies to be much less costly than pre-disaster mitigation. Conservation of inputs that have become even scarcer usually pays for itself, the cost of inventories is merely their carrying cost, and production recapture at a later date requires only the payment of overtime for workers.
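One way such operational metrics are often expressed (in the spirit of Rose, 2004, 2007) is as the fraction of the maximum potential business interruption loss that is avoided. The sketch below shows that calculation; the dollar figures are illustrative assumptions, not results from the cited studies.

```python
# Resilience as the share of expected (worst-case) loss that is avoided:
# (expected_loss - actual_loss) / expected_loss.
def resilience(expected_loss: float, actual_loss: float) -> float:
    """Returns a value in [0, 1] when actual_loss <= expected_loss."""
    if expected_loss <= 0:
        raise ValueError("expected_loss must be positive")
    return (expected_loss - actual_loss) / expected_loss

# A region expected to lose $100M of output loses only $40M after
# conservation, input substitution, and production recapture.
print(resilience(100.0, 40.0))  # -> 0.6
```

The hard empirical problem, as the text notes, is estimating the counterfactual expected loss and attributing the avoided portion to specific resilience actions.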
Enabling Requirements—Needed Studies
The following areas of research are needed to better understand post-disaster resilience actions individually and as a group:
• Inventory of the many actions that can be undertaken to implement resilience after an event. This pertains to three levels (Rose, 2009)—microeconomic (individual household, business, or government entity); mesoeconomic (an entire industry or market); and macroeconomic (the entire economy, including interactions between decision-makers and institutions).
• Assessment of the efficacy of actions that can be taken to enhance this resilience prior to an event (e.g., the building up of inventories, emergency drills) or after the event (e.g., relocating businesses quickly and matching customers who lost their suppliers with suppliers who lost their customers), and of the extent to which business interruption losses can thereby be reduced. Studies by Tierney (1994), Rose et al. (2007), and Kajitani and Tatano (2007), for example, indicate that the potential for reducing losses is great for selected approaches to resilience. However, many types of resilient actions have not yet been assessed.
• Estimation of the costs of implementing resilience. The studies just noted also give some indication that many post-disaster resilient activities are relatively inexpensive. Conservation pays for itself, inventories only incur carrying costs, and production rescheduling simply requires the
payment of overtime for employees. Still more studies are needed to cover the entire range of resilience alternatives.
• Evaluation of the emerging business continuity industry. An increasing number of private firms offer disaster recovery services (Rose and Szelazek, 2010). This professionalization of recovery is likely to improve the efficiency of recovery and reduce the need for government assistance. Still, the broader implications of this industry need to be ascertained with respect to adherence to professional standards, potential market power, price gouging, and affordability to small business.
• Organizational response. Research by Comfort (1999) provided a valuable foundation in terms of nonlinear adaptive systems. Such research captures the evolving nature of the institutional decision-making process, including learning and feedback effects. More case studies are needed.
• Identification of obstacles to implementation. Most studies of resilience to date focus on an ideal context in which a resilient action is implemented. Various types of market failures, transaction costs, regulatory restrictions, and limited foresight need to be understood first before research on how they might be overcome can proceed (e.g., Boettke et al., 2007; Godschalk et al., 2009).
• Identification of best-practice examples. There are notable examples of successful resilience, such as the use of backup electricity generators in the aftermath of the Northridge earthquake and business relocation following 9/11 (e.g., Tierney, 1997; Rose and Wein, 2009). Research analyzes underlying issues; however, practitioners are more likely to be won over by real-world successes.
• Design of remedial policies. This includes innovative approaches, such as the use of vouchers and other incentive-based instruments to promote resilient actions. Especially critical is research that identifies areas in which the public and private sector can work in harmony to achieve resilience, rather than at cross purposes. Many have pointed to government bailouts as providing a disincentive for both mitigation and resilience, although it would help to try to measure the extent to which this takes place (e.g., Smith et al., 2008).
• Characterization of infrastructure network vulnerability and resilience. Resumption of infrastructure services is one of the primary needs of recovery. Network characteristics make this segment of the economy/community all the more challenging, especially in light of new technology, trends in both centralization and decentralization, and new pricing strategies. Many advances are continuing at the Earthquake Engineering Research Centers, but more fresh approaches are needed (e.g., Grubesic et al., 2008).
• Development of planning frameworks for a cohesive set of policies. These would integrate structural and nonstructural initiatives in the
urban landscape to avoid duplication, establish consistency, and capitalize on synergies.
• Exploration of equity and justice considerations. It is well known that earthquakes and other disasters have disproportionate effects across strata of society. The poor, minorities, the aged, and the infirm are more vulnerable, and even the middle class and the well off can be rendered indigent as a result of a disaster. Further study of the distribution of impacts of disasters is needed, as is deeper analysis of equity and justice considerations. For example, there is no consensus on the best definition of equity in any field, be it philosophy, political science, or economics (Kverndokk and Rose, 2008). Any unique aspects of the problem pertaining to earthquakes and other disasters have yet to be identified, although some progress has been made (e.g., Schweitzer, 2006). Research would help call attention to this typically neglected area. Of course, it will take not only additional research but also political will to address it properly.
• Valuation of cultural and historic resources. Economic valuation methods have been used to measure cultural and historic “non-market” values in general (e.g., Navrud and Ready, 2002; Whitehead and Rose, 2009). Findings indicate individual willingness to pay to be rather low, but studies are needed to validate the methods and to identify the broader population for whom historic values are relevant, rather than just the property owners (Whitehead et al., 2008).
• Extension to ecological considerations. Some initial efforts appear to be promising (Renschler et al., 2007) in what looks to be an expanding field in terms of importance and research challenges. Additional work is also needed to explore the relationship between resilience (usually considered a short-run response) and adaptation to climate change (the long-run counterpart).
• All-hazards approach. Although this is typically acknowledged to be a valuable area of pursuit, research is still lagging since the last major advance (Mileti, 1999). Much of the work on terrorism, for example, has not yet been examined for application to earthquake resilience (e.g., NRC, 2006a; Vugrin et al., 2009).
• Long-run effects of earthquakes. We still do not have a definitive assessment of these implications. Part of the reason is the influence of external factors beyond the control of those who make the key decisions on reconstruction (e.g., business cycles, technological change, and globalization trends). It is also difficult to sort out the influence of external assistance vs. indigenous resources. Conceptual frameworks for more formal analysis of the issue should be encouraged (e.g., Chang, 2009, 2010).
• Resilience metrics and indicators. It is important to be able to measure resilience in terms of both potential and practice. Basic metrics have been developed (Rose, 2004, 2007; Chang and Shinozuka, 2004) and applied
successfully (e.g., Rose et al., 2009; SPUR, 2009), but more research is needed to include dynamic elements. More recent research on indicators of the various capacities needed to promote resilience (i.e., prospective resilience indicators) also appears promising (CARRI, 2010; Cutter et al., 2010). The relationship of performance-based engineering to resilience metrics is still unexplored.
• The evaluation and prediction of demand surge. Construction prices are likely to rise following a major earthquake. Although this is often attributed to the fact that there is an increased demand for repair and reconstruction, it also stems from the fact that construction equipment has been damaged, as have inventories of construction materials. Moreover, the production of even more materials may be limited because of damage to their manufacturers. This condition can raise the cost of recovery significantly. It involves an important tradeoff between recovering quickly at a high price to minimize business interruption losses vs. incurring business interruption losses and waiting until prices settle down in order to reduce recovery costs. Theoretical and empirical analyses are needed to better understand this phenomenon and to be able to predict its course.
• Strategic planning. Now that much more emphasis has been placed on the post-disaster period, it is prudent to examine risk management across the entire timeline of earthquake disasters. This includes a closer examination of the relative payoff of resources expended before and after an event. In terms of reducing business interruption, post-disaster resilience appears to have an edge in terms of cost given the large number of low-cost alternatives and the fact that they need only be implemented once the event has taken place (unlike mitigation). Of course, mitigation is still the most relevant strategy for the prevention of building damage and the promotion of life safety.
• The post-disaster environment is an extreme one, where time appears compressed because of the pressures to restore normalcy (Johnson, 2009; Olshansky and Chang, 2009). The urban setting that might have taken decades or more to construct must be repaired or rebuilt in a much shorter time. However, much of the data collected in past earthquakes has effectively been lost. If data collection is to become even more comprehensive, improved data management, archiving, and linking to existing data are required.
• Although leadership and management of post-disaster response-related activities are fairly well defined in the United States, the government’s role in disaster recovery is less clear (NRC, 2006a). The disaster recovery process unfolds through many simultaneous decisions and resulting actions taken by individuals, businesses, and institutions, both directly and indirectly impacted (Johnson, 2009). In turn, managing recovery should be about planning, organizing, and leading a comprehensive recovery vision, and influencing the many simultaneous decisions to achieve that vision as effectively and efficiently as possible (Johnson, 2009). Without a comprehensive understanding of post-disaster needs or a recovery vision, bureaucratic management approaches tend to be reactive, inflexible, and inefficient.
• Research has consistently shown that the post-disaster environment provides one of the most opportune times for mitigation and betterment, such as improved efficiency, equity, or amenity. The key influences that government can have are vision—often in the form of leadership and plans—and resources, most importantly money (Rubin, 1985; Johnson, 2009). But recovery needs time to be accomplished thoughtfully and to allow for proper deliberation and public discourse on how to achieve risk reduction and betterments, and recovery managers are often pressured to move faster than information, knowledge, and planning generally flow. Furthermore, the amount and flow of money often does not match the compressed pace of recovery or work efficiently and effectively to achieve substantial, long-term risk reduction, nor is it flexible enough to address the evolving needs of a post-disaster environment.
• In recent years, researchers have proposed some guiding principles to manage the complexities of post-disaster response and recovery and help reduce disaster-related costs and repetitive losses. Post-disaster resilience—as one of these guiding principles—means that there is more decentralized and adaptive capacity in affected communities to effectively contain the effects of disasters when they occur and manage the recovery process, as well as an ability to minimize social disruption and mitigate the effects of future disasters (Sternberg and Tierney, 1998; Bruneau et al., 2003; NRC, 2006a).
Enabling Requirements—Methods and Models
Improved data and methods are needed to analyze and promote resilience, both pre- and post-disaster. This includes more extensive data collection, building on the work by Cutter and Mileti (2006). The data should be made available through established clearinghouses, such as the Natural Hazards Research and Applications Information Center (NHRAIC) at the University of Colorado at Boulder and the Community and Regional Resilience Institute (CARRI), and new knowledge hubs such as the proposed National Center for Social Science Research on Earthquakes and Other Disasters (Task 8 above). It is essential that the PIMS in Task 9, above, include data on social and economic consequences.
Many sound economic models exist to measure the losses from natural hazards before and after the implementation of resilient actions. Input-output (I-O) analysis, with a few exceptions (e.g., Gordon et al., 2009),
is not up to the task because of its inflexibility (i.e., inability to readily incorporate substitution between inputs, between domestic production and imports, changes in behavior, etc.).23 Mathematical programming (MP) models overcome some of the limitations of I-O and are useful where spatial analysis or technical details are important (e.g., Rose et al., 1997).
Computable general equilibrium (CGE) analysis captures all of the advantages of I-O (e.g., multi-sector detail, total accounting of all inputs, focus on interdependence) but overcomes its limitations by including behavioral considerations, allowing for substitution and other nonlinearities, and reflecting the workings of markets and prices (Rose, 2005; Rose et al., 2009). Macro-econometric models are coming into increasing use in the analysis of the impacts of the economic consequences of earthquakes and other disasters (Rose et al., 2008). They have the advantage over CGE models in being based on time series data, rather than mere calibration of a single year of data, and provide a basis for forecasting the future. They are also superior thus far in incorporating financial variables, although this is not necessarily an inherent advantage of the modeling approach.
• Agent-based models are increasingly in vogue in disaster studies. They examine individual motivations and actions within groups, and are especially adept at simulating panic, contagion, and aversion behavior (e.g., Epstein, 2008). They have more recently been expanded to analyze aspects of urban form, and would therefore be useful in analyzing people’s location decisions under alternative zoning and broader land-use restrictions (e.g., Heikkila and Wang, 2009). Finally, system dynamics models provide an excellent framework for combining various types of models. It is unlikely that any single modeling approach is best for all aspects of an earthquake or other disaster; the trick is often being able to integrate several different models successfully (e.g., Chang and Shinozuka, 2004). In effect, several integrated assessment models of earthquake risk, vulnerability, and consequences (e.g., HAZUS) are a subset of system dynamics models.
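As a point of reference for the discussion of I-O inflexibility, the basic Leontief calculation can be sketched in a few lines; the two-sector coefficients and final demands below are hypothetical:

```python
# Minimal Leontief input-output sketch (hypothetical 2-sector economy).
# Total output x solves x = A x + d, i.e., x = (I - A)^{-1} d. The fixed
# technical coefficients in A are exactly what makes I-O "inflexible":
# no input substitution or price response is possible.

A = [[0.10, 0.20],   # inter-industry requirements per dollar of output
     [0.15, 0.05]]
d = [100.0, 150.0]   # final demand by sector

# Solve the 2x2 system (I - A) x = d by Cramer's rule.
a, b = 1 - A[0][0], -A[0][1]
c, e = -A[1][0], 1 - A[1][1]
det = a * e - b * c
x = [(e * d[0] - b * d[1]) / det, (a * d[1] - c * d[0]) / det]

indirect = [x[i] - d[i] for i in range(2)]   # ripple beyond final demand
print("total output:", [round(v, 1) for v in x])
print("indirect component:", [round(v, 1) for v in indirect])
```

The gap between total output and final demand is the "ripple effect" that the more flexible CGE and macro-econometric models estimate with behavioral and price adjustments rather than fixed coefficients.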
Many potential modeling candidates continue to flourish, but the research community and practitioners have been rather slow to validate them (e.g., Rose, 2002). Further work on validation would inspire confidence in the use of many valuable models.
Supplementing overall model validation, individual hypotheses contained in many of these models relating to individual and group behavior need to be tested. In addition to those noted elsewhere in this section, especially urgent are hypotheses about changes in individual risk perceptions due to the social amplification of risk. Such changes have been shown in preliminary analyses to vary by type of threat and other factors (Burns and Slovic, 2009), and in general can lead to economic behavior that greatly exacerbates business interruption losses (Giesecke et al., 2010). Also key will be new studies on the public reaction to improved earthquake prediction.
23 The measurement of resilience is contextual, i.e., it requires a reference point of what the world would look like without resilience. Ironically, I-O analysis is ideal for this purpose in that it best mimics an inflexible rather than a resilient response.
Although HAZUS (FEMA, 2008) represents a major milestone in making it possible for many analysts to undertake hazard loss estimation, it is not without its limitations. Only a very small proportion of its funding has been devoted to the Direct and Indirect Economic Loss Modules (DELM and IELM). Hence, it is not surprising that HAZUS contains major gaps and shortcomings.
One of the most glaring is the inability of HAZUS to estimate the majority of economic losses from utility lifeline disruptions. HAZUS provides only an estimate of the damage to utility system components. No capability is provided in the DELM to evaluate the much larger disruption to the first round of utility customers. This problem is magnified because there is then no input into the IELM to evaluate the further ripple effects throughout the economy.
We acknowledge that this is a challenging subject, largely because of the complex network characteristics of electricity, gas, water, transportation, and communication lifelines. However, we encourage the study of the feasibility of incorporating network data and computational algorithms into HAZUS. We also suggest the consideration of a HAZUS “patch” that can be added to the software as an interim remedy. Such a construct has been developed as part of the report to Congress on the benefits of FEMA hazard mitigation grants (MMC, 2005; Rose et al., 2007), and further refined in a contribution to the Risk Analysis and Management for Critical Asset Protection (RAMCAP) loss estimation software (Rose and Wei, 2007). These patches make use of input-output data on the lifeline requirements of the sectors of the economy and provide a reasonable approximation of the direct and indirect economic impacts without a full network capability.
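The logic of such a patch might be sketched as follows. The sectors, dependency coefficients, resilience factors, and output multiplier are illustrative assumptions, not the actual MMC or RAMCAP values:

```python
# Illustrative "patch"-style approximation (hypothetical coefficients, in
# the spirit of the constructs described above, not their actual code).
# Direct loss: each sector loses output in proportion to its dependence on
# the disrupted lifeline, reduced by a resilience factor (backup power,
# conservation, rescheduling). Indirect loss: direct losses scaled by a
# simple I-O output multiplier instead of a full network model.

sectors = {
    # sector: (daily_output_$M, electricity_dependency, resilience_factor)
    "manufacturing": (40.0, 0.90, 0.30),
    "retail":        (25.0, 0.80, 0.10),
    "services":      (60.0, 0.60, 0.40),
}
outage_fraction = 0.5    # half the service area loses power
outage_days = 3
output_multiplier = 1.8  # ripple effect per dollar of direct loss (assumed)

direct = sum(out * dep * (1 - res) * outage_fraction * outage_days
             for out, dep, res in sectors.values())
indirect = direct * (output_multiplier - 1.0)
print(f"direct loss: ${direct:.1f}M, indirect loss: ${indirect:.1f}M")
```

Even this crude construct captures the point in the text: the first-round customer disruption and its economy-wide ripples dwarf the repair cost of the utility components themselves.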
Other aspects of HAZUS that could stand refinement pertain directly to resilience:
• The recapture factors in the DELM are labeled as being applicable for periods up to 3 months, but no guidance is provided for longer time periods. The possibility of recapture of lost production wanes over time as customers seek other suppliers.
• Potential business relocation is implicit in the DELM estimates, but is mentioned only in passing in the Users Guide. The user has no way of ascertaining the proportion of impacts that can be avoided through average relocation practices.
• The IELM contains several sources of resilience, including import substitution and excess capacity. Implementing these options can result in extreme play in the model; that is, small changes in parameter values can yield extreme changes in impacts. More extensive analysis is needed to determine the extent to which the ensuing results are overly sensitive to parameter changes, and to the limits of users’ ability to specify those parameters appropriately.
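The kind of sensitivity check suggested above can be sketched as a one-at-a-time parameter sweep. The loss function below is a hypothetical stand-in, not the actual IELM formulation:

```python
# One-at-a-time sensitivity sketch (hypothetical stand-in for an
# IELM-style calculation): modest changes in an import-substitution
# parameter produce disproportionate swings in estimated indirect losses.

def indirect_loss(direct_loss, import_substitution):
    """Toy indirect-loss model: substitution offsets ripple effects, with
    losses growing rapidly as substitution capacity approaches zero."""
    return direct_loss * 0.8 / max(import_substitution, 0.05)

base = indirect_loss(100.0, 0.40)
for s in (0.30, 0.35, 0.40, 0.45, 0.50):
    loss = indirect_loss(100.0, s)
    print(f"substitution={s:.2f}: loss={loss:6.1f}  ({(loss - base)/base:+.0%} vs base)")
```

A sweep like this, run over each resilience parameter in turn, would reveal which inputs the user must specify most carefully.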
Although HAZUS serves as a useful tool for loss estimation, it is necessarily simplified to render it accessible to a broad range of users, because it is intended for use in every emergency planning office in the United States. Myriad advances have been made in more sophisticated, and typically more accurate, integrated assessment models, especially at the various NSF Earthquake Engineering Research Centers (e.g., Shinozuka et al., 1998). These represent major conceptual and empirical advances in the state of the art that provide greater insights into the loss and recovery process. The pace of this research has recently declined because of the end of NSF support for these centers following their “graduation.” Private industry support has always been lacking because of the double “public good” nature of this research. First, it focuses a great deal on infrastructure, which provides benefits to all of society and from which it is difficult to extract payment for all of its services. Second, the gains from any single research effort exceed the value to any one sponsor.
Providing enhanced leadership for reducing long-term risk and repetitive losses post-disaster will require commitment and leadership among the NEHRP agencies to work with the other federal agencies responsible for the many federal programs that provide funding to communities to recover from an earthquake or other damaging disaster.
The implementation issues presented here overlap in part and are generally consistent with Tasks 8 and 11.
NEHRP agencies must help ensure that post-disaster federal programs contain support to increase the earthquake resiliency of impacted communities. For example, FEMA regulations should promote increased application of mitigation funding under the Section 406 Public Assistance Program.
The lead agency and staff for NEHRP must also ensure that the intent of this action is integrated with the recommended community-level capacity-building initiative also proposed as part of this study. Pilot city projects (see Task 18) could be initiated post-disaster in communities affected by damaging earthquakes or other disasters. The post-disaster environment would provide an excellent opportunity to develop and test mitigation tools at the community level. Efforts must also be made to provide both pre- and post-disaster mitigation planning for recovery.
Having communities prepared to make recovery decisions and take action can result in a better organized and executed local recovery. Savings from such efforts are likely to rival savings that can be achieved by structural measures alone.
It is not a requirement that the federal government design, fund, direct, and implement all resilience activities; rather, stronger working relationships and enlightened multi-level governance will further promote resilience in practice.
The research community has, on numerous occasions, identified key data collection obstacles that are impeding the advancement of knowledge regarding earthquakes and other disasters, their impacts on society, and factors influencing communities’ disaster risk and resilience (see also Task 8) (Mileti, 1999; NRC, 2006a; Peacock et al., 2008). In particular:
1. Current funding mechanisms do not allow monitoring of long-term changes in disaster resilience and vulnerability. Disaster research has traditionally been supported by “quick-response” grants to gather ephemeral post-disaster data, standard 3-year research grants, and research centers that are typically funded for 5 years. Although these mechanisms have led to significant advances in knowledge, they preclude longer-term monitoring over multi-year or decadal timescales.
2. Standardized data collection protocols do not exist. Data collection efforts by individual investigators rarely replicate measurement tools and methods applied in previous studies. Consequently, although substantial amounts of data have been collected on numerous disaster events, the ability to compare across events and studies, replicate findings, and draw generalized conclusions is very limited.
3. Effective mechanisms for coordinating and sharing data among researchers and practitioners are largely lacking. Issues relate not only to long-term data repositories but also to intellectual property, confidentiality, data archiving protocols, etc. Data that are currently collected are not shared and utilized as widely as they could be.
4. A comprehensive, holistic view of community disaster vulnerability and resilience is beyond the purview of individual research studies. Individual studies, for reasons of practicality, largely focus on specific, limited aspects of risk; however, understanding and fostering community resilience requires comprehensive knowledge that builds on and synthesizes across many studies. For example, understanding how community
resilience varies with the size of the earthquake event is an important knowledge gap that requires synthesis across many studies.
Investment in the research infrastructure to overcome these barriers would be strategic and highly cost-effective. It would catalyze rapid advances in the knowledge needed for disaster resilience—facilitating the development of systematic and cumulative rather than piecemeal knowledge bases, addressing fundamental knowledge gaps, and enabling development of models that explain changes in community vulnerability and resilience over time.
In the next 5 years, NEHRP should establish a network of observation centers—an “Observatory Network”—to measure, monitor, and model the disaster vulnerability and resilience of communities across the nation. This observatory network would focus on the dynamics of social systems and the natural and built environments with which they are linked. The network would facilitate efficient collection, sharing, and use of data on disaster events and disaster-prone communities. Its activities—including standardization of data collection protocols, archiving of data, long-term monitoring of community vulnerability and resilience (especially in high-risk regions), and regular reporting of these assessments—would provide critical research infrastructure for advancing and implementing knowledge to reduce disaster risk nationwide.
An Observatory Network would focus on data collection related to four principal thematic areas: (1) resilience and vulnerability; (2) risk assessment, perception, and management strategies; (3) mitigation activities; and (4) reconstruction and recovery.
Existing Knowledge and Current Capabilities
The state of knowledge in disaster social sciences with respect to the thematic areas that would be addressed by an Observatory Network has been thoroughly documented in several reports. For comprehensive overviews of research progress, see in particular: the five-volume second assessment of research on hazards and disasters (Mileti, 1999; together with companion volumes, Burby, 1998; Kunreuther and Roth, 1998; Cutter, 2001; and Tierney et al., 2001), and a report by the National Research Council that documented social science contributions under NEHRP (NRC, 2006a). (See also discussions related to Tasks 8 and 10 in this report.) Here, a few examples of knowledge advances and important remaining
gaps are provided to illustrate the potential benefits of an Observatory Network on community vulnerability and resilience.
Vulnerability and resilience represent core concepts in understanding the effects of hazard events on populations. Vulnerability broadly encompasses exposure (e.g., number of people living in seismic regions) and physical vulnerability (e.g., building design and construction), as well as social vulnerability (e.g., access to financial resources to cope with an earthquake). Resilience, as noted in Chapter 2, refers to the ability to reduce risk, maintain function in disaster events, and recover effectively from them. Key research questions include: Which communities are most vulnerable to disaster losses, and why? How can we measure and assess disaster resilience? How are vulnerability and resilience changing over time? What accounts for these patterns and changes?
A substantial knowledge base has developed regarding disaster vulnerability and, increasingly, disaster resilience. Empirical studies have, however, been dominated by one-time case studies limited in time and space. One exception is work by Cutter et al. (2003) that systematically assessed an index of social vulnerability, using 42 census variables to identify clusters of highly vulnerable counties across the United States. It is now necessary to extend this type of work to more holistically assess vulnerability and resilience; in particular, by integrating social vulnerability with patterns of exposure, physical vulnerability, and hazard probabilities, by considering interactions with ecological health, infrastructure, and institutional and community capacity, and by tracking changes over time (Cutter et al., 2008b). This will require access to local data on such important factors as local building codes, their enforcement practices, and seismic upgrading or other mitigation programs. In contrast to census data, such information will be highly variable from one jurisdiction to the next. Moreover, such information will likely be gathered by many research teams in the context of diverse, locally oriented projects. Thus it is imperative for the research community to develop and utilize standardized, common data collection protocols and instruments, and to share the data acquired in a shared repository.
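To illustrate the kind of standardized, comparable indicator work described above, a simplified composite index can be computed from z-scores. Note that Cutter et al. (2003) used factor analysis over 42 census variables rather than this simple average, and the county data below are invented:

```python
import statistics

# Simplified composite vulnerability index: standardize a few hypothetical
# census-style indicators as z-scores and average them, so counties can be
# ranked on a common scale. (A sketch only; the actual Social Vulnerability
# Index uses factor analysis over many more variables.)

counties = {
    # county: (pct_poverty, pct_over_65, pct_renters) -- invented values
    "A": (22.0, 18.0, 45.0),
    "B": (10.0, 12.0, 30.0),
    "C": (15.0, 25.0, 55.0),
}

def zscores(values):
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

cols = list(zip(*counties.values()))        # one tuple per indicator
z_by_col = [zscores(c) for c in cols]
index = {name: statistics.mean(z[i] for z in z_by_col)
         for i, name in enumerate(counties)}
for name, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"county {name}: vulnerability index {score:+.2f}")
```

The value of a shared protocol is visible even in this toy: because every team standardizes against the same distributions, indices computed by different investigators remain directly comparable.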
Knowledge regarding risk assessment, risk perception, and management strategies is also fundamental to advancing national earthquake resilience. Risk assessment refers to estimates by technical experts on the likely consequences of potential hazard events, typically deriving from formal probabilistic models. (See also Tasks 6 and 7 regarding earthquake scenarios and earthquake risk assessment.) How risk is perceived by individuals, organizations, and groups within society, however, often differs from experts’ assessments of risk. As noted by Peacock et al. (2008, p. 9), “risk management strategies … require the development of policies that take into account both risk assessment and perception and include
economic incentives (e.g., subsidies and fines), insurance, compensation, regulations (e.g., land use restrictions) and well-enforced standards (e.g., building codes) … and … will often require private-public sector partnerships.” Central research questions include: How do the risk perceptions of stakeholders (e.g., residents, community leaders, elected officials) differ across communities and over time? How does the actual experience of disasters affect risk perception and change risk-related behaviors? How long do these changes last? How prevalent is insurance among different communities and groups within communities, and how does this change over time? What explains differences in pro-active behaviors?
Substantial progress has been made in social scientific understanding of such questions. A few attempts have been made to develop insights from multiple, rather than single, case studies; for example, Webb et al. (2000) conducted consistently designed, large-scale surveys of businesses in five communities, one of which had not experienced a major disaster (Memphis/Shelby County, TN) and four of which had (Des Moines, IA; Los Angeles, CA; Santa Cruz County, CA; and South Dade County, FL). Their inquiry included business disaster preparedness, as well as loss and recovery. Although this series of studies was ground-breaking, it was nonetheless cross-sectional rather than longitudinal in design, “making it impossible to track changes that occurred over time” (Webb et al., 2000; p. 89). Findings were also highly variable across cases, suggesting the need for many more cases to better understand patterns of difference. The study focused on organizational, agent-specific, and community factors that contribute to disaster preparedness (e.g., business size and disaster experience). Results suggested that these factors alone provide only a partial explanation of behavior; other multiple and complex influences, including for example the decision-making processes of business owners, must also be investigated. These limitations—lack of longitudinal perspectives, small samples of communities, and partial rather than holistic explanations of risk-related behaviors—remain important knowledge gaps.
Pre-disaster mitigation activities—including addressing seismic risk through building codes, structural and nonstructural retrofits, and land-use planning—represent the primary means through which earthquake losses can be reduced in the long term. Fundamental research questions include: What factors influence the adoption of mitigation measures by households, businesses, and communities? To what extent have mitigation activities been adopted in different communities? What types of mitigation measures are most cost-effective? How can insurance and other programs promote mitigation? How effective are plans and planning processes (e.g., the state mitigation plans that are required, and the local mitigation plans that are encouraged, under the Disaster Mitigation Act of
2000) in reducing vulnerability and increasing resilience? How do different types of legal and legislative contexts influence mitigation activities?
Although a number of studies have addressed mitigation decision-making at various scales, from individuals to state governments, relatively few have rigorously assessed the effectiveness and cost-effectiveness of mitigation. In one key study, the Multihazard Mitigation Council recently conducted a congressionally mandated, independent study to assess the future savings from various types of mitigation activities (MMC, 2005; see also Task 10 above). Focusing on data for 1993 to 2003, this landmark study found that FEMA’s natural hazard mitigation grant programs were cost-effective and resulted in considerable net benefits in the form of reduced future losses from natural disasters. On average, every $1 of FEMA mitigation grant funding led to societal savings of $4. Yet further knowledge about the cost-effectiveness of specific mitigation approaches is still needed for informing policy. The report concluded that (MMC, 2005, p. 6):
Continuing analysis of the effectiveness of mitigation activities is essential for building resilient communities. The study experience highlighted the need for more systematic data collection and assessment of various mitigation approaches to ensure that hard-won lessons are incorporated into disaster public policy. In this context, post-disaster field observations are important, and statistically based, post-disaster data-collection is needed for use in validating mitigation measures that are either costly, numerous, or of uncertain efficacy or that may produce high benefit-cost ratios.
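For readers unfamiliar with how a finding such as the 4-to-1 savings ratio is derived, the basic benefit-cost calculation discounts a stream of avoided future losses against the up-front grant cost. The discount rate, time horizon, and dollar figures below are hypothetical, not the MMC study's actual inputs:

```python
# Sketch of a mitigation benefit-cost ratio in the style of the MMC (2005)
# finding ($4 saved per $1 granted). Benefits are the present value of an
# annual stream of avoided losses; all numbers here are hypothetical.

def present_value(annual_benefit, discount_rate, years):
    """Discounted sum of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

grant_cost = 1_000_000
avoided_losses_per_year = 160_000       # assumed expected loss reduction
pv_benefits = present_value(avoided_losses_per_year, 0.03, 50)
bcr = pv_benefits / grant_cost
print(f"PV of benefits: ${pv_benefits:,.0f}, BCR: {bcr:.1f}")
```

The ratio is sensitive to the discount rate and horizon, which is one reason the report calls for systematic post-disaster data to validate the avoided-loss estimates that feed such calculations.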
Reconstruction and recovery represent a fundamental dimension of disaster resilience, yet it is widely acknowledged to be the least-understood phase of the disaster cycle (Tierney et al., 2001; NRC, 2006a; Peacock et al., 2008; Olshansky and Chang, 2009). Key questions concern: Why do some communities recover more quickly and successfully than others? How does the recovery trajectory of communities differ by type and magnitude of the hazard event, conditions of initial damage, characteristics of the community, and decisions made over the course of reconstruction and recovery? Who wins, and who loses, in the process of disaster recovery? How do different types of assistance and recovery resources affect household and business recovery? What types of decisions and strategies are most critical to recovery? How do disasters affect communities over the long term?
Current knowledge about post-disaster recovery has been developed through case studies of individual disasters, and systematic data collection is greatly needed: “Indeed, without sufficient data on short and long term recovery with respect to households, housing, businesses, and other components of our communities, developing and validating models of community resiliency or assessing the effectiveness of recovery policy
and planning will remain elaborate conjectures” (Peacock et al., 2008, p. 11). One study has attempted to develop a prototype computer model of community disaster recovery (Miles and Chang, 2006; see also Olshansky and Chang, 2009), accounting for many of the factors and interactions suggested in the case study literature. The model simulates how recovery trajectories might vary in different conditions. The study found, “The paucity of data and empirical benchmarks is a major challenge. There are not enough disaster events that have been systematically studied from the perspective of developing quantitative data and recovery indicators.… Moreover, data are hard to come by, often inconsistent and incomplete, and typically expensive to gather” (Olshansky and Chang, 2009, p. 206).
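A recovery-trajectory simulation of the kind described can be sketched with a simple logistic rebound. This is an illustrative toy, not the Miles and Chang model, and all parameters are invented:

```python
import math

# Toy recovery-trajectory sketch (not the Miles and Chang model):
# community function rebounds along a logistic curve whose speed depends
# on resources, illustrating how trajectories can be compared across
# communities with the same initial damage.

def recovery(t_months, initial_loss, speed):
    """Fraction of pre-disaster function at time t (logistic rebound,
    with the midpoint of recovery assumed at month 12)."""
    floor = 1.0 - initial_loss
    return floor + initial_loss / (1.0 + math.exp(-speed * (t_months - 12)))

for label, speed in (("well-resourced", 0.6), ("resource-poor", 0.2)):
    traj = [recovery(t, 0.5, speed) for t in (0, 6, 12, 24, 36)]
    print(label, [f"{v:.2f}" for v in traj])
```

The quoted point about the "paucity of data and empirical benchmarks" applies directly: without systematically collected recovery indicators, parameters like the speed term above cannot be calibrated or validated.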
An Observatory Network on community resilience and vulnerability is needed, analogous to observatory networks that have been established in the environmental science arena. A prime example is the Long-Term Ecological Research (LTER) network established by NSF in 1980. The LTER network currently includes 26 sites located in a diversity of ecosystems, from Antarctica to the Florida Everglades. Across these landscapes, researchers have been investigating similar scientific questions, sharing data, and synthesizing ecological concepts. In recent years, NSF has also established the National Ecological Observatory Network (NEON) to provide infrastructure, as well as consistent methodologies, to support research on continent-scale ecology related to climate change impacts and other large-scale changes. It is also supporting the Collaborative Large-Scale Engineering Analysis Network for Environmental Research (CLEANER) (NRC, 2006c). Within the earthquake field, NEES represents a large-scale investment in a nationally distributed network of shared engineering facilities for experimental and computational research.
A national observatory network is needed to address the disaster vulnerability and resilience of human communities (e.g., cities), using methodologies applied consistently over time and space, with attention to the complex, place-based interactions between changes in social systems, the built environment, and the natural environment. In June 2008, USGS and NSF sponsored a workshop to outline the goals, research agenda, data collection principles, structure, and implementation of such a network, labeled the Resiliency and Vulnerability Observatory Network (RAVON) (Peacock et al., 2008). Participants at that workshop agreed that the network should focus on natural disasters, foster interdisciplinary
research, facilitate comparative research, and emphasize issues of social vulnerability. The committee supports the establishment of such a network and considers it to be a high priority for implementing the 2008 NEHRP Strategic Plan in the next 5 years.
An Observatory Network would consist of a series of research nodes, comprising at least three types:
• Regional nodes. Regional hubs for coordinating researchers (from both within and outside the region) who are gathering data about a specific geographic region. Such a hub could coordinate activities and work closely with local governments, nongovernmental organizations, and community groups in the region. These nodes could be strategically located in disaster-prone regions of the country, pre-positioned to facilitate rapid post-disaster studies of communities struck by natural disasters.
• Thematic nodes. Existing centers whose missions directly relate to the Observatory Network could be included as thematic nodes. These could include research centers (e.g., the Natural Hazards Center at the University of Colorado at Boulder, which already provides information clearinghouse services and convenes annual workshops for the research and practice communities) as well as mission agencies such as USGS that already develop spatial databases on hazards and risks nationwide. The proposed National Center for Social Science Research on Earthquakes and Other Disasters (Task 8 above) would also serve as a key node.
• Living laboratory nodes. Nodes could be established in communities affected by major natural disasters, in order to gather and assess data on disaster impacts and recovery over the long term.
Core activities of such a network would include:
• Developing and sharing standardized definitions, measurement protocols, instruments, and strategies for data collection across multiple communities and disasters;
• Developing and archiving longitudinal databases for analyzing and modeling resilience and vulnerability over time;
• Supporting researchers investigating new disaster events.
These activities would be closely linked with post-earthquake social science research on response and recovery (Task 8), data collection and sharing capabilities of the Post-Earthquake Information Management System (PIMS) (Task 9), and socioeconomic research on mitigation and recovery (Task 10).
The scope of the Observatory Network should be multi-hazard, including but not exclusively focused on earthquake hazards. It is assumed
that many of the nodes of the Observatory Network will be geographically distributed across regions of the United States with significant earthquake hazard. Some nodes, however, may be located in regions of low seismic risk but high risk of hurricanes, floods, or other hazards; for example, a “living laboratory” node could be established in a region recovering from a catastrophic hurricane.
This multi-hazard emphasis is advantageous for two reasons. First, because major earthquakes occur infrequently, it is important to take advantage of lessons from other types of disasters. In contrast to some domains of technical knowledge (e.g., seismology or earthquake engineering), for issues related to communities’ social and economic vulnerability and resilience, many commonalities exist between earthquakes and other hazards. Knowledge about earthquake vulnerability and resilience could thus be advanced more rapidly through multi-hazard research than through an exclusive focus on earthquakes. At the same time, data and knowledge on earthquake vulnerability and resilience should be utilized to advance communities’ resilience to the other types of hazards that many earthquake-prone communities also face. The Observatory Network should therefore be structured to foster data sharing, comparative study, and policy analysis across hazards.
Implementation of a distributed network is likely to require a multi-year, phased process. This implementation may involve a number of challenges and associated opportunities:
• Developing an effective governance and decision-making structure;
• Developing an effective phased implementation;
• Integrating with existing research centers and organizations engaged in data collection on disaster vulnerability and resilience, and avoiding duplication of effort;
• Locating and building new nodes of the network;
• Developing widely accepted, standardized data collection instruments and protocols;
• Addressing tensions between individual investigator-led research and the need for commonalities across studies;
• Developing effective infrastructure and protocols for data archiving and sharing, including addressing issues of data confidentiality and intellectual property;
• Developing a sustainable mechanism for long-term financing of the network;
• Bridging research findings with disaster risk reduction practice.
The committee supports the overall implementation structure suggested in the USGS- and NSF-supported RAVON workshop (Peacock et al., 2008). That workshop suggested that in the first 5 years:
• Phase I: an initial 5-6 nodes would be established (at $400,000 per year) through a competitive process;
• Phase II: a steering committee would be established, comprising the principal investigators of the Phase I nodes. The steering committee would develop the network's charter and constitution and assist NSF in selecting additional nodes. Technical subcommittees would develop data collection and associated protocols. Approximately 3 to 5 additional nodes, together with a network coordinating grant, would be established after open competition.
After the first 5 years, the network steering committee would have the option of recommending a Phase III for additional network growth.
The goal of physics-based simulations of earthquake damage is to replace uncoupled, empirical computations of earthquake shaking, nonlinear site and facility/lifeline response and damage, and loss, with fully coupled numerical simulations (so-called end-to-end simulations) that use validated numerical models of materials and components. The purpose is to greatly improve the accuracy of, and reduce the uncertainty in, earthquake response, damage, and loss calculations of new and archaic elements and systems in our built environment.
Advance engineering design practice across all disciplines through the development and implementation of validated multi-scale models of materials, components, and elements of the built environment, and through the use of high-performance computing and data visualization.
Maximize the impact on national earthquake resilience by integrating knowledge gained in Tasks 1, 13, 14, and 16 and “operationalizing” the integrated product using end-to-end simulation.
Existing Knowledge and Current Capabilities
Current and planned tools for performance-based earthquake engineering of the built environment involve a series of uncoupled analyses
of earthquake ground motion, site (soil) response, and foundation and structural response as follows:
• Empirical predictive ground motion models are used to estimate the effects of earthquake shaking (measured using a spectrum) at a rock outcrop below the facility or lifeline. These models aggregate the effects of P, SH, and SV waves; 6-component acceleration time series are not developed.
• Site response computations typically involve the vertical propagation of SH waves (ground shaking) from a rock outcrop to the free field, including the ground surface. The SH wave time series input to the soil column are matched to the rock outcrop spectrum. The layers of soil in the column are modeled using equivalent linear properties. The output of the site response analysis is a response spectrum in the free field.
• Engineering calculations are performed using empirical models of materials (e.g., soils, metals, concretes, polymers) and simplified macro models of components (e.g., rolled steel beams and reinforced concrete columns). Models of materials and components have been developed on the basis of regression analysis of a limited body of test data with limited understanding of the underlying physical processes. Unified models for materials subjected to arbitrary mechanical and thermal loadings do not exist (but are proposed for development in other tasks).
• Facility and lifeline response (damage) computations are typically performed using empirical macro models of structural components assembled into a numerical model of the facility/lifeline and spectrum-compatible earthquake ground motions input to the numerical model at the ground surface. Nonstructural components and assemblies are treated as cascading systems. Fragility functions for structural and nonstructural components use simplified demand parameters of maximum (lateral) story drift and peak horizontal floor acceleration.
• Losses are computed at the component level using maximum computed responses, fragility data, and consequence functions, and simply aggregated across the breadth and height of the facility/lifeline.
• Seismologists and earthquake engineers have done extensive research on simulating fault rupture and seismic wave propagation through rock, soils, and built structures, and these efforts are beginning to be coupled into end-to-end simulations (Muto et al., 2008). However, empirical linear models of soils and structures/facilities/lifelines have generally been used in these calculations, and better models that incorporate nonlinear effects are needed. Computational frameworks to accommodate physics-based simulations are being developed by SCEC and other organizations; such cyberinfrastructure is an enabling technology for this task.
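The last two steps of this uncoupled chain, passing a computed demand through component fragility and consequence functions and then summing over components, can be sketched in a few lines. The component names, median drift capacities, dispersions, and repair costs below are hypothetical illustrations, not values from any published fragility database:

```python
import math

def lognormal_cdf(x, median, beta):
    """P(component capacity <= x) for a lognormal fragility curve."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

# Hypothetical component fragilities: median story-drift capacity (ratio),
# lognormal dispersion beta, and mean repair cost (consequence function).
components = [
    {"name": "partition wall", "median": 0.005, "beta": 0.5, "repair_cost": 4_000},
    {"name": "RC column",      "median": 0.020, "beta": 0.4, "repair_cost": 25_000},
]

def expected_loss(story_drift):
    """Expected repair cost: damage probability times consequence, summed
    component by component (the simple aggregation described in the text)."""
    total = 0.0
    for c in components:
        p_damage = lognormal_cdf(story_drift, c["median"], c["beta"])
        total += p_damage * c["repair_cost"]
    return total
```

A real assessment would use multiple damage states per component, full consequence distributions, and demands from structural analysis; the sketch only shows how fragility and consequence data combine in the uncoupled approach.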
Robust multi-scale nonlinear models are the future of engineering science. The coupling of these models from the point of rupture through rock and soil into structure represents the future of professional design practice. The implementation of fully coupled, nonlinear macro models of geological and structural components and facility framing systems will enable the profession to move beyond empirical models of unknown reliability. The basic science and engineering knowledge needs required to develop and implement these next-generation models, tools, and procedures include the following items:
• Earthquake generation and propagation through heterogeneous underground structure is a very complex process that is impossible to observe. A comprehensive research program is needed to better characterize earthquake sources and the strong ground motions they produce, as described under Task 1.
• Facilities or lifelines are often constructed on a 3D heterogeneous soil basin of varying depth and breadth. Each basin experiences body waves propagated from the rock on the boundaries of the soil basin, where the waves on the boundary vary as a function of location and time. Each basin develops surface waves as the body waves strike the earth’s surface. The soils in the basin are highly nonlinear and may flow (liquefy) depending on the amplitude and duration of the shaking. As described in other tasks, NSF and USGS should develop techniques to map heterogeneous geologic structure at depth and multi-scale and multi-phase models of soils and rock to enable the propagation of seismic wave fields from the earthquake source to the facility/lifeline. Multi-scale and multi-phase models should be validated from large-scale tests using NSF-NEES infrastructure.
• Facility or lifeline response (damage) is dependent on many factors including depth of embedment in the soil, the spatial and temporal distribution of the body and surface waves across the plan dimensions of the facility/lifeline, seismic wave scattering, the size and mass of adjacent structures, the numerical models of structural and nonstructural components used to compute response, and the damage functions assigned to each component. As described in other tasks, NSF should fund the development of multi-scale constitutive models for archaic, modern, and new high-performance materials, and translate these constitutive models into validated hysteretic macro models of components using advanced numerical tools and data from tests of large-size components using NSF-NEES infrastructure, as described in Tasks 13 and 14.
• The calculation of casualties, repair cost, and business interruption requires complete information on the distributions of damage to structural
and nonstructural components through failure, and the consequences (casualties, repair cost, and business interruption) of the damage and how the consequences aggregate across the breadth and height of the facility/lifeline. FEMA should fund the development of component-level fragility functions for archaic and modern components of buildings, bridges, and infrastructure by a combination of numerical studies and large-scale testing using NSF-NEES infrastructure. Robust strategies to assemble component-level consequences into estimates of system-level loss must be prepared.
• Fully coupled physics-based simulations must include a formal treatment of uncertainty and randomness. Complete Monte Carlo simulations will be prohibitively expensive in terms of computational effort. Efficient numerical techniques must be developed for end-to-end simulations.
• Any fully coupled, physics-based simulations from the point of rupture to the response of structural and nonstructural components in a facility or lifeline will be computationally expensive. NSF should continue to develop high-performance computing capabilities in support of such simulations for future use by researchers, design professionals, and decision-makers.
• Fully coupled, physics-based simulations will generate terabytes, and even petabytes, of data. NSF and USGS should develop analysis and visualization tools to process large volumes of data and enable decision-making in a timely manner.
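As one illustration of the kind of efficient alternative to brute-force Monte Carlo noted above, stratified schemes such as Latin hypercube sampling cover the input space with far fewer samples. The sketch below is a generic, standard-library implementation and is not tied to any particular simulation code:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Generate n_samples points in [0, 1)^n_vars such that, for every
    variable, exactly one sample falls in each of n_samples equal-
    probability strata (a Latin hypercube design)."""
    rng = random.Random(seed)
    # For each variable, a shuffled assignment of stratum indices to samples.
    strata = [rng.sample(range(n_samples), n_samples) for _ in range(n_vars)]
    points = []
    for i in range(n_samples):
        point = []
        for v in range(n_vars):
            k = strata[v][i]  # stratum index for sample i, variable v
            point.append((k + rng.random()) / n_samples)  # jitter within stratum
        points.append(point)
    return points

pts = latin_hypercube(10, 2)
```

Because each marginal is sampled once per stratum, a 10-point design already probes the full range of every input variable, whereas 10 independent Monte Carlo draws may cluster.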
Physics-based simulations describing the response of the built environment to fault rupture have been deployed on a limited basis (Olsen et al., 2009; Graves et al., 2010). The constitutive models used for soil and structural components are empirical and linear. The replacement of empirical linear models with multi-scale nonlinear models as described in other tasks will represent a paradigm shift in engineering science and practice in the United States. Successful execution of this task is contingent on funding of the basic science and engineering described in Tasks 1, 13, 14 and 16.
A major challenge facing the implementation of fully coupled, physics-based simulations is the interdisciplinary education of the next generation of engineers and scientists, who will have to be expert in earth science and physics, engineering mechanics, geotechnical engineering, and structural engineering, to be qualified to perform these simulations. University curricula will have to be changed. Research results will have to be disseminated quickly to academics for inclusion in graduate-level coursework.
Buildings built without adequate consideration of the earthquake effects appropriate for their location dominate the nation’s exposure to earthquakes. These buildings may be seismically vulnerable because they were built before seismic codes were enforced in their region, because the codes used were not yet mature, or because the earthquake threat is now known to be greater than when they were designed. Further, the design provisions used for buildings that are critical for post-earthquake response and recovery (e.g., hospitals, fire stations, emergency operations centers) may not provide adequate performance for their intended function. Although lifelines are critical for community resilience, the cost of damage to buildings and their contents and the resulting business interruption or downtime typically account for the bulk of the total economic loss from a large-magnitude earthquake. Further, the greatest threat to life in U.S. earthquakes is posed by existing buildings. However, the replacement or retrofit of a significant portion of our vulnerable building stock is not practical in the short or medium term. Current assessment methodologies cannot identify buildings whose performance would undermine the desired level of resilience for individuals or communities with the accuracy needed to develop efficient overall mitigation programs. Similarly, current retrofit design standards, although performance-oriented, coupled with currently employed construction techniques, may not provide the targeted performance with adequate reliability and economy.
It is critical in the near term that assessment methods be refined to more accurately identify buildings with inadequate performance and to provide engineers with the tools and procedures to economically retrofit those so identified. The development of Next-Generation Performance-Based Design in the ATC-58 project is expected to provide such capabilities, but it will be 10 years or more before this methodology is refined sufficiently to provide standards for evaluation and retrofit for widespread use. Many of the activities in this task will also apply to Task 14.
Conduct integrated laboratory research and numerical simulations to substantially increase understanding of the nonlinear response of archaic materials, structural components, and framing systems.
Develop reliable, practical analytical methods that predict the response of existing buildings with known levels of reliability.
Improve consensus standards for seismic evaluation and rehabilitation to increase their effectiveness and reliability, particularly with respect to predicting building collapse.
Develop simplified methods of evaluation of known reliability.
Existing Knowledge and Current Capabilities
The issue of seismic risk from existing buildings was put on a national stage when FEMA launched its Program to Reduce the Seismic Hazards of Existing Buildings. The Action Plan describing the main elements of that program was developed at a workshop held in Tempe, AZ, in 1985 (FEMA, 1985a, 1985b). Although this program generated many useful intermediate documents, its initial goals were reached with the publication of a national seismic evaluation standard, ASCE 31 (ASCE, 2003), and a standard for seismic rehabilitation, ASCE 41 (ASCE, 2007). Although these standards are now referenced in building codes and are widely used here and abroad, it is well recognized that sufficient knowledge of the nonlinear performance of many archaic materials, components, and framing systems is not available, and much of the component modeling and acceptance criteria in these standards is based on engineering judgment. Perhaps more concerning are the results of several multiple-building evaluations using ASCE 31, which indicate that 70 percent to 80 percent of older buildings fail the seismic life safety standard (Holmes, 2002; R&C, 2004). Limited earthquake experience in the United States in the past 60 years does not confirm these results and, importantly, it is not practical to replace or retrofit 70 percent to 80 percent of our older building stock.
In addition, there are indications that retrofits conforming to the ASCE 41 standard are overly expensive and may be conservative. Studies are currently under way at the National Institute of Standards and Technology to calibrate retrofits resulting from the provisions of ASCE 41 against design requirements for new buildings. Other recent studies (FEMA, 2005) have shown that the use of the primary analysis method recommended in ASCE 41, which depends on the “pushover” technique, should be limited to low-rise buildings, possibly not exceeding three stories. Although the next update cycle for ASCE 41 has begun, these major shortcomings cannot be rigorously addressed without developing a body of data from tests of large-scale components and systems, new numerical models of structural components that are suitable for computer analysis of existing buildings, and assessment procedures that are both efficient and sufficient for predicting the seismic response of archaic construction from incipient damage through collapse.
Rehabilitation techniques have been refined for some archaic structural framing systems and a few innovative techniques such as the use
of fiber-reinforced plastic (FRP) wraps and coatings and the addition of damping have been developed, but cost and business disruption remain major deterrents to the widespread seismic rehabilitation that is needed to achieve a substantial reduction in our nation’s earthquake risk. Retrofit costs are commonly of the same magnitude as the cost of the entire structure of a new building and, for special cases such as historic buildings, can be several times the structural cost of a new building. The most common retrofit techniques are described in a FEMA document, Techniques for the Seismic Rehabilitation of Existing Buildings, FEMA 547 (FEMA, 2006), but more complete training of engineers and other stakeholders in the entire retrofit process will be required to accomplish significant reduction of risk from existing buildings, particularly in regions where the risk is less clear than that posed by the known faults on the West Coast.
Many of the basic knowledge needs and implementation tools required to improve this significant mitigation activity are similar to those needed to advance performance-based earthquake engineering. However, recent studies by the community have identified activities specifically related to existing buildings. ATC 73 (ATC, 2007) contains recommendations for research, and ATC 71 (ATC, 2009a) presents an action plan for implementation. The following recommendations draw heavily from these two publications:
• Establish a coordinated research program related to existing buildings. The NEES facilities provide most of the hardware to accomplish the physical testing needed, but such a program requires resources much greater than those currently supported by NSF. Numerical simulations must be integrated with physical testing, requiring additional support from NSF, other federal agencies, city and state agencies, and industry.
• Develop fragility data for structural and nonstructural components of existing buildings, both to support the development and utility of performance-based earthquake engineering in the long term and to improve decision-making using current retrofit procedures in the short term.
• Improve collapse-prediction capabilities.
• Undertake full or large-scale shake table tests of complete building structural and/or nonstructural systems.
• Perform extensive in-situ testing of existing buildings and components thereof. Many opportunities to test buildings scheduled for demolition have been missed. A program targeted at such testing, with incentives to building owners, should be started.
• Current analytical procedures often predict higher probabilities of failure than have been observed in post-earthquake reconnaissance, particularly for short-period buildings. Soil-foundation-structure interaction effects on input shaking intensity may help explain these observations. Additional study in this area, particularly of lateral decoupling of the structure and ground, is needed.
• Develop retrofit methods that are more targeted at specific deficiencies and are less costly and intrusive than current standards of practice. There are ongoing projects at the time of this writing that seek to identify deficient buildings and quantify their exposure, including EERI’s Concrete Coalition and research projects funded by NSF under the NEESR program. The results of these projects must be carefully mined to update prior estimates of the vulnerability of the national building stock.
• Develop techniques for nondestructive testing of in-situ structural materials and components and methods of creating the as-built concealed geometric data needed for analysis.
• Develop a comprehensive, nonproprietary building rating system, building on the ATC-50 project that developed such a system for wood frame buildings. Such a system has been recommended for decades as a way to bring seismic safety “into the marketplace” and is discussed at virtually every workshop related to seismic safety. Analytical tools to create a reliable rating system may soon be available. Other issues related to administration of such a system and quality control must also be resolved.
• Develop a uniform method of translating test data to acceptance criteria for use with current analysis and retrofit design methods.
• Collect, curate, and archive building inventory data in all seismic regions to facilitate regional loss estimation and to focus research on the most common high-risk building and structural types.
• Calibrate evaluation methodologies and prediction of damage states both with earthquake damage data and with performance expectation of new buildings.
• Calibrate retrofit standards and techniques with performance expectations of new buildings.
• Clarify structural and nonstructural performance objectives in ASCE 31 and 41 incorporating uncertainty into the definitions.
• Increase programs to move research results into practice and to train engineers and other design professionals in the use of the latest consensus standards of practice.
• Develop methods to track replacement or retrofit of existing deficient buildings within all occupancy categories, and for each seismic region.
• Develop recommendations for appropriate retrofit of nonstructural systems dependent on local seismicity and occupancy.
• Support updating of standards and guidelines for evaluation, retrofit design methods, and retrofit techniques, and development of new standards and guidelines as appropriate.
• Develop methods to measure the contribution, positive or negative, of the local existing building stock to community resilience.
• Incorporate the concepts of sustainability, including preservation of embodied energy, across all aspects of the existing building seismic issue.
• Encourage risk reduction programs that educate communities and leverage interest in seismic safety and community resilience such as evaluation of schools and community emergency response buildings.
Issues associated with the effective implementation of improved techniques of evaluation and retrofit, and more generally, effective reduction of risk in existing buildings, include the following items:
• Lack of awareness of the significance of the risk from older existing buildings with respect to life safety and, perhaps more importantly, as a direct link to the level of community resilience.
• Lack of training of engineers and other design professionals, particularly architects, planners, and building officials, with the highest priority in areas with less frequent seismic events.
• Lack of confidence regarding cost-effectiveness of current standards and techniques.
• Lack of an integrated program of research and application conditioned by feedback from informed stakeholders.
• The concept of a building rating system that would automatically place value on seismic performance is excellent, but the formalization and implementation of such a system will be difficult. The U.S. Green Building Council and the LEED rating system may provide a useful model.
• The difficulty of building an accurate inventory including the prevalence of specific seismic deficiencies prevents efficient identification of building/structural types to target for replacement or retrofit.
Performance-based earthquake engineering enables decision-makers to target explicit levels of vulnerability for components of the built environment in terms of their resilience (life safety, repair cost, and business interruption). Advances in performance-based earthquake engineering
will facilitate the development of design tools, codes and standards of practice for new and retrofit buildings, lifelines, and geo-structures; a building rating system; regional loss estimation; structure-specific loss estimation; earthquake-oriented decision tools for individual owners, communities, businesses, and governments; and portfolio analysis.
Advance performance-based earthquake engineering to improve design practice, inform decision-makers, and revise codes and standards for buildings, lifelines, and geo-structures.
Existing Knowledge and Current Capabilities
Design of buildings and other structures for earthquake shaking was developed initially to avoid collapse and to prevent debris from falling into adjacent streets. The first design rules were based on estimates of the required lateral strength derived from observation of the performance of structures in damaging earthquakes in Japan, Italy, and the United States, but had little or no scientific basis. Continuing observation of the performance of structures under strong ground shaking, as well as a gradual increase in understanding of dynamic structural response to shaking, led to refinement of these design rules over the past six decades. Over that time, the performance goal of seismic design remained “life safety,” although the term was only vaguely defined. Structures deemed important, such as nuclear power plants, key bridges, and post-earthquake emergency buildings, were designed with the intention of controlling or minimizing damage, generally by making them stronger. However, the seismic design of most buildings and structures today relies on design rules rather than on analysis of the structure under expected shaking to estimate damage.
Many older pre-code buildings and structures were known to be high-risk, but the design rules for new buildings and other structures were difficult or impossible to apply to reduce this risk. New rules were created for this purpose, often for a specific structure type (e.g., unreinforced masonry bearing wall buildings), and it was acknowledged that the seismic performance of these retrofitted buildings and structures would not be equal to that of new construction. As retrofitting became more common, trade-offs between the cost and disruption of the work and the expected performance were accepted as an inevitable characteristic of earthquake-risk reduction. When the FEMA-funded project to develop formal engineering guidelines for retrofit of existing buildings began in 1989 (ATC, 1994), it was recommended that the rules and guidelines be sufficiently flexible to
accommodate a wide range of local, or even building-specific, seismic risk reduction policies. The resulting document, FEMA 273, NEHRP Guidelines for the Seismic Rehabilitation of Buildings (FEMA, 1997a), contained various performance levels with titles of Operational, Immediate Occupancy, Life Safety, and Collapse Prevention and the seismic design of buildings for desired performance, rather than to prescriptive rules, began to gain traction in the design professional community. Because of these developments, this focus area is strongly related to Task 13 above.
Increases in analytical capability and a growing demand for performance-targeted seismic design led to a FEMA-funded program to develop guidelines for performance-based design of buildings, which is several years from completion and is operating under a budget significantly reduced from that initially determined to be required. The ATC-58 project focuses on the calculation of repair cost, time of business interruption, and likely casualties in the building due to earthquake shaking. There are great uncertainties related to all aspects of these calculations, including what intensity of ground motion is expected; the exact characteristics of the ground motions; the accuracy of the computer model and the analysis method; the nature of the damage to the structural framing, nonstructural components, and building contents; and the consequences of this damage. The Guidelines will explicitly consider all these uncertainties, resulting in a relatively complex methodology that will have to be simplified for practical use. A 50 percent draft of the Guideline (ATC, 2009b) is available from the Applied Technology Council.
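Stripped of its uncertainty treatment, the kind of chained calculation the Guidelines formalize can be illustrated by integrating a mean loss-given-intensity function against a site hazard curve to estimate expected annual loss. Every number below is invented for illustration; the ATC-58 methodology propagates full distributions at each step rather than means:

```python
# Hypothetical hazard curve: (spectral acceleration in g, annual rate of
# exceedance). Illustrative values only, not from any published hazard map.
hazard = [(0.1, 0.05), (0.2, 0.02), (0.4, 0.005), (0.8, 0.001), (1.2, 0.0002)]

def mean_loss_ratio(sa):
    """Hypothetical vulnerability: mean fraction of replacement cost lost
    when shaking intensity is sa."""
    return min(1.0, (sa / 1.5) ** 2)

def expected_annual_loss(replacement_cost):
    """EAL ~= sum over hazard intervals of loss(midpoint Sa) times the
    annual rate of intensities falling in that interval."""
    eal = 0.0
    for (sa_lo, rate_lo), (sa_hi, rate_hi) in zip(hazard, hazard[1:]):
        sa_mid = 0.5 * (sa_lo + sa_hi)
        eal += mean_loss_ratio(sa_mid) * (rate_lo - rate_hi)
    return eal * replacement_cost
```

For a $1 million building this toy model yields an expected annual loss on the order of a few thousand dollars; the point is the structure of the calculation, not the numbers.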
Geo-structures, which are engineered earthen construction and include levees, dams, and landfills, are critical components of the built environment. Performance-based design and assessment tools for this important class of structures are unavailable.
Herein, buildings and other structures as defined by FEMA P-750, NEHRP Recommended Provisions for the Seismic Design of New Buildings and Other Structures (FEMA, 2009b), and industrial and power-generation facilities are described as structures, lifelines include bridges and transportation networks, and geo-structures are engineered earthen construction as noted above. Earthquake-induced ground deformation is assumed to include surface fault rupture, landslides, liquefaction, lateral spreading, and settlement.
The basic knowledge needs and implementation tools required to advance performance-based earthquake engineering include the following items, many of which are identified in Research Required to Support Full Implementation of Performance-Based Seismic Design (NIST, 2009).
• ANSS should be fully deployed and maintained (see Task 2) to improve knowledge of seismic wave propagation, basin effects, local soil effects, ground motion incoherency, effects of embedment, wave scattering, ground deformation, and soil-foundation-structure interaction.
• Earthquake generation and propagation through heterogeneous underground structures is a very complex process that is impossible to observe. Satellite and LiDAR imaging of fault traces, zones susceptible to liquefaction and landslides, and paleoseismic studies are needed to better characterize earthquake sources and impacts.
• Performance-based seismic design and assessment use—and will continue to use—the results of seismic hazard analysis, which characterizes the effects of earthquake shaking. USGS should continue to update the National Seismic Hazard Maps and the associated design-oriented Java-based applets using new knowledge on earthquake ground motion, faults, and predictive relationships.
• Predictive models of ground shaking and deformation are required for performance-based earthquake engineering. In those regions of our nation where earthquake data are sparse or nonexistent, earthquake-physics simulations should be used to build or augment the dataset. USGS should develop urban hazard maps (where “urban” extends beyond coastal California; see Task 4) for liquefaction (including lateral spreading and settlement), surface fault rupture, and landslide potential, to complement maps available for ground shaking.
• Excessive ground deformation can fail foundations, lifelines, and geo-structures. Robust analysis procedures for predicting ground deformation and their effect on elements of the built environment are needed. Techniques for mitigating the effects of liquefaction should be developed and validated using NSF-NEES infrastructure.
• Tools for site response analysis range in complexity between deterministic site class coefficients and nonlinear site response analysis. The deterministic site class coefficients that are provided in ASCE 7 (ASCE, 2005) are approximate values, are based on limited data, and are strictly applicable to West Coast sites only. Nonlinear site response analysis uses constitutive models for soils, which vary in rigor and degree of validation, especially for deformations consistent with the intense shaking expected close to active faults. Improved site class coefficients, which are valid for sites across the United States, are needed for routine design and performance assessment. Improved constitutive models for soils are required to enable robust nonlinear site response analysis.
• Soil-foundation-structure interaction can substantially modify the
seismic response of a structure. Advanced time- and frequency-domain simulation algorithms, codes, and tools for such analysis (1-dimensional, 2-dimensional, and 3-dimensional) of discrete structures and clusters of structures (dense urban regions) are needed for accurate assessment of performance. Improved constitutive models for soils will enable these simulations. Simplified guidelines and tools to address soil-foundation-structure interaction for scheme design and seismic performance assessment must be developed.
• Performance-based seismic design and assessment of structures typically involve a suite of three-component sets of earthquake acceleration time series. There is no consensus on how to select and scale these time series, and no single method will be widely applicable. The optimal procedures may vary as a function of earthquake intensity, geographic location (seismic hazard), local soil conditions, dynamic properties of the structure, and proximity to adjacent structures. Reliable procedures to select and scale earthquake ground motions for design and assessment of structures, lifelines, and earthen structures must be developed.
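One building block of such procedures, amplitude scaling of a candidate record so its response spectrum best matches a target spectrum over a period range of interest, can be sketched as follows. The log-mean scaling rule and the spectral ordinates here are illustrative assumptions, not a standardized method:

```python
import math

def best_scale_factor(record_sa, target_sa):
    """Scale factor minimizing the mean squared difference of log spectral
    ordinates over the period range: the exponential of the mean log ratio
    target/record (an assumed, commonly used least-squares rule)."""
    logs = [math.log(t / r) for t, r in zip(target_sa, record_sa)]
    return math.exp(sum(logs) / len(logs))

# Illustrative spectral ordinates (g) at a few periods for one record
record = [0.80, 0.60, 0.40, 0.25]
target = [1.00, 0.75, 0.50, 0.30]

k = best_scale_factor(record, target)
scaled = [k * s for s in record]
```

A full selection procedure would also screen candidate records by magnitude, distance, and site condition before scaling, which is part of what remains unstandardized.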
• Performance and loss computations are made by analysis of numerical models. Numerical models of structures are assemblies of models of structural and nonstructural components. Improved hysteretic models are required for modern and archaic structural components, developed using physics-based constitutive micro-models and data from tests of large-size components and systems using NSF-NEES infrastructure. Component models should be able to trace behavior under arbitrary loading through to failure.
• Current procedures for performance-based earthquake engineering, such as those included in ASCE 41-06, Seismic Rehabilitation of Existing Buildings (ASCE, 2007), have not been benchmarked adequately. Many engineers opine that the procedures are conservative and that their use leads to needless construction expenditure, which impedes voluntary seismic rehabilitation. The reliability of the ASCE 41 procedures is unknown. A systematic examination of the procedures, using earthquake data and other evaluation methodologies, is needed because these procedures will not be replaced in the near term.
• Codes and standards of seismic design practice target the prevention of collapse (structures, lifelines) or catastrophic failure (earthen structures). Collapse or failure may result in catastrophic financial and/or physical losses. Current methods for collapse calculations are unproven and likely unreliable because the mechanisms that trigger collapse or failure are not understood, and component and constitutive material models cannot trace behavior through failure. Substantial improvements in numerical modeling tools are needed for collapse computations. These tools must be validated by a combination of small-scale and full-scale physical simulations of systems through failure using NSF-NEES infrastructure.
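Once modeling tools can trace behavior through failure, collapse results are often summarized as an empirical collapse fragility from incremental dynamic analysis: each record is scaled up until the model collapses, and the fraction of records causing collapse at a given intensity is counted. In the toy sketch below, the per-record collapse intensities are invented stand-ins for full nonlinear analyses:

```python
# Toy sketch of an empirical collapse fragility. Each entry is the spectral
# intensity (g) at which one ground-motion record first causes the model to
# collapse; these values are invented, standing in for the results of
# incremental dynamic analysis on a validated structural model.
collapse_intensity = [0.9, 1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.3]

def p_collapse(im):
    """Empirical probability of collapse at intensity measure `im`:
    the fraction of records whose collapse intensity is <= im."""
    return sum(1 for c in collapse_intensity if c <= im) / len(collapse_intensity)
```

The reliability concern raised above is that the underlying per-record collapse intensities are only as good as the component models' ability to trace behavior through failure.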
• Loss computations use fragility and consequence functions for modern and archaic structural and nonstructural components and assemblies in structures. The database of such functions for components and assemblies is small and must be expanded through coordinated numerical and experimental simulations using NSF-NEES infrastructure and collaboration between researchers and design professionals.
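The role such functions play in a loss computation can be sketched with a conventional lognormal fragility form and a simple consequence function. The medians, dispersions, and repair costs below are invented for illustration only:

```python
import math

def lognormal_fragility(im, median, beta):
    """P(damage state reached or exceeded | intensity measure im), using a
    lognormal CDF with median `median` and logarithmic dispersion `beta`."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) / (beta * math.sqrt(2.0))))

# Illustrative component: three damage states with invented medians (g),
# dispersions, and a consequence function (mean repair cost per state).
medians = [0.3, 0.6, 1.2]
betas   = [0.4, 0.4, 0.4]
repair_cost = [5_000.0, 20_000.0, 60_000.0]

def expected_loss(im):
    """Expected repair cost at intensity im for this one component."""
    p_exceed = [lognormal_fragility(im, m, b) for m, b in zip(medians, betas)]
    # probability of being in state i = P(>= state i) - P(>= state i+1)
    p_in = [p_exceed[i] - (p_exceed[i + 1] if i + 1 < len(p_exceed) else 0.0)
            for i in range(len(p_exceed))]
    return sum(p * c for p, c in zip(p_in, repair_cost))
```

Expanding the database discussed above means populating such medians, dispersions, and consequence values from coordinated tests and simulations rather than judgment.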
• The loss estimation tools developed by FEMA for the ATC-58 project on performance-based seismic design of buildings are basic (ATC, 2009b). Loss estimation tools must be advanced for structures and expanded to address loss associated with ground deformation, fire following earthquake, and carbon emissions associated with earthquake-induced damage. A recent study by NIST developed a research agenda to fully realize the benefits of performance-based seismic design as envisaged by the ATC-58 project (NIST, 2009).
• The expected performance of code-conforming structures across the range of intensity of earthquake shaking is unknown. Performance assessment tools such as ASCE 41-06 and the draft ATC-58 methodology should be used to assess the likely performance of modern, code-conforming buildings as a function of framing-system type and height, local soil conditions, and geographic location (seismic hazard) and to inform future revisions of design codes and standards.
• The performance-based earthquake-engineering framework developed in the ATC-58 project should be extended to address the effects of ground deformation and flooding, and expanded in scope to enable design and assessment of non-building structures including lifelines, earthen structures, and flood protection systems.
• Nonstructural components and contents comprise most of the investment in structures, but there are no performance-based seismic design procedures for such components and contents. Such procedures should be developed, informed by work completed over the past 2 decades by the U.S. Nuclear Regulatory Commission.
• Construction materials and framing systems are by and large unchanged from those used 50 years ago. Smart/innovative/adaptive/sustainable structural framing systems provide new opportunities for construction and warrant speedy development.
• A method should be developed to directly calculate the carbon footprint of proposed buildings and the potential savings available from providing better seismic performance.
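At its core such a method is an accounting of material quantities multiplied by emission factors, with the expected avoided emissions from earthquake repair as the key open term. A minimal sketch follows; the factors and quantities are rough assumptions for illustration, not standardized values:

```python
# Sketch: embodied-carbon estimate for a structural frame as material
# quantities times emission factors. The factors below (kg CO2e per kg of
# material) and the quantities are rough, assumption-level values.
EMISSION_FACTOR = {"concrete": 0.13, "steel": 1.85}

def embodied_co2_kg(quantities_kg):
    """Total embodied CO2e (kg) for a dict of material masses (kg)."""
    return sum(EMISSION_FACTOR[m] * q for m, q in quantities_kg.items())

baseline = {"concrete": 2_000_000, "steel": 150_000}   # code-minimum design
enhanced = {"concrete": 2_050_000, "steel": 160_000}   # better-performing design

upfront_penalty = embodied_co2_kg(enhanced) - embodied_co2_kg(baseline)
# A full method would compare this upfront penalty against the expected
# avoided emissions from earthquake-induced repair and reconstruction
# over the building's life, which is the harder part to calculate.
```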
• Performance-based seismic design and assessment is computationally more intensive than traditional code-based design. Each simulation may generate gigabytes of data. New visualization tools are needed
to assess these large datasets. Grid- and cloud-based computing tools will be required to support these large-scale simulations.
• A plan should be developed and executed to regularly revise guidelines, standards, and codes for performance-based design and assessment of buildings.
Issues associated with the effective implementation of performance-based earthquake engineering include the following items.
• Guidelines, standards, and codes are the primary mechanism for improving design practice. Federal agencies should develop and execute a plan to regularly revise guidelines, standards, and codes for new and retrofit design and performance (loss) assessment of facilities and infrastructure.
• The web is now the preferred portal for earthquake-related products and data. Web-based products and earthquake-related data should be further developed and maintained to transfer new knowledge to end users.
• Acceptance by stakeholders of explicit inclusion of uncertainty in descriptions of expected losses.
• Development of simplified procedures and tools appropriate for use by design professionals for routine design.
• Development of educational materials for owners of buildings that will enable and encourage use of performance-based design.
• Educational programs for design professionals (see Task 17).
• Calibration of design rules in building codes to achieve a specified level of performance in design earthquake shaking.
Reliable infrastructure is a priority goal for earthquake-resilient communities. The capacity of critical infrastructure to withstand an earthquake or other natural or man-made disaster, and to quickly restore services afterward, determines how rapidly communities can recover. Many communities rank the availability of electricity, highways, and water supply as the top three critical infrastructure or lifeline systems that need to function following an earthquake (ATC, 1991). Reliable infrastructure is also recognized as being essential at the national level for global economic competitiveness, energy independence, and environmental protection (NRC, 2009). Reducing the vulnerability of critical infrastructure to natural disasters is identified as a strategic priority in the 2008 NEHRP Strategic Plan, by the NEHRP Advisory Committees (ACEHR, 2009), and in congressional testimony (O’Rourke, 2009), and is linked to broader federal policies and priorities, including the Department of Homeland Security’s National Infrastructure Protection Plan (DHS, 2009) and the Subcommittee on Disaster Reduction’s Grand Challenges for Disaster Reduction (SDR, 2005).
Although some infrastructure renewal is being addressed in the short term through the American Recovery and Reinvestment Act (PL 111-5), there is much to do in the long term, as documented in reports such as the American Society of Civil Engineers’ Infrastructure Report Card (ASCE, 2009), which gave 12 of America’s infrastructure categories an overall grade of D and estimated an investment of $2.2 trillion over 5 years to upgrade and improve infrastructure systems. America’s infrastructure is made all the more vulnerable to earthquakes and other natural disasters by its poor state of repair. Large segments of the nation’s critical infrastructure are now 50 to 100 or more years old, and many were built long before the current generation of earthquake codes, standards, and guidelines was put into effect. In California, past earthquakes have exposed and damaged the weak links in infrastructure systems, and many owners have taken steps to prepare for future disasters through repair and replacement programs and the implementation of improved standards and guidelines, updated construction materials, and current design practices. Yet there is still much to do. The catastrophic levee failures in New Orleans following Hurricane Katrina in 2005 demonstrated the vulnerability of these lifeline systems. A strong San Francisco Bay area or Central Valley earthquake could result in failure of the levee system in the Sacramento-San Joaquin Delta, with consequent disruption of drinking water supplies to more than 22 million Californians; disruption of irrigation water to Delta and state agricultural lands could cascade into a national agricultural disaster.28 Elsewhere in the United States, where earthquakes are less frequent, the vulnerabilities or risks posed by earthquakes to infrastructure systems may not be recognized or fully appreciated by stakeholders. As a result, many owners may have only partially prepared, or have done nothing at all.
A dramatic “wake-up call” concerning the vulnerability of electric systems, and the resultant regional and national consequences, occurred as a result of the August 2003 Northeast Blackout. The blackout affected 8 states and 50 million people, and caused an estimated $4-10 billion in business interruption losses in the central and eastern United States (U.S.-Canada Power System Outage Task Force, 2004). Moreover, the power
outage caused “cascading” failures to water systems, transportation, hospitals, and numerous other critical infrastructures; such infrastructure failure interdependencies are common across many types of disasters (McDaniels et al., 2008). In 1998, a study on the effects of a large New Madrid earthquake in the central United States estimated that direct and indirect business interruption economic losses due to extended power disruption could be as high as $3 billion (Shinozuka et al., 1998). At that time, there was little evidence that such losses were possible. The 2003 Northeast Blackout demonstrated that while initiating events can vary (e.g., a falling tree, an earthquake, or an act of terrorism), the consequences can be similar.
The 2008 NEHRP Strategic Plan identified the development of earthquake-resilient lifeline components and systems as a Strategic Priority. In contrast to individual buildings, which occupy a specific site or location, lifeline systems are geographically distributed and interconnected. As a result, these systems have earth science and engineering needs that may be specific or unique to a particular lifeline system and that differ from those of the building community. For example, geo-structures are engineered earthen construction and include levees, dams, and landfills. As discussed in Task 14, these critical components of the built environment currently lack performance-based design and assessment guidelines. Guidelines, standards, and codes are the primary mechanism for operating and maintaining functional infrastructure systems. Typically, standards and guidelines address individual components of lifeline systems; many contain earthquake loading and design provisions, but others do not. The actions that are needed to make lifeline systems more resilient are:
1. Fill in critical remaining gaps. New standards and guidelines that fill in the remaining gaps for lifeline performance and retrofit should be developed during the initial 5-year period. The American Lifelines Alliance Matrix of Standards and Guidelines for Natural Hazards (ALA, 2003) summarized the natural and man-made hazard provisions of infrastructure standards and guidelines, and this summary provides a framework for identifying where guidance needs to be developed, improved, or updated. As discussed in Task 10, there is a need to better characterize infrastructure network vulnerability and resilience. This would identify the weaknesses in current lifeline systems and the consequences of lifeline interdependencies (both spatial and functional) in order to prioritize the most effective retrofits and functional modifications to improve future earthquake performance at the regional and community level.
There are few guidelines/standards that address system reliability, i.e., the practices developed to provide reasonable assurance that the consequences of a natural hazard event on system service will meet the goals established by the stakeholders. As in performance-based engineering for buildings, these consequences are defined by multiple requirements, but typically include public safety, duration of service interruption, and the costs to repair damage. Tools are needed to model these consequences not only for the utilities, but also for the local community and economy (see Task 10).
2. Systematically review and update existing lifelines standards and guidelines. Existing lifelines standards and guidelines need to be systematically reviewed and updated to include the most up-to-date utility practices and the latest engineering and geotechnical research results. The need for a consensus on the level of hazard that should be considered for use in new lifeline designs and upgrades was articulated at the community workshop hosted by this committee. NEHRP collaboration with Standards Developing Organizations can facilitate these types of reviews and coordinate code and standard updates that reference the latest edition of the National Seismic Hazard Maps.
3. Demonstration/pilot projects tied into pilot communities. Reliable electric power and water are essential for developing community resiliency. Pilot programs and demonstration projects that showcase new utility practices and implement lifeline mitigation guidelines and standards should be encouraged as part of the larger community pilot NEHRP programs discussed in Tasks 17 and 18. Involving community stakeholders in defining the level of acceptable lifeline risk for their communities, and understanding shared public-private responsibilities, are necessary to achieve those goals.
4. Lifelines earthquake engineering research. Lifelines-focused research is needed to fill many of the gaps identified in Action 1 above. NEHRP-supported collaborative research, with infrastructure owners and operators, to address user- and owner-defined issues has been a success in the past and should be reinvigorated during the next 5 years. The community workshop in Irvine, CA, identified a number of lifeline research topics that cut across engineering, earth science, and social science issues, including the need to better understand lifeline interdependencies and the physical performance of lifeline systems and the consequences to communities of their disruption, and the need to develop protocols for researchers to use proprietary data for analysis without jeopardizing security and confidentiality of the lifeline system operator. The workshop
also recognized that active support for this type of research from both the public and private sectors is a key requirement for continued success.
Existing Knowledge and Current Capabilities
Critical infrastructure or lifeline systems are the utility—energy (electric power, natural gas, and liquid fuels), water, wastewater, and telecommunications—and transportation (highway, rail, waterways, ports and harbors, and air) systems (NRC, 2009). The ownership of and responsibility for critical infrastructure systems span both public and private sectors. Water and wastewater systems are primarily owned and operated by public entities, while the private sector typically owns and operates power and telecommunications systems. State and local authorities are responsible for roads, highways, and bridges; and ports, airports, and railroads are owned by quasi-public or private organizations. Regulatory oversight for infrastructure systems is equally broad and spans federal, state, and local jurisdictions. In contrast to individual buildings, which occupy a specific site or location, infrastructure systems are geographically distributed and interconnected. These types of networks create functional and geographic interdependencies, where damage in one part of a system can impact other parts of that system, and damage in one lifeline system can disrupt other systems. Many interdependencies may have been created by default, not by plan, creating unforeseen vulnerabilities that are not apparent until a disaster strikes.
Lifeline Earthquake Engineering Research
The 1906 San Francisco and 1933 Long Beach, CA, earthquakes demonstrated the consequences of multiple lifeline system failures on a community. Although the severe effects of these earthquakes spurred the initial development of seismic design requirements for buildings and other structures in California, the lessons on the need for rapid restoration of lifelines to aid community response faded because no significantly damaging urban earthquake occurred for nearly 4 decades after 1933.
The 1971 San Fernando, CA, earthquake is regarded as the “birth of lifeline earthquake engineering” in the United States. The catastrophic effects of this earthquake on infrastructure in a rapidly growing area of southern California stimulated efforts in the engineering community to address the vulnerabilities that were exposed. ASCE created the Technical Council on Lifeline Earthquake Engineering (TCLEE) in 1974 to advance the state of the art and practice in lifeline earthquake engineering through research, standards and guideline development, and implementation at operating utility systems. TCLEE has actively published a monograph
series on lifeline topics and conducts post-earthquake reconnaissance surveys of lifeline performance after major earthquakes in the United States and worldwide.29
In the decades following the San Fernando earthquake, NSF-sponsored Engineering Research Centers (the National Center for Earthquake Engineering Research, now the Multidisciplinary Center for Earthquake Engineering Research (MCEER) at the State University of New York, Buffalo;30 the Mid-America Earthquake Center (MAE) at the University of Illinois at Urbana-Champaign;31 and the Pacific Earthquake Engineering Research Center (PEER) at the University of California, Berkeley32) were established to carry out research with the goal of reducing losses due to earthquakes. Each center developed a specific focus for its research and development activities. The central focus of PEER, for example, was performance-based earthquake engineering, facility- and system-level models, and computational tools for assessing and reducing earthquake impacts. MAE, on the other hand, focused on consequence-based engineering, system-level simulation, and analysis for assessing and reducing impacts. MCEER focused on the use of advanced and emerging technologies for reducing impacts and developing methods to quantify community resilience. All three earthquake engineering centers are also participants in the NSF-funded NEES program.33 NSF funding for these engineering research centers has now ceased, and the level of center-based research on lifelines has decreased substantially. This decrease affects not only engineering research but also interdisciplinary research.
Performance-Oriented Standards and Guidelines
The NEHRP agencies also addressed lifeline issues during the 1980s and 1990s through a series of workshops and studies. A nationwide assessment of lifeline seismic vulnerability and the impact of lifeline system disruption was conducted by the Applied Technology Council (ATC, 1991), which ranked the electric system, highways, and the water system as the most critical lifelines in terms of the impact of damage and disruption. Workshops conducted by NIST and FEMA (e.g., NIBS, 1989; FEMA, 1995; NIST, 1996) noted the limited number of nationally recognized standards for the design and construction of lifeline systems at that time, and recommended a focus on system performance as well as component performance consistent with the hazard level in the region. In 1997, an ASCE
Lifeline Policy Makers Workshop (NIST, 1997) recommended emphasis on guideline development and demonstration projects. Development and implementation of those recommendations were estimated to cost $15 million over 5 years.
In 1998, a cooperative agreement between ASCE and FEMA led to the creation of the ALA. ALA’s objectives were to facilitate the creation, adoption, and implementation of national consensus standards and guidelines to improve lifeline performance during hazard events. The ALA strategy focused on using the best industry practices, involving Standards Development Organizations (SDO), and addressing both component and network performance. In late 2002, FEMA brought the ALA under the Multi-hazard Mitigation Council through a partnership with NIBS. The ALA developed the Existing Guidelines Matrix (see Table 1 in ALA, 2003) to summarize the current status of natural and man-made hazards guidance available in the United States for lifeline system operators. Lifeline design and assessment guidelines and standards prepared by SDOs, professional and industry organizations, and practitioners in the relevant fields were included to identify the needs for guidance that does not exist yet or that must be improved and updated. ALA activities were terminated in 2006 because of budget restrictions at FEMA, severely hampering efforts toward further improving national lifeline standards and guidelines.
In addition to NEHRP support for lifeline research and guideline development, other federal and state agencies (e.g., Department of Energy (DOE), Federal Highway Administration (FHWA), Department of Transportation (DOT)) and the private sector (Electric Power Research Institute (EPRI), consortium projects with PEER/MCEER/MAE, Cooperative Research and Development Agreement (CRADA) projects with USGS and other federal agencies) actively support lifelines research. Cooperative user-driven research in the MCEER and PEER Lifelines programs, for example, has brought the state (CalTrans, CA Energy Commission) and private sector (Los Angeles Department of Water and Power (LADWP), Pacific Gas and Electric Company (PG&E), and other utilities) together with researchers to address common interest topics.
Research supported by NEHRP has substantially improved the modeling of complex lifeline systems, structural health monitoring, protective systems for buildings and bridges, and remote sensing for response and recovery from extreme events (EERI, 2008). NSF support for NEES has provided a national resource for demonstrating the cost-effectiveness of performance-based design, the development of new materials to reduce the impact of earthquakes and other extreme events, and the creation
of retrofit strategies to improve existing infrastructure performance. A continued commitment to improve performance-based design and engineering practices, coupled with development of physics-based numerical simulations for both component and system performance, is necessary for the next generation of earthquake-resilient lifeline systems. The systematic documentation and archiving of lifeline earthquake performance data (see discussion of PIMS, Task 9) is essential to evaluate these types of simulation models. Engineering objectives that address entire lifeline system performance (e.g., outage targets for extreme conditions) need to be developed for local needs and conditions.
In addition to mapping geologic hazards along lifeline corridors, geotechnical research to improve estimates of strong ground motion (wave passage, spatial coherency, and duration effects), ground displacement/deformation (fault rupture, landslides, liquefaction), and damage is needed to more accurately describe the earthquake demands on lifeline systems. Social and economic research to better understand the societal consequences of lifeline failures on communities is also needed. Emergency protocols for response to catastrophic lifeline failures, such as dam failures or natural gas-fueled fires following an earthquake, need to be reviewed as communities grow and encroach on potentially hazardous areas. The ability to model cascading failures of, and between, social and infrastructure systems can help a community visualize the impacts and identify the necessary steps to become more resilient.
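The core of such cascading-failure modeling is propagation over a directed dependency graph, where an edge (a, b) means component b cannot function without component a. A minimal sketch, with invented component names and dependencies:

```python
# Sketch: propagating failures across interdependent lifeline networks
# modeled as a directed dependency graph. The component names and edges
# below are invented for illustration; an edge (a, b) means "b cannot
# function without a".
dependencies = [
    ("substation_A", "pump_station_1"),   # water pumping needs power
    ("substation_A", "traffic_signals"),
    ("pump_station_1", "hospital_water"), # hospital supply needs pumping
    ("substation_B", "cell_tower_3"),
]

def cascade(initial_failures):
    """Return the full set of failed components after dependency propagation."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for upstream, downstream in dependencies:
            if upstream in failed and downstream not in failed:
                failed.add(downstream)
                changed = True
    return failed

# Earthquake damage to one substation cascades to water and traffic systems.
outage = cascade({"substation_A"})
```

Real applications layer fragility, restoration times, and community consequences on top of this skeleton, which is why proprietary inventory data (discussed below) matter so much.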
Federal coordination of earthquake-related lifeline research and mitigation, however, needs to extend beyond the four principal NEHRP agencies. The 2008 NEHRP Strategic Plan states that it will “focus its efforts on critical lifelines systems and components that are not being addressed by other agencies or organizations, in order to avoid duplicative efforts and maximizing leveraging of resources.” This goal recognizes the need to bring other federal agencies that either support research or have regulatory authority, such as the Department of Energy, the Department of Transportation/Federal Highway Administration and the Office of Pipeline Safety, and the Department of Homeland Security “to the table” in order to leverage investments and optimize potential NEHRP contributions. In addition to federal coordination, multi-level coordination between all stakeholders at state and local levels (including both public and private sectors) is critical for successful lifeline risk management.
Utilities are familiar with preparing for and responding to natural hazard events such as strong windstorms and seasonal floods or human
events such as intentional acts of vandalism or accidental “dig-ins” or ruptures of buried pipelines. Rare and extreme events, such as severe earthquakes, flooding of historic proportions, or a concerted terrorist attack, however, can overwhelm ordinary utility experience and preparation and can result in widespread damage and service disruption. Although utilities that have experienced such severe events have generally taken steps to prepare adequately for future disasters, many others have only partially prepared, and still others are not aware of their full exposure or vulnerability to these threats. The current lack of an organization like the ALA to facilitate the creation, adoption, and implementation of national consensus guidelines and standards to improve lifeline performance during rare or extreme hazard events is a major impediment to implementation of NEHRP goals.
Various stakeholders, both public and private, have competing priorities for risk management investments. In some cases those investments can be at the expense of or delay seismic mitigation activities such as equipment or building retrofits, especially in areas of perceived low seismic hazard or risk. An unintended consequence of the restructuring of the electricity industry in the United States, for example, has been a sharp decline in expenditures for research and development by investor-owned utilities (Blumstein and Wiel, 1999).
Protocols for dealing with proprietary data and analysis, without jeopardizing the security and confidentiality of the lifeline system operator, need to be developed at federal, state, and local levels. Many stakeholders, especially those operating critical infrastructure, are reluctant to release inventory information outside their organizations, or are prohibited from doing so. These restrictions impact the ability of communities to recognize and plan for service disruptions during disasters. Public-private partnerships, whereby individual utilities could conduct their own risk assessments in-house, using standardized methodologies and earthquake scenarios, and then share the results with their counterparts and other stakeholders to address inter-utility interdependencies and community impacts, need to be encouraged. These types of partnerships would allow for more informed disaster planning within the community without compromising confidentiality.
The construction materials used in seismic framing systems in medium- and high-rise buildings and other structures are either concrete or steel, and both of these materials have a high carbon footprint. There have been few materials developments in the past 100 years. New sustainable materials suitable for use in the construction industry should be developed to meet the goals of high performance (and thus low volume) and low carbon footprint per unit volume. Buildings constructed with these components should enhance the earthquake resilience of the built environment.
Adaptive components and framing systems have been proposed in the form of semi-active and actively controlled components and structures but have not been implemented in buildings and other structures in the United States. Adaptive components offer the promise of better controlling the response of structures across a wide range of shaking intensity to limit damage and loss.
Develop and deploy new high-performance materials, components, and framing systems that are green and/or adaptive.
Existing Knowledge and Current Capabilities
Little research and development effort has been devoted in the past 3 decades to new materials for application to earthquake-resistant construction. Notable exceptions include fiber-reinforced polymers for retrofit applications and elastomers and composites in seismic isolation systems. Some work is under way on low-cement concretes, fiber-reinforced high-performance concretes, and very high-strength steel. Despite these innovations, the field application of these emerging technologies is stymied for a number of reasons, including (a) incomplete materials characterization, (b) high perceived cost, (c) lack of regulation and/or design standards, (d) a conservative and risk-averse construction industry, and (e) limited incentives for green construction.
Structural components constructed using adaptive fluids (e.g., electro- and magneto-rheological fluids) and bracing systems have been tested in the laboratory at small and moderate scales (e.g., Whittaker and Krumme, 1993; Spencer and Soong, 1999; Soong et al., 2005). The advantages offered by adaptive components have been explored but not fully documented; these advantages depend on the control algorithms used and are offset by the need for external power sources to actuate the components and power the sensors. There are no guidelines or standards for implementing adaptive components in buildings and other structures, and there are no suppliers of adaptive products suitable for implementation in buildings and structures.
The basic knowledge needs and implementation tools required to develop and deploy high-performance, sustainable, and/or adaptive materials and framing systems for earthquake-resistant construction include the following items. Herein, buildings and other structures as defined by FEMA P-750, NEHRP Recommended Provisions for the Seismic Design of New Buildings and Other Structures (FEMA, 2009b), are denoted as structures.
• Investigate and characterize new materials, including but not limited to (a) low-cement concrete, (b) cement-less concrete, (c) very high-strength concrete, (d) steel- and carbon-fiber-reinforced concrete, (e) very high-strength steel, and (f) fiber-reinforced polymers. Characterize new materials across a wide range of strain, strain rate, temperature (including fire), and environmental exposure.
• Devise new modular pre-cast components and framing systems that best utilize the new materials, such as sandwich construction in which permanent steel shells function as formwork and reinforcement for low-cement (or cement-free) concrete infill.
• Develop tools, technology, and details to join components constructed with new materials.
• Prototype components, connections, and framing systems.
• Conduct moderate-scale and full-scale tests of components constructed with new materials using NEES infrastructure to characterize component response in sufficient detail to enable the development of design equations suitable for inclusion in a materials standard, hysteretic models for nonlinear response analysis, and fragility functions for performance-based seismic design and assessment.
• Conduct near full-scale tests of complete three-dimensional framing systems constructed using new materials and/or components using NEES infrastructure and/or the E-Defense earthquake simulator.
• Develop design tools and equations for each new material, component, and framing system and prepare a materials standard similar in scope to ACI 318 (ACI, 2008). Actively support the standard-development process, its implementation in the model building codes, and its adoption by the design professional community. Assign seismic parameters for routine code-based design using established procedures such as those presented in FEMA P-695, Quantification of Building Seismic Performance Factors (FEMA, 2009a).
• Prepare consequence functions for components and framing systems constructed with new materials in support of performance-based seismic design and assessment. Use NEES infrastructure for this task and ensure close collaboration between researchers and design professionals.
• Develop a family of adaptive materials suitable for implementation in structural components, including controllable fluids and shape-memory materials. Characterize new materials across a wide range of strain, strain rate, temperature (including fire), and environmental exposure.
• Develop a family of robust algorithms suitable for controlling the response of adaptive fluids and metals, and of traditional structural components.
• Develop a family of low-cost, low-power, zero-maintenance wireless sensors suitable for controlling the response of adaptive components and monitoring the health and response of structural framing systems.
• Prototype adaptive components (devices, materials, and sensors).
• Develop a suite of algorithms for the control of linear and nonlinear structural framing systems subjected to three components of earthquake ground motion.
• Conduct moderate-scale and full-scale tests of adaptive components using NEES infrastructure to characterize component response in sufficient detail to enable the development of design equations suitable for inclusion in guidelines and standards, hysteretic models for nonlinear response analysis, and fragility functions for performance-based seismic design and assessment.
• Conduct near full-scale tests of complete three-dimensional framing systems constructed using adaptive components using NEES infrastructure and/or the E-Defense earthquake simulator.
• Develop design tools and equations for each new adaptive material and components constructed using that material.
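Several of the tasks above call for fragility functions for performance-based seismic design and assessment. Such functions are commonly represented as lognormal cumulative distribution functions of a ground-motion intensity measure; the following minimal sketch shows the basic form (the median and dispersion values are illustrative assumptions, not values from this report):

```python
from math import erf, log, sqrt

def fragility(im, median, beta):
    """Probability of exceeding a damage state, given an intensity
    measure im (e.g., spectral acceleration in g), modeled as a
    lognormal CDF with the given median and dispersion beta."""
    return 0.5 * (1.0 + erf(log(im / median) / (beta * sqrt(2.0))))

# At the assumed median intensity (0.6 g), the exceedance
# probability is 0.5 by construction.
print(fragility(0.6, median=0.6, beta=0.5))  # 0.5
```

The moderate- and full-scale component tests described above would supply the data from which the median and dispersion of each damage state are estimated.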
Issues associated with the effective implementation of new materials, components, and framing systems include the following items.
• Acceptance by design professionals, contractors, and building officials and regulators of new materials, components, and framing systems.
• Development of educational materials to encourage use of high-performance, low-carbon-footprint materials.
• Development of financial incentives for the use of construction materials with a low carbon footprint.
• Lack of familiarity of design professionals, contractors, and building officials with control algorithms suitable for implementation of adaptive materials, components, and framing systems.
• Lack of familiarity of design professionals, contractors, and building officials with sensing and structural health monitoring technologies.
• Lack of guidelines, codes, and standards for the analysis, design, and implementation of adaptive materials, components, and systems.
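The control algorithms cited above as unfamiliar to most practitioners need not be elaborate. One classic example from the semi-active control literature (a well-known technique, not a method prescribed by this report) is the on-off skyhook law, which switches a controllable damper between high and low damping states:

```python
def skyhook_onoff(v_abs, v_rel, c_high, c_low):
    """On-off skyhook control for a semi-active (e.g., magneto-
    rheological) damper: command high damping when the structure's
    absolute velocity v_abs and the damper's relative velocity v_rel
    act in the same direction, and low damping otherwise."""
    return c_high if v_abs * v_rel > 0.0 else c_low

# Same-sign velocities -> high damping; opposed signs -> low damping.
print(skyhook_onoff(0.2, 0.1, c_high=10.0, c_low=1.0))   # 10.0
print(skyhook_onoff(0.2, -0.1, c_high=10.0, c_low=1.0))  # 1.0
```

Because the law only modulates damping rather than injecting energy, it needs far less external power than fully active control, which bears directly on the power-source issue noted earlier.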
New knowledge and technology, including analytic and design tools, will be developed in many of the other tasks described in this report. Each task description includes a component on education and technology transfer. This overarching task ensures that the knowledge and tools developed in other tasks are quickly put into design practice in both the private and public sectors. Long-term continuing education programs should be encouraged to increase the pool of professionals using state-of-the-art mitigation techniques.
Create a new program responsible for coordinating and encouraging ongoing technology transfer across the NEHRP domain, and for building new initiatives to ensure that state-of-the-art mitigation techniques are deployed across the nation.
Existing Knowledge and Current Capabilities
It is generally acknowledged that technology transfer is seldom adequate, and implementation of effective mitigation strategies and techniques is therefore unnecessarily delayed. The incorporation of mandatory education and outreach components into research projects is sometimes effective, but digestion, coordination, and packaging of research results for efficient practical use is often missing. Notable exceptions are NEHRP's support of development of seismic standards and codes for buildings during the past 30 years and support since 2007 of research synthesis and technology transfer to the design professional community through the NEHRP Consultants Joint Venture. Continued support for these programs is needed. However, despite development of codes and standards, training materials for using these codes and standards, and a pipeline for research synthesis and technology transfer, the state of the practice lags far behind the state of the art.
Use of state-of-the-art knowledge and technology from other areas of study that could improve resilience may lag even further behind. There are no systematic programs to consolidate and transfer research results to practice in many disciplinary areas that contribute to seismic resilience such as geotechnical engineering, seismic protection of infrastructure, use of scenarios and regional loss estimation, emergency response, post-earthquake economic recovery, and public policy.
Seismic safety and community resilience together constitute only one of many issues facing the implementation community, which includes owners of buildings and infrastructure, policy-makers at all levels of government, engineers and planners, and the general public. A consistent education and outreach program will not only raise the quality of the state of the practice, but also keep seismic performance issues "on the table."
NEHRP should maintain and re-emphasize existing programs:
• Fully support development of seismic standards and codes of practice for buildings, bridges, lifelines, and mission-critical infrastructure that include transparent statements regarding expected performance. Advocate for their adoption and enforcement.
• Support and expand the development of research synthesis and technology transfer documents and tools through organizations such as the Applied Technology Council (ATC), the Consortium of Universities for Research in Earthquake Engineering (CUREE), and the Building Seismic Safety Council (BSSC).
• Include education and outreach components in research projects.
• Include a strong and significant education and training program in ongoing initiatives such as the Development of Next Generation Performance Based Engineering, the mitigation of risks from existing buildings, and HAZUS. Enable web-based delivery of products.
NEHRP should initiate a new program center that reviews ongoing and completed research, couples and coordinates results from different disciplines, and develops outreach and training documents and courses to maximize effectiveness.
The primary barrier to successful implementation of this action is the need to create and fund a new unit within NEHRP to coordinate and initiate technology transfer.
The ultimate goal of NEHRP is to make our citizens, our institutions, and our communities more resilient to the impacts of earthquakes, and to ensure that earthquakes will not disrupt the social, economic, and environmental well-being of our society. For the purposes of this report, a resilient nation is one whose communities, through mitigation and pre-disaster preparation, develop the adaptive capacity to maintain important community functions and recover quickly when major disasters occur. This task supports that ultimate goal by describing a strategy to apply knowledge initially in a number of "early adopting" communities, ultimately creating a critical mass to support continued adoption nationwide.
Earthquake-resilient communities share the following characteristics:
• They recognize earthquake hazards and understand their risks.
• They are protected from hazards in their physical structures and socioeconomic systems.
• They experience minimum disruption to life and economy after a hazard event has occurred.
• They recover quickly and with a minimum of long-term effects.
Governments, at all levels, own part of the earthquake risk and are better able to carry out their responsibilities when people and businesses are earthquake resilient. Private investments in resilience have public benefits. Public safety; reduced individual, business, and government financial losses; community character; housing availability and affordability; neighborhood-serving businesses; and architectural and historic resources are all community values supported by individual, private investments in earthquake resilience.
NEHRP-supported activities would guide community-based earthquake resilience pilot projects that apply NEHRP-generated and other knowledge to improve awareness, reduce risk, and strengthen emergency preparedness and recovery capacity. A strategy based on diffusion theory would guide the selection of early-adopter communities and employ diffusion processes tailored to each community, creating a critical mass of people and organizations taking appropriate actions within and between communities. Demonstration projects would be used to focus attention and to demonstrate the value and feasibility of resilience-enhancing measures.
Existing Knowledge and Current Capabilities
Most implementation programs do not understand, and therefore neglect, the process necessary for individuals and organizations to adopt new policies and practices. Providing documents and information, although absolutely necessary, is not enough. NEHRP should develop a comprehensive strategy—from concept to practice—that addresses the people at the community and regional levels who are responsible for earthquake risk and the ensuing consequences. This will require innovations—ideas, practices, or objects that are perceived as new by an individual or local unit that can adopt them. The diffusion of innovations is the process by which an innovation is communicated through certain channels over time among members of a social system. Diffusion is a process that depends on decisions by individuals or organizations to adopt an idea. Rogers (2003) refers to the decision to adopt an idea as the innovation-decision process, which consists of five steps: (1) knowledge, (2) persuasion, (3) decision, (4) implementation, and (5) confirmation. Better understanding of how potential adopters move through these stages can greatly improve earthquake safety efforts. The following are important diffusion principles:
1. Mass media channels are effective in creating knowledge of innovations, but inter-personal communication from a “near peer” is needed to decide to adopt an innovation and to change behavior.
2. More than just the demonstration of an innovation’s benefits is needed for the adoption of that innovation.
3. Characteristics of innovations that affect rate of adoption:
• Relative advantage—is it better than the current alternative or way of doing things?
• Compatibility—is it compatible with existing values?
• Complexity—is it easy to use and/or understand?
• Trialability—can it be tested on a partial basis before adopting?
• Observability—how easy is it to observe the benefits?
Because diffusion is a socially driven process, people are critical to the spread of new ideas. Diffusion theory provides important insight into the types of people who influence the innovation-decision process and move it forward, thereby hastening the spread and adoption of new ideas. Rogers (2003) divides adopters into Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. The people in each adopter category have different traits and different roles in the diffusion process. He also identifies opinion leaders, influential people within a system whom others respect and listen to, as important in diffusion efforts; if they adopt, others are more likely to follow. Opinion leaders can accelerate the diffusion process if they are Early Adopters. Gladwell (2000) has similar ideas: a few critical people, whom he terms "Connectors, Mavens, and Salesmen," are necessary for moving an idea from the Early Adopters to the Early Majority. Other researchers, such as Watts (Thompson, 2008), disagree and argue that ordinary people can perform these functions as well.
Diffusion theory applies to earthquake risk reduction efforts because the main goal is to change people’s behavior so they will take actions to reduce their risk, as opposed to doing nothing or taking actions that actually increase their risk. Behavior change is not an engineering problem, and therefore reducing earthquake risk requires theories and methods from other fields. Diffusion theory provides a framework of ideas that explains why earthquake risk reduction projects succeed or fail, and provides instruction describing how to increase the benefits and impact of future projects.
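The diffusion process described above also has a standard quantitative formalization, the Bass diffusion model, which is not cited in this report but illustrates how adoption driven by external influence (mass media) and peer influence (near-peer communication) produces the familiar S-shaped adoption curve. A minimal discrete-time sketch, with illustrative (assumed) parameter values:

```python
def bass_cumulative(p, q, m, steps):
    """Discrete-time Bass diffusion model: returns cumulative adopters
    after each step, for a population of size m, innovation coefficient
    p (external influence), and imitation coefficient q (peer influence)."""
    n = 0.0
    history = []
    for _ in range(steps):
        n += (p + q * n / m) * (m - n)  # new adopters this step
        history.append(n)
    return history

# Illustrative values: weak external push, strong peer imitation.
curve = bass_cumulative(p=0.03, q=0.38, m=1000.0, steps=30)
# Adoption grows monotonically and approaches the population size.
```

Because q is much larger than p in this sketch, most adoption comes from peer influence, echoing the principle above that interpersonal communication, not mass media alone, drives the decision to adopt.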
Early adopters are critically important to the diffusion of innovations; for that reason, the strategic selection of pilot communities, in which there are targeted, sustained, and direct linkages between research and application through all five stages (i.e., knowledge, persuasion, decision, implementation, and confirmation), is essential to achieving earthquake resilience nationwide.
Building a more earthquake-resilient nation should include a robust capacity-building program that is implemented at the community/grassroots level, based on the theory of diffusion. Such a program should initially focus on a minimum of 10 pilot cities, of which at least 5 would be in key earthquake hazard regions of the country. Sufficient knowledge exists to initiate such a program immediately, although new knowledge from research based on this element and other NEHRP activities would improve the program. The program would have several components:
• A data component to develop community-level hazard and risk profiles as well as socio-political-economic data that will be used to assess a baseline of each community’s resilience capacity (see also Task 11).
• A research component to document resilience capacity, identify existing examples of resilience capacity, and estimate its cost and broader implications.
• A grass-roots outreach component to focus on establishing the necessary community-level, public-private partnerships of the influential social, economic, and political stakeholders and leaders for capacity building.
• A post-audit component to measure the cost and effectiveness of various resilient actions.
• A demonstration component, perhaps projects to reduce earthquake risk in schools, which would attract attention and demonstrate the value and feasibility of mitigation projects.
• An analysis component to identify gaps between resilience capacity and loss estimation, using different earthquake scenarios.
• An implementation component to work to reduce the gaps and document the results.
• A federal entity should be authorized to prepare and carry out a strategy to achieve earthquake resilience at the community and regional levels nationwide.
• Matching grants are needed for approximately 5 years for early-adopter communities to participate.
• The strategy should include measures to sustain implementation efforts over time, and a strategy to increase to a nationwide scale. At a minimum, the strategy should:
o Begin with a minimum of 10 early-adopter pilot communities to develop techniques for other communities to benefit from and emulate;
o Develop a nationwide network of community leaders (mavens) interested in earthquake resilience;
o Involve the private sector as equal and critical partners in the process. Businesses benefit from earthquake-resilient communities in myriad ways commensurate with the nature of their businesses. Businesses that understand the benefits are more apt to invest in their own resilience and offer community leadership, political support, and some incentives;
o Involve more grass-roots-level community organizers who can help disseminate and build interest and support within a community and community-level organizations;
o Require leveraging of resources;
o Incorporate new communication tools including social media;
o Address the vulnerability of buildings and lifelines, social organizations, community values, and government continuity;
o Of necessity, address other hazards that threaten communities.
• Governments should exercise their enforcement powers in ways commensurate with the community and its tolerance for risk: enforcing building codes and land-use restrictions, requiring owners of existing buildings to reduce vulnerability, and encouraging other actions that are generally intended to promote health, safety, and welfare.
• Governments should champion social justice issues raised by variations in vulnerability to earthquake risk; earthquake resilience should not be reserved for those with resources and position.
• The NEHRP implementation program needs to advocate incentives that promote the societal benefits of earthquake risk management practices, and to remove obstacles and disincentives. Meaningful incentives that represent societal value are needed to encourage and reward investments. Incentives are needed to make measures affordable (reducing initial costs and making funds, such as loans, available) and manageable (payable over time), with the return, in terms of increased safety and financial security, proportional to the investment. Incentives include federal and state tax credits for building owners; accelerated depreciation for businesses; subsidies and matching grants for those who provide government-like services (affordable housing, medical clinics and hospitals, schools, etc.); and matching grants and eligibility for cost reimbursement for government agencies. Local tax credits, property tax reductions, or transfer tax incentives can exert powerful influence. Mechanisms should also be made available both to and by insurance companies to increase earthquake insurance coverage and encourage mitigation.
• A robust constituency base needs to be developed to advocate on behalf of the entire community. Professional and trade associations should provide leadership in advocacy matters at all levels of government and throughout their respective professional disciplines.
• Create partnerships with the media and recruit them to become early adopters.