hence, their usefulness to risk managers and their amenability to validation—and are not on a trajectory to improve. The core principles for risk assessment cited above have not been achieved in most cases, especially with regard to the goals that they be documented, reproducible, transparent, and defensible.

This chapter begins with the committee’s evaluation of the quality of risk analysis in the six illustrative models and methods that it investigated in depth. Then it discusses some general approaches for improving those capabilities.

DETAILED EVALUATION OF THE SIX ILLUSTRATIVE RISK MODELS EXAMINED IN THIS STUDY

Natural Hazards Analysis

There is a solid foundation of data, models, and scholarship underpinning the Federal Emergency Management Agency’s (FEMA’s) risk analyses for earthquakes, flooding, and hurricanes, which use the Risk = T × V × C model. This paradigm has been applied to natural hazards, especially flooding, for more than a century. Perhaps the earliest use of the Risk = T × V × C model (often referred to as “probabilistic risk assessment” in other fields) dates to forecasting flood risks on the Thames in the nineteenth century. In present practice, FEMA’s freely available software application HAZUS™ provides a widely used analytical model for combining threat information on natural hazards (earthquakes, flooding, and hurricanes) with consequences to existing inventories of building stock and infrastructure, as collected in the federal census and other databases (Schneider and Schauer, 2006).
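As a minimal illustration of this multiplicative paradigm (a sketch with invented numbers, not FEMA’s actual implementation), the following Python fragment computes expected annual loss for a single asset as the product of threat likelihood, vulnerability, and consequence:

```python
# Minimal sketch of the Risk = T x V x C paradigm for one asset.
# All numbers are hypothetical; real tools such as HAZUS aggregate
# over many hazard intensities, building types, and damage states.

threat = 0.01             # T: annual probability that the hazard event occurs
vulnerability = 0.25      # V: probability of damage given that the event occurs
consequence = 2_000_000   # C: loss (dollars) if the damage occurs

expected_annual_loss = threat * vulnerability * consequence
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # -> $5,000
```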

For natural hazards, the term “threat” is represented by the annual exceedance probability distribution of extreme events associated with specific physical processes, such as earthquakes, volcanoes, or floods. The assessment of such threats is often conducted by applying statistical modeling techniques to the record of events that have occurred at the site of interest or at similar sites. Typically a frequentist approach is employed, in which the direct statistical experience of occurrences at the site is used to estimate event frequency. In many cases, evidence of extreme natural events that precedes the period of systematic monitoring can be used to greatly extend the period of historical observation. Sometimes regional information from adjacent or remote sites can be used to help define the annual exceedance probability (AEP) of events throughout a region. For example, in estimating flood frequencies on a particular river, the historical period of recorded flows may be only 50 to 100 years. Clearly, that record cannot support statistical estimates of 1,000-year events except with very large uncertainty, nor can it represent with certainty probabilities that might be affected by overarching systemic change, such as climate change. To supplement the instrumental record, the frequency of
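The flood-frequency example lends itself to a brief sketch. The following Python fragment (using invented data and an assumed Gumbel distribution for annual maxima, a common but not universal modeling choice) shows how a frequentist AEP estimate is formed from a gauged record, and why extrapolation to 1,000-year events is so uncertain:

```python
import numpy as np
from scipy import stats

# Hypothetical 80-year record of annual maximum flows (m^3/s); in practice
# this would be gauged data, possibly extended with paleoflood evidence.
rng = np.random.default_rng(0)
annual_maxima = stats.gumbel_r.rvs(loc=500, scale=150, size=80, random_state=rng)

# Fit a Gumbel (extreme value type I) distribution, a common model for annual maxima.
loc, scale = stats.gumbel_r.fit(annual_maxima)

# Annual exceedance probability (AEP) of a given flow level: P(flow > x) per year.
flow = 1200.0
aep = stats.gumbel_r.sf(flow, loc=loc, scale=scale)
print(f"Estimated AEP of {flow:.0f} m^3/s: {aep:.4f} (about 1 in {1 / aep:.0f} years)")

# Extrapolating to a 1,000-year event (AEP = 0.001) from an 80-year record
# carries very large sampling uncertainty, as the text emphasizes.
flow_1000yr = stats.gumbel_r.isf(0.001, loc=loc, scale=scale)
print(f"Estimated 1,000-year flow: {flow_1000yr:.0f} m^3/s")
```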


