FIG. 1. Classification of earthquake-related phenomena.

phenomenon. Nonprecursory information is derived from rates and assumes a random distribution about that rate. Causal precursors assume some connection to, and understanding of, the failure process. To be a predictive precursor, a phenomenon must not only be related to one particular earthquake sequence but also demonstrably provide more information about the time of that sequence than is achieved by assuming a random distribution.

Let us consider some examples to clarify these differences. Determining that southern California averages two earthquakes above M5 every year and, thus, that the annual probability of such an event is 86% is clearly useful, but nonprecursory, information. On the other hand, if we were to record a large, deep strain event on a fault 2 days before an earthquake on that fault, we would clearly call it a causal precursor. However, it would not be a predictive precursor, because recording a slip event does not guarantee that an earthquake will then occur, and we do not know how much the occurrence of that slip event increases the probability of an earthquake. The only time we have clearly recorded such an event in California (3), it was not followed by an earthquake. To use a strain event as a predictive precursor, we would need to complete the difficult process of determining how often strain events precede mainshocks and how often they occur without mainshocks. Merely knowing that they are causally related to an earthquake does not allow us to make a useful prediction.
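As a check on that arithmetic (a minimal sketch, assuming the "random distribution about that rate" is modeled as a Poisson process with rate $\lambda = 2$ events per year over $t = 1$ year):

$$P(\text{at least one } M>5 \text{ event in a year}) = 1 - e^{-\lambda t} = 1 - e^{-2} \approx 0.86.$$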

Long-Term Phenomena

Long-term earthquake prediction, or earthquake forecasting, has extensively used earthquake rates as nonprecursory information. The most widespread application has been the use of magnitude-frequency distributions from the seismologic record to estimate the rate of earthquakes and the probability of future occurrence (4). This technique provides the standard estimate of the earthquake hazard in most regions of the United States (5). Such an analysis assumes only that the rate of earthquakes in the reporting period does not vary significantly from the long-term rate (a sufficiently long reporting period being an important requirement) and does not require any assumptions about the processes leading to one particular event.
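A minimal sketch of such a rate-based estimate, assuming a Gutenberg-Richter magnitude-frequency relation and Poisson occurrence; the function names and parameter values here are illustrative, not taken from the references:

```python
import math

def annual_rate(m, a, b):
    """Gutenberg-Richter relation: log10 N(>=m) = a - b*m, where N is
    the annual number of earthquakes at or above magnitude m."""
    return 10.0 ** (a - b * m)

def prob_at_least_one(rate, years=1.0):
    """Probability of one or more events in `years`, assuming
    earthquakes occur as a Poisson process at the given annual rate."""
    return 1.0 - math.exp(-rate * years)

# Illustrative values: a region obeying log10 N = 5.3 - 1.0*M
# averages about two M>=5 earthquakes per year.
rate_m5 = annual_rate(5.0, a=5.3, b=1.0)
print(f"annual rate of M>=5 events: {rate_m5:.2f}")                    # ~2.0
print(f"P(at least one in a year):  {prob_at_least_one(rate_m5):.0%}")  # ~86%
```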

It is also possible to estimate the rate of earthquakes from geologic and geodetic information. The recurrence rates of individual faults, derived from slip rates and estimates of probable slip per event, can be summed over many faults to estimate the regional earthquake rate (6-8). These analyses assume only that the slip released in earthquakes, averaged over many events, will eventually equal the total slip represented by the geologic or geodetic record. Use of such a rate assumes nothing about the process leading to the occurrence of a particular event.
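A sketch of the geologic version of the same calculation, with hypothetical fault parameters; as in the text, the only assumption is that seismic slip, averaged over many events, balances the geologic slip rate:

```python
# Each fault: long-term slip rate (mm/yr) and typical slip per
# characteristic event (mm). All values below are hypothetical.
faults = {
    "fault A": {"slip_rate_mm_yr": 25.0, "slip_per_event_mm": 4000.0},
    "fault B": {"slip_rate_mm_yr": 5.0,  "slip_per_event_mm": 1500.0},
    "fault C": {"slip_rate_mm_yr": 1.0,  "slip_per_event_mm": 800.0},
}

total_rate = 0.0
for name, f in faults.items():
    # Mean recurrence interval (yr) = slip per event / slip rate.
    interval_yr = f["slip_per_event_mm"] / f["slip_rate_mm_yr"]
    rate = 1.0 / interval_yr  # events per year on this fault
    total_rate += rate
    print(f"{name}: recurrence ~{interval_yr:.0f} yr, rate {rate:.4f}/yr")

print(f"summed regional rate: {total_rate:.4f} events/yr")
```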

A common extension of this approach is the use of conditional probabilities to incorporate the time of the last earthquake into the probabilities (9-11). This practice assumes that an earthquake is more likely at some times than at others and that the intervals between events can be described by some distribution, such as a Weibull or normal distribution. This treatment implies an assumption about the physics underlying the earthquake failure process: that a critical level of some parameter, such as stress or strain, is necessary to trigger failure. Thus, while long-term rates are nonprecursory, conditional probabilities assume causality, a physical connection between two succeeding characteristic events on a fault.
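A sketch of such a conditional (time-dependent) probability, assuming a Weibull distribution of recurrence intervals; the scale and shape values are illustrative:

```python
import math

def weibull_survival(t, eta, k):
    """P(no event by time t since the last earthquake) for a Weibull
    interval distribution with scale eta (yr) and shape k."""
    return math.exp(-((t / eta) ** k))

def conditional_prob(elapsed, window, eta, k):
    """P(event within `window` years | quiet for `elapsed` years)."""
    s_now = weibull_survival(elapsed, eta, k)
    s_later = weibull_survival(elapsed + window, eta, k)
    return 1.0 - s_later / s_now

# Illustrative: mean recurrence ~150 yr; shape k > 1 makes the hazard
# grow with elapsed time (the quasi-periodic assumption), while k = 1
# reduces to a Poisson (random) process.
for elapsed in (50, 100, 150):
    p = conditional_prob(elapsed, window=30, eta=160.0, k=2.0)
    print(f"quiet {elapsed:3d} yr: P(event in next 30 yr) = {p:.0%}")
```

With shape k = 1 the elapsed time drops out entirely, which recovers the random-distribution baseline the rest of this section uses for comparison.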

For conditional probabilities to be predictive precursors (i.e., to provide more information than is available from a random distribution), we must demonstrate that their success rate is better than that achieved with a random distribution. The slow recurrence of earthquakes precludes a definitive assessment, but the data we have do not yet support this hypothesis. The largest-scale application of conditional probabilities is the earthquake hazard map prepared for worldwide plate boundaries by McCann et al. (12). Kagan and Jackson (13) have argued that the decade of earthquakes since the issuance of that map does not support the hypothesis that conditional probabilities provide more accurate information than a random distribution.

Another way to test the conditional probability approach is to look at the few places where we have enough earthquake intervals to test the periodicity hypothesis. Three sites on the San Andreas fault in California, Pallett Creek (14), Wrightwood (15), and Parkfield (16), have relatively accurate dates for more than four events. The earthquake intervals at those sites (Fig. 2) do not support the hypothesis that one event interval is significantly more likely than any other. We must therefore conclude that a conditional probability that assumes an earthquake is more likely at a particular time relative to the last earthquake on that fault is a deterministic approach that has not yet been shown to produce more accurate probabilities than a random distribution.
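One common way to phrase this periodicity test (a sketch; the interval values below are placeholders, not the published records from these sites) is the coefficient of variation of the interevent times:

```python
import statistics

def coefficient_of_variation(intervals):
    """CV of interevent times: near 0 for periodic recurrence,
    near 1 for a Poisson (random) process, above 1 for clustering."""
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Placeholder interevent times in years (not the published data).
intervals = [44, 63, 112, 135, 200, 332]
print(f"CV = {coefficient_of_variation(intervals):.2f}")
# Values near 1 favor the random model over quasi-periodic recurrence.
```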

Intermediate-Term Phenomena

Research into phenomena related to earthquakes in the intermediate term (months to a few years) generally assumes a causal relationship with the mainshock. Phenomena such as changes in the pattern of seismic energy release (19), seismic quiescence (20), and changes in coda-Q (21) have all been assumed to have a causal connection to a process thought necessary to produce the earthquake (such as the accumulation of stress). These phenomena would thus all be classified as causal precursors, and, because of the limited number of cases, we have not yet demonstrated that any of these precursors is predictive.

Research into intermediate-term variations in rates of seismic activity falls into a gray region. Changes in the rates of earthquakes over years and decades have been shown to be statistically significant (22), but without agreement as to the cause of the changes. Some have interpreted decreases in the rate as precursory to large earthquakes (20). On a purely Poissonian basis, a decreased rate would imply a decreased probability of a large earthquake, so treating it instead as a precursor to one is clearly a deterministic, causal approach. However, rates of seismicity have also increased, and these increases have been treated with both deterministic and Poissonian analyses.
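A sketch of the kind of significance test underlying such claims, assuming Poisson counts; the decade counts here are hypothetical, not the cases examined in reference 22:

```python
from math import exp, factorial

def poisson_tail(k, mu):
    """P(X >= k) for a Poisson variable with mean mu."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k))

# Hypothetical: 40 earthquakes in one decade, then 62 in the next.
# Under the null (no rate change), the second decade should also
# average 40 events.
p = poisson_tail(62, 40.0)
print(f"P(>=62 events | rate unchanged) = {p:.4f}")
# A small p-value says the rate change is real; it does not, by
# itself, say whether the change is precursory to anything.
```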

One of the oldest deterministic analyses of earthquake rates is the seismic cycle hypothesis (23-25). This hypothesis assumes that an increase in seismicity is a precursory response to the buildup of stress needed for a major earthquake, and it deterministically predicts a major earthquake because of an increased rate. Such an approach is clearly causal and has not been tested for its success against a random distribution.


