qualifying earthquake in a region. Now it is virtually certain that the aftershocks from an earthquake in one region will occur in another. One way to deal with aftershocks is to use a declustered catalog in making the test. Another is to build aftershock prediction into both the test and null hypotheses. Kagan and Knopoff (6) offer an algorithm for modeling the probability of earthquakes following an earlier quake. Testing a hypothesis on “new data”—that is, using a hypothesis formulated before the occurrence of the earthquakes on which it is tested—now requires that the hypothesis be updated very quickly following a significant earthquake. Rapid updating presents a practical problem, because accurate data on the occurrence of important earthquakes may not be available before significant aftershocks occur. One solution is to specify in advance the rules by which the hypothesis will be updated and let the updating be done automatically on the basis of preliminary information. A more careful evaluation of the hypothesis may be carried out later with revised data, as long as no adjustments are made except as specified by rules established in advance.
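
As a minimal sketch of such a pre-specified, automatic updating rule, the fragment below adds a simple Omori-law aftershock term to a background rate whenever a mainshock is reported. The functional form and every constant here are illustrative assumptions, not the algorithm of ref. 6.

```python
from dataclasses import dataclass

# Sketch of a pre-specified updating rule: after a mainshock is reported,
# an Omori-law aftershock term is added to the background rate.  The form
# and constants are illustrative assumptions, not the Kagan-Knopoff model.

@dataclass
class Mainshock:
    time: float       # days since the start of the test period
    magnitude: float

def background_rate(t: float) -> float:
    """Time-independent background rate (events/day) for the test region."""
    return 0.05

def omori_rate(t: float, shock: Mainshock, K: float = 0.1,
               c: float = 0.05, p: float = 1.1) -> float:
    """Modified Omori-law aftershock rate, scaled by mainshock magnitude."""
    if t <= shock.time:
        return 0.0
    productivity = K * 10 ** (shock.magnitude - 6.0)
    return productivity / (t - shock.time + c) ** p

def updated_rate(t: float, mainshocks: list) -> float:
    """Rate after automatic, rule-based updating for all reported mainshocks."""
    return background_rate(t) + sum(omori_rate(t, s) for s in mainshocks)

# Example: a preliminary report of an M 6.5 event on day 100 updates the
# forecast immediately; a later re-evaluation with revised data must apply
# exactly the same pre-specified rule.
rate_on_day_101 = updated_rate(101.0, [Mainshock(time=100.0, magnitude=6.5)])
print(f"{rate_on_day_101:.3f} events/day")
```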

Introducing aftershocks into the model makes it more difficult to compute confidence limits for the log-likelihood ratio than would otherwise be the case. Because the rate density may change dramatically at a time that is unknown at the beginning of a test period, even the expected numbers of events are very difficult to compute analytically unless the aftershock models are very simple.
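
When the analytic computation is intractable, confidence limits can instead be estimated by simulation. The sketch below draws synthetic catalogs under the null hypothesis and builds the empirical distribution of the log-likelihood ratio over magnitude-time-space cells; the binned expected counts are placeholders, not values from any actual forecast.

```python
import numpy as np

# Monte Carlo sketch of confidence limits for the log-likelihood ratio when
# expected event numbers cannot be handled analytically.  The cell rates are
# placeholders standing in for the test and null hypotheses evaluated over
# magnitude-time-space cells.

rng = np.random.default_rng(42)

# Expected counts per cell under the null and test hypotheses (assumed).
null_rates = np.array([0.20, 0.15, 0.05, 0.10, 0.50])
test_rates = np.array([0.30, 0.10, 0.02, 0.15, 0.43])

def log_likelihood_ratio(counts, test, null):
    """Poisson log-likelihood of the test hypothesis minus that of the null."""
    return np.sum(counts * np.log(test / null) - (test - null))

# Simulate catalogs under the null hypothesis and collect the statistic.
n_sims = 10000
counts = rng.poisson(null_rates, size=(n_sims, null_rates.size))
ratios = np.array([log_likelihood_ratio(c, test_rates, null_rates) for c in counts])

# Empirical 95% confidence limits under the null hypothesis.
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"95% limits for R under the null: [{lo:.2f}, {hi:.2f}]")
```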

Rhoades and Evison (7) give an example of a method, based on the occurrence of earthquake swarms, in which the prediction is formulated in terms of a conditional rate density. They are currently testing the method against a Poissonian null hypothesis in New Zealand and Japan. They update the conditional rate calculations to account for earthquakes as they happen. As discussed above, it is particularly difficult to establish confidence limits for this type of prediction experiment.
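
The comparison itself rests on point-process log-likelihoods: the sum of the log rate density at the observed events minus the integral of the rate density over the test volume, evaluated for each hypothesis. The sketch below illustrates this with a stand-in, time-only conditional rate that is elevated for a fixed window after a hypothetical swarm; it is not the Rhoades-Evison formulation.

```python
import numpy as np

# Point-process log-likelihood sketch: a stand-in conditional rate (elevated
# for 50 days after a swarm on day 30) is compared with a constant Poissonian
# rate over a 200-day test period.  All rates and event times are illustrative.

def log_likelihood(rate, event_times, t_start, t_end, n_grid=10001):
    """log L = sum_i log rate(t_i) - integral of rate over the test period."""
    grid = np.linspace(t_start, t_end, n_grid)
    vals = rate(grid)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
    return np.sum(np.log(rate(np.asarray(event_times)))) - integral

def conditional_rate(t):
    t = np.asarray(t, dtype=float)
    return 0.02 + 0.08 * ((t >= 30.0) & (t < 80.0))

def poisson_rate(t):
    return np.full_like(np.asarray(t, dtype=float), 0.03)

events = [35.0, 41.0, 77.0, 150.0]
R = (log_likelihood(conditional_rate, events, 0.0, 200.0)
     - log_likelihood(poisson_rate, events, 0.0, 200.0))
print(f"log-likelihood ratio R = {R:.2f}")
```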

Discussion

A fair test of an earthquake prediction hypothesis must involve an adequate null hypothesis that incorporates well-known features of earthquake occurrence. Most seismologists agree that earthquakes are clustered both in space and in time. Even along a major plate boundary, some regions are considerably more active than others. Foreshocks and aftershocks are manifestations of temporal clustering. Accounting for these phenomena quantitatively is easier said than done. Even a Poissonian null hypothesis requires that the rate density be specified as a function of spatial variables, generally by smoothing past seismicity. Choices for the smoothing kernel, the time interval, and the magnitude distribution may determine how well the null hypothesis represents future seismicity, just as similar decisions will affect the performance of the test hypothesis. In many areas of the world, available earthquake catalogs are insufficiently complete to allow an accurate estimate of the background rate. Kagan and Jackson (8) constructed a global average seismicity model based on earthquakes since 1977 reported in the Harvard catalog of centroid moment tensors. They determined the optimum smoothing for each of several geographic regions and incorporated anisotropic smoothing to account for the tendency of earthquakes to occur along faults and plate boundaries. They did not incorporate aftershock occurrence into their model, and the available catalog covers a relatively short time span. In many areas, it may be advantageous to use older catalogs that include important large earthquakes, even though the data may be less accurate than those in more recent catalogs. To date, there is no comprehensive treatment of spatially and temporally varying seismicity that can be readily adapted as a null hypothesis.
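
The simplest version of such a smoothed null hypothesis is sketched below: an isotropic Gaussian kernel of fixed bandwidth summed over a hypothetical set of past epicenters. The kernel, bandwidth, and grid are illustrative choices only, and the optimized, anisotropic smoothing of ref. 8 is omitted.

```python
import numpy as np

# Sketch of a spatial rate density for a Poissonian null hypothesis, built by
# smoothing past epicenters with a fixed isotropic Gaussian kernel.  Kernel,
# bandwidth, and grid are illustrative; no anisotropy or optimization is used.

def smoothed_rate(grid_x, grid_y, epicenters, bandwidth_km=30.0):
    """Sum an isotropic Gaussian kernel over past epicenters at each grid node."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx)
    norm = 1.0 / (2.0 * np.pi * bandwidth_km ** 2)
    for ex, ey in epicenters:
        r2 = (gx - ex) ** 2 + (gy - ey) ** 2
        rate += norm * np.exp(-r2 / (2.0 * bandwidth_km ** 2))
    return rate  # relative events per km^2, up to normalization by catalog span

# Hypothetical catalog of past epicenters (x, y) in km.
past_events = [(10.0, 20.0), (15.0, 22.0), (80.0, 75.0)]
x = np.linspace(0.0, 100.0, 101)
y = np.linspace(0.0, 100.0, 101)
rate_map = smoothed_rate(x, y, past_events)
print(f"Peak smoothed rate: {rate_map.max():.2e} events/km^2")
```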

Binary (on/off) predictions can be derived from statements of conditional probability within regions of magnitude-time-space, which can in turn be derived from a specification of the conditional rate density. Thus, the conditional rate density contains the most information. Predictions specified in this form are no more difficult to test than other predictions, so earthquake predictions should be expressed as conditional rate densities whenever possible.
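
A sketch of that chain of derivations follows: integrating the rate density over a magnitude-time-space bin gives an expected count, a Poisson assumption converts the count into a probability of at least one event, and a threshold turns the probability into an on/off prediction. The bin rates, volumes, and 10% threshold are illustrative assumptions.

```python
import numpy as np

# From conditional rate density to probabilities to binary predictions.
# Rates, bin volumes, and the decision threshold are illustrative only.

def bin_probability(rate_density, bin_volume):
    """P(at least one event) = 1 - exp(-expected count) under a Poisson bin."""
    expected = rate_density * bin_volume
    return 1.0 - np.exp(-expected)

# Conditional rate densities (events per unit magnitude-time-space volume)
# and bin volumes for three hypothetical bins.
rates = np.array([1e-4, 5e-3, 2e-2])
volumes = np.array([100.0, 100.0, 100.0])

probabilities = bin_probability(rates, volumes)
binary_predictions = probabilities > 0.10   # "on" if probability exceeds 10%

for p, on in zip(probabilities, binary_predictions):
    print(f"P = {p:.3f} -> {'predict' if on else 'no prediction'}")
```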

I thank Yan Kagan, David Vere-Jones, Frank Evison, and David Rhoades for useful discussions on these ideas. This work was supported by U.S. Geological Survey Grant 1434–94-G-2425 and by the Southern California Earthquake Center through P.O. 566259 from the University of Southern California. This is Southern California Earthquake Center contribution no. 321.

1. Bakun, W.H. & Lindh, A.G. (1985) Science 229, 619–624.

2. Keilis-Borok, V.I. & Kossobokov, V.G. (1990) Phys. Earth Planet. Inter. 61, 73–83.

3. Kagan, Y.Y. & Jackson, D.D. (1995) J. Geophys. Res. 100, 3943–3959.

4. Nishenko, S.P. (1991) Pure Appl. Geophys. 135, 169–259.

5. Papadimitriou, E.E. & Papazachos, B.C. (1994) J. Geophys. Res. 99, 15387–15398.

6. Kagan, Y.Y. & Knopoff, L. (1987) Science 236, 1563–1567.

7. Rhoades, D.A. & Evison, F.F. (1993) Geophys. J. Int. 113, 371–381.

8. Kagan, Y.Y. & Jackson, D.D. (1994) J. Geophys. Res. 99, 13685–13700.
