Research Required to Support Comprehensive Nuclear Test Ban Treaty Monitoring
The National Academies, 500 Fifth St. N.W., Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

Appendix C
Seismic Event Location

Event location is an essential procedure in CTBT monitoring, playing a critical role in characterizing and identifying every source. Operational considerations related to On-Site Inspections have established a goal that remote treaty monitoring methods routinely locate sources on land to within an area of 1000 km² or less. In practice, the areal uncertainty associated with seismic locations is commonly much greater than this, even for events located using large numbers of stations. This appendix explains why the problem exists and suggests some ways that location uncertainties can be reduced to the 1000 km² level. However, no single method of improved analysis will lead to the necessary improvement on a global basis; universally improved event location procedures require a systems approach and a calibration effort.

These efforts could, in turn, greatly benefit all users of global seismic data. Traditionally, the data used to estimate the origin time and location (latitude, longitude, and depth) of an earthquake or an explosion are the arrival times of various seismic waves measured at stations situated around the world. If seismic arrays are available, it is also possible to measure the directions from which the seismic waves arrive at the array; to a limited extent this can also be done with three-component stations. Such data are then interpreted using a model of the Earth's velocity structure (i.e., a description of the velocity of seismic waves throughout the Earth's interior, or travel time curves). Starting with a trial location (e.g., beneath the station that reports the earliest arrival time) and a trial origin time, the travel time from the source to each station can be computed from the distance and the velocity model, giving a predicted arrival time at each station. These predictions are compared with the actual arrival times, and by iteratively revising the origin time and location to improve the match between measured and calculated arrival times, a solution can be found that gives the smallest difference between the observed arrival times and the times predicted for that Earth model.
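The iterative scheme just described can be illustrated with a small numerical sketch. The Python fragment below is illustrative only: the station coordinates, the flat geometry, and the uniform 6 km/s velocity are simplifying assumptions, whereas operational practice uses spherical Earth models and real travel time tables. It generates synthetic arrival times for a known source and then recovers the source by linearized least squares.

```python
import numpy as np

# Toy setup (hypothetical values): flat geometry, uniform 6 km/s velocity.
V = 6.0  # km/s
stations = np.array([[0.0, 0.0], [100.0, 10.0], [40.0, 120.0], [-80.0, 60.0]])

true_src, true_t0 = np.array([30.0, 40.0]), 5.0
obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / V  # "measured" times

def locate(stations, obs, v=V, n_iter=10):
    """Iteratively revise (x, y, t0) to minimize arrival-time residuals."""
    # Trial location: near the station with the earliest arrival (offset
    # slightly so the source-station distance is never exactly zero).
    m = np.array([*(stations[np.argmin(obs)] + 1.0), obs.min()])
    for _ in range(n_iter):
        d = np.linalg.norm(stations - m[:2], axis=1)
        r = obs - (m[2] + d / v)                 # residuals: observed - predicted
        # Partial derivatives of predicted arrival time w.r.t. x, y, t0.
        G = np.column_stack([-(stations[:, 0] - m[0]) / (v * d),
                             -(stations[:, 1] - m[1]) / (v * d),
                             np.ones_like(obs)])
        m = m + np.linalg.lstsq(G, r, rcond=None)[0]  # Gauss-Newton update
    return m

x, y, t0 = locate(stations, obs)
```

With noise-free data and good azimuthal coverage the iteration converges in a few steps; with real data, the scatter of the residuals at convergence is what determines the confidence region discussed in the text.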

Examination of the way in which computed arrival times change for perturbations in locations in the vicinity of the "best-fitting" location determines a relationship between the random uncertainty in measured arrival times and the size of the region in which the source is expected to lie. Such uncertainty is conventionally reported in terms of a "90 per cent confidence error ellipse," a type of two-dimensional confidence interval that would contain the actual solution 90 times out of 100 if there were no systematic error (as discussed below). If the region of uncertainty were circular,



the area corresponding to the CTBT location accuracy goal of less than 1000 km² would have a radius of 17.84 km. The size and shape of the error ellipse depend on the random uncertainties in arrival time measurements, the number and geographic distribution of the stations that record the arrivals, and the (unknown) errors in the velocity model of the Earth. In practice, it is desirable to have detections from stations in at least two azimuthal quadrants around the event (and preferably in three, or all four) to reduce the triangulation errors incurred in working back from the detecting stations to the source location. Since the random error in measuring the arrival time of seismic waves is usually less than 1 second (generally less than 0.1 second when signal-to-noise ratios are good), and since the velocity of seismic waves is typically less than 6 km/s in the Earth's outer layers where the events of interest occur and where measurements are made, it might appear that seismic sources can routinely be located to within a few kilometers, with a corresponding areal uncertainty of only a few tens of square kilometers. However, this conclusion is incorrect at present, because the lack of a sufficiently good model of the Earth's velocity structure introduces systematic errors, or biases, sometimes called model errors. These errors are the principal problem in determining locations and in estimating the associated location uncertainty. At depths greater than about 200 km, the Earth's global velocity structure is known quite accurately (i.e., to within about ±1 per cent, except in regions of subducting tectonic plates, where the variability can be greater).
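The geometry behind these numbers is simple to verify. The short calculation below reproduces the 17.84 km radius quoted above and shows how the area of a 90 per cent confidence ellipse follows from a 2 × 2 epicenter covariance matrix under the usual bivariate normal assumption; the covariance entries used here are hypothetical, purely for illustration.

```python
import math

# Radius of a circle whose area equals the 1000 km^2 CTBT location goal.
r = math.sqrt(1000.0 / math.pi)  # about 17.84 km, as quoted in the text

def ellipse_area_90(cxx, cyy, cxy):
    """Area of the 90 per cent confidence ellipse for a bivariate normal
    epicenter estimate with covariance [[cxx, cxy], [cxy, cyy]] (km^2)."""
    k = -2.0 * math.log(0.1)             # chi-square (2 dof) quantile at 0.9
    det = cxx * cyy - cxy * cxy          # determinant of the covariance
    return math.pi * k * math.sqrt(det)

# Hypothetical covariance entries, for illustration only.
area = ellipse_area_90(60.0, 40.0, 10.0)
```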
At shallower depths, however, and within the crust in particular (which varies in thickness from 5 to 75 km), the velocities of seismic waves may differ from the velocities in a given seismic model in unknown ways by ±5 per cent, or even more in some regions. These are not random uncertainties but reflect a fundamental lack of information about the material properties and conditions in these regions. As a consequence, the arrival times of teleseismic waves are affected in ways that are not accounted for by standard simple Earth models (which are usually assumed to be spherically symmetric velocity distributions), and this in turn can result in systematic mislocation of the sources in a given region. The situation is even more complex for locations determined with data from regional seismic stations. The arrival times of regional waves depend strongly on the extremely heterogeneous, shallow crustal structure. Earth models often have a uniform crustal layer, and the deviations between the actual and calculated arrival times are often larger than found for teleseismic observations. As a result, event locations based only on regional arrival times or small numbers of teleseismic and regional arrivals are often poor. It is a common experience when locating a moderate or large earthquake with many teleseismic stations that inclusion of regional observations and use of a simple regional crustal model actually degrade the event location (unless the stations are close to the source so that little time is spent in the anomalous region and significant travel time errors do not accumulate). Seismic arrays and three-component stations can provide constraints on the source back-azimuth in addition to providing arrival time information, and this additional information clearly assists in the process of triangulating on the source. 
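Back-azimuth estimation at an array can be sketched as a plane-wave fit to the relative arrival times across the array elements. The fragment below is a toy illustration: the element coordinates, slowness, and back-azimuth are invented, the synthetic data are noise-free, and real arrays must in addition contend with the structure-induced biases just described.

```python
import numpy as np

# Hypothetical small-aperture array: element offsets (east, north) in km.
xy = np.array([[0.0, 0.0], [1.5, 0.2], [0.3, 1.4], [-1.1, 0.8], [0.6, -1.2]])

# Synthesize plane-wave arrivals: slowness 0.125 s/km (8 km/s apparent
# velocity) propagating from a source at back-azimuth 60 degrees.
baz_true, s_mag = 60.0, 0.125
prop = -np.array([np.sin(np.radians(baz_true)), np.cos(np.radians(baz_true))])
t = 10.0 + s_mag * xy @ prop  # arrival time at each element

def backazimuth(xy, t):
    """Least-squares plane-wave fit: t = t0 + s_e * x + s_n * y."""
    G = np.column_stack([np.ones(len(t)), xy])
    _, s_e, s_n = np.linalg.lstsq(G, t, rcond=None)[0]
    # The source lies opposite the horizontal propagation direction.
    return np.degrees(np.arctan2(-s_e, -s_n)) % 360.0

baz = backazimuth(xy, t)  # recovers 60 degrees for these synthetic data
```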
However, estimates of back-azimuth are also vulnerable to misinterpretation because of uncertainties in the Earth's velocity structure, unless corrections are made. The effect of inadequately modeled Earth structure on the direction of approach at the array is somewhat different from its effect on arrival time data, but the result is still that a location can be quite poor and the associated estimates of location uncertainty may be wrong (i.e., the true source location may lie inside the 90 per cent confidence ellipse estimated using the erroneous Earth model far less often than 90 per cent of the time). Location uncertainty estimates are made using an intrinsically inaccurate model of the Earth, and the effects of (unknown) systematic deviations between the Earth and the model are hard to quantify and to include in the source uncertainty estimate. In seismological practice there are some efforts to estimate the model uncertainty by statistical approaches, by comparisons of results for different reference models, and by direct measurement using events with known locations. Without such efforts, event location estimates must be viewed with skepticism. There are three principal ways to work around the problem of ignorance of Earth structure: (1) use numerous stations at different azimuths and different distances around the source in an attempt to average out the differences between the Earth's actual velocity structure and that of the model; (2)

improve the information about the Earth's velocity structure and thus determine a more sophisticated and presumably more accurate model that includes variability; and (3) empirically "calibrate" the station (or array) so that, in effect, the source of interest is located with reference to another event with an accurately known location near the event of interest. In this approach, data for the unlocated event are usually "corrected" for "path anomalies" determined from observations of calibration events at each station, and the corrected arrival times are used to locate the event with a standard Earth model. In some cases, the differences between the arrival times of the unlocated and calibration events are calculated directly and used to obtain a "relative location" between the reference event and the event of interest. Such relative locations can have much higher precision than raw locations, but the accuracy of the absolute locations will, at best, only be as good as those of the calibration events. The USGS/NEIS, a number of large regional and national networks, and the International Seismological Centre (ISC) all use strategy 1 for routine processing of event bulletins. The global or regional Earth models tend to be simple one-dimensional models that predict arrival times simply as a function of distance from the source. The general approach to improving location accuracy has been to add stations. Changes in the reference model are resisted because of a desire for uniformity of the historical catalog and because, with large numbers of observations, the locations are not strongly dependent on the details of one-dimensional models.
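The path-correction idea in strategy 3 can be sketched numerically. In the toy model below (all numbers hypothetical; a uniform-velocity flat Earth stands in for a real travel time model), each station has an unmodeled path delay. Observations of a calibration event with a known location measure those delays, and subtracting them from the arrivals of a nearby event of interest removes the bias that the simple model cannot predict.

```python
import numpy as np

V = 6.0  # km/s, velocity of the simple "Earth model" (assumed)
stations = np.array([[0.0, 0.0], [100.0, 10.0], [40.0, 120.0], [-80.0, 60.0]])
path_bias = np.array([1.2, -0.8, 0.5, 1.9])  # hypothetical unmodeled delays, s

def model_times(src, t0):
    """Arrival times predicted by the simple model (no knowledge of bias)."""
    return t0 + np.linalg.norm(stations - src, axis=1) / V

def true_times(src, t0):
    """'True Earth' arrivals: model travel time plus the unknown path delays."""
    return model_times(src, t0) + path_bias

# Calibration event with an accurately known location (e.g. ground truth).
cal_src, cal_t0 = np.array([30.0, 40.0]), 0.0
corrections = true_times(cal_src, cal_t0) - model_times(cal_src, cal_t0)

# Nearby event of interest: after correction, the simple model fits exactly
# at the true answer, because the nearby paths share the same anomalies.
new_src, new_t0 = np.array([35.0, 45.0]), 2.0
corrected = true_times(new_src, new_t0) - corrections
residuals = corrected - model_times(new_src, new_t0)
```

In this idealized case the corrections cancel the bias exactly; in practice they help only to the extent that the calibration and target events sample the same path anomalies, which is why proximity of the reference event matters.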
Larger events that are recorded by larger numbers of stations tend to have smaller location uncertainties, whereas locations of small events tend to have larger uncertainties because of decreased resolution, the relatively larger effects of path heterogeneity, and the greater potential for bias associated with small numbers of observations. Regional networks with hundreds of stations separated by tens of kilometers have been deployed in seismically active areas where accurate location of small events (even down to magnitude 1 or smaller) is deemed important. The performance of these networks hinges on the number of stations near the sources and their proximity. The research community often uses all of these strategies to study special sets of events and to develop three-dimensional velocity models. In many cases it appears that improved locations are obtained, but earthquake monitoring operations have been slow to embrace laterally varying Earth models or station corrections. There is not extensive operational experience with strategies 2 and 3 on a global basis, but it is clear that strategy 1 can achieve global location accuracies at the 1000 km² level only for quite large events recorded by large numbers of stations. CTBT monitoring will involve many small events recorded by small numbers of stations, even when IMS and NTM are combined, so some form or combination of strategies 2 and 3 is imperative, and no clear alternatives exist. The above approaches can result in greatly improved locations, but none of them can do a reliable job of characterizing the uncertainty of the final location estimate until accurate three-dimensional Earth models become available. Often, the difference between actual and assumed Earth structure results in location estimates that, for a particular region, are all shifted in the same direction, perhaps by a few tens of kilometers from the true locations.
The closer a three-dimensional model approaches a description of the true Earth, the better will be the estimates of location uncertainty made with that model. There is a long history of coming to grips with systematic error in seismic estimates of explosion locations. Early United States experience with nuclear explosions in Nevada was used to develop a model of the Earth's crust in that region, and when the first underground explosion in the United States outside Nevada was carried out in New Mexico in 1961, its depth was estimated at 130 km under the assumption that New Mexico had a Nevada-type crust! (An event with such a depth estimate would normally be identified with high confidence as an earthquake, unless the formal uncertainty estimate on depth was comparably large. More generally, interpretation of the event location is one of the simplest and most widely used discriminants, which again is a reason for working to obtain the best possible location estimates.) Even in areas that have been studied carefully with calibrated stations over a period of many years, ground truth has shown that seismic locations were not as good as had been thought. For example, for the last 20 explosions at the Balapan region of the Semipalatinsk test site, Thurber et al. (1993) showed that locations determined

to within about ±100 m from SPOT satellite photographs were outside the seismically determined 95 per cent confidence ellipses in most cases. In this case the ellipses were only about 5 to 10 km² in area and had been determined using the known location of a reference event. The ellipses would have included the actual locations 95 per cent of the time had they been enlarged to about 20 km², so in this case the seismic locations were actually quite good. Yet the fundamental problem remains: until some type of ground truth becomes available, the size of the confidence ellipses does not account for model inaccuracies. Thus, there are great uncertainties in translating knowledge of Earth structure into errors for event locations. Although the use of large numbers of stations can reduce the location error, Figure C.1 shows that the area of an error ellipse decreases with increasing event size down to a certain level, but then does not get much smaller even for large explosions (when hundreds of stations contribute arrival times). Uncertainties in Earth structure limit the value of additional data.

[FIGURE C.1 Variation of event location 95 per cent confidence ellipses as a function of mb for events at the Chinese test site, with calibration by a satellite location for one event. Source: Gupta, 1995.]

From the standpoint of solving the seismic event location problem in the U.S. CTBT monitoring context, the ultimate seismological solution is to work toward an improved three-dimensional model of the velocity structure for the regions of interest to the United States, since this will give the most direct interpretation of monitoring data. Models of the Earth will always be simplified, because its total complexity is unknowable.
Experience has shown, however, that sufficiently complex models can be constructed for regions of interest so that locations are accurately known and sufficiently precise for applications such as CTBT monitoring or analysis of earthquake faulting. However, the goal of developing regional models (or a global model) with such detail is an undertaking of much greater effort than the usual research project. A small group of individuals working for a year or two is not going to solve this problem. What is needed is a systems approach. There are about 20 earthquakes per day at magnitude 4 and above, whose signals can be used to interpret global and regional Earth structure. Events of smaller magnitude can also be used to learn about regional structure if nearby stations are available. The crux of the problem, however, is that the locations of these events are not known independently when trying to improve the model of Earth structure. At best, for the vast majority of events, locations will be estimated based on models that are only approximations to structure. It, therefore, appears that location parameters must be determined at the same time as the parameters of the velocity model. Many researchers have explored ways to carry out such simultaneous determinations, which can be made to work on the scale of a local network as well as on a global scale. It has also been demonstrated that complete modeling of the full set of regional waveforms can improve constraints on the source depth and epicenter and provide information about the crustal structure that is difficult to extract from arrival times alone. Systematic efforts to determine details of regional and global structure are being conducted, funded by earthquake monitoring agencies, CTBT research programs, and basic Earth science programs, but there is no concerted effort to integrate these into a global model. 
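A minimal illustration of such a simultaneous determination is to treat a medium property as an unknown alongside the source parameters. The sketch below is deliberately simple and entirely hypothetical: one event, a single uniform slowness, and flat geometry, whereas real joint inversions solve for thousands of events and full three-dimensional structure. It extends the standard location iteration with one extra model parameter.

```python
import numpy as np

stations = np.array([[0.0, 0.0], [100.0, 10.0], [40.0, 120.0],
                     [-80.0, 60.0], [150.0, -60.0], [-40.0, -90.0]])
true_src, true_t0, true_v = np.array([30.0, 40.0]), 5.0, 6.0
obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / true_v

# Unknowns: x, y, t0, and the medium slowness u = 1/v, solved simultaneously.
# Crude starting guess: near the earliest station, slowness for 5 km/s.
m = np.array([*(stations[np.argmin(obs)] + 5.0), obs.min(), 0.2])
for _ in range(50):
    d = np.linalg.norm(stations - m[:2], axis=1)
    r = obs - (m[2] + d * m[3])               # observed minus predicted times
    # Partials of predicted time with respect to x, y, t0, u.
    G = np.column_stack([-m[3] * (stations[:, 0] - m[0]) / d,
                         -m[3] * (stations[:, 1] - m[1]) / d,
                         np.ones(len(obs)),
                         d])
    m = m + np.linalg.lstsq(G, r, rcond=None)[0]

x, y, t0, v = m[0], m[1], m[2], 1.0 / m[3]   # recovers source and velocity
```

Solving in slowness rather than velocity keeps the problem linear in two of the four unknowns, a common conditioning trick; the trade-offs among origin time, depth, and structure that the text describes are what make the full-scale version of this problem hard.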
The community interested in CTBT monitoring could undertake a long-term program to develop a sufficiently accurate three-dimensional Earth model, with laterally varying crustal and lithospheric structure,

so that all events at and above the monitoring threshold could be located with the desired precision. Such an effort would be coordinated with the earthquake monitoring and basic Earth science research communities, because these groups would benefit from systematically improved event locations at local and global levels. An alternative way to make progress is the empirical approach of strategy 3: calibrating stations and arrays by building up an archive of events whose locations are known accurately. Calibration efforts could be performed in relatively localized regions and tuned to specific U.S. CTBT monitoring priorities. The prototype IDC has begun to set up such a calibration event database. Although this prototype IDC effort is a step in the right direction, it is conceived in terms of only a few events per day. A more ambitious approach could be taken: build up an archive of accurate event locations from much larger sets of available data, and use these improved locations to calibrate stations on a much more extensive scale in areas of interest, not just the stations used by the IMS and NTM. To get accurate locations for the purposes of station calibration, it is possible to use locally recorded large mine blasts and earthquakes whose locations become well known as a result of rupture of the ground surface, reports of strong ground shaking, or data provided by a good local network or a mining company. This empirical approach would result in a steady cycle of improvement: better locations lead to better calibration of new stations and better knowledge of Earth structure, which in turn leads back to better locations.
To address the immediate problem that a treaty monitoring network has in locating a new event quickly, the key is to maintain as large an archive as reasonably possible of accurately located seismic events and of their signals at the network of stations used for monitoring the new events. Comparison of the new signals with the old can then lead to a location estimate that starts with the old event and adds the relative location of the new event. This result typically can be better than an estimate made directly from the arriving signals without any comparison to events in the archive. To address the long-term problem of how to build up the archive by continuing to add well-located events, a commitment is needed to develop a comprehensive bulletin of seismicity down to low magnitudes in areas of interest (using teleseismic and regional signals from large numbers of stations) that emphasizes accuracy of location, rather than speed of production.
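The comparison of new signals with archived ones typically rests on measuring differential arrival times, for example by cross-correlation. The sketch below uses synthetic waveforms; the sample rate, the toy wavelet, and the 0.35 s offset are invented for illustration, and real measurements must handle noise, filtering, and waveform dissimilarity.

```python
import numpy as np

fs, n = 40.0, 400                 # hypothetical sample rate (Hz) and length
t = np.arange(n) / fs

def wavelet(t, t_arr):
    """Toy seismic arrival: a decaying sinusoid beginning at time t_arr."""
    u = np.clip(t - t_arr, 0.0, None)
    return np.where(t >= t_arr,
                    np.exp(-3.0 * u) * np.sin(2 * np.pi * 5.0 * u), 0.0)

archived = wavelet(t, 3.00)   # signal of a well-located event in the archive
new = wavelet(t, 3.35)        # new event on a similar path, arriving later

# Cross-correlate to measure the differential arrival time directly; picking
# errors and shared path effects largely cancel in such relative measurements.
xc = np.correlate(new, archived, mode="full")
lag = (np.argmax(xc) - (n - 1)) / fs   # positive: the new event arrives later
```

The measured lag, combined over many stations, constrains the relative location of the new event with respect to the archived one, which is the basis of the archive strategy described above.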
