sisted of a spreadsheet listing the date of each leveling project and the height determined for each benchmark occupied during that project. The recorded heights themselves have little value for the purposes of this study, as they are determined with respect to some arbitrary starting height (typically a predefined value for the height of the primary benchmark) for each leveling run. The observations required for this study are changes in the height differences between benchmarks, so the first step was to determine the differential heights measured during each project with respect to the primary benchmark. A network solution was then performed to estimate and remove the mean differential height for each benchmark from the observations, generating a time series of observed changes in relative height for each benchmark in the network. Bad data points, caused by incorrect readings or transcriptions of values, contaminated this initial analysis and were identified and removed from the data set. These points are typically obvious as single-observation outliers, displaced by several centimeters from the trend defined by the remainder of that benchmark's data.
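As a rough illustration of this pre-processing chain, the sketch below assumes the spreadsheet has been read into a pandas DataFrame with columns project_date, benchmark, and height (these names and the primary-benchmark label are assumptions, not the actual schema), and it substitutes a simple per-benchmark mean removal and residual screen for the full network solution and manual outlier editing described above.

```python
import numpy as np
import pandas as pd

def relative_height_series(df, primary="PRIMARY", outlier_cm=3.0):
    """Turn per-project benchmark heights into relative-height time series.

    Assumed columns: project_date (datetime), benchmark (str), height (m).
    """
    df = df.copy()

    # 1. Differential height of every mark with respect to the primary
    #    benchmark of the same project (assumes the primary benchmark is
    #    observed exactly once per project).
    primary_h = df[df.benchmark == primary].set_index("project_date")["height"]
    df["dh"] = df["height"] - df["project_date"].map(primary_h)
    df = df.dropna(subset=["dh"])  # projects where the primary was not observed

    # 2. Remove each benchmark's mean differential height -- a simple
    #    stand-in for the network solution described in the text -- leaving
    #    a time series of relative height changes.
    df["rel"] = df["dh"] - df.groupby("benchmark")["dh"].transform("mean")

    # 3. Drop single-observation outliers several centimeters off the
    #    benchmark's own linear trend.
    keep = pd.Series(True, index=df.index)
    for _, g in df.groupby("benchmark"):
        if len(g) < 3:
            continue
        t = g["project_date"].map(pd.Timestamp.toordinal).astype(float)
        slope, intercept = np.polyfit(t, g["rel"], 1)
        resid_cm = 100.0 * np.abs(g["rel"] - (slope * t + intercept))
        keep.loc[g.index] = resid_cm < outlier_cm
    return df[keep]
```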

To interpret observed vertical motions and assess whether they are likely due to benchmark instability (Karcz et al., 1976) or rather to local land motions, the approximate spatial locations of the benchmarks are required. These locations were either digitized from the survey benchmark sketches or, where available, extracted from the National Geodetic Survey (NGS) benchmark data sheet database. Some of the older, discontinued benchmarks have no recorded location and could not contribute to this aspect of the analysis. In most cases, the digitized locations of the benchmarks are estimated to be accurate to 40 m or better, while those determined by the NGS using GPS or other methods are considerably more accurate. This accuracy is sufficient to allow us to examine and interpret spatial patterns of vertical motions. Benchmarks interpreted as unstable are identified from their time series, either objectively by virtue of very high variance about their mean trend (typically greater than 10 mm², in contrast to most benchmarks, which have variances of less than 2 mm²) or, more subjectively, because their apparent relative vertical velocities are high and uncorrelated with those of neighboring marks in the rest of the network. In most cases, the benchmarks identified as unstable have been given a stability rating of C or D by the NGS, indicating unknown or doubtful long-term stability. However, in some cases even benchmarks with the highest A rating were found to have velocities and/or variances at odds with those of neighboring marks.
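A minimal sketch of the objective variance screen is given below, assuming times in decimal years and relative heights in millimeters; the more subjective comparison of velocities against neighboring marks is not reproduced here.

```python
import numpy as np

def trend_variance(times, heights_mm, threshold_mm2=10.0):
    """Variance of a benchmark's time series about its own linear trend.

    `times` in decimal years, `heights_mm` in millimeters (assumed units).
    Returns the variance and a boolean flag for the >10 mm^2 criterion.
    """
    slope, intercept = np.polyfit(times, heights_mm, 1)
    resid = heights_mm - (slope * times + intercept)
    variance = np.var(resid, ddof=2)  # two parameters estimated from the data
    return variance, variance > threshold_mm2
```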

Both the formal NGS/CO-OPS datum definition and, at this stage in the processing, this analysis define the primary benchmark as a fixed reference. There is clearly a danger, however, that the primary benchmark itself might experience vertical motions, due either to local benchmark instability or to real land motion. To assess and account for this, a subset of the benchmarks in the network that show no sign of either monument instability or anomalous rates of motion was used to define a combined vertical reference datum for the network, which should be more robust than a single fixed mark. The choice of benchmarks is somewhat subjective and cannot correct for vertical deformation occurring on spatial scales greater than the width of the network. It turns out, however, that the specific choice of reference benchmarks has only a small impact on the final estimates of the rates of vertical motion for the tide gage and primary benchmark.
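One simple way to realize such a combined datum, assuming the relative-height table produced by the earlier sketch, is to subtract, project by project, the mean relative height of the chosen reference subset from every benchmark; this is only an illustration of the idea, not the report's actual adjustment.

```python
import pandas as pd

def rereference_to_subset(df, reference_marks):
    """Re-reference relative heights to the mean of a stable subset.

    `df` has assumed columns project_date, benchmark, rel (relative height
    change).  For each project, the mean `rel` of the reference benchmarks
    observed in that project is subtracted from all benchmarks, replacing
    the single fixed primary benchmark with a combined datum.
    """
    ref = df[df.benchmark.isin(reference_marks)]
    datum = ref.groupby("project_date")["rel"].mean()
    out = df.copy()
    out["rel_combined"] = out["rel"] - out["project_date"].map(datum)
    return out
```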

The rates of vertical motion were estimated using a robust linear fit with an iterative reweighting scheme that reduces the impact of outlying observations on the estimated slope. To assess the degree to which estimates of vertical motion based on decadal-scale windows of data can be trusted as reasonable approximations to the longer-term trends, a moving window was applied to the data set and vertical rates were estimated for each window. A minimum of 5 observations spanning at least 10 years was required for each window location to generate an estimate, so that the results are not merely a reflection of measurement errors and/or sparse data.
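The sketch below pairs a generic iteratively reweighted least-squares line fit (Tukey bisquare weights, which are an assumption; the report does not name its weighting scheme) with a moving-window rate estimator that enforces the minimum of 5 observations and a 10-year span. The window length and step are assumed parameters; the output is a list of (window midpoint, rate) pairs that can be compared against the full-record trend.

```python
import numpy as np

def robust_slope(t, y, n_iter=10, c=4.685):
    """Iteratively reweighted least-squares line fit (Tukey bisquare weights).

    `t` in decimal years, `y` in mm (assumed units), both numpy arrays.
    """
    w = np.ones_like(y, dtype=float)
    slope = intercept = 0.0
    for _ in range(n_iter):
        A = np.vstack([t, np.ones_like(t)]).T * np.sqrt(w)[:, None]
        slope, intercept = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)[0]
        resid = y - (slope * t + intercept)
        scale = np.median(np.abs(resid)) / 0.6745  # robust scale estimate
        if scale == 0:
            break
        u = resid / (c * scale)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
        if not np.any(w):  # all points down-weighted to zero; stop refitting
            break
    return slope, intercept

def windowed_rates(t, y, window_years=15.0, step_years=1.0,
                   min_obs=5, min_span=10.0):
    """Estimate a rate in each window, skipping windows with fewer than
    `min_obs` points or spanning less than `min_span` years."""
    rates = []
    start, stop = float(t.min()), float(t.max())
    while start + min_span <= stop:
        m = (t >= start) & (t <= start + window_years)
        if m.sum() >= min_obs and (t[m].max() - t[m].min()) >= min_span:
            rates.append((start + window_years / 2.0,
                          robust_slope(t[m], y[m])[0]))
        start += step_years
    return rates
```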

RESULTS

San Diego

Three benchmarks (9, N 57, and RIVET) showed anomalous vertical motions and were excluded from the spatial analysis. Benchmarks RIVET and N 75 were given a D stability code by the National Geodetic Survey, indicating they might be expected


