Geodesy in the Year 2000: An Historical Perspective

John B. Rundle
Division 6231, Sandia National Laboratories, Albuquerque, NM 87185

INTRODUCTION: GEODESY IN THE YEAR 1900

At 0512 hours Pacific Standard Time on the morning of April 18, 1906, the city of San Francisco was destroyed by a major earthquake. Subsequent study determined the approximate magnitude to have been in excess of 8, the event having ruptured more than 400 kilometers of the nearby San Andreas fault. In the words of the Carnegie Commission, which was empaneled to investigate the earthquake and its causes (Lawson et al., 1908):

"The shock was violent in the region about the Bay of San Francisco, and with few exceptions inspired all who felt it with alarm and consternation. In the cities many people were injured or killed, and in some cases persons became mentally deranged, as a result of the disasters which immediately ensued from the commotion of the earth. The manifestations of the earthquake were numerous and varied. It resulted in the general awakening of all people asleep, and many were thrown from their beds. In the zone of maximum disturbance persons who were awake and attending to their affairs were in many cases thrown to the ground. Many persons heard rumbling sounds immediately before feeling the shock. Some who were in the fields report having seen the violent swaying of trees so that their top branches seemed to touch the ground, and others saw the passage of undulations of the soil. Several cases are reported in which persons suffered from nausea as a result of the swaying of the ground. Many cattle were thrown to the ground, and in some instances horses with riders in the saddle were similarly thrown. Animals in general seem to have been affected with terror."

It was well known at the time that earthquakes are caused by the relief of elastic strain in the earth's crust. The Pittsburgh Post of April 19, 1906, page 6, recounts:
"it is probable that earthquakes are caused by the same stresses in the earth's crust, partly due to contractions, that tilt and fold rock strata into mountains. The strain to which the rocks are thus subjected, when suddenly relieved by the rocks giving way, produces many earthquakes." 9

Furthermore, according to the Post:

"Earthquakes ... have usually, when carefully studied, been traced to some line of rock weakness as a fault. Such earthquakes are merely phenomena accompanying rock movements that may in time greatly modify the earth's surface."

While the San Francisco earthquake was not the first to be scientifically investigated, its report had the greatest impact, both upon scientific thought in general and upon geodesy in particular. Other reports, by Robert Mallet on the 1857 Naples earthquake, by R. D. Oldham on the 1897 Assam earthquake, and by John Milne on the 1880 Yokohama earthquake, were instrumental in establishing seismology as a scientific discipline. But the 1908-1910 Carnegie Commission reports on the San Francisco earthquake stand alone, because it was there that the elastic rebound theory of earthquakes was first introduced by Harry Fielding Reid (Reid, 1910). The primary data supporting the hypothesis that earthquakes represent a rebound from a state of previously stored elastic strain energy were geodetic survey measurements conducted between 1851 and 1906 (Figure 1, taken from Hayford and Baldwin, 1908). To Reid, these data indicated that the earth's crust had, over the preceding decades, undergone a systematic deformation whose effect was to place the San Andreas fault into a state of disequilibrium. The earthquake was thus a result of forces in the earth's crust returning the system to a state nearer to equilibrium. As Reid showed, the survey data indicated that the sense of sudden motion of monuments near the fault at the time of the earthquake was the reverse of the steady motion that occurred prior to the event, indicating the release of stored elastic energy.

The geodetic technology in common use by the Coast Survey in the latter part of the nineteenth and early part of the twentieth centuries relied primarily upon triangulation for measurements of horizontal position (Hosmer, 1919).
In this method, permanent marks were fixed to the ground in networks of regular triangular patterns, over which measurements were conducted to determine the angles subtended by lines-of-sight between the marks. For the most part, the earlier triangulations were conducted during daylight hours, by sighting on a sun-reflecting heliotrope with a telescope precisely calibrated in angular direction. The most commonly used telescope was of the type known as a "Direction Instrument," first designed in England by Ramsden in 1787. Other telescopes of lesser precision were the "Repeating Instruments" designed in France in 1790. Beginning in 1902, triangulations were primarily conducted at night, it having been realized that thermal instabilities in the atmosphere produce unacceptable lateral refraction, which can be remedied by observing through thermally stable nighttime air. Acetylene lamps were used initially, later supplanted by incandescent electric lights. Generally speaking, observations were obtained by mounting the instruments and the lights at the tops of wooden, and later metal, towers whose heights ranged up to more than one hundred feet.
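The quality of such a network was judged by its "triangle closures": the three observed angles of each triangle should sum to 180 degrees plus the small spherical excess of the triangle, and the residual is the closing error. A minimal sketch of that check in Python (the angle values and the excess below are hypothetical, not data from the survey):

```python
# Triangle-closure check for a triangulation network.
# Observed angles should sum to 180 degrees plus the triangle's
# spherical excess; the residual is the closing error, in arcseconds.

def closing_error_arcsec(angles_deg, spherical_excess_arcsec=0.0):
    """Closing error (arcsec) of one triangle from its three observed angles."""
    misclosure_deg = sum(angles_deg) - 180.0 - spherical_excess_arcsec / 3600.0
    return misclosure_deg * 3600.0

# Hypothetical observed angles (degrees) and a 1.2" spherical excess:
angles = [60.000300, 59.999800, 60.000250]
err = closing_error_arcsec(angles, spherical_excess_arcsec=1.2)
print(f"closing error = {err:+.2f} arcsec")
```

With these made-up values the closing error is about +0.06", comfortably inside the 2.5" maximum that the Coast Survey's observing instructions demanded.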

It was using these instruments and technologies that the data underlying the elastic rebound hypothesis were obtained. At the time, a combination of standard observing and network adjustment methods was expected to yield first-order accuracies in line lengths of about 1 part in 25,000 (Hosmer, 1919). To obtain these accuracies, instructions such as those issued to Clem Garner, chief of a survey party operating near San Francisco in 1922, were typical (Bowie, 1924):

"You will, therefore, take special precautions against conditions which would cause horizontal refraction and will adopt such an observing program as will secure triangle closing errors with 2.5" as a maximum, and with not more than 1" as a mean. It is recommended that each direction at a station be measured on at least two nights with not less than 12 acceptable directions on each night. One direction at each station may have only 16 acceptable positions observed and these may be on a single night if by so doing a day may be saved and provided further that the closures are within the above limits."

It was realized by the geodetic community at least as early as 1924 (e.g., Bowie, 1924) that the decade-long changes in triangulation networks observed throughout California were the result of earth movements, and that these earth movements were related to the San Andreas fault. In these early papers, one can see a clear emphasis on analyzing changes in triangulation angles between successive surveys as a means of understanding the physical role of fault movements. In modern terminology, we refer to this as relative positioning. But the most important rationale for the establishment of regular survey campaigns, from a political and economic point of view, has always been for cadastral, or boundary determination, purposes.
The 1850-1900 coast surveys were primarily made for coastal navigation of ships, not for boundary determinations, as shown by the fact that the published maps of the lines of sight also listed sailing instructions in the lower left corner. To improve positional accuracy, astronomic observations of monuments (Laplace stations) in the network were obtained to control orientation, and to obtain geoid slopes for scientific studies. With proper care, positional accuracy of one part in 100,000 was achieved. In the early twentieth century, astronomical longitudes of points in the field were determined by using a portable transit to measure the time of passage of stars past the local meridian. The principal sources of error were the accuracy with which the exact transit time could be determined by the observer, and undetermined errors in polar motion and universal time (UT). Moreover, since local transit time is supposed to be taken with reference to the geometrical reference ellipsoid, deflection of the vertical induced by anomalous masses implies a pointing error of the transit telescope, and thus errors in the inferred longitude. Upon comparison of the observed transit time to the precalculated ephemerides, the longitude could be obtained. Alternatively, a comparison could be made to a transmitted time

calibration signal. Prior to 1922, these signals were received in the field over telegraph lines, but subsequently were transmitted via radio. Astronomical latitudes were determined by observing the elevation of stars above the horizon using a zenith telescope. Again, a major error source arose from deflection of the local vertical. Using these techniques, it was found possible to measure astronomically to about 0.10" in latitude and 0.003" in longitude, albeit at considerable effort and expense.

With the passage of time, instrumentation for land-based horizontal positioning evolved. Triangulation was still the primary method for obtaining horizontal positions until about 1960, when electro-optical distance measuring instruments were developed, such as the Geodimeter and the Geodolite. When temperature and humidity are measured at the time of ranging, and appropriate corrections for atmospheric refraction are applied, line lengths can be obtained over networks of monuments with typical spacings of kilometers to tens of kilometers, with accuracies in the range of parts per million or better. Other land-based instruments have followed, including multiwavelength electro-optical distance measuring apparatus, which measure and apply atmospheric refraction corrections automatically.

Although not as important as horizontal triangulation for analyzing motions related to strike-slip faulting, leveling measurements had, by the late nineteenth century, reached a high state of technical accomplishment (Hosmer, 1919). In fact, the techniques and technology used then are, in all essential aspects, basically the same as those in use today. As in triangulation, leveling measurements are made over networks of marks (benchmarks) fixed on the earth's surface, generally along roads, railroad beds, and other gently sloping paths with good access. The major advance in leveling technology occurred with the discovery of an alloy of 35% nickel and 65% steel called "invar."
Discovered originally by C. E. Guillaume, Director of the International Bureau of Weights and Measures near Paris, France, invar is distinct in having an extremely low coefficient of thermal expansion (~0.1 ppm/°C), due to a special heat treatment used in its preparation. By 1906, the Coast Survey had begun using invar for measurement tapes and leveling rods, and has continued this practice to the present. The performance of invar has been found to be generally satisfactory, except possibly in leveling measurements of extreme accuracy, when instabilities in material structure may cause unpredictable changes in length of the tapes at the level of parts per million. More important sources of error in precise leveling measurements arise from unequal atmospheric refraction effects over forward and backward sightings, and from systematic errors in rod calibration. With more modern self-leveling telescopes, an additional source of error has been found to arise from deflections of the compensator pendulum induced by nearby electromagnetic sources such as power lines. Still, rigorous field tests demonstrate that accuracies achieved in leveling are about 10 mm over 100-kilometer distances, and about 5 mm over 1000 meters of elevation change.
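Leveling error of the kind quoted above accumulates approximately as a random walk, growing with the square root of the distance leveled. A short sketch, calibrated (as an assumption) against the 10 mm per 100 km figure:

```python
import math

# Random-walk model of precise-leveling error: sigma(L) = k * sqrt(L).
# Calibrate k from the figure quoted in the text: ~10 mm over 100 km,
# giving k = 1.0 mm per sqrt(km).
K_MM_PER_SQRT_KM = 10.0 / math.sqrt(100.0)

def leveling_error_mm(distance_km):
    """Expected leveling error (mm) over a line of the given length (km)."""
    return K_MM_PER_SQRT_KM * math.sqrt(distance_km)

print(leveling_error_mm(100.0))  # 10.0 mm, by construction
print(leveling_error_mm(400.0))  # 20.0 mm over a 400-km line
```

The square-root scaling is why doubling a leveling line does not double its expected error, and why long national networks must be tied to many independent bench marks.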

In contrast to positioning, which is the primary geodetic observable of interest in the study of active faulting, gravity is of principal importance in studying the geophysical structure of the earth. Prior to about 1800, it was thought that the matter comprising the earth and its surface features was of roughly uniform density, and that any deformation of the earth was of an essentially elastic nature (Jeffreys, 1976). However, following the 1855 survey of India, J. H. Pratt found that the gravitational attraction of the Himalayas observed by Everest, the Surveyor General of India, was only about one-third as large as it should be if the mountains were treated as uncompensated masses. Shortly thereafter, G. B. Airy, then the Astronomer Royal, proposed in 1855 that mountains floated on a substratum capable of deforming inelastically in response to the excess gravitational load. Pratt, in 1859, proposed an alternate hypothesis, in which mountains of lower density than the substratum ride passively on the underlying rigid material. Both of these mechanisms are still invoked today in studying the isostatic compensation of surface features of the earth. The applicability of both mechanisms continues to be a subject of research, in addition to other problems related to the structure and dynamics of the earth.

Early gravity meters were based upon measuring the period of an accurately calibrated pendulum. The first of these, the half-seconds invariable pendulum apparatus, was designed and perfected in 1882 by Sterneck in Austria (Hosmer, 1919). In 1890, T. C. Mendenhall, Superintendent of the Coast and Geodetic Survey, modified the design, producing an instrument which was used successfully for many years. The basic design involves the comparison of the locally measured period of the pendulum to a chronometer of known calibration. Due to the nature of the measurement, the amount of time needed for a single observation was typically on the order of 8 to 12 hours.
The accuracy of the observations was typically a few ppm, that is to say, a few milligals. Another instrument also in use at the time, and one whose importance has undergone a resurgence due to studies of the fundamental nature of gravity, was the Eotvos torsion balance. In this instrument, two masses are fixed to the ends of a long, slender rod, which is suspended at the end of a long fiber. While the nature of these masses was not considered important for routine applications, it has since assumed considerable importance, for reasons to be discussed in this volume. Under the action of a spatially varying gravity field, the rod tends to turn into the plane of a great circle oriented perpendicular to the local meridian, called the prime vertical. By measuring the torsion in the fiber, the gradients of the local gravitational field can be deduced. Other designs were also in use at the time, but all were eventually supplanted by the far more durable and portable gravity meters. Accuracies of the torsion balance were similar to those of modern portable gravity meters, a few microgals.
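The pendulum measurements described above rest on g = 4π²L/T², so for an invariable pendulum a small change in the observed period between stations maps directly to a gravity difference, Δg/g ≈ −2ΔT/T. A sketch with hypothetical numbers:

```python
# Relative gravity from an invariable pendulum: g = 4*pi^2*L / T^2,
# so to first order delta_g / g = -2 * delta_T / T between two stations.

G_REF_MGAL = 980_000.0   # nominal gravity in milligals (g ~ 980 gal)
T_REF = 0.5              # half-seconds pendulum: reference period (s)
T_OBS = 0.5000001        # hypothetical period observed at a field station

delta_g_mgal = -2.0 * (T_OBS - T_REF) / T_REF * G_REF_MGAL
print(f"gravity difference = {delta_g_mgal:.3f} mGal")
```

A period shift of only 10⁻⁷ s thus corresponds to roughly −0.4 mGal, illustrating the timing precision the method demanded and why a single observation took many hours.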

Unlike pendulum meters and torsion balances, gravity meters, which first began to appear in the 1930s, are in essence a mass suspended on the end of a sensitive spring (e.g., Telford et al., 1976). The early gravity meters were stable, that is to say, they had a linear dependence of deflection on deviation of the length of the spring from a relatively large, nonzero value. The major problem with an arrangement of this type is that the signal of interest is the deviation about the mean deflection, thus producing a relatively poor sensitivity. The Gulf and Boliden gravity meters are of this type. Generally speaking, the stable meters have fallen into disuse.

By contrast, the unstable gravity meters have an inherently far superior sensitivity, as small deviations are magnified substantially. The most important of these instruments is the LaCoste-Romberg gravimeter, which utilizes a zero-length spring. In 1934, L. J. B. LaCoste found a method of producing a spring-balance system in which the restoring force depends inversely on the length of the spring. The result is a spring-balance system of great sensitivity. In fact, the sensitivity becomes unbounded as the actual length of the spring approaches zero. In practice, such meters are used in a null mode, where rotations of an adjustment screw are used to restore the mass to its original position. The deviation of local gravity from a reference value is then proportional to the number of turns of the screw. Major sources of error are atmospheric pressure changes and temperature variations. For these reasons, the mass and spring assembly are enclosed in a pressure- and temperature-controlled environment.

Most recently, transportable absolute gravity meters have been developed which are based upon timing the free fall of a mass. In this case, the mass is a corner cube reflector, and its velocity is measured with a laser interferometer system.
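The free-fall measurement just described reduces to fitting x(t) = x₀ + v₀t + ½gt² to the interferometer's position-time samples; g is twice the fitted quadratic coefficient. A minimal sketch on noise-free synthetic data (all values hypothetical):

```python
import numpy as np

# Absolute gravimetry by free fall: record (t, x) samples for the
# dropped corner cube and least-squares fit x = x0 + v0*t + 0.5*g*t^2.

g_true = 9.81234                            # hypothetical local gravity (m/s^2)
t = np.linspace(0.0, 0.2, 200)              # 0.2-second drop, 200 samples
x = 0.001 + 0.05 * t + 0.5 * g_true * t**2  # synthetic, noise-free positions

a2, a1, a0 = np.polyfit(t, x, 2)            # quadratic, linear, constant terms
g_est = 2.0 * a2
print(f"recovered g = {g_est:.5f} m/s^2")
```

In a real instrument the same fit is applied to laser-fringe timing data, with additional corrections (for example, for the vertical gravity gradient over the drop path).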
Accuracies are typically in the range of a few microgals if several hundred drops of the cube are measured.

CURRENT PROBLEMS: APPROACHING THE YEAR 2000

With the advent of the space age, following the launch of Sputnik in 1957, the science of geodesy entered a new era. As the chapters in this volume describe, a variety of new space-based observational techniques are under development which will allow significantly new approaches to old problems, and provide means for addressing scientific questions which were previously insoluble. As is implied by the foregoing historical discussion, current problems in geodesy revolve principally around positioning and gravity field determination. To this list can be added a new and rapidly evolving topic related to fundamental tests of natural laws, principally General Relativity and the inverse-square law of classical Newtonian gravitation (see the chapter by Paik). Precise positioning measurements play a critical role in many geodynamical problems, some of which are summarized in Walter (1984).

Following the approach of Jordan and Minster (1988), it is possible to classify the most important of these problems into two categories, secular and transient, and to classify the data types needed to address these problems according to whether motions are predominantly vertical or horizontal (see Chapter 2). Of the four distinct combinations, each typically demands its own approach, and, to obtain a solution, each typically has its own accuracy standards. The division of crustal motions into transient and secular reflects the increasing recognition that even now, space-based geodetic measurements are succeeding in defining the time-averaged, long-term motion of the plates to subcentimeter accuracies. That is, relative plate motion rates, when measured from the stable interior of plates and averaged over tens of years, are in most cases essentially identical to rates averaged over millions of years. Thus, the predictions of the global plate motion models (e.g., Minster and Jordan, 1978) are being confirmed with ever-increasing confidence. Where deviations from these rates occur, it seems clear that additional physical processes, unrepresented by the global models, are at work on a more local scale. As a result, it is becoming possible to address problems related to fluctuations about the long-term rates, that is to say, the processes which are responsible for producing transient crustal movements. As the long-term motions of arbitrary points become known to accuracies of several millimeters per year or better, it will become possible to subtract these motions from the instantaneous rates with confidence that what remains will have physical significance. The observations needed to carry out this program will have the characteristic properties of being frequent, spatially dense observations over geodetic networks of large scale, with positional accuracies of several millimeters.
In terms of dynamical problems related to plate motions, it is thus possible to think in terms of a variety of scales, both spatial and temporal. The average rates given by the global rigid plate motion models are defined over the preceding 2-3 million years, and the typical spatial scales represented are on the order of plate dimensions, that is to say, thousands of kilometers. Physical understanding of the processes responsible for these observations will come from increasingly sophisticated global convection models, which even now are yielding considerable insight into the long-term evolution of the earth. As motions on these space-time scales become better defined, many of the frontier-level problems will increasingly be focused on dynamical processes spanning decadal and shorter time scales, over spatial dimensions of tens to hundreds of kilometers. Motions on these time scales have considerable importance in a variety of phenomena, including earthquakes, polar motion and length of day, and post-glacial rebound studies. To the extent that transient dynamical processes of the solid earth are reflected in observed polar motion, the understanding of such diverse phenomena as climatic changes and ocean circulation will be greatly enhanced. Thus, in effect, knowledge of rigid plate motions

provides a space-time reference frame, as well as kinematic boundary conditions, for understanding much shorter-lived motions on time scales of hours to hundreds of years. Transient tectonic motions on spatial scales of hundreds to a thousand or so kilometers are often termed "regional" deformations. It is typically upon these spatial scales that one sees significant departures from the predictions of the rigid plate motion models. These departures are in the form of space-time variations in average motion, due, for example, to earthquakes, transient motions of the asthenosphere, and aseismic slip, as well as in the existence of complex boundary zones of deformation. Examples of the former can be seen at well-studied plate boundaries such as southern California, Japan, and Alaska. Examples of the latter can be seen in the western United States, the Alpide belt, Tibet, and the East African Rift Zone.

Of the transient motions, perhaps the most interesting is the apparent migration of stress and strain along plate boundary zones. One of the best examples is the sequence of earthquakes which occurred along the North Anatolian fault zone in Turkey between 1939 and 1967. These remarkable events, all of about magnitude 7, appeared to originate with the earthquake of December 26, 1939, which occurred in the northeastern part of the country. Subsequently, in 1942, 1943, 1944, 1953, 1957, and 1967, a sequence of events occurred progressively farther to the west, at an "average" rate of migration of something like 100 meters/day. In addition to these, an event occurred in 1966 to the east of the epicenter of the 1939 event. With the occurrence of these earthquakes, most of the North Anatolian fault, together with the northern segment of the East Anatolian fault, had ruptured. The most fascinating question is whether such a migration effect is a ubiquitous feature of fault zones.
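The quoted migration rate is easy to sanity-check. Taking the 1939-1967 sequence to span roughly 900 km of the fault (a rough assumption for illustration; the distance is not given in the text):

```python
# Back-of-envelope check of the earthquake migration rate along the
# North Anatolian fault. The rupture span is an assumed round number.

RUPTURE_SPAN_KM = 900.0            # assumed westward extent, 1939 -> 1967
INTERVAL_YEARS = 1967 - 1939       # 28 years

rate_m_per_day = RUPTURE_SPAN_KM * 1000.0 / (INTERVAL_YEARS * 365.25)
print(f"average migration rate ~ {rate_m_per_day:.0f} m/day")
```

This comes out near 90 m/day, consistent with the "something like 100 meters/day" figure quoted above.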
In fact, there is an accumulating, but still small, body of evidence which suggests causal relations in space and time between major earthquakes on a variety of plate boundaries. These effects have been observed in the Nankai region of southwest Japan, the Imperial Valley of California, the central Nevada seismic zone, and in recent paleoseismic studies of the San Andreas fault zone of southern California.

In addition to regional-scale events, an important class of local-scale episodic displacements is associated with volcanism. In a number of large caldera structures around the world, such as Long Valley in California, Rabaul in Papua New Guinea, and the Campi Flegrei west of Naples, Italy, displacements of perhaps a meter have occurred over intervals of a few years, without as yet an accompanying eruption. Typically, these caldera structures have areas of perhaps 500-1000 km². In another case, that of Mount Saint Helens, vertical motions of hundreds of meters occurred over about two months, culminating in the devastating eruption of May 18, 1980. These crustal motions occur at such a rapid pace that conventional land-based surveying cannot provide a detailed space-time picture of the course of the inflation, with the consequence that valuable information on the source process is lost. Space geodetic systems, by contrast, have the flexibility to observe

large networks of points relatively cheaply, or to operate unmanned in a continuous mode of operation. The most important and interesting problems in understanding dynamical processes associated with earthquakes and volcanoes are at regional scales, over time intervals of hours to hundreds of years. Slip along fault zones, for example, is accommodated by a combination of seismic and aseismic deformation which can be detected by geodetic methods. Given the fact that both slip in fault zones and volcanic phenomena occur over finite, nonzero regions in space and time, the fundamental dynamical question is how the system becomes organized in a physical sense into the space-time structures observed. Thus, the emerging point of view in the earth sciences is to seek a generic understanding of the dynamics of complex systems. This is a fundamentally process-oriented approach, rather than the more traditional observation-driven approach, and it mirrors the same fundamental change occurring in other fields of science. Implementation of this approach emphasizes the need for models to aid in understanding, the need for observations to test the models then following as a logical consequence. These observations will involve spatially dense, temporally frequent positioning and surface gravity change measurements.

The evolution in the frontier for current research involving the earth's gravity field has been no less dramatic than for the dynamics of the earth's crust. At the turn of the century, discrimination between differing compensation mechanisms, and study of the tides and the dynamical figures of the earth and moon, received the greatest attention. By contrast, gravity observations now play a critical role in almost all aspects of earth science research.
These areas include understanding the structure of the earth's lithosphere and its interaction with the underlying asthenosphere, thermal structure of the continental lithosphere and the nature of the driving forces of plate tectonics, and the composition and rheology of the mantle (NASA, Report of a Gravity Workshop, 1987; see also the chapter by McNutt). To address these questions, accuracy requirements are typically in the microgal range over horizontal distances of tens to hundreds of kilometers. In addition to these structural questions, repeat measurements of gravity at the microgal level are a ready means of measuring, on a temporally continuous basis, small vertical motions of the crust. Typically, these vertical motions are most important in post-glacial rebound studies, where one expects to observe changes of a few microgals per year, and in association with volcanoes or thrust faults, where changes of tens to hundreds of microgals may be observed over time intervals of hours to years. In studies of the oceanic lithosphere, important questions involve the degree to which structural features formed by dynamical processes, such as mid-oceanic ridges, fracture zones, and seamounts, are isostatically compensated, and upon what time scale this process occurs. Of late, data from several satellite altimeter missions, including

SEASAT, GEOSAT, and GEOS-3, have played an important role in mapping the oceanic geoid. With these missions, uniform mapping of the ocean's gravity field and geoid has been achieved for the first time. However, due to military constraints, these data are usually obtained only in modes where the orbits have been made to repeat every few weeks. Hence, spatial coverage in the along-track direction is far better than in the cross-track direction. By contrast, bathymetry data obtained by shipborne instruments such as Seabeam or SeaMARC are able to provide far more detailed topographic coverage on the local scale of features such as fracture zones and mid-oceanic ridges. Nevertheless, there are problems that can only be addressed by the use of satellite geodetic data, such as the origin of the large gravity anomalies observed at oceanic trenches, and the mechanisms by which mid-plate swells and plateaus are supported.

For the continental lithosphere, gravity data allow models of rifting and continental extension to be systematically tested. For example, locations of continental rift zones are often associated with prominent gravity anomalies. Gravity data can also provide information on the depth and lateral extent of sedimentary basins, as well as upon the nature and extent of the roots of mountain belts. Moreover, gravity data offer important constraints on the deep structure of the continental lithosphere, whose thickness and physical properties are not at present well understood.

But perhaps the most critical role which gravity observations play is in determining the long-term dynamics of the interior of the earth. It is clear from a variety of observations that the motions in the mantle are driven by thermal convection, implying the existence of laterally heterogeneous density variations.
In concert with recent advances in seismic data analysis techniques, the gravity field of the earth, together with the shape of the geoid, provide the most important constraints on the density contrasts in the earth's deep interior. To the extent that these density variations are physically related to the convective processes which drive the plates over hundred-million year time scales, gravity data play a key role in unraveling the mechanism for long term mantle dynamics. And of course, since it is widely recognized that the source of the geomagnetic dynamo is undoubtedly convective motions within the fluid outer core, geoid and gravitational observations place critical constraints on the generation of the earth's magnetic field. The most important problem areas in understanding the long term dynamics of the earth involve understanding the depth of penetration of subducted slabs, that is to say, the vertical scale of mantle convection; determining the viscosity structure of the mantle; understanding the possible role of small scale (100 kilometer wavelength) convection in the upper mantle; and investigation of the still-unresolved source of the long-term stability of mantle plumes. In general, these problems require global gravity field coverage, with

accuracies on the order of milligals, and resolution of roughly 100 kilometers. In addition, there exists considerable interest in obtaining an improved marine geoid, at the level of 0.1 meter accuracy over wavelengths of 100-200 kilometers. The rationale lies in the search for an improved understanding of general circulation and dynamic topography in the oceans, which is in turn motivated by questions related to atmosphere-ocean interaction and problems related to global climate change.

As a final note, precise determinations of the earth's gravity field have taken on new importance in light of recent suggestions that classical Newtonian gravitation should be revised to admit shorter-range interactions (the popularly termed fifth and sixth forces). These suggestions stem from analysis of anomalous kaon decays seen in a few accelerator experiments, from new geophysical determinations of G from experiments in mine shafts, from gravity observations in boreholes in icecaps and on towers, and from reanalysis of the classical Eotvos experiments. As yet this controversy is unresolved, but the several-hundred-meter wavelengths suggested for the interaction range should be visible in some precise satellite tracking experiments, through precise gravity gradiometry, and by more conventional terrestrial means. In addition, there is considerable interest in using satellite gravity studies to test predictions of General Relativity, such as the rate of precession of a space-borne spinning gyroscope by the Lense-Thirring effect, otherwise known as the "dragging of inertial frames." The rate of precession depends on the intensity of the local gravitational field.

ACKNOWLEDGEMENTS

I am indebted to my colleagues on the Committee on Geodesy, namely C. Goad, T. Dixon, E. Metzger, J. B. Minster, R. Sailor, R. Stein, and H. Orlin, for reviews. The work contained in this paper was supported under contract DE-AC04-76DP00789.

REFERENCES

Bowie, W., Earth Movements in California, Spec. Publ. 106, U.S. Govt. Printing Office, Washington, DC, 1924.

Douglas, N. B., Satellite Laser Ranging and Geologic Constraints on Plate-Tectonic Motion, M.S. Thesis, University of Miami, 1988.

Hayford, J. F. and A. L. Baldwin, The Earth Movements in the California Earthquake of 1906, in Lawson, A. C. and others, The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, Carnegie Institution of Washington, Washington, DC, Volume I, 1908.

Hosmer, G. L., Geodesy, Including Astronomical Observations, Gravity Measurements and Method of Least Squares, John Wiley & Sons, New York, 1919.

Jeffreys, H., The Earth: Its Origin, History and Physical Constitution, Cambridge University Press, Cambridge, 1976.

Jordan, T. H. and J. B. Minster, Beyond Plate Tectonics: Looking at Plate Deformation with Space Geodesy, in The Impact of VLBI on Astrophysics and Geophysics, Proc. IAU Symp. 129, ed. M. J. Reid and J. M. Moran, Reidel, Dordrecht, 1988.

Lawson, A. C. and others, The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, Carnegie Institution of Washington, Washington, DC, Volume I, 1908.

Minster, J. B. and T. H. Jordan, Present-Day Plate Motions, J. Geophys. Res., 83, 5331-5354, 1978.

NASA, Geophysical and Geodetic Requirements for Global Gravity Field Measurements, 1987-2000, Report of a Gravity Workshop, Colorado Springs, 1987, Geodynamics Branch, Earth Science and Applications Division, NASA, 1987.

Reid, H. F., The California Earthquake of April 18, 1906, Report of the State Earthquake Investigation Commission, Carnegie Institution of Washington, Washington, DC, Volume II, The Mechanics of the Earthquake, 1910.

Telford, W. M., L. P. Geldart, R. E. Sheriff, and D. A. Keys, Applied Geophysics, Cambridge University Press, Cambridge, 1976.

Walter, L. S., Geodynamics, NASA Conference Publication 2325, Earth Science and Applications Division, NASA, 1984.

[Figure 1 is a map of the Coast Range region of middle California; its legend distinguishes the fault movement of 1906, successive movements of 1868 and 1906, and combined movements of 1868 and 1906.]

Figure 1. Crustal motions, obtained by first-order triangulation near San Francisco, California, during the years 1851-1868 and 1868-1906. Vector motions are due to the earthquakes of 1868 and 1906, as well as to long-term interseismic crustal motion.

[Figure 2 is a scatter plot of observed interplate baseline rates (sigma < 15 mm/yr) against Minster & Jordan (1978) model values; the best-fit line has slope 0.9416 and correlation 0.9739.]

Figure 2. Rates of baseline length change obtained by Satellite Laser Ranging, compared with rates predicted by the global, rigid-plate motion model of Minster and Jordan (1978). Departures of the observed rates from the predicted rates are due to violation of the assumptions inherent in the rigid-plate models. Figure taken from Douglas (1988).