Living on an Active Earth: Perspectives on Earthquake Science (2003)

2
The Rise of Earthquake Science

Earthquakes have engaged human inquiry since ancient times, but the scientific study of earthquakes is a fairly recent endeavor. Instrumental recordings of earthquakes were not made until the last quarter of the nineteenth century, and the primary mechanism for the generation of earthquake waves—the release of accumulated strain by sudden slippage on a fault—was not widely recognized until the beginning of the twentieth century. The rise of earthquake science during the last hundred years illustrates how the field has progressed through a deep interplay among the disciplines of geology, physics, and engineering (1). This chapter presents a historical narrative of the development of the basic concepts of earthquake science that sets the stage for later parts of the report, and it concludes with some historical lessons applicable to future research.

2.1 EARLY SPECULATIONS

Ancient societies often developed religious and animistic explanations of earthquakes. Hellenic mythology attributed the phenomenon to Poseidon, the god of the sea, perhaps because of the association of seismic shaking with tsunamis, which are common in the northeastern Mediterranean (Figure 2.1). Elsewhere, earthquakes were connected with the movements of animals: a spider or catfish (Japan), a mole or elephant (India), an ox (Turkey), a hog (Mongolia), and a tortoise (Native America). The Norse attributed earthquakes to subterranean writhing of the imprisoned god Loki in his vain attempt to avoid venom dripping from a serpent’s tooth.

FIGURE 2.1 The fallen columns in Susita (Hippos) east of the Sea of Galilee from a magnitude ~7.5 earthquake on the Dead Sea transform fault in A.D. 749. SOURCE: A. Nur, And the walls came tumbling down, New Scientist, 6, 45-48, 1991. Copyright A. Nur.

Some sought secular explanations for earthquakes and their apocalyptic consequences (Box 2.1, Figure 2.2). For example, in 31 B.C. a strong earthquake devastated Judea, and the historian Josephus recorded a speech by King Herod given to raise the morale of his army in its aftermath (2): “Do not disturb yourselves at the quaking of inanimate creatures, nor do you imagine that this earthquake is a sign of another calamity; for such affections of the elements are according to the course of nature, nor does it import anything further to men than what mischief it does immediately of itself.”

BOX 2.1 Ruins of the Ancient World

The collision of the African and Eurasian plates causes powerful earthquakes in the Mediterranean and Middle East. Some historical accounts document the damage from particular events. For example, a Crusader castle overlooking the Jordan River in present-day Syria was sheared by a fault that ruptured it at dawn on May 20, 1202.1 In most cases, however, such detailed records have been lost, so that the history of seismic destruction can be inferred only from archaeological evidence. Among the most convincing is the presence of crushed skeletons, which are not easily attributable to other natural disasters or war and have been found in the ruins of many Bronze Age cities, including Knossos, Troy, Mycenae, Thebes, Midea, Jericho, and Megiddo.

Recurring earthquakes may explain the repeated destruction of Troy, Jericho, and Megiddo, all built near major active faults. Excavation of the ancient city of Megiddo—Armageddon in the Biblical prophecy of the Apocalypse—reveals at least four episodes of massive destruction, as indicated by widespread debris, broken pottery, and crushed skeletons.2 Similarly, a series of devastating earthquakes could have destabilized highly centralized Bronze Age societies by damaging their centers of power and leaving them vulnerable to revolts and invasions.3 Historical accounts document such “conflicts of opportunity” in the aftermath of earthquakes in Jericho (~1300 B.C.), Sparta (464 B.C.), and Jerusalem (31 B.C.).

1. R. Ellenblum, S. Marco, A. Agnon, T. Rockwell, and A. Boas, Crusader castle torn apart by earthquake at dawn, 20 May 1202, Geology, 26, 303-306, 1998.

2. A. Nur and H. Ron, Armageddon’s earthquakes, Int. Geol. Rev., 39, 532-541, 1997.

3. A. Nur, The end of the Bronze Age by large earthquakes? in Natural Catastrophes During Bronze Age Civilisations, B.J. Peiser, T. Palmer, and M.E. Bailey, eds., British Archaeological Reports International Series 728, Oxford, pp. 140-147, 1998.

Several centuries before Herod’s speech, Greek philosophers had developed a variety of theories about natural origins of seismic tremors based on the motion of subterranean seas (Thales), the falling of huge blocks of rock in deep caverns (Anaximenes), and the action of internal fires (Anaxagoras). Aristotle in his Meteorologica (about 340 B.C.) linked earthquakes with atmospheric events, proposing that wind in underground caverns produced fires, much as thunderstorms produced lightning. The bursting of these fires through the surrounding rock, as well as the collapse of the caverns burned by the fires, generated the earthquakes. In support of this hypothesis, Aristotle cited his observation that earthquakes tended to occur in areas with caves. He also classified earthquakes according to whether the ground motions were primarily vertical or horizontal and whether they released vapor from the ground.

FIGURE 2.2 The remains of a family—a man, woman, and child (small skull visible next to the woman’s skull)—crushed to death in A.D. 365 in the city of Kourion on the island of Cyprus when their dwelling collapsed on top of them during an earthquake. SOURCE: D. Soren, The day the world ended at Kourion, National Geographic, 30-53, July 1988; Copyright Martha Cooper.

He noted that “places whose subsoil is poor are shaken more because of the large amount of the wind they absorb.” The correlation he observed between the intensity of the ground motions and the weakness of the rocks on which structures are built remains central to seismic hazard analysis.

2.2 DISCOVERY OF SEISMIC FAULTING

Aristotle’s ideas and their variants persisted well into the nineteenth century (3). In the early 1800s, geology was a new scientific discipline, and most of its practitioners believed that earthquakes were caused by volcanism, since both phenomena are common in geologically active regions. A vigorous adherent to the volcanic theory was the Irish engineer Robert Mallet, who coined the term seismology in his quantitative study of the 1857 earthquake in southern Italy (4). By this time, however, evidence had been accumulating that earthquakes are incremental episodes in the building of mountain belts and other large crustal structures, a process that geologists named tectonics. Charles Lyell, in the fifth edition of his seminal book The Principles of Geology (1837), was among the first to recognize that large earthquakes sometimes accompany abrupt changes in the ground surface (5). He based this conclusion on reports of the 1819 Rann of Cutch (Kachchh) earthquake in western India—near the disastrous January 26, 2001, Bhuj earthquake—and, in later editions, on the Wairarapa, New Zealand, earthquake of 1855. A protégé of Lyell’s, Charles Darwin, experienced a great earthquake while visiting Chile in 1835 during his voyage on the H.M.S. Beagle. Following the earthquake, he and Captain FitzRoy noticed that in many places the coastline had risen several meters, causing barnacles to die because of prolonged exposure to air. He also noticed marine fossils in sediments hundreds of meters above the sea and concluded that seismic uplift was the mechanism by which the mountains of the coast had risen. Darwin applied James Hutton’s principle of uniformitarianism—“the present is the key to the past”—and inferred that the mountain range had been uplifted incrementally by many earthquakes over many millennia (6).

Fault Slippage as the Geological Cause of Earthquakes

The leap from these observations to the conclusion that earthquakes result from slippage on geological faults was not a small one. The vast majority of earthquakes are accompanied by no surface faulting, and even when such ruptures had been found, questions arose as to whether the ground breaking shook the Earth or the Earth shaking broke the ground. Moreover, the methodology for mapping fault displacements and understanding their relationships to geological deformations, the discipline of structural geology, had not yet been systematized. A series of field studies—by G.K. Gilbert in California (1872), A. McKay in New Zealand (1888), B. Koto in Japan (1891), and C.L. Griesbach in Baluchistan (1892)—demonstrated that fault motion generates earthquakes by documenting that the surface faulting associated with each of these events was consistent with the long-term, regional tectonic deformation that geologists had mapped (Figure 2.3).

FIGURE 2.3 Photograph of the 1891 Nobi (Mino-Owari) earthquake scarp at Midori, taken by B. Koto, a professor of geology at the Imperial University of Tokyo. Based on his geological investigations, Koto concluded, “The sudden elevations, depressions, or lateral shiftings of large tracts of country that take place at the time of destructive earthquakes are usually considered as the effects rather than the cause of subterranean commotion; but in my opinion it can be confidently asserted that the sudden formation of the ‘great fault of Neo’ was the actual cause of the great earthquake.” This photograph appeared in The Great Earthquake in Japan, 1891, published by the Seismological Society of Japan, which was one of the first comprehensive scientific reports of an earthquake. The damage caused by the Nobi earthquake motivated Japan to create an Earthquake Investigation Committee, which set up the first government-sponsored research program on the causes and effects of earthquakes. SOURCE: J. Milne and W.K. Burton, The Great Earthquake in Japan, 1891, 2nd ed., Lane, Crawford & Co., Yokohama, Japan, 69 pp. + 30 plates, 1892.

Among the geological investigations of this early phase of tectonics, Gilbert’s studies in the western United States were seminal for earthquake science. From the new fault scarps of the 1872 Owens Valley earthquake, he observed that the Sierra Nevada, bounding the west side of the valley, had moved upward and away from the valley floor. This type of faulting was consistent with his theory that the Basin and Range Province between the Sierra Nevada and the Wasatch Mountains of Utah had been formed by tectonic extension (7).

FIGURE 2.4 Aerial view, Salt Lake City, showing the scarps of active normal faults of the Wasatch Front, first recognized by G.K. Gilbert. SOURCE: Utah Geological Survey.

He also recognized the similarity of the Owens Valley break to a series of piedmont scarps along the Wasatch Front near Salt Lake City (Figure 2.4). By careful geological analysis, he documented that the Wasatch scarps were probably caused by individual fault movements during the recent geological past. This work laid the foundation for paleoseismology, the subdiscipline of geology that employs features of the geological record to deduce the fault displacement and age of individual, prehistoric earthquakes (8).

Geological studies were supplemented by the new techniques of geodesy, which provide precise data on crustal deformations. Geodesy grew out of two practical arts, astronomical positioning and land surveying, and became established as a field of scientific study in the mid-nineteenth century. One of the first earthquakes to be measured geodetically was the Tapanuli earthquake of May 17, 1892, in Sumatra, which happened during a triangulation survey by the Dutch Geodetic Survey. The surveyor in charge, J.J.A. Müller, discovered that the angles between the survey monuments had changed during the earthquake, and he concluded that a horizontal displacement of at least 2 meters had occurred along a structure later recognized to be a branch of the Great Sumatran fault. R.D. Oldham

of the Geological Survey of India inferred that the changes in survey angles and elevations following the great Assam earthquake of June 12, 1897, were due to co-seismic tectonic movements. C.S. Middlemiss reached the same conclusion for the Kangra earthquake of April 4, 1905, also in the well-surveyed foothills of the Himalaya (9).

Mechanical Theories of Faulting

The notion that earthquakes result from fault movements linked the geophysical disciplines of seismology and geodesy directly to structural geology and tectonics, whose practitioners sought to explain the form, arrangement, and interrelationships among the rock structures in the upper part of the Earth’s crust. Although Hutton, Lyell, and the other founders of the discipline of geology had investigated the great vertical deformations required by the rise of mountain belts, the association of these deformations with large horizontal movements was not established until the latter part of the nineteenth century (10). Geological mapping showed that some horizontal movements could be accommodated by the ductile folding of sedimentary strata and plastic distortion of igneous rocks, but that much of the deformation takes place as cataclastic flow (i.e., as slippage in thin zones of failure in the brittle materials that make up the outer layers of the crust). Planes of failure on the larger geological scales are referred to as faults, classified as normal, reverse, or strike-slip according to their orientation and the direction of slip (Figure 2.5).

In 1905, E.M. Anderson (11) developed a successful theory of these faulting types, based on the premises that one of the principal compressive stresses is oriented vertically and that failure is initiated according to a rule published in 1781 by the French engineer and mathematician Charles Augustin de Coulomb. The Coulomb criterion states that slippage occurs when the shear stress on a plane reaches a critical value τc that depends linearly on the effective normal stress σneff acting across that plane:

τc = τ0 + µσneff, (2.1)

where τ0 is the (zero-pressure) cohesive strength of the rock and µ is a dimensionless number called the coefficient of internal friction, which usually lies between 0.5 and 1.0. Anderson’s theory made quantitative predictions about the angles of shallow faulting that fit the observations rather well (except in regions where fault planes were controlled by strength anisotropy like sedimentary layering). However, it could not explain the existence of large, nearly horizontal thrust sheets that formed at deeper structural levels in many mountain belts.

FIGURE 2.5 Fault types showing principal stress axes and Coulomb angles. For a typical coefficient of friction in rocks (µ ≈ 0.6), the Coulomb criterion (Equation 2.1) implies that a homogeneous, isotropic material should fail under triaxial stress (σ1 > σ2 > σ3) along a plane that contains the σ2 axis and lies at an angle about 30° to the σ1 direction. According to Anderson’s theory, normal faults should therefore occur where the vertical stress σV is the maximum principal stress σ1, and the initial dips of these extensional faults should be steep (about 60°); reverse faults (σV = σ3) should initiate as thrusts with shallow dips of about 30°, and strike-slip faults (σV = σ2) should develop as vertical planes striking at about 30° to the σ1 direction. SOURCE: Reprinted from K. Mogi, Earthquake Prediction, Academic Press, Tokyo, 355 pp., 1985, Copyright 1985 with permission from Elsevier Science.
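
The 30-degree figure in the caption follows from elementary Mohr-circle geometry: the optimal Coulomb failure plane lies at 45° − φ/2 from the σ1 axis, where φ = arctan µ is the angle of internal friction. The short Python sketch below is only an illustrative check of that arithmetic, not a reproduction of Anderson’s analysis.

```python
import math

def coulomb_failure_angle(mu):
    """Angle (degrees) between the optimal Coulomb failure plane and the sigma_1 axis."""
    phi = math.degrees(math.atan(mu))   # angle of internal friction
    return 45.0 - phi / 2.0

for mu in (0.5, 0.6, 0.85, 1.0):
    theta = coulomb_failure_angle(mu)
    # With sigma_1 vertical (normal faulting) the plane dips at 90 - theta degrees;
    # with sigma_1 horizontal (thrust faulting) it dips at theta degrees.
    print(f"mu = {mu:4.2f}: failure plane at {theta:4.1f} deg to sigma_1 "
          f"(normal-fault dip ~{90 - theta:4.1f} deg, thrust dip ~{theta:4.1f} deg)")
```

For µ ≈ 0.6 this gives planes at roughly 30° to σ1, hence the ~60° normal-fault dips and ~30° thrust dips quoted in the caption.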

Owing to the large lithostatic load, the total normal stress σn acting on such fault planes was much greater than any plausible tectonic shear stress, so it was difficult to see how failure could happen. M.K. Hubbert and W.W. Rubey resolved this quandary in 1959 (12) by recognizing that the effective normal stress in the Coulomb criterion should be the difference between σn and the fluid pressure Pf:

σneff = σn – Pf. (2.2)

They proposed that overthrust zones were overpressurized; that is, Pf in these zones was substantially greater than the pressure expected for hydrostatic equilibrium and could approach lithostatic values (13). Hence, σneff could be much smaller than σn. Overpressurization may explain why some faults, such as California’s San Andreas, appear to be exceptionally weak.
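
A few illustrative numbers make the Hubbert-Rubey argument concrete. The depth, densities, cohesion, and friction coefficient in the following sketch are assumed textbook values rather than figures from the report; it simply evaluates Equations 2.1 and 2.2 for a deep thrust plane with and without overpressure.

```python
# Illustrative evaluation of the Coulomb criterion (Eqs. 2.1 and 2.2) at 10 km depth.
g = 9.8                               # gravitational acceleration, m/s^2
depth = 10e3                          # depth of the thrust plane, m (assumed)
rho_rock, rho_water = 2700.0, 1000.0  # densities, kg/m^3 (assumed)
tau0, mu = 10e6, 0.6                  # cohesion (10 MPa) and friction coefficient (assumed)

sigma_n = rho_rock * g * depth        # total (lithostatic) normal stress, Pa

cases = {
    "hydrostatic fluid pressure": rho_water * g * depth,
    "near-lithostatic (overpressured)": 0.95 * sigma_n,
}
for label, pf in cases.items():
    sigma_eff = sigma_n - pf                  # Eq. 2.2
    tau_fail = tau0 + mu * sigma_eff          # Eq. 2.1: shear stress needed for slip
    print(f"{label:33s}: sigma_n_eff = {sigma_eff/1e6:6.1f} MPa, "
          f"shear stress for failure = {tau_fail/1e6:5.1f} MPa")
```

With hydrostatic pore pressure the frictional resistance is on the order of 100 MPa, far beyond plausible tectonic shear stresses; raising Pf to near-lithostatic values drops it below 20 MPa, which is how overpressurization makes low-angle thrusting (and perhaps a weak San Andreas) mechanically feasible.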

Elastic Rebound Model

When a 470-kilometer segment of the newly recognized San Andreas rift ruptured across northern California in 1906 (Box 2.2, Figures 2.6 and 2.7), both geologists and engineers jumped at the opportunity to observe first-hand the effects of a major earthquake.

BOX 2.2 San Francisco, California, 1906

At approximately 5:12 a.m. local time on April 18, 1906, a small fracture nucleated on the San Andreas fault at a depth of about 10 kilometers beneath the Golden Gate (20). The rupture expanded outward, quickly reaching its terminal velocity of about 2.5 kilometers per second (5600 miles per hour). Its upper front broke through the ground surface at the epicenter within a few seconds, and its lower front decelerated as it spread downward into the more ductile levels of the middle crust, while the two sides continued to propagate in opposite directions along the San Andreas. Near the epicenter, the rupture displaced the opposite sides of the fault rightward by an average of about 4 meters (a right-lateral strike-slip). On the southeastern branch, the total slip diminished as the rupture traveled down the San Francisco peninsula and vanished 100 kilometers away from the epicenter. To the northwest, the fracture ripped across the neck of the Point Reyes peninsula and entered Tomales Bay, where the total slip increased to 7 meters, sending out seismic waves that damaged Santa Rosa, Fort Ross, and other towns of the northern Coast Ranges. The rupture continued up the coast to Point Arena, where it went offshore, eventually stopping near a major bend in the fault at Cape Mendocino (Figure 2.6). At least 700 people were killed, perhaps as many as 3000, and many buildings were severely damaged.1 In San Francisco, the quake ignited at least 60 separate fires, which burned unabated for three days, consuming 42,000 buildings and destroying a considerable fraction of the West Coast’s largest city.

1. G. Hansen and E. Condon, Denial of Disaster: The Untold Story of the San Francisco Earthquake and Fire of 1906, Cameron and Co., San Francisco, 160 pp., 1989. These authors present evidence that the scale of the 1906 disaster, in terms of both property destroyed and lives lost, was deliberately underreported to protect economic interests. They also argue that the damage directly caused by the earthquake was preferentially reported as fire damage, because the latter was more likely to be covered by insurance.

Three days after the earthquake, while the fires of San Francisco were still smoldering, California Governor George C. Pardee appointed a State Earthquake Investigation Commission, headed by Berkeley Professor Andrew C. Lawson, to coordinate a wide-ranging set of scientific and engineering studies (14). The first volume of the Lawson Report (1908) compiled reports by more than 20 specialists on a variety of observations: the geological setting of the San Andreas; the fault displacements inferred from field observations and geodetic measurements; reports of the arrival time, duration, and intensity of the seismic waves; seismographic recordings from around the world; and detailed surveys of the damage to structures throughout Northern California. The latter demonstrated that the destruction was closely related to building design and construction, as well as to local geology. The intensity maps of San Francisco clearly show that some of the strongest shaking occurred in the soft sediment of China Basin and in the present Marina district, two San Francisco neighborhoods that would be severely damaged in the Loma Prieta earthquake some 83 years later (15).

FIGURE 2.6 San Andreas fault system in California, showing the extent of the surface rupture, damage area, and felt area of the 1906 earthquake. SOURCE: T.H. Jordan and J.B. Minster, Measuring crustal deformation in the American west, Sci. Am., 256, 48-58, 1988. Illustration by Hank Iken.

FIGURE 2.7 Panoramic view of the ruins of San Francisco after the April 1906 earthquake and fire, viewed from the Stanford Mansion site. SOURCE: Lester Guensey, Library of Congress, Prints and Photographs Division, [LC-USZ62-123408 DLC].

This interdisciplinary synthesis is still being mined for information about the 1906 earthquake and its implications for future seismic activity (16).

Professor Henry Fielding Reid of Johns Hopkins University wrote the second volume of the Lawson Report (1910), presenting his celebrated elastic rebound hypothesis. Reid’s 1911 follow-up paper (17) summarized his theory in five propositions:

  • The fracture of the rocks, which causes a tectonic earthquake, is the result of elastic strains, greater than the strength of the rock can withstand, produced by the relative displacements of neighboring portions of the earth’s crust.

  • These relative displacements are not produced suddenly at the time of the fracture, but attain their maximum amounts gradually during a more or less long period of time.

  • The only mass movements that occur at the time of the earthquake are the sudden elastic rebounds of the sides of the fracture towards positions of no elastic strain; and these movements extend to distances of only a few miles from the fracture.

  • The earthquake vibrations originate in the surface of the fracture; the surface from which they start is at first a very small area, which may quickly become very large, but at a rate not greater than the velocity of compressional elastic waves in rock.

  • The energy liberated at the time of an earthquake was, immediately before the rupture, in the form of energy of elastic strain of the rock.

Today all of these propositions are accepted with only minor modifications (18). Although some geologists, for at least the latter half of the nineteenth century, had considered the notion that most large earthquakes result from fault slippage, Reid’s hypothesis was boldly revolutionary.

The horizontal tectonic displacements he postulated had no well-established geologic basis, for example, and they would remain mysterious until the plate-tectonic revolution of the 1960s (19).

2.3 SEISMOMETRY AND THE QUANTIFICATION OF EARTHQUAKES

In 1883, the English mining engineer John Milne suggested that “it is not unlikely that every large earthquake might with proper appliances be recorded at any point of the globe.” His vision was fulfilled six years later when Ernst von Rebeur-Paschwitz recorded seismic waves on delicate horizontal pendulums at Potsdam and Wilhelmshaven in Germany from the April 17, 1889, earthquake in Tokyo, Japan. By the turn of the century, the British Association for the Advancement of Science was sponsoring a global network of more than 40 stations, most equipped with instruments of Milne’s design (21); other deployments followed, expanding the coverage and density of seismographic recordings (22). Working with records of the great Assam earthquake of June 12, 1897, Oldham identified three basic wave types: the small primary (P or compressional) and secondary (S or shear) waves that traveled through the body of the Earth and the “large” (L) waves that propagated across its outer surface (23).

Hypocentral Locations and Earth Structure

Milne investigated the velocities of the P, S, and L waves by plotting their travel times as a function of distance for earthquakes whose location had been fixed by local observations. From curves fit to these travel times, he could then determine the distance from the observing stations to an event with an unknown epicenter, and he could fix its location from the

intersection of arcs drawn at the estimated distance from three or more such stations. By applying this simple technique, he and others began to compile catalogs of instrumentally determined earthquake epicenters (24).
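
Milne’s graphical location method translates directly into a small computation: convert travel times to distances, then find the point whose distances to three or more stations best match. The sketch below is a toy flat-plane version with invented station coordinates, intended only to illustrate the arc-intersection idea.

```python
import math

# Invented station coordinates (km) and a test epicenter, purely for illustration.
stations = [(0.0, 0.0), (400.0, 50.0), (150.0, 300.0)]
true_epicenter = (250.0, 120.0)
distances = [math.dist(s, true_epicenter) for s in stations]  # "measured" epicentral distances

# Grid search for the point whose distances best match the measurements
# (a least-squares stand-in for drawing and intersecting three arcs).
best, best_misfit = None, float("inf")
for x in range(0, 501):
    for y in range(0, 501):
        misfit = sum((math.dist((x, y), s) - d) ** 2 for s, d in zip(stations, distances))
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit

print("recovered epicenter:", best, "  true epicenter:", true_epicenter)
```

The recovered point matches the test epicenter; on the real Earth the same logic is applied with great-circle distances and many more stations.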

Improved locations meant that seismologists could use the travel times of the seismic waves to develop better models of the variations of wave velocities with depth, which in turn could be used to improve the location of the earthquake’s initial radiation (hypocenter), as well as its origin time. This cycle of iterative refinement of Earth models and earthquake locations, along with advances in the distribution and quality of the seismometer networks, steadily decreased the uncertainties in both. It also led to some major discoveries. In 1906, Oldham presented the first seismological evidence that the Earth had a central core, and in 1914, Beno Gutenberg obtained a relatively precise depth (about 2900 kilometers) to the boundary between the core and the solid-rock shell or mantle (from the German Mantel, meaning coat) surrounding it. From regional recordings of the 1909 Croatian earthquake, the Croatian seismologist Andrija Mohorovičić discovered the sharp increase in seismic velocities that bears his name, often abbreviated the Moho, which separates the lighter, more silica-rich crust from the ultramafic (iron- and magnesium-rich) mantle.

After Milne’s death in 1913, H.H. Turner, an Oxford professor, took over the determination of earthquake hypocenters and origin times. Turner’s efforts to compile earthquake data systematically led, after the First World War, to the founding of the International Seismological Summary (ISS) (25). While preparing the ISS bulletins, Turner (1922) noticed some events with anomalous travel times, which he proposed had hypocenters much deeper than those of typical earthquakes. In 1928, Kiyoo Wadati established the reality of such “deep-focus” earthquakes as much as 700 kilometers beneath volcanic arcs such as Japan and the Marianas, and he subsequently delineated planar regions of seismicity (now called Wadati-Benioff zones) extending from the ocean trenches at the face of the arcs down to these deep events. The Danish seismologist Inge Lehmann discovered the Earth’s inner core in 1936; this “planet within a planet” has since been shown to be a solid metallic sphere two-thirds the size of the Moon at the center of the liquid iron-nickel outer core. By the time Harold Jeffreys and Keith Bullen finalized their travel-time tables in 1940, the Earth’s internal structure was known well enough to estimate the hypocenters of large earthquakes with a standard error often less than 10 kilometers and origin times with a standard error of less than 2 seconds (26).

Earthquake Magnitude and Energy

The next important step in the development of instrumental seismology was the quantification of earthquake size. Maps of seismic damage

were made in Italy as early as the late eighteenth century. In the 1880s, M.S. Rossi of Italy and F. Forel of Switzerland defined standards for grading qualitative observations by integer values that increase with the amount of shaking and disruption. Versions of their “intensity scale,” as modified by G. Mercalli and others, are still used to map intensity after strong events (27), but they do not measure the intrinsic size of an earthquake, nor can they be applied to events that humans have not felt and observed (i.e., almost all earthquakes). The availability of instrumental recordings and the desire to standardize the seismological bulletins motivated seismologists to estimate the intrinsic size of earthquakes by measuring the amplitude of the seismic waves at a station and correcting them for propagation effects, such as the spreading out of wave energy and its attenuation by internal friction. Several such scales were developed, including one by Wadati in 1931, but the most popular and successful schemes were based on the standard magnitude scale that Charles Richter of Caltech published in 1935.

Richter recognized that seismographic amplitude provides a first-order measure of the radiated energy but that these data are highly variable depending on the type of seismograph, distance to the earthquake, and local site conditions. To normalize for these factors, he considered only southern California earthquakes recorded on Caltech’s standardized network of Wood-Anderson torsion seismometers (28). He defined the local magnitude scale for such events by the formula

ML = log A – log A0, (2.3)

where A is the maximum amplitude of the seismic trace on the standard seismogram; A0 is the amplitude at that same distance for a reference earthquake with ML = 0; and all logarithms are base 10. He fixed the reference level A0 by specifying a magnitude-zero earthquake as an event with an amplitude of 1 micron on a standard Wood-Anderson seismogram at a distance of 100 kilometers (29). An earthquake of magnitude 3.0 thus had an amplitude of 1 millimeter at 100 kilometers, which was about the smallest level measurable on this type of pen-written seismogram (30). Corrections for recordings made at other distances were determined empirically and incorporated into a simple graphic procedure.
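
Richter’s definition is easy to verify numerically at the calibration distance. The sketch below uses only the 100-kilometer reference level stated above (−log A0 = 3 at 100 kilometers); the empirical distance corrections are omitted, so it is an illustration rather than a general ML formula.

```python
import math

def local_magnitude_at_100km(amplitude_mm):
    """Eq. 2.3 at the 100-km calibration distance, where -log10(A0) = 3
    (A0 = 0.001 mm, the trace amplitude of a magnitude-zero event)."""
    return math.log10(amplitude_mm) + 3.0

for amp_mm in (0.001, 1.0, 10.0, 100.0):
    print(f"A = {amp_mm:8.3f} mm on a Wood-Anderson record at 100 km -> ML = "
          f"{local_magnitude_at_100km(amp_mm):.1f}")
```

A 1-millimeter trace at 100 kilometers comes out as ML = 3.0, the smallest signal conveniently read on the original pen-written records.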

During the next decade, Richter and Gutenberg refined and extended the methodology to include earthquakes recorded by various instrument types and at teleseismic distances. Gutenberg published a series of papers in 1945 detailing the construction of magnitude scales based on the maximum amplitude of long-period surface waves (MS), which could be applied to shallow earthquakes at any distance, and teleseismic body waves (mb), which could be applied to earthquakes too deep to excite ordinary

surface waves. To the extent possible, these scales were calibrated to agree with Richter’s definition of magnitude, although various discrepancies became apparent as experience accumulated (31). In 1956, Gutenberg and Richter used surface-wave magnitudes as the basis for an energy formula (with E in joules):

log E = 1.5MS + 4.8. (2.4)

This relationship implies that earthquake energies vary over at least 12 orders of magnitude, a much larger range than previously supposed. It also allows comparison with a new source of seismic energy, the atomic bomb. Seismic signals were recorded by regional stations from the first Trinity test in 1945 (32) and an underwater explosion at Bikini atoll in July of 1946, the Baker test; both generated compressional waves observed at teleseismic distances. The energy released from Baker, a Hiroshima-type device, was about 8 × 10^13 joules. Assuming a 1 percent seismic efficiency, Gutenberg and Richter calculated a body-wave magnitude of 5.1 from their revised energy formulas, which agreed reasonably well with their observed value of 5.3 (33). Seismology thus embarked on a new mission, the detection and measurement of nuclear explosions. By 1959, the reliable identification of small underground nuclear explosions had become the primary technical issue confronting the verification of a comprehensive nuclear test ban treaty, and the resulting U.S. program in nuclear explosion seismology, Project Vela Uniform, motivated important developments in earthquake science (34).
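
Equation 2.4 is straightforward to explore numerically; the magnitudes below are arbitrary illustrations rather than values from the text.

```python
def radiated_energy_joules(ms):
    """Gutenberg-Richter energy relation (Eq. 2.4): log10 E = 1.5*Ms + 4.8."""
    return 10 ** (1.5 * ms + 4.8)

for ms in (2.0, 5.0, 6.0, 7.0, 8.5):
    print(f"Ms = {ms:3.1f}:  E ~ {radiated_energy_joules(ms):9.2e} J")

# Each unit of magnitude multiplies the radiated energy by 10**1.5 ~ 32, so the
# observed range of Ms spans more than 12 orders of magnitude in energy.
print(f"energy ratio per magnitude unit: {10 ** 1.5:.1f}")
```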

Seismicity of the Earth

Observational and theoretical research in Japan, North America, and Europe during the 1930s markedly improved seismogram interpretation. Seismographic readings from an increasingly dense global network of stations were compiled and published regularly in the International Seismological Summary, an invaluable source of data for refining event locations. By 1940, the ability to locate earthquakes was sufficiently advanced to allow the systematic analysis of global seismicity. Gutenberg and Richter produced their first synthesis in 1941, based on their relocation of hypocenters and estimation of magnitudes (35). They used focal depth to formalize the nomenclature of shallow (less than 70 kilometers), intermediate (70 to 300 kilometers), and deep (greater than 300 kilometers) earthquakes; they confirmed that Wadati’s depth of 300 kilometers for the transition from intermediate focus to deep focus was a minimum in earthquake occurrence rate, and they showed a sharp cutoff in global seismicity at about 700 kilometers. Their classic treatise Seismicity of the Earth documented a number of observations about the geographic distribution

of seismicity that helped to establish the plate-tectonic theory: (1) most large earthquakes occur in narrow belts that outline a set of stable blocks, the largest comprising the central and western Pacific basin; (2) nearly all intermediate and deep seismicity is associated with planar zones that dip beneath volcanic island arcs and arc-like orogenic (mountain-building) structures; and (3) seismicity in the ocean basins is concentrated near the crest of the oceanic ridges and rises.

Gutenberg and Richter also discussed a series of issues related to the size distribution and energy release of earthquakes. They found that the total number of earthquakes N with magnitude greater than some value M in a fixed time interval obeyed the relationship (36)

log N = a – bM, (2.5)

where a and b are empirical constants. Equation 2.5 is equivalent to N = N0 × 10^(–bM); in this form, N0 = 10^a is the total number of earthquakes whose magnitude exceeds zero. This is an extrinsic parameter that depends on the temporal interval and spatial volume considered, whereas b describes an exponential fall-off in seismicity with magnitude, a parameter more intrinsic to the faulting process. For a global distribution of shallow shocks, they estimated b ≈ 0.9, so that a decrease in one unit of magnitude gives an eightfold increase in frequency. Subsequent studies have confirmed that regional seismicity typically follows these Gutenberg-Richter statistics, with b values ranging from 0.5 to 2.0. Because spatial extent and energy release grow exponentially with magnitude, Gutenberg-Richter statistics imply a power-law scaling between frequency and size (37).

Gutenberg and Richter noted that even though small earthquakes are much more common than large events, the big ones dominate the energy distribution. According to their energy formula (Equation 2.4), an increase by one magnitude unit gives a 32-fold increase in energy, so that a summation over all events still implies that the total energy increases by about a factor of 4 per unit magnitude. They used this type of calculation to dispel the popular notion that minor shocks can function as a “safety valve” to delay a great earthquake. They found that the total annual energy release from all earthquakes was only a fraction of the heat flow from the solid Earth, estimated a few years earlier by the British geophysicist Edward Bullard (38). This calculation was consistent with the idea that earthquakes were a form of work done by a thermodynamically inefficient heat engine operating in the Earth’s interior.
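
The two scaling arguments in this passage can be combined in a few lines; the b-value of 0.9 is the global estimate quoted above, and the factor-of-4 result follows from Equations 2.4 and 2.5 together.

```python
b = 0.9                                # global b-value for shallow shocks (from the text)

frequency_ratio = 10 ** b              # more events per unit decrease in magnitude (Eq. 2.5)
energy_ratio_per_event = 10 ** 1.5     # energy increase per unit magnitude (Eq. 2.4)
total_energy_ratio = 10 ** (1.5 - b)   # net change in total energy per magnitude bin

print(f"event count per unit magnitude decrease: ~{frequency_ratio:.1f}x")         # ~8x
print(f"energy per event per unit magnitude:     ~{energy_ratio_per_event:.1f}x")  # ~32x
print(f"total energy per unit magnitude:         ~{total_energy_ratio:.1f}x")      # ~4x
```

Because 10^(1.5 − b) is greater than 1, the rare large events dominate the energy budget, which is why many small shocks cannot act as a “safety valve” against a great earthquake.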

Earthquakes as Dislocations

Although it was known that earthquakes usually originate from sudden movements across a fault, the actual mechanics of the rupture

process remained obscure (39), and a quantitative theory of how this dislocation forms and generates elastic waves through the spontaneous action of material failure was completely lacking.

Progress toward a dynamic description of faulting began in Japan, where the high density of seismic stations allowed seismologists to recognize coherent geographic patterns in the seismic radiation. They mapped the first-arriving P-wave pulses into regions of compression (first motion up) and dilatation (first motion down), separated by nodal lines where the initial arrival was very weak (40). Stimulated by these observations, H. Nakano formulated, in 1923, the problem of deducing the orientation of the faulting from the pattern of first motions (41). He expressed the radiation from an instantaneous event in terms of a system of dipolar forces at the earthquake hypocenter. The results appeared to be ambiguous, because the observed “beachball” radiation pattern of P waves (Figure 2.8) could be explained either by a single couple of such forces or by a double couple. A 40-year controversy ensued regarding which of these models is physically correct; it was resolved only in the 1960s, when theory definitively established that a fault dislocation is equivalent to a double couple (42).
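
The quadrantal pattern at the heart of this controversy is easy to illustrate for the simplest geometry: a vertical strike-slip fault observed along horizontal ray paths, for which the far-field P amplitude varies as sin 2(φ − φf) with azimuth φ. This is a textbook idealization (sign conventions and takeoff angles are ignored), not a reproduction of Nakano’s formulation, and the strike is an arbitrary assumed value.

```python
import math

strike = 30.0   # assumed fault strike, degrees, for illustration only

for azimuth in range(0, 360, 30):
    amp = math.sin(math.radians(2.0 * (azimuth - strike)))
    if abs(amp) < 1e-9:
        motion = "nodal (on the fault or auxiliary plane)"
    else:
        motion = "compression (first motion up)" if amp > 0 else "dilatation (first motion down)"
    print(f"azimuth {azimuth:3d} deg: relative P amplitude {amp:+5.2f}  {motion}")
```

The polarity flips every 90 degrees of azimuth, with nodes along the fault plane and the auxiliary plane; because a single couple produces the same P-wave quadrants, P first motions alone could not settle the debate.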

The dislocation model also shed light on the dynamic coupling between the brittle, seismogenic layer and its ductile, aseismic substrate. Geodetic data from the 1906 earthquake had shown that the process of strain accumulation and release was concentrated near the fault. In 1961, Michael Chinnery (43) showed that the displacement from a uniform vertical dislocation decays to half its maximum value at a horizontal distance equal to the depth of faulting, and he applied this result to estimate a rupture depth of 2 to 6 kilometers for the 1906 earthquake. Later workers used Chinnery’s model to provide a physical model for Reid’s elastic rebound theory, arguing that the deformation before the 1906 earthquake was due to nearly steady slip at depth on the San Andreas fault, while the shallow part of the fault slipped enough in the earthquake itself to catch up, at least approximately, with the lower fault surface.
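
Chinnery’s half-distance result can be checked against a much simpler idealization: an infinitely long vertical strike-slip fault with uniform slip s from the surface to depth D, for which the fault-parallel surface displacement at horizontal distance x from the trace is u(x) = (s/π) arctan(D/x). The sketch below uses that two-dimensional screw-dislocation formula (an assumption here, not Chinnery’s three-dimensional solution), with arbitrary slip and depth values.

```python
import math

def surface_displacement(x_km, slip_m=4.0, depth_km=10.0):
    """Fault-parallel surface displacement (m) at distance x from an infinitely long
    vertical strike-slip fault that slipped uniformly from the surface to depth_km."""
    return (slip_m / math.pi) * math.atan(depth_km / x_km)

D = 10.0
u_near = surface_displacement(0.01, depth_km=D)   # essentially the on-fault value (~ slip/2)
u_at_D = surface_displacement(D, depth_km=D)
print(f"just off the fault: {u_near:.2f} m;  at x = D = {D:.0f} km: {u_at_D:.2f} m "
      f"(ratio {u_at_D / u_near:.2f})")
```

The ratio is essentially 0.5: the displacement falls to half its maximum at a distance equal to the faulting depth, the property Chinnery exploited to bound the 1906 rupture depth from geodetic data.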

2.4 PLATE TECTONICS

Alfred Wegener, a German meteorologist, first put forward his theory of continental drift in 1912 (44). He marshaled geological arguments that the continents had once been joined as a supercontinent he named Pangea, but he imagined that they moved apart at very rapid rates—tens of meters per year (45)—like buoyant, granitic ships plowing through a denser, basaltic sea of oceanic crust.

FIGURE 2.8 Graphical representations of three basic types of seismic point sources (left to right): isotropic, double couple, and compensated linear vector dipole. (a) Principal axis coordinates of equivalent force systems. (b) Compressional wave radiation patterns. (c) Curves of intersection of nodal surfaces with the focal sphere. Each of these source types is a specialization of the seismic moment tensor M. SOURCE: Modified from B.R. Julian, A.D. Miller, and G.R. Foulger, Non-double-couple earthquakes: 1. Theory, Rev. Geophys., 36, 525-549, 1998. Copyright 1998 American Geophysical Union. Modified by permission of American Geophysical Union.

Jeffreys showed in 1924 that this idea and the dynamic mechanisms Wegener proposed for causing drift (e.g., westward drag on the continents by lunar and solar tidal forces) were physically untenable, and most of the geological community discredited Wegener’s hypothesis (46). In the 1930s, however, the South African geologist A.L. du Toit assembled an impressive set of additional geologic data that supported continental drift beginning in the Mesozoic Era, and the empirical case in its favor was further strengthened when E. Irving and S.K. Runcorn published their compilations of paleomagnetic pole positions in 1956. The paleomagnetic data indicated drifting rates on the order of centimeters per year, several orders of magnitude slower than Wegener had hypothesized. Within the next 10 years, the key elements of plate tectonics were put in place. The main conceptual breakthrough was the recognition that on a global scale, the amount of new basaltic crust generated by seafloor spreading—the bilateral separation of the seafloor along the mid-ocean ridge axis—is balanced by subduction—the thrusting of basaltic crust into the mantle at the oceanic trenches.

Seafloor Spreading and Transform Faults

Submarine mountain ranges, mapped in the 1870s, came into focus as a world-encircling system of extensional tectonics after the Second World War. Marine geologists Maurice Ewing and Bruce Heezen, based at Columbia University, mapped a narrow, nearly continuous “median valley” along the ridge crests in the Atlantic, Indian, and Antarctic Oceans, which they inferred to be a locus of active rifting and the source of the mid-ocean seismicity that Gutenberg and Richter had documented (47). In the early 1960s, Harry H. Hess of Princeton University and Robert S. Dietz of the Scripps Institution of Oceanography advanced the concept of seafloor spreading to account for observations of such phenomena as the paucity of deep-sea sediments and the tendency for oceanic islands to subside with time (48). In his famous 1960 “geopoetry” preprint, Hess noted that crustal creation at the Mid-Atlantic Ridge implies a more plausible mechanism for continental drift than the type originally envisaged by Wegener: “The continents do not plow through oceanic crust impelled by unknown forces; rather they ride passively on mantle material as it comes to the surface at the crest of the ridge and then moves laterally away from it.”

Two distinct predictions based on the theory of seafloor spreading were confirmed in 1966. The first involved the striped patterns of magnetic anomalies being mapped on the flanks of the mid-ocean ridges. In 1963, F. Vine and D. Matthews suggested that such anomalies record the reversals of the Earth’s magnetic field through remanent magnetization frozen into the oceanic rocks as they diverge and cool away from the ridge axis. These geomagnetic “tape recordings” were shown to be symmetric about this axis and consistent with the time scale of geomagnetic reversals worked out from lava flows on land; moreover, the spreading

speed measured from the magnetometer profiles in the Atlantic was found to be nearly constant and in agreement with the average opening rate obtained from the paleomagnetic data on continental rocks (49).

The second confirmation came from the study of earthquakes on the mid-ocean ridges. Horizontal displacements as large as several hundred kilometers had been documented for strike-slip faults on land, by H.W. Wellman for the Alpine fault in New Zealand and by M. Hill and T.W. Dibblee for the San Andreas (50), but even larger displacements—greater than 1000 kilometers—could be inferred from the offsets of magnetic anomalies observed across fracture zones in the Pacific Ocean (51). In a 1965 paper that laid out the basic ideas of the plate theory, the Canadian geophysicist J. Tuzo Wilson recognized that fracture zones were relics of faulting that was active only along those portions connecting two segments of a spreading ridge, which he called transform faults (52). His model implied that the sense of motion across a transform fault would be opposite to that implied by the offset of the ridge axis. The seismologist Lynn Sykes of Columbia University verified this prediction in an investigation of the focal mechanisms from transform-fault earthquakes (Figure 2.9).

FIGURE 2.9 Two interpretations of two ridge segments offset by a fault. (a) In the pre-plate-tectonic interpretation, the two ridge segments (double lines) would have been offset in a sinistral (left-lateral) sense along the fault (solid line). Earthquakes should occur along the entire fault line. (b) According to the plate-tectonic theory, the two ridge segments were never one continuous feature. Spreading of the seafloor away from the ridges causes dextral (right-lateral) motions only along the section of the fault between the two ridge segments (the transform fault). The extensions of the faults beyond the ridge segments, the fracture zones (dashed lines), are aseismic. Earthquake observations conclusively demonstrated the validity of interpretation (b) for the mid-ocean ridges. SOURCE: Modified from L. Sykes, Mechanism of earthquakes and nature of faulting on mid-ocean ridges, J. Geophys. Res., 72, 2131-2153, 1967. Copyright 1967 American Geophysical Union. Reproduced by permission of American Geophysical Union.

Sykes’s study was facilitated by the rapidly accumulating collection of seismograms, readily available on photomicrofiche, from the new World Wide Standardized Seismographic Network (WWSSN) set up under Project Vela Uniform. These high-quality seismometers had good timing systems, fairly broad bandwidth, and a nearly uniform response to ground motions, and they were installed and permanently staffed around the world at recording sites with relatively low background noise levels (53). The high density of stations allowed smaller events to be located precisely and their focal mechanisms to be determined more rapidly and accurately than ever before. One result was much more accurate maps of global seismicity, which clearly delineated the major plate boundaries, as well as the Wadati-Benioff zones of deep seismicity (Figure 2.10).

Subduction of Oceanic Lithosphere

If the Earth’s surface area is to remain constant, then the creation of new oceanic crust at the ridge crests necessarily implies that some old crust is being recycled back into the mantle. This inference was consistent with the theories of mantle convection that attributed the volcanic arcs and linear zones of compressive orogenesis to convective downwellings (54), which David Griggs had discussed as early as 1939, calling it “a convection cell covering the whole of the Pacific basin, comprising sinking peripheral currents localizing the circum-Pacific mountains and rising currents in the center” (55). Griggs belonged to a growing group of “mobilists” who espoused the view that the Earth’s solid mantle is actively convecting like a fluid heated from below, causing large horizontal displacements of the crust, including continental drift (56). The alternative, expanding-Earth hypothesis states that the planetary radius is increasing, perhaps owing to radioactive heating or possibly to a universal decrease in gravitational strength with time, and that seafloor spreading accommodates the associated increase in surface area (57). Thus, new oceanic crust created at the spreading centers does not have to be balanced by the sinking of old crust back into the mantle.

Because of this controversy, as well as the geologic complexity of the problem, subduction was the last piece of the plate-tectonic puzzle to fall into place (58). While the system of oceanic ridges and transform faults fit neatly together in seafloor spreading, the compressional arcs and mountain belts juxtaposed all types of active faulting, which continued to baffle geologists. Benioff had pointed out the asymmetric polarity of the island arcs, correctly proposing that the deep oceanic trenches are surface expressions of giant reverse faults (59).

FIGURE 2.10 Locations of earthquake epicenters with body-wave magnitudes greater than 4.5 for the period 1960-1967, which incorporated the improved data of the WWSSN. Computer-generated epicenter maps such as this one first became available in about 1964 and were used to delineate the major plate boundaries and the descending slabs of cold oceanic lithosphere. Dots are epicenters reported by the U.S. Coast and Geodetic Survey. SOURCE: B.L. Isacks, J. Oliver, and L.R. Sykes, Seismology and the new global tectonics, J. Geophys. Res., 73, 5855-5899, 1968. Copyright 1968 American Geophysical Union. Reproduced by permission of American Geophysical Union.

Robert Coats used this idea to account for the initial formation of island arcs such as the Aleutians and for the geochemical data bearing on the development of the andesitic stratovolcanoes characteristic of these arcs (60). Benioff’s model was based on several misconceptions, however, including the assumption that intermediate- and deep-focus seismicity could be explained by extrapolating trench-type reverse faulting into the mid-mantle transition zone. In fact, the focal mechanisms of most earthquakes with hypocenters deeper than 70 kilometers do not agree with Benioff’s model of reverse faulting (61).

The definitive evidence for “thrust tectonics” finally arrived in the form of the great 1964 Alaska earthquake (Box 2.3). The enormous energy released in this event (~3 × 10^18 joules) set the Earth to ringing like a bell and allowed precise studies of the terrestrial free oscillations, whose period might be as long as 54 minutes (62). A permanent strain of 10^–8 was recorded by the Benioff strainmeter on Oahu, more than 4000 kilometers away, consistent with a fault-dislocation model of the earthquake (63). However, the high-amplitude waves drove most of the pendulum seismometers offscale (64). Moreover, field geologists could not find the fault; all ground breaks were ascribable to secondary effects. What they did observe was a systematic pattern of large vertical motions—uplifts as high as 12 meters and depressions as deep as 2.3 meters, which could easily be mapped along the rugged coastlines by observing the displacement of beaches and the stranded colonies of sessile marine organisms such as barnacles (just as Darwin had done for the 1835 Chile earthquake). By combining this pattern with the seismological and geodetic data, they inferred that the rupture represented the slippage of the Pacific Ocean crust beneath the continental margin of southern Alaska along a huge thrust fault. Geologist George Plafker concluded that “arc structures are sites of down-welling mantle convection currents and that planar seismic zones dipping beneath them mark the zone of shearing produced by downward-moving material thrust against a less mobile block of the crust and upper mantle” (65). By connecting the Alaska megathrust with the more steeply inclined plane of deeper seismicity under the Aleutian Arc, Plafker articulated one of the central tenets of plate tectonics.

Plafker’s conclusions were bolstered by more accurate sets of focal mechanisms that William Stauder and his colleagues at St. Louis University derived (66). Dan McKenzie and Robert Parker took the next major step toward completion of the plate theory in 1967, when they showed that slip vectors from Stauder’s mechanisms of Alaskan earthquakes could be combined with the azimuth of the San Andreas fault to compute a consistent pole of instantaneous rotation for the Pacific and North American plates (67). At the same time, Jason Morgan’s analysis of seafloor spreading rates and transform-fault azimuths demonstrated the global consistency of plate kinematics (68).
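
The kinematics behind a “pole of instantaneous rotation” reduces to v = ω × r: the speed of one plate relative to another is the rotation rate times the distance from the rotation axis. The sketch below uses rough, illustrative values for the Pacific-North America pole and rate (not the numbers from McKenzie and Parker’s paper) to predict the relative speed at a point near the San Andreas fault.

```python
import math

EARTH_RADIUS_KM = 6371.0

def to_unit_vector(lat_deg, lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon), math.cos(lat) * math.sin(lon), math.sin(lat))

def relative_speed_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Speed |omega x r| at a surface site for the given Euler pole and rotation rate."""
    p = to_unit_vector(pole_lat, pole_lon)
    r = to_unit_vector(site_lat, site_lon)
    cos_angle = sum(a * b for a, b in zip(p, r))
    sin_angle = math.sqrt(max(0.0, 1.0 - cos_angle ** 2))   # angular distance pole-to-site
    omega_rad_per_yr = math.radians(rate_deg_per_myr) / 1.0e6
    return omega_rad_per_yr * EARTH_RADIUS_KM * 1.0e6 * sin_angle   # km converted to mm

# Assumed pole near 50 N, 75 W rotating at ~0.75 deg/Myr; site in central California.
speed = relative_speed_mm_per_yr(50.0, -75.0, 0.75, 36.0, -120.0)
print(f"predicted Pacific-North America relative speed: ~{speed:.0f} mm/yr")
```

The result, a few centimeters per year, is consistent with the paleomagnetically inferred drift rates mentioned earlier; as noted in the next subsection, only about two-thirds of this relative motion is carried by the San Andreas fault itself.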

Clarity came with the realization that the plate is a cold mechanical

boundary layer that can act as a mechanical stress guide, capable of transmitting forces for thousands of kilometers from one boundary to another (69). The essential elements of the subduction process were brought together in a 1968 paper by seismologists Brian Isacks, Jack Oliver, and Lynn Sykes (70). In addition to obtaining improved data on earthquake locations and focal mechanisms, they delineated a dipping slab of mantle material with distinctively high seismic velocity and low attenuation, which coincided with the Wadati-Benioff planes of deep seismicity (71). They found that they could account for their results, as well as most of the other data on plate tectonics, in terms of three mechanical layers, which J. Barrell and R.A. Daly had postulated earlier in the century to explain the vertical motions associated with isostatic compensation. A cold, strong lithosphere was generated by seafloor spreading at the ridge axis and subsequent conductive cooling of the oceanic crust and upper mantle, attaining a thickness of about 100 kilometers. It slid over and eventually subducted back into a hot, weak asthenosphere. Earthquakes of the Wadati-Benioff zones were generated primarily by stresses internal to the descending slab of oceanic lithosphere when it encountered a stronger, interior mesosphere at a depth of about 700 kilometers.
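
The roughly 100-kilometer lithospheric thickness cited here is consistent with simple conductive cooling of a half-space, for which the thermal boundary layer grows roughly as 2√(κt). The diffusivity and plate ages below are standard textbook assumptions, not values from this report.

```python
import math

kappa = 1.0e-6               # thermal diffusivity of mantle rock, m^2/s (assumed)
SECONDS_PER_MYR = 3.156e13

for age_myr in (10, 40, 80, 120):
    t = age_myr * SECONDS_PER_MYR
    thickness_km = 2.0 * math.sqrt(kappa * t) / 1000.0   # rule-of-thumb boundary-layer growth
    print(f"seafloor age {age_myr:3d} Myr -> lithosphere thickness ~{thickness_km:3.0f} km")
```

Old ocean floor, with ages near 100 million years, reaches approximately the 100-kilometer thickness quoted above before it is subducted.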

Deformation of the Continents

Plate tectonics was astounding in its simplicity and the economy with which it explained so many previously disparate geological observations. In the late 1960s and 1970s, geological data were reappraised in the light of the “new global tectonics,” leading to some important extensions of the basic plate theory. However, a major problem was the obvious contrast in mechanical behavior of the oceanic and continental lithospheres. Geophysical surveys in the ocean basins revealed much narrower plate boundaries than observed on land. The volcanic rifts of active crust formation along the mid-ocean ridges were found to be only a few kilometers wide, for example, whereas volcanic activity in continental rifts could be mapped over tens to hundreds of kilometers. Similar differences were observed for transform faults; in the oceans, the active slip is confined to very narrow zones, in marked contrast to the broad belts of continental strike-slip tectonics, which often involve many distributed, interdependent fault systems. For example, only about two-thirds of the relative motion between the Pacific and North American plates turned out to be accommodated along the infamous San Andreas fault; the remainder is taken up on subsidiary faults and by oblique extension in the Basin and Range Province (see Section 3.2).

In 1970, Tanya Atwater (72) explained the geological evolution of western North America over the last 30 million years as the consequence of the North American plate overriding an extension of the East Pacific Rise along a subduction zone paralleling the West Coast. Her synthesis, which accounts for seemingly disparate events (e.g., andesitic volcanism in northern California, strike-slip faulting along the San Andreas, compressional tectonics in the Transverse Ranges, rifting in the Gulf of California), was grounded in the kinematical principles of plate tectonics (73), and her paper did much to convince geologists that the new theory was a useful framework for understanding the complexities of continental tectonics.

BOX 2.3 Prince William Sound, Alaska, 1964

The earthquake nucleated beneath Prince William Sound at about 5:36 p.m. on Good Friday, March 27, 1964. As the rupture spread outward, its progress to the north and east was stopped at the tectonic transition beneath the Chugach Mountains, behind the port of Valdez, Alaska, but to the southwest it continued unimpeded at 3 kilometers per second down the Alaska coastline, paralleling the axis of the Aleutian Trench for more than 700 kilometers, to beyond Kodiak Island. The district geologist of Valdez, Ralph G. Migliaccio, filed the following report:1

Within seconds of the initial tremors, it was apparent to eyewitnesses that something violent was occurring in the area of the Valdez waterfront … Men, women, and children were seen staggering around the dock, looking for something to hold onto. None had time to escape, since the failure was so sudden and violent. Some 300 feet of dock disappeared. Almost immediately a large wave rose up, smashing everything in its path…. Several people stated the wave was 30 to 40 feet high, or more…. This wave crossed the waterfront and, in some areas reached beyond McKinley Street…. Approximately 10 minutes after the initial wave receded, a second wave or surge crossed the waterfront carrying large amounts of wreckage, etc…. There followed a lull of approximately 5 or 6 hours during which time search parties were able to search the waterfront area for possible survivors. There were none.

The height of the tsunami measured 9.1 meters at Valdez, but 24.2 meters at Blackstone Bay on the outer coast of the Kodiak Island group and 27.4 meters at Chenega on the Kenai Peninsula. The city of Anchorage, 100 kilometers west of the epicenter, was shielded from the big tsunami, but it experienced considerable damage, especially in the low-lying regions of unconsolidated sediment that became liquefied by the shaking. Robert B. Atwood, editor of the Anchorage Daily Times, who lived in the Turnagain Heights residential section, described his experiences during the landslide:

I had just started to practice playing the trumpet when the earthquake occurred. In a few short moments it was obvious that this earthquake was no minor one…. I headed for the door … Tall trees were falling in our yard. I moved to a spot where I thought it would be safe, but, as I moved, I saw cracks appear in the earth. Pieces of the ground in jigsaw-puzzle shapes moved up and down, tilted at all angles. I tried to move away, but more appeared in every direction…. Table-top pieces of earth moved upward, standing like toadstools with great overhangs, some were turned at crazy angles. A chasm opened beneath me. I tumbled down … Then my neighbor’s house collapsed and slid into the chasm. For a time it threatened to come down on top of me, but the earth was still moving, and the chasm opened to receive the house.

Migliaccio and Atwood had witnessed the second largest earthquake of the twentieth century. The plane of the rupture inferred from the dimensions of the aftershock zone was the size of Iowa (800 kilometers by 200 kilometers), and geodetic data showed that the offset along the fault averaged more than 10 meters. The product of these three numbers, which is proportional to a measure of earthquake size called the seismic moment (Equation 2.6), was thus about 2000 cubic kilometers, roughly 100 times greater than that of the 1906 San Francisco earthquake. Among instrumentally recorded earthquakes, only the Chilean earthquake of 1960, which occurred in a similar tectonic setting, was bigger (by a factor of about 3). Both of these great earthquakes engendered tsunamis of large amplitude that propagated across the Pacific Ocean basin and caused damage and death thousands of kilometers from their source. Along the Oregon-California coast, 16 people were killed by the Alaska tsunami. In Crescent City, California, a series of large tsunami waves inundated the harbor, beginning about four and a half hours after the earthquake, with the third and fourth waves causing the most damage. After the first two had struck, seven people returned to a seaside tavern to recover their valuables. Since the ocean seemed to have returned to normal, they remained to have a drink and were caught by the third wave, which killed five of them.2

1. National Research Council, The Great Alaska Earthquake of 1964, National Academy Press, Washington, D.C., 15 volumes, 1972-1973.

2. B. Bolt, Earthquakes and Geological Discovery, W.H. Freeman, New York, p. 155, 1993.


Convergent plate boundaries in the oceans were observed to be broader than the other boundary types, with the zone of geologic activity on the surface encompassing the trench itself, the deformed sediments and basement rocks of the forearc sequence, the volcanic arc that overlies the subducting slab, and sometimes an extending back-arc basin (74). Nevertheless, the few-hundred-kilometer widths of the ocean-ocean convergence zones did not compare with the extensive orogenic terrains that mark major continental collisions. The controlling factors were recognized to be the density and strength of the silica-rich continental crust, which are significantly lower than those of the more iron- and magnesium-rich oceanic crust and upper mantle (75). When caught between two converging plates, the weak, buoyant continental crust resists subduction and piles up into arcuate mountain belts and thickened plateaus that erode into distinctive sequences of sedimentary rock. This distributed deformation also causes metamorphism and melting of the crust, generating siliceous magmas that intrude the crust’s upper layers to form large granitic batholiths. In some instances, the redistribution of buoyancy-related stresses can lead to a reversal in the direction of subduction.


W. Hamilton used these consequences of plate tectonics to explain modern examples of mountain building, and J. Dewey and J. Bird used them to account for the geologic structures observed in ancient mountain belts (76).

Much of the early work on convergent plate boundaries interpreted mountain building in terms of two-dimensional models that consider deformations only in the vertical planes perpendicular to the strikes of the convergent zones. During a protracted continent-continent collision, however, crustal material is eventually squeezed sideways out of the collision zone along lateral systems of strike-slip faults. The best modern example is the Tethyan orogenic belt, which extends for 10,000 kilometers across the southern margin of Eurasia. At the eastern end of this belt, the convergence of the Indian subcontinent with Asia has uplifted the Himalaya, raised the great plateau of Tibet, re-elevated the Tien Shan Mountains to heights in excess of 5 kilometers, and caused deformations up to 2000 kilometers north of the Himalayan front. Earthquakes within these continental deformation zones have been frequent and dangerous.

In a series of studies, P. Molnar and P. Tapponnier explained the orientation of the major faults in southern Asia, their displacements, and the timing of key tectonic events as a consequence of the collision of the Indian continent with Asia (77). They investigated the active faulting in central Asia using photographs from the Earth Resources Technology Satellite, magnetic lineations on the ocean floor, and teleseismically determined focal mechanisms of recent earthquakes. By combining these remote-sensing observations with the plate-tectonic information, they demonstrated that strike-slip faulting has played a dominant role in the mature phase of the Himalayan collision (78).

The more diffuse nature of continental seismicity and deformation was consistent with the notion that the continental lithosphere is somehow weaker than the oceanic lithosphere, but a detailed picture required a better understanding of the mechanical properties of rocks. When subjected to differential compression at moderate temperatures and pressures, most rocks fail by brittle fracture according to the Coulomb criterion (Equation 2.1). Extensive laboratory experiments on carbonates and silicates showed that for all modes of brittle failure, the coefficient of friction µ usually lies in the range 0.6 to 0.8, with only a weak dependence on the rock type, pressure, temperature, and properties of the fault surface. This behavior has come to be known as Byerlee’s law (79), and it implies that the frictional strength of continental and oceanic lithospheres should be about the same, at least at shallow depths.
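
As a rough illustration of what the Coulomb criterion and Byerlee's law imply, the sketch below evaluates the frictional shear strength of the shallow lithosphere as a function of depth. The crustal density, the hydrostatic pore-pressure assumption, and the choice of µ = 0.6 are illustrative assumptions, not values taken from this report.

```python
# Frictional (Coulomb/Byerlee) shear strength versus depth; a hypothetical sketch.
# Assumed values: crustal density 2700 kg/m^3, hydrostatic pore pressure, mu = 0.6.

RHO_ROCK = 2700.0    # kg/m^3
RHO_WATER = 1000.0   # kg/m^3
G_ACC = 9.8          # m/s^2
MU = 0.6             # Byerlee friction coefficient (0.6 to 0.8)

def frictional_strength_mpa(depth_km):
    """Shear stress needed for frictional sliding at a given depth (MPa),
    tau = mu * (lithostatic stress - pore pressure)."""
    depth_m = depth_km * 1000.0
    lithostatic = RHO_ROCK * G_ACC * depth_m   # Pa
    pore = RHO_WATER * G_ACC * depth_m         # Pa, hydrostatic assumption
    return MU * (lithostatic - pore) / 1.0e6

for z_km in (5, 10, 15):
    print(f"{z_km:2d} km: ~{frictional_strength_mpa(z_km):.0f} MPa")
```

Under these assumptions the strength at mid-crustal depths is on the order of 100 megapascals, the same order as the frictional stresses that figure in the stress-paradox discussion later in this chapter.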

Rocks deform by ductile flow, not brittle failure, when the temperature and pressure get high enough, however, and the onset of this ductility depends on composition. Investigations of ductile flow began in 1911 with Theodore von Kármán’s triaxial tests on jacketed samples of marble.


It was found that the strength of ductile rocks decreases rapidly with increasing temperature and that their rheology approaches that of a viscous fluid. The brittle-ductile transition thus explained the plate-like behavior of the oceanic lithosphere and the fluid-like behavior of its subjacent, convecting mantle. Rock mechanics experiments further revealed that ductility sets in at lower temperatures in quartz-rich rocks than in olivine-rich rocks, typically at midcrustal depths in the continents. The ductile behavior of the lower continental crust inferred from laboratory data, which was consistent with the lack of earthquakes at these depths, thus explained the less plate-like behavior of the continents (80).

2.5 EARTHQUAKE MECHANICS

Gilbert and Reid recognized the distinction between fracture strength and frictional strength (81), and they portrayed earthquakes as frictional instabilities on two-dimensional faults in a three-dimensional elastic crust, driven to failure by slowly accumulating tectonic stresses—a view entirely consistent with plate tectonics. Although earthquakes surely involve some nonelastic, volumetric effects such as fluid flow, cracking of new rock, and the expansion of gouge zones, Gilbert and Reid’s idealization still forms the conceptual framework for much of earthquake science, both basic and applied. Nevertheless, because the friction mechanism did not seem compatible with deep earthquakes, as described below, the view of earthquakes as frictional instabilities on faults had been considered and rejected by some scientists by the time Wilson wrote his 1965 paper on plate tectonics.

The Instability Problem

Deep-focus earthquakes presented a major puzzle. Seismologists had found that the deepest events, 600 to 700 kilometers below the surface, are shear failures just like shallow-focus earthquakes and that the decrease in apparent shear stress during these events is on the order of 10 megapascals, about the same size as the stress drops estimated for shallow shocks. According to a Coulomb criterion (Equation 2.1), the shear stress needed to induce frictional failure on a fault should be comparable to the lithostatic pressure, which reaches 2500 megapascals in zones of deep seismicity. Shear stresses of this magnitude are implausibly high, and if the stress drop approximates the absolute stress, as most seismologists believed, such high frictional strengths would conflict directly with the observations (82).

Furthermore, if earthquakes result from a frictional instability, the motion across a fault must at some point be accelerated by a drop in the frictional resistance. A spontaneous rupture like an earthquake thus requires some type of strain weakening, but the rock deformations observed in the laboratory at high pressure and temperature tended to display strain hardening during ductile creep. In a classic 1960 treatise, Rock Deformation, D. Griggs and J. Handin (83) concluded that the old theory of earthquakes originating by ordinary fracture with sudden loss of cohesion was invalid for deep earthquakes, although they noted that extremely high fluid pressures at depth could preserve the same mechanism they presumed to operate for shallow events.

A renewed impetus was given to the frictional explanation in 1966, when W.F. Brace and Byerlee demonstrated that the well-known engineering phenomenon of stick-slip also occurs in geologic materials (84). Experimenting on samples with preexisting fault surfaces, they observed that the stress drops in the laboratory slip events were only a small fraction of the total stress. This implies that the stress drops during crustal earthquakes could be much smaller than the rock strength, eliminating the major seismological discrepancy. Subsequent experiments at the Massachusetts Institute of Technology found a transition from stick-slip behavior to ductile creep at about 350°C (85). Stick-slip instabilities thus matched the properties of earthquakes in the upper continental crust, which were usually confined above this brittle-ductile transition, although this could not explain the deeper shocks in subduction zones. In addition, Brace and Byerlee’s work focused theoretical attention on how frictional instabilities depend on the elastic properties of the testing machine or fault system (86).

During the next decade, the servo-controlled testing machine was developed, in which the load levels and strain rates were precisely regulated, so that the postfailure part of the load-deformation curve in brittle materials could be followed without the stick-slip instabilities encountered with less stiff machines (87). Several new aspects of rock friction were investigated, including memory effects and dilatancy (88). The subsequent development of high-precision double-direct-shear and rotary-shear devices (89) allowed detailed measurements of friction for a wide range of materials under variable sliding conditions. This work documented three interrelated phenomena:

  1. Static friction µs depends on the history of sliding and increases logarithmically with the time two surfaces are held in stationary contact (90).

  2. Under steady-state sliding, the dynamic friction µd depends logarithmically on the slip rate V, with a coefficient that can be either positive (velocity strengthening) or negative (velocity weakening) (91).

  3. When a slipping interface is subjected to a sudden change in the loading velocity, the frictional properties evolve to new values over a characteristic slipping distance Dc, measured in microns and interpreted as the slip necessary to renew the microscopic contacts between the two rough surfaces (92).

Between 1979 and 1983, J.H. Dieterich and A.L. Ruina (93) integrated these experimental results into a unified constitutive theory in which the slip rate V appears explicitly in the friction equation and the frictional strength evolves with a characteristic time set by the mean lifetime Dc/V of the surface contacts. The behavioral transition that Brace and Byerlee had observed near 350°C, from stick-slip to stable creep, was interpreted by Tse and Rice (94) as a transition from rate weakening to rate strengthening in the crust; models of earthquake sequences on a crustal strike-slip fault built on this idea reproduce primary features inferred for natural events, such as the depth range of seismic slip and the rapid afterslip below it.
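
A minimal sketch of the Dieterich-Ruina rate- and state-dependent friction law summarized above, written in the commonly used "aging" form of the state evolution; the parameter values below are illustrative laboratory-scale assumptions, not values from this report.

```python
import math

# Dieterich-Ruina rate- and state-dependent friction (aging form):
#   mu(V, theta) = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
#   d(theta)/dt  = 1 - V*theta/Dc        (the state variable theta has units of time)
# At steady state theta = Dc/V, so mu_ss(V) = mu0 + (a - b)*ln(V/V0);
# a - b < 0 gives velocity weakening (stick-slip prone), a - b > 0 gives
# velocity strengthening (stable creep).

MU0, V0 = 0.6, 1.0e-6                 # reference friction and slip rate (m/s); assumed
A_RS, B_RS, DC = 0.010, 0.015, 50e-6  # illustrative parameters; Dc in meters

def friction(v, theta):
    return MU0 + A_RS * math.log(v / V0) + B_RS * math.log(V0 * theta / DC)

def steady_state_friction(v):
    return MU0 + (A_RS - B_RS) * math.log(v / V0)

for v in (1e-7, 1e-6, 1e-5):
    theta_ss = DC / v                  # steady-state value of the state variable
    print(f"V = {v:.0e} m/s   mu_ss = {friction(v, theta_ss):.4f} "
          f"(check: {steady_state_friction(v):.4f})")
```

With these assumed values a − b is negative, so the steady-state friction decreases with sliding rate, the velocity-weakening behavior associated with stick-slip.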

Scaling Relations

According to the dislocation model of earthquakes, slip on a small planar fault is equivalent to a double-couple force system, where the total moment M0 of each couple is proportional to the product of the fault’s area A and its average slip u:

M0 = GAu. (2.6)

The constant of proportionality G is the elastic shear modulus, a measure of the resistance to shear deformation of the rock mass containing the fault, which can be estimated from the shear-wave velocity. For waves with wavelengths that are large compared with the fault dimensions, the amplitude of the radiation increases in proportion to M0, so that this static seismic moment can be measured directly from seismograms. K. Aki made the first determination of seismic moment from the long-period surface waves of the 1964 Niigata earthquake (95). Many subsequent studies have demonstrated a consistent relationship between seismic moment and the various magnitude scales developed from the Richter standard; the results can be expressed as a general moment magnitude MW of the form

MW = (2/3)(log10 M0 - 9.1), where M0 is measured in newton-meters. (2.7)

Equation 2.7 defines a unified magnitude scale (96) based on a physical measure of earthquake size. Calculating magnitude from seismic moment avoids the saturation effects of other magnitude estimates, and this procedure became the seismological standard for determining earthquake size. The 1960 Chile earthquake had the largest moment of any known seismic event, 2 × 10^23 newton-meters, corresponding to MW = 9.5 (Table 2.1).
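
As a quick check on the numbers quoted here, the sketch below evaluates Equation 2.7 (as written above, with M0 in newton-meters) for the moments of the 1960 Chile and 1906 San Francisco earthquakes listed in Table 2.1.

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Equation 2.7 as written above)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def seismic_moment(mw):
    """Inverse relation: seismic moment (N-m) for a given moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.1)

print(moment_magnitude(2.0e23))   # 1960 Chile, M0 = 200,000 x 10^18 N-m -> ~9.5
print(moment_magnitude(1.0e21))   # 1906 San Francisco, M0 = 1,000 x 10^18 N-m -> ~8.0
```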


TABLE 2.1 Size Measures of Some Important Earthquakes

Date              Location                   MS      MW     M0 (10^18 N-m)
April 18, 1906    San Francisco              8.25    8.0           1,000
Sept. 1, 1923     Kanto, Japan               8.2     7.9             850
Nov. 4, 1952      Kamchatka                  8.25    9.0          35,000
March 9, 1957     Aleutian Islands           8.25    9.1          58,500
May 22, 1960      Chile                      8.3     9.5         200,000
March 28, 1964    Alaska                     8.4     9.2          82,000
June 16, 1964     Niigata, Japan             7.5     7.6             300
Feb. 4, 1965      Aleutian Islands           7.75    8.7          12,500
May 31, 1970      Peru                       7.4     8.0           1,000
Feb. 4, 1975      Haicheng, China            7.4     6.9              31
July 28, 1976     Tangshan, China            7.9     7.6             280
Aug. 19, 1977     Sumba                      7.9     8.3           3,590
Oct. 28, 1983     Borah Peak                 7.3     6.9              31
Sept. 19, 1985    Mexico                     8.1     8.0           1,100
Oct. 18, 1989     Loma Prieta                7.1     6.9              27
June 28, 1992     Landers                    7.5     7.3             110
Jan. 17, 1994     Northridge                 6.6     6.7              12
June 9, 1994      Bolivia                    7.0a    8.2           2,630
Jan. 16, 1995     Hyogo-ken Nanbu, Japan     6.8     6.9              24
Aug. 17, 1999     Izmit, Turkey              7.8     7.4             242
Sept. 20, 1999    Chi-Chi, Taiwan            7.7     7.6             340
Oct. 16, 1999     Hector Mine                7.4     7.1              60
Jan. 13, 2001     El Salvador                7.8     7.7             460
Jan. 26, 2001     Bhuj, India                8.0     7.6             340

NOTE: All events are shallow except Bolivia, which had a focal depth of 657 km. Moment magnitude MW is computed from seismic moment M0 via Equation 2.7.

a Body-wave magnitude.

SOURCES: U.S. Geological Survey and Harvard University.

Unless otherwise noted, all magnitudes given throughout the remainder of this report are moment magnitudes.

Beginning in the 1950s, arrays of temporary seismic stations were deployed to study the aftershocks of large earthquakes. Aftershocks are caused by subsidiary faulting from stress concentrations produced by the main shock, owing to inhomogeneities in fault slippage and heterogeneities in the properties of the nearby rocks. Omori’s work on the 1891 Nobi earthquake had demonstrated that the frequency of aftershocks decayed inversely with the time following the main shock (97). In its modern form, “Omori’s law” states that the aftershock frequency obeys a power law of the form

n(t) = A(t + c)^(–p), (2.8)


where t is the time following the main shock and A, c, and p are parameters of the aftershock sequence. Aftershock surveys confirmed that p is near unity (usually slightly greater) for most sequences. They also showed that the aftershock zone approximated the area of faulting inferred from geologic and geodetic measurements (98).
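
A short sketch of how the modified Omori law (Equation 2.8) is typically used in practice: given values of A, c, and p (the numbers below are illustrative assumptions, not fits to any particular sequence), it evaluates the aftershock rate and integrates it to get an expected count over a time window.

```python
def omori_rate(t_days, a=200.0, c=0.05, p=1.1):
    """Aftershock rate n(t) = A * (t + c)**(-p); parameters are illustrative."""
    return a * (t_days + c) ** (-p)

def expected_count(t1, t2, a=200.0, c=0.05, p=1.1, steps=10000):
    """Expected number of aftershocks between t1 and t2 days after the main
    shock, by simple trapezoidal integration (adequate for a sketch)."""
    dt = (t2 - t1) / steps
    total = 0.0
    for i in range(steps):
        left = omori_rate(t1 + i * dt, a, c, p)
        right = omori_rate(t1 + (i + 1) * dt, a, c, p)
        total += 0.5 * (left + right) * dt
    return total

print(omori_rate(1.0))              # rate one day after the main shock
print(expected_count(0.0, 30.0))    # expected aftershocks in the first month
```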

With independent information about the rupture area A from aftershock, geologic, or geodetic observations, Equation 2.6 can be solved for the average fault displacement u. Aki obtained a value of about 4 meters for the 1964 Niigata earthquake by this method, consistent with echo-sounding surveys of the submarine fault scarp. A second method derived fault dimensions from the “corner frequency” of the seismic radiation spectrum, an observable quantity inversely proportional to the rupture duration (99). Corner frequencies were easily measurable from regional and teleseismic data and could be converted to fault lengths by assuming an average rupture velocity (100). Using this procedure, seismologists estimated the source dimensions for a much larger set of events, paving the way for global studies of the stress changes during earthquakes.
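
The inversion described here is just a rearrangement of Equation 2.6, u = M0/(GA), or equivalently A = M0/(Gu). A quick consistency check using the Niigata moment from Table 2.1, the roughly 4-meter slip quoted above, and an assumed crustal shear modulus of 30 gigapascals (an assumption, not a value given in this report) yields a rupture area of a few thousand square kilometers, a reasonable size for an M ~7.6 event.

```python
# Rearranging Equation 2.6: with M0 and average slip u known, A = M0 / (G * u).
G_SHEAR = 3.0e10      # Pa; assumed crustal shear modulus (~30 GPa)
M0_NIIGATA = 300e18   # N-m, from Table 2.1
SLIP = 4.0            # m, Aki's estimate quoted in the text

area_m2 = M0_NIIGATA / (G_SHEAR * SLIP)
print(f"Implied rupture area: {area_m2 / 1e6:.0f} km^2")   # ~2500 km^2
```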

For an equidimensional rupture surface, the ratio u/√A measures the decrease in strain, or strain drop, during the faulting, and Δσ ≈ G u/√A is the static stress drop, the average difference between the initial and final stresses (101). Substituting this relationship into Equation 2.6 yields M0 ≈ Δσ A^(3/2). A logarithmic plot of seismic moment M0 versus fault area A for a representative sample of crustal earthquakes on plate boundaries shows scatter about a linear relationship with a slope of about 1.5, implying that the stress drop is approximately constant across a large range of earthquake sizes, with an average value close to 3 megapascals (Figure 2.11) (102). The lack of any systematic variation in stress drop with event size was a fundamental observation that formed the basis for a series of earthquake scaling relations (103). Together with the Gutenberg-Richter and Omori power-law relations (Equations 2.5 and 2.8), near-constant stress drop suggested that many aspects of the earthquake process are scale invariant and that the underlying physics is not sensitive to the tectonic details.
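
A sketch of the static stress-drop estimate implied by this scaling: for an equidimensional rupture, Δσ is of order M0/A^(3/2), with a geometric factor of order one that is ignored here, as it is in the text. The example moment and area are illustrative assumptions.

```python
def stress_drop_mpa(m0_newton_meters, area_km2):
    """Static stress drop from the simplified scaling in the text,
    delta_sigma ~ M0 / A**1.5 (geometric factor of order one ignored)."""
    area_m2 = area_km2 * 1.0e6
    return m0_newton_meters / area_m2 ** 1.5 / 1.0e6   # MPa

# Illustrative example (assumed values): roughly an M 7 event with
# M0 = 4 x 10^19 N-m rupturing about 30 km x 20 km.
print(f"{stress_drop_mpa(4.0e19, 600.0):.1f} MPa")   # a few MPa, as in Figure 2.11
```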

Seismic Source Studies

Seismic moment measures the static difference between the initial and final states of a fault, not what happens during the rupture. To investigate the dynamics of the rupture process, seismologists had to tackle the difficult problem of determining the space-time distribution of faulting during an earthquake from its radiated seismic energy. In the 1960s, a simple kinematic dislocation model with uniform slip and rupture speed was developed by N. Haskell to understand the energy radiation from an earthquake and the spectral structure of a seismic source (104).


FIGURE 2.11 Seismic moment as a function of source dimension. When plotted on a log-log scale, the diagonal lines (slope = 3) indicate M0 ~ r^3 and denote constant stress drop. Note that measurements of the static stress drop vary as the cube of the corner frequency, a sensitivity that contributes to substantial scatter in Δσ for individual events. The value cited is based on a numerical study of rupture dynamics (R. Madariaga, Dynamics of an expanding circular fault, Bull. Seis. Soc. Am., 66, 639-666, 1976) and applies to shear waves radiated by a circular, cohesionless fault that stops suddenly around a circular periphery. Using a particular estimate for the source radius in terms of the corner frequency fc yields Δσ ≈ 47 M0 (fc/β)^3. SOURCE: R.E. Abercrombie, Earthquake source scaling relationships from –1 to 5 ML using seismograms recorded at 2.5-km depth, J. Geophys. Res., 100, 24,015-24,036, 1995. Copyright 1995 American Geophysical Union. Reproduced by permission of American Geophysical Union.

Haskell’s model predicted that the frequency spectrum of an earthquake source is flat at low frequencies and falls off as ω^–2 at high frequencies, where ω is the angular frequency. This simple model (generally called the omega-squared model) was extended to accommodate the much more complex kinematics of real seismic faulting, described stochastically (105), and it was found to approximate the spectral observations rather well, especially for small earthquakes.
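
A minimal sketch of the omega-squared source spectrum described above, together with the corner-frequency stress-drop conversion quoted in the Figure 2.11 caption; the moment, corner frequency, and shear-wave speed below are illustrative assumptions.

```python
def omega_squared_spectrum(f_hz, m0, fc_hz):
    """Source (moment-rate) spectrum of the omega-squared model: flat at the
    level M0 for f << fc, falling off as f**-2 for f >> fc."""
    return m0 / (1.0 + (f_hz / fc_hz) ** 2)

def stress_drop_from_corner(m0, fc_hz, beta_m_s=3500.0):
    """Stress drop (Pa) from the corner frequency using the Madariaga-based
    relation quoted in the Figure 2.11 caption, delta_sigma ~ 47 M0 (fc/beta)^3.
    The shear-wave speed beta is an assumed value."""
    return 47.0 * m0 * (fc_hz / beta_m_s) ** 3

M0, FC = 1.0e17, 0.5     # assumed: roughly an M 5.3 event with fc = 0.5 Hz
for f in (0.05, 0.5, 5.0):
    print(f"f = {f:4.2f} Hz  amplitude = {omega_squared_spectrum(f, M0, FC):.2e}")
print(f"stress drop ~ {stress_drop_from_corner(M0, FC) / 1e6:.1f} MPa")
```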

The orientation of an elementary dislocation depends on two directions, the normal direction to the fault plane and the slip direction within this plane, so that the double-couple for a dislocation source is described by a three-dimensional, second-order moment tensor M proportional to M0 (106). By 1970, it was recognized that the seismic moment tensor can be generalized to include an ideal (spherically symmetrical) explosion and another type of seismic source called a compensated linear vector dipole (CLVD). A CLVD mechanism was invoked as a plausible model for seismic sources with cylindrical symmetry, such as magma-injection events, ring-dike faulting, and some intermediate- and deep-focus events (Figure 2.8) (107).
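
For concreteness, a double-couple moment tensor can be assembled from the fault normal n and slip direction d as Mjk = M0(nj dk + nk dj); the sketch below does this for an assumed vertical, east-striking, left-lateral fault. A pure double couple has zero trace and eigenvalues (M0, 0, -M0); adding an isotropic part represents an explosion, while a CLVD has eigenvalues in the ratio (2, -1, -1).

```python
# Double-couple moment tensor M_jk = M0 * (n_j*d_k + n_k*d_j), where n is the
# unit fault normal and d the unit slip direction lying in the fault plane.
# Example geometry (assumed): a vertical fault striking east with left-lateral slip.

M0 = 1.0e18            # N-m, assumed scalar moment
n = (0.0, 1.0, 0.0)    # fault normal pointing north
d = (1.0, 0.0, 0.0)    # slip direction pointing east; note n . d = 0

M = [[M0 * (n[j] * d[k] + n[k] * d[j]) for k in range(3)] for j in range(3)]

trace = sum(M[i][i] for i in range(3))
print("moment tensor:", M)
print("trace (should be 0 for a double couple):", trace)
# For this geometry the only nonzero components are M_xy = M_yx = M0, whose
# eigenvalues are (M0, 0, -M0), the signature of a pure double couple.
```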

The Stress Paradox

Plate tectonics accounted for the orientation of the stress field on simple plate boundaries, which could be classified according to Anderson’s three principal types of faulting: divergent boundaries (normal faults), transform boundaries (strike-slip faults), and convergent boundaries (reverse faults). The stress orientations mapped on plate interiors using a variety of indicators—wellbore breakouts, volcanic alignments, and earthquake focal mechanisms—were generally found to be coherent over distances of 400 to 4000 kilometers and to match the predictions of intraplate stress from dynamic models of plate motions (108). This behavior implies that the spatial localization of intraplate seismicity primarily reflects the concentration of strain in zones of crustal weakness (109). Explaining the orientation of crustal stresses was a major success for the new field of geodynamics.

About 1970, a major debate erupted over the magnitude of the stress responsible for crustal earthquakes. Byerlee’s law implies that the shear stress required to initiate frictional slip should be at least 100 megapascals, an order of magnitude greater than most seismic stress drops (110). The stresses measured during deep drilling generally agree with these predictions. If the average stresses were this large, however, the heat generated by earthquakes along major plate boundaries would greatly exceed the radiated seismic energy, and the heat flowing out of the crust along active fault zones should be very high. Attempts to measure a heat flow anomaly on the San Andreas fault found no evidence of a peak (111). The puzzle of fault stress levels was further complicated as data became available in the middle to late 1980s on principal stress orientations in the crust near the San Andreas (112); the maximum stress direction was found to be steeply inclined to the fault trace and to resolve more stress onto faults at angles to the trace of the San Andreas fault than onto the San Andreas fault itself. These results, as well as data on subduction interfaces and oceanic transform faults, suggest that most plate-bounding faults operate at low overall driving stress, on the order of 20 megapascals or less. Various explanations have been put forward (113)—intrinsically weak materials in the fault zones, high fluid pore pressures, or dynamical processes that lower frictional resistance such as wave-generated decreases in normal stress during rupture—but the stress paradox remains a major unsolved problem.

2.6 EARTHQUAKE PREDICTION

Earthquake prediction is commonly defined as specifying the location, magnitude, and time of an impending earthquake within specified ranges. Earthquake predictions are customarily classified into long term (decades to centuries), intermediate term (months to decades), and short term (seconds to weeks). The following discussion is divided the same way, but the classification is not definitive because many proposed methods span the time boundaries. Because some predictions might be satisfied by chance, seismologists almost inevitably invoke probabilities to evaluate the success of an earthquake prediction. Many seismologists distinguish forecasts, which may involve relatively low probabilities, from predictions, which involve high enough probabilities to justify exceptional policy or scientific responses. This distinction, which is adopted here, implies that predictions refer to times when the earthquake probability is temporarily much higher than normal for a given region and magnitude range. Forecasts might or might not involve temporal variations. Even if they involve only estimates of the “normal” probability, long-term forecasts can be extremely useful for input to seismic hazard calculations and for decisions about building, retrofitting, insuring, and so forth. A clear statement of the target magnitude is crucial to evaluating a prediction because small earthquakes are so much more frequent than large ones. A prediction of a moment magnitude (M) 6 earthquake for a given region and time might be very bold, while a prediction of an M 5 event could easily be satisfied by chance.

Long-Term Forecasts

G.K. Gilbert issued what may have been the first scientifically based, long-term earthquake forecast in his 1883 letter to the Salt Lake City Tribune (114), in which he articulated the practical consequences of his field work along the seismically active Wasatch Front:


Any locality on the fault line of a large mountain range, which has been exempt from earthquake for a long time, is by so much nearer to the date of recurrence…. Continuous as are the fault-scarps at the base of the Wasatch, there is one place where they are conspicuously absent, and that place is close to [Salt Lake City]…. The rational explanation of their absence is that a very long time has elapsed since their last renewal. In this period the earth strain has slowly been increasing, and some day it will overcome the friction, lift the mountains a few feet, and reenact on a more fearful scale the [1872] catastrophe of Owens Valley.

So far, Gilbert’s forecast for Salt Lake City has not been fulfilled (115). H.F. Reid developed Gilbert’s “principle of alternation” into a quantitative theory of earthquake forecasting. In his 1910 report for the Lawson Commission, he wrote: “As strains always precede the rupture and as the strains are sufficiently great to be easily detected before rupture occurs (116), … it is merely necessary to devise a method of determining the existence of strains; and the rupture will in general occur … where the strains are the greatest.” He suggested that the time of the next major earthquake along that segment of the San Andreas fault could be estimated by establishing a line of piers at 1-kilometer spacing perpendicular to the fault and observing their positions “from time to time.” When “the surface becomes strained through an angle of 1/2000, we should expect a strong shock.” Reid noted that this prediction scheme relied on measurements commencing when the fault was in an “unstrained condition,” which he presumed was the case following the 1906 earthquake (117).

The Gilbert-Reid forecast hypothesis—the idea that a large earthquake is due when the critical strain from the last large event has been recovered by steady tectonic motions—is the basis for the seismic-gap method. In its simplest form, this hypothesis asserts that a particular fault segment fails in a quasi-periodic series of earthquakes with a characteristic size and an average recurrence interval T. This interval can be estimated either from the known dates of past characteristic earthquakes or from T = D/V, the ratio of the average slip D in a characteristic quake to the long-term slip rate V on the fault. A seismic gap is a fault segment that has not ruptured in a characteristic earthquake for a time longer than T. In Japan, A. Imamura identified Sagami Bay, off Tokyo, as a seismic gap, and his prediction of an impending rupture was satisfied by the disastrous Kanto earthquake of 1923 (118). Fedotov is generally credited with the first modern description of the seismic-gap method, publishing a map in 1965 showing where large earthquakes should be expected (119). His predictions were promptly satisfied by three major events (Tokachi-Oki, 1968; southern Kuriles, 1969; central Kamchatka, 1971).

Forecasting large earthquakes using the seismic-gap principle looked fairly straightforward in the early 1970s. Plate tectonics had established a precise kinematic framework for estimating the rates of geological deformation across plate boundaries, specifying a deformation budget that could be balanced against historic seismic activity. For example, Sykes divided the amount of co-seismic slip during the 1957, 1964, and 1965 Aleutian Trench earthquakes by the rate of relative motion between the North American and Pacific plates, obtaining recurrence intervals of a century or so for each of the three segments (120). Self-consistent models of the relative plate motions were derived from global data sets that included seafloor magnetic anomalies tied to the precise magnetic reversal time scale (121), allowing Sykes’s calculation to be repeated for many of the major plate boundaries. Sykes and his colleagues produced maps in 1973 and 1979 showing plate boundary segments with high, medium, and low seismic potential based on the recent occurrence of large earthquakes (122) and published a more refined forecast in 1991 (123) (Figure 2.12).
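
The recurrence estimate described here is simply T = D/V. A minimal sketch with illustrative numbers (the slip and convergence-rate values below are assumptions chosen for illustration, not the values Sykes used):

```python
def recurrence_interval_years(coseismic_slip_m, plate_rate_mm_per_yr):
    """Seismic-gap recurrence estimate T = D / V."""
    return coseismic_slip_m / (plate_rate_mm_per_yr / 1000.0)

# Illustrative: ~6 m of slip recovered at ~60 mm/yr of plate convergence.
print(f"~{recurrence_interval_years(6.0, 60.0):.0f} years")   # about a century
```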

While some form of the Gutenberg-Richter distribution is observed for almost all regions, Schwartz and Coppersmith (124) proposed that many individual faults, or segments of faults, behave quite differently. They proposed that most of the slip on a fault segment is released in large “characteristic” earthquakes having, for a given segment, similar magnitude, rupture area, and average displacement. It follows that characteristic earthquakes must be much more frequent, relative to smaller and larger earthquakes, than the Gutenberg-Richter relationship would predict. Wesnousky and colleagues (125) argued that earthquakes in a region nevertheless obey the Gutenberg-Richter relationship because the lengths of the fault segments there follow a power-law distribution.

Characteristic earthquakes have profound implications for earthquake physics and hazards. For example, characteristic earthquakes can be counted confidently, and their average recurrence time would be an important measure of seismic hazard. The time of the last one would start a seismic clock, by which the probability of another such earthquake could be estimated. For Gutenberg-Richter earthquakes, the simple clock concept does not apply: for any magnitude of quake, there are many more earthquakes just slightly smaller but no different in character. The characteristic earthquake model has strong intuitive appeal, but the size of the characteristic earthquake and the excess frequencies of such events have been difficult to demonstrate experimentally (126).
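
To make the contrast concrete, the following sketch compares the rate of events at and above a characteristic magnitude implied by a regional Gutenberg-Richter relation (log10 N = a - bM, with b near 1) with the rate implied by a hypothetical characteristic-earthquake model; every number in it is an illustrative assumption, not a measurement.

```python
def gr_annual_rate(magnitude, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= M from the Gutenberg-Richter
    relation log10 N = a - b*M; the a- and b-values are illustrative."""
    return 10.0 ** (a - b * magnitude)

CHARACTERISTIC_M = 7.0
CHARACTERISTIC_RATE = 1.0 / 150.0   # one characteristic event per ~150 yr (assumed)

print(f"G-R extrapolation for M >= {CHARACTERISTIC_M}: {gr_annual_rate(CHARACTERISTIC_M):.4f} per yr")
print(f"Characteristic-model rate: {CHARACTERISTIC_RATE:.4f} per yr")
# Here the characteristic rate (~0.0067/yr) exceeds the Gutenberg-Richter
# extrapolation (0.0010/yr), which is the "excess frequency" at issue.
```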

The seismic-gap method met with limited success as a basis for earthquake forecasting (127). Attempts to use it as a general tool were frustrated by the difficulty of specifying characteristic magnitudes and the lack of historical records needed to estimate the recurrence interval T. Moreover, the practical utility of the seismic-gap hypothesis was compromised by the intrinsic irregularity of the earthquake process and the tendency of earthquakes to cluster in space and time.


FIGURE 2.12 Circum-Pacific plate boundary segments with high, medium, and low seismic potential. Colors indicate time-dependent probability of the recurrence of either a large or a great shallow plate earthquake within a specified segment during the interval 1989 to 1999, conditional upon the event not having occurred prior to 1989. SOURCE: Modified from S.P. Nishenko, Circum-Pacific seismic potential: 1989-1999, Pure Appl. Geophys., 135, 169-259, 1991.


The Gilbert-Reid idea that a given fault segment will fail periodically assumes that the stress drop in successive earthquakes and the rate of stress accumulation between earthquakes are both constant. However, stick-slip experiments in well-controlled laboratory settings show variations in the time between slip events, which have incomplete and irregular stress drops, indicating variations in either the initial (rupture) stress or the final (postearthquake) stress, or both. Shimazaki and Nakata (128) discussed two special cases (Figure 2.13). In the “time-predictable” model, the initial stress is the same for successive large earthquakes, but the final stress varies. This implies that the time until the next earthquake is proportional to the stress drop, or average slip, in the previous event (Tn = Dn–1/V), while the size of the next quake Dn is not predictable. In the “slip-predictable” model, the initial stress varies from event to event, but the final stress is the same. This implies that the slip in the next earthquake is proportional to the time since the last one (Dn = TnV), while the time Tn is not predictable. Shimazaki and Nakata found that the Holocene uplift data for several well-studied sites in Japan were consistent with a time-predictable model of the largest events.
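
A minimal sketch of the two end-member recurrence models in Figure 2.13, driven by a constant loading rate; the loading rate and the stress-drop and interval values are illustrative assumptions.

```python
# Time-predictable vs. slip-predictable recurrence under constant tectonic loading.
LOAD_RATE = 0.1                      # stress units per year (assumed)
drops = [8.0, 12.0, 6.0, 10.0]       # assumed stress drops of successive events

# Time-predictable model: failure always occurs at a fixed stress level, so the
# interval AFTER an event of drop D is T = D / LOAD_RATE (the next time is
# predictable, the next size is not).
intervals_after = [d / LOAD_RATE for d in drops]
print("time-predictable intervals (yr):", intervals_after)    # [80, 120, 60, 100]

# Slip-predictable model: every event relaxes stress to a fixed floor, so an event
# occurring T years after its predecessor has drop D = LOAD_RATE * T (the next
# size is predictable, the next time is not).
waits = [75.0, 130.0, 90.0]          # assumed elapsed times between events (yr)
predicted_drops = [LOAD_RATE * t for t in waits]
print("slip-predictable stress drops:", predicted_drops)      # [7.5, 13.0, 9.0]
```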

Japanese seismologists and geologists have long been at the forefront of earthquake prediction studies, and their government has sponsored the world’s largest and best-funded research programs on earthquake phenomena (129). One area of intense concern is the so-called Tokai seismic gap, southwest of Mt. Fuji (Box 2.4).

FIGURE 2.13 Schematic models of earthquake recurrence intervals for constant tectonic loading rates. (a) Constant stress drop, showing strictly periodic behavior (modified Reid model). (b) Variable stress drop, constant failure stress (time-predictable model). (c) Variable stress drop, constant final stress (slip-predictable model). SOURCE: K. Shimazaki and T. Nakata, Time-predictable recurrence model for large earthquakes, Geophys. Res. Lett., 7, 279-282, 1980. Copyright 1980 American Geophysical Union. Reproduced by permission of American Geophysical Union.


BOX 2.4 The Tokai Seismic Gap

Large earthquakes have repeatedly occurred in the Nankai Trough along the southwestern coast of Japan. The sequence during the past 500 years includes large (M ~ 8) earthquakes in 1498, 1605, 1707, 1854, and 1944-1946, with an average interval of about 120 years. In the early 1970s, several Japanese seismologists noticed that the 1944-1946 events were somewhat smaller than the 1854 and 1707 earthquakes, and they suggested that the 1944-1946 rupture did not reach the northeastern part of the Nankai Trough, called the Suruga Trough. Given the historical evidence that the ruptures of both the 1854 and the 1707 events extended all the way to the Suruga Trough, they concluded that this portion of the plate boundary, which became known as the “Tokai seismic gap,” has the potential for a magnitude-8 earthquake in the near future.1

In 1978, the Japanese government introduced the Large-Scale Earthquake Countermeasures Act and embarked on an extensive project to monitor the Tokai gap. Many institutions deployed geophysical and other instrumentation, and very detailed plans for emergency relief efforts were made. This program specified the procedures for short-term prediction. When some anomaly is observed by the monitoring network, a special evaluation committee comprising technical experts is to decide whether it is a precursor for the predicted Tokai earthquake or not. If the anomaly is identified as a precursor, a large-scale emergency operation is to be initiated by the local and central governments. A detailed plan for this activity has been laid out as part of the prediction experiment.

More than 23 years after the project began, no anomaly requiring initiation of the preplanned emergency operation has been detected. The chair of the Tokai evaluation committee, Professor K. Mogi, resigned in 1997, expressing doubts about the ability of the committee to perform its expected short-term prediction function, and the new chair, M. Mizoue, has voiced similar concerns. A report released in 1997 by the Geodesy Council of Japan likewise concluded that a technical basis for short-term prediction of the kind required by the Countermeasures Act does not currently exist in Japan and that the time frame for establishing such a capability is not known.2

1. K. Ishibashi, in Earthquake Prediction—An International Review, American Geophysical Union, Maurice Ewing Series 4, Washington, D.C., pp. 297-332, 1981; K. Mogi, Earthquake Prediction, Academic Press, New York, Chapter III-5, 1985.

2. State-of-the-Art Review of the National Programs for Earthquake Prediction, Subcommittee for Review Drafting, Special Committee for Earthquake Prediction, Geodesy Council of the Ministry of Education, Science, and Culture, Tokyo, 137 pp., 1997.

This region is threatened by a potentially large earthquake on the thrust fault of the Suruga Trough, known to have ruptured in the great earthquakes of 1707 and 1854 and thought to be ripe for failure at any time. So far, the expected Tokai earthquake has not occurred. Many seismologists now agree that accurate forecasts are difficult even for plate boundaries such as this one that have seemingly regular historical sequences of earthquakes.


The Parkfield, California, earthquake prediction (130), arguably the boldest widely endorsed by the seismological community, was also based on the seismic-gap theory. Moderate earthquakes of about M 6 on the San Andreas fault near Parkfield were recorded instrumentally in 1922, 1934, and 1966, and pre-instrumental data revealed that similar-size earthquakes occurred in 1857, 1881, and 1901. The regular recurrence of Parkfield events at an average interval of about 22 years and the similarity of the foreshock patterns in 1934 and 1966 led to the hypothesis that these events were characteristic earthquakes, breaking the same segment of the San Andreas with about the same slip. Estimates of the recurrence time from the ratio of earthquake displacement to fault slip rate agreed with this 22-year value. Based on these and other data, the U.S. Geological Survey (USGS) issued an official prediction of an earthquake of about M 6 on the identified segment of the San Andreas, expected around 1988, with 95 percent probability of occurrence before the beginning of 1993. While the size and location of the predicted event were not precisely specified, no earthquake matching the description has occurred as of January 1, 2002 (131).
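
The roughly 22-year average quoted here follows directly from the listed event dates, as the one-line check below shows.

```python
# Average Parkfield recurrence interval from the event dates listed in the text.
dates = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(dates, dates[1:])]
print(intervals, sum(intervals) / len(intervals))   # [24, 20, 21, 12, 32], mean ~21.8 yr
```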

The seismic-gap model forms the basis of many other forecasts. Most involve low enough probabilities that they are not predictions by the usual definition, and they cannot yet be confirmed or rejected by available data. A notable example was the 1988 “Working Group Report” (132). The authors postulated specific segments of the San Andreas and other major strike-slip faults in California, and then tabulated characteristic magnitudes, average recurrence times, and 30-year probabilities for each segment. They estimated a 66 percent probability of at least one large characteristic earthquake on the four southern segments of the San Andreas fault before 2018, with a similar chance for northern California.

The 1989 Loma Prieta earthquake (M 6.9) occurred in an area where several seismologists (and the Working Group) had made long-term or intermediate-term forecasts of a large earthquake (133). It occurred near the southern end of the 1906 rupture, a segment of the San Andreas to which the Working Group had assigned a 30-year probability of 30 percent. The earthquake was considered a successful forecast, especially as it happened just two years after the report was published. On the other hand, success by chance cannot be ruled out, and the earthquake did not exactly match the forecasts (134).

Intermediate-Term Prediction

Intermediate-term prediction efforts are generally based on recognizing geophysical anomalies that might signal a state of near-critical stress approaching the breaking point. Apparent anomalies have been observed in the occurrence of small earthquakes; in accelerated strain or uplift; and in changes in the gravity field, magnetic field, electrical resistivity, water flow, groundwater chemistry, atmospheric chemistry, and many other parameters that might be sensitive to stress, cracks in rock, or changes in the frictional properties of rocks. The literature is extensive (135); only a few examples are discussed here.

A logical successor to the seismic-gap model is the hypothesis that earthquake occurrence is accelerated or decelerated by stress increments from previous earthquakes. One version of this hypothesis is the stress shadow model—that the occurrence of large earthquakes reduces the stress in certain neighborhoods about their rupture zones, thus decreasing the likelihood of both large and small earthquakes there until the stress recovers (136). The stress-shadow model differs from the seismic-gap model in that it applies not just to a fault segment, but to the region surrounding it. Furthermore, because stress is a tensor, a given increment may encourage failure on some faults and discourage it on others. In some regions near a ruptured fault segment, the stress is actually increased, offering an explanation for seismic clustering. At present, the model offers a good retrospective explanation for many earthquake sequences, but it has not been implemented as a testable prediction hypothesis because the stress pattern depends on details of the previous rupture, fault geometry, stress-strain properties of the crust, possible fluid flow in response to earthquake stress increments, and other properties that are very difficult to measure in sufficient detail.

Seismicity patterns are the basis of many prediction attempts, in part because reliable seismicity data are widely available. Mogi described a sequence of events that many feel can be used to identify stages in a repeatable seismic cycle involving large earthquakes (137). In this model a large earthquake may be followed by aftershocks of decreasing frequency, a lengthy period of quiescence, an increase of seismicity about the future rupture zone, a second intermediate-term quiescence, a period of foreshock activity, a third short-term quiescence, and finally the “big one.” Any of the stages may be missing. This behavior formed the basis of an apparently successful prediction of the M 7.7 Oaxaca, Mexico, earthquake of 1978 (138). Unfortunately, there are no agreed-on definitions of the various phases that can be applied uniformly, nor has there been a comprehensive test of how Mogi’s model works in general (139).

Computerized pattern recognition has been applied in several experiments to recognize the signs of readiness for large earthquakes. V. Keilis-Borok and Russian colleagues have developed an algorithm known as “M8” that scans a global catalog for changes in the earthquake rate, the ratio of large to small earthquakes, the vigor and duration of aftershock sequences, and other diagnostics within predefined circles in seismically active areas (140). They report significant success in predicting which circles are more likely to have large earthquakes (141). Since 1999, they have made six-month advance predictions for magnitude thresholds 7.5 and 8.0 accessible on their web page (142), and fully prospective statistical tests will be possible in the near future.

Short-Term Prediction

The “Holy Grail” of earthquake science has always been short-term prediction—anticipating the time, place, and size of a large earthquake in a window narrow and reliable enough to prepare for its effects (143). Interest in the possibility of detecting earthquake precursors grew as new technologies were developed to monitor the crustal environment with increasing sensitivity. In the year following the destructive 1964 Alaskan earthquake, a select committee of the White House Office of Science and Technology issued a report called Earthquake Prediction: A Proposal for a Ten Year Program of Research, which called for a national program of research focused on this goal (144).

Optimism about the feasibility of short-term prediction was heightened in the mid-1970s by the apparent successes of empirical prediction schemes and the plausibility of physical process models, such as dilatancy diffusion. Laboratory studies had measured dilatant behavior in rocks prior to failure, caused by pervasive microcracking. Dilatancy creates measurable strain, changes the material properties, and increases the permeability of the samples (145). Field evidence for such effects came from the Garm region of the former U.S.S.R., where Soviet seismologists had identified changes in the ratio of shear and compressional velocities, VS/VP, as precursors to some moderate earthquakes (146). Positive results on VS/VP precursors were also reported in the United States (147). These observations prompted refinements of the dilatancy diffusion model and a wider search for related precursors.

A reported prediction of an M 7.3 earthquake in Haicheng, China, is widely regarded as the single most successful earthquake prediction. An international team that visited China shortly after the quake (148) reported that the region had already been subject to an intermediate-term earthquake forecast based on seismicity patterns, magnetic anomalies, and other geophysical data. Accelerating seismic activity (Figure 2.14) and rapid changes in the flow from local water wells prompted Chinese officials to issue a short-term prediction and to evacuate thousands of unsafe buildings. At 7:36 p.m. (local time) on February 4, 1975, less than 24 hours after the evacuation began, the main shock destroyed 90 percent of the city. Chinese officials stated that because of the evacuation the number of casualties was extremely low for such an earthquake. This reported success stimulated great optimism in the earthquake prediction community, but it did not signal a widespread breakthrough in prediction science.


FIGURE 2.14 Plot of magnitude versus time of occurrence for the larger foreshocks of the Haicheng earthquake sequence, February 1-4, 1975. As indicated by the final large event, the main shock occurred at 7:36 p.m. on February 4. SOURCE: P. Molnar, T. Hanks, A. Nur, B. Raleigh, F. Wu, J. Savage, C. Scholz, H. Craig, R. Turner, and G. Bennett, Prediction of the Haicheng earthquake, Eos, Trans. Am. Geophys. Union, 58, 236-272, 1977. Copyright 1977 American Geophysical Union. Reproduced by permission of American Geophysical Union.

First, the foreshock series and hydrologic precursors were highly unusual, and similar phenomena have not been recognized before other large earthquakes. Second, the Chinese issued many false alarms, so the possibility of success by chance cannot confidently be rejected. Unfortunately, complete records of predictions and consequent actions are not accessible. The apparent triumph of the Haicheng prediction was soon overshadowed by disaster in July 1976, when a devastating (M 7.8) quake struck the Chinese city of Tangshan, resulting in the deaths of at least 240,000 people—one of the highest earthquake death tolls in recorded history. Although this area was also being monitored extensively, the disaster was not predicted.

Nevertheless, many prominent geophysicists were convinced that systematic short-term prediction was feasible and that the remaining challenge was to deploy adequate instrumentation to find and measure earthquake precursors (149). By 1976 a distinguished group of earthquake scientists convened by the National Research Council was willing to state (150):


The Panel unanimously believes that reliable earthquake prediction is an achievable goal. We will probably predict an earthquake of at least magnitude 5 in California within the next five years in a scientifically sound way and with a sufficiently small space and time uncertainty to allow public acceptance and effective response.

In 1977, the U.S. government initiated the National Earthquake Hazards Reduction Program (Appendix A) to provide “data adequate for the design of an operational system that could predict accurately the time, place, magnitude, and physical effects of earthquakes.” The USGS has the responsibility for issuing a prediction (statement that an earthquake will occur), whereas state and local officials have the responsibility for issuing a warning (recommendation or order to take defensive action).

The observational and theoretical basis for prediction soon began to unravel. Careful, repeated measurements showed that the purported VS/VP anomalies were not reproducible (151). At the same time, questions arose about the uniqueness of a posteriori reports of geodetic, geochemical, and electromagnetic precursors. Finally, theoretical models (152) incorporating laboratory rock dilatancy, microcracking, and fluid flow gave no support to the hypothesized VS/VP time history. By the end of the 1970s, most of the originally proposed precursors were recognized to be of limited value for short-term earthquake prediction (153).

Attention shifted in the 1980s to searching for transient slip precursors preceding large earthquakes. The hypothesis that such behavior might occur was based on the results of detailed laboratory sliding experiments and model simulations (154) and on qualitative field observations prior to an M 6 earthquake on the San Andreas fault near Parkfield, California (155). The preseismic slip observed under laboratory conditions was very subtle, but theoretical calculations suggested that under favorable conditions it might be observable in the field, provided that the critical slip distance Dc observed in the lab studies scaled to a larger size on natural faults.

To investigate these issues, the USGS launched a focused earthquake prediction experiment near Parkfield, in anticipation that an M 6 earthquake was imminent. Geodetic instrumentation, strainmeters, and tiltmeters were deployed to make continuous, precise measurements of crustal strains near the expected epicenter (Figure 2.15). The strain data were expected to place much stricter bounds on any premonitory slip. The predicted moderate earthquake has not occurred, so it is premature to evaluate the success of the search for short-term precursors. Nonetheless, the Parkfield experiment has contributed valuable data that improve our understanding of faults, deformation, and earthquakes.

After more than a century of intense research, no reliable method for short-term earthquake prediction has been demonstrated, and there is no guarantee that reliable short-term prediction will ever be feasible.


FIGURE 2.15 The network of crustal deformation instruments along the San Andreas fault maintained by the U.S. Geological Survey near Parkfield, California. SOURCE: U.S. Geological Survey.

At best, only a few earthquake “precursors” have been identified, and their applicability to other locations and earthquakes is questionable. Research continues on a broad range of proposed techniques for short-term prediction, as does vigorous debate on its promise (156). Most seismologists now agree that the difficulties of earthquake prediction were previously underestimated and that basic understanding of the earthquake process must precede prediction.

2.7 EARTHQUAKE ENGINEERING

The 1891 Nobi earthquake killed more than 7000 people and caused substantial damage to modern brick construction in the Nagoya region (157). Milne noted the extreme variability of ground shaking over short distances and reported that “buildings on soft ground … suffer more than those on hard ground.” He laid the foundation for the development of codes regulating building construction by emphasizing that “we must construct, not simply to resist vertical stresses, but carefully consider effects due to movements applied more or less in horizontal directions” (158). Milne’s conclusions were echoed in California following the 1906 San Francisco earthquake. J.C. Branner, a Stanford professor of geology on the Lawson Commission, supervised a detailed study of more than 1000 houses in San Mateo and Burlingame, and he noted that the local site response had a major influence on the level of damage: “The intensity of the shock was less on the hills than on the flat, in spite of the fact that the houses in the hills were nearer the fault line.” Throughout California, the damage patterns were well correlated with the type of structure and building materials (159).

Early Building Codes

The first attempt to quantify the “earthquake design force” was made after the 1908 Messina-Reggio earthquake in southern Italy, which killed more than 83,000. In a report to the Italian government, M. Panetti, a professor of applied mechanics in Turin, recommended that new buildings be designed to withstand horizontal forces proportional to the vertical load (160). The Japanese engineer Toshikata Sano independently developed in 1915 the idea of a lateral design force V proportional to the building’s weight W. This relationship can be written as V = CW, where C is a lateral force coefficient, expressed as some percentage of gravity (%g, where g = 9.8 m/s²). The first official implementation of Sano’s criterion was the specification C = 10 percent of gravity, issued as a part of the 1924 Japanese Urban Building Law Enforcement Regulations in response to the destruction caused by the great 1923 Kanto earthquake (161). In California, the Santa Barbara earthquake of 1925 motivated several communities to adopt codes with C as high as 20 percent of gravity. The first edition of the U.S. Uniform Building Code (UBC), published in 1927, also adopted Sano’s criterion, allowing for variations in C depending on the region and foundation material (162). For building foundations on soft soil in earthquake-prone regions, the UBC’s optional provisions corresponded to a lateral force coefficient equal to the Japanese value.
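To make the arithmetic of these early provisions concrete, the following minimal sketch (not taken from the report) evaluates Sano’s relationship V = CW for a hypothetical building; the building weight and the helper function name are illustrative assumptions, and C = 0.10 corresponds to the 10 percent of gravity specified in the 1924 Japanese regulations.

```python
# Minimal illustrative sketch (not from the report): the early static design rule
# V = C * W, with C expressed as a fraction of gravity. The building weight and
# the function name are hypothetical.

def lateral_design_force(weight_kn, coefficient):
    """Return the static lateral design force (base shear) V = C * W, in kilonewtons."""
    return coefficient * weight_kn

W = 20_000.0   # assumed total building weight, kN
C = 0.10       # lateral force coefficient, 10 percent of gravity (1924 Japanese rule)
V = lateral_design_force(W, C)
print(f"Design lateral force V = {V:.0f} kN")   # -> 2000 kN
```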

Measurement of Strong Ground Motions

By 1930, networks of permanent seismic observatories allowed the location and analysis of large earthquakes anywhere on the globe. However, the sensitive instruments could not register the strong (high-amplitude) ground motions close to large earthquakes, the primary cause of damage and loss of life, and were of little value to engineers. Consequently, engineers were forced to estimate the magnitude of the near-source ground accelerations from damage effects (e.g., overturned objects). The American engineer John Freeman voiced the frustration felt by many of his colleagues when he wrote in 1930 (163):

The American structural engineer possesses no reliable accurate data about form, amplitude or acceleration of the motion of the earth during a great earthquake…. Notwithstanding there are upward of fifty seismograph stations in the country and an indefinitely large number of seismologists, professional and amateur; their measurements of earthquake motion have been all outside of the areas so strongly shaken as to wreck buildings.

Japanese seismologists were the first to attempt to obtain these data systematically. They began to record strong ground motions using long-period seismometers with little or no magnification, and by the 1930s, the development of broader-band, triggered devices allowed accurate measurement of the waves most destructive to buildings, those with shorter period and therefore higher acceleration. The Long Beach earthquake of 1933 was the first large event to be recorded by these improved strong-motion seismometers, several of which had been installed in the Los Angeles region just nine months before the earthquake. This new equipment recorded a peak acceleration of 29 percent of gravity on the vertical component and 20 percent of gravity on the horizontal component. The widespread damage caused by the 1933 Long Beach earthquake (Figure 2.16) spurred legislation for stricter building codes throughout California. One month after the event, the California Assembly passed the Field Act, which effectively prohibited masonry construction in public schools by instituting a lateral force requirement equivalent to 10 percent of the sum of the dead load (weight of the building) and the live load (weight of the contents). The Riley Act, also enacted in 1933, required all buildings in California to resist lateral forces of at least 2 percent of the total vertical design load. On September 6, 1933, the city of Los Angeles passed a law requiring a lateral force of 8 percent of the dead load plus 4 percent of the live load.

The success of the Long Beach recording can be credited to the Seismological Field Survey, which was established in California by the U.S. Department of Commerce at the urging of Freeman. A limited number of strong-motion instruments were deployed (164). One such instrument, located on the concrete block foundation of the Imperial Valley Irrigation District building in El Centro, recorded the next significant California event, the 1940 Imperial Valley earthquake (M 7.1).


FIGURE 2.16 Jefferson Junior High School destroyed in the Long Beach earthquake. The Field Act, passed one month after the earthquake, prohibited this type of masonry construction for schools in California. SOURCE: Steinbrugge Collection, Earthquake Engineering Research Center, University of California, Berkeley. Photograph by Harold M. Engle.

A peak horizontal acceleration of 33 percent of gravity was recorded at a distance of approximately 10 kilometers from the fault rupture. For the next 25 years, this was the largest measured ground acceleration, establishing the El Centro record as the de facto standard for earthquake engineering in the United States and Japan (Figure 2.17).

Response Spectra for Structural Analysis

Both the Long Beach and the El Centro data influenced the development of seismic safety provisions in building codes. However, the impact of seismometry on earthquake engineering was limited by the lack of data from a wider distribution of earthquakes, as well as by computational difficulties in performing a quantitative analysis of ground shaking and its effect on structures. Simplified techniques for structural analysis, such as H. Cross’s moment-distribution method and K. Muto’s D-value method, had been encoded in tables and figures by the early 1930s (165).


FIGURE 2.17 Accelerogram of the 1940 Imperial Valley earthquake (center panel), schematic representation of the response of a series of single degree-of-freedom oscillators having different natural periods (top), and their application to the construction of the response spectrum (bottom panel). SOURCE: H.B. Seed and I.M. Idriss, Ground Motions and Soil Liquefaction During Earthquakes, Earthquake Engineering Research Institute, Engineering Monograph on Earthquake Criteria, Structural Design, and Strong Motion Records 5, El Cerrito, Calif., 134 pp., 1982.


The advent of analog computers in the 1940s provided the first simulations of structural vibrations induced by the recorded ground motions (166) and allowed the automation of strong-motion spectral analysis (167). These early calculations showed that the spectra of earthquake accelerations are similar to “white noise” over a limited range of frequencies, a pivotal observation in the study of earthquake source processes. However, the immediate implication for earthquake engineering was the lack of a “dominant ground period” that might be destructive to particular structures (168). Without a characteristic frequency, earthquake engineering was recognized to be complex, requiring a comprehensive analysis of coupled vibrations between earthquakes and structures. George Housner outlined the issues in 1947:

In engineering seismology, the response of structures to strong-motion earthquakes is of particular interest…. During an earthquake a structure is subjected to vibratory excitation by a ground motion which is to a high degree erratic and unpredictable…. Furthermore, the average structure, together with the ground upon which it stands, is an exceedingly complex system from the viewpoint of vibration theory. It is apparent the problem divides itself into two parts; first a determination of the characteristics of strong motion earthquakes, and second a determination of the characteristics of structures subjected to earthquakes.

Following an earlier suggestion by M.A. Biot, Housner put forward the concept of the response spectrum, the maximum response induced by ground motion in single degree-of-freedom oscillators (“buildings”) with different natural periods but the same degree of internal damping (usually selected to be 5 percent) (169) (Figure 2.17). At shorter periods the maximum induced acceleration exceeds the recorded ground acceleration, whereas for longer periods it is less. When multiplied by the effective mass of a building, the response spectrum acceleration constrains the lateral force that a building must sustain during an earthquake. Computing response spectra over a wide range of frequencies using data from a wide range of earthquakes significantly improved understanding of the damage potential of strong motion.
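The following minimal sketch illustrates the response-spectrum idea in code; it is not the procedure used by Housner or Biot. A synthetic accelerogram stands in for a strong-motion record, each single degree-of-freedom oscillator is integrated with a standard Newmark average-acceleration scheme, and the pseudo-acceleration spectrum is the peak relative displacement multiplied by the square of the natural circular frequency. All numerical values and function names are illustrative assumptions.

```python
# Illustrative sketch of a response-spectrum calculation (not the report's procedure):
# drive a set of damped single degree-of-freedom oscillators with a ground accelerogram
# and record the peak response of each as a function of natural period.
import numpy as np

def sdof_peak_displacement(ag, dt, period, damping=0.05):
    """Peak relative displacement of a damped SDOF oscillator (Newmark average acceleration)."""
    omega = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * damping * omega, omega ** 2
    gamma, beta = 0.5, 0.25
    a1 = m / (beta * dt ** 2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
    k_hat = k + a1
    u = v = 0.0
    a = -ag[0]                      # initial relative acceleration (u = v = 0)
    peak = 0.0
    for p_next in -ag[1:]:          # effective load per unit mass is -ag(t)
        p_hat = p_next + a1 * u + a2 * v + a3 * a
        u_next = p_hat / k_hat
        v_next = gamma / (beta * dt) * (u_next - u) + (1 - gamma / beta) * v \
                 + dt * (1 - gamma / (2 * beta)) * a
        a = (u_next - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v = u_next, v_next
        peak = max(peak, abs(u))
    return peak

# Synthetic "accelerogram": 20 s of decaying band-limited noise (assumed stand-in data).
dt = 0.01
rng = np.random.default_rng(0)
ag = rng.normal(0.0, 1.0, 2000) * np.exp(-np.arange(2000) * dt / 5.0)   # m/s^2

periods = np.arange(0.1, 3.01, 0.1)
Sa = [((2 * np.pi / T) ** 2) * sdof_peak_displacement(ag, dt, T) for T in periods]
for T, sa in zip(periods[::5], Sa[::5]):
    print(f"T = {T:.1f} s   pseudo-acceleration Sa = {sa:.2f} m/s^2")
```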

Building Code Improvements Since 1950

The availability of strong-motion data began to transform earthquake engineering from a practice based on pseudostatic force criteria to a science grounded in an understanding of the complex coupling between ground motions and building vibrations. By the 1950s, strong-motion records were combined with response spectral analysis to demonstrate that structures can amplify the free-field accelerations (recorded on open ground). To approximate this dynamic behavior, a committee of the American Society of Civil Engineers and the Structural Engineers Association of Northern California proposed in 1952 that the lateral force requirement be revised to vary inversely with the building’s fundamental period of vibration (C ~ T–1). With only a handful of strong-motion recordings available at the time, the decrease in the response spectral accelerations with period remained uncertain. Particular attention was focused on the band from 0.5 to 5.0 seconds, which includes the fundamental periods of vibration for most midrise to high-rise buildings as well as many other large structures.

The lateral force coefficient was recast in the 1961 UBC with a weaker (inverse cube root) dependence on the response period: C ~ ZKT–1/3. This version introduced a seismic zone factor Z that represented the variability of the seismic hazard throughout the United States and a structural factor K that depended on building type and accounted for its dynamic response. The parameters were chosen to reproduce as well as possible the response spectral accelerations measured in previous earthquakes, which were still sparse. The uncertainties in the empirical coefficients remained high, but the form of the lateral force requirement did establish a firm connection between strong-motion measurements and the requirements of earthquake engineering.

The dearth of strong-motion data ended when the San Fernando earthquake (M 6.6) struck the Los Angeles region on February 9, 1971. It subjected a community of more than 400,000 people to ground accelerations greater than 20 percent of gravity and triggered in excess of 200 strong-motion recorders, more than doubling the size of the database. San Fernando provided the first well-resolved picture of the temporal and spatial variability of ground shaking during an earthquake (170). Short-period (0.1-second) accelerations varied widely, even among nearby sites with similar geologic conditions, while long-period (10-second) displacements were coherent over tens of kilometers (171). More important, this earthquake demonstrated that the ground motions could substantially exceed the maximum values observed in previous events. A strong-motion instrument on an abutment of the Pacoima Dam, 3 kilometers above the fault plane, recorded a sharp, high-amplitude (100-centimeter-per-second) velocity pulse in the first three seconds of the earthquake, as the rupture front passed under the dam (Figure 2.18). Four seconds later, after the rupture had broken the surface 5 kilometers away in the San Fernando Valley, the Pacoima instrument recorded an acceleration pulse exceeding 1.2 g in the horizontal plane. This value more than doubled the highest previously observed peak ground acceleration (PGA), measured during the 1966 Parkfield earthquake (M 5.5) on the San Andreas fault (172).


FIGURE 2.18 Three component records of acceleration, velocity, and displacement from the Pacoima Dam record of the San Fernando earthquake. The horizontal instruments are approximately parallel and perpendicular to the horizontal component of the rupture (N14E and S76E, respectively) and show accelerations exceeding 1 g. SOURCE: D.M. Boore and M.D. Zoback, Two dimensional kinematic fault modeling of the Pacoima Dam strong-motion recordings of February 9, 1971, San Fernando Earthquake, Bull. Seis. Soc. Am., 64, 555-570, 1974. Copyright Seismological Society of America.

The short acceleration pulse observed at Pacoima Dam engendered much discussion regarding the utility of PGA as a measure of seismic hazard. This pulse did not make a significant contribution to the overall response spectra values, except at the shortest periods (173), and when the data from all available earthquakes were considered, PGA was only weakly correlated with the size of the earthquake (174). From these and subsequent studies, it became clear that the PGA was not necessarily the best determinant of seismic hazard to structures; other characteristics—such as the response spectrum ordinates, anisotropic motions, and the occurrence of intense, low-frequency velocity pulses—were found to be more important.

After the 1971 San Fernando earthquake, policy makers tried to update building codes in light of the large amount of data on ground motion and building response collected from this urban event. The wealth of strong-motion data also prompted a 1976 revision to the UBC, which modified the period scaling in the lateral force equation from T–1/3 to T–1/2 and introduced a factor S based on local soil type. The newly formed Applied Technology Council (ATC) organized, with funding from the National Science Foundation (NSF) and the National Bureau of Standards, a national effort to develop a model seismic code. More than 100 professionals who volunteered for the work were organized into 22 committees. In a comprehensive report published in 1978 (175), the ATC proposed a more physically based lateral force coefficient of the form C ~ AvSR–1T–2/3, where Av is the effective peak ground velocity-related acceleration coefficient, S is a site-dependent soil factor, and R is a “response modification factor” dependent on the structure type. At shorter periods, this expression was replaced by a limiting value proportional to the effective peak acceleration coefficient Aa. The report also provided the first contoured maps of the ground-motion parameters Aa and Av, derived from a probabilistic seismic hazard analysis conducted by the USGS.
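The following sketch illustrates the general form of the ATC-style coefficient described above; the proportionality constants (1.2 and 2.5) and the site and structure values are assumptions chosen for illustration, not values taken from the 1978 report.

```python
# Illustrative sketch: a lateral force coefficient that varies as Av*S/(R*T**(2/3)),
# capped at short periods by a value proportional to Aa. Constants and inputs are assumed.

def atc_style_coefficient(T, Av, Aa, S, R, k_v=1.2, k_a=2.5):
    """Period-dependent lateral force coefficient with a short-period cap."""
    c_velocity = k_v * Av * S / (R * T ** (2.0 / 3.0))   # intermediate/long periods
    c_cap = k_a * Aa / R                                 # short-period limit
    return min(c_velocity, c_cap)

# Hypothetical site and structure: Av = Aa = 0.4 (high-hazard zone), soil factor S = 1.5,
# response modification factor R = 6 (ductile frame), evaluated over a range of periods.
for T in (0.2, 0.5, 1.0, 2.0):
    C = atc_style_coefficient(T, Av=0.4, Aa=0.4, S=1.5, R=6.0)
    print(f"T = {T:.1f} s  ->  C = {C:.3f} (fraction of building weight W)")
```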

Strong-motion data from a number of earthquakes, as well as laboratory test data and results of numerical site response models, demonstrated the need to modify the soil factor S to reflect nonlinear site response. The National Center for Earthquake Engineering Research, which NSF established in 1986, led the revision, recommending two sets of amplitude-dependent site amplification factors derived for six site geology classifications. The factors were first incorporated into the 1994 National Earthquake Hazard Reduction Program (NEHRP) seismic provisions and then into the 1997 UBC.

Strong-motion data from the 1994 Northridge, California (M 6.7), and 1995 Kobe, Japan (M 6.9), earthquakes confirmed observations from several previous earthquakes that motions recorded close to the fault rupture had distinct pulse-like characteristics, which were not represented in the code’s lateral force equation. The effect of these pulse motions was approximated by introducing near-fault factors into the lateral force equation. A single factor N was first introduced in the base isolation section of the 1994 UBC. This representation was replaced by two near-fault factors Na and Nv in the lateral force provisions of the 1997 UBC. The Na factor was applied to the short-period, constant-acceleration portion of the design response spectrum, whereas the Nv factor was applied to the intermediate- and long-period constant-velocity portion, where the base shear is proportional to T–1. Interestingly, after a 36-year absence, the T–1 proportionality was reintroduced in the 1997 UBC in part because it was judged to be a more accurate representation of the spectral character of earthquake ground motion (176).


Attenuation Relationships

Engineers wanted the scattered ground-motion observations reduced to simple empirical relationships that practitioners could apply, and the derivation of these relationships became a central focus of engineering seismology. A measure of shaking intensity was chosen (typically peak ground acceleration or velocity), and the observed variation of this intensity measure was factored into source, path, and site effects by identifying one or more independent control variables—typically, source magnitude, path distance, and site condition (e.g., soil or rock)—and fitting the observations with parameterized curves. The magnitude dependence or scaling and the fall-off of strong-motion amplitude with epicentral distance were together called the attenuation relation.
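A minimal sketch of how such an attenuation relation can be fit is given below, using a generic functional form, ln(PGA) = a + b·M − c·ln(R + h), and synthetic observations; the coefficients, the fictitious data, and the assumed scatter are illustrative only and do not correspond to any published relation.

```python
# Illustrative sketch: fitting a generic attenuation relation of the form
# ln(PGA) = a + b*M - c*ln(R + h) to synthetic data by least squares. The "true"
# coefficients, the pseudo-data, and the scatter are assumptions, not a published model.
import numpy as np

rng = np.random.default_rng(1)
n = 200
M = rng.uniform(4.5, 7.5, n)            # magnitudes
R = rng.uniform(5.0, 200.0, n)          # source-to-site distances, km
h = 10.0                                # assumed near-source saturation term, km
true_a, true_b, true_c = -3.5, 1.2, 1.8
ln_pga = true_a + true_b * M - true_c * np.log(R + h) + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), M, -np.log(R + h)])   # design matrix for (a, b, c)
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
sigma = np.std(ln_pga - X @ coef)
print("fitted a, b, c =", np.round(coef, 2), " scatter sigma(ln PGA) =", round(sigma, 2))
```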

Lack of data precluded plotting PGA as a function of magnitude and epicentral distance until the 1960s. Figure 2.19 shows an attenuation relationship obtained from the strong-motion data for the 1979 Imperial Valley earthquake. The dispersion in the data resulted in a relative standard deviation of about 50 percent, which was typical. Other relationships described the site response in terms of the correlation between intensity measures and soil and rock conditions, including allowance for nonlinear soil behavior as a function of shaking intensity (Figure 2.20).

As the use of the response spectrum method increased, it became necessary to develop techniques to predict not only the PGA (equivalent to the response spectral value at zero period) but also the response spectra of earthquakes that might occur in the future. This was done initially by developing a library of response spectral shapes that varied with earthquake magnitude and soil conditions; the selected shape was anchored to a peak acceleration obtained from a set of attenuation relationships, each of which predicted the spectral acceleration at a specific period. Eventually, response spectra were computed directly from ground-motion attenuation relationships.

Seismic Hazard Analysis

By the 1960s, growing strong-motion databases and scientific understanding enabled site-specific seismic hazard assessments incorporating information about the length and distance of neighboring faults, the history of seismicity, and empirical predictions of ground-motion intensity for events of specified magnitude at specified distances. For major facilities in the western United States, in particular nuclear power plants such as San Onofre and Diablo Canyon (177), seismic hazard assessment focused on the maximum magnitude that each fault could produce, its closest distance to the site, and the PGA for these events.


FIGURE 2.19 Recorded peak accelerations of the 1979 Imperial Valley earthquake and an attenuation relation that has been fit to the data. SOURCE: H.B. Seed and I.M. Idriss, Ground Motions and Soil Liquefaction During Earthquakes, Earthquake Engineering Research Institute, Engineering Monograph on Earthquake Criteria, Structural Design, and Strong Motion Records 5, El Cerrito, Calif., 134 pp., 1982.

PGA was then the primary scalar measure of ground-motion intensity for use in structural analysis and design. Typically PGA was used to scale a standard response spectral shape or, if the engineer requested more detailed ground-motion information, “standard” accelerograms, such as the El Centro record from the 1940 Imperial Valley earthquake.


FIGURE 2.20 Average response spectral shapes for different soil categories derived from strong-motion recordings. The response spectral shapes are normalized to peak acceleration, which is equivalent to zero-period acceleration. The dashed line ABCD indicates a simplified response spectral shape for rock and stiff soil sites that was developed for use in building codes. SOURCE: H.B. Seed and I.M. Idriss, Ground Motions and Soil Liquefaction During Earthquakes, Earthquake Engineering Research Institute, Engineering Monograph on Earthquake Criteria, Structural Design, and Strong Motion Records 5, El Cerrito, Calif., 134 pp., 1982.

The basic motivation for these procedures was to identify a conservative or bounding value for the maximum potential threat at a specific site. These procedures are now referred to as deterministic to distinguish them from the probabilistic techniques that followed.

The deterministic methods of seismic hazard analysis developed for seismically active sites in California and other western states were unsuited to the tectonically stable environment of the eastern United States, where likely earthquake sources were largely unknown and strong-motion data had not yet been recorded. Therefore, a modified deterministic analysis had to be developed for some 100 nuclear power plants east of the Rocky Mountains, based on “seismotectonic zones” developed from historic seismic activity and geologic trends. Application of these methods relies heavily on the judgment of scientists and engineers. Although the historical record for this analysis is comparatively long (about 300 years), estimates of past earthquake magnitude were limited to verbal accounts of earthquake effects translated into the 12 levels of the Modified Mercalli Intensity (MMI) scale (see endnote 27). The largest historic event in the zone became the basis for establishing the maximum event, typically the largest MMI or one-half intensity unit larger, depending on circumstances (e.g., nature of the facility, design rules, safety factors adopted by the engineers). The largest event in each zone was then presumed to occur as close to the site as the seismotectonic zone boundary permitted, except for events in the zone that contained the site, where a minimum separation was adopted to reflect the improbability of an event occurring very close to the site. Lacking sufficient ground-motion data, engineering seismologists used MMI data to develop attenuation relations, calibrating MMI to PGA with data from the western United States.

Probabilistic seismic hazard analysis (PSHA) was developed to characterize and integrate several effectively random elements of earthquake occurrence and ground-motion forecasting. The method uses probabilistic analysis to combine the seismic potential of several threatening faults or spatially distributed source zones, each characterized by an assumed frequency-magnitude distribution, to obtain an estimate of the total hazard, defined as the mean annual rate at which the chosen intensity measure, such as PGA, will exceed some specified threshold at a prescribed site (178). For each fault, the contribution to hazard was derived from a convolution of the mean annual rate of earthquakes with the probability that the shaking intensity will be exceeded for an event of specified magnitude. The method allows for assumed distributions of event location (e.g., randomly along the fault or within a region) and for variance about the predicted ground motions due to natural variability. The final result for a site is a hazard curve, a plot of the mean annual frequency of exceedance as a function of the specified intensity level.
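The logic of the hazard integral can be sketched in a few lines of code for a single hypothetical source; the recurrence parameters, attenuation coefficients, and ground-motion scatter below are assumed values for illustration and are not drawn from the report.

```python
# Minimal PSHA sketch for one hypothetical fault (illustrative only): the mean annual
# rate at which PGA exceeds a threshold is the activity rate times the probability of
# exceedance, summed over a truncated Gutenberg-Richter magnitude distribution, with
# lognormal ground-motion variability about an attenuation relation like the one above.
import numpy as np
from math import erf

def exceed_prob(ln_threshold, ln_median, sigma):
    """P[ln PGA > ln_threshold] for a lognormal ground-motion model."""
    z = (ln_threshold - ln_median) / sigma
    return 0.5 * (1.0 - erf(z / np.sqrt(2.0)))

# Hypothetical source: 0.05 events/yr above M 5.0, maximum M 7.5, distance 20 km,
# Gutenberg-Richter b-value of 1.0 (truncated exponential magnitude distribution).
nu, b, m_min, m_max, r = 0.05, 1.0, 5.0, 7.5, 20.0
mags = np.arange(m_min, m_max + 0.05, 0.1)
beta = b * np.log(10.0)
weights = np.exp(-beta * (mags - m_min))
weights /= weights.sum()                          # discrete magnitude weights

def ln_median_pga(m, dist):                       # assumed attenuation form (arbitrary units of g)
    return -3.5 + 1.2 * m - 1.8 * np.log(dist + 10.0)

sigma = 0.5                                       # assumed aleatory scatter in ln PGA
for pga in (0.05, 0.1, 0.2, 0.4, 0.8):            # thresholds (assumed units of g)
    rate = nu * sum(w * exceed_prob(np.log(pga), ln_median_pga(m, r), sigma)
                    for m, w in zip(mags, weights))
    print(f"PGA > {pga:.2f} g : mean annual rate of exceedance = {rate:.2e} /yr")
```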

By the late 1970s, the PSHA method had been tested and its application was growing throughout engineering seismology. As with the deterministic method, the practical application of PSHA at a specific site requires professional judgments based on local data and experience. In response to these difficulties, uncertainties in the model parameters associated with the limits of scientific information (e.g., in earthquake catalogs, fault locations, ground-motion prediction) are quantified and propagated through PSHA to produce quantitative confidence bounds on the resulting hazard curves. The objective of the hazard curve is to capture the randomness or “aleatory uncertainty” inherent in the forecasting of future events, while the confidence bounds reflect the current limits on professional knowledge, or “epistemic uncertainty,” in such forecasts. Figure 2.21 presents an example of the analysis for a site in the San Francisco Bay area.

PSHA relies on a wider range of scientific information than deterministic analysis.


FIGURE 2.21 Mean estimate and 5/95 percent confidence bounds on the mean annual frequency hazard curves for the east end of the San Mateo-Hayward Bridge, San Francisco Bay area, California, for PGA and 5 percent damped spectral accelerations at oscillator periods of 0.3 and 3.0 seconds. SOURCE: Geomatrix Consultants, Inc., Seismic Ground Motion Study for San Mateo-Hayward Bridge, Final Report, prepared for CALTRANS, Division of Structures, Oakland, Calif., 234 pp. + 6 appendixes, February 1993.


It also satisfies modern engineering requirements for a probabilistic definition of risk. Common engineering practice evolved to define a design ground motion in terms of a specified frequency of exceedance. This value is lower (i.e., increases the design requirements) for facilities where structural failure involves more severe consequences.
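Under the Poisson assumption that is conventional in this context, the mapping between a design life, a probability of exceedance, and the mean annual frequency read from a hazard curve is a one-line calculation, sketched below with illustrative values (for example, a 10 percent chance of exceedance in 50 years corresponds to a return period of roughly 475 years).

```python
# Illustrative conversion (assuming Poisson occurrence) between a probability of
# exceedance over a design life and the mean annual frequency of exceedance read
# from a hazard curve: P = 1 - exp(-lambda * t).
import math

def annual_rate(prob, life_years):
    """Mean annual exceedance rate implied by probability `prob` in `life_years` years."""
    return -math.log(1.0 - prob) / life_years

for prob, life in ((0.10, 50.0), (0.02, 50.0)):
    lam = annual_rate(prob, life)
    print(f"{prob:.0%} in {life:.0f} yr  ->  {lam:.5f} per yr  (return period ~{1/lam:.0f} yr)")
```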

Seismic Hazard Maps

Seismic hazard analysis for buildings, highway overpasses, and smaller structures has traditionally relied on design values mapped nationally or regionally. Early maps were quite crude owing to the typical building-code practice of using only four or five discrete zones with large relative differences (factor of 2) in ground-motion level. At first, the zones were drawn largely to reflect historic seismicity. For example, the first seismic probability map for the United States, distributed in 1948 by the U.S. Coast and Geodetic Survey (USCGS), simply used the locations of historic earthquakes and divided the country into four zones ranging from no expected damage to major damage (179). This basis led to understated earthquake hazards in the Pacific Northwest, the eastern Basin and Range Province, and other places with long recurrence intervals. The work was revised in 1958 and 1959 when Charles Richter published several maps based on the seismic regionalization technique that Soviet seismologists had developed in the 1940s (180). Richter also relied on historic seismicity and employed MMI as the intensity measure. In 1969, S.T. Algermissen of the USCGS produced a national map with maximum MMI values from historic earthquakes contoured as zones, along with a table and map of earthquake recurrence rates. The maximum-intensity map was the basis for the UBC national zoning map published in 1970.

Several years later, Algermissen and coworkers at the USGS, using PSHA, repeated the national mapping (181). They produced a seismic hazard curve at each point on a grid; the PGA was calculated for a 10 percent probability of exceedance in 50 years; and these values were contoured to produce a national seismic hazard map. The maps provided quantitative estimates of the expected shaking (excluding site effects). They also furnished a compelling visual representation of the relative seismic hazard among different locations in the United States and were the basis for national building code zoning maps in 1979. The USGS updated the national seismic hazard maps in 1982, 1990, 1991, 1994, and 1996, incorporating new knowledge on earthquake sources and seismic-wave propagation. The 1991 maps were the first to display probabilistic values of response spectral ordinates and were published in the NEHRP Recommended Provisions for Seismic Regulations for New Buildings. The 1996 maps implemented a completely new PSHA methodology and provide the basis for the probabilistic portion of the seismic design guidelines in the 1997 and 2000 NEHRP Provisions and the 2000 International Building Code. These seismic hazard maps are also used in seismic provisions for highway bridge design, the International Residential Code, and many other applications.

Challenges Ahead

Establishing building codes, developing attenuation relationships, and performing seismic hazard analysis are all examples of earthquake engineering activities that have helped quantify and reduce the threat posed by earthquakes; however, recent large earthquakes make it clear that significant challenges remain. For example, the 1995 Hyogo-ken Nanbu earthquake (Box 2.5, Figures 2.22 and 2.23) devastated the city of Kobe, in Japan—one of the most earthquake-prepared countries in the world. That this earthquake caused such tremendous damage and loss of life indicates that, as urbanization of earthquake-prone regions increases, reducing or even containing the vulnerability to future earthquakes constitutes a major and continuing challenge for earthquake science and engineering.

FIGURE 2.22 Kobe-Osaka region, annotated with major faults and geographic reference points, including inferred rupture during the 1995 earthquake and damage belt. SOURCE: H. Kawase, The cause of the damage belt in Kobe: “The basin-edge effect,” constructive interference of the direct s-wave with the basin-induced diffracted/Rayleigh waves, Seis. Res. Lett., 67, 25-34, 1996. Copyright Seismological Society of America.


FIGURE 2.23 Numerical simulations of basin-edge effects in the 1995 Hyogo-ken Nanbu earthquake. SOURCE: A. Pitarka, K. Irikura, T. Iwata, and H. Sekiguchi, Three-dimensional simulation of the near-fault ground motion for the 1995 Hyogo-ken Nanbu (Kobe), Japan, earthquake, Bull. Seis. Soc. Am., 88, 428-440, 1998. Copyright Seismological Society of America.


BOX 2.5 Kobe, Japan, 1995

The official name of the M 6.9 earthquake that struck Kobe, Japan, on January 17, 1995 is Hyogo-ken Nanbu (Southern Hyogo Prefecture). It killed at least 5500 people, injured more than 26,000, and caused immense destruction throughout a metropolis of 1.5 million people. One-fifth of its inhabitants were left homeless, and more than 100,000 buildings were destroyed. The total direct economic loss has been estimated as high as $200 billion.1 The Japanese call an earthquake with an epicenter directly under a city a chokkagata. History has demonstrated repeatedly that a direct hit on an urban center can be terribly destructive; for example, an earlier chokkagata wiped out the city of Tangshan, China, in 1976, killing at least 240,000. Nevertheless, given the rigorous Japanese building codes and disaster preparations, the extreme devastation to the city center was surprising. In contrast, the 1994 Northridge earthquake was of comparable size (M 6.7, only a factor of 2 smaller in seismic moment), and occurred in a densely populated region (the San Fernando Valley of California) but killed only 57 people and caused about $20 billion in damages.2 The high losses in the Hyogo-ken Nanbu earthquake can be attributed to at least four independent factors:

  1. Rupture Directivity: The hypocenter of the earthquake was on a nearly vertical fault at a depth of about 14 kilometers directly beneath the Akashi Kaikyo suspension bridge. The right-lateral rupture propagated southwestward toward Awaji Island, where surface displacement of 1 to 1.5 meters was mapped on the Nojima fault, and northeastward along the trend of the Rokko fault zone, straight into the city center.3 This pattern of faulting radiated large-amplitude motions into the heart of downtown Kobe, as if it were at the end of a gun. In comparison, the seismic energy from the Northridge earthquake was preferentially radiated northward into the sparsely populated Santa Susana Mountains.

  2. Basin-Edge Effect: The surface outcrops of the Rokko fault—which mark the boundary between the hard granites in the hills northwest of the city and the soft sediments of the low, narrow coastal strip—were not displaced. Instead, in a narrow zone 500 to 1000 meters southeast of the fault, out in the sedimentary plain, buildings were heavily damaged and houses collapsed (see Figure 2.23). The very strong ground motions in this damage belt were caused by the constructive interference of two waves at the edge of the sedimentary basin bounded by the Rokko fault zone.4 The basin-edge effect identified at Kobe may explain other peculiarities in the patterns of seismic damage, such as the localized regions of damage in Santa Monica at the northwestern edge of the Los Angeles basin during the 1994 Northridge earthquake.5

  3. Poor Construction and Maintenance of Buildings: Differences in the vulnerability of buildings also contributed to the greater damage in Kobe than in Northridge. In California, engineers introduced ductility concepts into building design in 1971, a full decade before their adoption in Japan. While building codes are an effective way to reduce damage to newly erected structures, they do not address the seismic safety of older structures. Most of the buildings severely damaged in Kobe were built before 1981. Because of the progress in seismic engineering over the last several decades, exposure to seismic risk can be much higher for urban areas with many older buildings, such as the cities in the eastern United States.

  4. Poor Emergency Response: The Japan Meteorological Agency (JMA) announced preliminary estimates of the magnitude and hypocenter of the earthquake only 18 minutes after the earthquake, but government officials were still accustomed to using seismic intensity, rather than magnitude and location, to characterize earthquake severity. Since the earthquake immediately knocked out telephone lines in the Kobe region, the highest intensities available to the JMA—5 on the Japanese scale—were from Kyoto and other locations away from the epicenter.6 Emergency response officials did not comprehend the full extent of the disaster until four to six hours after the earthquake, when it became clear that the intensity in downtown Kobe had been as high as 7. This incident highlights the need for seismic information systems that can rapidly collect, analyze, and broadcast an event’s magnitude and location, along with accurate information on peak accelerations and other measures of strong ground motion from a well-distributed set of instruments rugged enough to withstand large earthquakes. It also underscores the need for emergency response officials to be well educated in the proper interpretation of seismological data.

1  

The 1995 Hyogo-ken Nanbu earthquake also provided an example of the dire impact that earthquakes can have on global financial institutions. Nicholas Leeson of the Singapore office of Baring’s Bank in London had been involved in unauthorized trading of Japanese government bonds, and the sudden drop in their value following the Hyogo-ken Nanbu earthquake caused the bank’s collapse.

2  

USGS response to an urban earthquake <http://geohazards.cr.usgs.gov/northridge/norpub1.htm>.

3  

H. Kanamori, The Kobe (Hyogo-ken Nanbu), Japan, earthquake of January 16, 1995, Seis. Res. Lett., 66, 6-10, 1995.

4  

H. Kawase, The cause of the damage belt in Kobe: “The basin-edge effect,” constructive interference of the direct s-wave with the basin-induced diffracted/Rayleigh waves, Seis. Res. Lett., 67, 25-34, 1996; A. Pitarka, K. Irikura, T. Iwata, and T. Kagawa, Basin structure effects in the Kobe area inferred from the modeling of ground motions from two aftershocks of the January 17, 1995 Hyogo-ken Nanbu earthquake, J. Phys. Earth, 44, 563-576, 1996; A. Pitarka, K. Irikura, T. Iwata, and H. Sekiguchi, Three-dimensional simulation of the near-fault ground motion for the 1995 Hyogo-ken Nanbu (Kobe), Japan, earthquake, Bull. Seis. Soc. Am., 88, 428-440, 1998. According to these studies, the strongest ground motions were generated by the constructive interference of two simultaneous arrivals: the direct S (shear) wave propagating vertically through the soft sediments and a horizontally propagating surface wave diffracted into the basin. The latter was generated when the direct S wave traveling faster through the hard, granitic rocks of the sidewall encountered the sharp (fault-controlled) edge of the basin.

5  

R.W. Graves, A. Pitarka, and P. G. Somerville, Ground motion amplification in the Santa Monica area: Effects of shallow basin edge structure, Bull. Seis. Soc. Am., 88, 1224-1242, 1998; P.M. Davis, J.L. Rubinstein, K.H. Liu, S.S. Gao, and L. Knopoff, Northridge earthquake damage caused by geologic focusing of seismic waves, Science, 289, 1746-1750, 2000; W.J. Stephenson, R.A. Williams, J.K. Odum, and D.M. Worley, High resolution seismic reflection surveys and modeling across an area of high damage from the 1994 Northridge earthquake, Sherman Oaks, California, Bull. Seis. Soc. Am., 90, 643-654, 2000.

6  

R. Geller, The role of seismology, Nature, 373, 554, 1995; K. Yamakawa, The Prime Minister and the earthquake: Emergency management leadership of Prime Minister Murayama on the occasion of the great Hanshin-Awaji earthquake disaster, Kansai Univ. Rev. Law and Politics, 19, 13-55, 1998.



NOTES

1.  

A general historical account is given by B.A. Bolt, Earthquakes and Geological Discovery, W.H. Freeman, New York, 229 pp., 1993. For a history of Japanese seismology, see T. Utsu, Seismological evidence for anomalous structure of island arcs, Rev. Geophys. Space Phys., 9, 839-890, 1971.

2.  

W. Whiston, translator, The New Complete Works of Josephus, Kregel Publications, Grand Rapids, 1143 pp., 1999.

3.  

The Aristotelian theory is the root of folkloric notions about “earthquake weather.” In describing a series of earthquakes felt in London during 1750, Stephen Hales, a preacher, scientist, and follower of Isaac Newton, echoed Aristotle: “We find in the late earthquakes in London, that before they happen there is usually a calm air with a black sulfurous cloud which would probably be dispersed like a fog if there were a wind; which dispersion would prevent the earthquake which is probably caused by the explosive lightning of this sulfurous cloud; being both near the Earth and coming at a time when sulfurous vapors are rising from the Earth in greater quantity than usual which is often occasioned by a long period of hot and dry weather. Ascending sulfurous vapors in the Earth may probably take fire, and thereby cause Earth lightning which is first kindled at the surface and not at great depths as has been thought whose explosion is the immediate cause of an earthquake.”

4.  

R. Mallet, Neapolitan Earthquake of 1857. The First Principles of Observational Seismology, Chapman and Hall, London, 2 vols., 831 pp., 1862. He also introduced the term hypocenter for the focus of the earthquake, which he presumed was a volcanic explosion, and deduced its location from the observed directions of ground motions, assumed to be excited by pure compressional waves. Despite the crudeness of his method, his estimate of the focal depth, about 10 kilometers, was probably not far off.

5.  

Much earlier than Lyell’s text was the Book of Zachariah (14:4-5), which details a future scenario for a surface-faulting earthquake: “And his feet shall stand on that day upon the Mount of Olives, which is before Jerusalem on the east. And the Mount of Olives shall cleave in the midst thereof towards the east and towards the west. And there shall be a great valley and half of the mountain shall remove towards the north and half of it towards the south. And ye shall flee to the valley of the mountain as ye fled from before the earthquake in the days of Uzziah, King of Juda.”

6.  

Darwin’s observations were not actually new. In a report to the London Geographical Society (An account of some effects of the late earthquakes in Chili: Extracted from a letter to Henry Warburton, Trans. Geol. Soc. London, Ser. 2, 1, 413-415, 1824), Maria Graham, an English travel writer, documented coastal uplift during an earlier earthquake near Valparaiso, Chile, in 1822: “I found the ancient bed of the sea laid bare and dry, with beds of oysters, mussels, and other shells adhering to the rocks on which they grew, the fish all being dead, and exhaling the most offensive effluvia.”

7.  

Although Gilbert emphasized the normal component of faulting, it is now recognized that the 1872 event included a significant component of strike-slip.

8.  

G.K. Gilbert, Lake Bonneville, U.S. Geological Survey Monograph 1, U.S. Government Printing Office, Washington, D.C., 340 pp., 1890.

9.  

R.D. Oldham, Report on the Great Earthquake of the 12th June, 1897, Geological Survey of India, Memoir 29, Calcutta, 379 pp., 1899; C.S. Middlemiss, The Kangra Earthquake of 4th April, 1905, Geological Survey of India, Memoir 37, Calcutta, 409 pp., 1910.

Middlemiss’ data set was good enough that it could be used eight decades later to model this event as a blind thrust (R. Chander, Interpretation of observed ground level changes due to the 1905 Kangra earthquake, northern Himalaya, Tectonophysics, 149, 289-298, 1988).

10.  

During the first half of the nineteenth century, most geologists viewed vertical uplift by magmatic processes as the main cause of mountain building. The importance of horizontal compression was recognized in the context of Appalachian tectonics by W.B. Rogers and H.D. Rogers (On the physical structure of the Appalachian chain, as exemplifying the laws which have regulated the elevation of great mountain chains, generally, Assoc. Am. Geol. Rep., 1, 474-531, 1843) and championed by the supporters of Élie de Beaumont’s theory (1829) that the Earth was cooling and therefore contracting. The latter included the great Austrian geologist, Eduard Suess, whose five-volume treatise Das Antlitz der Erde (The Face of the Earth) (Freytag, Leipzig, 158 pp., 1909) synthesized global tectonics in terms of the contraction hypothesis.

11.  

E.M. Anderson, Dynamics of faulting, Trans. Geol. Soc. Edinburgh, 8, 387-402, 1905. He further developed his ideas in a monograph The Dynamics of Faulting and Dyke Formation with Application to Britain (2nd ed., Oliver & Boyd, Edinburgh, 206 pp., 1951).

12.  

M.K. Hubbert and W.W. Rubey, Mechanics of fluid-filled porous solids and its application to overthrust faulting, 1: Role of fluid pressure in mechanics of overthrust faulting, Geol. Soc. Am. Bull., 70, 115-166, 1959. In soil mechanics, the use of effective normal stress in the Coulomb criterion is sometimes called Terzaghi’s principle, after the engineer who first articulated the concept (K. Terzaghi, Stress conditions for the failure of saturated concrete and rock, Proc. Am. Soc. Test. Mat., 45, 777-792, 1945). The historical development of the mechanical theory of faulting has been summarized by M.K. Hubbert in Mechanical Behavior of Crustal Rocks: the Handin Volume (N.L. Carter, M. Friedman, J.M. Logan, and D.W. Sterns, eds., Geophys. Mono. 24, American Geophysical Union, Washington, D.C., pp. 1-9, 1981).

13.  

The hydrostatic pressure at depth h is the pressure of a water column that deep, whereas the lithostatic pressure is the full weight of the overlying rocks; the latter is greater than the former by the rock-to-water density ratio, a factor of about 2.7.

14.  

State Earthquake Investigation Commission, The California Earthquake of April 18, 1906, Publication 87, vol. I, Carnegie Institution of Washington, 451 pp., 1908, and vol. II, with Atlas (by H.F. Reid), 192 pp., 1910; reprinted 1969. The Lawson Commission submitted a preliminary report almost immediately, on May 31, 1906, but no state or federal funds were available to continue the investigation, so that most of the research following the event had to be underwritten by a private organization, the Carnegie Institution of Washington.

15.  

The correlation between earthquake damage and “made ground” was noted 38 years before the 1906 earthquake when San Francisco’s financial district was badly damaged in the 1868 Hayward earthquake. The 1906 quake caused extensive damage to the same area.

16.  

The State Earthquake Investigation Commission reports on the 1906 earthquake have been the principal source of data for the study of strong ground motions by D.M. Boore (Strong-motion recordings of the California earthquake of April 18, 1906, Bull. Seis. Soc. Am., 67, 561-577, 1977), the reconstruction of the space-time sequence of rupture by D.J. Wald, H. Kanamori, D.V. Helmberger, and T.H. Heaton (Source study of the 1906 San Francisco earthquake, Bull. Seis. Soc. Am., 83, 981-1019, 1993), and the recent reinterpretation of the geodetic measurements by W. Thatcher, G. Marshall, and M. Lisowski (Resolution of fault slip along the 470-kilometer-long rupture of the great 1906 San Francisco earthquake and its implications, J. Geophys. Res., 102, 5353-5367, 1997). These studies, which applied state-of-the-art techniques to old data, form the basis for the reconstruction of the faulting events outlined in Box 2.2.


17.  

H.F. Reid, The elastic-rebound theory of earthquakes, Univ. Calif. Pub. Bull. Dept. Geol. Sci., 6, 413-444, 1911. This paper was the transcription of the first of the Hitchcock Lectures delivered at the University of California, Berkeley, in the spring of 1911.

18.  

For example, the word “strength” in the first proposition would be modified to “frictional strength.” Reid clearly understood that the strength of faults was governed by the friction across fault surfaces, rather than the strength of intact rocks. In his 1911 paper, he discussed the role of friction in experiments on elastic rebound using jelly sheets (p. 422), recognizing that in nature “the surface rocks on opposite sides of the fault are not identical as is the jelly” (p. 430); he considered the role of fault friction in the generation of slip irregularities that cause strong ground motions (pp. 435-436), and in discussing slickensides on the limbs of folds, he states, “It seems quite certain that, as the rocks were being folded by horizontal pressure, the friction would at first prevent any such slipping of the strata; but as the elastic forces become stronger, slipping would occur suddenly with an elastic rebound of the adjacent strata, which would constitute an earthquake” (p. 437).

19.  

Although Reid’s formulation of the elastic rebound hypothesis preceded plate tectonics by more than 50 years, notions about the horizontal mobility of the Earth’s crust were already in the air. Reid sought to provide some geological mechanism for the “slow displacements” required by his theory in a footnote on p. 28 of his report: “Mr. Baily Willis, on account of the forms of the mountain ranges bordering the Pacific Ocean, has concluded that the bed of the ocean is spreading and crowding against the land. He thinks in particular that there is a general sub-surface flow towards the north which would produce strains and earthquakes along the western coast of North America.”

20.  

The story of the 1906 San Francisco earthquake presented here is based on modern reconstructions that have been updated using geophysical analysis techniques and geological knowledge that were unavailable at the time. An example is the estimation of the event’s nucleation point (hypocenter) and origin time. H.F. Reid’s original determination, given in his 1910 report, used local, imprecise estimates of the beginning of shaking to fix the origin time at 13:12:28 GMT and place the hypocenter at a depth of about 20 kilometers between the town of Olema and the southern end of Tomales Bay (i.e., north and west of San Francisco). Fortunately, this event was one of the first large earthquakes to be recorded by a global network of continuously recording seismometers. The network was sparse and poorly distributed by today’s standards, but the State Earthquake Investigation Commission was able to collect copies of seismograms or arrival times of seismic waves from 96 observatories, 83 of which were outside the conterminous United States. In 1906, the structure of the Earth’s interior was still too poorly known to predict the travel times of distant seismic waves, which is why Reid did not use them in his estimates. However, from an analysis of the teleseismic and local records reproduced in Reid’s report, B.A. Bolt (The focus of the 1906 California earthquake, Bull. Seis. Soc. Am., 58, 457-471, 1968) found that the epicenter was similar to that of the small March 22, 1957, earthquake (37.67°N, 122.48°W), from which he obtained an origin time of 13:12:21 GMT. This position, near the point on the San Francisco peninsula where the San Andreas goes out to sea, is consistent with further studies of the archived seismograms.

21.  

The Milne seismograph grew out of a collaboration of British scientists (Ewing, Gray, and Milne) in Japan in the early 1880s. They solved the problem of how to obtain a long-period response from a physically short pendulum by inclining it so that the gravitational restoring force is reduced. The Milne seismographs used a horizontal bracket pendulum to attain a period of about 12 seconds, and they were recorded photographically. The early history of seismometry is discussed by J. Dewey and P. Byerly (The early history of seismometry—Up to 1900, Bull. Seis. Soc. Am., 59, 183-227, 1969).

22.  

The Jesuits established seismographic stations at their educational institutions in Europe, North and South America, Asia, Africa, and Australia, instrumenting them first with seismoscopes (the earliest dates back to 1868 in Manila) and later with successively improved types of seismographs. In particular, the first standardized network of seismographic stations in North America was deployed in 1908-1911 by the newly formed Jesuit Seismological Service. Each of the fifteen U.S. and one Canadian stations was equipped with a Wiechert inverted-pendulum seismograph with an 80-kilogram mass, stabilized by springs and free to oscillate in any horizontal direction. The history of Jesuit seismology is outlined by A. Udías and W. Stauder (The Jesuit contribution to seismology, Seis. Res. Lett., 67, 10-19, 1996).

23.  

R.D. Oldham, The constitution of the Earth, Quart. J. Geol. Soc. London, 62, 456-475, 1906. Prior to Oldham’s analysis, there was considerable confusion over the identification of shear and surface waves. It was later established that there are two basic types of surface waves, those with retrograde-elliptical particle motions in the vertical plane containing the source and receiver (LR or Rayleigh waves) and those with motions transverse to this plane (LQ or Love waves).

24.  

Although Milne’s simple graphical method for locating earthquake epicenters was routinely employed by seismological observatories for many years, numerical methods were also formulated. In 1912, L. Geiger (Probability method for the determination of earthquake epicenters from the arrival time only, Bull. St. Louis Univ., 8, 60-71) applied the Gauss-Newton method to the iterative, least-squares solution of the nonlinear equations relating the space-time location parameters to the arrival times of seismic waves, and in the 1930s, Jeffreys showed how the least-squares normal equations could be solved efficiently by successive approximations. The first implementations of Geiger’s method on electronic computers were made in 1960 (B. Bolt, The revision of earthquake epicentres, focal depths and origin-times using a high-speed computer, Geophys. J. R. Astron. Soc., 3, 433-440, 1960; E.A. Flinn, Local earthquake location with an electronic computer, Bull. Seis. Soc. Am., 50, 467-470, 1960), and the International Seismological Summary adapted Bolt’s code for routine location in 1961.
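Geiger’s procedure can be illustrated with a minimal numerical sketch. The Python fragment below assumes a uniform-velocity half-space and hypothetical station coordinates (neither taken from the studies cited above); it linearizes the arrival-time equations about a trial hypocenter and applies the Gauss-Newton update by least squares.

```python
import numpy as np

def locate(stations, t_obs, v=6.0, x0=(0.0, 0.0, 10.0, 0.0), n_iter=10):
    """Geiger-style location in a uniform half-space (illustrative only).

    stations : (N, 3) station coordinates in kilometers
    t_obs    : (N,) observed P arrival times in seconds
    v        : assumed constant P velocity, km/s
    x0       : starting guess (x, y, z, origin time)
    """
    m = np.array(x0, dtype=float)
    for _ in range(n_iter):
        dx = stations - m[:3]               # vectors from trial hypocenter to stations
        r = np.linalg.norm(dx, axis=1)      # hypocentral distances
        res = t_obs - (m[3] + r / v)        # arrival-time residuals
        # Jacobian of predicted times with respect to (x, y, z, origin time)
        G = np.hstack([-dx / (v * r[:, None]), np.ones((len(r), 1))])
        dm, *_ = np.linalg.lstsq(G, res, rcond=None)
        m += dm                             # Gauss-Newton step
    return m

# Hypothetical test: four surface stations and exact arrival times from a known source
stations = np.array([[0, 50, 0], [40, -30, 0], [-60, 10, 0], [20, 80, 0]], float)
true = np.array([5.0, 10.0, 12.0, 0.0])
t_obs = true[3] + np.linalg.norm(stations - true[:3], axis=1) / 6.0
print(locate(stations, t_obs))              # recovers (x, y, z, origin time)
```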

25.  

The collection and analysis of earthquake arrival times was facilitated by the establishment of the International Association of Seismology in 1905. Data reporting from the early networks was centralized at the association’s international central bureau, established in Strasbourg in 1906. The distribution of seismological data was standardized through Turner’s publication of the International Seismological Summary, which began in 1923.

26.  

H. Jeffreys and K.E. Bullen, Seismological Tables, British Association for the Advancement of Science, London, 50 pp., 1940. These tables were derived from a model describing the variation of the seismic compressional-wave and shear-wave velocities as a function of radius. Although superseded by more precise estimates of radial structure, the Jeffreys-Bullen (J-B) model is still employed to locate earthquakes by the International Seismological Centre, a testament to its remarkable success.

27.  

The Rossi-Forel scale assigned intensity values from I to X, based on commonly observable effects of the shaking at a particular point. These range from “barely perceptible to an experienced observer” (I) through “felt generally by everyone” (V) to “great disaster” (X). The Modified Mercalli Scale of 1931, developed for building and social conditions of California by H.O. Wood and F. Neumann (Modified Mercalli Intensity Scale of 1931, Bull. Seis. Soc. Am., 21, 277-283, 1931) and close to the one still in use today, comprises 12 grades; the following are abbreviated descriptions of the higher levels: X. “Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations; ground badly cracked. Rails bent. Landslides considerable from river banks and steep slopes. Shifted sand and mud. Water splashed, slopped over banks.” XI. “Few, if any masonry structures remain standing. Bridges destroyed. Broad fissures in ground. Underground pipelines completely out of service. Earth slumps and land slips in soft ground. Rails bent greatly.” XII. “Damage total. Waves seen on ground surface. Lines of sight and level distorted. Objects thrown in air.” Regions of constant intensity mapped by averaging over a geographic distribution of local values are separated by contours called isoseismals.

28.  

C.F. Richter, An instrumental earthquake magnitude scale, Bull. Seis. Soc. Am., 25, 1-32, 1935. At the time of this study, Caltech operated a southern California network that included seven stations with standardized Wood-Anderson seismometers. The active element in the standard instrument was a small copper mass suspended on a vertical torsion fiber with a free period of 0.8 second; its rotation was magnetically damped (damping constant of 0.8) and photographically recorded by a light beam reflected from a mirror mounted on the mass. The apparatus was sensitive only to horizontal ground motions, which it recorded with a nominal static magnification of 2800.

29.  

To calibrate how A0, the amplitude of the magnitude-zero reference earthquake, decreased with distance from the epicenter, Richter used the data for 11 earthquakes recorded by these stations in the month of January 1932 (see Figure 22-2 of his textbook, Elementary Seismology, W.H. Freeman, San Francisco, 768 pp., 1958). He then tested the results on a set of 21 well-located local earthquakes during 1929-1931. Correcting the observations for the instrument-specific average residuals did not improve his original calibration curve, which remains the standard for southern California.
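Richter’s construction can be restated as a worked number (a standard formulation, not a quotation of the report’s equations): the local magnitude is the logarithm of the observed Wood-Anderson amplitude A relative to the amplitude A_0 of the magnitude-zero reference event at the same epicentral distance, and the calibration fixes -log_10 A_0 = 3.0 at 100 kilometers. Thus

\[
M_L = \log_{10} A - \log_{10} A_0(\Delta), \qquad
M_L = \log_{10}(10\ \mathrm{mm}) + 3.0 = 4.0
\]

for a 10-millimeter trace amplitude recorded at 100 kilometers.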

30.  

The magnitude scale is open ended; it places no limits on the minimum or maximum sizes of earthquakes. Richter’s choice of A0, though arbitrary, ensured that essentially all shocks recorded by the southern California network would have positive magnitudes and that most events locatable by this network would have magnitudes greater than 3. Beginning with T. Asada’s work in Japan in the late 1940s, networks of instruments with higher sensitivity have been set up in some seismically active regions to locate microearthquakes, defined to be those with Richter magnitudes less than 3 (W.H.K. Lee and S.W. Stewart, Principles and Applications of Microearthquake Networks, Academic Press, New York, 293 pp., 1981).

31.  

The relationships given in Chapter 22 of Richter’s textbook (op. cit.) are m_b = 2.5 + 0.63M_S = 1.7 + 0.8M_L – 0.01M_L^2. The values of m_b and M_S agree at magnitude 6.75; above this value, m_b < M_S, and below it, m_b > M_S. The former inequality arises because the m_b scale uses shorter-period waves than M_S (~5 seconds versus ~20 seconds) and thus saturates at a smaller magnitude.
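The quoted crossover follows directly from the first relation; a short calculation (a sketch using only the coefficients given in this note) reproduces it and the sense of the inequalities:

```python
# Coefficients are those quoted in this note; nothing else is assumed.
mb_from_MS = lambda MS: 2.5 + 0.63 * MS
mb_from_ML = lambda ML: 1.7 + 0.8 * ML - 0.01 * ML**2

# Crossover where m_b = M_S: M_S = 2.5 + 0.63*M_S  =>  M_S = 2.5 / 0.37
print(2.5 / 0.37)          # ~6.76, the agreement near magnitude 6.75
print(mb_from_MS(8.0))     # 7.54 < 8.0: m_b < M_S above the crossover
print(mb_from_MS(5.0))     # 5.65 > 5.0: m_b > M_S below the crossover
print(mb_from_ML(6.0))     # 6.14: m_b from the local-magnitude relation
```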

32.  

Trinity was recorded on Caltech’s permanent array of seismographs, as reported by Gutenberg in the open literature (Interpretation of records obtained from the New Mexico atomic bomb test, July 16, 1945, Bull. Seis. Soc. Am., 36, 327-330, 1946). The equipment to measure the time at the shot point failed, so that his seismologically determined origin time (12:29:12 GMT, July 16, 1945) was adopted by the popular press as the official beginning of the Atomic Age.

33.  

B. Gutenberg and C.F. Richter, Earthquake magnitude, intensity, energy, and acceleration, Bull. Seis. Soc. Am., 46, 105-145, 1956. They had initially estimated that the energy transmitted as seismic waves from the Baker test was about 10^15 joules; however, this value exceeded the total explosive yield of the device and was among the data that forced a revision of their original magnitude-energy formula. They used body-wave rather than surface-wave magnitude to measure explosion size, because explosions are relatively inefficient in exciting surface waves. This empirical observation was later shown to provide one of the better methods for discriminating underground nuclear explosions from earthquakes.

34.  

Multinational discussions to ban the testing of nuclear weapons were begun in 1958 in Geneva, where a Conference of Experts attempted to outline the requirements for treaty verification by seismological and other means. It was agreed that existing techniques were adequate for identifying nuclear explosions on the surface and in the atmosphere, but the prospects for detecting small underground nuclear explosions and discriminating them from earthquakes proved controversial, with the U.S. and U.S.S.R. delegations split on the reliability of existing techniques. Following these negotiations, the United States convened a Panel on Seismic Improvement under the chairmanship of Lloyd V. Berkner. Noting that “the annual budget in the United States from all sources of seismological research amounts to roughly several hundred thousand dollars,” the panel recommended a research program of $53 million for the first two years, including the establishment of a worldwide seismic detection system (The Need for Fundamental Research in Seismology, Panel on Seismic Improvement, Department of State, Washington, D.C., 212 pp., 1959). The recommendations of the Berkner Panel led to Project Vela, begun by the Department of Defense Advanced Research Projects Agency (ARPA) in 1959, and to the establishment of the World Wide Standardized Seismographic Network (WWSSN) in 1961. Between 1960 and 1971, about $245 million were expended by Vela Uniform, the underground explosion component of Project Vela. The history of this period is detailed by B. Bolt in Nuclear Explosions and Earthquakes: The Parted Veil (W.H. Freeman, San Francisco, 309 pp., 1976).

35.  

B. Gutenberg and C.F. Richter, Seismicity of the Earth, 2nd Edition, Princeton University Press, Princeton, N.J., 310 pp., 1954. Their first paper was published under the same title as Special Paper 34 of the Geological Society of America (1941).

36.  

B. Gutenberg and C.F. Richter, Magnitude and energy of earthquakes, Ann. Geofisica, 9, 1-15, 1956. Prior to Gutenberg and Richter’s publication of this equation, M. Ishimoto and K. Iida (Seismological observation by tremometer, 1. Magnitude and distribution pattern, Bull. Earthquake Res. Inst. Tokyo Univ., 17, 443-478, 1939) had discovered that the maximum trace amplitude A for Japanese earthquakes at approximately equal focal distances is related to their frequency of occurrence n by A^m n = k, which is equivalent to the Gutenberg-Richter relation for m = b + 1; they estimated m to be about 1.7.
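The equivalence follows from a change of variables, sketched here under the usual assumption that magnitude scales with the logarithm of trace amplitude at a fixed distance, M = log_10 A + C:

\[
n(A)\,dA = k A^{-m}\,dA, \qquad A = 10^{\,M - C}
\;\;\Rightarrow\;\;
n(M)\,dM \propto 10^{-(m-1)M}\,dM,
\]

so the exponent of the Gutenberg-Richter frequency-magnitude relation, n(M) ∝ 10^(-bM), is b = m - 1, i.e., m = b + 1.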

37.  

A pure power-law distribution contains no information about characteristic values of lengths or energies and is consistent with the notion that earthquake dynamics is scale invariant. The power-law distribution of frequency versus size must obviously break down for very large earthquakes, because fault dimensions are finite and all realistic friction laws imply a minimum scale for dynamic nucleation, but the role of these inner and outer scales in earthquake mechanics remains controversial.

38.  

In their 1954 book, Gutenberg and Richter applied an energy-magnitude formula that gave a total seismic energy rate of about 10^27 ergs per year, which was only about a factor of six less than Bullard’s (Thermal history of the Earth, Nature, 156, 35-36, 1945) estimate of total terrestrial heat flow. Two years later Gutenberg used the improved formula (Equation 2.3) to revise this number downward by two orders of magnitude.
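The size of this revision can be checked against the commonly quoted Gutenberg-Richter energy relation, log_10 E = 11.8 + 1.5 M_S with E in ergs (presumably the improved formula referred to here; Equation 2.3 in the main text may be written in different units):

\[
E(M_S = 8.5) \approx 10^{\,11.8 + 1.5 \times 8.5}\ \mathrm{ergs} \approx 4\times 10^{24}\ \mathrm{ergs},
\]

so even several great earthquakes per decade imply a global rate of order 10^25 ergs per year, two orders of magnitude below the earlier figure of 10^27 ergs per year.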

39.  

The static theory of “distorsioni” in elastic media was worked out by V. Volterra and C. Somigliana about the same time as Reid formulated his elastic-rebound hypothesis, and the modern English usage of dislocations to describe these planar discontinuities began with A.E.H. Love in the second edition of his Treatise on the Mathematical Theory of Elasticity, Cambridge University Press, Cambridge, U.K., 643 pp., 1906. However, their connection to the Reid hypothesis was not formalized as a quantitative tool until a half-century later, when the static solution for a finite Volterra dislocation in a semi-infinite elastic medium became available (J.A. Steketee, Some geophysical applications of the elasticity theory of dislocations, Canadian J. Phys., 36, 192-205, 1958; J.A. Steketee, On Volterra’s dislocations in a semi-infinite elastic medium, Canadian J. Phys., 36, 1168-1198, 1958; M.A. Chinnery, The deformation of the ground around surface faults, Bull. Seis. Soc. Am., 51, 355-372, 1961).

40.  

Fusakichi Omori observed consistent patterns in the first motions of earthquakes in the early part of the twentieth century, and T. Shida noted the division of compressions and dilatations into quadrants for the May 18, 1917, Shizuoka earthquake.

41.  

H. Nakano, Notes on the nature of the forces which give rise to the earthquake motions, Seis. Bull. Centr. Met. Obs. Japan, 1, 92-120, 1923. Nakano’s method is summarized by P.A. Byerly, I. Mei, and C. Romney (Dependence on azimuth of the amplitudes of P and PP, Bull. Seis. Soc. Am., 39, 269-284, 1949).

42.  

An influential work in promoting this understanding was by R. Burridge and L. Knopoff, Body force equivalents of seismic dislocations, Bull. Seis. Soc. Am., 54, 1875-1888, 1964. They developed a general theory for earthquake radiation by proving that a time-dependent Volterra dislocation on an arbitrary surface in a heterogeneous, anisotropic medium is equivalent to a surface distribution of double couples. The equivalence between a pointwise dislocation and double couple had been realized by a number of previous authors, including F.R.N. Nabarro (1951), V. Vvedenskaya (1956), and Knopoff and F. Gilbert (1960). Essentially the same results had been published by T. Maruyama (On force equivalents of dynamic elastic dislocations with reference to the earthquake mechanism, Bull. Earthquake Res. Inst. Tokyo, 41, 467-486, 1963). Prior to the Burridge-Knopoff paper, however, many seismologists maintained the intuitive but incorrect notion that the single displacement couple of an elementary dislocation must correspond to a single force couple. Exceptions included H. Honda and his colleagues in Japan, who plotted the initial motions and amplitudes of both P and S waves on the focal sphere; they showed that the S-wave patterns for deep Japanese earthquakes were not consistent with single-couple mechanisms, which required an S-wave node in the plane of faulting, but rather with double couples, which did not (H. Honda, Earthquake mechanism and seismic waves, J. Phys. Earth, 10, 1-98, 1962). With hindsight, the solution to this controversy seems obvious: both a single couple and a double couple impart no net linear momentum, but only the latter conserves angular momentum, as required for an indigenous seismic source.

43.  

M.A. Chinnery, The deformation of the ground around surface faults, Bull. Seis. Soc. Am., 51, 355-372, 1961.

44.  

Wegener elaborated his continental-drift theory in his 1915 book Die Entstehung der Kontinente und Ozeane, written while recovering from war wounds. An expanded version was published in 1920 and a third edition in 1922, which was translated into English, French, Russian, and Spanish; the fourth edition was issued in 1929, the year before his death during an expedition on the Greenland icecap.

45.  

Wegener was optimistic that continental drift could be observed directly on trans-Atlantic profiles by geodetic methods: “Compared with all other theories of similarly wide scope, drift theory has the great advantage that it can be tested by accurate astronomical position-finding. If continental displacement was operative for so long a time, it is probable that the process is still continuing, and it is just a question of whether the rate of movement is enough to be revealed by our astronomical measurements in a reasonable period of time.” He believed it was, because he surmised (incorrectly) that the youngest glacial moraines in Greenland and Europe had been connected, which implied that the North Atlantic had opened at rates of 10-30 meters per year. He persuaded his colleague, J.P. Koch, to compare the astronomical measurements of longitude from the expeditions they had made to Greenland in 1912-1913 and 1906-1908 with earlier determinations in 1823 and 1870; the shift in longitude was of the right order and in the right direction. He considered his theory confirmed when further measurements by the Danish Survey Organization in 1922 and 1927 yielded a drift rate of 36 ± 4 meters per year. Unfortunately for Wegener, the quoted uncertainties did not account for unappreciated sources of bias, and this value turned out to be about 1000 times too large. Continental drift was not measured by geodetic techniques until the development of ultraprecise (±1 centimeter) Very Long Baseline Interferometry more than 50 years after his death (T.A. Herring, I.I. Shapiro, T.A. Clark, C. Ma, J.W. Ryan, B.R. Schupler, C.A. Knight, G. Lundqvist, D.B. Shaffer, N.R. Vandenberg, B.E. Corey, H.F. Hinteregger, A.E.E. Rogers, J.C. Webber, A.R. Whitney, G. Elgered, B.O. Ronnang, and J.L. Davis, Geodesy by radio interferometry: Evidence for contemporary plate motion, J. Geophys. Res., 91, 8341-8347, 1986).

46.  

W.A.J.M. van Waterschoot van der Gracht, ed., Theory of Continental Drift, A Symposium, American Association of Petroleum Geologists, Tulsa, 240 pp., 1928.

47.  

M. Ewing and B.C. Heezen, Some problems of Antarctic submarine geology, in Antarctica in the International Geophysical Year, A. Carey, L.M. Gould, E.O. Hulburt, H. Odishaw, and W.E. Smith, eds., American Geophysical Union Monograph 1, 75-81, 1956. The first published study of mid-ocean seismicity was by E. Tams (Die seismischen Verhältnisse des offenen Atlantischen Ozeans [The seismic conditions of the open Atlantic Ocean], Zeitschr. Geophys., 3, 361-363, 1927).

48.  

H.H. Hess, History of ocean basins, in Petrologic Studies: A Volume in Honor of A.F. Buddington, A.E.J. Engel, H.L. James, and B.F. Leonard, eds., Geological Society of America, pp. 599-620, 1962; R.S. Dietz, Continent and ocean basin evolution by spreading of the sea floor, Nature, 190, 854-857, 1961. An excellent insider’s account of the postwar oceanographic expeditions that led to the discovery of seafloor spreading and plate tectonics can be found in H.W. Menard’s historical memoir, The Ocean of Truth (Princeton University Press, Princeton, 353 pp., 1986).

49.  

Reprints of the original papers and historical commentaries on the confirmation of seafloor spreading by marine magnetic data can be found in Plate Tectonics and Geomagnetic Reversals by A. Cox (W.H. Freeman, San Francisco, 702 pp., 1973).

50.  

M.L. Hill and T.W. Dibblee, Jr., San Andreas, Garlock and Big Pine faults, California—A study of their character, history and tectonic significance of their displacements, Geol. Soc. Am. Bull., 64, 443-458, 1953; H.W. Wellman, Structural outline of New Zealand, Bull. N.Z. Dept. Sci. Indust. Res., 121, 35 pp. + map, 1956.

51.  

H.W. Menard and his colleagues at the Scripps Institution of Oceanography mapped a regularly spaced set of very long (>3000 kilometers), linear fracture zones in the eastern Pacific Ocean; the first discovered was the Mendocino Fracture Zone, which intersects the San Andreas system at Cape Mendocino, California (H.W. Menard and R.S. Dietz, Mendocino submarine escarpment, J. Geol., 60, 266-278, 1952). A decade later, V. Vacquier, A.D. Raff, and R.E. Warren (Horizontal displacements in the floor of the northeastern Pacific Ocean, Geol. Soc. Am. Bull., 72, 1251-1258, 1961) recognized a left-lateral offset of 1170 kilometers in the magnetic anomaly patterns across the Mendocino Fracture Zone.

52.  

J.T. Wilson, A new class of faults and their bearing on continental drift, Nature, 207, 343-347, 1965. In the opening paragraphs of this paper, Wilson first used the term plate in its modern form: “Many geologists have maintained that movements of the Earth’s crust are concentrated in mobile belts, which may take the form of mountains, mid-ocean ridges or major faults with large horizontal movements. These features and the seismic activity along them often appear to end abruptly, which is puzzling…. This article suggests that these features are not isolated, that few come to dead ends, but that they are connected into a continuous network of mobile belts about the Earth which divide the surface into several large rigid plates.” He defined a transform as the juncture where one type of mobile belt (plate boundary) changes into another and a transform fault as a strike-slip fault terminated by transforms, which can be contrasted with a transcurrent fault that ends at points unconnected to other zones of deformation. He cataloged the types of transform faults that can connect two segments of oceanic ridges (spreading centers), two segments of island/mountain arcs (subduction zones), or one of each type.

53.  

Motivated by the Berkner panel, ARPA prepared a plan to upgrade and expand the global network of permanent seismic observatories in 1960. The execution of the plan was assigned to the U.S. Coast and Geodetic Survey (USCGS), which held the operational responsibility for earthquake monitoring within the federal government. The new instrumentation comprised three Benioff short-period seismometers (1-second free period), three Press-Ewing long-period seismometers (15- or 30-second free period, 90-second galvanometer period), the photographic drum recording apparatus, and a radio-synchronized crystal clock. Installations began in 1961, reaching a total of about 120 stations in 60 countries by 1967; because some of the expenses were borne by the host countries, the total cost to Project Vela Uniform was only about $10 million.

54.  

The possibility that the more mobile, interior layers of the Earth are convecting was discussed as early as 1839 by W. Hopkins (Researches in physical geology, Phil. Trans. Roy. Soc. Lond., 129, 381-423, 1839). The convection hypothesis was proposed as the driving mechanism for mountain building by A. Holmes (Geophysics—The thermal history of the Earth, J. Wash. Acad. Sci., 23, 169-195, 1933) and F.A. Vening Meinesz (The mechanism of mountain formation in geosynclinal belts, Proceedings of the Section of Sciences Koninklijke Nederlandse Akademie van Wetenschappen, 36, 372-377, 1933), and the theory of mantle convection was further developed in the 1930s by C. Pekeris, A. Hales, and D. Griggs.

55.  

D. Griggs, A theory of mountain building, Am. J. Sci., 237, 611-650, 1939.

56.  

The pre-plate-tectonics thinking among the mobilists was documented in the proceedings of a 1950 colloquium in Hershey, Pennsylvania, on “Plastic Flow and Deformation Within the Earth” (B. Gutenberg, H. Benioff, J. M. Burgers, and D. Griggs, Eos, Trans. Am. Geophys. Union, 32, 497-543, 1951).

57.  

B.C. Heezen, The rift in the ocean floor, Sci. Am., 203, 98-110, 1960. Earth expansion was first proposed as a mechanism for continental drift in the 1920s (see A.A. Meyerhoff, Arthur Holmes: Originator of spreading ocean floor hypothesis, J. Geophys. Res., 73, 6563-6565, 1968, for references), and the hypothesis was revitalized in the geological studies of L. Egyed (The change of the Earth’s dimensions determined from paleogeographical data, Geof. Pura Appl., 33, 42-48, 1956) and S.W. Carey (The tectonic approach to continental drift, in Continental Drift, A Symposium, University of Tasmania, Hobart, pp. 177-355, 1958). Its popularity was broadened considerably by the recognition that Earth expansion might be the observable consequence of a secular decrease in Newton’s universal gravitational constant, as predicted by a class of cosmological theories that attempted to explain the Hubble redshift without a Big Bang expansion of the universe (R.H. Dicke, Principle of equivalence and the weak interactions, Rev. Mod. Phys., 29, 355-362, 1957).

58.  

Subduction, an old term employed by Alpine geologists, was not widely used to describe the sinking of the lithospheric slabs beneath volcanic arcs until it was reintroduced by D. Roeder at a Penrose conference on “The Meaning of the New Global Tectonics for Magmatism, Sedimentation, and Metamorphism in Orogenic Belts” in December 1969 (W.R. Dickinson, Global tectonics, Science, 168, 1250-1259, 1970).

59.  

Benioff’s major papers on the subject were published in the same years as the two editions of Gutenberg and Richter’s Seismicity of the Earth (1949, 1954), and he drew heavily from their catalogs to support his reverse-faulting hypothesis.

60.  

R.R. Coats, in Crust of the Pacific Basin, G.A. Macdonald and H. Kuno, eds., American Geophysical Union Monograph 6, Washington, D.C., pp. 92-109, 1962.

61.  

When fault-plane solutions became more common in the 1950s, the results were often in poor agreement with other information about fault orientations and slip directions. J.H. Hodgson of the Dominion Observatory in Canada concluded, for example, that much of the circum-Pacific faulting was strike-slip, in direct conflict with Benioff’s hypothesis. In his 1954 paper, Benioff discussed the directions of earthquake slip vectors inferred from first-motion radiation patterns, citing studies by Dutch and Japanese seismologists in support of his reverse-faulting hypothesis, but he was clearly skeptical of their reliability. He noted, for example, that Hodgson’s strike-slip solution for the Peru earthquake of November 10, 1946, was contradicted by the field observations of E. Silgado, which indicated dip-slip faulting. The purported dominance of strike-slip faulting in the circum-Pacific region was an incorrect inference by North American seismologists, who employed graphical methods based on Perry Byerly’s “extended station distances” that were inferior to the focal-sphere projections developed by the Dutch and Japanese. The confusion of this period is evident in the proceedings of two symposia convened by Hodgson on focal-mechanism studies (The Mechanics of Faulting, With Special Reference to the Fault-Plane Work, Pub. Dominion Obs. Ottawa, 20, 215-418, 1957; A Symposium on Earthquake Mechanism, Pub. Dominion Obs. Ottawa, 24, 299-397, 1960).

62.  

Early free-oscillation studies of this earthquake were done using strainmeters and specialized ultralow-frequency seismometers by A.A. Nowroozi (Eigenvibrations of the earth after the Alaskan earthquake, J. Geophys. Res., 70, 5145-5156, 1965) and S.W. Smith (Free oscillations excited by the Alaskan earthquake, J. Geophys. Res., 71, 1183-1193, 1966). A major advance in this subject came from the laborious digitization of the analog, long-period records of the 1964 event from more than 100 WWSSN stations, which allowed A.M. Dziewonski and F. Gilbert (Observations of normal modes from recordings of the Alaskan earthquake 1964, Geophys. J. R. Astron. Soc., 27, 393-446, 1972; ibid., Observations of normal modes from recordings of the Alaskan earthquake 1964, 2, Geophys. J. R. Astron. Soc., 35, 401-437, 1973) to apply stacking techniques to measure a large set of free-oscillation eigenfrequencies and use these data to derive improved models of Earth structure.

63.  

F. Press, Displacements, strains, and tilts at teleseismic distances, J. Geophys. Res., 70, 2395-2412, 1965. Press used a strainmeter developed by Benioff in the 1930s, which measured the change in distance between two piers anchored to the ground about 10 meters apart. A rigid quartz rod attached to one pier was extended almost to the second pier, and a sensitive capacitance transducer monitored small changes in the gap caused by either the passage of long-period seismic waves or permanent deformation of the crust.

64.  

The first arrival times of seismic waves at stations around the world demonstrated that the focus was shallow, but the closest recording seismograph was at College, Alaska— 440 kilometers from the epicenter—so that seismologists were unable to fix the depth to the nucleation point precisely enough to help the geologists. The rupture process for the 1964 Alaska earthquake turned out to be very complex. From an analysis of the short-period P waves, M. Wyss and J.N. Brune (The Alaska earthquake of 28 March 1964: A complex multiple rupture, Bull. Seis. Soc. Am., 57, 1017-1023, 1967) identified nine subevents during the first 72 seconds of rupture. Recently, D.H. Christensen and S.L. Beck (The rupture process and tectonic implications for the great 1964 Prince William Sound earthquake, Pure Appl. Geophys., 142, 29-53, 1994) used long-period P waves from the 20 stations with low-gain, on-scale recordings (including those that had been diminished by diffraction around the Earth’s core) to model the rupture process; their solution shows two major episodes of fault slippage, one near the epicenter during the first 100 seconds of rupture and a second near Kodiak Island, starting at about 160 seconds and lasting for 40 seconds.

65.  

The quote is from G. Plafker, Tectonic deformation associated with the 1964 Alaska earthquake, Science, 148, 1675-1687, 1965. Plafker’s detailed field observations were synthesized in Tectonics of the March 27, 1964, Alaska Earthquake (U.S. Geological Survey Professional Paper 543-I, U.S. Government Printing Office, Washington, D.C., 74 pp., 1969).

66.  

W. Stauder and G.A. Bollinger, The focal mechanism of the Alaska earthquake of March 28, 1964, and of its aftershock sequence, J. Geophys. Res., 71, 5283-5296, 1966; The S-wave project for focal mechanism studies, earthquakes of 1963, Bull. Seis. Soc. Am., 56, 1363-1371, 1966.

67.  

D.P. McKenzie and R.L. Parker, The North Pacific: An example of tectonics on a sphere, Nature, 216, 1276-1280, 1967. These authors were students at Cambridge University when Bullard first employed finite Euler rotations to quantify continental drift (E.C. Bullard, J.E. Everett, and A.G. Smith, The fit of the continents around the Atlantic, in A Symposium on Continental Drift, Phil. Trans. Roy. Soc. Lond., A258, 41-75, 1965), and they adapted this concept to describe the instantaneous relative motion between two plates as an angular velocity vector. They were the first to appreciate that slip vectors inferred from double-couple fault-plane solutions could be used to constrain the relative angular velocity vector.


68.  

W.J. Morgan, Rises, trenches, great faults, and crustal blocks, J. Geophys. Res., 73, 1959-1982, 1968. In this paper, Morgan established plate tectonics as a quantitative, global theory by showing that the present-day pole positions estimated from transform-fault azimuths were consistent with the gradients in seafloor-spreading rates observed from marine magnetic anomalies. His study of instantaneous block rotations was followed in the same year by X. Le Pichon’s (Sea-floor spreading and continental drift, J. Geophys. Res., 73, 3661-3697, 1968) reconstruction of the Mesozoic and Cenozoic history of seafloor spreading and continental drift in terms of finite rotations constrained by magnetic anomaly data.

69.  

Walter Elsasser, a colleague of Hess and Morgan at Princeton, circulated his ideas about plates as stress guides in a 1967 preprint that was not published until two years later (in The Application of Modern Physics to the Earth and Planetary Interiors, S.K. Runcorn, ed., Wiley-Interscience, New York, pp. 223-246, 1969). The thermal model of plates as the cold, outer boundary layer of a convecting mantle was discussed by D. Turcotte and R. Oxburgh (Finite amplitude convection cells and continental drift, J. Fluid Mech., 28, 29-42, 1967) and modified to include plates of finite thickness by D. McKenzie (Some remarks on heat flow and gravity anomalies, J. Geophys. Res., 72, 6261-6273, 1967).

70.  

B.L. Isacks, J. Oliver, and L.R. Sykes, Seismology and the new global tectonics, J. Geophys. Res., 73, 5855-5899, 1968.

71.  

Japanese seismologists recognized the anomalous velocity and attenuation structure beneath their home islands before the first structural study by J. Oliver and B.L. Isacks (Deep earthquake zones, anomalous structures in upper mantle and lithosphere, J. Geophys. Res., 72, 4259-4275, 1967), but they lacked a satisfactory geodynamic explanation. See T. Utsu (Seismological evidence for anomalous structure of island arcs, Rev. Geophys. Space Phys., 9, 839-890, 1971) for a discussion of this early work.

72.  

T. Atwater, Implications of plate tectonics for the Cenozoic tectonic evolution of western North America, Geol. Soc. Am. Bull., 81, 3513-3536, 1970.

73.  

In particular, she showed that the distribution of many geological events in space and time could be understood by the migration of the triple junction between the Pacific, North American, and (now extinct) Farallon plates. The kinematics of such triple junctions had first been discussed by D.P. McKenzie and W.J. Morgan (Evolution of triple junctions, Nature, 224, 125-133, 1969).

74.  

T. Matsuda and S. Uyeda (On the Pacific-type orogeny and its model-extension of the paired belts concept and possible origin of marginal seas, Tectonophysics, 11, 5-27, 1971) demonstrated that this paired structure of “Pacific-type” orogenic belts, first recognized by A. Miyashiro (Evolution of metamorphic belts, J. Petrol., 2, 277-311, 1961), could be explained by plate tectonics.

75.  

These facts were first noted in a plate-tectonic context by D.P. McKenzie (Speculations on the consequences and causes of plate motions, Geophys. J. Roy. Astron. Soc., 18, 1-32, 1969).

76.  

W. Hamilton, in Proceedings of Andesite Conference, A.R. McBirney, ed., Oregon Dept. Geol. Mineral Industries Bull., No. 65, 175-184, 1969; J.F. Dewey and J.M. Bird, Lithospheric plate-continental margin tectonics and the evolution of the Appalachian orogen, J. Geophys. Res., 75, 2625-2647, 1970.

77.  

P. Molnar and P. Tapponnier, Cenozoic tectonics of Asia: Effects of a continental collision, Science, 189, 419-426, 1975; Active tectonics of Tibet, J. Geophys. Res., 83, 5361-5375, 1978.

78.  

The satellite photos and seismic evidence available to Molnar and Tapponnier did not show evidence for a large amount of crustal shortening across Tibet or for the existence of a shallow-dipping fault that would allow the Indian crust to underplate the Tibetan crust. The reason for the high elevation of Tibet thus remained problematic, with these authors preferring some thermal or magmatic source in the mantle for the uplift. Later seismological studies would confirm the great thickness of the Tibetan crust and lead others to infer that the Tibetan crust was thickened by ductile flow in the lower crust.

79.  

J.D. Byerlee, Friction of rocks, Pure Appl. Geophys., 116, 615-626, 1978. Byerlee obtained a best fit to a diverse data set with τ_0 = 0, µ = 0.85 at low normal stresses (σ_n < 200 megapascals) and τ_0 = 50 megapascals, µ = 0.6 for normal stresses in the range of 200 megapascals to 2 gigapascals. The latter range corresponds to pressures at depths of roughly 6-60 kilometers. J.C. Jaeger (The frictional properties of joints in rocks, Geofisica Pura Appl., 43, 148-159, 1959) had earlier reported roughly comparable values of friction for various rocks. Exceptions to Byerlee’s law included some of the clay minerals, which show lower values of internal friction, 0.2 < µ < 0.4 (C. Morrow, B. Radney, and J. Byerlee, Frictional strength and the effective pressure law of montmorillonite and illite clays, in Fault Mechanics and Transport Properties of Rocks, B. Evans and T.-F. Wong, eds., Academic Press, London, pp. 69-88, 1992).
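The two branches of Byerlee’s fit are simple to evaluate. The sketch below encodes only the coefficients quoted in this note; the depth conversion assumes a lithostatic gradient of roughly 27 megapascals per kilometer, an illustrative value not taken from the note.

```python
def byerlee_shear_strength(sigma_n_mpa):
    """Frictional shear strength (MPa) from the two-branch fit quoted above."""
    if sigma_n_mpa < 200.0:
        return 0.85 * sigma_n_mpa            # tau_0 = 0, mu = 0.85
    return 50.0 + 0.60 * sigma_n_mpa         # tau_0 = 50 MPa, mu = 0.60

# Illustrative depth conversion (assumed lithostatic gradient of ~27 MPa/km)
for depth_km in (5, 10, 20, 60):
    sigma_n = 27.0 * depth_km
    print(depth_km, round(sigma_n), round(byerlee_shear_strength(sigma_n)))
```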

80.  

D. McKenzie, Speculations on the consequences and causes of plate motions, Geophys. J. Roy. Astron. Soc., 18, 1-32, 1969; W.F. Brace and D.L. Kohlstedt, Limits on lithospheric stress imposed by laboratory experiments, J. Geophys. Res., 85, 6248-6252, 1980.

81.  

In discussing his theory of earthquakes in the Great Basin, Gilbert (A theory of the earthquakes of the Great Basin, with a practical application, Am. J. Sci., 3rd Ser., 27, 49-53, 1884) commented explicitly on the role of fault friction: “The upthrust produces a local strain in the crust, involving a certain amount of compression and distortion, and this strain increases until it is sufficient to overcome the starting friction along the fractured surface. Suddenly, and almost instantaneously, there is an amount of motion sufficient to relieve the strain, and this is followed by a long period of quiet, during which the strain is gradually reimposed.” In his 1910 volume for the Lawson Commission (The California Earthquake of April 18, 1906, Publication 87, vol. II, Carnegie Institution of Washington, 192 pp., 1910; reprinted 1969), Reid recognized the San Andreas as a persistent plane of weakness: “We must therefore conclude that former ruptures of the fault-plane were by no means entirely healed, but that this plane was somewhat less strong than the surrounding rock and yielded to a smaller force than would have been necessary to break the solid rock” (p. 21). He also objected to the stress-free crack hypothesis on dynamical grounds: “If the break had been sharp, with no friction at the fault-plane, … the rock at the fault-plane would have made rapid but short vibrations back and forth during the 2.2 seconds necessary for it to reach the equilibrium position. This, however, is not what actually occurred; small slips took place at different parts of the fault-plane, and as the results of these successive slips and great friction, some 30 to 60 seconds were required before the rock came to rest; and even then certain parts of the rock were apparently still held in a strained condition by strong friction, and from time to time gave way, producing the aftershocks …” (p. 39).

82.  

E. Orowan, Mechanism of seismic faulting, in Rock Deformation, D. Griggs and J. Handin, eds., Geological Society of America Memoir 79, New York, pp. 323-345, 1960; M.A. Chinnery, The strength of the earth’s crust under horizontal shear stress, J. Geophys. Res., 69, 2085-2089, 1964. Jeffreys (Note on fracture, R. Soc. Edinburgh Proc., 56, 158-163, 1936) was the first to raise objections to frictional instabilities as the mechanism for deep earthquakes. H. Benioff (Earthquake source mechanisms, Science, 143, 1399-1406, 1964) attributed deep-focus seismicity to sudden phase changes, arguing that a long-period waveform of a deep (600-kilometer) Peruvian earthquake recorded with his strain seismometer close to the epicenter differed radically from the waveform produced by a faulting source. Recent studies have demonstrated that almost all deep-focus earthquakes can be explained with a simple planar fault model with stress drops ranging from 10 bars to 1 kilobar (e.g., H. Kawakatsu, Insignificant isotropic component in the moment tensor of deep earthquakes, Nature, 351, 50-53, 1991).


83.  

D. Griggs and J. Handin, Rock Deformation, Geological Society of America Memoir 79, Boulder, Colo., 382 pp., 1960.

84.  

W.F. Brace and J.D. Byerlee, Stick slip as a mechanism for earthquakes, Science, 153, 990-992, 1966. In an early series of experiments at Harvard University, P.W. Bridgman (Shearing phenomena at high pressure of possible importance for geology, J. Geol., 44, 653-669, 1936) had observed stick-slip between thin layers of materials subjected to torsional stress. The term stick-slip was introduced by F.P. Bowden and L. Leben (The nature of sliding and the analysis of friction, Proc. R. Soc. Lond., A169, 371-391, 1939), and the theory was developed with a focus on machine engineering issues (E. Rabinowicz, Friction and Wear of Materials, John Wiley, New York, 50 pp., 1965). The phenomenon was notoriously difficult to study in the laboratory, however, primarily because most testing machines could not be adequately regulated to obtain reproducible results. Brace and Byerlee’s success depended on their use of a new, “stiffer” testing machine.

85.  

W.F. Brace and J.D. Byerlee, California earthquakes: Why only shallow focus? Science, 168, 1573-1575, 1970.

86.  

Although Brace and Byerlee recognized the importance of the dynamical characteristics of the testing machine in the generation of stick-slip instabilities, a quantitative understanding of the dynamics of stable and unstable sliding was not achieved for another 10 years, when the concept of a critical stiffness k_c was precisely formulated for strain-weakening materials (J.W. Rudnicki, The inception of faulting in a rock mass with a weakened zone, J. Geophys. Res., 82, 844-854, 1977). Instabilities associated with velocity-weakening “rate-state” constitutive laws were investigated later by J.R. Rice and A.L. Ruina (Stability of steady frictional slipping, J. Appl. Mech., 50, 343-349, 1983).
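For the rate-state formulation mentioned at the end of this note, the instability condition is usually stated for a single-degree-of-freedom spring-slider analog of the testing machine; the standard result (quoted from the later literature rather than derived here) is

\[
k_c = \frac{(b - a)\,\sigma_n}{D_c},
\]

with sliding unstable (stick-slip) when the loading stiffness k is less than k_c and the friction is velocity weakening (b > a); here a and b are the rate-state constitutive parameters, σ_n the effective normal stress, and D_c the critical slip distance.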

87.  

F. Rummel and C. Fairhurst, Determination of the post-failure behavior of brittle rock using a servo-controlled testing machine, Rock Mech., 2, 189-204, 1970; R. Houpert, The uniaxial compressive strength of rocks, Proceedings of the 2nd International Society of Rock Mechanics, vol. II, Jaroslav Cerni Institute for Development of Water Resources, Belgrade, pp. 49-55, 1970; H.R. Hardy, R. Stefanko, and E.J. Kimble, An automated test facility for rock mechanics research, Int. J. Rock Mech. Min. Sci., 8, 17-28, 1971.

88.  

Dilatancy is the volume expansion due to the application of a shear stress, a well-known property of granular materials. Brace and his colleagues observed this phenomenon during rock-shearing experiments in the laboratory, which they interpreted as being due to pervasive microcracking in their specimens with a concomitant increase in void space (W.F. Brace, B.W. Paulding, and C.H. Scholz, Dilatancy in the fracture of crystalline rocks, J. Geophys. Res., 71, 3939-3953, 1966).

89.  

T.E. Tullis and J.D. Weeks, Constitutive behavior and stability of frictional sliding of granite, Pure Appl. Geophys., 124, 10-42, 1986; N.M. Beeler, T.E. Tullis, M.L. Blanpied, and J.D. Weeks, Frictional behavior of large displacement experimental faults, J. Geophys. Res., 101, 8697-8715, 1996.

90.  

J.H. Dieterich, Time-dependent friction in rocks, J. Geophys. Res., 77, 3690-3697, 1972.

91.  

C. Scholz, P. Molnar, and T. Johnson, Frictional sliding of granite and earthquake mechanism implications, J. Geophys. Res., 77, 6392-6406, 1972.

92.  

J.H. Dieterich, Time-dependent friction and the mechanics of stick-slip, Pure Appl. Geophys., 116, 790-806, 1978. The concept of a critical slip distance as a parameter characterizing frictional changes was first introduced by E. Rabinowicz (The nature of static and kinetic coefficients of friction, J. Appl. Phys., 22, 1373-1379, 1951; The intrinsic variables affecting the stick-slip process, Proc. R. Soc. Lond., A71, 668-675, 1958), who interpreted it as the typical dimension of surface contact junctions.

93.  

J.H. Dieterich, Modeling of rock friction 1. Experimental results and constitutive equations, J. Geophys. Res., 84, 2161-2168, 1979; J.H. Dieterich, Constitutive properties of faults with simulated gouge, in Mechanical Behavior of Crustal Rocks, N.L. Carter, M. Friedman, J.M. Logan, and D.W. Stearns, eds., American Geophysical Union Monograph 24, Washington, D.C., pp. 103-120, 1981; A. Ruina, Slip instability and state variable friction laws, J. Geophys. Res., 88, 10,359-10,370, 1983 (see also J.R. Rice, Constitutive relations for fault slip and earthquake instabilities, Pure Appl. Geophys., 121, 443-475, 1983). The historical development of the rate-state theory is summarized in the review papers by C. Marone (Laboratory-derived friction laws and their application to seismic faulting, Ann. Rev. Earth Planet. Sci., 26, 643-696, 1998) and C.H. Scholz (Earthquakes and friction laws, Nature, 391, 37-42, 1998).
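The Dieterich-Ruina formulation referenced in this note can be written compactly. The sketch below uses the “aging” form of the state evolution with representative parameter values chosen for illustration (they are not taken from the cited experiments) and integrates the state variable through a tenfold step in sliding velocity.

```python
import numpy as np

# Rate- and state-dependent friction with aging (Dieterich) state evolution.
mu0, a, b = 0.60, 0.010, 0.015      # representative constitutive parameters
V0, Dc = 1.0e-6, 1.0e-5             # reference velocity (m/s), critical slip distance (m)

def friction(V, theta):
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

# Velocity step from V0 to 10*V0; integrate d(theta)/dt = 1 - V*theta/Dc
V, theta, dt = 10 * V0, Dc / V0, 1.0e-3
for _ in range(20000):
    theta += dt * (1.0 - V * theta / Dc)

# At steady state theta -> Dc/V, and friction has dropped by (b - a)*ln(10)
print(friction(V, theta), mu0 - (b - a) * np.log(10.0))
```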

94.  

S.T. Tse and J.R. Rice, Crustal earthquake instability in relation to the depth variation of frictional slip properties, J. Geophys. Res., 91, 9452-9472, 1986. Subsequent depth-variable crustal models were based on friction data for saturated granite gouge studied over the range of crustal temperatures by M.L. Blanpied, D.A. Lockner, and J.D. Byerlee (Fault stability inferred from granite sliding experiments at hydrothermal conditions, Geophys. Res. Lett., 18, 609-612, 1991; Frictional slip of granite at hydrothermal conditions, J. Geophys. Res., 100, 13,045-13,064, 1995); a recent example is N. Lapusta, J.R. Rice, Y. Ben-Zion, and G. Zheng, Elastodynamic analysis for slow tectonic loading with spontaneous rupture episodes on faults with rate- and state-dependent friction, J. Geophys. Res., 105, 23,765-23,789, 2000.

95.  

K. Aki, Generation and propagation of G waves from the Niigata earthquake of June 16, 1964, Bull. Earthquake Res. Inst. Tokyo Univ., 44, 23-88, 1966. Formally speaking, the right-hand side of Equation 2.6 is the static moment, so that an accurate estimate of M_0 for an earthquake of finite size requires the measurement of waves whose periods are long compared to the duration of the faulting. For the 1964 Niigata earthquake, Aki used surface waves with periods up to 200 seconds, which satisfied this criterion. He found M_0 ≈ 3 × 10^20 newton-meters.

96.  

Moment magnitude M_W was introduced by H. Kanamori (The energy release in great earthquakes, J. Geophys. Res., 82, 2981-2987, 1977; Quantification of earthquakes, Nature, 271, 411-414, 1978), and its agreement with other magnitude scales in their unsaturated ranges was discussed by T.C. Hanks and H. Kanamori (A moment magnitude scale, J. Geophys. Res., 84, 2348-2350, 1979).
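As a worked check on these definitions, the moment magnitude corresponding to Aki’s Niigata moment can be computed from the Hanks-Kanamori form of the scale (the constant below assumes M_0 in newton-meters; the report’s own equation may be written in other units):

```python
import math

def moment_magnitude(M0_newton_meters):
    # Hanks-Kanamori definition; the constant 9.1 assumes SI units (N-m)
    return (2.0 / 3.0) * (math.log10(M0_newton_meters) - 9.1)

print(moment_magnitude(3.0e20))   # ~7.6 for the 1964 Niigata earthquake
```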

97.  

Omori established his empirical hyperbolic law of aftershock frequency, n(t) ~ t^-1, in his studies of the 1891 Nobi earthquake (F. Omori, On aftershocks, Report by the Earthquake Investigation Committee, 2, 103-139, 1894; ibid., Report by the Earthquake Investigation Committee, 30, 4-29, 1900). This relationship was extended first by R. Hirano and later by T. Utsu (A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521-605, 1961) to a power law in the form of Equation 2.8, which is now called the modified Omori law.
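The modified Omori law of Equation 2.8 is straightforward to evaluate; the sketch below uses the common parameterization n(t) = K/(c + t)^p with arbitrary illustrative values of K, c, and p (published p values typically cluster near 1):

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Aftershock rate (events per day) from the modified Omori law (illustrative values)."""
    return K / (c + t_days) ** p

# The rate decays roughly as 1/t: about a tenfold drop per tenfold increase in time
for t in (1.0, 10.0, 100.0):
    print(t, round(omori_rate(t), 2))
```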

98.  

The first detailed study of aftershocks using portable seismometers was conducted by Caltech seismologists following the 1952 Kern County earthquake in central California (H. Benioff, Earthquakes in Kern County, California, 1952, California Division of Mines, State of California, Bulletin 171, 283 pp., 1955). Based on this and other data, Benioff (Seismic evidence for crustal structure and tectonic activity, in Crust of the Earth. A Symposium, A Poldervaart, ed., Geological Society of America, New York, pp. 61-75, 1955) hypothesized that the spatial distribution of aftershocks defines the segment of the fault that had ruptured during the mainshock. Subsequent work has shown this to be approximately true for the initial sequence of aftershocks, although the aftershock zone typically expands to a larger area after a week or so (F. Tajima and H. Kanamori, Global survey of aftershock area expansion patterns, Phys. Earth Planet. Int., 40, 77-134, 1985).

99.  

K. Aki used a spectral representation to establish that earthquakes of varying size had spectra of similar shape, differing primarily in the low-frequency amplitude, proportional to seismic moment, and the location of the “characteristic frequency,” which he related to the characteristic length scale of an earthquake (Scaling law of seismic spectrum, J. Geophys. Res., 72, 1217-1231, 1967). Subsequent studies by J.N. Brune (Tectonic stress and the spectra of seismic shear waves from earthquakes, J. Geophys. Res., 75, 4997-5009, 1970) and J.C. Savage (Relation of corner frequency to fault dimensions, J. Geophys. Res., 77, 3788-3795, 1972) related the corner frequency to the dimensions of the fault plane.
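In Brune’s model the corner frequency and source radius are related by a simple expression (quoted from the standard literature rather than derived in this report):

\[
f_c \approx \frac{2.34\,\beta}{2\pi r},
\]

so that, for a shear-wave speed β ≈ 3.5 km/s, a corner frequency of 1 hertz corresponds to a source radius of roughly 1.3 kilometers.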

100.  

Theoretical studies of propagating fractures by the engineer G.R. Irwin and his associates (see L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, Cambridge, U.K., 563 pp., 1990) had shown that the speed limit for an antiplane crack (slip direction parallel to the dislocation line) was the shear-wave velocity v_S, while that for an in-plane crack (slip direction perpendicular to the dislocation line) was the Rayleigh-wave velocity. Seismological observations indicated that the actual rupture velocities for earthquakes are 10-20 percent lower than these theoretical limits.

101.  

This equation for the stress drop omits a nondimensional scaling constant c_s of order unity, which depends on the rupture geometry. V. Keilis-Borok (On the estimation of the displacement in an earthquake source and of source dimensions, Ann. Geofisica, 12, 205-214, 1959) derived c_s = 7π/16 ≈ 1.37 for a circular crack, and he applied this formula to obtain a stress drop for the 1906 San Francisco earthquake.
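Combining this constant with the definition of seismic moment for a circular crack of radius a and average slip ū (M_0 = µūπa^2) gives the commonly used form of the stress-drop estimate:

\[
\Delta\sigma = \frac{7\pi}{16}\,\frac{\mu\bar{u}}{a} = \frac{7}{16}\,\frac{M_0}{a^{3}};
\]

for example, M_0 = 10^18 newton-meters and a = 3 kilometers give Δσ ≈ 16 megapascals (the numerical values are illustrative, not drawn from the studies cited here).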

102.  

The observation that the static stress drop is approximately constant for earthquakes of different sizes in similar tectonic environments was established in publications by K. Aki (e.g., Earthquake mechanism, Tectonophysics, 13, 423-446, 1972) and various other authors in the early 1970s; see the summary by T. Hanks (Earthquake stress drops, ambient tectonic stresses and stresses that drive plate motions, Pure Appl. Geophys., 115, 441-458, 1977).

103.  

H. Kanamori and D.L. Anderson, Theoretical basis of some empirical relations in seismology, Bull. Seis. Soc. Am., 65, 1073-1095, 1975.

104.  

N. Haskell, Total energy and energy spectral density of elastic wave radiation from propagating faults, Bull. Seis. Soc. Am., 54, 1811-1842, 1964.

105.  

N. Haskell, Total energy and energy spectral density of elastic wave radiation from propagating faults, 2, A statistical source model, Bull. Seis. Soc. Am., 56, 125-140, 1966; K. Aki, Scaling law of seismic spectrum, J. Geophys. Res., 72, 1217-1231, 1967.

106.  

B.V. Kostrov, Teoriya ochagov tektonicheskikh zemletryaseniy [Theory of the foci of tectonic earthquakes], Izv. Akad. Nauk. S.S.R., Earth Physics, 4, 84-101, 1970; M.J. Randall, Elastic multipole theory and seismic moment, Bull. Seis. Soc. Am., 61, 1321-1326, 1971; F. Gilbert, Excitation of the normal modes of the earth by earthquake sources, Geophys. J. R. Astr. Soc., 22, 223-226, 1971. The moment tensor can be represented as a 3 × 3 matrix, which must be symmetric to conserve angular momentum; the most general moment tensor is therefore specified by six real numbers.
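A double-couple source illustrates this symmetric representation. The sketch below, with an arbitrarily chosen fault geometry, builds the moment tensor for shear slip from the fault-normal and slip-direction unit vectors and confirms its characteristic properties:

```python
import numpy as np

M0 = 1.0e18                       # scalar seismic moment in N-m (arbitrary example)
n = np.array([0.0, 0.0, 1.0])     # unit normal to a horizontal fault plane
d = np.array([1.0, 0.0, 0.0])     # unit slip vector lying in the fault plane

# Moment tensor of a shear dislocation: symmetric by construction
M = M0 * (np.outer(n, d) + np.outer(d, n))

print(np.allclose(M, M.T))        # True: symmetric, six independent components
print(np.trace(M))                # 0: no volume change for pure shear
print(np.linalg.eigvalsh(M))      # eigenvalues (-M0, 0, +M0): a double couple
```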

107.  

L. Knopoff and M.J. Randall, The compensated linear-vector dipole: A possible mechanism of deep earthquakes, J. Geophys. Res., 75, 4957-4963, 1970. A CLVD represents the strain associated with a volume-preserving extension (or compression) of an infinitesimal cylinder along its axis of symmetry. The evidence and interpretation of earthquake focal mechanisms that do not conform to simple double couples have recently been reviewed by B.R. Julian, A.D. Miller, and G.R. Foulger, Non-double-couple earthquakes: 1. Theory, Rev. Geophys., 36, 525-549, 1998.

108.  

D. Forsyth and S. Uyeda (On the relative importance of the driving forces of plate motion, Geophys. J. Roy. Astr. Soc., 43, 163-200, 1975) showed that the primary driving forces are “ridge push” (compression due to gravitational sliding of newly formed lithosphere away from mid-ocean ridge highs) and “slab pull” (tension due to the gravitational sinking of the old subducting slabs). Subsequent analyses attempted to explain plate motions using the lateral density variations inferred from seismic tomography as the buoyancy forces in self-consistent models of mantle convection (B.H. Hager and R.J. O’Connell, A simple global model of plate dynamics and mantle convection, J. Geophys. Res., 86, 4843-4867, 1981; Y. Ricard and C. Vigny, Mantle dynamics with induced plate tectonics, J. Geophys. Res., 94,
17,543-17,559, 1989; C.W. Gable, R.J. O’Connell, and B.J. Travis, Convection in three dimensions with surface plates: Generation of toroidal flow, J. Geophys. Res., 96, 8391-8405, 1991).

109.  

Some intraplate earthquakes occur along preexisting zones of weakness in the continental crust associated with the landward extensions of oceanic fracture zones (L.R. Sykes, Intraplate seismicity, reactivation of preexisting zones of weakness, alkaline magmatism, and other tectonism postdating continental fragmentation, Rev. Geophys. Space Phys., 16, 621-688, 1978), and some occur along reactivated faults that were first formed during continental rifting (A.C. Johnston, K.H. Coppersmith, L.R. Kanter, and C.A. Cornell, The Earthquakes of Stable Continental Regions: Assessment of Large Earthquake Potential, J.F. Schneider, ed., Electric Power Research Institute, Technical Report 102261, Palo Alto, Calif., 4 vols., 2985 pp., 1994).

110.  

At 10-kilometer depth, the effective normal stress σ_n^eff is about 180 megapascals (assuming hydrostatic pore pressure P_f), so that reasonable values of the coefficient of friction (µ = 0.6-0.8) imply that the shear stress to initiate frictional slip should be 110-140 megapascals.
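
A sketch of the arithmetic behind these numbers, assuming a crustal density of about 2800 kg/m^3 and a hydrostatic pore pressure from a 1000 kg/m^3 water column:

\[ \sigma_n^{\rm eff} \approx (\rho_c - \rho_w)\, g\, z \approx (1800\ {\rm kg\,m^{-3}})(9.8\ {\rm m\,s^{-2}})(10^{4}\ {\rm m}) \approx 180\ {\rm MPa}, \]
\[ \tau \approx \mu\, \sigma_n^{\rm eff} \approx (0.6\ \text{to}\ 0.8) \times 180\ {\rm MPa} \approx 110\text{-}140\ {\rm MPa}. \]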

111.  

The lack of a detectable heat-flow anomaly was first reported by J.N. Brune, T.L. Henyey, and R.F. Roy (Heat flow, stress and rate of slip along San Andreas fault, California, J. Geophys. Res., 74, 3821-3827, 1969) and confirmed by A.H. Lachenbruch and J.H. Sass (Thermo-mechanical aspects of the San Andreas fault system, in Proceedings of the Conference on Tectonic Problems of the San Andreas Fault System, R.L. Kovach and A. Nur, eds., Stanford University Publications in Geological Science 13, Stanford, Calif., pp. 192-205, 1973). Both of these studies quoted an upper bound on the average stress of 20 megapascals.
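
The reasoning behind this bound can be sketched with illustrative numbers (assuming a long-term slip rate of roughly 35 mm/yr on the fault): the frictional heat generated per unit fault area is the product of the average shear stress and the slip rate, so an average stress of 100 megapascals would imply

\[ q \approx \bar{\tau}\, \bar{v} \approx (10^{8}\ {\rm Pa}) \times \frac{0.035\ {\rm m}}{3.15\times 10^{7}\ {\rm s}} \approx 0.11\ {\rm W\,m^{-2}} = 110\ {\rm mW\,m^{-2}}, \]

which, conducted to the surface, would produce a local heat-flow anomaly comparable to the regional background of roughly 80 mW/m^2. The absence of any such anomaly is what limits the average stress to a few tens of megapascals.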

112.  

The direction of maximum principal compressive stress can be constrained from the orientations of borehole breakouts, in some cases by direct stress measurements, and by the orientations of known faults and the inferred focal mechanism orientations for smaller earthquakes in a region (assuming that these involve slip in the maximally shear-stressed direction on the causative fault). Such data showed that the maximum principal stress direction near the San Andreas fault was steeply inclined to the fault trace, in places approaching 80 degrees, quite different from the 30- to 45-degree inclination expected if the fault were the most highly stressed feature in the region and were close to frictional failure (V. Mount and J. Suppe, State of stress near the San Andreas Fault—Implications of wrench tectonics, Geology, 15, 1143-1146, 1987; M.D. Zoback, M.L. Zoback, V. Mount, J. Eaton, J. Healy, D. Oppenheimer, P. Reasenberg, L. Jones, B. Raleigh, I. Wong, O. Scotti, and C. Wentworth, New evidence on the state of stress of the San Andreas fault system, Science, 238, 1105-1111, 1988).
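
The expected 30- to 45-degree inclination follows from Coulomb friction (a standard relation, stated here for reference): the angle ψ between the maximum principal compressive stress and the optimally oriented fault plane satisfies

\[ \tan 2\psi = \frac{1}{\mu}, \]

which gives ψ ≈ 30 degrees for µ ≈ 0.6 and approaches 45 degrees only as µ → 0; an observed inclination near 80 degrees is therefore commonly interpreted as evidence that the fault slips at unusually low shear stress, that is, that it is frictionally weak.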

113.  

S.H. Hickman, R.H. Sibson, and R.L. Bruhn, Introduction to special session: Mechanical involvement of fluids in faulting, J. Geophys. Res., 100, 12,831-12,840, 1995.

114.  

Gilbert’s article “A Theory of the Earthquakes of the Great Basin, with a Practical Application” appeared in the Salt Lake City Tribune on September 30, 1883 (reprinted in Am. J. Sci., 3rd Ser., 27, 49-54, 1884).

115.  

Recent paleoseismic studies have shown that during the last 6000 years, the Wasatch fault has not ruptured all at once along its entire 340-kilometer length, but rather in smaller independent segments (M.N. Machette, S.F. Personius, A.R. Nelson, D.P. Schwartz, and W.R. Lund, The Wasatch fault zone, J. Struct. Geol., 13, 137-149, 1991). About 10 such segments have been identified, ranging in length from about 11 to 70 kilometers. Over this period, one of these segments has ruptured every 350 years on average, producing earthquakes with magnitudes ranging from 6.5 to 7.5. Individual segments appear to show irregular patterns of recurrence, however, with interseismic intervals lasting as long as 4000 years.

116.  

Reid’s comment about precursory strain is based on the geodetic measurements of displacement of the Farallon Islands in the half-century before the 1906 earthquake. This
motion, which was about half that of the co-seismic displacement, was a key observation in the development of his hypothesis.

117.  

By “strong earthquake,” Reid clearly meant something like the 1906 San Francisco earthquake, and he was implicitly assuming a characteristic earthquake model. Modern geodetic measurements show a rate of angle change of a few hundred parts per billion per year, with very modest changes over time, in the region of the 1906 quake (W.H. Prescott and M. Lisowski, Strain accumulation along the San Andreas fault system east of San Francisco Bay, California, Tectonophysics, 97, 41-56, 1983). Thus, it would take a few thousand years to accumulate Reid’s critical strain of 1/2000. Recent paleoseismic investigations indicate that surface-rupturing earthquakes on the 1906 rupture zone have an average interval of a few hundred years (D.P. Schwartz, D. Pantosti, K. Okumura, T.J. Powers, J.C. Hamilton, Paleoseismic investigations in the Santa Cruz Mountains, California; Implications for recurrence of large-magnitude earthquakes on the San Andreas fault, J. Geophys. Res., 103, 17,985-18,001, 1998). Reid may have overestimated the critical strain, the paleo-earthquakes may not have all been as large as the 1906 event, or the strain may not have been reset to zero in 1906.
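
The arithmetic here is straightforward (an illustrative rate chosen within the quoted range):

\[ t \approx \frac{1/2000}{2\times 10^{-7}\ {\rm yr^{-1}}} = \frac{5\times 10^{-4}}{2\times 10^{-7}\ {\rm yr^{-1}}} = 2500\ {\rm yr}, \]

that is, a few thousand years at a strain rate of 200 parts per billion per year.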

118.  

K. Aki, Possibilities of seismology in the 1980s (Presidential address to the Seattle meeting), Bull. Seis. Soc. Am., 70, 1969-1976, 1980. Imamura is also reported to have forecast the occurrence of the great Nankaido earthquakes of 1944 and 1946 (On the seismic activity of central Japan, Jap. Jour. Astron. Geophys., 6, 119-137, 1928; see S.P. Nishenko, Earthquakes, hazards and predictions, in The Encyclopedia of Solid-Earth Geophysics, D.E. James, ed., Van Nostrand Reinhold, New York, pp. 260-268, 1989).

119.  

S.A. Fedotov, Regularities of distribution of strong earthquakes in Kamchatka, the Kuril Islands and northern Japan (in Russian), Akad. Nauk SSSR Inst. Fiziki Zemli Trudi, 36, 66-93, 1965. Fedotov’s map of seismic gaps includes earthquakes of M 7.75 and larger; it was reproduced by K. Mogi in Earthquake Prediction, Academic Press, London, p. 82, 1985.

120.  

L.R. Sykes, Aftershock zones of great earthquakes, seismicity gaps and prediction, J. Geophys. Res., 76, 8021-8041, 1971. G. Plafker and M. Rubin (Uplift history and earthquake recurrence as deduced from marine terraces on Middleton Island, in Proceedings of Conference VI: Methodology for Identifying Seismic Gaps and Soon-to-Break Gaps, U.S. Geological Survey Open File Report 78-943, Reston, Va., pp. 687-722, 1978) later argued from direct geological evidence that this estimate of repeat time was short by a factor of two to four.

121.  

C.G. Chase, The n plate problem of plate tectonics, Geophys. J. R. Astron. Soc., 29, 117-122, 1972; J.B. Minster, T.H. Jordan, P. Molnar, and E. Haines, Numerical modelling of instantaneous plate tectonics, Geophys. J. Roy. Astron. Soc., 36, 541-576, 1974; J.B. Minster and T.H. Jordan, Present-day plate motions, J. Geophys. Res., 83, 5331-5354, 1978.

122.  

J.A. Kelleher, L.R. Sykes, and J. Oliver, Criteria for prediction of earthquake locations, Pacific and Caribbean, J. Geophys. Res., 78, 2547-2585, 1973; W.R. McCann, S.P. Nishenko, L.R. Sykes, and J. Krause, Seismic gaps and plate tectonics: Seismic potential for major boundaries, Pure Appl. Geophys., 117, 1082-1147, 1979. McCann et al. defined seismic gap as follows: “The term seismic gap is taken to refer to any region along an active plate boundary that has not experienced a large thrust or strike slip earthquake for more than 30 years … Segments of plate boundaries that have not been the site of large earthquakes for tens to hundreds of years (i.e., have been seismic gaps for large shocks) are more likely to be the sites of future large shocks than segments that experience rupture during, say, the last 30 years.”

123.  

S.P. Nishenko, Circum-Pacific seismic potential: 1989-1999, Pure Appl. Geophys., 135, 169-259, 1991. Nishenko defined for each of 98 plate boundary segments a characteristic earthquake with a magnitude sufficient to rupture the entire segment, and a probability that such an earthquake would occur within 5, 10, and 20 years beginning in 1989 was
specified. The probability was based on the assumption that characteristic earthquakes are quasi periodic, with an average recurrence time that can be estimated from historic earthquakes or from the rates of relative plate motion. In the latter method, the mean recurrence time is estimated from the ratio of the average displacement in a characteristic earthquake to the relative plate velocity.
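
In symbols (introduced here for illustration, not Nishenko's notation), the plate-motion estimate is

\[ \bar{T} \approx \frac{\bar{D}_{\rm char}}{v_{\rm plate}}, \]

so a segment whose characteristic earthquake slips about 5 meters and that accommodates 50 millimeters per year of relative plate motion (hypothetical numbers) would have a mean recurrence time of roughly 100 years.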

124.  

D.P. Schwartz and K.J. Coppersmith, Fault behavior and characteristic earthquakes: Examples from the Wasatch and San Andreas faults, J. Geophys. Res., 89, 5681-5698, 1984.

125.  

T. Hirata, Fractal dimension of fault systems in Japan: Fractal structure in rock fracture geometry at various scales, Pure Appl. Geophys., 131, 157-170, 1989; P. Segall and D. Pollard, Joint formation in granitic rock of the Sierra Nevada, Geol. Soc. Am. Bull., 94, 563-575, 1983; S. Wesnousky, C. Scholz, and K. Shimazaki, Earthquake frequency distribution and the mechanics of faulting, J. Geophys. Res., 88, 9331-9340, 1983.

126.  

In principle, the characteristic magnitude can be determined from the magnitudes of previous earthquakes on a fault segment or estimated from the length or area of the segment. Both methods present practical difficulties. Because the characteristic magnitude is thought to vary by segment, only events on a specific segment pertain. Uncertainties in the magnitudes and locations of earlier earthquakes make it difficult to identify truly characteristic earthquakes in the seismic or geologic record. Fault geometry is not very definitive because there is insufficient understanding of the fault features that would prevent earthquake rupture from propagating further. S. Wesnousky (The Gutenberg-Richter or characteristic earthquake distribution: Which is it?, Bull. Seis. Soc. Am., 84, 1940-1959, 1994) examined the magnitude distribution on large sections of the San Andreas and other faults. He compared the rate of large earthquakes required to match the observed fault slip with that inferred by extrapolating the rate of smaller events with the Gutenberg-Richter relationship. For several sections, he found that the rate based on fault slip exceeded the Gutenberg-Richter rate, suggesting that the characteristic model was more appropriate there. However, Kagan disputed these findings (Y. Kagan and S. Wesnousky, The Gutenberg-Richter or characteristic earthquake distribution: Which is it? Discussion and reply, Bull. Seis. Soc. Am., 86, 274-291, 1996). He argued that the result was biased because the regions were chosen around past earthquakes. Furthermore, the fault slip could be explained by a Gutenberg-Richter distribution if the maximum magnitude chosen was large enough.
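
This comparison can be sketched as a moment-rate balance (standard relations used as an illustration, not Wesnousky's exact formulation): the long-term geologic moment rate of a fault section,

\[ \dot{M}_0^{\rm geol} = \mu A \dot{s} \]

(µ rigidity, A fault area, \(\dot{s}\) slip rate), is compared with the moment rate obtained by summing the seismic moment, M_0 ≈ 10^{1.5M + 9.05} N m in the standard moment-magnitude relation, over Gutenberg-Richter occurrence rates log_10 N(≥M) = a - bM extrapolated up to an assumed maximum magnitude. A geologic rate well in excess of this sum points to additional, characteristic large events.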

127.  

S.P. Nishenko (Earthquakes, hazards and predictions, in The Encyclopedia of Solid-Earth Geophysics, D.E. James, ed., Van Nostrand Reinhold, New York, pp. 260-268, 1989) listed 15 large earthquakes that occurred in previously identified seismic gaps, including 8 earthquakes of M 8 and larger. Y.Y. Kagan and D.D. Jackson (Seismic gap hypothesis: Ten years after, J. Geophys. Res., 96, 21,419-21,431, 1991) reevaluated Nishenko’s conclusions for the 10 events that occurred after Kelleher, Sykes, and Oliver (Possible criteria for predicting earthquake locations and their application to major plate boundaries of the Pacific and the Caribbean, J. Geophys. Res., 78, 2547-2585, 1973) provided geographically specific gap definitions. Kagan and Jackson found five unqualified successes, two mixed successes (earthquake on a gap boundary), two events for which gaps had not been defined, and one earthquake in a “filled” gap. The track record of the gap hypothesis has been quite controversial, in part because early definitions were subjective. For further discussion, see S.P. Nishenko and L.R. Sykes, Comment on “Seismic gap hypothesis: Ten years after” by Y.Y. Kagan and D.D. Jackson, J. Geophys. Res., 98, 9909-9916, 1993; D.D. Jackson and Y.Y. Kagan, Reply to comment by S.P. Nishenko and L.R. Sykes on “Seismic gap hypothesis: Ten years after,” J. Geophys. Res., 98, 9917-9920, 1993; Y.Y. Kagan and D.D. Jackson, New seismic gap hypothesis: Five years after, J. Geophys. Res., 100, 3943-3959, 1995.

128.  

K. Shimazaki and T. Nakata, Time-predictable recurrence model for large earthquakes, Geophys. Res. Lett., 7, 279-282, 1980. The time-predictable model had been proposed
earlier by C.G. Bufe, P.W. Harsh, and R.O. Burford (Steady-state seismic slip—A precise recurrence model, Geophys. Res. Lett., 4, 91-94, 1977).

129.  

The Japanese program for earthquake research, like its U.S. counterpart, is a broadly based, multidisciplinary effort directed at mitigating seismic hazards, but unlike the National Earthquake Hazards Reduction Program (NEHRP), its principal focus has been event-specific, “practical earthquake prediction.” The program involves six agencies, with the scientific lead given to the Japan Meteorological Agency. The total expenditures over the first 30 years of the program’s existence (1964-1993) were nearly 2000 × 10^8 yen, or about $1.7 billion (in as-spent dollars). In units of 10^8 yen (approximately $1 million), recent budgets for the program were 743 for 1995, 177 for 1996, and 214 for 1997. The 1995 number is high because it includes two supplements of 370 and 265 that were allocated following the destructive Hyogo-ken Nambu (Kobe) earthquake. The NEHRP expenditures for the same years, in millions of dollars, were 103, 106, and 98, respectively. The differences in expenditures between the two programs are even larger because the Japanese budgets do not include salaries, which are funded separately, whereas the U.S. budgets do.

130.  

W.H. Bakun and A.G. Lindh, The Parkfield, California earthquake prediction experiment, Science, 229, 619-624, 1985.

131.  

Postmortem explanations cover a wide range. The 1983 Coalinga earthquake may have reduced the stress (R.W. Simpson, S.S. Schulz, L.D. Dietz, and R.O. Burford, The response of creeping parts of the San Andreas fault to earthquakes on nearby faults: Two examples, Pure Appl. Geophys., 126, 665-685, 1988). The rate of Parkfield earthquakes may be slowing because of postseismic relaxation from 1906 (Y. Ben-Zion, J.R. Rice, and R. Dmowska, Interaction of the San Andreas fault creeping segment with adjacent great rupture zones, and earthquake recurrence at Parkfield, J. Geophys. Res., 98, 2135-2144, 1993), or the Parkfield sequence may be a chance occurrence rather than a characteristic earthquake sequence (Y.Y. Kagan, Statistical aspects of Parkfield earthquake sequence and Parkfield prediction, Tectonophysics, 270, 207-219, 1997).

132.  

Working Group on California Earthquake Probabilities, Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault, U.S. Geological Survey Open-File Report 88-398, Reston, Va., 62 pp., 1988.

133.  

R.A. Harris summarized more than 20 published or broadcast statements that could be interpreted as scientific forecasts of the Loma Prieta earthquake (Forecasts of the 1989 Loma Prieta, California, earthquake, Bull. Seis. Soc. Am., 88, 898-916, 1998). The 1906 surface displacement on this segment was smaller than it was to the north, suggesting incomplete release of the accumulated strain. Small earthquakes were notably absent on this segment, a pattern that was thought to appear before large earthquakes. From these observations, several earthquake forecasts were generated expressing the rupture length, magnitude, and approximate timing in probabilistic terms. For example, in 1983 Alan Lindh forecast an M 6.5 earthquake with a probability of 0.30 in 20 years (A.G. Lindh, Preliminary Assessment of Long-Term Probabilities for Large Earthquakes Along Selected Fault Segments of the San Andreas Fault System in California, U.S. Geological Survey Open-File Report 83-63, Menlo Park, Calif., 15 pp., 1983). In 1984 Sykes and Nishenko forecast an M 7.0 earthquake with a probability of 0.19 to 0.95 in 20 years (L.R. Sykes and S.P. Nishenko, Probabilities of occurrence of large plate rupturing earthquakes for the San Andreas, San Jacinto, and Imperial faults, California, 1983-2003, J. Geophys. Res., 89, 5905-5927, 1984).

134.  

The earthquake was not on the San Andreas fault in a strict sense, but rather on a subsidiary structure, the Sargent fault, which dips about 70 degrees to the southwest and does not intersect the San Andreas. The fault slip had a large vertical component, which was different from expected San Andreas motion. Moreover, historical strain data suggest that significant slip occurred at depth on the San Andreas fault during the 1906 earthquake, so there may have been little or no slip deficit to warrant the initial forecasts.

135.  

For a recent compilation, including long-, intermediate-, and short-term prediction, see the conference proceedings introduced by L. Knopoff, Earthquake prediction: The scientific challenge, Proc. Natl. Acad. Sci., 93, 3719-3720, 1996. Articles by many other authors follow in sequence. For a brief, cautiously skeptical review see D.L. Turcotte, Earthquake prediction, Ann. Rev. Earth Planet. Sci., 19, 263-281, 1991. For a detailed, negative assessment of the history of earthquake prediction research, see R.J. Geller, Earthquake prediction: A critical review, Geophys. J. Int., 131, 425-450, 1997.

136.  

J. Deng and L. Sykes, Evolution of the stress field in southern California and triggering of moderate-size earthquakes; A 200-year perspective, J. Geophys. Res., 102, 9859-9886, 1997; R.A. Harris and R.W. Simpson, Stress relaxation shadows and the suppression of earthquakes; Some examples from California and their possible uses for earthquake hazard estimates, Seis. Res. Lett., 67, 40, 1996; R.A. Harris and R.W. Simpson, Suppression of large earthquakes by stress shadows; A comparison of Coulomb and rate-and-state failure, J. Geophys. Res., 103, 24,439-24,451, 1998.

137.  

K. Mogi, Earthquake Prediction, Academic Press, Tokyo, 355 pp., 1985. Mogi’s “do-nut” hypothesis is summarized succinctly in C. Scholz, The Mechanics of Earthquakes and Faulting, Cambridge University Press, New York, pp. 340-343, 1990.

138.  

M. Ohtake, T. Matumoto, and G. Latham, Seismicity gap near Oaxaca, southern Mexico, as a probable precursor to a large earthquake, Pure Appl. Geophys., 113, 375-385, 1977. Further details are given in M. Ohtake, T. Matumoto, and G. Latham, Evaluation of the forecast of the 1978 Oaxaca, southern Mexico earthquake based on a precursory seismic quiescence, in Earthquake Prediction—An International Review, D. Simpson and P. Richards, eds., American Geophysical Union, Maurice Ewing Series 4, Washington, D.C., pp. 53-62, 1981. Interpretation of the success of the prediction and the reality of the precursor is complicated by a global change in earthquake recording because some large seismic networks were closed in 1967. For more details, see R.E. Habermann, Precursory seismic quiescence: Past, present, and future, Pure Appl. Geophys., 126, 277-318, 1988.

139.  

A comprehensive test requires a complete record of successes and failures for predictions made using well-defined and consistent methods. Otherwise, the likelihood of success by chance cannot be evaluated.

140.  

V.I. Keilis-Borok and V.G. Kossobokov, Premonitory activation of seismic flow: Algorithm M8, Phys. Earth Planet. Int., 61, 73-83, 1990.

141.  

J.H. Healy, V.G. Kossobokov, and J.W. Dewey, A Test to Evaluate the Earthquake Prediction Algorithm M8, U.S. Geological Survey Open-File Report 92-401, Denver, Colo., 23 pp. + 6 appendixes, 1992; V.G. Kossobokov, L.L. Romashkova, V.I. Keilis-Borok, and J.H. Healy, Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the circum-Pacific, 1992-1997, Phys. Earth Planet. Int., 111, 187-196, 1999.

142.  

See <http://www.mitp.ru/predictions.html>. A password is needed to access predictions for the current six-month time interval.

143.  

John Milne noted this quest in his treatise Earthquakes and Other Earth Movements (D. Appleton and Company, New York, pp. 301 and 310, 1899): “Ever since seismology has been studied, one of the chief aims of its students has been to discover some means which could enable them to foretell the coming of an earthquake, and the attempts which have been made by workers in various countries to correlate these occurrences with other well-marked phenomena may be regarded as attempts in this direction.” Milne himself proposed short-term prediction schemes based on measurements of ground deformation and associated phenomena, such as disturbances in the local electromagnetic field. “As our knowledge of earth movements, and their attendant phenomena, increases there is little doubt that laws will be gradually formulated and in the future, as telluric disturbances increase, a large black ball gradually ascending a staff
may warn the inhabitants on land of a coming earthquake, with as much certainty as the ball upon a pole at many seaports warns the mariner of coming storms.” Milne’s optimism was not shared by all seismologists, notably Charles Richter, who in his textbook (C.F. Richter, Elementary Seismology, W.H. Freeman, San Francisco, pp. 385-386, 1958) dismissed short-term earthquake prediction as a “will-o’-the-wisp,” questioning whether “any such prediction will be possible in the foreseeable future; the conditions of the problem are highly complex. One may compare it to the situation of a man who is bending a board across his knee and attempts to determine in advance just where and when the cracks will appear.”

144.  

Ad Hoc Panel on Earthquake Prediction, Earthquake Prediction: A Proposal for a Ten Year Program of Research, White House Office of Science and Technology, Washington, D.C., 134 pp., 1965.

145.  

Dilatancy is the volume expansion due to the application of a shear stress, a well-known property of granular materials. Brace and his colleagues observed this phenomenon during rock-shearing experiments in the laboratory, which they interpreted as being due to pervasive microcracking in their specimens with a concomitant increase in void space (W.R. Brace, B.W. Paulding, and C.H. Scholz, Dilatancy in the fracture of crystalline rocks, J. Geophys. Res., 71, 3939-3953, 1966).

146.  

I.L. Nersesov, A.N. Semenov, and I.G. Simbireva, Space-time distribution of travel-time ratios of transverse and longitudinal waves in the Garm area, in The Physical Basis of Foreshocks, Akad. Nauk SSSR, pp. 88-89, 1969.

147.  

See C.H. Scholz, L.R. Sykes, and Y.P. Aggarwal, Earthquake prediction: A physical basis, Science, 181, 803-810, 1973, for a description of the laboratory and field observations that led to the development of the dilatancy-diffusion model.

148.  

For a comprehensive description of the Haicheng prediction, see P. Molnar, T. Hanks, A. Nur, B. Raleigh, F. Wu, J. Savage, C. Scholz, H. Craig, R. Turner, and G. Bennett, Prediction of the Haicheng earthquake, Eos, Trans. Am. Geophys. Union, 58, 236-272, 1977.

149.  

At the time there were many articles in the popular and scientific press about the impending breakthroughs in prediction capabilities. For example, see F. Press, Earthquake prediction, Sci. Am., 232, 14-23, 1975.

150.  

National Research Council, Predicting Earthquakes: A Scientific and Technical Evaluation—With Implications for Society, National Academy Press, Washington, D.C., 62 pp., 1976.

151.  

C.R. Allen and D.V. Helmberger, Search for temporal changes in seismic velocities using large explosions in southern California, in Proceedings of the Conference on Tectonic Problems of the San Andreas Fault System, R.L. Kovach and A. Nur, eds., Stanford University Publications in Geological Science 13, Stanford, Calif., pp. 436-452, 1973.

152.  

J.R. Rice and J.W. Rudnicki, Earthquake precursory effects due to pore fluid stabilization of a weakening fault zone, J. Geophys. Res., 84, 2177-2193, 1979.

153.  

In the reauthorization of the NEHRP program in 1990, references to earthquake prediction as a goal of the program were specifically removed. See Office of Technology Assessment, Reducing Earthquake Losses, OTA-ETI-623, U.S. Government Printing Office, Washington, D.C., 162 pp., 1995.

154.  

J.H. Dieterich, Preseismic fault slip and earthquake prediction, J. Geophys. Res., 83, 3940-3948, 1978; J.R. Rice, Theory of precursory processes in the inception of earthquake rupture, Gerlands Beitr. Geophys., 88, 91-127, 1979.

155.  

J.J. Lienkaemper and W.H. Prescott, Historic surface slip along the San Andreas fault near Parkfield, California, J. Geophys. Res., 94, 17,647-17,670, 1989; T. Donalee, Historical vignettes of the 1881, 1901, 1922, 1934, and 1966 Parkfield earthquakes, in Parkfield; the Prediction … and the Promise, Earthquakes and Volcanoes, 20, 52-55, 1988.

156.  

Examples of current research on earthquake prediction techniques include the application of pattern recognition techniques (e.g., V.I. Keilis-Borok, L. Knopoff, I.M.
Rotwain, and C.R. Allen, Intermediate term prediction of occurrence time of strong earthquakes, Nature, 335, 690-694, 1988; M. Eneva and Y. Ben-Zion, Techniques and parameters to analyze seismicity patterns associated with large earthquakes, J. Geophys. Res., 102, 17,785-17,795, 1997), investigations of geochemical precursors (e.g., H. Wakita, Geochemical challenge to earthquake prediction, Proc. Natl. Acad. Sci., 93, 3781-3786, 1996), and the electromagnetic techniques associated with the VAN hypothesis. This last work has generated considerable controversy (e.g., R.J. Geller, ed., Debate on “VAN,” Geophys. Res. Lett., 23, 1291-1452, 1996). For a negative assessment of the history of earthquake prediction research, see R.J. Geller, Earthquake prediction: A critical review, Geophys. J. Int., 131, 425-450, 1997.

157.  

Western-style masonry construction had been introduced in Japan to reduce the fire hazard associated with traditional wooden structures.

158.  

J. Milne and W.K. Burton, The Great Earthquake in Japan, 1891, 2nd ed., Lane, Crawford & Co., Yokohama, Japan, 69 pp. + 30 plates, 1891.

159.  

State Earthquake Investigation Commission, The California Earthquake of April 18, 1906, Publication 87, vol. I, Carnegie Institution of Washington, pp. 365-366, 1908.

160.  

G.W. Housner, Historical view of earthquake engineering, Proceedings of the Eighth World Conference on Earthquake Engineering, San Francisco, pp. 25-39, 1984. Panetti specifically proposed that the first story should be built to withstand one-twelfth of its superposed weight and the second and third stories one-eighth of their superposed weights.

161.  

A. Whittaker, J. Moehle, and M. Higashino, Evolution of seismic design practice in Japan, Struct. Design Tall Build., 7, 93-111, 1998.

162.  

The January 1927 edition of the UBC was published by the Pacific Coast Building Officials Conference. The provisions in the 1927 UBC to increase the design forces for structures on poor soils were removed in 1949, only to be reintroduced in the 1974 UBC.

163.  

J.R. Freeman, Engineering data needed on earthquake motion for use in the design of earthquake-resisting structures, Bull. Seis. Soc. Am., 20, 67-87, 1930.

164.  

For a discussion of the early strong-motion program, see G.W. Housner, Connections, The EERI Oral History Series, Earthquake Engineering Research Institute, Oakland, Calif., pp. 67-88, 1997; and D.E. Hudson, ed., Proc. Golden Anniversary Workshop on Strong Motion Seismometry, University of Southern California, Los Angeles, March 30-31, 1983.

165.  

H. Cross, Analysis of continuous frames by distributing fixed end moments, Proc. Am. Soc. Civil Engineers, 56, 919-928, 1930; AIJ Standard for Structural Calculation of Reinforced Concrete Structures, Kenchiku Zasshi (Architectural Institute of Japan), 47, 62 pp., 1933.

166.  

M.A. Biot, A mechanical analyzer for the prediction of earthquake stresses, Bull. Seis. Soc. Am., 31, 151-171, 1940.

167.  

G.W. Housner and G.D. McCann, The analysis of strong-motion earthquake records with the electric analog computer, Bull. Seis. Soc. Am., 39, 47-56, 1949.

168.  

G.W. Housner, Characteristics of strong-motion earthquakes, Bull. Seis. Soc. Am., 37, 19-31, 1947.

169.  

G.W. Housner, R. Martel, and J.L. Alford, Spectrum analysis of strong motion earthquakes, Bull. Seis. Soc. Am., 42, 97-120, 1953.

170.  

National Research Council, The San Fernando Earthquake of February 9, 1971: Lessons from a Moderate Earthquake on the Fringe of a Densely Populated Region, National Academy Press, Washington, D.C., 24 pp., 1971.

171.  

High-frequency ground motions were analyzed by D.E. Hudson (Local distribution of strong earthquake ground motions, Bull. Seis. Soc. Am., 62, 1765-1786, 1972), and low-frequency motions by T.C. Hanks (Strong ground motion of the San Fernando, California earthquake: Ground displacements, Bull. Seis. Soc. Am., 65, 193-225, 1975).

172.  

G.W. Housner and M.D. Trifunac, Analysis of Accelerograms—Parkfield Earthquake, Bull. Seis. Soc. Am., 57, 1193-1220, 1967.

173.  

M.D. Trifunac and D.E. Hudson, Analysis of the Pacoima Dam Accelerogram—San Fernando, California Earthquake of 1971, Bull. Seis. Soc. Am., 61, 1393-1411, 1971.

174.  

T.C. Hanks and D.A. Johnson, Geophysical assessment of peak accelerations, Bull. Seis. Soc. Am., 66, 959-968, 1976.

175.  

Applied Technology Council, Tentative Provisions for the Development of Seismic Regulations for Buildings, Applied Technology Council Publications ATC-3-06, Palo Alto, Calif., 505 pp., 1978. These provisions served as the basis for the seismic provisions of the 1988 Uniform Building Code and the Federal Emergency Management Agency publication, NEHRP Recommended Provisions for Seismic Regulation for New Buildings, 1994 Edition, Building Seismic Safety Council, Federal Emergency Management Agency Report FEMA-222A (Provisions, 290 pp., 15 maps, 1990) and FEMA-223A (Commentary, 335 pp., 1995), Washington, D.C.

176.  

See Section 5.3.2 of Part 2—Commentary, NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, 1997 Edition, Building Seismic Safety Council, Federal Emergency Management Agency Report FEMA 303, Washington, D.C., February 1998.

177.  

L. Reiter, Earthquake Hazard Analysis; Issues and Insights, Columbia University Press, New York, 233 pp., 1990.

178.  

C.A. Cornell, Engineering seismic risk analysis, Bull. Seis. Soc. Am., 58, 1583-1606, 1968.

179.  

E.B. Roberts and F.P. Ulrich, Seismological activities of the U.S. Coast and Geodetic Survey in 1949, Bull. Seis. Soc. Am., 41, 205-220, 1949.

180.  

C.F. Richter, Seismic regionalization, Bull. Seis. Soc. Am., 49, 123-162, 1959.

181.  

S.T. Algermissen and D.M. Perkins, A Probabilistic Estimate of the Maximum Ground Acceleration in Rock in the Contiguous United States, U.S. Geological Survey Open File Report 76-416, Denver, Colo., 45 pp., 1976.
