Hazards, Land Use, and Environmental Change
ESSAY: A FRACTION OF THE EARTH'S SURFACE
The surface is the interface between the Sun-powered processes of erosion and deposition and the tectonic processes driven by the Earth's internal energy. Most of the surface lies at two general levels: nearly 60 percent is between 1 and 5 km below sea level, and about 25 percent lies within a kilometer above or below sea level (Figure 5.1). The remaining 15 percent is concentrated in tectonically active mountain belts, continental slopes, and oceanic trenches.
The most heavily populated part of the surface usually lies less than 1 km above sea level; most human activity is concentrated in areas not far above the sea. Mountainous areas are usually sparsely populated because they are less favorable to agriculture, construction, and transportation and are more susceptible to hazards; such areas tend to be used for specialized agriculture, mining, and recreation. Where large populations concentrate in or close to tectonically active mountainous regions—for example, in Armenia, Chile, Nepal, and California—residents face special problems in developing the land resource because of the risk of seismic, volcanic, and landslide hazards.
Just at sea level—within the zone affected by tidal and storm-generated oscillations—lies a biologically diverse region of environmentally sensitive terrains: marshes, swamps, tidal flats, and fens. Historically, such expanses were considered wastelands or were drained, filled, or surrounded by sea walls to increase their agricultural worth. Research conducted within the past 50 years has disclosed the inadvisability of such projects, which can disturb the natural breeding habitats of wildlife and the cleansing functions of the world's wetlands. Close study shows that wetlands constitute whole ecosystems supporting vast populations not only of waterfowl, reptiles, and amphibians but also of microscopic organisms. Coastal wetlands abound in unfamiliar fungal and bacterial species that perform the invaluable task of isolating and neutralizing the toxic compounds flushed through the system in the hydrologic cycle.
The shallow-water area of the continental shelf is recognized as essential to fisheries and as a source of significant oil and gas production. The most recent marine flooding of the continental shelves took place within the past 10,000 years, and the nature of this geologically transient environment is as yet
poorly understood. Research techniques such as advanced side-scanning sonar systems are aiding in the study of the continental shelves. If sea levels rise more rapidly over the next century, interest in the continental shelf environment will intensify, and the use of land surfaces below sea level may extend beyond the Netherlands, where it is now focused.
Great attention is given to sudden landform changes such as abrupt shifts of elevation during earthquakes, yet more money is spent each year in attempts to retard the slow changes of landform development than on mitigating the effects of sudden change. The incremental changes produced by erosion and deposition lead to soil loss; silting of reservoirs; and destructive transformations of hillslopes, rivers, and coastlines. Such evolution is natural and often inevitable. The activities of humankind have commonly accelerated the transformations, catapulting natural systems over thresholds and producing immediate environmental threats. The geomorphological processes affected by humans cover a staggering range of scale: through activities as seemingly local as livestock overgrazing—a significant component of regional desertification—humankind has become a major geomorphological agent. Proper understanding of progressive geomorphological changes can forestall precipitous transformations and prevent the loss of landform stability.
Landforms are not random features; they are the consequences of the interplay of constructive and destructive geological and hydrologic forces requiring careful study before they can safely be artificially modified. Our baselines for understanding processes at the surface are disturbingly short, although existing landscapes provide important information about the magnitudes and return frequencies for many natural processes. Only in the past century have detailed
observations been made concerning such features as flood intensities and frequencies, debris-flow distribution, subsidence, and landslides. Thus, predictions are based on recent to current conditions that are known to have been altered by human actions. Predictions of longer-term events, such as 1,000-year floods, are very unreliable because they must be based on models with uncertain numerical characteristics.
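The return-frequency reasoning above follows from the standard annual-exceedance model: a T-year flood has, by definition, a 1-in-T chance of being equaled or exceeded in any given year, so the probability of at least one occurrence in n years is 1 − (1 − 1/T)^n. A minimal sketch (the window lengths chosen here are illustrative):

```python
# Probability of at least one T-year event in an n-year window,
# assuming independent annual trials and a stationary record -- the
# very assumptions the text notes are undermined by human alteration.
def exceedance_prob(T, n):
    return 1.0 - (1.0 - 1.0 / T) ** n

p100 = exceedance_prob(100, 30)     # ~26%: a "100-year" flood in 30 years
p1000 = exceedance_prob(1000, 100)  # ~9.5%: a "1,000-year" flood in a century
print(f"100-yr flood in 30 yr:   {p100:.1%}")
print(f"1000-yr flood in 100 yr: {p1000:.1%}")
```

Even this simple calculation shows why short observational baselines are troublesome: a 1,000-year flood will usually be absent from a century of records, leaving its magnitude to be extrapolated from models.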
Landforms are sensitive to climatic change because the operating rates of the processes that mold landforms vary dramatically with climate. Fluvial processes dominate in sculpting landforms in semiarid regions, while wind processes are more significant in arid regions. Under very wet climatic conditions, landslides and downhill movements of surface rocks are dominant. Each of these geomorphic processes leaves distinctive evidence in the geological record. When climate changes, the dominant land-forming processes change in response. Threshold values of rainfall and temperature can be defined at which a change from the dominance of one land-forming process to another is likely to occur. It is therefore possible to forecast how agricultural regions might shift size and location in response to global warming and related changes in rainfall.
On an even longer time scale, low-lying coastal areas are subject to episodic flooding as sea level oscillates. Most remarkable are the paleo-landscapes locally exposed by erosion beneath extensive blankets of sedimentary rocks deposited on the continents during flooding episodes. Glacial valleys 450 million years old are evident at central Saharan sites now occupied by desert wadis, a 170-million-year-old sea stack lies fallen on a modern beach in Scotland, and a 450-million-year-old tropical beach can be visited on the outskirts of Quebec City. The long-term durability of low-lying continental surfaces—less than 1 km above sea level and less than 0.2 km below it—demonstrated by landscape exhumation can also be seen, on much shorter time scales, in the slow rates of erosion that characterize such flat areas.
The occurrence of these extensive areas of low relief, coupled with a suitable climate, makes regions such as the American Midwest prime land resources. The lush agricultural production of such regions can be maintained only if the landscape is treated with the same conservation ethic that inspires reverence for parks and wilderness areas. Prevention of soil erosion and respect for natural ecological balances in low-lying, low-relief regions can ensure their productivity far into the future. Geological characterization forms an essential basis for planned preservation and maintenance.
Human society exists in the biosphere, which thrives at the boundary layer between the solid-earth and its fluid envelopes, perched between the two engines of mantle convection and solar energy that drive geological processes. The biosphere, composed of chemical elements that are cycled among the reservoirs of the atmosphere, hydrosphere, crust, and mantle, also contributes to cycles of rapid chemical turnover and thus functions as a part of the geological processes. The more vigorous manifestations of those processes, however, regularly destroy the parts of the biosphere—and its human constructions—located in their paths. As society has increased its utilization of earth resources and expanded its population and area of colonization, the frequency of its encounters with vigorous geological processes has increased. Society has adopted a term, geological hazards, for these perfectly normal processes that began occurring long before humans arrived on the scene.
Many of the most tragic episodes in the natural history of humans have been related to geological hazards such as disastrous floods, earthquakes, sea waves, landslides, and volcanic eruptions. The fact is that most geological hazards can be avoided or mitigated through proper land-use planning, engineered design and construction practices, building of containment facilities such as dams, use
of preventive measures such as stabilization of landslides, and development of effective prediction and public warning systems. In many parts of the world such measures have already significantly reduced human suffering from geological hazards, although major challenges remain. To further mitigate these hardships, it is essential that a better fundamental understanding of each hazard-causing geological phenomenon be gained and widely disseminated.
Before the development of agriculture, the effects of human beings on the Earth were comparable to the effects of other species. But the onset of crop cultivation and animal domestication, with the subsequent growth of urban civilizations, introduced a new set of forces. Today, humans are changing basic earth processes in unprecedented ways and to unfamiliar degrees. At present, every person in the United States is responsible on average for the consumption of 16 metric tonnes—about 35,000 lb—of minerals and fossil fuels each year. This use does not include the tremendous volume of material moved during the construction of homes, parking lots, office buildings, factories, dams, highways, and other structures. On a worldwide basis, the human population uses nearly 50 billion metric tonnes of earth materials each year. This amount is more than three times the quantity of sediment transported to the sea by all the rivers of the world. Clearly, human beings have become a geological agent that must be taken into account in considering the workings of the earth system.
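The figures quoted above are easy to verify with back-of-envelope arithmetic (1 metric tonne = 2,204.62 lb); the sketch below simply restates the text's numbers:

```python
# Back-of-envelope check of the material-flow figures quoted above.
LB_PER_TONNE = 2204.62

per_capita_t = 16                     # tonnes per person per year (U.S.)
per_capita_lb = per_capita_t * LB_PER_TONNE
print(f"per capita: {per_capita_lb:,.0f} lb")   # ~35,274, "about 35,000 lb"

global_use_gt = 50   # billion tonnes of earth materials moved per year
ratio = 3            # "more than three times" the river sediment flux
implied_river_flux = global_use_gt / ratio      # upper bound, ~16.7 Gt/yr
print(f"implied river sediment flux: < {implied_river_flux:.1f} Gt/yr")
```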
The various materials moved by human society are perturbing not only the physical cycles of the Earth, by increasing mass transfer, but also the chemical cycles. The biogeochemical and geochemical cycles that convert elements into living creatures, and into ore deposits and other geological concentrates, now carry components never present before. The chemicals generated by manufacturing and the disposal of materials, including toxic compounds, occur in concentrations and combinations never before involved in natural systems. The consequences of such contamination are poorly understood.
Some earth systems operating at the boundaries of the geosphere, hydrosphere, atmosphere, and biosphere are very fragile, and every human effort toward survival or improvement of the human condition necessarily results in repercussions on those systems. We dispose of our wastes in the same sedimentary basins that supply us with the bulk of our groundwater, energy, and mineral resources. Through our social, industrial, and agricultural activities, we are changing the composition of the atmosphere, with potentially serious effects on climate and terrestrial and marine ecosystems. The human population is expanding into less habitable parts of the world—steeper mountainsides, more ephemeral deltaic and barrier islands—which increases vulnerability to natural hazards and strains the biological and geological systems that sustain life. In this sense, geological conditions control the quality of human life.
People all over the Earth are constantly moving from less economically viable rural areas to the crowded cities in hopes of achieving a better livelihood. Wastes produced in and around the cities further compromise the quality of surrounding lands, so there is a strong need for urban renewal—the recycling of crowded urban spaces. With the densification of urban living comes an increased demand on that recycled land: buildings and other structures for habitation, transport, or manufacturing are made taller and heavier, and they may be founded on sites that are less than optimal for resisting the physical stresses their presence induces. Geology is the main interconnecting element between the craft of civil engineering and the intricacies of nature. It is being applied to water quality, resource supply, waste isolation, and disaster mitigation in efforts to accommodate and reduce the adverse effects of societal growth.
If present trends continue, the integrity of the more fragile systems on which
human society is built cannot be assured. The time scale on which these systems might break down may be decades or it may be centuries. Human beings are unique among the influences on the Earth—we have the ability to foresee possible consequences of our activities, to devise alternative courses, to weigh the pros and cons of these alternatives, to make decisions, and then to behave accordingly.
Understanding the natural systems acting at the land surface presents a major scientific challenge, one with enormous social implications. Land resources, as basic elements of the global ecosystem, profoundly affect the lives of every human being on the face of the Earth. The loss of topsoil and forests, the loss of life and property caused by human-induced geomorphic change, and the pollution of air, soil, and water all carry growing consequences for national and international welfare and security.
Landforms are continually changing, but except for a few spectacular instances their change is so gradual that it is scarcely noted. As population increases, more people are exposed to the effects of processes that have been going on for hundreds of millions of years. In many cases the increase in human population contributes to instability of the physical landscape. When human populations are threatened by geomorphic processes, those processes become geomorphic hazards. Great attention is given to hazards representing abrupt changes, such as earthquakes and volcanic eruptions, but more damage is caused and more money has to be spent annually in attempts to retard ongoing hazards such as landslides, debris flows, and the normal slow progression of erosion and redeposition that leads to soil loss; reservoir infilling; and river, coastline, and hillslope changes.
The surface of the land is shaped by internal forces—folding and faulting with consequent elevation or subsidence—and by external forces—erosion by wind and water, driven by solar energy and gravity. Wind and water accomplish erosion by forcibly loosening, removing, and transporting solid material. That eroded material becomes the sediment deposited elsewhere. Erosion and sediment production result from the exposure of earth materials and from variations of climate, vegetation, and topographic relief. For materials of similar strength, natural sediment production reaches a maximum at about 33 cm of rainfall per year. Below that amount, lower runoff removes less material; above it, increased vegetation protects the soil, so erosion and sediment production decrease. Modern erosion rates can be very high, because both urban and agricultural development require removal of the original vegetation.
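The rise-then-fall of sediment yield with rainfall can be sketched with a simple unimodal curve. The function below is purely illustrative—not the empirical rainfall-sediment relation—chosen only so that its peak falls at the 33-cm figure cited above:

```python
import math

PEAK_CM = 33.0  # annual rainfall (cm) at which sediment yield peaks (from text)

def relative_sediment_yield(rain_cm):
    """Hypothetical unimodal curve, normalized to 1.0 at the peak:
    yield rises with runoff at low rainfall and falls as denser
    vegetation protects the soil at high rainfall."""
    x = rain_cm / PEAK_CM
    return x * math.exp(1.0 - x)  # maximum of x*e^(1-x) is at x = 1

for r in (10, 33, 60, 120):
    print(f"{r:4d} cm/yr -> relative sediment yield {relative_sediment_yield(r):.2f}")
```

Any function with this shape makes the same qualitative point: moving a landscape away from the semiarid peak, by either drying or wetting, reduces natural sediment production.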
Geomorphic hazards may involve a slow progressive change in a landform (Figure 5.2) that, although in no sense catastrophic, can become a significant hazard involving costly preventive and corrective measures. Three types of geomorphic hazards combine different spans of time, different degrees of damage, and different energy expenditures. The most obvious type is a sudden event that produces an abrupt change—a landslide caused by monsoonal rains, an earthquake, or human activity such as removal of toe support from a stable slope (Figure 5.3). A second type is progressive change that leads to an abrupt result, typified by weathering breakdown of soil or rock that initiates slope failure, gullying of a steepening alluvial fan, meander growth and cutoff, or stream channel shifting. The final type is progressive change that produces a slowly developing geomorphic hazard—for example, gradual hillslope erosion, gradual meander shift, channel incision, or channel enlargement. Misidentifying or wrongly estimating the pace of progressive change can lead to unnecessary protective or remedial expenditures. Accurate identification of impending problems attributable to slow progressive change can aid in the choice of remedial action and in prudent allocation of money toward engineered hazard prevention. Geomorphologists have developed quantitative methods for identifying potential hazard sites that suggest future change. When historical information is available, it may be possible to make a crude estimate of the timing of landform failure. Usually, however, too many influences operating on too fine a scale make it difficult to predict when a failure will occur.
Increased erosion of productive agricultural soils has grown into a serious problem, both for the farmer and for those downstream who must cope with siltation. Soil erosion is accelerated by a variety of agricultural practices, including cultivating slopes at too steep an angle and irrigating with too much water or water under pressure. Under certain circumstances, erosion can proceed so rapidly and over so wide an area that remote sensing techniques may be the best way to monitor it. In situations that require estimates of slow erosion rates from significant topographic features, radioactive isotopic analysis can help in establishing chronologies of erosion surfaces and stratigraphic horizons.
Historically, the greatest amount of erosion in the eastern United States and resulting sediment transport by streams probably occurred in the eighteenth century as agriculture first became widespread. Estuaries became severely silted at that time. In the nineteenth century the western United States experienced a similar development; the process was documented in the Colorado River Basin during the 1880s as extensive channel incision developed in tributary valleys and created the characteristic arroyos. Sediment samples taken since 1930 show a significant decrease in sediment loads, suggesting that the incised channels have reached a new state of relative stability. This new equilibrium may result from a combination of conditions: less material is being eroded, and more of the sediment that is produced is being trapped in the valleys, where it is deposited in newly developing floodplains. This trend has been enhanced by dam construction. The reservoirs behind the dams are filling with sediment, drawing attention to the long-term obsolescence of such facilities. In time the renewable resource of water for hydropower and agriculture may become severely compromised. On the lower Mississippi River, a 50 percent decrease in sediment transport has been associated with dam construction upstream on the Missouri River and bank stabilization elsewhere (see Figure 1.11). The Mississippi delta is being modified as the rate of sediment delivery decreases and the natural slow subsidence due to basin deformation and sediment compaction outruns sediment accumulation, permitting the sea to encroach onto the land.
Landslides and Debris Flows
In the 1970s, landslides—all categories of gravity-related slope failures in earth materials—caused nearly 600 deaths per year worldwide. About 90 percent of the deaths occurred in the circum-Pacific countries. Annual landslide losses in the United States, Japan, Italy, and India have been estimated at $1 billion or more for each country.
Landslide costs include direct and indirect losses affecting both public and private property (Figure 5.4). Direct costs can be defined as the costs of replacement, repair, or maintenance of damaged property or installations. An example of direct costs resulting from a single major event is the $200-million loss attributed to the 21-million-m3 landslide and debris flow at Thistle, Utah, in 1983. The slide severed three major transportation arteries—U.S. highways 6 and 89 and the main line of the Denver and Rio Grande Western Railroad—and the lake impounded when the slide dammed the Spanish Fork River inundated the town of Thistle, destroying businesses, homes, and railway switching yards. The indirect costs involved the cutoff of eastbound coal shipments along the railroad line. In 1983 oil was expensive and coal was crucial for generating electricity. With supplies from the west severed, eastern coal normally exported to Europe had to be rerouted. European industry, in turn, had to adjust to lowered supply. Ultimately, the landslide affected the international balance of payments.
Destructive landslides have been noted in European and Asian records for over three millennia. The oldest recorded landslides occurred in Henan Province in central China 3,700 years ago, when earthquake-induced landslides dammed the Yi and Lo rivers. Since then, slope failures have caused untold numbers of casualties and huge economic losses. In many countries, expenses related to landslides are immense and apparently growing. In addition to killing people, slope failures destroy or damage residential and industrial developments as well as agricultural and forest lands, and they eventually degrade the quality of water in rivers and streams. Landslides are often associated with other events: freeze-thaw episodes, torrential rains, floods, earthquakes, or volcanic activity. The bulging of the surface of Mount St. Helens over a rising magma body
led to a massive landslide—2.8 km3 of rock, the largest slide in recorded history. Loosened material slipped off the side of the growing bulge, unroofing the magma and permitting it to degas in a spectacular and locally disastrous eruption (Figure 5.5). The volcanic ash, dust, and pumice, mixed with rain and snowmelt, caused widespread debris flows in local valleys. A minor volcanic event high on the slopes of Nevado del Ruiz volcano in Colombia in 1985 melted enough glacial snow and ice to produce a debris flow that killed 25,000 people in the valley below. An earthquake off the coast of Peru in 1970 initiated a rockfall on Mt. Huascaran in the high Andes. The rockfall turned into a debris avalanche that moved at speeds approaching 300 km per hour and killed more than 20,000 people in the towns of Yungay and Ranrahirca.
Very large slides are also found on slopes below sea level. In the area around the Hawaiian Islands, recently discovered slide debris covers about 15,000 km2 and contains single blocks more than a kilometer thick that slid as much as 235 km from the shallower water. A major problem in the Gulf of Mexico is slumping, which can disrupt seafloor pipelines and the foundations of drilling platforms worth hundreds of millions of dollars. In Hawaii the debris is well-consolidated basaltic lava, while in the Gulf of Mexico it is unconsolidated, or at best semiconsolidated, clastic sediment. In the geological record, the boundaries of rock masses representing such major slides could easily be confused with the effects of tectonic faults; indeed, the distinction between the largest landslides and gravity-driven faults may be in the eye of the beholder.
Despite improvements in recognition, prediction, mitigative measures, and warning systems, worldwide landslide losses—of lives and property—are increasing, and the trend is expected to continue into the twenty-first century. Some of the causes for this increase are continued deforestation, possible increased regional precipitation due to short-term changing climate patterns, and, most important, increased human population.
Demographic projections estimate that by 2025 the world's population will number more than 8 billion people. The urban population will increase to 5.1 billion—more than the total number of humans alive today (Figure 5.6). In the United
States the land areas of the 142 cities with populations greater than 100,000 increased by 19 percent in the 15-year period from 1970 to 1985. By the year 2000, 363,000 km2 in the conterminous United States will have been paved or built on. This is an area about the size of the state of Montana. Accommodation of this population pressure will call for large volumes of geological materials in the construction of buildings, transportation routes, mines and quarries, dams and reservoirs, canals, and communication systems. All of these activities can contribute to the increase of damaging slope failures. In other countries, particularly developing nations, the urbanization pattern is being repeated but often without adequate land planning, zoning, or engineering. Not only do development projects draw people, but the projects themselves as well as the people who settle the surrounding area often occupy just those hillside slopes that are susceptible to sliding. At present, there is no organized program to provide the geological studies that could prevent the worst scenarios posed by this threat.
To reduce landslide losses, research efforts should encompass more than investigations of physical processes in hazardous areas aimed at understanding the nature of slope movement. Earth scientists also need to perfect methods for identifying areas at risk and for mitigating contributory factors. These goals are attainable. Scientists can predict areas at risk and advise means to avoid or moderate danger, but much of the research needed has yet to be done.
For the past half century, geologists have relied primarily on aerial photography and field studies—ideally in combination—for identification of vulnerable slopes and recognition of landslides. In recent years, as multispectral satellite coverage has become available for much of the world, an additional tool has emerged that can provide images in black and white or color as well as in spectral bands spanning red, green, and near-infrared wavelengths. The coverage, scale, and quality of multispectral imagery are expected to improve considerably within the next decades and to provide valuable information leading to improved identification of landslide-prone locations.
The information gathered from satellite reconnaissance can contribute to the growing store of data used in geographic information systems—digital systems for mapping spatial distributions—which can be applied to the preparation of landslide susceptibility maps (Figure 5.7). These modern data-handling systems facilitate both pattern recognition and model building. Patterns and models that suggest landslide susceptibility can be tested, revised and improved, and then tested again against large numbers of observations.
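A susceptibility map of this kind reduces, at its simplest, to a weighted overlay of gridded factor layers. The sketch below is a toy version with invented rasters and weights; an operational GIS model would calibrate the weights against a mapped landslide inventory rather than assume them:

```python
# Toy 3x3 factor "rasters", each cell scaled 0-1 (all values invented).
slope     = [[0.1, 0.4, 0.9], [0.2, 0.5, 0.8], [0.1, 0.3, 0.7]]
wetness   = [[0.2, 0.5, 0.8], [0.3, 0.6, 0.9], [0.2, 0.4, 0.6]]
weak_rock = [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0], [0.0, 0.5, 1.0]]

# Hypothetical weights, assumed for illustration.
W_SLOPE, W_WET, W_ROCK = 0.5, 0.3, 0.2

def classify(score):
    """Bin a 0-1 susceptibility score for map display."""
    return "high" if score > 0.66 else "moderate" if score > 0.33 else "low"

susceptibility = [
    [W_SLOPE * s + W_WET * w + W_ROCK * r
     for s, w, r in zip(s_row, w_row, r_row)]
    for s_row, w_row, r_row in zip(slope, wetness, weak_rock)]

for row in susceptibility:
    print([classify(v) for v in row])
```

The test-revise-test cycle described above amounts to adjusting the factor layers and weights until the predicted "high" cells match the observed distribution of past failures.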
As a result of these gains in knowledge, progress has been made in determining appropriate types of landslide mitigation. The most traditional mitigation technique is avoidance: keep away from areas at risk. When occupation of a site warrants the risk, engineered control structures may be required, including surface-water diversions and subsurface drains, restraining structures such as walls and buttresses, and devices such as rock bolts. The establishment and enforcement of site grading codes calling for appropriate slope stabilization—instituted by Los Angeles County in 1952—have worked well. Cut-and-fill grading techniques involving the removal of material from the slope head, regrading of uneven slopes, and hillslope benching are all of proven value. Consideration of such factors has had a major effect on reducing landslide losses in the United States, Canada, the European nations, the former Soviet Union, Japan, China, and other countries. Landslide research today is focused not so much on locating landslides and identifying the hazards they represent as on figuring out how to cope with those potential hazards. This is an area of close cooperation between solid-earth scientists and geotechnical engineers.
Mitigation has also benefitted from substantial progress in the development of physical warning systems for impending landslides. Significant improvement will result as better instrumentation and communication systems are developed. Of particular importance will be continuing advances in computer technology and satellite communications. Hazard-interaction problems require a shift in perspective from the incrementalism of individual hazards to a broader systems approach. Earth scientists, engineers, land-use planners, and public officials are becoming aware of interactive natural hazards that occur simultaneously or in sequence and that produce cumulative effects that differ from those of their component hazards acting separately. In the case of landslides, research is particularly needed on cause-and-effect relationships with other geological hazards. For example, in the 1991 Mount Pinatubo (Philippines) eruption, the thick accumulating ash-fall and ash-flow deposits proved particularly liable to generate landslides and debris flows during typhoons. Research on the social aspects of such relationships in terms of warning systems and emergency services is necessary. At what point should people evacuate and abandon their little piece of the Earth?
Human intervention can reduce landslide risk by influencing some contributory causes. Projects that undermine slopes in marginal equilibrium or destabilize susceptible areas by quick drawdown of reservoirs can be avoided. Among projects that can lay the groundwork for disastrous landslides are road building, mining, fluid injection, and building construction that entails clearing vegetation. Planning and designing such projects with the local landslide potential in mind is absolutely essential. While these activities may not individually cause a landslide, they can increase the likelihood of slope failure as preconditions to which cloudbursts or earthquakes are added. Wherever hillsides receive precipitation over days and weeks, the pore-water pressure can build in rock fractures and decrease bulk shear strength, which can then induce displacements under less force than would be needed to shear a drier
material. A proven mitigation technique in such cases is for geologists to locate the water surface in fractured rocks and drain off destabilizing water by drilling horizontal wells.
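The destabilizing role of pore-water pressure can be illustrated with the textbook infinite-slope model and the Mohr-Coulomb effective-stress criterion. All parameter values below are invented for illustration; this is a sketch of the principle, not a design calculation:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety with Mohr-Coulomb strength.
    c: effective cohesion (kPa); phi_deg: friction angle (degrees);
    gamma: unit weight (kN/m3); z: depth to slip surface (m);
    beta_deg: slope angle (degrees); u: pore-water pressure (kPa)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    effective_normal = gamma * z * math.cos(beta) ** 2 - u
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return (c + effective_normal * math.tan(phi)) / driving

# A hypothetical 30-degree slope with a slip surface 3 m down:
drained = factor_of_safety(c=5, phi_deg=35, gamma=20, z=3, beta_deg=30, u=0)
wet = factor_of_safety(c=5, phi_deg=35, gamma=20, z=3, beta_deg=30, u=20)
print(f"drained FS = {drained:.2f}, wet FS = {wet:.2f}")  # ~1.41 vs ~0.87
```

The same slope that is stable when drained (factor of safety above 1) fails when sustained rain raises the pore pressure, which is exactly why horizontal drainage wells work.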
Then there are the regional-scale contributory causes of increased landslide susceptibility such as deforestation. According to the World Resources Institute, approximately 109,000 km2 of tropical forest is being destroyed annually—an area the size of Ohio. Removal of the forest cover increases flooding, erosion, and landslide activity. This deforestation is causing serious landslide problems in many developing countries.
Over a period of about 3 years in the 1980s, during the course of an El Niño episode, regional weather changes in the western United States resulted in much heavier than average precipitation in mountainous areas. That increased precipitation caused a tremendous increase in landslide activity in California, Nevada, Utah, Colorado, Washington, and Oregon. Scientists are coming to understand such cycles through integration of collected data with information found in the historical and geological records. Cycles such as El Niño form the background variation of the climate pattern. But earth scientists do not know what to expect with additional perturbation from a changing greenhouse effect. Will the predicted temperature increase cause a decrease in precipitation, as occurred in mid-America during the summer of 1988? Will it increase storm activity throughout the mid-latitudes? Will it disrupt global climatic patterns, resulting in droughts in some areas and increased precipitation in others? Documented cause-and-effect sequences such as those related to El Niño episodes suggest that if areas prone to landslides are subjected to heavier than normal precipitation, susceptible slopes are likely to fail.
Land subsidence is currently observable in at least 45 states and is estimated to cost the nation more than $125 million annually. Subsidence can have human-induced or natural causes; both are costly. In the United States at least 44,000 km2 of land has been affected by subsidence attributed to human activity, and the figure is probably higher. As for natural subsidence, one event—the 1964 Alaskan earthquake—caused an area of more than 150,000 km2 to subside as much as 2.3 m. This event was extreme but not atypical of past or probable future disturbances.
The causes of subsidence are various but well known, and the hazard presented is well recognized; however, the indications of specific imminent danger and the possible cures or preventives are not clear. Subsidence can be induced by withdrawing subsurface support—by removing water, hydrocarbons, or rock without a compensating replacement. In many instances, oil or water is removed from porous host sediments that compact as the interstitial fluid is removed. In such cases, collapse is slow and gentle. More dangerous are situations that leave voids—withdrawal of water from cavernous limestone or mining of coal, salt, or metals. These voids can collapse gradually or suddenly. Not all subsidence is unexpected—ground over longwall coal mines is supposed to subside gradually to a new elevation that is both safe and stable. The ground surface above subsiding land is not a good place for a shopping center or school building, but it may be quite suitable for crops or recreation.
Natural subsidence occurs for several reasons. A basin surface may warp downward in response to recent sediment loading or by dewatering and compaction of sediments; both processes presently affect the Mississippi River delta. Tracts may subside because of folding or faulting, as in the Alaskan example above. Regions such as the Texas Gulf coast are triply vulnerable because they overlie a downward-flexing part of the crust; are above a thick pile—up to 10 km deep—of compacting sediments; and are being mined for groundwater and petroleum, which accelerates deflation of the sedimentary pile.
The primary cause of the most common, and most dangerous, subsidence in the United States is groundwater extraction through water supply wells. In California's San Joaquin Valley, 13,500 km2 of land surface has sunk as much as 9 m in the past 50 years because of removal of groundwater for irrigation. The danger, of course, is in more populated areas, especially those close to sea level. Some cities that are already struggling because of groundwater extraction include Houston-Galveston, Texas; Sacramento and Santa Clara, California; and Baton Rouge and New Orleans, Louisiana. The problem of induced subsidence is international and also threatens London, Bangkok, Mexico City, and Venice. If sea level continues to rise, the cities that are literally on the edge now will be fighting to stay above water.
Sinkholes, another common source of land collapse, can occur unexpectedly on a more local scale than wholesale subsidence. Sinkhole collapse generally results from the slumping of poorly consolidated surficial material into underground caverns.
Collapse over underground caverns can usually be attributed to a recent lowering of the groundwater level and consequent loss of pore-water pressure. As long as a cavern is filled with water, the material covering it receives enough support to stay in place. Actual collapse of bedrock caverns is less common. Sinkhole collapses are rapid but very local and can be either natural or human induced. Similar events result from the dewatering of abandoned mines that penetrate close to the surface. Ground-penetrating radar is used to identify shallow caverns, and seismic tomography has promise for evaluating the stability of mine pillars—volumes of unmined rock left to support the roof over adjacent mined areas.
Subsidence in one region, as a consequence of loading by former ice sheets, may be accompanied by compensating elevation in another region. The most recent glaciation of North America isostatically depressed the parts of the continent covered by ice—4.0 km thick in places—and caused upward bulging of land along the margins of the depression. Following deglaciation, the process reversed, and today the area once covered by ice is slowly rising at rates as large as about 3 mm/year as the glacial forebulge subsides. This process is tilting the Great Lakes region, and in a few thousand years it will divert Great Lakes drainage from the Niagara River through Chicago to the Illinois River, drying up Niagara Falls. In the meantime, some of this uplift may play a part in inducing small-magnitude earthquakes in the north central and New England states.
Any relatively low-lying area is subject to flood. Hurricanes and typhoons, tidal surges, and tsunamis can deliver too much water from the direction of the ocean. Snowmelt and ice dams, cloudbursts, and prolonged rainstorms can deliver too much water from inland areas. Some of the most frightening floods descend on mountain settlements when their watersheds receive cloudburst rain, and flash floods completely scour out valleys. In canyons of the mountainous western United States, warning signs read, "In case of flash flood, climb straight up." In the United States, rainstorms and their accompanying flooding and debris flows accounted for 337 of the 531 federally declared disaster areas from 1965 to 1985. Human activities also cause or contribute to flooding—one of the most tragic floods struck Johnstown, Pennsylvania, in 1889, when a dam failed and 2,200 people were killed. Urbanization augments flooding: studies revealed that in Houston, Texas, the creation of impervious surfaces increased the magnitude of the 2-year flood ninefold. Paving and stream channelization remove water quickly from one area, but they deliver more water more quickly to other areas. Factions argue about responsibility for the flood that struck Rapid City, South Dakota, in June 1972. That flood, which killed 245 people and caused $200 million in damage, followed an exceptional rainfall that was preceded by cloud-seeding efforts.
Floods can be terrible, but they are also accepted as few other hazards are. Humans have come to realize that access to transportation, fresh water, and rich alluvial soils is the reward for surviving floods. Long experience with floods has led to flood management practices. Because different management theories evolved in different environments, policy makers cannot, and perhaps should not, agree on standardized practices. Methods of management include land-use regulations; structural measures such as dams, levees, and floodwalls; land-treatment measures such as reforestation or terracing of stream banks; zoning ordinances and building codes; and warning systems.
Floods are not discussed at length here because they are considered in the 1991 National Research Council (NRC) report Opportunities in the Hydrologic Sciences, in which specific research activities such as short-term forecasting are identified. In that report emphasis is placed on the responsibility for reduction of flood loss that lies with public policy makers who can regulate development in flood plains. Estimating the risk in flood plains depends on knowing the probability of a repetition of events of a particular magnitude. In areas such as the western states where the historical record is short, this is not easy. Quaternary geologists are able to use the geological record to estimate both the timing and the scale of events in an area prior to written history.
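The estimation problem can be made concrete with a small sketch: given a short gauge record of annual peak discharges, the Weibull plotting position assigns each ranked peak an empirical return period, whose reciprocal is the annual exceedance probability. The discharge values below are invented for illustration; a record this short is exactly why the geological record of paleofloods is so valuable.

```python
def return_periods(annual_peaks):
    """Weibull plotting position: T = (n + 1) / rank for ranked annual peaks.

    Returns (discharge, return period in years) from largest to smallest.
    """
    ranked = sorted(annual_peaks, reverse=True)
    n = len(ranked)
    return [(flow, (n + 1) / rank) for rank, flow in enumerate(ranked, start=1)]

# Hypothetical annual peak discharges (m^3/s) for a 10-year gauge record
peaks = [120, 95, 310, 150, 80, 210, 130, 60, 175, 240]
for flow, T in return_periods(peaks):
    print(f"{flow:5d} m^3/s  ~{T:.1f}-yr event  (annual exceedance ~{1/T:.0%})")
```

Note that a 10-year record can assign at most an 11-year return period to its largest flood; rarer, larger events simply do not appear, which is the gap Quaternary geology helps to fill.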
The coastline is a major battleground in the competition between the forces of deposition and erosion. It is also a region of concentrated human activities. Thus, coastal processes constantly affect populations and have long been the subject of conjecture. The causes of coastline fluctuations are numerous; they result from both internal and external earth processes.
One major factor is the uplift or subsidence of the coastal regions caused by tectonic forces. Most of the west coast of the United States is currently rising and has been for the past half-million years.
Uplift stages are typically marked by successively elevated terraces. Each terrace was once a wave-cut bench below a sea cliff, marking a brief pause in a steadily rising coastline. The coastal terrain is riddled with landslides, an outcome accentuated by occasional earthquake shaking and by infrequent cloudbursts. The problem is apparent, but identification of the most vulnerable slopes requires the careful scrutiny of engineering geologists.
The present site of the southeastern seashore of the United States stood well above sea level during the most recent continental glaciations. Rivers entering the sea did so farther to the east, through valleys incised into the coastal plain. As the ice sheets melted, the sea level rose and flooded the valleys, forming estuaries such as the Chesapeake Bay. The Patapsco, Potomac, and James rivers were all once tributaries of a Susquehanna River that flowed out past the present bay and formed a delta on what is now the continental shelf. As drastic as such changes might seem, they are relatively recent, having occurred over the past 10,000 years, and have cycled back and forth as ice sheets waxed and waned repeatedly over the past few hundreds of thousands of years.
Within the past 10,000 years of high sea level stand, currents and wave action along the shores have piled up barrier beaches such as those near Cape Hatteras, protecting shallow lagoons on the landward side. These are ephemeral and fragile creations, even without human dredging, building, and destruction of the vegetative cover. The lagoons and barrier islands record a complex history of deposition, erosion, and redeposition. A 1-m variation in sea level will change the sea-land interface substantially. In areas such as the Texas Gulf coast, where a thick section of sediments is gradually consolidating, scientists anticipate that shoreline features will be severely affected, probably with major losses of valuable property (Figure 5.8). Proposals to recover energy from geopressured fluids from southeast Texas aquifers would almost surely result in further subsidence of the surface and encroachment of the gulf on present-day shores.
As rivers erode the land, deltaic and estuarine environments become the first sites of major sediment deposition—except for constructed reservoirs and rare natural lakes. These brackish tidal waters are nursery grounds for many marine organisms as well as invitations for the establishment of human populations because of access to those nursery grounds, to transportation routes, and to freshwater sources just up river. Many, if not most, of the major estuaries of the country have become contaminated by natural or human-introduced pollutants. Boston Harbor is, unfortunately, an example of a contaminated estuary. The fish populations have declined to the point of disappearance, and the beaches are nearly deserted throughout the summer because of pollution. Chesapeake Bay pollution, first detected by the Public Health Service before World War I, has contributed to a decline of striped bass. Pollution of estuaries has an extremely deleterious effect on fish spawning, and encroachment often leads the way in destroying the estuarine habitat, the eventual result being diminution of potential food supplies.
Deltas, barrier islands, lagoons, tidal flats, and estuaries support a number of dynamic physical, chemical, and biological processes that operate in complex association. Although scientific understanding encourages informed management, only a few of the world's populated coastal environments have been studied in the required detail. The United States has engaged in a major effort in coastal-zone planning, but there has been no systematic effort at data collection to determine either baseline information or trends in essentials such as estuarine quality. Study of estuaries and other coastal environments should prove most productive when approached by interdisciplinary teams that can appreciate the complexities of the physical, chemical, and biological processes. The sensitivity and significance of coastal processes warrant a systematic integration of biological, sedimentological, hydrological, and geochemical data collection and analysis. Such an effort should result in a product that will inform decision makers in the near future.
Hazards such as landslides, subsidence, and floods are often exacerbated by human activities. But they are also often triggered by violent tectonic upheavals—earthquakes or volcanoes—or by the tsunamis that earthquakes can spawn. There is nothing that society can do at present to prevent tectonic upheavals, but there are methods to monitor, predict, mitigate, and avoid potential tectonic disasters (Figure 5.9). In the United States, progress in these strategies has involved many geologists over the past 20 years because of the need to protect population centers from threats such as the 1980 Mount St. Helens eruption and the 1989 Loma Prieta earthquake. Scientists and policy makers know that
major natural disasters can be forestalled only by continuing research into the nature of such upheavals; responsible planning of engineered works and land-use facilities in areas at risk; and methods for warning, evacuating, and providing emergency relief. Only cautious, realistic planning can prevent overwhelming tragedies such as when hundreds of thousands of people were killed by the 1976 earthquake in Tangshan, China, or 3,000 were killed in the Philippines by a tsunami only 3 weeks later. The more knowledge that scientists gain about the causes and effects of earthquakes, the more able they will be to predict and plan for all these potential disasters. For example, the seismological techniques used to monitor earthquake zones can be used to monitor potential volcanic activity and potential tsunami threat. And assessment of earthquake, volcano, and tsunami hazard potential can help planners predict dangers from the associated landslides and floods.
Earthquake hazard evaluation involves determinations of the specific location, frequency of occurrence, and intensity of energy release—which, in turn, require characterization of the space, time, and size distribution of the earthquakes that give rise to the hazard (Figure 5.10). Decision making in the face of serious earthquake threat requires that the hazards and risks be neither overestimated nor underestimated because of the great consequences for life, safety, and economic security. Administrators cannot expect citizens to abandon their daily lives for anything but imminent danger. Therefore, earth scientists have an obligation to acquire relevant data and pursue research aimed at reliable depiction of an earthquake threat.
Quantitative experience from many earthquakes
provides an accurate and useful understanding of seismic hazard parameters: the nature of the earthquake source, the maximum size of the resulting seismic waves, the characteristics of those waves, and the response of the ground surface to their energy. The study of these parameters involves diverse disciplines, ranging from sedimentology and seismology to geotechnical and civil engineering. To arrive at a realistic picture of the damage that may result from an earthquake, decision makers must apply scientific knowledge of the seismic hazard to the specific characteristics of engineered works. Only then can they estimate the seismic risk—the threat that earthquakes present to human lives and property.
Great concentrations of population have settled close to major plate boundaries. Of those cities whose populations are projected to exceed 2 million by the year 2000, 40 percent are within the 200-km earthquake shock radius of a plate boundary zone. While most deadly earthquakes are related to plate boundaries, those boundaries are not always sharp, and the shallow dip of subduction zones broadens the region of potential damage. Studies of seismicity and active deformation have shown that the regions affected by plate interactions can be large. For example, the entire belt of mountains formed by the Indian-Asian collision stretches over an area of more than 6 million km2. And the effects of the diffuse plate boundary in the western United States extend over 1,000 km into the continent to the active Wasatch Fault (Figure 5.11), which passes through Salt Lake City. The distribution of earthquakes in the United States reveals that broad zones can be involved in plate boundary deformation. In the western states occupying these zones, many faults underlie urban areas, where earthquakes of magnitude 6 or larger pose serious threats. Some of these dormant faults are concealed by basin sediments or overlying rock that responds to tectonic forces by folding rather than faulting; such landscapes may provide few visible geomorphic clues about potential hazard. And while these faults seem to approach the surface at high angles, geophysical surveys find correlations with regionally extensive reflection surfaces that are nearly horizontal and relatively deep within the crust. Analysis of geophysical data suggests that the faults are curvilinear in cross section, with one arm angling toward the surface and the other flattening to underlie extensive areas. Possibly these listric faults may act as earthquake sources, although no significant earthquakes have so far been demonstrated to have occurred on the flat-lying segments of such faults.
With the continuing development and empirical refinement of plate tectonic theory, a first approximation of potential earthquake source location can be estimated reliably from the regional tectonic setting. Recent studies have shown that the total seismic energy released in plate boundary regions is more than 99 percent of the total worldwide seismic energy release. The problem is that most plate boundary regions have earthquakes quite frequently, which relieves stress incrementally, while the 1 percent of seismic energy released in intraplate events occurs only occasionally. Those singular events may therefore be extremely violent spasms. Nevertheless, the basic plate tectonic setting provides invaluable clues. One example is the Cascadia subduction zone in the Pacific Northwest, which marks the boundary between the Juan de Fuca and North American plates. Historically, this part of the plate interface zone has been seismically quiescent. But the possibility of powerful earthquakes occurring along this zone, which includes the cities of Vancouver, Seattle, and Portland, is suggested on the basis of its plate tectonic setting as well as neotectonic geological studies that indicate the occurrence of major subduction-zone earthquakes here within the recent prehistorical past. Earthquakes comparable in size to the 1964 Alaskan earthquake—which devastated Anchorage—may well occur along this coastline of the Pacific Northwest.
Intraplate earthquakes can pose a threat comparable to plate boundary events, as shown by the three very large earthquakes that rocked the New Madrid, Missouri, region in late 1811 and early 1812. Those earthquakes drained swamps, altered reaches of the Mississippi River, and created new lakes in parts of Tennessee. Chimneys toppled and stone walls cracked in St. Louis, Louisville, and even Cincinnati; church bells rang in Washington, D.C. Another major destructive intraplate event struck Charleston, South Carolina, in 1886. The considerable increase in population density throughout the eastern United States since these great intraplate earthquakes guarantees that comparable events would devastate communities in the eastern two-thirds of the country.
A valuable tool for identification of earthquake sources is the study of earthquakes described in the historical record. Modern seismometers have been useful only since the end of the nineteenth century, but thousands of earthquakes that predate such instrumentation are well documented in the historical literature. Systematic assessment of historical descriptions provides estimates of the sizes and locations of earthquakes in China, the Mediterranean, and the Middle East. Earthquakes have been cataloged in these regions of the world for a few
thousand years—intervals long enough to embrace rare larger events that recent instrumental data often do not include. Clearly, these historical events can indicate earthquake sources that might not otherwise be evident. The extensive historical catalogs may also provide indications of short-term fluctuations in the locations, sizes, and rates of earthquake activity that have not left evidence in the much longer geological record. The Chinese historical record bears witness to apparent spatial and temporal cycles having periods of a few hundred years. Such seismic cycles may also occur in Japan with regular intervals that range from 100 to 200 years. But there are no cases of instrumentally recorded data covering periods long enough to provide any real understanding of seismic cycles—or even to offer any certainty in predicting earthquake recurrence.
Major advances have been made in identifying prehistorical earthquake sources on the basis of geological field studies. Displacements along active faults can be established by combining the historical record with stratigraphy, geomorphic analyses, and age-determination techniques in the new subdiscipline of paleoseismology. A good example of how the new techniques have given insight into the
long-term seismic record is provided by studies of that segment of the San Andreas Fault in California that broke in 1857 (Figure 5.12). The segment currently is seismologically very quiet, but paleoseismological evidence indicates repeated large prehistorical displacements, which presumably generated large earthquakes. The average intervals between these large paleoearthquakes taken together with the time of the latest event in 1857 are used by geologists to make statistical extrapolations as to how likely similar events may be in the near future.
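One simple way such extrapolations can be framed is a Poisson recurrence model, in which the probability of at least one event within a forecast window follows from the average recurrence interval. The sketch below is illustrative only; the recurrence interval and window are hypothetical round numbers, not the actual figures for the 1857 segment.

```python
import math

def poisson_prob(mean_recurrence_yr, window_yr):
    """P(at least one event in the window), assuming Poisson occurrence
    with rate 1 / mean_recurrence_yr events per year."""
    return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

# Hypothetical numbers: paleoseismic average recurrence of 150 years,
# asked over a 30-year planning window
p = poisson_prob(150.0, 30.0)
print(f"30-yr probability: {p:.2f}")
```

Real forecasts are more elaborate—they condition on the time elapsed since the last event and use renewal models rather than the memoryless Poisson assumption—but the arithmetic of turning an average interval into a window probability is the same in spirit.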
Relatively large, damaging earthquakes are sometimes caused by ruptures on faults that do not break through the surface, as illustrated by the Coalinga and Whittier Narrows earthquakes in California. Investigation at such localities, however, shows that these events do in fact leave their marks in the geological record. Measurement of changes in the shape of the surface provides geodetic evidence of continuing folding, such as slope steepening, and exposure of cross sections can reveal evidence of strong ground shaking, such as liquefaction—the transformation of a saturated soil into a fluid mass—and of continuing, long-term, centimeter-scale deformation within deposits overlying the faults. Detailed investigations using field geology techniques are essential for locating buried seismogenic faults.
The largest earthquake that a particular seismic source is capable of generating occurs very infrequently, so that the historical record of seismicity at any locality probably does not include the largest
earthquake that ever occurred at that locality during the past few thousand or tens of thousands of years. Consequently, estimates of maximum magnitude, or energy release, for seismic sources are based on the geological record. During individual earthquakes, faults typically do not rupture over the entire fault length; instead they slip over only a few segments of the entire length. Total rupture lengths increase with earthquake magnitude. Similarly, fault rupture area—the size of the plane that has ripped loose—increases with magnitude. The lengths and areas of old ruptured segments (see Figure 5.10) place constraints on the maximum sizes of earthquakes that might reasonably be expected at that location. Current work is aimed at identifying physical, geometric, and fault behavioral characteristics that are diagnostic of individual fault segments. Detailed geological studies along the length of whole fault systems help to identify segments at risk of future ruptures. The problem comes in estimating these parameters for a given fault before the occurrence of the maximum event. A wide variety of geological studies—geophysical, seismological, and geomorphic—aid in the estimation of parameters related to magnitude. Methods used to gather information include interpretation of geometric constraints determined from geophysical analysis, investigation of fault-scarp dimensions as paleoseismic indicators, and integration of stratigraphic relationships in exploratory trenches excavated across the fault (Figures 5.12 and 5.13). All of these approaches may, in combination, provide a clearer picture of the sizes and dates of paleoearthquakes exhibited in the geological record and help to predict whether a large earthquake is imminent.
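The magnitude-rupture-length scaling described above is commonly expressed as an empirical regression of the form M = a + b·log10(L). The sketch below uses illustrative coefficients of the order found in published regressions on historical surface ruptures; they are assumptions for demonstration, not values taken from this report.

```python
import math

def magnitude_from_rupture_length(length_km, a=5.08, b=1.16):
    """Empirical scaling M = a + b*log10(L).

    Coefficients a and b are illustrative placeholders of the kind
    obtained by regressing magnitude on surface rupture length.
    """
    return a + b * math.log10(length_km)

# How constraining a mapped paleorupture length on maximum magnitude works:
for L in (10, 50, 300):
    print(f"{L:4d} km rupture -> M ~{magnitude_from_rupture_length(L):.1f}")
```

Because magnitude grows with the logarithm of rupture length, mapping the lengths of old ruptured segments bounds the maximum credible earthquake fairly tightly even when the length estimate itself is rough.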
The first known instrument for earthquake detection was invented nearly nineteen hundred years ago by a Chinese mathematician and astronomer. It was designed with eight dragons poised around the circumference of a hollow globe, each with an unattached ball perched in its mouth and an open-mouthed toad waiting below. An earthquake of even the slightest magnitude would allow a ball to fall from the mouth of a dragon into the mouth of a toad, setting off an alarm. The earthquake was thought to have originated in the direction of the empty-mouthed dragon. For centuries afterward, however, reliable earthquake detection remained an elusive goal for researchers in Asia and Europe.
In the late nineteenth century, advances in both theory and engineering led to the development of the first seismometers—instruments that could reliably detect and record earth tremors. Modern seismometers continually monitor the Earth's vibrations and, with the aid of magnification equipment, produce a sensitive record of those vibrations. When an earthquake strikes anywhere on the globe, seismometers distributed over the surface respond as they receive its body waves and surface waves. There are two types of body waves, both of which travel through the Earth: compressional—similar to the push and pull of a punch—and shear—similar to the writhing of a whiplash. Surface waves travel in a complex array and cause most of the damage. To locate the source of an earthquake—both its focus deep within the lithosphere and its epicenter on the surface above the focus—seismologists need at least three seismological readings of body-wave onset. They calculate the difference in arrival times between the compressional waves and the shear waves, which indicates the distance through the Earth from the seismometer to the focus; they can then pinpoint the fault rupture through triangulation. The techniques developed for recording natural earth shaking have spawned an independent field of inquiry: reflection seismology, a geophysical technique that induces mechanical shock waves in the Earth, records their reflections, and produces images from those reflections that enhance understanding of interior structure. The technology has come full circle as an essential tool for studying earthquake characteristics.
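The arrival-time calculation can be sketched directly: if compressional (P) waves travel at velocity vp and shear (S) waves at vs, a focus at distance d produces an S-minus-P delay of d/vs − d/vp, which can be inverted for d. The velocities below are typical crustal values assumed for illustration; real locations use velocity models that vary with depth.

```python
def distance_from_sp_time(sp_seconds, vp=6.0, vs=3.5):
    """Distance to the focus from the S-minus-P arrival-time difference.

    From d/vs - d/vp = dt, solve d = dt * vp * vs / (vp - vs).
    vp and vs (km/s) are assumed constant—an idealization of the crust.
    """
    return sp_seconds * vp * vs / (vp - vs)

# One station gives a distance; three such distances permit triangulation
# of the focus, as described above.
print(f"S-P of 10 s -> ~{distance_from_sp_time(10.0):.0f} km")
```

With these assumed velocities each second of S-P delay corresponds to roughly 8.4 km of distance, which is why the delay alone locates an event on a sphere around the station and three stations pin it down.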
Recent advances in geophysical techniques are providing better descriptions of the three-dimensional geometry of earthquake sources. Deep crustal images—such as those constructed from data produced by the Consortium on Continental Reflection Profiling, or COCORP—can reveal the regional scale of low-angle faults as well as previously unsuspected local fault geometries. Seismic tomography—comparable to the remarkable CAT-scan images used for medical investigations—produces dense sets of consecutive slices through the Earth and promises the ability to detect ancient lesions and scars in crustal structure with a level of detail previously thought to be unattainable.
Earth scientists have also improved their ability to use mathematical inversion techniques. Inversion analyzes the effects to gain knowledge about the nature of the cause. Seismic inversion, using the data from seismometers, studies patterns produced by an earthquake's body and surface waves and determines the type of earthquake that could generate them. Geodetic inversion measures the physical changes on the surface produced by an earthquake and characterizes the sort of earthquake that
would be capable of making those changes. Successful seismic and geodetic inversions may construct remarkable pictures of the fault rupture surface at depth and the associated complexities in both fault geometry and the dynamic rupture process. Powerful examples of the application of inversion techniques can be found in the studies of recent earthquakes at Imperial Valley and Coalinga, California, and the earthquake at Borah Peak, Idaho.
During the past 20 years efforts to obtain records of strong ground motion have increased, and the data bank has grown with each successive earthquake. The data have been recorded, for the most part, on networks of relatively simple accelerometers with minimal magnification capabilities. These records have been invaluable to the engineering community in designing structures to withstand earthquakes. Within the past few years a new generation of instruments has been developed, and they are now being deployed to supplement the older instruments (Figure 5.14). This new generation of instruments can digitally record a wide spectrum of frequencies over a wide dynamic range. The new data so gathered permit the analysis not only of strong ground motion, as did the older generation of instruments, but also of details of the faulting process. Such instruments, and the data recorded by them, are revolutionizing our understanding of rupture mechanics.
Theoretical models have been developed for predicting the shapes of curves that represent earthquake ground-motion intensity, called seismic wave spectra. Although there is still some controversy over the physical interpretation of the models, different investigators have made predictions that are in general agreement with observed records, using data recorded in regions such as western North America where substantial numbers of strong-motion records are available. With that agreement as encouragement, theoretical models
have been proposed that make ground-motion estimates for intraplate regions where strong-motion records are sparse to nonexistent. Research using accelerograph records is supplemented by the study of data from seismometers located at great distances from an earthquake epicenter, called teleseismic data.
Techniques have been developed for simulation of near-source ground motion. These techniques are still the subject of some controversy, but they are being improved and have been used for the design evaluation of such critical facilities as nuclear power plants and nuclear waste repositories. Such simulations are routinely used to estimate the damage potential of postulated subduction-related earthquakes in the Pacific Northwest.
Several recent earthquakes have emphasized the critical importance of soil conditions to earthquake ground motions at a particular site. On September 19, 1985, a strong earthquake ruptured a fault along the Pacific coast of Mexico, west of Mexico City. Resonance of seismic wave energy within the lake sediments beneath Mexico City caused amplified ground motion and resulted in tremendous damage to high-rise buildings in the city even though it is located more than 350 km from the earthquake's epicenter. On October 17, 1989, the damage to buildings in San Francisco and Oakland, located more than 75 km from the epicenter of the Loma Prieta earthquake, occurred almost exclusively at locations underlain by unconsolidated man-made fill or soft sedimentary deposits. Saturated soils that are supporting heavy loads may transform into a fluid slurry when given a jolt—the process of liquefaction (Figure 5.15). Soil structure disruption from this process is identifiable in the geological record as a sign of past earthquake activity. Ironically, in both Mexico City and San Francisco, the susceptibility was known: the problems had been clearly identified in advance and were accurately shown on seismic hazard maps. Photographs taken in the Marina District following the 1906 San Francisco earthquake show evidence of liquefaction-induced soil failures identical to those revealed in photographs taken in 1989. These regrettable facts underscore the futility of hazard research and accurate risk assessment if communities are incompletely informed or choose to ignore known geological threats.
A fundamental goal of seismic hazard research and risk assessment is accurate prediction of potentially damaging ground motion. Seismic hazard is assessed using deterministic and probabilistic statistical analyses, and a dynamical systems approach is developing rapidly. The deterministic approach concentrates on the maximum earthquake that a given seismic source is believed capable of producing during a specified time period. Assuming the maximum earthquake will occur, this approach determines ground motions that can be empirically associated with a known fault of specific seismogenic characteristics. Deterministic seismic hazard analyses are useful for engineering applications at critical facilities, such as dams and nuclear power plants, where there is a need to develop a conservative seismic performance design and to compare the results with those of other approaches.
Probabilistic seismic hazard analyses that attempt to incorporate a more complete picture of the seismic environment affecting the site of interest are now in routine use for virtually all types of structures. This approach delineates the individual uncertainties associated with all aspects of the site and incorporates these uncertainties into the analysis. Key parameters, such as maximum magnitude, are expressed as probabilistic distributions in time and space; the final results are expressed as the probability of exceeding various levels of ground motion at the site. At this point an informed decision can be made about appropriate design requirements for the particular facility. An acknowledged disadvantage of the probabilistic approach is the need to estimate the frequency of occurrence of various levels of earthquake magnitude rather than to merely assume that the maximum earthquake will occur during the lifetime of the engineered works.
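The final step of such an analysis, turning an estimated rate of exceedance into a probability over a facility's lifetime, is commonly done with a Poisson occurrence model. A minimal sketch follows; the 475-year return period and 50-year lifetime are conventional illustrative values, not figures from this text:

```python
import math

def prob_exceedance(annual_rate: float, years: float) -> float:
    """Poisson probability that ground motion exceeding a given level
    occurs at least once within `years`."""
    return 1.0 - math.exp(-annual_rate * years)

# Illustrative: a level of shaking exceeded on average once every
# 475 years (annual rate 1/475) over a 50-year building lifetime.
p50 = prob_exceedance(1.0 / 475.0, 50.0)
print(f"{p50:.3f}")  # 0.100 -- the conventional "10% in 50 years"
```

Design codes often work backward through the same relation, fixing the acceptable lifetime probability and solving for the ground-motion level to design against.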
Since the 1970s, attempts at short-term deterministic prediction have been augmented by probabilistic forecasts as a pragmatic means of quantifying seismic risk in a socially useful manner. Along the San Andreas Fault in California, for example, 30-year forecasts of earthquake activity form the basis for earthquake hazard zonation and mitigation activities on both the state and local levels. The 1980 forecast for a 2 to 5 percent per year probability of a great earthquake along the fault in southern California led directly to a decade-long program of structural retrofitting or removal of the entire class of buildings with greatest life safety risk in Los Angeles. Cooperation between providers and consumers of earthquake predictions proves to be highly desirable when information flows in both directions. The earthquake prediction experiment at Parkfield changed from a purely scientific investigation to an operational short-term prediction program through the direct participation of the emergency response community in planning and funding the experiment.
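A 2 to 5 percent per-year probability compounds to a substantial probability over a 30-year forecast window, which is why such forecasts can drive decade-long mitigation programs. A minimal sketch, assuming each year is independent (a simplification):

```python
def window_probability(annual_p: float, years: int) -> float:
    """Probability of at least one event in `years` independent years,
    each with occurrence probability `annual_p`."""
    return 1.0 - (1.0 - annual_p) ** years

# The 2 to 5 percent per-year range over a 30-year window:
low = window_probability(0.02, 30)   # ~0.45
high = window_probability(0.05, 30)  # ~0.79
print(f"{low:.2f} to {high:.2f}")
```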
As a consequence of emphasizing the inherently statistical nature of prediction, our understanding of the prospects of damaging earthquakes has been revolutionized. Through the development of a framework encompassing all that is known about a particular region, from the historical and instrumental record to its plate tectonic setting and the short-term influence of specific geophysical events, seismologists can work toward making useful predictions. For example, a decade ago the prospect for the generation of great earthquakes in the Cascadia subduction zone off the Pacific Northwest coast was a hotly debated subject. Evidence for prehistorical large-magnitude events found by paleoseismologists has redefined the subject of debate from the likelihood of a future event to its timing and severity.
Society is increasingly dependent on location-specific scientific information, expressed quantitatively and qualitatively, for decision-making purposes. Earthquake prediction information must be communicated accurately and articulately in order to avoid panic while preparing for highly probable potential crises. Whether long-term or short-term, such predictions should be accompanied by maps of earthquake damage potential, including such near-instantaneous changes as landsliding and liquefaction. Within this framework, short-term predictions can provide crucial information, provided the users are properly prepared to receive it; preparation must be made through public education programs. California has begun to issue short-term earthquake advisories based on probabilistic models of foreshock activity. These low-probability advisories—a 2 to 5 percent chance of a larger event in a 3-day window—have proved their worth by spurring individuals and institutions toward earthquake mitigation actions they had previously ignored. More precise short-term predictions may someday revolutionize the technical ability to give earthquake warnings. Even if and when this goal is achieved, the translation of scientific knowledge into planning, preparation, and action will remain the most important task for scientists and public officials alike.
Modern computer-networking capabilities promise a new form of hazard mitigation involving real-time seismology that can immediately identify the most severely damaged areas and assign emergency relief priorities. It is now technically possible to determine earthquake source parameters such as size, depth, and direction of rupture propagation immediately after a large earthquake. Large-scale deployment of the new generation of broadband seismic instruments with satellite or other telemetry capabilities is making this a reality. The intensity distribution of earthquake ground shaking often exhibits a very irregular spatial pattern because of variations in the crustal structure near the epicenter, in site response, and in the source mechanisms—such as strike-slip, along the San Andreas Fault, or dip-slip, in western Mexico. When seismic networks are supplemented by portable instruments, generic path effects and site responses in an earthquake-prone region can be defined. It then becomes possible to quickly determine the spatial distribution of ground shaking and the damage exposure in an entire epicentral region.
Instantaneous communication capabilities offer further opportunities to mitigate earthquake damage. In cases where the earthquake is centered some distance away, it is possible to warn a region of imminent strong shaking as much as several tens of seconds before the onset of damaging shaking because of the relatively slow speed of seismic waves. Endangered regions could respond to the warnings in time to shut down delicate computer systems, isolate electric power grids and avoid widespread blackouts, protect hazardous chemical systems and offshore oil facilities, and safeguard nuclear power plants and national defense facilities.
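The tens-of-seconds margin comes from the gap between a near-instantaneous electronic alert and the travel time of the damaging shear waves. The sketch below uses an assumed round-number wave speed and processing delay, not values from this text:

```python
S_WAVE_SPEED_KM_S = 3.5    # assumed typical crustal shear-wave speed
PROCESSING_DELAY_S = 10.0  # assumed time to detect and characterize the event

def warning_time_s(distance_km: float) -> float:
    """Seconds of warning before damaging S waves arrive, given an
    electronic alert issued PROCESSING_DELAY_S after rupture begins."""
    return distance_km / S_WAVE_SPEED_KM_S - PROCESSING_DELAY_S

print(round(warning_time_s(200.0)))  # ~47 s for a site 200 km from the source
```

Sites too close to the source get no warning at all under these assumptions, which is why such systems favor distant population centers.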
A simple warning system using this concept has already been incorporated into the Japanese railroad system. A similar strategy is being implemented in the form of a tsunami warning system that will alert areas around the Pacific when any large submarine earthquake occurs in the Pacific basin. Real-time seismic and geodetic systems are also critical components of volcano monitoring systems that can provide fairly short-term warning of impending explosive eruptions.
A tsunami along several hundred kilometers of the coast of Nicaragua on September 2, 1992, resulted in over 100 deaths as a 25-foot wave inundated coastal areas (Figure 5.16), in places penetrating as far as 1,000 m inland. Tsunamis are large ocean waves most commonly generated by the uplift or depression of sizable areas of the ocean floor during large subduction-zone earthquakes; significant tsunamis have also resulted from volcanic eruptions and large landslides or submarine slides. Like earthquakes, great sea waves are of little consequence in remote regions. In the open ocean they are hardly noticed, but as they approach the shore they increase in amplitude as they move into shallower water, to a degree that depends on the local submarine topography. The resulting tsunami hazard in many coastal areas is far greater than is often appreciated. For example, during the great Alaskan earthquake of 1964, the loss of life from the tsunami generated in the offshore area was more than 15 times as great as the loss of life directly attributable to earthquake shaking; much of it occurred far from Alaska. Indeed, during the past 50 years, significantly more people have been killed in the United States by tsunamis than by other effects of earthquakes—although these statistics could change radically overnight with a major earthquake in a metropolitan area.
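The growth in amplitude as a tsunami shoals is commonly approximated by Green's law, in which amplitude scales with water depth to the -1/4 power. The depths and open-ocean amplitude below are illustrative, not measurements from any event named above:

```python
def shoaling_amplitude(a_deep_m: float, h_deep_m: float, h_shallow_m: float) -> float:
    """Green's law: tsunami amplitude grows as depth**(-1/4) in
    shoaling water (neglects refraction, friction, and breaking)."""
    return a_deep_m * (h_deep_m / h_shallow_m) ** 0.25

# A 0.5 m open-ocean wave over 4,000 m of water, arriving in 10 m depth:
print(round(shoaling_amplitude(0.5, 4000.0, 10.0), 1))  # ~2.2 m
```

This is why a wave "hardly noticed" by ships offshore can arrive at the coast several times higher.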
The areas of the United States most affected by tsunamis are Alaska, Hawaii, and the Pacific Northwest. Hilo, Hawaii, has been hit repeatedly by tsunamis originating as far away as southern Chile, and the same Chilean earthquake that produced tsunami devastation in Hilo in 1960 caused 200 deaths in far more distant Japan. Very recent geological field studies suggest that large prehistorical tsunamis occurred along the Oregon-Washington-British Columbia coast, probably generated by large earthquakes in the Cascadia subduction zone. Because no large earthquakes—or locally generated tsunamis—have occurred there within the short historical record, there has heretofore been essentially no planning by public agencies for such a contingency. Because the effects of a large tsunami could indeed be devastating in many low-lying, highly populated areas throughout this region, the issue has now become one of great societal as well as scientific interest. Perhaps the most revealing lines of evidence have come, and will continue to come, from geological field studies of the direct effects of recent abrupt changes in elevation and the resulting tsunamis, recorded in the very young sediments and changing morphology of coastal areas.
Closely related to the tsunami hazard is seiching, or the oscillation of closed and partially open bodies of water caused by long-period surface waves, which are often produced by the same large earthquakes that generate significant tsunamis. For example, the 1964 Alaskan earthquake caused waves as high as 2 m in bays and channels along the Gulf of Mexico, with sizes and shapes suitable for resonating with the wavelength of the arriving long-period seismic energy. In addition to a tsunami that might be caused by a large subduction-zone earthquake off the Pacific Northwest coast, damaging seiches might be generated by the same seismic event on water bodies such as Lake Washington in Seattle, in the Straits of Juan de Fuca and Georgia, on individual arms of Puget Sound, and in large bays such as Grays Harbor and San Francisco Bay.
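The resonance condition can be estimated with Merian's formula for the fundamental period of a closed basin; a basin resonates when this period falls near the period of the arriving wave train. The basin dimensions below are illustrative, not measurements of any water body named above:

```python
import math

def seiche_period_s(length_m: float, depth_m: float, g: float = 9.81) -> float:
    """Merian's formula: fundamental oscillation period of a closed
    rectangular basin of uniform depth."""
    return 2.0 * length_m / math.sqrt(g * depth_m)

# An illustrative small bay, 2 km long and 10 m deep:
print(round(seiche_period_s(2000.0, 10.0)))  # ~404 s (about 7 minutes)
```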
Although the basic physics of tsunami generation, wave propagation, modification, and run-up on the shallowing shore are generally understood, the oceanic waves and their effects have had little systematic and comprehensive examination. A tsunami research planning group commissioned by the National Science Foundation in 1985 recommended, with highest priority, that a major effort be made to gather meaningful field data related to the generation and propagation of tsunamis. Specifically, the group suggested that arrays of seafloor pressure sensors be placed in two areas where there is a high likelihood of large earthquakes that could result in tsunamis—the Shumagin and Yakataga seismic gaps in Alaska. Their purpose would be to capture the onset of a tsunami and provide critical data on the time histories of shallow-water surface elevation and relative bottom displacement. As part of the same experiment, one or more deep-water arrays of bottom-pressure sensors should be established at greater distances, perhaps north of the Hawaiian Islands and off the California shore, for measuring water-surface elevation, wave directions, and seismic spectral characteristics. Such data are essential for evaluating theoretical models of tsunami generation, propagation, and run-up.
Tsunamis are generated by long-period seismic processes, and it is therefore important to characterize the long-period nature—at periods in excess of 100 seconds—of earthquakes in areas that might generate tsunamis. Whereas abundant long-period records of such earthquakes are available from instruments located at great distances from the source, almost no close-in, long-period records are currently available because no appropriate instrumentation is in place. The research planning group recommended that two low-gain, long-period seismometers be established in each of the two Alaskan seismic gaps.
Additional important research areas were identified by the research planning group. They include the fundamental dynamics of long, weakly nonlinear, three-dimensional waves; refraction-diffraction and other nonlinear transformations of long waves in the near-shore region; run-up, run-down, and overland flow; interaction of waves and engineered structures; and basic research pertaining to tsunami warning systems.
Tsunami research involves a number of different government agencies and encourages, even demands, international cooperation. The warning systems that alert population centers around the Pacific Ocean basin of potential and approaching tsunamis are a model for international disaster-mitigation efforts. The tsunami warning system involves three levels of alert. First, it notes the occurrence of an earthquake strong enough to trigger a dedicated alarm located in Honolulu, in the center of the Pacific. Second, if the epicenter is found to be in an area of the Pacific basin capable of generating a tsunami—under or near the ocean—a tsunami watch is established, and communication with areas around the epicenter is attempted to verify the existence of a wave. Finally, if a tsunami has been witnessed, the Honolulu station issues a formal warning along with estimated arrival times for specific locations. Once the formal warnings have been made, individual government agencies proceed with appropriate action.
This system worked quite well for the Alaskan earthquake on March 28, 1964, except for two problems. First, the earthquake damage was so severe in the immediate vicinity of the epicenter that all communication systems were knocked out. Honolulu knew an earthquake had occurred that was large enough to set off the alarm, but the location was not established until an hour later. After another hour the first tsunami report arrived from Kodiak Island, 650 km from the epicenter. Kodiak Island reported two more crests, and a full warning with estimated times of arrival was issued by the Honolulu station. The second problem came from the public response. In Crescent City, California, citizens left their homes when alerted, but they grew impatient and returned before the third wave moved through. Seven people died. In San Francisco and San Diego only the weakness of the waves when they arrived averted disaster. Thousands of people headed down to the beach to watch the big waves, proving, once again, that all the wisdom in the world cannot save those who choose to ignore it.
The energy of a major volcanic eruption is well beyond what engineering can reasonably be expected to control. Consequently, society can best adapt to volcanic phenomena by accurately predicting the occurrence and likely course of an eruption. Fortunately, volcanic eruptions have many precursory phenomena (Figure 5.17) that can readily be detected with modern instrumentation and techniques.
The reduction of volcanic risk over the next decade or two will involve three distinctly different efforts: basic volcanological research, monitoring of high-risk sites coupled with public education, and study of volcanic effects on climate. Basic research is required on how volcanoes function; particularly needed are investigations into the mechanisms that trigger eruptions and better determinations of the average time spans between explosive eruptions of large magnitude. The research on recurrence intervals should consider both global and regional frequencies of eruption. This basic research could lead to accurate assessment of specific risks. Known techniques must be expanded, and new ones developed, for assessing volcanic hazards and monitoring active and potentially active volcanoes. A high priority must be given to public education; as with earthquake and tsunami warnings, a fine line must be followed between offering correct information about probability and risk and diluting the importance of a warning by false—or misunderstood—alarms.
A better understanding of the interaction of volcanic emissions with the atmosphere and hydrosphere is necessary. Historical records show that some volcanic events dramatically modified climate for several years following eruption by introducing large volumes of dust and gas into the atmosphere. Recent research has presented evidence that large submarine eruptions along ocean ridges may alter ocean temperatures. This mechanism has even been suggested to play a part in the short-term El Niño sea-warming condition with its subsequent implications for climatic variations that fluctuate over periods of a few years. Understanding these correlations with climate will depend on global monitoring of volcanism, most likely through satellite-based remote sensing. These new data will yield a better theoretical understanding of climatic response to heat and mass transfer from the interior into the hydrosphere and atmosphere.
Deciphering the workings of volcanoes is an eclectic challenge, involving contributions from many areas. Data are obtained both directly and through association. Direct methods include continuous monitoring of the geological, geophysical, and geochemical changes that occur on active volcanoes and geological mapping of ancient volcanoes whose internal structure has been exposed by eruption and/or erosion. Associations can be determined by analyzing the character and sequence of historical and prehistorical eruptive products from various types of volcanoes and then matching these and other data to conceptual models of how volcanoes work (Figure 5.18).
Present understanding of the dynamics of active volcanic systems is still in its infancy—particularly those volcanoes related to convergent plate boundaries, like the Cascade Range, the Andes, and the Aleutian Islands. In the past 40 years research on active volcanoes has expanded from field observation and description to include experimental and theoretical studies. The remaining challenge is the monumental task of assembling all the parts to create a better understanding of how volcanoes work. Two aspects of basic research that would lead to specific reductions of volcanic risk are discovery of the mechanisms that trigger volcanic eruptions and determination of the frequency of large explosive eruptions on a regional and global basis.
Knowing what triggers an eruption could lead to more successful searches for detectable precursors. Several precursors have already been recognized. These include a dramatic increase in earthquake activity beneath a potentially active volcano; a swelling of the ground surface near the volcano's summit (see Figure 5.5); and a continuous ground vibration, called volcanic tremor, which is detectable on sensitive seismometers. These signs indicate the injection of molten rock into the shallow roots of the volcano and definitely point to the potential for eruption.
Sometimes, when researchers monitoring a volcano have alerted the public to an imminent eruption, the shallow intrusion of molten rock will stabilize and cool without reaching the surface. When this occurs, the scientists may be accused of issuing a false alarm. A better term for this scenario would be an aborted eruption. More precise methods are needed to distinguish between shallow intrusions of molten rock that will erupt to the surface and those that will not. In addition, scientists need to learn how to communicate the uncertainties involved in forecasting eruptions to the governing officials and the public at risk near potentially active volcanoes.
Volcanoes in Hawaii, especially Kilauea, have been thoroughly monitored over the past several decades. Research indicates that molten rock rises slowly but nearly continuously from a depth of about 65 km and accumulates in a shallow chamber about 3 km beneath Kilauea's summit. When that chamber becomes filled but the delivery of molten rock continues, the rock walls of the reservoir split, and molten rock either erupts to the surface or fills underground fractures in the volcano's flank.
Many questions about the Hawaiian volcanoes remain unanswered, but their general structure and dynamics are understood relatively well by comparison with those of the Cascade volcanoes like Mount St. Helens and Mount Rainier, which are less continually active. This understanding comes from familiarity; the spattering eruptions of Hawaiian volcanoes are part of everyday life. As convergent plate volcanoes, the Cascade volcanoes rise frigid and silent above the jumbled ridges clustered around them—Mount Shasta and Mount Rainier are over 4,000 m high; Mount Hood and Mount Baker reach heights over 3,000 m; Mount St. Helens had grown to only 2,550 m before its eruption in 1980. At these volcanoes, typical of those around the Pacific rim, earthquake swarms and ground deformation episodes are much more intermittent than at Kilauea. Long periods of quiet are suddenly interrupted by intense episodes of shallow intrusion and eruption. When these volcanoes erupt, they can be violently explosive.
Violent eruptions are easily perceived as a major hazard, but associated mudflows, or lahars, can be equally dangerous. About 4,000 years ago a massive mudflow with a volume of over 3 km3 originated on the edifice of Mount Rainier and partially filled many surrounding valleys and inlets of Puget Sound to depths of hundreds of feet. Such mudflows need not be directly associated with a volcanic eruption; they can be triggered by a nearby earthquake that destabilizes the glaciers on the volcanic edifice. Mount Rainier currently carries over 4 km3 of perennial ice on its peaks, so such mudflows are poised to occur again. Because of the potential mudflow and volcanic hazards at Mount Rainier, it has been designated a "Decade Volcano" within the context of the International Decade for Natural Disaster Reduction. The intent is to focus hazard-related research activities, which are currently being developed.
So far, earthquake and deformation data on the convergent-boundary volcanoes reveal little about their dynamic behavior. A possible reason for this may be that much significant activity at these volcanoes is beyond the penetration capabilities of deformation-monitoring instruments, perhaps 25 to 35 km beneath the surface. When the sources of potential ground-surface deformation are so deep, the changes at the surface are either too small or too widely distributed around the volcano's summit to be measured by conventional surveying techniques.
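Why deep sources are so hard to detect can be illustrated with a standard point-pressure (Mogi) approximation, in which surface uplift falls off steeply with source depth. The depths and unit source strength below are illustrative:

```python
def mogi_uplift(r_km: float, depth_km: float, strength: float = 1.0) -> float:
    """Surface uplift above a point pressure source (Mogi model), in
    arbitrary units: uplift is proportional to d / (d**2 + r**2)**1.5."""
    return strength * depth_km / (depth_km**2 + r_km**2) ** 1.5

# The same source strength at 5 km versus 30 km depth, directly overhead:
shallow = mogi_uplift(0.0, 5.0)
deep = mogi_uplift(0.0, 30.0)
print(round(shallow / deep))  # 36 -- the shallow signal is 36x larger
```

The deep signal is not only smaller but also spread over a far wider area, so local benchmark networks can miss it entirely.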
The advent of satellite-mounted surveying systems, such as the Global Positioning System (GPS), may help to resolve this problem. With an accuracy of ± 2.5 cm over 50 km, satellite surveys may be able to detect such small local or widely distributed changes in the elevations and horizontal positions of benchmarks around potentially active convergent plate margin volcanoes.
Continuous monitoring complements geological mapping of potentially active volcanoes, which is generally the best way of determining their past eruptive habits. History often repeats itself in the natural world, so this type of geological assessment can provide very useful data for long-range forecasting of the activity of an individual volcano. A good example is the Nevado del Ruiz volcano in Colombia. An eruption and mudflow in 1845 killed 1,000 field hands at a tobacco plantation near its base. The summit of Nevado del Ruiz is covered by a large snow and ice cap, and even a small eruption can generate massive floods of meltwater and debris that course down the canyons on the volcano's flanks. Sweeping up soil and debris, the floods become destructive mudflows. When earthquake swarms and small eruptions began at Nevado del Ruiz in 1985, geologists warned that mudflows similar to those in 1845 were likely to occur again. They did—killing 25,000 people. The reasons for this tragedy are complex, but one was that people in Armero—the principal town that was destroyed—did not comprehend what a massive mudflow was. Nothing like this had happened for 140 years—long enough for everyone but a few alarmed geologists to completely forget what such a threat could mean.
Only about 10 percent of the world's 1,300 potentially active volcanoes have been geologically mapped to assess their past eruptive habits. Mapping the remaining 90 percent with field geologists would yield a high return for a modest hazard-assessment investment.
Understanding how a bomb works does not eliminate its danger, but if something is known about its size and the way in which it is detonated, the danger can assuredly be reduced or avoided. The same holds true for geological hazards—the better a phenomenon is understood, the more likely it is that its threat can be mitigated. At the same time, urging the establishment of simple but effective educational programs to inform governing officials and the public at risk remains the responsibility of the scientist studying the hazard. How might such an education program work? As in advertising, keep the message simple, and present it over and over again.
HAZARDS OF EXTRATERRESTRIAL ORIGIN
Within the past dozen years the idea of catastrophic terrestrial impact has been revived. From being a topic that was hardly considered respectable, it has become accepted as something that certainly happens occasionally and that may have had global consequences at various times in the past. Recent theories attribute both the origin of the Moon and the extinction of the dinosaurs to impacts of extraterrestrial objects.
The historical record, which goes back less than 3,000 years, contains no reference to anyone killed by a meteorite fall—although injury from a meteorite that penetrated a house is reported, and early in the century the Nakhla meteorite that fell in Egypt may have hit a dog. On the time scale of current societal interest, the danger from impacts is insignificant. However, if a large extraterrestrial object did collide with Earth, the consequences could be devastating.
A distinction should be made between the near-field consequences of impact, dominated by the crater and its surrounding debris field, and the far-field consequences, which for larger impacts would be catastrophic. Research related to the former includes the need for better understanding of the flux of impactors on the Earth. Astronomical observations are essential, while the search for and study of impact craters on Earth yield complementary information about the flux in the past. Astronomical monitoring has produced estimates of impact flux that indicate there must be many unrecognized impact structures waiting to be discovered. At present, attention is focused on the search for evidence of an impact that occurred 66 million years ago and was large enough to have caused the extinction of the dinosaurs by means of the far-field effects. The Chicxulub structure in the north of the Yucatan Peninsula is considered a strong candidate, as discussed in Chapter 3.
Far-field effects become important if the impactor is large enough to make a crater more than a few tens of kilometers in diameter. Dust and aerosols propelled to great heights would induce fluctuations in weather patterns as well as produce acid rain. Wildfires might ignite over huge areas, and part of the atmosphere could be blasted away. In such catastrophic circumstances, many organisms would perish and many species would face extinction. The geological record should contain evidence of such drastic events. The challenge of impact theory is twofold: to seek evidence of past impacts in the geological record and to model patterns of atmospheric and oceanic systems that could produce distributions matching the evidence.
Giant terrestrial impacts are capable of perturbing the earth system rapidly and to an extent that few other phenomena can rival. Within the past half century human beings have acquired the ability, through nuclear explosions, to produce sudden perturbations of the earth system on a scale approaching that of the greatest volcanic and impact events. In the distant future it even seems possible that nuclear explosives might be used to divert incoming asteroids. Perhaps equally significant is the ability, also developed in this century, of the human race to modify its environment on an equally grand scale but more slowly: global change, largely a consequence of intensive fossil fuel consumption.
PROBLEMS RELATED TO POPULATION CONCENTRATION
Human interactions with the Earth can disrupt the background equilibria, but the disruption is usually confined to a limited extent in space and time when compared with the Earth's size and history. The effects of small-scale farms and industries, such as gristmills or kilns, are absorbed into the earth system with little long-term influence. But human activity has reached a critical level that is beyond small-scale local equilibration. The twentieth-century metropolis creates intense, perhaps irreversible, disruption of earth systems.
Modern engineering has created the most technologically advanced and safely built environment in history. For many the quality of life is enhanced through improved transportation methods, resource development, and engineered structures. The successes of this built environment have led to expectations that can be met only with constant maintenance and strict building codes.
In a resource-critical environment, exemplified by the modern urban center, humans will have to adapt to the local landscape and learn to use it to their advantage without destroying it. In the United States, general land-use decisions and land-use regulations are established at the city and county planning commission level and approved by local governments. Local government creates land-use policies, which become the legal authority that guides growth and development. To create an environmentally responsible policy, the socioeconomic objectives decided on by a community should balance against the constraints and opportunities offered by the physical setting. Urban geology therefore needs to be carried out to address local planning needs. For economic viability and public safety, local governments must be environmentally informed. Earth scientists have the opportunity, perhaps the duty, to provide that information.
Significant issues for the future will involve assessments of possible land use, concentrating on reuse, as affected by geological constraints. Society must act as part of the environment—foregoing attempts to master it—because human actions have far-reaching, often unforeseen, effects on the natural system. Land-use decisions should remain within the limits of our economy while appreciating both existing and future public needs. Many parts of the country are encountering development limits, those points at which the economic and environmental costs of dealing with geological constraints exceed the benefit of a desired development.
Every type of land-use plan could benefit from a preliminary geological assessment. The assessment should generally address the entire geological spectrum: surface topography, composition of subsurface material, arrangement of underlying rock, and character of the groundwater system. This general geological assessment would define foundation cost and would allow estimation of recurrence intervals for floods, landslides, subsidence, and, in some areas, earthquake and volcanic activity. In urban and metropolitan areas the physical assessment should focus specifically on the characteristics of the earth materials. This will lead to the selection of safe and appropriate construction sites, to the conservation of natural resources essential to the city's development, and to the preservation of natural aesthetic qualities. Any proposed land use is likely to produce its own effects on the environment; negative consequences of development should be integrated into the assessment and suitable adaptations to geological conditions suggested.
Urban geology studies and interprets the geological conditions underlying the most intense concentrations of human population. Its immediate goals are to determine the physical properties of individual sites and to locate the deposits of materials necessary for construction projects. In population centers scientific understanding is just one more factor among social, economic, and political components. To the scientist these other factors seem too prone to subjective interpretations and appear to be based more on emotion than logic, but the habit of thinking in geological terms is certainly unfamiliar to the nonscientist. Whatever the constraints of theoretical recurrence intervals, lives to be led in the here and now are more likely to focus on the supply of necessary raw materials and the efficient removal of waste.
While waste management concerns even the smallest rural communities in which waste production outstrips disposal capability, it is a major problem of today's urban centers. The urban expanse of pavement, with limited capacity to absorb rainwater, collects runoff and delivers it episodically in quantity to storm sewers. Traditionally, urban storm runoff has been allowed to mix with municipal sewage, contaminating the whole volume. That entire volume of contaminated water then requires treatment prior to discharge to the environment—hence the multibillion-dollar outlay for storage tunnels and water treatment plants in Chicago and Milwaukee—two cities that have conscientiously responded to federal requirements to control the quality of urban storm-water discharge.
Throughout the design, construction, operation, and maintenance of waste management facilities, consideration of local geology helps to ensure structural integrity and containment of contamination. Toxic liquid wastes require extremely complex treatment before the cleaned effluent can be discharged to the surrounding environment; during all stages such wastes must be prevented from leaking into groundwater. Solid waste disposal is another major headache for city managers. There is no way to absolutely prevent small amounts of hazardous waste from becoming mixed with municipal garbage. For example, used motor oil, photographic developing chemicals, household cleaners, pesticides, and paint products all end up in the garbage. Given the huge volumes of municipal wastes and the shortage of suitable disposal spaces, an understanding of the options available within the geological constraints is essential for reducing the negative environmental impact. Modern sanitary landfill design depends on geological characterization of proposed sites. Implementation of the Resource Conservation and Recovery Act's 1991 solid waste regulations in the immediate future is meant to ensure groundwater protection through leachate collection and detection of any bottom liner failures. Finally, radioactive waste management has placed society in an unusual quandary: these wastes, most of which are produced in the course of providing electricity, conducting medical research, and maintaining national defense, are both a hazard and a valuable resource for future reprocessing. Underground storage/disposal appears to be the solution of choice—though it will be critical to locate the proper geological environment to prevent any long-term leakage of radionuclides, in liquid or gaseous form. Retrievability from storage for reprocessing may be an important consideration in the future.
Population centers have developed through accumulation, with little or no attention to geological consequences. Our cities are in a constant state of redevelopment. As more and more people have moved to urban centers, land use has become a very complex problem: there is little unused space available. The intricacies of redevelopment are compounded for several reasons. Inadequate existing facilities must be maintained, available surface space is minimal, construction debris accumulates, waste-disposal areas shrink, new transportation routes are not easily created, and decisions about accepting construction-related disruption and change are often short-sighted. Too frequently, political expediency is the deciding factor in development design. But the geological constraints themselves are becoming more evident, as is the need to analyze them more thoroughly—not only for their effects on redevelopment projects but also for their effects on the physical stability of existing buildings and on the potentially large cost overruns associated with unexpected geological conditions, especially weak earth materials and large inflows of groundwater.
Geological site interpretations are needed for all forms of engineered construction, such as dams, nuclear power plants, hydropower plants, pumping plants, factories, sewers, canals, levees, high-rise buildings, harbors, locks, highways, bridges, tunnels, airports, and port facilities. Increasingly detailed geological data provide the basis for each step of design and construction. The process begins with a feasibility study, based on knowledge of regional geology and enough specific site information to ensure that major geological features are identified and considered in cost analyses. A key element of the feasibility study is identification of geologically based flaws that could defeat individual projects. The national need for good geological and topographic maps is nowhere more clear than at construction sites. The relatively small scale of generally available maps can never serve all the geotechnical data requirements. Site-specific geological mapping and subsurface exploration are required, and preservation of site geological maps showing conditions at the time of construction is a necessary safeguard in case problems requiring repair or rebuilding arise later.
The selection of sites suitable for the location of major facilities, especially those for which public safety considerations are an overriding issue, is generally accompanied by geological site investigation. But even site selection of homes and parking lots must involve concern for the stability of foundations and for off-site disruption of drainage, water supplies, stream sediment, and air quality. The fact that
the foundations of structures do fail means that attempts to characterize the sites prior to construction have in some cases been inadequate and that recognized risk was accepted. Although it is impractical to evaluate all sites exhaustively, an experienced engineering geologist can usually detect potential fatal flaws such as major faults, landslides, subsurface cavities, and poor soil conditions quite quickly.
The strengths of natural rocks and soils, and of the things we build from them, are generally predictable, but there is a need to anticipate, understand, and correct the failure of these materials. Understanding the nature of materials may lead not only to enhancing strength but also to decreasing strength temporarily so that urban excavations can be made more efficiently.
Use of Earth Materials
Earth materials are linked to land use and environmental management in many ways. Engineered works utilize earth materials both as foundations (the soils or underlying rock on which structures rest) and as raw material for construction.
The physical characteristics of foundation soil, weak rock, and rock are carefully incorporated into engineered design. The engineering characteristics of earth materials greatly affect many design features and the overall cost of all projects. Soil, in the engineering sense, consists of any unconsolidated overburden material. Most engineered structures are founded in soil, and design characteristics include soil moisture content, strength, and compressibility as well as the depth of the groundwater surface.
Foundations of large, heavily loaded structures are usually made to bear on rock, if it is accessible. Rock is normally much stronger than soil, thus making it more competent to support the loads imposed by large structures. Engineering features, such as piles and excavation of basements, are used to transmit structural loads to the underlying rock. Cost-effective alternatives are always sought, such as development of less expensive ways of controlling groundwater, stabilizing excavation slopes, and increasing the performance of soils through admixture and injection. Increasingly, manufactured materials such as synthetic membranes and geotextiles, grouts, and soil nails are being used to strengthen relatively weak bodies of earth material.
Earth materials are the base component of a significant amount of most construction. Building materials are made from common rock types that are easily obtained and relatively inexpensive. Since the 1970s, however, the source areas—stone quarries and sand and gravel pits—have been under increasing pressure from urban siting, regulatory permits, and operational constraints and reclamation requirements. All of these are profoundly affected by site geological conditions.
Quarried rock is a major source of construction material. Rock is used to make aggregate for both concrete and asphaltic concrete, roofing granules, filter stone for drainage, crushed stone for roadway base course and railway ballast, riprap for erosion protection, rock fill, building stone, ornamental and dimension stone, and monuments. Each type of use calls for a different set of properties, selected from durability, density, spacing of rock fractures, hardness, color, physical appearance, abrasiveness, and shape of particles when crushed. These properties are controlled largely by the mineralogy and fabric of the rock, and quarry sources are selected to meet the requirements for the particular use intended. Engineering petrography is routinely used to assess such properties and characteristics.
Construction demands for highways, roads, streets, parking lots, sewers, dams, levees, dikes, and airfield runways specify dimension stone aggregate, granular fillers, and pervious and impervious soils. Gravel and sand pits supply large quantities of material for concrete aggregate, while clay pits supply the material needed for manufacturing brick. Engineering geologists assess the quality and quantity of these products and provide the basic information for permitting and design.
Raw materials for construction—sand, gravel, crushed stone, earth for fill, clay, building stone, and cement—are first in tonnage and second in dollar value only to fuels as the major mining and quarrying products of the United States; they also involve a minimum level of geotechnical oversight and present few scientific problems. Sources for these bulk materials are selected on the basis of convenience to users and environmental insensitivity of the sites. They leave large holes in the ground that because of their size cannot be restored to their original condition but that usually offer attractive opportunities for imaginative subsequent use. Scientific research opportunities include establishing optimum formulations for cheap durable concrete and working out ways to excavate earth materials and process the spoil in a manner least damaging to the environment and most easily accommodated close to users.
Tunnels and Underground Openings
As cities become ever more crowded, and as mineral resources are sought at greater depths, the
ability to create and maintain tunnels and underground openings becomes more and more important. Underground openings are essential. They serve as sewers and accommodate storm drainage. They carry water from distant sources to cities and provide efficient access for many transportation systems, including highways and subways. And large underground caverns, often left by mining enterprises, are increasingly being used to store petroleum and natural gas as well as radioactive waste. Greater Kansas City is now a world leader in conversion of mined underground space to low-energy, secure commodity storage. Underground excavations also house electricity-generating stations and defense command posts. Underground car parking is vital to many communities. The use of underground space will grow further as older population centers are redeveloped. Already the city of Austin, Texas, has a subterranean utility and freight supply trafficway, and Toronto, Canada, has an underground mall network and pedestrian trafficway.
Use of underground space requires geological understanding; the nature of the surrounding earth material—its composition, fractures, and engineering properties—is the primary consideration for space utilization and design. The stability of an excavation, as well as that of adjacent structures, depends on the earth material characteristics and groundwater control. Transportation and infrastructure routes follow excavations, embankments, and tunnels; all require geological analysis for structural integrity and to alleviate effects on adjacent facilities.
Engineering geophysics was born in the late 1950s, the child of technologies developed for petroleum and mining exploration. It was the advent of nuclear power plant siting, which required detailed subsurface structural and positional accuracy, that led to the use of improved oil field geophysical techniques to detect near-surface stratigraphic anomalies, the presence of groundwater, and faults. An amazing array of engineering geophysical technology is now available—based on the refraction and reflection of sound and electromagnetic waves, electromagnetic properties of earth materials, electromagnetic fields, natural or induced radioactivity, and gravitational attraction of earth materials—as are borehole seismic and television imaging devices. These innovations contribute to the abilities of the engineering geophysicist, but, as with so much in the earth sciences, they need to be interpreted in light of visually observed geology. Boreholes and outcrop observations are still necessary to confirm remotely sensed data.
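One of the refraction techniques mentioned above can be sketched concretely: in the simple two-layer case, the depth to a faster layer (say, bedrock beneath soil) follows from the crossover distance at which refracted arrivals first overtake direct arrivals. The velocities and crossover distance below are assumed survey values chosen purely for illustration.

```python
import math

def depth_from_crossover(x_crossover, v1, v2):
    """Two-layer seismic refraction: estimate depth to the interface from
    the crossover distance x_crossover, where the refracted wave (velocity
    v2) first arrives before the direct wave (velocity v1). Requires v2 > v1.
    Standard relation: h = (x_crossover / 2) * sqrt((v2 - v1) / (v2 + v1)).
    """
    if v2 <= v1:
        raise ValueError("refraction requires the lower layer to be faster")
    return (x_crossover / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))

# Assumed values: dry soil at ~600 m/s over bedrock at ~2,400 m/s, with the
# first-arrival crossover observed 60 m from the shot point.
h = depth_from_crossover(60.0, 600.0, 2400.0)
print(f"Estimated depth to bedrock: {h:.1f} m")
```

Even this textbook case shows why the text insists on boreholes for confirmation: the formula assumes flat, uniform layers, and real sites rarely oblige.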
Health Risks from Geological Material
In the continuing search for new sources of industrial minerals, the possible health hazards from occupational exposure to mineral dusts have become an important factor in deciding what minerals can safely be exploited. More recently, concern has grown that exposure to various mineral dusts even in nonoccupational environments may cause disease or death. This concern has focused on asbestos dusts (or dusts perceived to be asbestos-like), radon, and zinc. Each is discussed below as an example, and each illustrates how inadequate comprehension of the nature and use of geological materials can lead to profound and quite unnecessary regulatory problems.
Asbestos is a nonscientific commercial term for certain silicate minerals that separate into flexible, heat-resistant, chemically inert fibers. Their particular mechanical, chemical, and especially thermal properties have been valuable in yarns, cloth, paper, paint, brake linings, tiles, insulation, fillers, filters, putty, and cement—applications requiring incombustible, nonconducting, or chemically resistant material. The world consumption of asbestos is about 4 million metric tonnes per year; that of the United States is disproportionately small, just over 80,000 metric tonnes. Of the six types of asbestos that have been mined commercially, only three are quantitatively important—a serpentine mineral called chrysotile and two amphibole minerals called amosite and crocidolite. The three other minerals, which are rarely used as asbestos, are the amphiboles—anthophyllite, actinolite, and tremolite.
Since 1972 the Environmental Protection Agency (EPA) has taken steps to limit human exposure to asbestos on the grounds that the fibers are carcinogens. This conclusion is based on epidemiological observations that former asbestos miners and fabricators experience an increased incidence of lung cancer and mesothelioma, a tumor of the chest cavity lining. Several mineralogists have questioned the EPA's classification of all these different minerals together and consideration of them all as equally hazardous. Recent medical research suggests that chrysotile asbestos is not hazardous outside the workplace—and perhaps not even there.
The serpentine mineral chrysotile is a layer silicate that curls into strong, flexible, needle-like fibers and is chemically and thermally stable because of a misfit between the layers. The overwhelming bulk of asbestos used for commercial purposes is chrysotile, mined principally in Canada. Close study has shown that the incidence of mesothelioma and lung cancer among townspeople in the major chrysotile-producing town, Thetford, Quebec, is about the same as among townspeople in Zurich, Switzerland. Thetford has pervasive chrysotile dust pollution, while Zurich is essentially free of it. From these and other data the environmental hazard of chrysotile asbestos is difficult to prove.
Crocidolite is an amphibole mineral having double chains of silicate tetrahedra linked by sodium, iron, and magnesium. It was mined principally in South Africa and Australia, but its strong association with lung cancer incidence in former miners, millers, and fabricators has essentially terminated its use. Amosite is another amphibole mineral, but it lacks the significant sodium found in crocidolite. It too was mined mainly in South Africa, but as with crocidolite the incidence of lung cancer among workers has ended its production and use.
Epidemiological studies have shown that the amphibole asbestos minerals are much more dangerous than chrysotile asbestos; low-to-moderate exposure to chrysotile shows minimal effects on human health (e.g., Report of the Royal Commission on Matters of Health and Safety Arising from the Use of Asbestos in Ontario). However, the EPA and other regulatory agencies have assumed that all forms of asbestos are equally dangerous.
Unfortunately, the arbitrary combination of needle-like shapes and a specified fine grain size has been chosen by regulators to characterize asbestos. Not only harmless chrysotile but also a wide variety of other fine-grained, needle-like minerals that are useful and probably of negligible hazard have been nearly banished from use. Minerals also are being judged guilty by association: there is no human health study that shows that the ubiquitous nonfibrous amphiboles are harmful. Nonetheless, the Occupational Safety and Health Administration has proposed, in pending federal regulations, that dust particles of the nonfibrous tremolite, actinolite, and anthophyllite amphiboles be classified as asbestos. A ban on the use of any serpentinite rock—of which noncarcinogenic chrysotile is one type—for commercial purposes is now pending in California; at the same time, serpentine is California's state rock. It should perhaps be noted that in 1969 the common mineral quartz was placed on California's list of carcinogens because tumors were generated in rats breathing quartz dust, and the International Agency for Research on Cancer has since designated quartz as a carcinogen. Quartz, the crystalline form of silica, or SiO2, is the second most common mineral found on the surface of the Earth; it is the major component of most sands.
In view of the force of the federal initiative with regard to mineral dusts, the prognosis for the continued vitality of the U.S. mining industry is ominous. Asbestos regulations and the fear of asbestos are having a crippling effect on the talc and vermiculite industry and on important segments of the mineral aggregate industry. In the future, operations that mine rocks in which amphibole, serpentine, and possibly even quartz are found could be terminated.
Radon is an invisible, odorless, radioactive, chemically inert gas resulting from the decay of radium, which is itself produced by the radioactive decay of uranium and thorium. Three isotopes of radon occur in nature. Two of them, one with a half-life of about a minute and the other with a half-life of less than 4 seconds, decay rapidly to less mobile elements before they can escape from their sites of origin. The third isotope, with a half-life of about 4 days, is the environmental hazard.
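The differing mobilities follow directly from the half-lives (standard nuclear data: about 3.8 days for radon-222, 56 seconds for radon-220, and 4 seconds for radon-219). A short sketch of the decay arithmetic, N/N0 = 0.5^(t/T), shows why only the long-lived isotope survives a migration time of even a few minutes:

```python
# Fraction of a radon isotope surviving after an elapsed time t:
# N/N0 = 0.5 ** (t / half_life). Half-lives are standard nuclear data.

def fraction_remaining(half_life_s, elapsed_s):
    return 0.5 ** (elapsed_s / half_life_s)

HALF_LIVES_S = {
    "radon-222": 3.82 * 86400,  # about 3.8 days
    "radon-220": 55.6,          # about a minute
    "radon-219": 3.96,          # under 4 seconds
}

# After a hypothetical 10-minute journey from soil grain to basement air:
for isotope, hl in HALF_LIVES_S.items():
    f = fraction_remaining(hl, 600)
    print(f"{isotope}: fraction surviving 10 minutes = {f:.3e}")
```

Radon-222 arrives essentially undiminished, radon-220 is reduced by three orders of magnitude, and radon-219 is effectively gone, which is why only the 4-day isotope matters indoors.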
Uranium and thorium are radioactive elements contained in minerals that are unevenly distributed within rocks and soils, and some of the minerals break down through weathering. These elements and their radioactive daughter products may be leached from rocks by groundwater and transported into soils. In ordinary soils radium is usually present at a concentration of a few parts per million. If the soil is impermeable, the radon may be trapped in the ground, where it creates no hazard. If the soil is permeable and is not water saturated, the radon, itself a chemically inert gas and the daughter product of radium, may migrate away from its original location and into houses through their foundations. Although radon is the mobile link in the chain, it is the subsequent chemically active airborne decay products of radon—radioactive polonium, bismuth, and lead—that can lodge in lung tissues, causing damage through alpha particle emissions. Radon itself is inhaled and exhaled without significant buildup.
Estimates of risk from environmental exposure to radon, like those for asbestos, are extrapolated from excess mortality detected by epidemiological studies of uranium miners, but the extrapolations are highly uncertain. The EPA estimates a 1 to 5
percent chance of excess lung cancer from a lifetime exposure to radon at the danger level of 4 picocuries of radon per liter of air. The International Commission on Radiological Protection is less stringent, recommending a danger level of 15 picocuries per liter of air for existing structures. The EPA also claims that 20 to 30 percent of the houses monitored over the past 2 years had radon above the EPA danger level. If these numbers are correct and representative of the whole country, the environmental risk is enormous—but epidemiological studies done on the general population have not demonstrated excess risk. Nevertheless, there must be a significant number of houses in the country containing radon at or above the present control level in uranium mines. These houses should be identified for remedial action.
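For readers more used to SI units, the picocurie-per-liter levels quoted here convert directly to becquerels per cubic meter: since 1 Ci = 3.7 × 10^10 Bq, 1 pCi/L equals 37 Bq/m3. A minimal conversion sketch:

```python
# Convert radon concentrations from pCi/L to the SI unit Bq/m^3.
# 1 Ci = 3.7e10 Bq, so 1 pCi = 0.037 Bq; per liter -> x1000 per m^3,
# giving 37 Bq/m^3 per pCi/L.

def pci_per_liter_to_bq_per_m3(pci_per_l):
    return pci_per_l * 37.0

levels = [("EPA danger level", 4.0), ("ICRP existing-structure level", 15.0)]
for label, level in levels:
    bq = pci_per_liter_to_bq_per_m3(level)
    print(f"{label}: {level} pCi/L = {bq:.0f} Bq/m^3")
```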
High household radon levels can be generated from very-low-radium-bearing but very permeable soils. Conversely, high radium levels are not always dangerous if soil permeability is poor. A more direct approach to evaluating hazard is the simple and cheap measurement of household radon levels, as urged by the EPA. Such determinations should be made in those areas of homes that are occupied much of the time, instead of little-used basements and storage areas. There is already significant mandated compliance as more and more jurisdictions and mortgage lending institutions require that the radon level be determined—just as a termite inspection is required—before a house can be sold or mortgaged.
In summary, ongoing direct determinations of household radon levels make any further geological effort to identify high-risk areas redundant. The role of the earth science community is probably limited to urging more people to voluntarily monitor their household radon levels and to pointing out that, in the past, too much emphasis was placed on soil radium concentrations and too little on soil permeability. High-radium soils may not lead to radon hazards in the home, and low-radium soils do not guarantee safety.
A final example of poorly thought-out control measures for potentially hazardous geological materials is the standard recently set by the EPA for zinc mill and mine effluent. The agency set the allowable zinc content of mill effluent at 0.2 parts per million and the allowable zinc content in mine effluent at 0.5 parts per million. Throughout the U.S. central zinc-mining districts, the ambient groundwater contains an average of about 1.5 mg of zinc per liter, or 1.5 parts of zinc per one million parts of water.
Not surprisingly, removal of zinc from the groundwater used as influent was not feasible. As a consequence, the affected mines and mills were shut down; cleanup to the standards set by the EPA is costing huge amounts of money; and zinc supplies must be sought elsewhere, often outside the United States. At the same time, zinc supplements are fed to livestock in those same districts to promote growth, development, and healing. Zinc deficiencies lead to skin lesions, retarded growth, hair loss, emaciation, and loss of appetite in livestock and humans. To meet the minimum requirement for zinc, a person would have to drink 100 liters of EPA-standard mine effluent per day.
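The closing comparison is easy to check arithmetically, since in dilute water a part per million is effectively a milligram per liter. At the 0.5-mg/L mine-effluent limit, the essay's 100 liters per day corresponds to an assumed minimum requirement of about 50 mg of zinc per day; published requirement values vary, so treat the requirement as the adjustable assumption in this sketch.

```python
# Zinc dose delivered by drinking water at a given concentration.
# In dilute aqueous solution, 1 ppm is effectively 1 mg per liter.

def zinc_mg_per_day(liters_per_day, conc_mg_per_l):
    return liters_per_day * conc_mg_per_l

def liters_needed(requirement_mg, conc_mg_per_l):
    return requirement_mg / conc_mg_per_l

# The essay's figure: 100 L/day of effluent at the 0.5-mg/L limit
# implies a daily requirement of 50 mg.
implied_requirement = zinc_mg_per_day(100, 0.5)
print(f"Implied requirement: {implied_requirement} mg/day")

# Ambient groundwater in the zinc districts (about 1.5 mg/L) would
# deliver that same dose in far less water.
print(f"Liters of ambient groundwater: {liters_needed(implied_requirement, 1.5):.1f}")
```

The point of the arithmetic is the one the essay makes: the regulatory limits sit well below both nutritional relevance and the natural background of the district's groundwater.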
Concentrations of population require vast networks to provide even minimal needs, including food and fiber. The production of these commodities in rural environments stresses natural equilibria. Agriculture annually moves more mass and influences a greater area on the Earth than all other human endeavors combined. The chemical input to the environment through agriculture is tremendous; as populations rise, agricultural activity increases, as do its chemical byproducts. During 1988 the use of fertilizer in the United States alone exceeded 10 million tons of nitrogen, 31 million tons of phosphate rock, and 4.6 million tons of potash. The rest of the world used several times those quantities. It is paradoxical that the success of solid-earth scientists in finding and developing potash and phosphate deposits for fertilizer manufacture has helped generate the problems of agricultural waste now being addressed by some of their colleagues. While domestic and industrial disposal of phosphate detergents is controlled to improve the quality of streams, agriculture puts 10 times the controlled amount of phosphate into the environment. Long-lived, persistent organic pesticides and herbicides, along with their trace contaminant dioxin, enter the groundwater system daily. Tillage and overgrazing foster dramatically increased erosion by water and wind and are major contributors to desertification. The ancient technique of slash-and-burn clearing of tropical forests is front-page news today. The effects of wholesale forest burning are widely dispersed and so are a challenge to evaluate. The practice is difficult to influence directly because so many individuals in developing countries are economically
bound to agricultural activity. This simple method for clearing land enables a farmer to establish a foothold as other farming methods could not. The projected population increases over the next five or six decades probably will demand tremendous expansions in productivity, from more use of marginal croplands to genetic engineering of crop plants. Agriculture, the major human activity, is the major source of anthropogenic influence on the environment. The solid-earth scientist is involved with agriculture today in land-surface and soil research, in finding and developing raw materials for fertilizer, in surface and groundwater hydrology, and in waste isolation. As in so many other aspects of our science, there is a growing need for an integrated approach to the system and for handling different kinds of data together in geographic information systems. The variety of temporal and spatial scales involved in soil development affords a good example of the need for an integrated approach.
Soil Development and Soil Degradation
Human society is dependent on the abundance, fertility, and integrity of soils. Soils lie at the interface between the geosphere, biosphere, hydrosphere, and atmosphere. They have unique properties that derive from the intimate mixing of partly disintegrated rocks; dead organic matter; live roots and microorganisms; and an atmosphere high in carbon dioxide, nitrogenous gases, and moisture. Chemical reactions in soils link the geology and biology of the biogeochemical cycles that support life. Ions of potassium, sodium, calcium, magnesium, sulfur, and phosphorus are released from minerals by hydrolytic weathering, while carbon and nitrogen are released from dead organic matter by microbial digestion. The composition of the parent rock influences soil biota through ionic deficiencies or the release of elements that are toxic to microorganisms. Soil biota affect weathering rates through respiration, mechanical disruption, and production of organic acids.
Soils have changed throughout earth history. The oldest preserved soils have been interpreted as reflecting the existence of an atmosphere poorer in oxygen than that of today, and less ancient soils indicate responses to the evolution of land plants and soil faunas. However, these ancient changes hardly rival in speed or extent those that have occurred since agriculture began and the human population started to rise. Soil erosion has accelerated enormously, and a new process, soil degradation, has developed.
Soil erosion has become a major threat to humanity's ability to feed itself. In North America sediment production may be lessening as new equilibria are established by dams constructed in the upper reaches of watersheds, but this trend in the transport of loose particles weathered from rock is only part of the story. The deep, fertile, ancient soils that support abundant fields of waving grain are flushing away. The humus, litter, minerals, and particles that make up soil are separating and being redistributed into different reservoirs. Yes, new soils will form on deposits that do not flow into the ocean, but new soil develops by complex chemical and biological processes only over geological time scales. In terms of human lifetimes, soil is hardly a renewable resource.
Although soil erosion has long been identified as a serious global problem, soil degradation is now becoming recognized as a comparable one. Soils become degraded not by bodily removal but by selective depletion of nutrients and water or by the buildup of toxic materials, such as insoluble residues from fertilizers and pesticides or salts left behind by irrigation.
The vulnerability of the soils of a particular place to erosion or to one or more of the varieties of degradation depends on many variables. Surface slope, rainfall, vegetational cover, soil history, and agricultural practice are only a few of the factors that act together. Earth scientists need to work with agricultural scientists to understand the complex processes of soil degradation and soil erosion.
Advanced societies generate huge masses of diverse wastes that need to be minimized and then isolated in environmentally acceptable ways with a view toward possible reuse. At present, efforts in this direction are poor, as witnessed by the dissension concerning all levels of nuclear wastes; by vagabond ocean-going garbage vessels vainly seeking a disposal site; by billions of discarded vehicle tires; and by polluted streams, lakes, and even seas.
An integrated approach is the key. Without it attempts to achieve a specific end frequently create unforeseen or unconsidered problems elsewhere. Planners and policy makers have come to appreciate that the efficient use of land and the protection of existing resources require respectful treatment of the natural systems enveloping the site. After concentrated exploitation, either reclamation must be undertaken or the site must be isolated in a way that
will protect the unsuspecting from whatever toxic byproducts remain. In the United States, Congress has appropriated significant funds and stimulated innovative methods for cleaning up large-scale environmental problems resulting from prior activities. But the demand for such cleanup far exceeds the substantial resources already allocated. This is a political, legal, health, economic, emotional, educational, technological, and scientific arena. Science is essential for understanding the system and for estimating what might be the consequences of various courses of action, but society will act or not act on the basis of a broader range of considerations.
Earth scientists are called on to determine the potential suitability and durability of proposed waste repositories. Seismic disturbance, fluctuations of groundwater levels, and leakage should all be analyzed, and, where appropriate, provisions to counter them should be incorporated into facility designs. Understanding earth processes involves determining the interaction of earth materials with existing or evolving contaminants and the consequences of their integration. Earth scientists are thus well qualified to establish and verify the criteria for selection of waste repository sites and for the operation and strategic protection of such sites.
Five general regulatory categories of waste are recognized: solid, special, hazardous, low-level radioactive, and high-level radioactive. The solid category includes nontoxic debris of mixed nature, generally municipal wastes such as garbage and construction debris, and aqueous wastes, such as municipal sewage and sewage-treatment sludge. The special category embraces high-volume, low-toxicity wastes of predictable chemistry, including those of utility generation, dredge spoil, cement kiln dust, and other nontoxic industrial byproducts. The hazardous category refers to toxic wastes produced by industrial or mineral recovery activities and includes asbestos and asbestiform minerals. Low-level radioactive waste is composed of the residues of manufacturing, medical and industrial research, and components of nuclear power generation. High-level radioactive waste includes the residues of fuels for nuclear power generation and defense weapons production.
Every category of waste has a spectrum of physical and chemical properties that affect the manner in which it should be managed. If not effectively managed, each waste type tends to produce an aqueous or organic liquid effluent, known generally as leachate. Leachate is released from bodies of waste that are improperly disposed of into the environment; it is this contaminated liquid that characteristically moves into the atmosphere and into groundwater and then migrates from disposal sites, all too often reaching human or other environmental receptors. At that point the waste can inflict serious public health or environmental damage. Remediation of such waste has been attempted in several ways. One common approach is removal of the contaminated material from one site to another whose characteristics might retard the movement of the contaminants. Other approaches treat contaminated waters by a variety of "pump-and-treat" methods, while still others attempt to treat the contaminants in situ, using microbial methods to neutralize certain toxic compounds.
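The way aquifer characteristics "retard the movement of the contaminants" can be quantified, to first order, by the classic linear-sorption retardation factor. The sketch below is illustrative only and not taken from this report; all parameter values are hypothetical.

```python
import math  # not strictly needed here; kept for consistency with related sketches

def retardation_factor(bulk_density, porosity, kd):
    """R = 1 + (rho_b / theta) * Kd, the classic linear-sorption model."""
    return 1.0 + (bulk_density / porosity) * kd

def plume_velocity(darcy_flux, porosity, r):
    """Average contaminant velocity: seepage velocity divided by R."""
    return (darcy_flux / porosity) / r

# Hypothetical aquifer: sandy material, weakly sorbing contaminant.
rho_b = 1.6      # bulk density, kg/L
theta = 0.30     # porosity (dimensionless)
kd = 0.5         # distribution coefficient, L/kg

r = retardation_factor(rho_b, theta, kd)
v_plume = plume_velocity(0.03, theta, r)   # Darcy flux of 0.03 m/day

print(f"R = {r:.2f}")                      # ~3.67: plume ~3.7x slower than water
print(f"plume velocity = {v_plume:.3f} m/day")
```

For a site whose soils sorb a contaminant this strongly, moving the waste to such a site (the "removal" remedy above) buys time for decay or treatment rather than eliminating the hazard.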
Some parts of the planet—fortunately small ones, such as a number of older industrial plants and uncontrolled dump sites—are already so severely contaminated that no reasonable amount of time or effort can return them to a pure predisposal state. Other parts are eminently suitable for total reclamation. Probably the vast majority deserve a treatment somewhere between total abandonment and total reclamation. The earth sciences input is particularly critical for determining the most suitable course of action for each problem area. Blind demand for total reclamation is in some instances unreasonably expensive, unlikely to succeed, and probably more punitive than constructive for society as a whole.
A 1990 NRC report, Rethinking High-Level Radioactive Waste Disposal, suggested that the present regulatory structure in the United States may seriously inhibit attainment of the desired aims in the disposal of radioactive waste because the present structure requires procedures to be decided on before excavation begins. Experience has taught geological engineers that it is only during the preparation of a site for mining or waste disposal that the site can be fully characterized. The existing rules are therefore inappropriate and cannot be expected to produce a successful outcome. Conversion of radioactive wastes to ceramics, which are then encased in additional outer containers of ceramic material, has been shown to be a satisfactory way of isolating waste and in many cases may be sufficient. Storage in accessible facilities in dry areas that are well above the groundwater surface and are surrounded by rock having ion-exchange fixation capability offers a second level of assurance.
The outstanding example of intensive site characterization is the current situation at the potential first high-level radioactive waste isolation facility at Yucca Mountain, Nevada. On May 28, 1986—39 years after the first National Research Council study of
the problem—this site was recommended by the secretary of the Department of Energy (DoE) and approved for detailed study. DoE subsequently prepared a site characterization plan, in accordance with the requirements of the Nuclear Waste Policy Act, to summarize existing information about geological conditions at the site, to describe the conceptual design, and to present plans for acquiring the necessary geological information. To date, detailed geological, seismic, and regional geophysical mapping has been accomplished, and 182 boreholes and 23 exploration trenches have been excavated within a radius of 9.5 km around the prospective site. A 1992 NRC study, Ground Water at Yucca Mountain, How High Can It Rise?, assessed the long-term outlook for this site and is a good example of the need to engage scientists with a broad range of geological and geophysical expertise. The report found no likelihood of a postulated environmental risk from future tectonic and hydrologic changes.
Contaminated Water, Air Pollution, and Acid Rain
Complex changes in water chemistry take place throughout the hydrologic cycle, from the time rain falls on the Earth to the time when it flows into the ocean. Some water moves quickly to streams, providing flood flows and moving large loads of sediment. Some moves through the soil and the shallow saturated groundwater system, sustaining the base flow of streams. Some water moves deeper into the crust, circulating for long periods. The deeper water becomes a major transportation system for mass and heat in the Earth and influences the complex chemical changes taking place within the crust.
Since the first irrigation projects, humans have affected the flow and chemistry of water as it circulates over the surface. During this century, such influence has greatly increased. Dams have been built on many of the streams of the world. Water quantity and quality are now seriously at risk, and societally generated contamination may be the most serious of water problems. The consequences of agricultural and resource-extraction practices threaten humans by contaminating the water supply and threaten the natural environment by changing ecosystems.
Wastes also are released into the atmosphere, increasing particulate matter and adding greenhouse gases that prevent heat from escaping the atmosphere. Modifying the greenhouse effect has the potential to seriously disturb climate, which implies changes in rainfall. Much of hydrology as it has been practiced, and many of its statistics, rest on the assumption that the hydrologic cycle does not change within spans of decades to centuries, in contrast with geological time. A rapidly changing climate undermines that basic assumption, making a large proportion of analyses suspect. A fluctuating hydrologic cycle makes deeper scientific understanding of the basic phenomena much more important. Hydrologic instability, with more intense extremes such as floods and droughts, appears to be more likely during a period of change. Predicting future climate and the associated rainfall and runoff is a great challenge to science.
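The sensitivity of conventional flood statistics to the stationarity assumption can be made concrete with a small sketch. The Gumbel method-of-moments fit below is a standard textbook procedure, but every flow statistic is hypothetical, chosen only to show how a modest shift in mean and variability moves a 100-year estimate.

```python
import math

def gumbel_quantile(mean, std, T):
    """T-year event from a Gumbel distribution fit by the method of moments."""
    beta = std * math.sqrt(6.0) / math.pi        # scale parameter
    mu = mean - 0.5772 * beta                    # location (Euler-Mascheroni const.)
    y = -math.log(-math.log(1.0 - 1.0 / T))      # reduced variate for return period T
    return mu + beta * y

# Hypothetical annual-peak-flow statistics (m^3/s) for a stationary record...
q100_old = gumbel_quantile(mean=500.0, std=150.0, T=100)
# ...and for the same site after a 10% shift in the mean and a 20% increase
# in variability, as a changing climate might plausibly produce.
q100_new = gumbel_quantile(mean=550.0, std=180.0, T=100)

print(f"100-yr flood, stationary fit : {q100_old:.0f} m^3/s")
print(f"100-yr flood, shifted climate: {q100_new:.0f} m^3/s")
```

A design flood estimated from the historical record alone would be exceeded far more often than once a century under the shifted regime, which is the sense in which "a large proportion of analyses" becomes suspect.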
The atmosphere is composed principally of nitrogen and oxygen, with minor amounts of carbon dioxide and water vapor and traces of many other gaseous materials. The airborne waste products of civilization include gases such as sulfur dioxide and sulfur trioxide, nitrogen oxides, carbon dioxide, and ozone. Rainfall dissolves these gases from the atmosphere, transporting them to the ocean. The sulfur and nitrogen species and the carbon dioxide are hydrolyzed to yield weakly acidic solutions that may still be substantially more acidic than ordinary surface waters. Eventually, these solutions will increase the acidity of surface waters, especially in lakes and rivers where the water is poorly buffered by rocks and soils. The enhanced solubility of many major and trace constituents of ordinary rocks in acid waters releases unfamiliar elements and compounds into the water. These increasing amounts of rock-derived chemical constituents, along with the increasing acidity of the water itself, can threaten biological systems.
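The baseline acidity of even unpolluted rain can be estimated with a back-of-the-envelope equilibrium calculation. The sketch below uses standard textbook constants, not values from this report; it shows why sulfur and nitrogen oxides matter so much—they push rain well below this already mildly acidic baseline.

```python
import math

# pH of rain in equilibrium with atmospheric CO2. Henry's law gives the
# dissolved CO2; the first dissociation of carbonic acid then controls
# [H+], so to a good approximation [H+] = sqrt(Ka1 * KH * pCO2).

KH = 10 ** -1.5        # Henry's constant for CO2, mol/(L*atm), near 25 C
KA1 = 10 ** -6.35      # first dissociation constant of carbonic acid
P_CO2 = 3.9e-4         # approximate partial pressure of CO2, atm

h_plus = math.sqrt(KA1 * KH * P_CO2)
ph = -math.log10(h_plus)
print(f"pH of 'clean' rain ~ {ph:.1f}")   # ~5.6
```

Acid rain observed in poorly buffered regions can reach pH 4 or lower, an order of magnitude or more beyond this carbonic acid baseline.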
Much sulfurous gas comes from electricity-generating plants that burn sulfur-bearing coal and from smelters processing sulfide ores. Coal contains sulfur both as sulfur-bearing organic compounds and as iron sulfides. Although it is possible to remove some of the iron sulfide prior to burning, the organic sulfur is an integral part of the coal. Corrective approaches for coal have relied on using the lowest-sulfur coals and on scrubbing the effluent gases. Corrections for smelter emissions have principally taken the form of prohibitory regulations, which have put the smelters out of business and led to transferral of the mining-smelting industry to other countries. Limited studies of alternatives to thermal smelting—such as dissolution mining or solvent extraction of finely crushed ores—have not yet yielded widely acceptable technologies, but they offer significant scientific and technical challenges as well as long-term economic, environmental, and national security incentives.
Although civilization's contribution to atmospheric acidity is a serious problem, our role can seem puny when compared with that of nature. Even modest volcanic eruptions can inject vast amounts of sulfur into the atmosphere very quickly, and the sulfur may remain airborne for several years, dispersing globally as sulfur dioxide in stratospheric aerosols and returning to the surface gradually. Scientific experience in observing such events has lately taken a leap forward with observations of the plume from the 1991 eruption of Mount Pinatubo in the Philippines. Understanding the global consequences of a major eruption is likely to be important for volcanology, for understanding waste emission, and for global change studies.
Atmospheric emissions and their consequences have been reviewed repeatedly by the NRC over the past decade, and with the passage of the Clean Air Act in 1990 it became clear that the national perspective on these matters was in the process of change. Hydrologic research was the topic of a recent full-scale assessment by the NRC. Solid-earth scientists are closely involved in hydrology, and their research spans both surface drainage and underground water. Underground hydrology plays an important role because of the need to ensure the isolation of radioactive waste and untreatable toxic fluids, both of which are considered for deep disposal only. In the past, shallow groundwater was often studied as a separate resource, assumed to be isolated from deeper waters. Modern theory recognizes the importance of both shallow and deeply circulating groundwater in the hydrologic cycle and thus in environmental pollution scenarios. For example, fluid contaminants introduced into a deep formation by a waste-injection well might someday reach the biosphere through shallow aquifers unless precautions are taken in the design of such facilities and enough research is invested to understand the hydrogeological setting.
Much current research in groundwater hydrology and environmental engineering is centered on the study of contaminant transport in shallow aquifers, with respect to both fluid flow and chemical reactions. Significant progress has been made with computational and laboratory simulation of transport processes, although the problems of accurately predicting contaminant transport over human or geological time scales remain largely unsolved. In this light, studies of hydrothermal processes such as sediment diagenesis and ore mineralization may provide insight and natural analogs for verifying models of chemical transport.
There are problems of cleanup and prevention of contamination in both surface water and groundwater. These involve the transport of chemical constituents by water, often with simultaneous chemical reactions. It is essential that more robust and accurate analytical models be developed for predicting the chemical fate and transport of contaminants in groundwater. Certain contaminants can be chemically remedied by means of absorption or caging, and many of the chemical reactions are controlled by microorganisms. Understanding the kinetics of complex chemical reactions in a heterogeneous medium populated by microorganisms poses an exciting, though daunting, scientific challenge. That challenge must be met.
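As a minimal example of the kind of transport calculation such models perform, the sketch below advances a one-dimensional advection-dispersion equation with an explicit upwind finite-difference scheme. This is a deliberately simplified illustration; the grid spacing, velocity, and dispersion coefficient are all hypothetical, and real contaminant models add sorption, decay, and the reaction kinetics discussed above.

```python
# 1-D advection-dispersion of a dissolved contaminant in groundwater,
# solved on a uniform grid with upwind advection and central dispersion.

def step(c, v, d, dx, dt):
    """Advance concentrations one time step (interior cells only)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                       # upwind advection
        disp = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2   # dispersion
        new[i] = c[i] + dt * (adv + disp)
    return new

nx, dx, dt = 50, 1.0, 0.2        # cells, cell size (m), time step (days)
v, d = 1.0, 0.5                  # velocity (m/day), dispersion (m^2/day)
# Stability: Courant number v*dt/dx = 0.2 and d*dt/dx^2 = 0.1, both safe.

c = [0.0] * nx
c[0] = 1.0                       # constant-concentration source at inflow

for _ in range(100):             # simulate 20 days of transport
    c = step(c, v, d, dx, dt)
    c[0] = 1.0                   # hold the source boundary

# The front has advected roughly v*t = 20 m, smeared by dispersion.
```

Even this toy model displays the behavior that complicates prediction over long time scales: numerical dispersion smears the front, and small errors in velocity or dispersion compound with every step.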
Recorded history affords but a brief glimpse of environmental change, one that fails to shed light on events and conditions that may confront us in the decades ahead. Global climates may soon be warmer than at any time during the past 1 million years, and sea level may stand higher than at any time during the past 100,000 years. Within decades, we may find ourselves in the midst of a biotic crisis that rivals the most severe mass extinctions of the past 600 million years. Nature is a vast laboratory that we can never manipulate or duplicate artificially on the scale necessary to test theories of global change. In developing plans for the future, we must therefore rely heavily on observations of nature itself.
Change is universal throughout the earth system, and has been since the origin of the Earth, but the term "global change" has come to be used much more narrowly to refer to human-induced changes affecting the atmosphere, hydrosphere, and biosphere. These changes are beginning to be recognized as likely to modify our environment in unprecedented ways within the next century. While many of the processes causing the changes are not unusual, human activities have introduced an accelerated rate of change into the earth system. That accelerated rate may prevent normal adaptation mechanisms in the atmosphere, hydrosphere, lithosphere, and biosphere from working without major consequences—at least to humans. Public interest has focused mainly on global climatic change, but the scientific community recognizes that the expected changes are much broader, extending to such phenomena as sea level, groundwater quality, pollution, and biodiversity. Exactly what phenomena are included as elements of global change varies, depending on the breadth of the
interests of the individual scientist or policy maker concerned.
Understanding Global Change
The worldwide scientific community has risen to the global change challenge. Over the past decade it has begun to address such questions as how the world is changing, what might happen in the next decades, and what might be done to avert possible unwanted changes. The problems are great because the world is always changing, and the differences we would like to be able to measure are very small, especially when seen against the background of general fluctuation.
The NRC has identified a set of issues and suggested possible lines of global change research (NRC, 1983, 1986). In a complementary effort, NASA established the Earth System Science Committee (ESSC), which attempted a comprehensive assessment of how the earth system works and of how satellite missions might document how it operates and how it might be changing (ESSC, 1988). Space provides the right environment from which to obtain an overall view of the planet's behavior, so it is not surprising that the NASA-sponsored study identified an important role for current and future measurements from space. Recognizing the scale and expense of the necessary programs, in the mid-1980s an interagency Committee on Earth and Environmental Sciences was established under the Federal Coordinating Council on Science, Engineering, and Technology with a working group on global change. In recent years, budget plans worked out through that structure have accompanied the president's budget when it has been presented to the Congress (e.g., Our Changing Planet, Office of Science and Technology Policy, 1992). These plans, which represent close cooperation among science planners in numerous agencies, reflect an integrated view of the earth system similar to that stressed in this report.
The federal government's planners have assigned the highest priority to the fluid parts of the earth system—clouds, ocean circulation, and biogeochemical fluxes—but solid-earth science research has not been neglected. Two of the sets of research priorities identified are of special importance to solid-earth scientists because they relate to earth system history and solid-earth processes. The latter category includes volcanic activity, sea level change, coastal erosion, and frozen ground; earth system history focuses on paleoclimatology, paleoecology, and ancient atmospheric composition.
When the scale of the national effort that would be needed for addressing the problems of global change became clear in the mid-1980s, the NRC established the Committee on Global Change (now the Board on Global Change), which has issued a number of reports, of which Research Strategies for the U.S. Global Change Research Program (NRC, 1990) is of special interest because it has a section devoted to strategies for the study of earth history. Study of the most recent past is emphasized; the past 1,000 to 2,000 years, the earlier Holocene, the most recent glacial cycle, and the past few glacial cycles are all discussed as offering specific opportunities for important research into global change. Environments of extremely warm periods and climate-biosphere connections during abrupt changes are also considered as particularly likely to yield significant research findings.
Science planners in the United States have been closely involved with developments in the International Council of Scientific Unions (ICSU). An International Geosphere-Biosphere Program (IGBP) was set up under the ICSU in the mid-1980s to study global change, and the NRC's Board on Global Change acts as the U.S. National Committee for the IGBP. The IGBP defined initial core projects in 1990 (IGBP Report #12) that both supplement and complement U.S. initiatives. The international body has set up a project on past global changes; its "two-stream approach" emphasizes earth history over the past 2,000 years and the glacial-interglacial cycles in the Late Quaternary.
The scientific community has clearly defined the kinds of research needed to understand global change, and the Bush administration recognized the scale of the effort by submitting programs to Congress that amounted to roughly $1 billion a year. Because measurements from space are critical and because observation over a long time is essential, costs of about $2 billion a year over 20 years could be involved.
Mitigation and Remediation
The NRC has lately looked at the question of what the policy implications are of global change (Policy Implications of Global Change, 1991). Among its recommendations, some specific roles for solid-earth science research may be discerned—for example, "Make greenhouse warming a key factor in planning for our future energy supply mix. . . ." Action items include:
Encouraging broader use of natural gas. On both a national and a global scale, there is a clear
opportunity not only for seeking out more natural gas supplies but also for applying geology and geophysics to efficient development and production.
Developing and testing a new generation of nuclear reactor technology. Understanding the solid-earth will be important for nuclear fuel, waste isolation, and hazard assessment.
Accelerating efforts to assess the economic and technical feasibility of carbon dioxide sequestration from fossil-fuel-based generating plants. Efforts in "clean coal" technology have concentrated strongly on high-temperature combustion and related aspects. Solid-earth scientists can help identify which coals from which areas can be used most efficiently in the new technology.
Other recommendations relate to forestry, agricultural research, and making the water supply more robust by coping with present variability. Solid-earth scientists through their understanding of the land surface, its drainage, and its groundwater clearly have much to contribute in all basic aspects of mitigation and remediation.
Three Roles for the Solid-Earth Scientist
There are three aspects of global change in which the solid-earth scientist is particularly involved. The first is understanding global change. The record of the past reveals both the extent and the pace of change; it is useful not only in showing how large past changes have been and how fast they happened but also in throwing light on how remote parts of the earth system have accommodated themselves to perturbations. Understanding the approaching changes will be easier if we understand what has already happened. Models of the future can be tested for validity by seeing how well they reproduce past conditions. The past record is also important in helping to distinguish between anthropogenic change and what is sometimes called natural variability, although the involvement of the human race is hardly unnatural. The role of the solid-earth scientist in increasing understanding of global change is to document the background rates, ranges, and intensities of the environmental kaleidoscope without the factor of anthropogenic change. That documentation can be obtained in the ways outlined in Chapter 3, The Global Environment and Its Evolution.
The second aspect of global change in which solid-earth scientists will play a part is assistance in reducing the extent of future change. Increasing the world's reserves of natural gas, for example, could make more readily available a fuel that produces less carbon dioxide than most other forms of fossil hydrocarbon. Better understanding of aquifers worldwide could lead to informed management practices less likely to lead to pollution from waste disposal or to depletion of the water supply.
Because the world's population is so large and is continuing to increase, some kinds of global change are inevitable. More coal burning in populous India and China over the next 20 years can hardly be avoided and cannot fail to increase the carbon dioxide content of the atmosphere faster than plausible reductions in carbon dioxide output in the most advanced communities. Some consequent climatic change in the next 50 years appears unavoidable. Because of this prospective change, there is a third useful role for solid-earth scientists in global change research: study of how to mitigate and ameliorate the effects of possible changes. The following examples illustrate how such studies could be effective. Research in areas where sea level is rising today, such as the Gulf Coast where water extraction, reduced sediment supply, and other factors have induced rapid local subsidence, is likely to pay dividends if sea level rise becomes more general in the next century. Research on the hydrology of arid lands—for example, on how transitions from desert to more moist conditions take place—can be carried out now in areas such as the Sahara-Sahel boundary. The results of such studies could prove useful if, as seems not unlikely, there are changes in the distribution of the present climatic zones of the continents in the next century.
At present, models of global change are not capable of suggesting the possible extent of changes in the future, partly because of limited spatial resolution and partly because they cannot accommodate all the controlling variables. Solid-earth scientists have the opportunity to work with other earth scientists, including social scientists, in testing and refining models of the earth system that will be needed for understanding global change sufficiently well to permit informed decision making. The roles of the solid-earth scientist in understanding change, managing change, and mitigating the effects of unavoidable change are all likely to increase in importance in the next decades.
The problems and the research discussed in this chapter are concerned with the major interactions
between the earth sciences and society and involve many aspects of the earth system that impinge directly on the quality of life. The problems affect the daily lives of the average American more than those discussed in any other chapter of this report. The Research Framework (Table 5.1) summarizes the research opportunities identified in this chapter with reference also to other disciplinary reports and recommendations. These topics, representing significant selection and thus prioritization from a large array of research projects, are described briefly in the following section.
The main topics are geological hazards and changes in the environment on global and local scales. Some changes occur naturally, and others are caused by the activities of society. Human beings are now the most significant geological agents. The topics are identified in Table 5.1 under Objective C, mitigating geological hazards, and Objective D, minimizing and adjusting to global and environmental change, but they also involve some aspects of Objective B, finding, extracting, and disposing of natural resources, and certainly depend on Objective A, understanding the processes in research themes I-IV. These themes extend from the surface into the interior and are all interrelated. Research projects commonly involve more than one theme, as was discussed in connection with the research opportunities for Chapters 3 and 4. In particular, the fluxes of material involved in the biogeochemical cycles (II), largely transported in fluids (III), have influenced changing paleoatmospheres and paleoceans (I).
Research conducted during recent decades on geological processes and the causes and mechanisms of geological hazards has provided unprecedented opportunities to reduce the potential for disaster, and the International Decade for Natural Disaster Reduction is concerned with converting the opportunities into science-wrought realities. High national priority has been attached to the comprehensive U.S. Global Change Research Program, which encompasses the full range of the earth system, and many research opportunities have been identified.
Objective C: To Mitigate Geological Hazards
Although understanding of hazards and the engineering capacity to control them are both growing, hazard losses continue to increase because this knowledge is not reflected in engineering design and in public regulation, private policies, and investment decisions. Achievement of the mitigation objective therefore requires not only continued basic research but also attention to matters such as social science and governmental issues. These issues are the concern of the U.S. program for the International Decade for Natural Disaster Reduction. They include such items as the following:
Scientific and engineering efforts aimed at closing critical gaps in knowledge to reduce loss of life and property.
Guidelines and strategies for applying existing knowledge.
Physical adjustments for avoiding the impacts of hazards, such as land-use planning, building site evaluation, building to withstand hazards, predicting occurrences, and preventing hazards.
Social adjustments for avoiding the social effects of hazards, including land-use controls and standards, public awareness campaigns, emergency preparedness programs, and financial arrangements to spread economic loss among a larger population.
Education of the public and of public officials to raise the level of awareness about how to plan for and respond to natural hazardous events.
Governmental issues, including the strengthening of communication links among federal officials and among federal, state, and local levels of government and the development of efficient lines of authority for decision making, especially in multiple-hazard events.
A concerted effort to identify research on any one hazard that has applications to other hazards, an approach that is very cost-effective.
There is clearly commonality in issues related to geological hazards, and there is therefore a strong overlap between the research needs and opportunities listed under the following subheadings. There are extraordinary opportunities in geoscience for scientists and engineers to make valuable and lasting contributions to public welfare.
Earthquakes, Volcanoes, and Landslides
Earthquake Prediction. Both the prediction of events and the assessment of associated hazards emphasize cross-disciplinary research that draws on seismology, geodesy, geochemistry, hydrology, tectonics, and geomorphology. The most recent large events in the best-instrumented seismogenic regions are particularly important as sources of potential understanding and of new ways of thinking, such as that represented by a dynamical systems approach. There is a continuing need to improve dialogue about seismic risk between solid-earth scientists and engineers and decision makers, as well
TABLE 5.1 Research Opportunities
C. Mitigate Geological Hazards—Earthquakes, Volcanoes, Landslides
D. Minimize Perturbations from Global and Environmental Change—Assess, Mitigate, Remediate
I. Global Paleoenvironments and Biological Evolution
■ Environmental impact of mining coal
■ Past global change
■ Catastrophic changes in the past
■ Solid-earth processes in global change
■ Global data base of present-day measurements
■ Climatic effects of volcanic emissions
II. Global Geochemical and Biogeochemical Cycles
■ Soil processes and microbiology
■ Earth science/materials/medical research
■ Biological control of organic chemical reactions
■ Geochemistry of waste management
III. Fluids in and on the Earth
■ Seismic safety of reservoirs
■ Precursory phenomena and volcanic eruptions
■ Volume-changing soils
■ Isolation of radioactive waste
■ Groundwater protection
■ Waste disposal: landfills
■ Cleanup of hazardous waste
■ New mining technologies
■ Waste disposal from mining operations
■ Disposal of spent reactor material
IV. Crustal Dynamics: Ocean and Continent
■ Earthquake prediction
■ Geological mapping
■ Remote sensing of volcanoes
■ Quaternary tectonics
■ Soil cohesion
■ Landslide susceptibility maps
■ Landslide prevention
■ Age-dating techniques
■ Real-time geology
■ Systems approach to geomorphology
■ Extreme events modifying the landscape
■ Geographic information systems
■ Land use and reuse
■ Hazard-interaction problems
■ Detection of neotectonic features
■ Bearing capacity of weathered rocks
■ Urban planning and underground space
■ Geophysical subsurface exploration
■ Detection of underground voids
V. Core and Mantle Dynamics
Facilities, Equipment, Data Bases
■ Global data base of present-day measurements for detection of future changes
■ Geographic information systems
■ Remote sensing from satellites
■ Dating techniques
■ Geophysical techniques for subsurface exploration in engineering
■ New mining technologies
■ New methods for fracture sealing
■ Methods to densify soils
■ Geophysical techniques for subsurface exploration
■ Global data base of present-day geochemical and geophysical measurements
Education: Schools, Universities, Public
■ Essential to devise ways to inform the public and policy makers about the scientific basis for understanding geological hazards and environmental changes
as to become fully aware of how risk is perceived in the community at large. Many societies cannot afford earthquake-resistant construction, and even those that can afford it retain large inventories of structures that are far below the state of the art in design and construction.
Seismic Safety of Reservoirs. Reservoir safety needs continued research on topics such as proximity to active faults, reservoir-induced seismicity, and the known seismicity of particular areas.
Paleoseismology. Development of data in the new subdiscipline paleoseismology has provided a sufficiently long historical sample to permit the identification of patterns and rates of occurrence of large earthquakes. For example, great earthquakes in intracontinental regions are now known to recur at intervals as long as several thousand years, and seismically quiescent periods between clusters of events on a given fault system may be hundreds of thousands of years long.
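The recurrence estimates that paleoseismology makes possible can be sketched in a few lines. The event dates below are hypothetical, and the Poisson model is only one simple way to turn a mean recurrence interval into a probability; it is shown here purely as an illustration of how trench-dated events feed hazard estimates.

```python
import math

def recurrence_stats(event_ages, horizon):
    """Mean interval between dated events (years) and the Poisson
    probability of at least one event within `horizon` years."""
    ages = sorted(event_ages)                    # oldest last
    intervals = [b - a for a, b in zip(ages, ages[1:])]
    mean_interval = sum(intervals) / len(intervals)
    rate = 1.0 / mean_interval                   # events per year
    p_event = 1.0 - math.exp(-rate * horizon)    # P(N >= 1) in horizon
    return mean_interval, p_event

# Four trench-dated earthquakes, roughly 2,000-year spacing (hypothetical),
# ages in years before present
mean_iv, p50 = recurrence_stats([500, 2600, 4450, 6500], horizon=50)
print(round(mean_iv), round(p50, 3))
```

With a mean interval near 2,000 years, the chance of an event in any given 50-year window is only a few percent, which is why long paleoseismic records, not short historical ones, are needed to characterize such faults.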
Precursory Phenomena and Volcanic Eruptions. Basic research in geology, petrology, geochemistry, and geophysics (see Chapter 2) is required in order to understand what triggers volcanic eruptions and how to predict them. Eruptions are generally preceded by seismic activity as volcanoes are reactivated, and the activity may include destructive earthquakes.
Geological Mapping. Mapping of potentially active volcanoes to determine their past eruptive history and their eruptive frequency, with the help of new dating methods, is an effective, low-cost way of reducing risk from volcanic eruptions themselves and from the associated devastating mudflows. Especially useful is our present ability to predict which radial direction from a volcano's center is likely to bear the brunt of explosive or mudflow destruction.
Satellite-Based Surveying Systems and Remote Sensing. These observation techniques offer great potential for monitoring the hundreds of potentially active volcanoes that have not been mapped and for measuring the increased temperatures associated with rising magma below the volcanoes.
Quaternary Tectonics. Fundamental research focusing on Late Quaternary history (roughly the past 500,000 years), including tectonic geology and geomorphology, paleoseismology, neotectonics, and geodesy, presents an unusual opportunity to catalog and understand ongoing active tectonic processes. This understanding could become the basis for assessing the potential to predict tectonic activities and other changes up to several thousand years in the future. Predicting or forecasting the future is essential for designing ways to minimize the disastrous effects of earthquakes and volcanic eruptions and for selecting the most stable sites for long-term disposal of hazardous and radioactive wastes.
Soil Cohesion. Improved methods of densifying soil materials or increasing their cohesion can increase the shear strength of susceptible soils and thereby eliminate the potential for liquefaction during earthquakes and other cyclic vibrational loadings.
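The screening calculation behind liquefaction susceptibility can be illustrated with the textbook Seed-Idriss "simplified procedure" for the cyclic stress ratio (CSR); the site values below are invented, and a real evaluation would compare CSR against the soil's cyclic resistance.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d,
    using the common linear depth-reduction factor r_d for z < 9.15 m."""
    r_d = 1.0 - 0.00765 * depth_m          # shear stress reduction with depth
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# Sand layer at 5 m depth: total stress 90 kPa, effective stress 60 kPa,
# peak ground acceleration 0.25 g (all hypothetical)
csr = cyclic_stress_ratio(0.25, 90.0, 60.0, 5.0)
print(round(csr, 3))
```

The higher the CSR relative to the soil's cyclic resistance, the greater the liquefaction risk; densification raises that resistance, which is the point of the research recommended above.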
Volume-Changing Soils. Much of the western United States is dotted with patches of land that are susceptible to significant changes in volume, activated by slight increases in moisture content. The changes, whether collapse or swelling, are sufficient to cause structural damage to engineering works. Such damage is diffuse and rarely life threatening but amounts to as much as $2 billion per year. Statewide coverage maps of areas with inherently weak soil fabrics or certain clay minerals could raise public awareness to the point that such damage could be nearly eliminated.
Landslide Susceptibility Maps. Maps for landslide detection and evaluation have been developed by applying new tools such as geographic information systems, which permit digital mapping of susceptibility factors. Major strides can be expected in decision making about human activities in and around landslide-prone terrain.
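The core GIS operation behind such maps is a weighted overlay of factor rasters. A minimal sketch follows; the layers, their normalization to 0..1, and the weights are all invented for illustration, not a calibrated susceptibility model.

```python
def susceptibility(layers, weights):
    """Weighted overlay of equally shaped raster layers (lists of rows);
    each layer is assumed pre-normalized to the range 0..1."""
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for layer, w in zip(layers, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * layer[r][c]
    return out

# Tiny 2x2 rasters (hypothetical): steepness, soil weakness, wetness
slope   = [[0.9, 0.2], [0.4, 0.8]]
soil    = [[0.7, 0.1], [0.5, 0.9]]
wetness = [[0.6, 0.3], [0.2, 0.7]]
score = susceptibility([slope, soil, wetness], [0.5, 0.3, 0.2])
print(round(score[0][0], 2))
```

Each cell's score is a weighted sum of its factor values; in practice the same operation runs over millions of cells, and the weights come from statistical calibration against mapped landslide inventories.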
Landslide Prevention. Alternative methods of preventing landslides and of stabilizing active landslides are a prime area for research, for the purpose of reclaiming otherwise unusable land for high-value human use.
Age-Dating Techniques. The greatest research need is for data management and physical models to quantify rates of active tectonic processes. Public policy decisions, for example, need such data arrays to determine what levels of earth hazard risks are acceptable. Decisions rest largely on evaluations of the rates (and the variability of the rates) of processes, the frequency of events, and the prediction of when hazards might become critical.
Real-Time Geology. Computers and electromechanical instrumentation now make possible the monitoring and instantaneous analysis of geological processes that occur very rapidly, and this capability leads to exciting new possibilities for prediction and forecasting. In addition, the documentation and understanding of rapid geological processes are an entirely new dimension for earth scientists to explore.
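One classic real-time detection scheme is the STA/LTA trigger: the ratio of a short-term average to a long-term average of signal amplitude, which jumps when a rapid event begins. The sketch below runs on a synthetic trace with invented amplitudes and threshold; operational systems use streaming data and carefully tuned windows.

```python
def sta_lta(signal, short, long):
    """Return the STA/LTA ratio at each sample where both windows fit."""
    ratios = []
    for i in range(long, len(signal) + 1):
        sta = sum(abs(x) for x in signal[i - short:i]) / short
        lta = sum(abs(x) for x in signal[i - long:i]) / long
        ratios.append(sta / lta)
    return ratios

# Quiet background followed by a sudden burst (amplitudes are invented)
trace = [1.0] * 20 + [10.0] * 5
r = sta_lta(trace, short=5, long=20)
triggered = any(x > 3.0 for x in r)   # simple detection threshold
print(triggered)
```

When the burst arrives, the short window fills with large amplitudes faster than the long window, the ratio crosses the threshold, and an alert can be issued within seconds, which is the essence of real-time geology.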
Systems Approach. A systems approach to geomorphic and engineering problems must be developed if serious and unforeseen consequences of human actions are to be avoided. The geomorphologist has the long-term perspective of landform change that is needed in meeting and dealing with many engineering and land management programs, and familiarity with the effects of time, landform evolution, and thresholds of instability are critical for prediction of future events and landform responses.
Extreme Events. The effect of extreme events in modifying or conditioning the landscape is a developing field. For example, rainfall-runoff floods and floods from natural-dam failures may be accompanied by massive erosion and sedimentation. A related field is the interdependency of events, as illustrated by the 1980 eruption of Mount St. Helens. The eruption and huge landslide formed several large lakes, and groundwater disruption became the cause of new instability of the natural dam. Sediment eroded from the debris avalanche in the Toutle River valley continues to plague downstream channel capacity, fish habitat, and water quality. A huge and expensive dam will be required to mitigate this problem.
Land-Use and Urban Planning
Geographic Information Systems (GISs). The way in which solid-earth science information is used with other kinds of societally important information requires the information-layering capability that is the essence of GISs. The solid-earth scientist has an important role in establishing appropriate standards for the way in which specialized data are used in GISs, as well as in ensuring that such information is up to date and of high quality.
Land Use and Reuse. The effects of geological constraints on land use and reuse will continue to be a significant issue for the future. Society must become a deferential part of the environment, not its master; human actions have profound and far-reaching impacts on the rest of the earth system.
Hazard-Interaction Problems. A shift in perspective is needed from the incrementalism of individual site-specific hazards to the broader systems approach, dealing with single or multiple hazards common to broader physiographic regions. Increasingly, earth scientists, civil engineers, land-use planners, and public officials are noting the existence of interactive natural hazards that occur simultaneously or in sequence and that produce synergistic cumulative impacts that differ from those of their separately acting component hazards.
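One elementary multi-hazard calculation shows why interaction matters. Under the naive assumption that hazards are independent, the annual chance of at least one event is 1 minus the product of the individual non-occurrence probabilities; the probabilities below are invented. Interactive hazards violate exactly this independence assumption, which is why the simple formula can misstate the true combined risk.

```python
def p_any(annual_probs):
    """Probability of at least one hazard occurring in a year,
    under the (naive) assumption that the hazards are independent."""
    p_none = 1.0
    for p in annual_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Earthquake, landslide, flood: hypothetical annual probabilities
combined = p_any([0.02, 0.05, 0.10])
print(round(combined, 4))
```

When an earthquake can itself trigger the landslide that dams the river that causes the flood, the events are correlated in sequence and the combined impact exceeds what this independent-hazards arithmetic suggests.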
Detection of Neotectonic Features. Identifying these features, which are key data links to a better understanding of long-term seismicity, requires geological mapping of each seismically active region of the country. Such assessments are critical to the siting and design of safe facilities, whether of a critical-performance nature or serving dependent populations (hospitals and schools), and to the establishment of realistic building codes.
Soil Processes and Microbiology. New theories of the relations between microbiology and soil processes, which are important because food production is tied to soil science, can be established with new chemical and instrumental techniques. Large-scale remote sensing can monitor growth patterns on different soil types and enhance land management to preserve our precious soil bank.
Rock-Bearing Capacity. Research on the bearing capacity of the weathered rock zone, together with better definition, identification, and classification of strong soil/weak rock, could save considerable money in both the design and construction of buildings.
Urban Planning and Underground Space. If we are to unclog our cities with the limited financial resources at hand, planning must start to take into account utilization of the subsurface. The use of underground space is an undeniable necessity of the future, to be undertaken as population centers are redeveloped. There is only so much room on the land, and the prime space must be used for human habitation. With the predicted increase in usage of underground space, new technologies for characterization and sealing of rock fractures against groundwater inflow will become a necessity.
Geophysical Subsurface Exploration. Improved geophysical techniques for subsurface exploration and computer-based methods for data processing offer new opportunities to predict and control human-induced land subsidence. Techniques such as ground-penetrating radar are now commonly used to detect underground cavities, such as caverns and abandoned mines at shallow depths, and seismic tomography is being experimented with to evaluate the conditions of pillars in abandoned mines of such urban areas as Pittsburgh, Birmingham, and Kansas City.
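The basic conversion behind ground-penetrating radar void detection is simple: a reflector's depth is half the two-way travel time multiplied by the radar velocity of the ground, which depends strongly on the material. The velocity used below is a typical textbook value for dry limestone, and the travel time is hypothetical.

```python
def gpr_depth(two_way_time_ns, velocity_m_per_ns):
    """Depth (m) to a reflector from GPR two-way travel time."""
    return velocity_m_per_ns * two_way_time_ns / 2.0

# Dry limestone: radar velocity ~0.12 m/ns (illustrative textbook value);
# an echo at 50 ns two-way time places a cavity roof near 3 m depth.
depth = gpr_depth(50.0, 0.12)
print(depth)
```

The material dependence is the practical difficulty: the same 50 ns echo in wet clay, with a much lower velocity, would correspond to a much shallower target, so velocity calibration is essential to any void-detection survey.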
Underground Void Detection. The detection of underground voids by indirect methods needs to be greatly improved. Many subsidence problems are associated with either carbonate-solution caverns or abandoned underground mines, and it is usually
impractical to drill out suspect areas. Prediction of limestone sinkhole collapse will remain elusive until the triggering mechanisms are better understood.
Objective D: To Minimize Perturbations from and Adjust to Global and Environmental Change
Mineral Products and Health
Interdisciplinary Earth Science/Materials/Medical Research. Interdisciplinary research in these apparently diverse fields can provide a substantial opportunity to address major societal issues regarding health. This involves research into and public education about acceptable levels of risk in relationship to costs. Three examples are (1) the "asbestos hazard" where the relation between crystal size, shape, structure, and composition and chemical/ biological interaction is particularly important; (2) the radon concern; and (3) the presence and hazard to humans of trace elements present in the environment, especially if they may interact.
Contamination of Aquifers
Radioactive Waste Isolation. To ensure the isolation of radioactive waste, which Congress has designated for deep disposal only, hydrologic research on sedimentary and volcanic deposits plays an important role. Sedimentary deposits contain aquifers, which are the single most important reservoir of fresh water.
Groundwater Protection. Protecting groundwater quality requires monitoring and sampling, such as vadose zone sampling, sampling of volatile organic components in groundwater, and sampling volatile organic components that might affect employee safety. Improved methods for testing field permeability and in situ methods of measuring the physical and chemical properties of soil and rock are needed.
Organic Chemistry Control. Biological control of organic chemical reactions, especially in groundwater, is a rich area for research—research with a high potential monetary payoff in terms of current and ongoing expenditures for waste site remediation. Chemical reactions involving the fate of in situ organic compounds are controlled by soil microorganisms, and understanding the kinetics of such reactions poses a scientific challenge. Introducing an organism that produces a benign byproduct may be the cheapest way to remediate some forms of groundwater contamination. In many instances groundwater moves so slowly that colonies of microorganisms can actually move with the plume of contamination.
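The kinetics involved can be sketched with the simplest model: first-order biodegradation along a steady one-dimensional plume, where concentration falls as C(x) = C0 exp(-k x / v) for decay rate k and seepage velocity v. All values below are illustrative; real plumes also disperse, sorb, and have spatially variable microbial activity.

```python
import math

def plume_concentration(c0, k_per_day, v_m_per_day, x_m):
    """Downgradient concentration under first-order decay, steady 1-D flow."""
    return c0 * math.exp(-k_per_day * x_m / v_m_per_day)

# 10 mg/L source, decay half-life ~70 days (k ~ 0.01/day),
# groundwater moving 0.1 m/day, observation well 50 m downgradient
c = plume_concentration(10.0, 0.01, 0.1, 50.0)
print(round(c, 3))
```

The interplay of the two rates is the key point: because the water moves only 0.1 m/day, the contaminant spends 500 days in transit over 50 m, long enough for even a slow microbial decay rate to reduce the concentration by more than two orders of magnitude.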
Waste Management and Environmental Change
Waste Management: Landfills. The earth science community is equipped to develop management expertise for those materials that must be consigned to land burial.
Waste Management Geochemistry. Recycling metals from waste could give rise to a new field, the geochemistry of waste management.
Hazardous Waste Cleanup. In situ cleanup of uncontrolled hazardous waste sites vitally needs attention. The importance of fracture sealing of radioactive waste repositories in the subsurface cannot be overemphasized. Future generations should not be left with isolated pockets of hazardous material whose limits will undoubtedly become unknown with time.
Waste Disposal from Mining Operations. The problems with disposal of the waste from mining, milling, and in situ leaching need much greater emphasis, both for better understanding and, where possible, for mitigation of the potential adverse effects of exploiting and utilizing energy resources. Research and technological development are needed. Also, methods for site selection, characterization, and design of proposed repository sites for radioactive wastes have still not been developed.
Coal Mining and the Environment. The environmental impact of coal mining and the environmental acceptability of coal as a fuel are the most important aspects of coal research. Every stage of coal development, from exploration to consumption, has potential environmental impacts. Mine safety and mine site rehabilitation will also require research.
Nuclear-Energy-Related Wastes. Disposal of spoil waste at mine sites and of spent reactor material is the most important research area for nuclear energy. Recent concern about atmospheric carbon dioxide buildup, as well as about the increase in acid rain resulting partially from coal usage, may lead to a reevaluation of policies relating to nuclear power.
Global Change. The solid-earth scientist is the custodian of the past record of global change, and the research areas recommended in Chapter 3 are related to that record. Here the emphasis is on the most recent past.
Past Global Change. The record in ice cores, tree rings, deep-sea cores, lake-bed deposits, and surface features like sand dunes and glacial deposits is generally the most complete record of the recent
past, but other environments preserve some information. Integrated pictures of what the world was like at specific times—for example 5,000, 10,000, or 20,000 years ago—such as those compiled by the CLIMAP and COHMAP consortia can help in understanding how the world has changed. They can also be used to test the usefulness of the models constructed by geophysical fluid dynamicists.
Abrupt and Catastrophic Changes in the Past. Episodes of sudden environmental change have been identified from times as recent as 10,000 years ago back to 66 million years ago and more. If we can work out what happened under the peculiar circumstances of those times, it may help in understanding the likely effects of the current changes.
Solid-Earth Processes in Global Change. We need a better understanding of how sea level might be changing at present and the effects of volcanism, both above and below sea level, on the global environment. The volume of the ice sheets should be monitored on the decadal scale, and changes in permafrost areas should be observed. Monitoring global change at the surface from space, using the ongoing high-spatial-resolution imagery of Landsat and SPOT-type instruments, is essential for seeing what is happening in such environments as the Sahel, central Asia, and the Altiplano.
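Detecting slow change in such monitoring series reduces, at its simplest, to fitting a trend. The sketch below fits an ordinary least-squares line to synthetic annual sea-level readings constructed with a built-in 2 mm/yr rise; real records are noisy, and separating trend from interannual variability is the hard part.

```python
def linear_trend(years, values):
    """Slope of the ordinary least-squares line through (years, values)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

# Synthetic tide-gauge record: exactly 2 mm/yr rise, no noise
years = list(range(1990, 2000))
level_mm = [2.0 * (y - 1990) for y in years]
print(round(linear_trend(years, level_mm), 3))
```

With realistic noise of tens of millimeters, a decade is often too short to resolve a millimeter-scale trend, which is why the text calls for sustained decadal-scale monitoring rather than short campaigns.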
Present-Day Geochemical/Geophysical Data Base. A global data base of present-day geochemical and geophysical measurements needs to be established for direct detection of future changes. Such data are lacking for remote and underdeveloped regions of the world.
Climatic Effects of Volcanic Emissions. The effect on climate modification of volcanic emissions, and their interactions with the atmosphere and hydrosphere, can be better understood from global monitoring of volcanism.