4 Climate and Floods: Role of Non-Stationarity

Flood frequency analysis, as traditionally practiced, assumes that annual maximum floods conform to a stationary, independent, identically distributed random process. The assumption that floods are independent and identically distributed in time is, however, at odds with the recognition that climate naturally varies at all scales, and that climate may additionally be responding to human activities, such as changes over the past century in atmospheric composition or in global land use patterns, which have changed the climate forcing and perhaps the hydroclimatic response on regional scales in recent decades. Porporato and Ridolfi (1998) demonstrate that the estimated flood exceedance probability can increase quite rapidly with time even in the presence of rather mild rising trends in the annual maximum flood. As mentioned in Chapter 2, Knox (1993) makes the same point. Thus, it is important to acknowledge that non-stationarities are likely to be present in the records and to discuss potential sources of such trends or non-stationarities. There is considerable evidence of regime-like or quasi-periodic climate behavior and of systematic trends in key climate variables over the last century and longer (see NRC, 1998a for one overview). The unambiguous attribution of cause for such non-stationarities in a finite record is difficult, given the rather rich, nonlinear dynamics of the climate system. Even with stationary underlying dynamics (i.e., no change in the governing equations or parameters), finite sample statistics of a nonlinear dynamical system can be non-stationary as the system evolves from one regime to another. The nature of the nonlinear oscillations of the system, as well as its regime probabilities and mean state, may change as the external forcings (e.g., solar radiation or greenhouse gases) are changed.
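The Porporato and Ridolfi point can be illustrated with a minimal sketch, assuming a purely hypothetical log-normal flood population whose log-mean drifts upward by a small fraction of its standard deviation each year; the annual probability of exceeding the fixed "stationary" 100-year level then grows well past 1%:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters: annual maxima ~ lognormal(mu_t, sigma),
# with a mild upward trend of 0.5% of sigma per year in the log-mean.
mu0, sigma = 10.0, 0.6
design_q = np.exp(mu0 + sigma * norm.ppf(0.99))  # stationary "100-year" flood

years = np.arange(51)
mu_t = mu0 + 0.005 * sigma * years
p_exceed = norm.sf((np.log(design_q) - mu_t) / sigma)

# p_exceed starts at 0.01 and rises monotonically; after 50 years the
# fixed design level is exceeded nearly twice as often as intended.
```

The trend here is deliberately mild (a quarter of a standard deviation over 50 years), yet the exceedance probability nearly doubles, which is the essence of the Porporato and Ridolfi result.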
The stationarity assumption in flood frequency analysis has persisted because of (a) short historical records that limit a formal analysis of non-stationarities, (b) the lack of a formal framework for analyzing non-stationary flood processes and the associated annual risk, and (c) institutional adherence to engineering practice guidelines. As record lengths have increased, trends in floods and other processes have been observed. The ongoing global climate change debate and the identification of interannual and decadal ocean-atmosphere oscillations (e.g., the El Niño/Southern Oscillation), and their teleconnections to continental hydroclimate, have led to increased awareness of this issue. Cyclical or monotonic non-stationarities pose a serious challenge to flood frequency and risk analysis and to flood control design and practice. If cyclical or regime-like variations arise from the natural dynamics of the climate system, a
relatively short historical record may not be representative of the succeeding design period. Further, by the time one recognizes that the project operation period has been different from the period of record used for design, the climate system may be ready to switch regimes again. Thus, it is unclear whether the full record, the first half or the last half of the record, or some other suitably selected portion is most useful for future decisions without a better understanding and prediction of the climate regimes. This is one issue faced in an analysis of the American River and Sacramento flood protection question. In addition, if a monotonic trend in floods is indicated in a reasonably long record and the possibility of global climate change effects is considered, projections of future flood potential are still unclear. For one, the effects of global climate changes may appear more in the variability of the process than in the mean, and may translate into an increased probability of recurrence of certain climate regimes more than others. No means for believable projections of such changes have as yet emerged. Deterministic coupled ocean-atmosphere general circulation models do not yet adequately reproduce observed low frequency climatic patterns or watershed scale precipitation, and hence their utility for answering this question is limited. The thrust of these comments is that the uncertainty associated with the flood frequency estimates presented in Chapter 3 is likely to be considerably greater than that indicated by the statistical estimates. Some of the non-stationarities of the American River flood records and related hydroclimatic records are documented in this chapter.

General Meteorological Features of Major Floods

The utility of connecting atmospheric circulation patterns to flood events is now well established.
Hirschboeck (1987a,b, 1988) demonstrated that catastrophic floods, as well as trends in floods, may be best understood in terms of large-scale and regional circulation pattern anomalies. She also proposed mixture estimation methods for flood frequency estimation conditional on the frequency of atmospheric circulation patterns. Time constraints precluded such an analysis for the American River basin. The discussion here is similarly motivated in that an explanation for changes in the flood frequency of the American River is sought in terms of associated changes in the large-scale atmospheric circulation patterns. In the ensuing discussion, the term "annual" will refer to a winter-centered 12-month period, such as the water year (Oct-Sept) or the period July-June. Similarly, "winter" will in general refer to the entire cool portion of the year, not just December-February as in much meteorological literature. Central California has a modified Mediterranean climate. Precipitation usually builds to a maximum in winter and subsides to nearly nothing in the summer, so that there are essentially two seasons rather than the traditional four. Consequently, major Sierra Nevada flooding occurs predominantly in the middle of the wet season and rarely in the summer months. Heavy snowmelt years can bring streams to slightly over their banks in late spring, but this type of flooding is not catastrophic. The 10 largest annual maximum floods in the 1905-1997 period on the American River in the Fair Oaks record occurred between late November and early
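The mixture approach Hirschboeck proposed can be sketched in a few lines; the regime frequencies and log-normal parameters below are entirely hypothetical, chosen only to show how an unconditional exceedance probability is assembled from circulation-conditioned pieces:

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical two-regime mixture: each circulation pattern occurs with
# probability p_regime[i] and carries its own flood distribution.
p_regime = np.array([0.7, 0.3])
dists = [lognorm(s=0.5, scale=np.exp(11.0)),   # "ordinary" winters
         lognorm(s=0.8, scale=np.exp(11.4))]   # "wet-pattern" winters

def exceedance(x):
    """Unconditional annual exceedance probability of flow level x."""
    return sum(p * d.sf(x) for p, d in zip(p_regime, dists))

p = exceedance(2.0e5)   # e.g., a 200,000 cfs three-day flow
```

If the regime frequencies themselves drift over decades, the unconditional flood frequency drifts with them, which is exactly the non-stationarity mechanism this chapter pursues.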
March. The 10 smallest annual maximum floods occurred between March and July or in December, reflecting lower winter precipitation and colder winter temperatures in those years. The Sierra Nevada floods of winter are produced by strong onshore atmospheric flow patterns containing numerous embedded disturbances. The flow generally has a southwest to northeast orientation, typically tapping deeply into tropical and subtropical moisture. This orientation is nearly perpendicular to the elevation contours east of Sacramento, where the terrain steadily ramps up from sea level to about 7,500 feet at pass level, and near 10,000 feet at the peaks. Vertical velocities caused by the forced ascent of the rapidly moving air are quite large. The associated cooling and moisture condensation proceeds at a high rate. High freezing levels and warm temperatures cause rain at higher elevations, often to near the tops of the mountains. Snowmelt is often a contributing factor, but rain is the main ingredient. Antecedent conditions (degree and depth of low elevation snowpack, soil moisture from prior to the first snowfall) can be significant. Large floods begin when such an atmospheric flow regime persists with little deviation for two to three days. Precipitation rates can be so high that in some situations even a difference of an hour or two in duration can make a critical difference in the size and shape of a flood pulse. Descriptions of one such event (the New Year's flood of 1996-1997) are given in Redmond and Pulwarty (1997), and of the response in California Flood Emergency Action Team (1997). An examination of the large scale atmospheric conditions associated with the December-January-February (DJF) circulation for different types of years is instructive.
The mean DJF sea level pressure map (Figure 4.1a) shows a deep low pressure center located in the central North Pacific, and two broad high pressure centers located over the southeastern Pacific and over the Great Basin and northern Rockies. The corresponding mean atmospheric flow is counterclockwise around the low pressure center and clockwise about the high pressure centers. The presence of the two high pressure centers in the mean DJF pattern reflects a climatological tendency for deflection of storms to the typically wetter, more northerly portions of the western United States. Winter precipitation is brought to the Sierra Nevada by transient systems, typically 20-25 each year, that are coupled to the jet stream. On occasion, these midlatitude systems entrain moisture from the subtropics, and even the tropics, and deliver even more precipitation than the average system. The strength and position of the mean upper air flow, and of the disturbances that both feed from and feed back into the jet stream (and that are influenced in part by access to heat and moisture at lower latitudes), are linked to the pattern of sea surface temperatures in the tropical and extratropical Pacific Ocean. A composite of the average anomaly (departure from the full record) DJF sea level pressure for the 10 years with the largest annual maximum American River floods is shown in Figure 4.1b. The largest negative anomaly is found considerably to the southeast of the area with the climatologically lowest pressure (the "Aleutian Low" in Figure 4.1a). This implies a slight filling and shifting of the Aleutian Low to the southeast. Of note is that pressures near California are only slightly lower, and that most of the change in pressure is well away from the mainland, so that an increased east-west surface pressure gradient exists. The upper air pattern (not shown) is an accentuated version of the surface pattern. An enhanced south to north flow component is noted, well offshore, turning east at higher latitudes, with "landfall" over the Pacific Northwest. During flood episodes within these winters, periods that may last only 5-10 days, this pattern shifts closer to the coast, becomes temporarily greatly accentuated, and delivers abundant moisture to the "favored" site. This is consistent with the observation that winters with California floods do not appear to be otherwise particularly wet (see below). A similar compositing was performed for the 10 years with the smallest annual maximum rain-fed floods, and the results are shown in Figure 4.1c. In this case, the anomaly is positive over a broad area nearly coincident with the band of climatological high pressure, extending at its eastern end over the northern west coast and the northern Rockies. This even more strongly entrenched high pressure constitutes a pattern known as "blocking," in which the prevailing upper flow shunts storms far to the north, more toward the Alaska Panhandle. Such patterns are very persistent and hard to dislodge. In these circumstances, very few frontal systems are able to penetrate through to central California. Consequently, in terms of flood potential and changes in flood frequency in the American River region, one needs to understand changes in the low frequency variability of the associated atmospheric flow patterns. These patterns are in turn related to oceanic temperature and ultimately oceanic circulation patterns, which are also related to atmospheric circulation patterns in an endlessly circular fashion, and hence to low frequency variability in ocean-atmosphere interactions such as the El Niño/Southern Oscillation and the Pacific Decadal Oscillation.

Figure 4.1 DJF sea level pressure (mb) patterns for (a) the 1900-1997 period, (b) the years with the 10 largest annual maximum floods (ending year of winter 1907, 1928, 1951, 1956, 1963, 1965, 1980, 1982, 1986, and 1997), and (c) the years with the 10 smallest annual maximum floods (ending year of winter 1912, 1913, 1931, 1933, 1961, 1966, 1977, 1988, 1990, and 1994). The mean sea level pressure is plotted in (a) and anomalies from this climatology are plotted in (b) and (c).
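The compositing behind Figure 4.1 reduces to averaging anomaly fields over the selected flood years; a minimal sketch with synthetic stand-in data (the field dimensions and pressure values are placeholders, not the actual analysis fields):

```python
import numpy as np

# Synthetic DJF sea level pressure fields with shape (year, lat, lon)
rng = np.random.default_rng(0)
years = np.arange(1900, 1998)
slp = 1013.0 + rng.normal(0.0, 5.0, size=(years.size, 10, 20))

# The 10 winters with the largest annual maximum floods (Figure 4.1b)
flood_years = np.array([1907, 1928, 1951, 1956, 1963, 1965,
                        1980, 1982, 1986, 1997])

climatology = slp.mean(axis=0)                  # panel (a)
mask = np.isin(years, flood_years)
anomaly_composite = slp[mask].mean(axis=0) - climatology   # panel (b)
```

The small-flood composite of Figure 4.1c is the same operation with the other list of ending years.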
These low frequency forcings and global climate change issues are discussed further in a later section. Large floods need not reflect the character of the entire winter. Notably, the floods in December 1955 and February 1986 occurred in what would otherwise have been dry years, and the 1996-1997 July-June total would have been just slightly above average. After a second smaller storm later in January, the next four months were the driest in records spanning 150 years. The discussion of the average DJF sea level pressure patterns is consequently useful only because it addresses changes in the probabilities of flood-causing events.

Observed Climate and Streamflow Variability

Non-Stationarity of American River Floods

Since 1950, there have been seven annual maximum floods on the American River that have equaled or exceeded the largest previous flood in the systematic record. The estimated frequency of exceedance of extreme floods has correspondingly increased. The 100-year event for the three-day annual maximum flood for the American River at Fair Oaks, estimated using the log normal distribution with 21- or 51-year moving windows, shows a near monotonic increase over the period of record (Figure 4.2). A moving window analysis of the mean and standard deviation of the three-day annual maximum flood reveals that the trend in the 100-year flood is primarily due to the trend in the standard deviation (toward increasing variance) of the annual floods. For the associated precipitation record, whereas year-to-year variability has recently increased greatly, trends in total precipitation (for seasons or for the wettest episodes) are barely discernible. Similar conclusions are reached for the one-day and five-day annual streamflow maxima. A perspective for the American River basin flood trends is next developed through a review of climatic trends in nearby basins and in the United States. Even under a scenario of global climate change, given our understanding of climate dynamics, there is no expectation that the trends in floods or precipitation would be geographically uniform.

Figure 4.2 Trends in the 100-year, three-day annual maximum flow (cfs) at the American River at Fair Oaks gage computed using a log normal distribution with 51-year (heavy solid) and 21-year (dashed) moving windows. The 100-year event (204,674 cfs) estimated using the log normal distribution with the 1905-1997 record is shown by the solid horizontal line.

Trends in Systematic Records of Other Nearby Basins

Precipitation records for locations in and near the American River basin show the latter half of the 20th century to have been slightly wetter than the first half. The number of significantly wet years, however, differs considerably between the earlier and later parts of the record. For example, at Placerville (elevation 1,700 ft; 124-year average 39.92 inches), 5 years in the 22 from 1874 through 1895 exceeded 55 inches (1 in 4.4 years); then just one year in the 55 from 1896 through 1950 (1 in 55); and 6 years in the 47 from 1951 through 1997 (1 in 7.8 years). At higher elevation Bowman Lake (5,390 ft; 98-year average 66.44 inches), just north of the North Fork basin, 5 of the 51 available years from 1898 through 1950 exceeded 87 inches (1 in 10.2), followed by 11 cases in the remaining 47 years from 1951 through 1997 (1 in 4.3). Similarly, the later years also show more cases of dry winters, so that in general the number of extreme wet or dry years is increasing.
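The moving-window log-normal estimate underlying Figure 4.2 can be sketched as follows, with synthetic annual maxima standing in for the Fair Oaks record; the 100-year quantile of a fitted log-normal distribution is simply its 99th percentile:

```python
import numpy as np
from scipy.stats import norm

# Synthetic three-day annual maxima (cfs) with growing log-space spread,
# a stand-in for the 1905-1997 Fair Oaks record (values are not real data).
rng = np.random.default_rng(1)
years = np.arange(1905, 1998)
sigma_t = 0.5 + 0.003 * (years - 1905)
q = np.exp(11.0 + rng.normal(0.0, 1.0, size=years.size) * sigma_t)

def lognormal_q100(sample):
    """100-year (1% annual exceedance) quantile of a log-normal fit."""
    logs = np.log(sample)
    return np.exp(logs.mean() + logs.std(ddof=1) * norm.ppf(0.99))

window = 51
q100 = np.array([lognormal_q100(q[i:i + window])
                 for i in range(years.size - window + 1)])
```

Because the 100-year quantile depends on both the fitted mean and the fitted standard deviation of the logs, a trend in the window standard deviation alone is enough to drive the estimate upward, which is the behavior the moving-window analysis reveals for the American River.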
This pattern for the 20th century is similar to that just over the mountain crest at a station with an excellent record, Tahoe City (elevation 6,230 feet), in the adjoining Truckee/Tahoe basin just east of the American River. For the winter months of October through March, this site has only 1 year exceeding 40 inches from 1910 through 1950 (1 in 41), compared with 11 years from 1951 through 1998 (1 in 4.4). The decadal trends in this series are illustrated in Figure 4.3 using a 10-year running mean of the winter precipitation. The temporal history of multi-day precipitation extremes is perhaps of more direct interest in the flood context. The time series of maximum 10-day precipitation for each water year is shown in Figure 4.4 for Lake Spaulding (elevation 5,160 feet). Though only a slight upward trend exists overall, the number of very wet episodes increases during the latter half of the 20th century. For example, the 10-day maximum exceeded 22 inches in two of the 48 available winters from 1898 through 1950 (1 in 24), and in 9 of the 47 years from 1950-1951 through 1996-1997 (1 in 5.2). A similar increase from the first to the second half of the century is seen in three-day amounts exceeding 14 inches. The three-day annual maximum basin precipitation for the American River basin estimated using the precipitation stations at Represa, Auburn, Placerville, Nevada, Spaulding, and Tahoe shows similar trends. Its correlation with the three-day annual maximum flow at the
Figure 4.3 Tahoe City, California (elevation 6,230 feet) winter-centered 12-month July-June precipitation. Plus marks represent 10-year running means centered on the plotted year.
Figure 4.4 Lake Spaulding, California (elevation 5,160 feet) maximum 10-day precipitation for each water year. Plus marks represent the 10-year running mean centered on the plotted year. Data provided by J.D. Goodridge.
Figure 4.8 Relation between annual maximum three-day flow for the American River at Fair Oaks and the Southern Oscillation Index (SOI) for the preceding June-November. The SOI is a normalized pressure difference of Tahiti minus Darwin (Australia) and is strongly related to El Niño/La Niña.
are not common, and as a rule, flood peaks (highest three-day runoff) in La Niña winters tend to be lower than average on streams such as the American River. However, on the American, four of the top five three-day flow events in the last 65 years have occurred during modest to strong La Niña winters (refer again to Figure 4.8). As mentioned before, aside from their flood periods, many of the big flood years are relatively lackluster, even dry, so their water year flows are large but not exceptional. La Niña winters thus appear to have the interesting property that peak flows are likely to be lower than average but carry an increased risk of producing some of the highest flows in the record. Strong evidence is emerging that heavy West Coast precipitation episodes are related to the so-called Madden-Julian Oscillation (MJO) seen in the vicinity of Indonesia (Mo, 1999; Mo and Higgins, 1998a, 1998b; Ye and Cho, 1999).

Regimes of El Niño/Southern Oscillation

To the extent there may be a relation between floods and any of the phases of ENSO, such as La Niña, periods of extended predominance of one or the other ENSO phase could affect the frequency of Sierra Nevada floods. The record of ENSO does indeed show such behavior, most notably in the period since 1976, when a major shift occurred in the Pacific climate (see Ebbesmeyer et al., 1991; Trenberth and Hurrell, 1994). Since that time, El Niño years (negative Southern Oscillation Index) have occurred at a much higher frequency than earlier in the century, about nine times in 20 years or one year in 2.2, compared with a historical frequency of about one year in 3.7. La Niña has been notably scarce since this 1976 shift, with one appearance in 1988-1989, a weak episode in 1996-1997, and a significant episode in the winter of 1998-1999. These ENSO non-stationarities are also discussed by Trenberth and Hoar (1996, 1997), Harrison and Larkin (1997), and Rajagopalan et al. (1997).
Trenberth and Hurrell (1994) argue that the duration of the 1991-1995 El Niño event and the increase in the frequency of El Niño events are likely to be an indicator of global climate change. Rajagopalan et al. (1997) use non-homogeneous Markov chains to argue that these changes can be explained as natural long-term variations of the ENSO cycle and may not be dissimilar to the nature of ENSO activity in the late 1800s/early 1900s. Lall et al. (1998) used a wavelet analysis of the NIÑO3 sea surface temperature index to show that there have been several systematic variations in the dominant return period of ENSO between 1856 and 1997. They also analyzed a 1,000-year sequence from the Cane-Zebiak ENSO model with stationary forcings and found that El Niño frequencies in this deterministic stationary model varied dramatically at century time scales. Several papers (Enfield, 1992; Diaz and Pulwarty, 1992; Lough, 1992; Thompson et al., 1992; and Michaelsen and Thompson, 1992) in the volume edited by Diaz and Markgraf (1992) examine the history and statistics of ENSO by a variety of means, which collectively show intermittency, regime-like behavior, and general non-stationarity on scales ranging from decades to centuries. Mann et al. (1998) have attempted to put the recent El Niños in context by reconstructing their occurrence using proxy records extending back over six centuries. The commentary in studies
such as this has tended to focus more on El Niño than on La Niña, but the data in some cases do portray both phases.

Pacific Decadal Oscillation

The Pacific Decadal Oscillation as a Potential Modulator of ENSO Effects

Mantua et al. (1997) and Hare et al. (in press) have identified a pattern of variability in the Pacific basin and the overlying atmosphere having characteristic time scales of 20-30 years, which they call the Pacific Decadal Oscillation (PDO). The pattern resembles the ENSO variability pattern at interannual time scales, but clearly separates from it in a singular value decomposition of the time histories of a set of fields of oceanic and atmospheric variables. This pattern expresses itself most clearly in the North Pacific, and thus is also referred to by some as the North Pacific Oscillation. One prominent aspect of the PDO is an out-of-phase relationship between the Pacific Northwest of the United States and the northern Gulf of Alaska. Streamflow, temperatures, and salmon abundance are clearly linked to this mode of variability over this century (Hare et al., in press). Mann and Park (1994, 1996) also identified a 16- to 20-year oscillation related to the North Pacific, which appears to correspond to one identified by Latif and Barnett (1994) using a coupled ocean-atmosphere model. Latif and Barnett's postulated mechanism is that self-sustained oscillations at interdecadal time scales can be set up through the influence of the subtropical ocean gyre on SST anomalies in the North Pacific and a subsequent delayed response of the wind stress that spins down the gyre. This mechanism provides a potential for understanding and predicting interdecadal fluctuations in climate and flow in the western United States. Indeed, Lall and Mann (1995) and Mann et al. (1995) show that a projection of this mode onto the Great Basin explains a significant fraction of the interannual variation in the Great Salt Lake and is tied to its major highs and lows.
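The mode separation described above rests on a singular value decomposition of anomaly fields; a minimal sketch with random stand-in data (the array sizes and values are arbitrary, not the actual Mantua et al. fields):

```python
import numpy as np

# Stand-in data matrix: yearly anomaly fields flattened to
# (n_years x n_gridpoints), e.g., North Pacific SST anomalies.
rng = np.random.default_rng(2)
field = rng.normal(size=(100, 200))
anom = field - field.mean(axis=0)       # remove the climatology

u, s, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = u[:, 0] * s[0]    # time history of the leading mode (a PDO-like index)
eof1 = vt[0]            # its spatial pattern
var_frac = s[0]**2 / np.sum(s**2)   # fraction of variance explained
```

For real fields the leading singular vectors isolate coherent patterns such as the PDO; with the white-noise stand-in here, the leading mode explains only a small fraction of the variance, as it should.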
The connection of the interdecadal mode identified by these authors to the more diffuse decadal variability identified as the PDO is not clear. The primary importance of the PDO and other extratropical interdecadal North Pacific climate patterns is that they may modulate the mean position of the jet stream and also of the tropical interaction with the jet stream. Potential ENSO effects could be enhanced or reduced depending on the phase of the longer period North Pacific oscillation. An understanding of these issues would help (a) by allowing proper adaptation to ENSO events at interannual time scales and (b) by providing an understanding of interdecadal tendencies for increased or decreased flood potential. If the PDO is also shown to be associated with the regimes of frequent and stronger or infrequent and weaker El Niño events, additional understanding of regimes of wet and dry periods will result. Finally, an understanding of internal dynamic modes of the climate system with interdecadal time scales and their impacts on floods is essential if the potential effects of secular global climate change are to be sorted out from the last century of record.
Regimes in ENSO Resulting from PDO Decadal Modulation

One of two ways that the PDO could be relevant to central Sierra floods is its possible modulation of the relationship between ENSO and the winter climate of the western United States. Gershunov and Barnett (1998) and Gershunov et al. (1998) have indicated that this may very well be the case. During one phase, lasting a few decades, the strength and robustness of the connection appears to be greater than during the opposite phase. That is, whether or how La Niña or El Niño affects the West Coast would, if these findings bear up, depend on the phase of the PDO, or of the Mann and Park/Latif and Barnett interdecadal oscillation, or more generally on the state of the North Pacific sea surface temperatures. In a very interesting paper, McCabe and Dettinger (1998) have recently examined the temporal characteristics of the relationships reported by Redmond and Koch (1991). They find that the relationship between the Southern Oscillation Index (SOI) and western winter climate has varied considerably over the past 100 years. Currently the relationship is quite good, but earlier in the 20th century it was much weaker. They also note that the relationship between Pacific equatorial atmospheric pressure behavior (expressed by the SOI) and Pacific equatorial ocean behavior (expressed by sea surface temperatures, SSTs) has similarly varied quite considerably this century. Of relevance to the American River and California, it is likely not a mere coincidence that the SOI-SST relationship was rather weak until about 1950, when it became the much stronger relationship to which we have grown accustomed. McCabe and Dettinger also find a strong modulation of the SOI-SST correlation, and of the SOI-western climate correlation, by the Pacific Decadal Oscillation. This lends further support to the idea that large-scale changes in Pacific Basin climate behavior, and in its relation to Pacific Rim locations, took place about 1950.
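A time-varying relationship of the kind McCabe and Dettinger describe is what a moving-window correlation exposes; a sketch with synthetic series (the window length, coupling strength, and series are all placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
soi = rng.normal(size=n)
winter_precip = 0.5 * soi + rng.normal(size=n)  # synthetic western winter index

def running_corr(x, y, window=21):
    """Correlation of x and y within each sliding window of given length."""
    r = np.empty(x.size - window + 1)
    for i in range(r.size):
        r[i] = np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
    return r

r = running_corr(soi, winter_precip)   # one value per window position
```

Plotting such a series against time is how one sees the correlation strengthening after about 1950, rather than assuming a single, constant SOI-climate relationship.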
These findings are quite intriguing. Of particular note, if this is related to "regime" behavior rather than to secular global change trends (below), then the possibility exists of a return to a prior regime, i.e., the one that existed during the first part of this century.

Other Potential PDO Effects Not Involving ENSO

The second way that the PDO could be relevant to central Sierra floods is by modulating other connections, not related to ENSO, between the North Pacific and the Sierra Nevada. By contrast with the tropics, the ocean and the atmosphere drive each other more equally in the higher latitudes, on scales of a few weeks, and it is nearly impossible to say anything specific about the implications for the Sierra Nevada. Because ENSO accounts for only a modest fraction of the year-to-year climate variability in the West, there must be other sources of variability, and conditions in and over the Gulf of Alaska are a strong candidate for an additional influence. Much more remains to be learned about potential connections there. It seems almost certain that any such connection would involve the deep ocean.
Other Potential Natural Influences on California

The earth's climate system is extremely complicated. In the fullest sense, climatic behavior at any given location and over any significant span of time (e.g., a few decades) is determined by processes involving the earth's biological organisms, its frozen water, volcanic activity, astronomical factors, solar output, radiatively active components in the atmosphere, and ocean behavior from top to bottom, as well as a host of positive and negative feedbacks involving clouds, precipitation, adiabatic heating and cooling, flow dynamics, and more, with numerous thresholds at which subcomponent behavior changes radically (e.g., freezing, convection), all interacting in highly nonlinear ways. For an engaging popular discussion of this subject see, for example, Bak (1996). In such a system it would not be surprising if internal feedbacks operating through a multiplicity of links contributed to the variability observed at any one point of interest. In fact, the absence of variations resulting from internal dynamical processes would be a major surprise. A consideration of the variety of external forcings interacting with a variety of complicated internal interactions and feedbacks led Bryson (1997) to state unequivocally that "the history of climate is a non-stationary time series." A frequently cited example of a "remote" and large-scale influence is the thermohaline circulation of the world's oceans. Temperature and salinity both affect the density of sea water, and spatial and temporal variations in density produce horizontal and vertical accelerations and motion at all depths. These factors, in concert with fluxes of heat, moisture, and momentum across the water-atmosphere interface, affect the circulation of the atmosphere (e.g., Cayan and Peterson, 1989; Cayan, 1992). Because of the small speeds and large time constants involved, oceanic influences on climate can have time scales from days to about a millennium.
Manabe and Stouffer (1996, 1997) have used the results of very long simulations to argue that General Circulation Model (GCM) runs of nearly a thousand years are needed to properly understand the role of natural variations and internal feedbacks affecting ocean circulation, and thus, by implication, their effects on terrestrial climates. Broecker's notion (1987, 1991) of a global linkage among the world's oceans driven by temperature and salinity differences (an aspect of the thermohaline circulation dubbed the "conveyor belt") has attracted wide attention. Though the ocean is regarded as slow and ponderous, gradual processes could bring conditions near thresholds, where behavior changes suddenly. Ice cores from Greenland (e.g., Mayewski et al., 1993a, 1993b) are showing that major circulation changes, for example in Gulf Stream position, may occur in less than a decade, perhaps in just a few years; atmospheric adjustments would then be seen over the entire hemisphere.

Global Change Issues

To this point only the natural variations in climate have been addressed. During the last century the human population has increased to the point where its activities can significantly alter the flow of radiation in the atmosphere. Although much of the focus has been on temperature, the realization has been slowly growing that other significant climatic adjustments to the altered radiation regime may be
expressed in the hydrologic cycle. A general conclusion is that global evaporation and precipitation will proceed more energetically and that water will cycle through the system faster. This implies that large precipitation rates will become more common. Climate change, whatever the cause, almost never affects all locations and seasons equally, and these details cannot be resolved at the current level of understanding. Hydrologic modeling studies for California by Lettenmaier and Gan (1990), Lettenmaier and Sheer (1991), and Tsuang and Dracup (1991) all indicate that temperature increases similar to those predicted by GCMs under global change scenarios would cause changes in streamflow timing and increased flooding, primarily due to increases in the rain-to-snow precipitation ratios. These conclusions are clearly of concern in light of the changes in seasonality and extreme floods noted earlier. A brief perspective on the global climate change debate is provided here. All the various natural mechanisms that can potentially cause climate fluctuations on annual to century scales are considered capable of producing both positive and negative contributions to climate forcing at one time or another. With respect to human-induced changes in climate forcings, especially the radiative forcings associated with atmospheric composition changes, a widely held view is that such temporal trends are unidirectional and unlikely to change course in a century or two. Partly on the evidence of modeling experiments, it is likewise widely held that a steadily increasing forcing will lead to a steadily increasing response. Of course, in finite physical systems, no component can increase forever without limit, but it can appear to do so within a limited range of forcing.
Unfortunately, modeling experiments pertaining to global climate change do not have a realistic representation of known low-frequency ocean-atmosphere interactions, and their treatment of the hydrologic cycle is also relatively primitive. Given the importance of water vapor as a greenhouse gas and also its role in the atmospheric energy balance, a better understanding of the radiative nature of clouds and the movement, organization, and precipitation of atmospheric water vapor is needed. It is possible that the response will be stepped, as a series of plateaus; or will have different seasonal signs that are influenced by the background state; or sometimes will even be in this direction, sometimes in that, as planetary adjustments in the mass fields and flow of the two major fluids—water and air—take place. In light of the possibility that long-term natural variations in climate occur, the global climate change response in this area may occur as a change in the frequency distribution, strength, and recurrence of these regimes. Such changes will of necessity be at longer time scales than the recent record. Thus, an unambiguous detection of global change and its impacts is unlikely unless the changes are altogether dramatic. Definitive answers to these questions are not expected anytime soon. Essentially, the problem facing flood managers, engineers, and everyday citizens in this situation amounts to making a forecast for the next several decades of what the flood statistics will be and then acting on that forecast. Aside from recently introduced human-induced or human-enhanced factors, the remaining natural mechanisms for climate change have been operating all along, have been "seen" before, and have been either directly measured or otherwise recorded in the proxy
evidence. The human factors are new, may have unidirectional effects, and may carry system behavior beyond bounds it has not exceeded for some time. When humans look at time series, there is a universal tendency to extrapolate any type of trend discerned in the latest points linearly out into the future. In a natural system it is widely realized that eventually this expectation will prove incorrect. With global climate change there is a possibility that, within the useful lifetime of a prediction (say, a century or so, by which time the entire matter of how humans interact with rivers will almost certainly have been completely rethought), this linear extrapolation might be correct. If this logic is correct, flood frequency curves may edge closer to or enter territory not seen during Sacramento's history. Moreover, there still remains the possibility that natural variations of larger amplitude, not observed during the few centuries of recorded settlement, could also occur (or recur). Of the various mechanisms for climate change facing us in the near term, the human-induced global changes appear to have the greatest likelihood of taking us to this point. Just as a cautionary note, it is worth pointing out that like carbon dioxide, the optically active gas methane—which contributes about 15% of the enhanced greenhouse effect—was also expected to continue to rise steadily in concentration well into the next century; however, in a major surprise, the concentration began to increase less rapidly in the early 1990s, and by 1996 had essentially leveled off (Dlugokencky et al., 1998). This holds important lessons about how we should regard even our "safe" assumptions. It is also worth noting that for short time periods—a few decades or centuries—naturally occurring fluctuations would masquerade as "trends," especially with the short records we possess.
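The masquerading of natural fluctuations as "trends" is easy to reproduce: in a purely stationary autocorrelated series, short windows routinely exhibit apparent linear trends. The following is a minimal sketch with assumed AR(1) parameters and window length, not a claim about any particular climate record.

```python
import numpy as np

rng = np.random.default_rng(42)

# A stationary AR(1) "climate" series: no true trend by construction.
phi, n = 0.9, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Fit a linear trend to many non-overlapping short windows and record slopes.
window = 30
slopes = []
for start in range(0, n - window, window):
    seg = x[start:start + window]
    slope = np.polyfit(np.arange(window), seg, 1)[0]
    slopes.append(slope)

# Many windows show "trends" even though the generating process is stationary.
print(f"max |slope| over {len(slopes)} windows: {np.max(np.abs(slopes)):.3f}")
```

With persistence (phi near 1), the series wanders for decades at a time, so a 30-point record sampled from it will often look convincingly trended in one direction or the other.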
When we are sensitized to the prospect that our activities may lead to global or regional climatic changes, we are more likely to find such trends, and to interpret them as evidence of the hypothesized effect. The hard question, one very difficult to answer, is "what would the natural system have done otherwise?" We are a long way from answering this. In climate change research, this problem is known as the attribution problem, in contrast to the other two main pieces of the puzzle: the detection problem and the prediction problem. In addition to greenhouse gas concerns, a body of literature is emerging (see, for example, Chase et al., 1996; Pielke, 1991, 1998, in press, and references therein) showing that land use changes—on local, regional, and global scales—are a significant factor in causing actual and potential climate change, again at local, regional, and global scales. Changes in land use modify flows of energy in substantial ways. The climate system adjusts to these energy flow changes by changing its circulation patterns. The atmospheric adjustments are both local and remote. This area of climate change research is beginning to receive a substantial amount of attention. Recent climate modeling experiments by these investigators (e.g., Chase et al., 1996) show that the observed changes in land use around the earth during this century (with no change in greenhouse gases) are sufficient by themselves to produce regional circulation and surface temperature responses of the same magnitude as the changes that have been projected to result from greenhouse gas increases. In such regions as western North America, worldwide land
use changes in these preliminary experiments lead to temperature increases on a par with those observed in the Sierra Nevada winter over the last several decades. There is no simple pattern to global land use changes over the last 100 years, and the patterns of land use change are themselves changing. Although it is not clear whether the earth as a whole will warm or cool from such changes, the way the climate response (temperature, precipitation, and snowfall) is distributed in space and by season and altitude could be very complex. Because of the highly nonlinear nature of this system, climate changes that result from land use changes will not necessarily exhibit monotonic trends. Model performance will need to improve still further before specific results can be accepted without question. For now, the conclusion that land use effects can rival other sources of variability is sufficient. Unfortunately, these long-term trends in land use are taking place while greenhouse gases and atmospheric sulfate aerosol emissions are also changing. There is as yet no way to separate out their effects, and it is not clear if additive (linear) approaches are even appropriate (see, for example, Hanson et al., 1997; Hanson et al., 1998). These instructive and sobering studies have increasingly led to a reluctant acceptance of the possibility that our ability to provide useful climate change predictions may stay barely ahead of the actual progress of time, if at all.

Summary

Non-stationarity in the flood process can come from naturally structured, low-frequency climate variability; from human changes to the watershed (e.g., hydraulic mining, subsidence, urbanization, land use, and vegetative cover); or from watershed influences on the large-scale climate system (likely minimal in this case). There is evidence of significant changes in land use and surface attributes of the American River basin over the last two centuries.
Trends are evident in basin land use and surface attributes, as well as precipitation and other climatic elements, particularly the incidence of extremes. A context for understanding these trends in terms of climatic mechanisms has been provided above. Global climate change concerns related to greenhouse gas emissions over the last century may also be considered as a plausible factor in changing flood frequencies. The contribution of structured, oscillatory, interannual- to millennial-scale climate variability to changing flood potential in the region is also of considerable interest. The latter may represent the behavior of a nonlinear dynamical system that exhibits unstable oscillations or close returns of a trajectory that appears periodic. Such a system would have stationary dynamics, but a finite period of record may exhibit apparent non-stationarity in terms of the statistics. Key implications of these observations are: (1) Given trends, persistence or memory in the system, the true variability of the flood process could be substantially higher than that estimated from a finite period of record. In other words, the uncertainty in the estimate of the T-year flood is higher than that indicated by a method that considers the n years of record to
represent a stationary, independent, identically distributed stochastic process. The latter is the standard assumption for flood frequency analysis. (2) Record high and low floods are likely to be clustered over extended periods of years if the underlying climate system is slowly oscillating. Thus, the pre- and post-1950 segments of the American River flood record are plausible members of trajectories of the same stationary dynamical system. As the underlying climate state changes slowly, the flood potential, as well as the timing and causative mechanisms, undergoes systematic structural changes. This leads to the question of whether a single probability distribution is an appropriate descriptor of the flood process, whether the frequency curve should be bent at one end, and whether low and high floods should be modeled by the same distribution. Censoring, mixed-distribution models, and non-parametric flood frequency estimators are commonly offered as solutions to this problem. However, all these methods assume that the underlying process is independent and identically distributed. Consequently, the resulting flood frequency estimates will be reasonable only if the flood record extends over an adequate number of the underlying cycles and if the planning horizon extends indefinitely into the future. (3) Unless the quasi-oscillatory climate behavior is predictable over the next 5 to 30 years, and unless that information is used to modify the underlying flood frequency curve, the independent, identically distributed procedures used may lead to an apparent bias in the flood frequency curve, as seen in the pre- and post-1950 periods for the American River. Unfortunately, neither an understanding of the complexity of the underlying dynamical system nor the technology for such interannual- to century-scale predictions (see, however, Rajagopalan et al. and Lall et al.) is currently available.
Consequently, the risk of being wrong about the estimate of the flood frequency curve remains higher than anticipated by the standard analyses. (4) The use of paleodata spanning centuries or millennia is often offered as a tool for improving flood frequency estimates in conjunction with a probable maximum flood analysis. Such information, if untainted by anthropogenic effects and derived accurately, is potentially very useful for refining the flood frequency estimate for "steady state" future conditions. This may or may not be reasonable, as our understanding of cyclic climate variations at century to millennial time scales is still very much in its infancy. In a Bulletin 17-B setting, where a guideline for steady-state flood prediction is of interest, the recent few centuries of reconstructed data are likely to be useful at least for providing a context for interpreting the flood record of the American River over this century. Based on its understanding of climate variability, the committee's summary recommendation is that (a) the uncertainty of the flood frequency estimates is higher than indicated by the usual statistical criteria, (b) climatic regime shifts may, slowly or abruptly, significantly affect the local flood frequency curve for protracted periods, and (c) at this time, given the limited understanding of the low-frequency climate-flood connection, the traditional independent, identically distributed approach to flood frequency estimation is recommended, with the strong caution that the application of such a curve is likely to lead to significant biases or variability over any period of time. A more conservative design criterion as well as
adaptive flood control measures in addition to structural flood control may therefore be appropriate. Recognition of the non-stationary nature of climate dynamics should motivate society to replace the existing static flood risk framework with a dynamic one. The existing static flood risk paradigm considers the estimation of a single flood frequency distribution from all available historical, regional, and paleoflood data and the application of the estimated distribution for an indefinite future period. A dynamic risk paradigm would call for the evaluation of potential flood risk over the duration of project operation and/or a regular flood frequency updating procedure. Adaptive flood control and design strategies would be favored under this paradigm. Given the usual paucity of flood data, the interest in extreme (1% annual risk) floods, the limited ability to forecast climate statistics into future planning periods, and the weak understanding of the connection between slowly varying climate factors and the at-site or regional flood process, it is beyond our ability at the present time to implement practical dynamic flood risk models. However, research in various areas is needed to address these important issues. New diagnostic, prognostic, and decision frameworks need to be developed. First, investigations of the nature of flood risk variations in the historical record and their connections to low-frequency climate variability are needed to establish the nature and sensitivity of the at-site or regional flood process to key climate indicators or factors, to provide a context for understanding the apparent changes in flood risk as seen in the American River, and to assess the need to consider climate-induced flood non-stationarity in the decision process. Given the potential for anthropogenic climate change, and ongoing research on its prognostication, it is important to assess the specific ocean-atmosphere
state variables that are useful predictors of flood risk, and their spatial signature in the regional flood process. A causal, hypothesis-testing framework may be useful for such analyses. Identification of the sensitivity of flood risk to identifiable, changing (and predictable) climate indicators will be useful for deciding whether a dynamic risk framework is warranted. Such analyses also have implications for changing regional flood frequency estimation methods. At-site flood records used for regional frequency curve estimation can often have widely varying periods of record. Recognition of quasi-periodic and monotonic trends in climate factors influencing floods calls for stratifying these records by "climate epoch" prior to the estimation of regional frequency curves. An examination of the spatial structure of the regional flood risk relative to the climate state may also be useful. For small basins where flood risk is determined by local thunderstorms, regional information for several decades could be quite useful. For large basins (such as the American River basin) where flood risk is determined by very large regional storms, regional information extracted using traditional methods may be of limited value. These large regional storms have preferred tracks that can be related to seasonality and to identifiable, slowly varying ocean temperature (and associated atmospheric) conditions. There may hence be prospects for relating low-frequency climate variability to regional storm frequency and severity, and hence to floods. Conditioning the basin and regional flood process on ocean-atmosphere teleconnections using the century-long records available may be more fruitful in this context than
"pooling" available regional flood data. A statistical characterization of the connections between ocean-atmosphere variability at interannual to decadal time scales and the frequency of the annual maximum flood in the region is needed. This relationship, coupled with "beliefs" as to scenarios for future climate derived from an analysis of the historical and paleoflood record and from coupled general circulation models of the climate system, may be useful for assessing scenarios for future flood risk. A framework for formally conducting such analyses to better estimate potentially changing flood frequency distributions and their uncertainty is needed. Second, a framework is needed for decision analysis on flood management that explicitly considers the dynamic risk and its estimation uncertainty. Clearly, such a framework needs to consider both the length of the planning period over which the projected flood risk will be used and the reliability with which the risk can be estimated from available information. Such a framework may be developed considering a "bias-variance" tradeoff or related explicit economic consequences. Consider first a monotonic trend in the annual maximum flood. In this case, one may be tempted to use the last 10 years of record to estimate the 100-year flood for the next 10 years (the planning period). One would reduce bias, but there would be tremendous uncertainty in the risk estimates because the record is so short. If instead one employed a 200-year period of record to project the flood risk over the next 10 years, then the bias in the flood risk is likely to be larger, while the variance of the flood risk estimators should be reduced. The magnitude of the expected shift (i.e., the projected bias) in the estimated 100-year flood over the next 10 years, and its economic consequences, relative to the increased uncertainty of the estimate of this flood, would determine whether the shorter record is used.
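The short-record versus long-record tradeoff can be made concrete with a small Monte Carlo experiment. This is a sketch only: the Gumbel flood model, the trend magnitude, the record lengths, and the method-of-moments fit are all illustrative assumptions, not an analysis from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Annual maximum floods: Gumbel noise around a linearly rising location.
n_rec, n_sims = 200, 2000
trend = 0.5            # assumed upward drift in Gumbel location per "year"
scale = 30.0           # assumed Gumbel scale
euler = 0.5772156649

def q100_gumbel(sample):
    """Method-of-moments Gumbel fit, then the 1%-annual-exceedance quantile."""
    beta = np.std(sample) * np.sqrt(6) / np.pi
    mu = np.mean(sample) - euler * beta
    return mu - beta * np.log(-np.log(0.99))

# "True" 100-year flood at the midpoint of the next 10-year planning period.
true_q100 = trend * (n_rec + 5) - scale * np.log(-np.log(0.99))

est_short, est_long = [], []
for _ in range(n_sims):
    floods = trend * np.arange(n_rec) + rng.gumbel(0.0, scale, n_rec)
    est_short.append(q100_gumbel(floods[-10:]))  # last 10 years only
    est_long.append(q100_gumbel(floods))         # full 200-year record

for name, est in [("10-yr record ", est_short), ("200-yr record", est_long)]:
    est = np.asarray(est)
    print(f"{name}: bias {est.mean() - true_q100:8.1f}, std {est.std():6.1f}")
```

Under these assumptions, the 10-year estimator tracks the trend's recent level (smaller bias) at the cost of a much larger spread across simulations, while the 200-year estimator is far more stable but anchored to an outdated mean state, which is exactly the tradeoff described above.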
This answer may well be different if a 50-year planning period were considered. The bias would be larger, as would the uncertainty associated with projecting the monotonic trend into the future. The situation is complicated if quasi-periodic climate variations are considered. For instance, if a 20-year periodic climate variation were present, using the last 10 years of record to project flood risk for the next 10 would increase both the bias (as one goes from the high to the low phase of the oscillation) and the variance of the estimated flood risk. Explicitly conditioning the flood risk estimate on climate state has an effect similar to the selection of a subset of years of the record as discussed above. The use of such a conditional probability statement would attach higher weights to floods in years with a climate state similar to the one projected and lower weights to other floods. This reduces the effective sample size used for flood risk estimation. Thus, the needed "conditional risk estimation" framework must consider the length of record, the length of the planning period, the nature of the climatic non-stationarity, and the causal relations between the climatic factors and the floods. The utility of paleoflood and proxy climate data could be evaluated in the same framework.
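One way to realize the weighting idea above is a kernel-weighted exceedance estimate together with an effective sample size (here Kish's formula). Everything in this sketch is an assumed illustration: the synthetic climate index and flood series, the Gaussian kernel, the bandwidth, and the projected climate state are hypothetical, not quantities from the report.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic record: a quasi-periodic climate index and floods that respond
# to it (wetter state -> larger annual maximum flood, on average).
n = 100
climate = np.sin(2 * np.pi * np.arange(n) / 20) + 0.3 * rng.normal(size=n)
floods = 1000 + 400 * climate + 200 * rng.gumbel(size=n)

# Weight each historical year by similarity of its climate index to the
# projected state, using an assumed Gaussian kernel and bandwidth.
projected_state = 0.8
bandwidth = 0.5
w = np.exp(-0.5 * ((climate - projected_state) / bandwidth) ** 2)
w /= w.sum()

# Conditional exceedance probability for a threshold, and Kish's effective
# sample size, which shows how conditioning shrinks the usable record.
threshold = 1800.0
p_exceed = w[floods > threshold].sum()
n_eff = 1.0 / np.sum(w ** 2)
print(f"conditional P(flood > {threshold:.0f}) = {p_exceed:.3f}, "
      f"effective sample size = {n_eff:.1f} of {n}")
```

The effective sample size is always below the nominal record length once the weights are unequal, which is the tradeoff the text identifies: conditioning sharpens the estimate's relevance to the projected climate state while reducing the amount of data effectively supporting it.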