Topic III
What Should Be the Interface Between the Science of Hydrologic Extremes and Applications Issues, Such As the Need to Replace Standard Methods, Such As Those Laid Out In Bulletin 17B and Other Methods That Are Based on Stationary Statistical Methods?

PRESENTATION

Tim Cohn of the U.S. Geological Survey summarized the apparent and paradoxical disconnect between the practice and the science of flood frequency analysis. In essence, he stated that neither partner in this marriage is entirely knowledgeable about, confident in, or respectful of the other. On the one hand, Bulletin 17B in fact does address nonstationarity, and it expresses considerable concern specifically about climate change. So the recent “scientific finding” of nonstationarity as a central component of flood frequency analysis does not come as news. On the other hand, operational activities seldom if ever consider the very substantial nonstationarities associated with development and other land-use change that we know exist from both science and theory, with the result that flood risk is often greatly underestimated.

In addition, he said, if we believe that rational policies for dealing with flood risk are based on a rational (i.e., scientific) understanding of flood frequency, then it is surprising that Bulletin 17B has not been updated in over a generation. Right now, we employ approaches that reflect an antiquated understanding of the science, at least in some cases.

Third, Cohn noted that there seems to be another disconnect between data and models. The global climate models seem to suggest that we ought to be seeing—at some time and some places—substantial changes in stream discharge statistics. However, flood data from the U.S. Geological Survey's Hydro-Climatic Data Network (HCDN) exhibit essentially no trends at all. (The HCDN consists of about 1,500 gauge sites in areas generally unaffected by development.) For example, two-thirds of the HCDN sites show less than one percent change per year in maximum annual peak discharge.

Why is this true? There are two possibilities. First, peak flows are much more variable than average flows; the variability at the extremes is already high. A 20 percent change in mean annual flows would be detected almost immediately, but a similar change in the 100-year flood is hard to detect statistically (and yet could have very large economic effects). Second, the statistics are such that a 10 percent increase in mean annual precipitation translates into an increase in mean annual flow of about 10 percent, but has a much smaller impact on the outer edges of the distribution (e.g., the 100-year flood). Thus, if there are indeed trends in these “noisy” data, they would be extremely difficult to discern with statistical methods.
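The detection problem can be made concrete with a small Monte Carlo sketch. This is illustrative only, not from the workshop: the record length, coefficient of variation, and trend sizes are all assumed for the example. Annual peaks are simulated from a Gumbel distribution with a known linear trend in the mean, and Kendall's tau against time serves as a simple trend test:

```python
# Illustrative Monte Carlo sketch of trend detection in noisy flood peaks.
# All parameter choices (60-year record, CV = 0.6, trend magnitudes) are
# assumptions for illustration, not values from the workshop.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)

def detection_rate(trend_pct_per_yr, n_years=60, cv=0.6, n_trials=500):
    """Fraction of simulated records in which Kendall's tau (vs. time)
    flags a trend at the 5 percent level."""
    years = np.arange(n_years)
    mean0 = 100.0
    scale = mean0 * cv * np.sqrt(6) / np.pi      # Gumbel scale from the CV
    hits = 0
    for _ in range(n_trials):
        # Shift the Gumbel location so the mean follows the linear trend
        # (Gumbel mean = loc + Euler-gamma * scale).
        loc = mean0 * (1 + trend_pct_per_yr / 100.0 * years) - 0.5772 * scale
        peaks = rng.gumbel(loc, scale)
        _, p_value = kendalltau(years, peaks)
        hits += p_value < 0.05
    return hits / n_trials

for trend in (0.0, 0.5, 1.0):
    rate = detection_rate(trend)
    print(f"trend {trend:.1f} %/yr -> detected in {rate:.0%} of records")
```

Because year-to-year scatter in peak flows is so large relative to any plausible trend, even multi-decade records can fail to yield statistically significant trends, which is consistent with the HCDN result described above.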



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





In closing, he noted that future changes in runoff generated by global climate models suggest dramatic shifts, if you believe the models. But there is much we do not know about floods and flood patterns. It is not easy to identify the right questions to ask, or even to define the practical problems.

PLENARY DISCUSSION

The discussion, led by David Ford of David Ford Consulting, raised a number of related points. Several participants noted that discerning regional trends in precipitation is also a challenge. One in particular mentioned that his research group had examined trends in the means of annual maximum 24-hour rainfall and found that only about 15 percent of them had a statistically significant change; of those, about half were upward and half downward. Even when trying to reduce the noise-to-signal ratio by grouping nearby gages, this participant was not seeing many significant trends, and the few trends seen did not show much spatial coherence. Another participant had found the same in India: a lack of spatial coherence in regional precipitation trends. Trends in 90th percentile values are downward over time, whereas those in 99th percentile values are upward. A third participant mentioned that while one commonly hears reference to the “acceleration of the hydrologic cycle,” he had not seen much evidence for this.

A second topic related to what science might be able to contribute in the post-stationarity era to help practitioners construct a flood frequency curve. One participant said that progress has been made with deterministic modeling of land-use change. With respect to climate change effects, this participant continued, from an operational perspective we can at least admit greater uncertainty. This would lead to an operational decision either to build structures with greater safety factors or to accept greater risks.
Another said that at the very least we should be poring over all our hydrologic records to mine those voluminous data on a large scale. He also suggested paying at least as much attention to watersheds undergoing land-use change as to the climate change issue; he believed the former is at least as great a driver of streamflow change as climate. Another participant noted that climate changes might be observed in ways other than as trends; for example, one can sometimes see periodicities in precipitation. Another mentioned a case where changes in discharge were found not in the mean but in the standard deviation of the data.

Finally, several participants mentioned that nonstationarity matters much more for long-term than for short-term projects. One stated that in designing something with a life cycle of less than 50 years, it is probably not important; for a life cycle greater than 50 years, it may be, but we are not sure how. Another noted that insurance companies typically look at only 10 or 20 years of data for their risk assessments.

BREAKOUT SESSION REPORT

Rapporteur John England of the U.S. Bureau of Reclamation summarized the discussion in the breakout session. He emphasized the overall theme of change: the more things change, the more they stay the same. This is especially illustrated by the fact that there is still tension between flood science on one hand and engineering estimates and operations on the other. He stated that within the theme of hydrologic change there are three main components that various participants in the breakout session had emphasized: changes in climate, changes in land cover and land use, and changes in water management (such as regulation). While all of these factors illustrate problems with the use of stationarity as a governing principle, there seems to be very little consensus on what the hydrologic community might use to replace it.
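The point raised in the plenary discussion that change may appear in the standard deviation rather than the mean can be sketched with a simple two-sample comparison. This is a hypothetical illustration; the series, sample sizes, and spreads are invented for the example:

```python
# Hypothetical illustration: a discharge record whose spread changes while
# its mean does not.  A mean-only test can miss what a variance test finds.
import numpy as np
from scipy.stats import ttest_ind, levene

rng = np.random.default_rng(0)
early = rng.normal(100, 15, 40)   # early period: mean 100, lower spread
late = rng.normal(100, 30, 40)    # later period: same mean, higher spread

_, p_mean = ttest_ind(early, late, equal_var=False)  # Welch's t-test on means
_, p_var = levene(early, late)                       # Levene's test on spread
print(f"mean-shift p = {p_mean:.3f}, variance-shift p = {p_var:.3f}")
```

A trend or step test applied only to the mean would report no change here, while a test on the spread flags it, which is why looking beyond mean trends matters for flood records.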

One of the issues, he noted, is that operationally we still have trouble defining extremes, for example, the 100-year or 500-year flood. This affects the quality of our operational flood estimates. Some session participants thought there has been inadequate flood research over recent decades to improve current techniques. One breakout session participant reinforced this point with an analogy to the regulation of toxics relative to human health, where no single study seems to have produced changes in regulation; rather, it is a “preponderance of the evidence” issue, with many investigators taking many approaches. Some attendees added that a considerable effort in hydrologic research, with pertinent and timely results, needs to be resumed before significant progress can be made in its application to operations.

Finally, England stated that many participants had noted a “disconnect” between the fields of climate modeling and hydrology, a theme raised earlier in the workshop in Upmanu Lall’s presentation. For flooding, we usually do a single frequency analysis of the data, but climate models have shown that there are often alternating epochs of greater and lesser precipitation. Further, the source of moisture for the largest floods may be different from that of less extreme events. Work with data sets, retrospective analysis, and projections may be needed to unify the climate and hydrologic sciences for operational benefits, according to England and some other participants.