
APPENDIX D
Concepts of Probability in Hydrology
Rainfall depths for specific durations and streamflow peaks occurring
during long periods of time are stochastic variables and can be analyzed as
such. Statistical and probabilistic analyses allow the development of proba-
bility statements or estimates related to the magnitude of certain events.
Such estimates can be used for design purposes.
Random or stochastic variables may be discrete or continuous. Rainfall accumulation and streamflow are generally considered continuous, since they may take any value on the real axis (or at least any positive value). To
quantify or parameterize the probability of occurrence of a continuous ran-
dom variable, one can use a standard probability density function (pdf) f(X).
The probability of a random variable, X, taking a value in the infinitesimal
range [X, X + dX], is f(X) dX. Given the pdf f(X), then the probability that
the random variable X assumes a value between x1 and x2 is
P[x1 ≤ X ≤ x2] = ∫_{x1}^{x2} f(x) dx
The distributions of a random variable are frequently characterized by
shape measures or moments. The mean, a measure of central tendency, is
defined by
μ = ∫_{−∞}^{+∞} x f(x) dx = E(X)
where E is called the expectation operator. The variance, a measure of dispersion, is
σ² = ∫_{−∞}^{+∞} (x − μ)² f(x) dx = E[(X − μ)²]
The standard deviation is the square root of the variance. The skewness is
the third moment around the mean, and measures deviations from symme-
try:
G = ∫_{−∞}^{+∞} (x − μ)³ f(x) dx
Higher order moments are also defined. A standardized measure of asymmetry is the coefficient of skewness:

γ = G/σ³
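These definitions carry over directly to sample estimates. A minimal sketch in Python (the peak-flow values are hypothetical, purely for illustration):

```python
import math

# Hypothetical annual peak flows (cfs); illustrative values only.
peaks = [8200.0, 12500.0, 6100.0, 15300.0, 9800.0, 22000.0, 7400.0, 11100.0]

n = len(peaks)
mu = sum(peaks) / n                             # mean, estimate of E(X)
var = sum((x - mu) ** 2 for x in peaks) / n     # variance (moment form)
sigma = math.sqrt(var)                          # standard deviation
third = sum((x - mu) ** 3 for x in peaks) / n   # third central moment, G
gamma = third / sigma ** 3                      # coefficient of skewness, G/sigma^3
```

A positive `gamma` indicates a right-skewed sample, as is typical of annual flood peaks.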
BULLETIN 17 PROCEDURES FOR FLOOD-FREQUENCY ANALYSIS
Two broad classes of flood-frequency analyses can be identified, depending upon whether or not stream-gaging records exist at or near the location of interest. Procedures available for use at ungaged sites, however, either use directly, or depend on, results of procedures designed for use at gaged sites; they offer no useful insights that are not given more clearly by discussion of procedures designed for use at gaged sites. For this reason, only those procedures designed for use with gage data are discussed here.
In order to promote correct and consistent application of statistical flood-
frequency techniques by the many private, local, state, and federal agencies
having responsibilities for water resource management, the U.S. Water Re-
sources Council formulated a set of guidelines for flood-frequency analysis
known as Bulletin 17, or the WRC procedure (Interagency Advisory Committee on Water Data, 1982). These guidelines, originally issued in 1976,
reflect an evaluation, selection, and synthesis of widely known and generally
recognized methods for flood-frequency analysis. The guidelines prescribe a particular procedure but do permit properly documented and supported
departures from the procedure in cases where other approaches may be more
appropriate. Federal agencies are expected to follow these guidelines, and
nonfederal organizations are encouraged to do so.
In broad outline, Bulletin 17 characterizes flood occurrence at any loca-
tion as a sequence of annual events or trials. The occurrence of multiple trials
(floods) per year is not addressed. Each annual trial is characterized by a
single magnitude (peak flood discharge). The magnitudes are assumed to be
mutually independent random variables following a log-Pearson Type III
probability distribution; that is, the logarithms of the peak flows follow a
Pearson Type III (gamma) distribution. This distribution defines the proba-
bility that any individual annual peak will exceed any specified magnitude.
Given these annual exceedance probabilities, the probabilities of multiyear
events, such as the occurrence of no exceedances during a specified design
period, can be calculated as explained below. By considering only annual
events, Bulletin 17 reduces the flood-frequency problem to the classical
statistical problem of estimating the log-Pearson probability curve using a
random sample consisting of the record of annual peak flows at a site.
The Bulletin 17 procedures utilize three broad categories of data: system-
atic records, historical records, and regional information. The systematic
record is the product of one or more programs of systematic streamflow
gaging at a site. The essential criterion is that the annual peak be observed or
estimated each year regardless of the magnitude of the peak. Breaks in the
systematic record can be ignored and the several segments treated as a single
sample, provided that the breaks are unrelated to flood magnitudes. Thus,
the systematic record is intended to form a representative and unbiased
random sample of the population of all possible annual peaks at the site.
In contrast to the systematic record, the historical record consists of an-
nual peaks that would not have been observed except for evidence indicating
their unusual magnitude. Flood information derived from newspaper artic-
les, personal recollections, and other historical sources almost inevitably
must be of this type. The historical record can be used to supplement the
systematic record provided that all flood peaks exceeding some threshold
discharge during the historical record period have been recorded.
Regional information is also used to improve the reliability of the Bulletin
17 frequency curve by incorporating information on flood occurrence at
nearby sites. The regional information can include a generalized skew coef-
ficient that is used to adjust the station skew for short records. Bulletin 17 also
provides procedures for adjustment of short-record frequency curves by
means of correlations with longer records at nearby stations and for compu-
tation of weighted averages of independent estimates.
The Bulletin 17 frequency analysis overall has six major steps:
1. systematic record analysis;
2. outlier detection and adjustment, including historical adjustment and
conditional probability adjustment;
3. generalized skew adjustment;
4. frequency curve ordinate computation;
5. probability plotting position computation; and
6. confidence-limit and expected-probability computation.

Bulletin 17 describes each of these steps in some detail. The key to the
procedure in most cases reduces to the use of the sample mean x̄ and sample variance s² of the logarithms x of the observed peak flows. These two sample
moments, along with a weighted average of the sample skewness coefficient
and a regional estimate of the skewness coefficient, generally serve to define
the estimated frequency distribution of the floods at a site. Several of the
steps above pertain to special techniques and adjustments employed to deal
with special cases, unusual flood values, and historical information.
Mixtures of hazards can occur in flood-frequency analysis, for example,
when several distinct causes of flooding, such as thunderstorms, hurricanes,
snowmelt, or ice-jams, can be identified and give rise to floods of such
different magnitudes. In such cases, each type of flood can be analyzed in
isolation, producing a conditional probability curve for each type. The total
or unconditional probability distribution is constructed by weighting each
conditional distribution in proportion to the probability that a flood will be
of that type (Benjamin and Cornell, 1970; Vogel and Stedinger, 1984).
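This total-probability weighting can be sketched as follows (the two conditional exceedance curves and the weights are hypothetical stand-ins, not values from the text):

```python
import math

def exceed_thunder(q):
    """P[Q >= q | flood is thunderstorm-caused] (illustrative exponential tail)."""
    return math.exp(-q / 5_000.0)

def exceed_snowmelt(q):
    """P[Q >= q | flood is snowmelt-caused] (illustrative exponential tail)."""
    return math.exp(-q / 12_000.0)

# P(a given flood is of each type); the weights must sum to one.
w_thunder, w_snow = 0.7, 0.3

def exceed_total(q):
    """Unconditional exceedance curve: weighted sum of the conditional curves."""
    return w_thunder * exceed_thunder(q) + w_snow * exceed_snowmelt(q)
```

Note how the flatter (snowmelt) curve dominates the far tail of the mixture even though it carries the smaller weight.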
FLOOD RECURRENCE INTERVALS AND FLOOD RISK
A widely used means of expressing the magnitude of an annual flood
relative to other values is the return period or the probability of exceedance.
The 100-year or 1 percent chance flood is defined as that discharge having a 1
percent average probability of being exceeded in any one year; this discharge
can be estimated from the probability distribution of annual floods. (Note that the 100-year flood is not a random event like the flood that occurs in a particular year; rather it is a quantile of the flood-frequency distribution.) If
the occurrence of an annual flood exceeding the 100-year flood is called an
exceedance or a success, and if annual floods are independent of each other,
then the probability of an exceedance on the next trial is 1 percent, regardless
of whether the present trial resulted in success or failure. The probability
that the next trial will fail but the second one succeed is 0.99 x 0.01. The
probability that the next success will be on the third trial is 0.99 x 0.99 x
0.01, on the fourth trial (0.99~3 X 0.01, end so on. Thus, the mostlikely time
for the next success or exceedance of the 100-year flood is on the next trial.
The average time to the next exceedance, however, works out to be 100 trials.
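Under the independence assumption, the time to the next exceedance follows a geometric distribution, which makes both claims easy to check numerically (a sketch, not part of Bulletin 17):

```python
# P(next exceedance of the 100-year flood occurs on trial k), k = 1, 2, ...
p = 0.01

def pmf(k):
    return (1.0 - p) ** (k - 1) * p   # k - 1 failures, then one success

# The mode is k = 1 (each pmf value is 0.99 times the previous one),
# while the mean works out to 1/p = 100 trials.
mean = sum(k * pmf(k) for k in range(1, 20_001))   # truncated sum; the tail is negligible
```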
This paradox is resolved by noting that the probability distribution of interexceedance times is very different from the familiar tightly clustered, symmetrical, bell-shaped normal curve, so it is no surprise that intuition developed on the normal distribution should fail here.

TABLE D-1 Approximate Probabilities That No Flood Exceeds the 100-Year Flood

Period, yr               1    2   10   20   50   100   200
Probability, percent    99   98   90   82   60    40    15

TABLE D-2 Probability (in percent) That Indicated Design Flood Will Be Exceeded During Specified Planning Periods

Average Return Period      Length of Planning Period (yr)
of Design Flood (yr)        1       25      50      100     200
25                          4       64      87       98     100
50                          2       40      64       87      98
100                         1       22      39       63      87
200                         0.5     12      22       39      63
1,000                       0.1      3       5       10      20
10,000                      0.01     0.25    0.5      1       2
100,000                     0.001    0.025   0.05     0.1     0.2
1,000,000                   0.0001   0.002   0.005    0.01    0.02
A related problem is determination of the probability that a time interval
will contain one or more events. Continuing the example, the probability
that a 2-year period will contain at least one exceedance of the 100-year flood
is one minus the probability that both years will be free of exceedances, or
1 - (0.99 x 0.99), which is very nearly 0.02. Similarly, the probability that a
3-year period will contain at least one exceedance is 1 − (0.99)³, which is
very nearly 0.03. The approximate probabilities that various planning pe-
riods will be without exceedances of the 100-year flood are listed in Table
D-1. Thus, there is only a 15 percent chance that a 200-year period will be
free of exceedances of the 100-year flood.
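The entries in Tables D-1 and D-2 all follow from this one formula; a short sketch:

```python
def risk(T, n):
    """P(at least one exceedance of the T-year flood in n years), independent trials."""
    p = 1.0 / T                       # annual exceedance probability
    return 1.0 - (1.0 - p) ** n       # complement of n exceedance-free years
```

For example, `risk(100, 200)` is about 0.87, matching the roughly 15 percent chance that a 200-year period is free of exceedances of the 100-year flood.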
On the other hand, if a family moves into a house at the edge of the 100-
year floodplain and plans to stay for 20 years until the children are grown,
the chances they move before they get flooded are about 80 percent. Thus,
the chances one experiences a flood depend on the annual risk (1 percent in
this case) and the length of the planning period or anticipated length of
exposure to that risk (either 200 or 20 years in these two examples).
This last observation is particularly relevant to the safety analysis of dams.
Dams built or in existence today may continue to function and hold water for
centuries. Though it seems impractical to try to plan for such long time
periods given the imprecision with which we can foresee the future, cer-
tainly society should be concerned with the safe operation of existing and
proposed dams over the next 50-100 years. Table D-2 considers four planning
periods (25, 50, 100, and 200 years) and several design floods. Specified
within the table are the chances each design flood value will be exceeded
(one or more times) during each planning period. Certainly a 100-year
planning period does not correspond to the situation faced by an individual
family that intends to stay in a neighborhood or location for only a few years;
however, it is very appropriate for cities, counties, and state and federal
governments, which are concerned with society's long-term welfare. Con-
sider the 1,000-year flood listed in Table D-2. While it has only a 0.1 percent
chance of occurring in any one year, it has a 10 percent chance of occurring in
100 years. Thus, in situations where dam failure or overtopping is likely to
have disastrous consequences, and one wants to ensure that such an event is
unlikely to occur over a long period of time, one must design for events with
very long return periods.
ESTIMATING THE RETURN PERIOD OF THE PMF
In this section we consider if it is possible to credibly estimate the probabil-
ity that a probable maximum flood (PMF) estimate will be exceeded, even to
within an order of magnitude, through examination of the rainfall-runoff
relationship and the probabilities of various events. The following section
considers whether statistical approaches are able to provide reliable and
credible estimates of flood-frequency curves out into the 10,000- to
1,000,000-year event range.
It would be useful if a reliable and credible estimate of the return period
or, equivalently, the exceedance probability of a PMF could be obtained by
analysis of rainfall-runoff processes. Thus, one would start with an analysis
of the frequency of extreme precipitation, as has been done in some studies
(e.g., Harriman Dam—see Appendix A, Yankee Atomic Electric Co.). Then
one would need to consider how such precipitation totals would be distrib-
uted in time and would interact with winds and antecedent conditions
within the basin (both moisture levels in the soil and snowpack in some
places). All possible combinations of these factors and the probabilities they
would occur jointly must then be determined to arrive at the frequency
distribution of extreme floods, as indexed by the surcharge over the dam or
the flood volume, maximum discharge rate, or jointly by both.
While the algebra can easily be written to describe these computations,
calculating the probability of two conditions arising together (two probable
maximum precipitations (PMPs) back-to-back or a PMP on the probable
maximum snowpack), neither of which has been experienced separately,
seems like a very imprecise activity. Newton (1983) has tried to perform such
a calculation, and his analysis illustrates the problems.
Newton indicates that one might consider the probability of a near-PMP storm on a particular watershed to be 10^-8, with 10^-6 defining an upper confidence limit. An antecedent storm of 15-50 percent of the PMP is used to define antecedent moisture conditions. He assigned a probability of 1/130

per year for such a storm and 3/365 for the probability that it would occur in the 3 days preceding the PMP. This yields a probability of (3/365)(1/130), or 6 × 10^-5. Thus, that antecedent condition followed by a PMP rainfall has a probability of

(6 × 10^-5)(10^-8) = 6 × 10^-13
Now consider the following issue. Certainly PMP-type storms are more likely
to occur in some seasons than others and when regional atmospheric condi-
tions are unstable. Given that meteorologic conditions will be such that a
PMP would occur, the conditional probability of a 15-50 percent PMP preceding storm may be only 10^-3, or even 10^-2. This would yield a probability of this joint event of

(10^-2 to 10^-3)(10^-8) = 10^-10 to 10^-11

rather than 6 × 10^-13.
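The arithmetic of these two scenarios is simple enough to restate in a few lines (the probabilities are Newton's illustrative values as quoted above):

```python
# Antecedent storm: annual probability 1/130, occurring in the 3 days before the PMP.
p_antecedent = (3.0 / 365.0) * (1.0 / 130.0)     # roughly 6e-5

p_pmp = 1e-8                                      # assumed near-PMP probability

# Treating the two events as independent:
p_joint = p_antecedent * p_pmp                    # roughly 6e-13

# If meteorological conditioning raises the antecedent probability to 1e-3..1e-2:
p_joint_low = 1e-3 * 1e-8                         # 1e-11
p_joint_high = 1e-2 * 1e-8                        # 1e-10
```

The conditioned estimate is two to three orders of magnitude larger, which is the point of the passage: the independence assumption, not the algebra, is what is fragile.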
Problems with defining joint probabilities for extreme events appear at
every turn. In some situations they may not matter, if the magnitude of the
antecedent storm has little effect on the basin's hydrologic response to the
PMP and on anticipated antecedent reservoir levels. However, antecedent
conditions could be very important in systems with large natural or man-
made storage capacity. In any case, the fact that antecedent conditions may
or may not have an effect on the actual performance of a reservoir illustrates
an additional difficulty with calculations aimed at assigning probabilities to
PMF events. A string of unusually rare events may have a joint probability of
occurring of less than 10^-12. However, if a near-PMP event with frequently experienced conditions is likely to occur with probability 10^-6 to 10^-8 and for a modest size reservoir is likely to cause nearly as big an outflow hydrograph as the combination with much lower probability, then the 10^-12 value is more misleading than useful. In some places in the United States, such as California, the PMP value may be as little as twice the 100-year rainfall; in places where extreme rainfalls are very unusual but possible (such as in extreme northern New England), PMP values may surpass the hundred-year rainfalls by much larger factors. These considerations demonstrate the issues
that must be addressed by an analysis that would assign return periods to
extreme flood events. One would need to consider the frequency of severe
rainfalls from modest storms to the worst observed and up to the PMP. These
must be convoluted with the conditional frequency of other conditions (soil
moisture, snowpack, flood pool elevations, base flow rates) to arrive at the
frequency distribution for spillway release rates from a proposed or existing
dam.
Development of estimates of the return period of PMF values would be a
valuable activity, and such attempts should be encouraged. In some instances, antecedent conditions may have little effect on the PMF values, and
the required calculations will be greatly simplified and can credibly be
defended. Certainly, the return period of near PMP values varies greatly
(Newton, 1983; Washington State response, Appendix A); thus, one would
expect the exceedance probabilities of PMFs to vary widely, particularly
when one considers the uncertainty associated with the other extreme mete-
orological and hydrological conditions that are assumed to pertain. Quanti-
fication of these factors could make planning more rational if such
quantification were credible.
FREQUENCY ANALYSES FOR RARE FLOODS
Bulletin 17 provides uniform procedures for estimation of floods with
modest return periods, generally 100 years or less. Extrapolation much be-
yond the 100-year flood using flood-frequency relationships based on avail-
able 30- to 80-year systematic records is often unwise. First, the sampling
error in the 2 or 3 estimating parameters is magnified at these higher return intervals (Kite, 1977). Moreover, the available record provides little indication of the shape of the flood-frequency curve or confirmation of a postulated shape at the extreme return periods of interest here. The paragraphs
below consider how those problems might be overcome.
Regionalization
If only 30-80 years of record is available at a single site, then perhaps
information at many sites can be combined to obtain a better estimate of the
frequency of extreme events.
Wallis (1980) discusses one class of such procedures. They involve defining
a standardized (dimensionless) flood-frequency curve for a region that is
scaled by an estimate of the mean flow at a site to allow calculation of a
flood-frequency curve for that site. Such procedures, called the index flood
procedure, were once used by the U.S. Geological Survey before being
abandoned because they failed to capture the effect of basin size on the shape
of the flood-frequency curve (Benson, 1968).
If one had 40 years of record at 100 gages in a region, one would have 4,000 station-years of data. However, this is still far less than the 10,000 to 1,000,000 station-years of data that would be desirable to reliably and credibly estimate floods with return periods on the order of 10^4-10^6 years.
Still there are other problems. Sometimes the drainage area for one gage
falls within the drainage area for another gage, so that the two records do not
reflect independent experiences. Even when drainage areas do not overlap,

annual floods at different gages are cross correlated, reflecting common
regional weather patterns; this too decreases the information content of
regional hydrologic networks (Stedinger, 1983b, pp. 507-509). Still other
problems are caused by the need to standardize the flow records from indi-
vidual gages if one attempts to define a standardized (dimensionless) flood-frequency curve for a region (Stedinger, 1983b, pp. 503-507).
Others have advocated Bayesian and empirical Bayesian procedures for
estimating flood risk. Stedinger (1983a, pp. 511-522) provides a review of
much of this literature. With such approaches, the information about the
mean and variance of floods provided by the available at-site flood-flow
record can be augmented using regional relationships that describe the vari-
ation of those quantities with measurable physiographic and other basic
characteristics. Unfortunately, the precision of such regional regression rela-
tionships appears to be such that the results are worth less than 20 years of at-
site data. Thus, the Bayesian procedures generally discussed result in very
modest improvements in the precision of flood-frequency curves for sites
with 40 years or more of data. Moreover, these procedures are really de-
signed for flood-frequency estimation within the range of experience, again
perhaps the 100-year flood or less.
On balance, statistical analysis of available records of annual flood peaks
is unlikely to provide reliable and credible estimates of floods with return periods of the order of 10^4-10^6 years.
Use of Historical Flood Data
Physical evidence and written records of large floods that have occurred in
the recent and distant past provide objective evidence of the likelihood and
frequency of larger floods beyond that provided by gaged flow records.
Techniques are being developed to include recent historical information in a
given river basin with the available gaged or systematic annual flood record
(Condie and Lee, 1981; Cohn and Stedinger, 1983). Use of 200 years of
pregaged-record experience might be worth 100 years of systematic record.
This would be useful for many purposes but is still insufficient for ours.
Alternatively, multidisciplinary paleohydrologic techniques that rely on
stratigraphic and geomorphic evidence of extraordinary floods (Jarrett and Costa, 1982) have the best potential for illustrating what size floods can occur. This is particularly true when a large area is searched that would provide evidence of extraordinary floods, had they occurred. When such techniques are applicable, they should be used to demonstrate that calculated PMFs are credible and are neither unreasonably large nor small; current paleohydrologic techniques seem better suited for illustrating what is or is not possible than they are for constructing a safety evaluation flood.

Numerical Evaluation of Expected Damages
A numerical estimate of the expected damage costs calculated as part of a
risk-cost analysis is generally based upon (1) a proposed flood-frequency curve extended to the PMF and (2) a set of flood damage estimates corresponding to discrete and particular flood flows. From these data, one needs to construct the probability density function f(q) for the flood flows and an estimate of the damage function D(q). The function D(q) is used to interpolate the damages associated with various flood flows between the flood flow values for which the corresponding damages have been estimated by flood routing and actual damage estimates. Once these two functions have been constructed, one can numerically integrate their product D(q)f(q) from qmin to the PMF to get an estimate of the expected damages associated with floods in this range. Events smaller than qmin are assumed not to cause damages, while the damages associated with floods larger than the PMF are neglected. Neglecting those large events should not affect the relative costs of various alternatives if the dam would be overtopped and fail for all designs, causing equivalent damages. Each step is discussed below.
Estimating the Probability Density Function
If F(q) is the probability of a flood less than q, then the probability density function f(q) is the first derivative of F(q) with respect to q. The best way to calculate f(q) is to develop an analytical description of the flood-frequency curve yielding an analytical expression for f(q). For example, if F(q) is obtained by a linear extension on lognormal paper of the flood-frequency curve through the PMF, then over that range

f(q) = [(2π)^0.5 v q]^-1 exp[−0.5(ln q − m)²/v²]

where the values of m and v can be determined from boundary conditions such as

1 − F(PMF) = 10^-4 or 10^-6 and 1 − F(q100) = 0.01

where q100 is the 100-year flood.
If an analytical description of the postulated flood-frequency curve cannot be estimated, then a finite difference formula of at least second order can be used to estimate f(q) at the required points (Hornbeck, 1975, p. 22). This approach, though less accurate and more trouble, can produce satisfactory results.
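A sketch of the analytical route, fitting the lognormal extension through two boundary points (the q100 and PMF values are taken from the worked example later in this appendix; `statistics.NormalDist` supplies the normal quantiles):

```python
import math
from statistics import NormalDist

q100, q_pmf = 20_000.0, 120_000.0     # 100-year flood and PMF (cfs), example values
F_pmf = 1.0 - 1e-4                    # assumed nonexceedance probability of the PMF

# On lognormal paper, ln q = m + v*z with z the standard normal quantile,
# so the two boundary points give two linear equations in m and v.
z1 = NormalDist().inv_cdf(0.99)
z2 = NormalDist().inv_cdf(F_pmf)
v = (math.log(q_pmf) - math.log(q100)) / (z2 - z1)
m = math.log(q100) - v * z1

def f(q):
    """Lognormal pdf f(q) = [(2*pi)**0.5 * v * q]**-1 * exp[-0.5*((ln q - m)/v)**2]."""
    return math.exp(-0.5 * ((math.log(q) - m) / v) ** 2) / (math.sqrt(2.0 * math.pi) * v * q)
```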
Estimating the Damage Functions
From flood routing exercises using appropriate reservoir operating poli-
cies and surveys of the associated damages, agencies can estimate the damages Di associated with the values of several large inflows qi and a particular
reservoir and spillway design. These values can be used to develop an esti-
mate of the damages D(q) associated with each value of q over this range.
Such a function is required in the estimation of the expected damages.
It is generally appropriate to assume that D(q) is a continuous function of
q except at the critical flow qc, above which the dam would be overtopped
and would breach and fail. Thus, it is appropriate to develop an estimate of
D(q) for q < qc (where a breach does not occur) and one for q ~ qc. A
satisfactory estimate of D(q) over each interval can usually be provided by
an interpolating cubic spline with natural end conditions (AhIberg et al.,
1967~. A piecewise-linear approximation could also be used but would pro-
vide less accurate results.
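A piecewise-linear version of this construction can be sketched as follows (the flow-damage pairs, qc, M, and L are all hypothetical; the text's preferred choice, a natural cubic spline, would replace the linear interpolation step):

```python
import bisect

# Flows (cfs) with routed damage estimates ($); the last point sits at qc.
qs = [10_000.0, 30_000.0, 60_000.0, 100_000.0]
ds = [0.0, 2.0e6, 9.0e6, 3.0e7]
qc = 100_000.0            # critical inflow: dam overtops and breaches
M, L = 4.0e7, 1.5e7       # maximum downstream damage; dam loss and rebuilding

def damage(q):
    if q >= qc:
        return M + L                        # breach branch: discontinuous jump at qc
    if q <= qs[0]:
        return 0.0
    i = bisect.bisect_right(qs, q) - 1      # segment containing q
    t = (q - qs[i]) / (qs[i + 1] - qs[i])
    return ds[i] + t * (ds[i + 1] - ds[i])  # linear interpolation on that segment
```

Keeping the breach branch separate preserves the jump in D(q) at qc that the text emphasizes.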
Numerical Integration
Once an analytical expression for f(q) is determined and an approximation for D(q) constructed, one can develop a precise numerical estimate of the expected damages associated with the design or proposal to which D(q) corresponds. This is done by numerically integrating D(q)f(q).
Because D(q) is discontinuous at qc while some of its derivatives are most likely discontinuous at the qi points for which Di was specified, the product D(q)f(q) should be integrated separately over each interval qi to qi+1 and
the results added. This avoids accuracy problems that those discontinuities
might cause with some numerical integration algorithms. Over each subin-
terval one can then employ at least a second-order numerical quadrature
(integration) formula (Hornbeck, 1975, pp. 148-150) with a sufficiently
small step size to ensure accurate results. Gould (1973) shows that Simpson's
second-order numerical integration formula can yield significantly better
estimates of expected damages than the commonly used midpoint procedure
with a moderate step size. The midpoint procedure can grossly overestimate the actual expected damages associated with a given damage function and
a proposed frequency curve. There is no excuse for not carefully processing
damage cost estimates with a proposed flood-frequency curve to accurately
determine the associated expected damages. This can be done as described
above.
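Putting the pieces together, a sketch of the piecewise Simpson integration (the pdf parameters are the T = 10^4 values from the example in the next section; M, L, s, and qc are hypothetical):

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    acc = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return acc * h / 3.0

r, q0 = 4.6e-5, 80_000.0                    # exponential pdf parameters (T = 10^4)
f = lambda q: r * math.exp(-r * (q + q0))   # f(q) = r exp[-r(q + q0)]

M, L, s, qc = 4.0e7, 1.5e7, 5.0e-5, 100_000.0
D = lambda q: (M + L) if q >= qc else M * (1.0 - math.exp(-s * (q - 10_000.0)))

# Integrate D(q)f(q) separately on each side of the discontinuity at qc.
expected = (simpson(lambda q: D(q) * f(q), 10_000.0, qc)
            + simpson(lambda q: D(q) * f(q), qc, 120_000.0))
```

Splitting the integral at qc is exactly the precaution the text recommends: a single quadrature spanning the jump would smear the discontinuity.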
A Simple Example of Expected Damages
In Appendix E, simple flood frequency and damage estimation models are
presented along with the results of five cases that showed the expected an-
nual damages depending on the characteristics of the project and the param-
eters of the models. In this section we indicate how the expected damages
were computed. The flood-frequency and damage functions were chosen

specifically to admit a closed-form solution that would not require numeri-
cal integration, but the basic technique would be the same for other
functions.
The peak flow rates into the reservoir are assumed to follow an exponential distribution for high flows, so that for q ≥ 10,000 cubic feet per second (cfs) the probability that the peak flow in any year is less than q is

F(q) = 1 − exp[−r(q + q0)]

The probability density function is

f(q) = dF/dq = r exp[−r(q + q0)]
The probable maximum flood is assumed to be 120,000 cfs with a return period T of 10^4 or 10^6 years, and the 100-year flood (F = 0.99) is assumed to be 20,000 cfs. If the cumulative distribution function F is to fit these two points, then r and q0 must satisfy

1 − 1/T = F(120,000) = 1 − exp[−r(120,000 + q0)]
1 − 1/100 = F(20,000) = 1 − exp[−r(20,000 + q0)]

which transforms to

r(120,000 + q0) = ln T
r(20,000 + q0) = ln 100

yielding

r = 10^-5 ln(T/100)
q0 = (ln 100)/r − 20,000
For the two possible values of T one obtains the following parameter values:

T          r             q0
10^4       4.6 × 10^-5   80,000
10^6       9.2 × 10^-5   30,000
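These constants can be verified directly (a short sketch reproducing the algebra above):

```python
import math

def params(T):
    """Fit r and q0 so that F(20,000) = 0.99 and F(120,000) = 1 - 1/T."""
    r = 1e-5 * math.log(T / 100.0)
    q0 = math.log(100.0) / r - 20_000.0
    return r, q0

def exceed(x, T):
    """P[q >= x] = 1 - F(x) = exp[-r(x + q0)]."""
    r, q0 = params(T)
    return math.exp(-r * (x + q0))
```

Evaluating `exceed` at the tabulated flows reproduces Table D-3 below; for instance, `exceed(50_000, 1e4)` is about 2.5 × 10^-3.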
This leads to the exceedance probabilities for different values of T shown in
Table D-3.
TABLE D-3 Exceedance Probabilities for Different Values of T

           P[q ≥ x] = 1 − F(x)
x          T = 10^4       T = 10^6
10,000     1.6 × 10^-2    2.5 × 10^-2
20,000     1.0 × 10^-2    1.0 × 10^-2
50,000     2.5 × 10^-3    6.0 × 10^-4
75,000     8.0 × 10^-4    6.0 × 10^-5
100,000    2.5 × 10^-4    6.0 × 10^-6
120,000    1.0 × 10^-4    1.0 × 10^-6

In this example there is a maximum possible damage to downstream property, denoted M. At flows below 10,000 cfs, no damage occurs, while above that value, damages follow an exponential curve with M as the limiting value. If the dam is overtopped and breaks, the surge wave is assumed to cause the maximal downstream damages M, to which we must add the cost from loss of services and rebuilding the dam. This latter quantity is denoted L. Thus, the damage function is
D(q) = 0                                    q ≤ 10,000
D(q) = M{1 − exp[−s(q − 10,000)]}           10,000 < q < qc
D(q) = M + L                                qc ≤ q < 120,000

where qc is the critical inflow above which the dam fails. Note that the parameter s determines at what point damages become a significant fraction of the maximum M. If s is "large" (s > 10^-4), significant damages occur at or below the 100-year flood, while if s is smaller, the relatively high damages only occur with larger inflows.
To compute the average annual damages, D(q)f(q) is integrated from the
no-damage flow up to the PMF. In our example, if the flow ever exceeds the
PMF, the damage will not depend on the particular dam and spillway design
selected. Thus, the expected damages of interest are
D̄ = ∫_{10,000}^{120,000} D(q)f(q) dq

   = ∫_{10,000}^{qc} M{1 − exp[−s(q − 10,000)]} f(q) dq + ∫_{qc}^{120,000} (M + L) f(q) dq
Notice that the integral has been broken where D(q) is discontinuous. In
most cases one would have to perform the integration numerically. How-
ever, the probability and damage functions have been selected so that a

closed-form expression for the expected annual damages can be obtained. Here the probability density function is

f(q) = r exp[−r(q + q0)]

so that the first integrand becomes

D(q)f(q) = Mr{exp[−r(q + q0)] − K exp[−(r + s)q]}

where

K = exp[10,000s − rq0]
The expected damages are then

E[D] = M{−exp[−r(q + q0)] + [rK/(r + s)] exp[−(r + s)q]} |_{q=10,000}^{qc} + (M + L)[F(120,000) − F(qc)]

where

F(q) = 1 − exp[−r(q + q0)]
The average annual damages depend on the damage function parameters M, L, and s; the parameters of the flood flow frequency distribution r and q0 (which in turn depend on the return period assigned to the PMF, T); and the critical flow qc. In general, dam design and operating policy influence the level of damages that will result from different flood events, and these in turn determine the average annual damages.
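The closed-form evaluation can be sketched end to end (r and q0 are the T = 10^4 parameters; M, L, s, and qc are hypothetical stand-ins, since Appendix E's actual case values are not reproduced here):

```python
import math

r, q0 = 4.6052e-5, 80_000.0           # T = 10^4 flood-frequency parameters
M, L, s, qc = 4.0e7, 1.5e7, 5.0e-5, 100_000.0
K = math.exp(10_000.0 * s - r * q0)   # K = exp[10,000s - r*q0]

def F(q):
    return 1.0 - math.exp(-r * (q + q0))

def antider(q):
    """Antiderivative of D(q)f(q) = M*r*{exp[-r(q+q0)] - K*exp[-(r+s)q]} below qc."""
    return M * (-math.exp(-r * (q + q0)) + r * K / (r + s) * math.exp(-(r + s) * q))

# E[D]: no-breach integral from 10,000 to qc, plus the breach term (M + L)
# times the probability mass between qc and the PMF.
ED = antider(qc) - antider(10_000.0) + (M + L) * (F(120_000.0) - F(qc))
```

Comparing `ED` against the piecewise numerical quadrature described earlier is a useful cross-check on both implementations.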