APPENDIX F

Construction of Confidence Intervals
for Mathematical Combinations
of Random Variables
Many of the variables central to risk and benefit analyses will generally
be measured with imprecision. Accordingly, the Committee has recom-
mended in this report that the estimates for such variables be reported as
90 percent confidence intervals rather than as single point estimates. (Of
course, inadequacies with the data will require most of the ranges to be
subjectively determined, on the basis of the analyst's judgment.)
In many instances, variables are estimated by mathematically combining (e.g., adding, multiplying) estimates of other variables. For instance,
total benefits forgone due to the withdrawal of chlorobenzilate are
measured by the sum of benefits forgone from the citrus and noncitrus
uses. If the mathematical manipulations involve two or more variables
measured as intervals, caution must be exercised in forming the
confidence interval for the derived estimate. This appendix presents the
correct procedure for combining estimates of random variables to derive
estimates of and confidence intervals for other random variables.¹
The following discussion is framed largely in terms of the two-variable
case. The randomly distributed variables are denoted by x and y.
Further, their expected values and variances are denoted by E(x) and
E(y) and by V(x) and V(y), respectively. The covariance between x and y
is denoted by C(x, y).

SUMS AND DIFFERENCES OF RANDOM VARIABLES
If the variables x and y are combined to form a new variable z = x + y, then E(z) = E(x + y) = E(x) + E(y). Thus, the best estimate of z is simply the sum (or difference) of the best estimates of x and y.

The variance of z is V(z) = V(x) + V(y) + 2C(x, y). Clearly, if the variables are independently distributed, the variance of their sum or difference is simply the sum of their variances: V(x) + V(y). These results generalize easily to the case involving three or more random variables (see Mood et al. 1974).
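These identities are easy to check numerically. A minimal Python sketch (not part of the original appendix; the function name and sample values are illustrative):

```python
def combine_sum(mean_x, var_x, mean_y, var_y, cov_xy=0.0):
    """Mean and variance of z = x + y:
    E(z) = E(x) + E(y),  V(z) = V(x) + V(y) + 2C(x, y)."""
    return mean_x + mean_y, var_x + var_y + 2.0 * cov_xy

# Independent variables: the variance of the sum is the sum of the variances.
mean_z, var_z = combine_sum(2.0, 0.25, 5.0, 0.5)   # -> (7.0, 0.75)

# Correlated variables: the covariance term matters.
mean_w, var_w = combine_sum(1.0, 1.0, 1.0, 1.0, cov_xy=0.5)  # -> (2.0, 3.0)
```

For a difference z = x − y, the means subtract and the covariance term enters with a minus sign.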
PRODUCT OF RANDOM VARIABLES
Suppose that x and y are to be combined to form a product, z = xy. In
this case, E(z) = E(xy) = E(x)E(y) + C(x, y). Of course, if x and y are
uncorrelated, then E(z) = E(x)E(y).
The variance of the product of two random variables assumes a rather
complex form when x and y are correlated. In general, the data available
to oPP would not permit the use of this formula, so it is not shown here.
(The interested reader may refer to Mood et al. 1974.) If x and y are
independently distributed, the variance of the product is relatively
straightforward:
V(xy) = E(x)² V(y) + E(y)² V(x) + V(x) V(y).
These results also generalize to cases involving three or more variables
(Goodman 1960).
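For the independent case, the product formula can be coded directly; a short Python sketch (illustrative, not from the original text):

```python
def product_moments_independent(mean_x, var_x, mean_y, var_y):
    """Exact mean and variance of z = xy when x and y are independent:
    E(z) = E(x)E(y),  V(z) = E(x)^2 V(y) + E(y)^2 V(x) + V(x)V(y)."""
    mean_z = mean_x * mean_y
    var_z = mean_x**2 * var_y + mean_y**2 * var_x + var_x * var_y
    return mean_z, var_z

# Example: E(x)=2, V(x)=1 and E(y)=3, V(y)=4 give E(z)=6 and
# V(z) = 4*4 + 9*1 + 1*4 = 29.
```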
QUOTIENT OF TWO RANDOM VARIABLES
In general, there are no simple exact formulas for the mean and variance
of the quotient of two random variables, although there are some
approximate formulas (Mood et al. 1974). The formulas used by the
Committee are
E(x/y) ≈ E(x)/E(y) − C(x, y)/E(y)² + E(x)V(y)/E(y)³

and

V(x/y) ≈ [E(x)/E(y)]² [V(x)/E(x)² + V(y)/E(y)² − 2C(x, y)/(E(x)E(y))].
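The approximate quotient formulas translate directly into code. A minimal sketch of these delta-method-style expansions (illustrative names, not from the original text):

```python
def quotient_moments(mean_x, var_x, mean_y, var_y, cov_xy=0.0):
    """Approximate mean and variance of z = x / y, using expansions
    of the kind given in Mood et al. (1974)."""
    mean_z = (mean_x / mean_y
              - cov_xy / mean_y**2
              + mean_x * var_y / mean_y**3)
    var_z = (mean_x / mean_y) ** 2 * (var_x / mean_x**2
                                      + var_y / mean_y**2
                                      - 2.0 * cov_xy / (mean_x * mean_y))
    return mean_z, var_z

# Uncorrelated example: E(x)=4, V(x)=1, E(y)=2, V(y)=0.25
# gives E(x/y) ~ 2.125 and V(x/y) ~ 0.5.
```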

Clearly, both of these formulas simplify somewhat when x and y are
uncorrelated.
APPLYING THE FORMULAS
In applying the formulas described above, the Committee found it
necessary to adopt the following assumptions.
1. Both x and y have normal distributions.
2. The interval estimates for x and y represent 90 percent confidence
intervals.
3. The midpoints of the ranges estimate the expected values for the
variable.
4. The product and quotient of x and y create distributions that can
be reasonably approximated by the normal distribution.
5. x and y are uncorrelated (unless stated otherwise).
The application of these formulas can be illustrated with an example
from Chapter 7. The benefits of chlorobenzilate to the Florida IPM
program were estimated in Chapter 7 to range from $0 to $3
million/year. The non-IPM benefits to Florida citrus growers were
estimated to fall between $0.6 and $6.6 million annually.
What is the appropriate interval for the sum of these two benefits? In
accordance with the above-mentioned assumptions, we can restate, say, the non-IPM benefits as equalling $3.6 million (± $3.0 million). The upper limit of the 90 percent confidence interval is presumed to be $6.6 million in this instance. Thus, $6.6 million = $3.6 million + 1.64s_x, where s_x represents the estimated standard deviation around the estimated mean value for the non-IPM benefits. This equation implies that the variance around the estimated mean is s_x² = $3.33 × 10¹². Similar reasoning applied to the IPM benefit estimates yields an estimated variance of $8.31 × 10¹¹. The square root of the sum of these variances provides the correct estimate of the standard deviation around the sum of the mean values for the two variables, namely s_{x+y} = $2.04 million.

Thus, the sum of the midpoints of the two benefit measures yields an estimate of the aggregate benefits in Florida equal to $5.1 million (± 1.64 × $2.04 million = ± $3.35 million). Alternatively, the aggregate Florida benefits are estimated to range from $1.75 to $8.45 million.
It is interesting to note that this estimated range is quite different from
the one obtained from simple additions of the lower and upper limits for
the individual benefit estimates. This "naive" approach implies a much
larger range: $0.6 to $9.6 million.
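The whole Florida calculation can be reproduced under the stated assumptions (normal distributions, midpoints as means, 1.64 as the rounded two-sided 90 percent normal quantile). A Python sketch, not part of the original appendix:

```python
Z90 = 1.64  # rounded two-sided 90 percent normal quantile used in the text

def interval_to_moments(lo, hi):
    """Treat the midpoint of a 90 percent confidence interval as the mean
    and the half-width divided by 1.64 as the standard deviation."""
    mean = (lo + hi) / 2.0
    sd = (hi - lo) / 2.0 / Z90
    return mean, sd

# Benefit estimates in $ millions per year, from the example above.
ipm_mean, ipm_sd = interval_to_moments(0.0, 3.0)  # Florida IPM program
cit_mean, cit_sd = interval_to_moments(0.6, 6.6)  # non-IPM citrus uses

# Independent sum: means add, variances add.
total_mean = ipm_mean + cit_mean              # 5.1
total_sd = (ipm_sd**2 + cit_sd**2) ** 0.5     # close to $2.04 million

lo_total = total_mean - Z90 * total_sd        # about 1.75
hi_total = total_mean + Z90 * total_sd        # about 8.45
```

The combined interval is narrower than the naive $0.6 to $9.6 million range because independent errors are unlikely to sit at their extremes simultaneously.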

NOTE
1. This discussion draws heavily on Mood et al. (1974).
REFERENCES
Goodman, L.A. (1960) On the exact variance of products. Journal of the American
Statistical Association 55:708-713.
Mood, A.M., F.A. Graybill, and D.C. Boes (1974) Introduction to the Theory of Statistics.
New York: McGraw-Hill.