
DEPARTMENT OF HOMELAND SECURITY BIOTERRORISM RISK ASSESSMENT

Appendix G
On the Quantification of Uncertainty and Enhancing Probabilistic Risk Analysis

Nozer D. Singpurwalla
Professor, Department of Statistics
George Washington University, Washington, D.C.

PREAMBLE

This appendix consists of two parts. In Part 1, we overview some commonly used approaches for quantifying uncertainty. The overview is necessarily terse, but adequate references are provided. Herein we introduce the notions of chance, probability, likelihood, belief, and plausibility, terms that commonly arise in the context of risk analysis. Also mentioned here are the notions of consequences and utilities, both of which are germane to risk analysis and risk management. Part 1 can serve as a supplement to the "Lexicon of Probabilistic Risk Assessment Terms" given in Appendix A of this report.

In Part 2 we put forth some thoughts and ideas for enhancing PRA (probabilistic risk analysis) with some statistical and decision-theoretic methodologies that are available in the literature and which could be advantageously invoked. We close this section by alluding to the possibility of some new research in PRA, namely, the development of an architecture for adversarial risk analysis and decision making in vague (or fuzzy) environments.

It is our hope that this appendix will fill in any gaps of interpretation of the Lexicon that is given in the text, so that this appendix and the Lexicon of Appendix A are linked. To better facilitate a broad-based appreciation of the material presented here, this appendix has been deliberately cast in a conversational style; that is, mathematical notation has been avoided.

PART 1. APPROACHES TO QUANTIFYING UNCERTAINTY

Introduction

From a layperson's point of view, the term "risk" connotes the possibility that an undesirable event will occur. However, the modern technical meaning of the term is different. Here, risk is the sum of the product of one's personal probabilities (or the objective chances) of all possible outcomes (also known as consequences) of an action, and the utilities of each outcome. Probabilities and chances are ways to quantify uncertainty (i.e., the possibility mentioned above), and quantification is a necessary step for invoking the logical argument. Utilities are numerical values of the consequences of each outcome, on a zero-to-one scale. Indeed, utilities are probabilities and must therefore obey the rules (or the calculus) of probability (cf. Lindley, 1985, p. 56). They quantify one's preferences between consequences. Thus the modern notion of risk entails the twin notions of probability (or chance) and utility. Its computation via the sum-of-products rule mentioned above (cf. Morgeson et al. [2006] for a detailed application of this principle to terrorist risk assessment) is a consequence of the calculus of probability. The quantification of uncertainty by probability is, according to de Finetti (1972) and Lindley (1982), the only satisfactory way. Alternatives to probability, like Zadeh's (1979) possibility, do not lead to a prescription for the quantification of risk; this is one of their biggest drawbacks.

Chance and Probability: Metrics for Quantifying Uncertainty

The use of probability as a metric for quantifying uncertainty dates back to the 16th century. However, discussions about its meaning and interpretation continue to this day. The distinction between chance and probability (cf. Good, 1990) is a consequence of such debates and discussions. In his review article, Kolmogorov (1969) wholeheartedly subscribes to probability as an objective chance that is agreed upon by all, even though it can never be observed. It is defined as the limit of a relative frequency, the operational word being "limit." To Kolmogorov, chance and probability were synonymous, and thus the word chance does not appear in his writings. To de Finetti (1976) and others, like Savage (1972), probability is subjective and personal, and encapsulates one's disposition to a two-sided bet.
De Finetti (1972) goes further by connecting chance and probability via his theorem on exchangeable sequences, with the thesis that probability is to be seen as a two-sided bet about the unknown chance. The algebra (or the calculus) of probability is subscribed to by all (save the axiom of countable additivity, which to de Finetti is unnecessary). Whereas an unobservable chance can be estimated via observed data (if available), probability can be made operational by monitoring one's disposition to a series of bets. One needs to monitor a series of bets to ensure that the bettor adheres to the calculus of probability; i.e., the bettor needs to be coherent.

Likelihood: A Weighting Function

The term likelihood has often been used as a substitute for chance and probability. However, the technical meaning of the term is different. Indeed, it can be seen that a likelihood is not a probability (or chance), and that a likelihood does not obey the calculus of probability. The notion of a likelihood arises in the context of making assessments of uncertainty in the light of new evidence (or data) using Bayes' Law. The likelihood is simply a weighting function that can be assigned either subjectively or via a probability model. The matter is subtle and warrants a detailed discussion that cannot be given here; we refer the reader to Singpurwalla (2006), Section 2.4.3, or to Singpurwalla (2007) for a more complete picture. The essence of this subsection is that, like chance and probability, the likelihood is, from a technical point of view, a distinct construct. Thus, caution should be exercised when it is used alongside the first two.

Probabilistic Risk Analysis

Probabilistic risk analysis—henceforth PRA—is a systematic way to assess and to invoke the calculus of probability. Its origins can be traced to the work done at Bell Telephone Laboratories on the launching of missiles (cf. Watson, 1961), and to the work done at the Boeing Scientific Laboratories on assessing the reliability of airplanes (cf. Hassl, 1965). The prominence of PRA grew with the dawning of the nuclear reactor era, when it became the dominant tool for assessing the safety of nuclear reactors (cf. Barlow et al., 1975). The driving tools behind a PRA are the event trees and fault trees, which are a graphical portrayal of the causes that lead up (or down) to an event of interest. At the terminus of such trees are the causes that trigger the event of interest; such causes are called the basic events of the trees. PRA is attractive to engineers and other scientists because of its inherent graphic feature, just as Bayesian Belief Nets (BBNs) are attractive to computer scientists. When all is said and done, both the PRA and the BBN are simply tools for assessing probabilities and invoking the calculus of probability. They are devices for good book-keeping practices in probability calculations.

The distinction between chance and probability is germane to PRA, because each leads to a different paradigm for assessing risk. The former leads to the frequentist (or sample-theoretic) approach, the latter to the subjectivistic Bayesian approach. Under the frequentist approach, PRA can only be done when hard data on the basic events are at hand, and preferably a substantial amount. Such data could be easy to come by when one deals with conceptually repeatable events, like failures in a population of items such as valves, electronics, and other such small gadgets. PRA under the frequentist paradigm is most suitable for engineered systems like airplanes, automobiles, tanks, and nuclear reactors. By contrast, under the Bayesian approach to PRA, probabilities of the basic events need to be subjectively obtained via the elicitation, codification, modulation, and fusion of expert testimonies (see, for example, Singpurwalla, 2006, Chapter 5). Because terrorist-risk-related events are not considered to be repeatable (to constitute an ensemble), PRA under the subjectivistic Bayesian paradigm appears to be relevant, not only in the contexts of biological agent risk analysis and other modes of terrorist risk (cf. Morgeson et al., 2006), but also for human health risk assessment from environmental hazards (cf. Nayak and Kundu, 2001, who also allude to a distinction between chance and probability vis-à-vis "variability" and "uncertainty").

The Dynamic Nature of Subjective Probability

With the above in place, some caveats about subjective probabilities and their assessments need to be stated. Unlike chance—an objective entity that is fixed for all time and agreed upon by all—subjective probability is personal to an individual (or a group acting as one), and can change from person to person. More important, it can change over time even for the same person. In other words, subjective probability is dynamic. It is assessed at some fixed point in time, and the assessment is presumably based on the information at hand at that fixed point in time. As time marches on, new information could become available, and with it a possible change of probability. The position that subjective probability is dynamic takes a more dramatic stand via the claim that it is not merely the availability of new information over time that brings about a change in probability. A change in probability could also be the result of a change in the psychological disposition of the individual whose betting behavior is assessed (cf. Ramsey, 1926). It is because of the above caveats that de Finetti (1974), in the introduction of his famous two-volume book on probability, declares that "Probability Does Not Exist."

Alternatives to Chance and Probability

One, among the several, of Kolmogorov's (1933) notable achievements was that he freed probability from the debates and discussions of interpretation. He did this by axiomatizing probability.

(The call to axiomatize probability can be traced to the German mathematician David Hilbert, Kolmogorov's dissertation supervisor, and to Sergei N. Bernstein.) However, in order to axiomatize probability, Kolmogorov had to introduce an architecture, and it is aspects of this architecture that have paved the way for an entrance of alternatives to probability.

The mathematical architecture upon which the axiomatization of probability rests consists of a sample space (i.e., the set of all possible outcomes of a random phenomenon), and a many-to-one mapping (or a function) from the sample space to the real line. The mapping is known as a random variable. Probability is another mapping, defined on the subsets of the sample space. It takes values between 0 and 1, and it abides by the addition and multiplication rules of probability. Kolmogorov's architecture subscribes to the law of the excluded middle. The essence of this law is that every element of the sample space can either belong, or not belong, to a particular subset of the sample space. In other words, no element of the sample space can simultaneously belong and not belong to any subset of the sample space. This happens when the subsets are sharp; that is, their boundaries are well defined.

Objections to Kolmogorov's architecture stem from two directions. The first is that in practice, especially when it comes to linguistic information, the law of the excluded middle turns out to be a restriction. In other words, requiring that subsets of the sample space have sharp boundaries is restrictive. One needs to entertain the possibility that the boundaries of the said subsets could be vague or fuzzy. The second objection pertains to the requirement that the mapping from the sample space to the real line be many to one. In practice, scenarios can arise wherein the said mapping needs to be one to many. Such scenarios can generally arise in the context of forensics, accident investigation, or failure diagnosis.

The need to entertain fuzzy sets has led Zadeh (1979) to propose an alternative to probability, namely, possibility theory. The calculus of possibility theory is different from that of probability theory; it parallels that of operations with fuzzy sets. Thus fuzzy set theory and possibility theory are often mentioned in the same vein. Regrettably, and despite Zadeh's persistent efforts, there has been no justification of the calculus of possibility theory. By contrast, the axioms of probability theory—the Kolmogorov axioms—have a foundation that is rooted in behavioristic phenomena. As a consequence, possibility theory has failed to provide a prescription for calculating risk. More important, it has recently been argued (cf. Singpurwalla and Booker, 2004) that it is possible to endow fuzzy sets with probability measures. This has made the role of possibility theory unnecessary.

The need to entertain scenarios involving one-to-many mappings has motivated Dempster (1968) to propose a generalization of probability measures, which he calls belief and plausibility; some details about these can be had from Singpurwalla and Wilson (2007) and the references therein. The net effect of these measures is that probability, instead of being a single number, is bounded above and below by what are known as upper and lower probabilities (also see Walley, 1991). A proposal for decision making based on upper and lower probabilities has been made by Giron and Rios (1980). Whereas this proposal lacks the force of coherence that decision making based on probabilities has, it may serve as a basis for risk analysis based on belief and plausibility. This possibility remains to be explored.

PART 2. ENHANCING PRA WITH BEST PRACTICES

The material of this part is linked with that of Part 1, wherein it was stated that probability and utility are the two components of risk analysis, and that PRA is a tool to facilitate the assessment of probabilities of certain events, using the calculus of probability. A prescription for computing risk was also given, and it was stated that in the context of biological agent risk analysis, PRA under the subjectivistic Bayesian paradigm would be the desired approach. The dynamic nature of subjective probability was mentioned, and the need to ensure coherence of elicited probabilities was emphasized. The prescription for calculating risk as the sum of the product of probabilities and utilities is a consequence of the calculus of probability and of the fact that utilities are probabilities.

In the context of managing risk, one chooses that action for which the calculated risk is a minimum. This prescription for taking actions constitutes the basis of decision making under uncertainty (cf. Raiffa and Schlaifer, 1961), wherein decision trees play a role analogous to that of fault and event trees. That is, decision trees facilitate good book-keeping in the context of making decisions. Decision theorists are attracted to decision trees for the same reason that engineers like fault trees, event trees, and PRA; graphics is the virtue of both. The important point to note is that, generally, decision trees pertain to the flow of actions and events that are of relevance to a single decision maker. With the above as a perspective, the following enhancements to the current methods of using PRA for risk analysis and management can be suggested.

1. The elicited subjective probabilities should be tested to ensure coherence via more than a single query of the "expert."

2. The assessed subjective probabilities should be modulated to make adjustments for any inherent biases that the experts may have.

3. When the assessed subjective probabilities entail more than one expert—and this on principle should always be attempted—the expert testimonies should be fused in a manner that accounts for the correlations (positive or negative) among the experts.
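The coherence testing called for in item 1 can be made concrete with a short sketch. The elicited numbers below are hypothetical, and the two checks shown, that complementary events have probabilities summing to one and that a conjunction is no more probable than either conjunct, are only the simplest consequences of the calculus of probability; the full elicitation and modulation apparatus is described in Singpurwalla (2006, Chapter 5).

```python
# Sketch: testing an expert's elicited probabilities for coherence.
# The elicited values are hypothetical; the two checks are elementary
# consequences of the calculus of probability.

TOL = 1e-9  # tolerance for comparing elicited numbers

def incoherencies(p_a, p_not_a, p_b, p_a_and_b):
    """Return a list of the coherence requirements that are violated."""
    violations = []
    # Complementary events must have probabilities summing to one.
    if abs((p_a + p_not_a) - 1.0) > TOL:
        violations.append("P(A) + P(not A) != 1")
    # A conjunction can be no more probable than either conjunct.
    if p_a_and_b > min(p_a, p_b) + TOL:
        violations.append("P(A and B) > min(P(A), P(B))")
    return violations

# First query of the expert: the answers cohere.
assert incoherencies(0.30, 0.70, 0.50, 0.20) == []

# A second query of the same expert, about the same events: these
# answers violate both requirements, and should send the analyst
# back to the expert.
print(incoherencies(0.30, 0.60, 0.50, 0.40))
```

In practice many more such queries, and many more consequences of the calculus, would be checked; the point is only that coherence is a testable property of a set of elicited numbers.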

Steps 2 and 3 above should be done formally via the calculus of probability. Details about how this can be done are given in Singpurwalla (2006, Chapter 5), wherein references to the original sources can be found. Some researchers (Cooke, 1991) argue strongly in favor of calibrating probabilities against empirical data as an alternative to modulation. The author disagrees: proper Bayesian methods for modulating assessed probabilities are available. Philosophical issues aside, the calibration method suggested by Cooke requires empirical data; in the absence of such data, modulating the assessed probabilities based on one's assessment of the expertise of the experts is a desirable option.

4. To many, the routine use of subjective probabilities and their accompanying paraphernalia of Bayesian methods in the context of PRA is objectionable; see, for example, Nayak and Kundu (2001). This is particularly acute when it comes to matters of public policy, wherein some sense of objectivity becomes paramount. Thus, whenever hard data on the basic events are available, frequentist methods should also be used, for no other reason than as a means of calibrating the Bayesian results.

5. Risk calculations based on subjective probabilities and Bayesian methods should be investigated for their robustness and sensitivity against the priors and the coding, modulating, and fusing mechanisms.

6. Much of the current work in PRA uses stylized metrics, such as dollars or lives lost, for utilities. Statisticians routinely use the squared error or the absolute error as the metrics of utility. Such metrics, while easy to implement, may not reflect the true preferences of a decision maker. Thus formal methods of utility elicitation, as prescribed in the von Neumann and Morgenstern (1944) interpretation of utility, should be considered. Endowing a PRA with utilities that are formally elicited will be a major step forward. This seems to be lacking.

7. In the context of terrorist risk assessment, be it biological or otherwise, the layered defense and attack concepts used in military science could be valuable; an inkling of these appears in Morgeson et al. (2006). Under a layered defense, the probability of penetration goes down with the number of layers, resulting in a lower probability of a successful attack on an asset. The effect of all this would be an expansion of the event and fault trees and the assessment of several conditional probabilities.

8. Even though alternatives to probability have often been mentioned in the context of a PRA, there do not seem to be at hand concrete examples and illustrations demonstrating the viability of such alternatives. A possible reason behind this state of affairs could be the lack of awareness about the availability of some tools that are able to deal with decision making in a fuzzy environment, and in the presence of a one-to-many map. Singpurwalla and Booker (2004) and Giron and Rios (1980) allude to such tools. These tools, albeit unproven, offer a pathway toward enhancing the current PRA technology, and are worth attempting given the repeated calls for PRA under alternatives to probability.

It was mentioned before that the traditional decision trees, which provide a prescription for action to mitigate the possibility of an adverse outcome, are pertinent to a single decision maker. More important, the decision maker's opponent is considered to be nature, a benevolent adversary. The same is also true of fault trees and event trees, the staple tools of a PRA. Game theory comes into play when the adversary is not benevolent, like a terrorist. When such is the case, the static decision, fault, and event trees need to be enhanced to incorporate adversarial behavior. Thus the graphics and the underlying mathematics of a PRA need to be modified so that they encapsulate adversarial actions.
FIGURE G.1 Non-adversarial decision tree of D1: decision node D1, decision d1, random node R, outcome O, and Consequence (d1, O).

FIGURE G.2 Adversarial decision tree of D1: decision nodes D1 and D2, decisions d1 and d2, random node R, outcome O, and Consequence (d1, d2, O).
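A tree of the kind shown in Figure G.1 can be evaluated numerically. In the minimal sketch below, the decisions, outcome probabilities, and utilities are hypothetical; following the prescription of Part 2, the risk of each decision is the sum of products of outcome probabilities and utilities (scored here so that they grade adverse consequences), and the decision of minimum risk is taken.

```python
# Sketch: evaluating a Figure G.1-style decision tree. The decisions,
# outcome probabilities, and utilities below are hypothetical; utilities
# score adverse consequences on a [0, 1] scale, so smaller risk is better.

def risk(outcomes):
    """Risk of a decision: sum over outcomes of probability x utility."""
    return sum(p * u for p, u in outcomes)

# Each decision leads, via the random node R, to (probability, utility)
# pairs for its possible outcomes.
decisions = {
    "d1": [(0.7, 0.1), (0.3, 0.9)],  # risk = 0.7*0.1 + 0.3*0.9 = 0.34
    "d2": [(0.5, 0.2), (0.5, 0.4)],  # risk = 0.5*0.2 + 0.5*0.4 = 0.30
}

best = min(decisions, key=lambda d: risk(decisions[d]))
print(best)  # d2 carries the smaller risk
```

An adversarial tree of the Figure G.2 kind would nest such an evaluation: D1 would assess, for each of his or her decisions, probabilities over D2's responses, and fold the resulting conditional risks back to the root.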

However, doing so under the umbrella of standard game theory would be problematic because of the matter of infinite regress (see, for example, von Neumann and Morgenstern, 1944). A possible compromise would be to consider the use of an adversarial decision tree. An adversarial decision tree (cf. Lindley and Singpurwalla, 1991, 1993) portrays the schemata of adversarial decision making when the actions of each adversary are sequential. The layered attack and defense scenario mentioned above would serve as a suitable model that calls for adversarial event, fault, and decision trees. Since the adversarial actions change over time, the underlying probabilities will need to be reassessed over time, and the dynamic nature of subjective probability allows for this constant reassessment.

To get some sense of what an adversarial decision tree would look like, consider Figures G.1 and G.2. The former has a single decision node, D1, wherein D1 encapsulates the actions of D1, a single decision maker. Figure G.1 portrays the scenario of non-adversarial decision making. By contrast, Figure G.2, which consists of two decision nodes, D1 and D2, portrays the contemplated sequential actions of two decision makers, D1 and his/her adversary D2. The latter will supposedly (to D1) act in the light of the actions of D1 and their possible consequences. However, the decision tree itself pertains to the actions that D1 should take, taking into consideration the possible actions of D2. The overall aim is for D1 to maximize his/her expected utility. Figure G.2 can be extended to cover the repeated actions of D1 and D2 over several cycles. However, the total number of cycles must be finite, or else the matter of infinite regress will begin to creep back. The decision nodes Di, the decisions di, i = 1, 2, and the random node R of Figures G.1 and G.2 are conventional (see, for example, Raiffa and Schlaifer, 1961).

REFERENCES

Barlow, R.E., H.B. Fussell, and N.D. Singpurwalla (eds.). 1975. Reliability and Fault Tree Analysis: Theoretical and Applied Aspects of System Reliability and Safety Assessment. Philadelphia, Pa.: SIAM.
Cooke, R.M. 1991. Experts in Uncertainty: Opinion and Subjective Probability in Science. New York: Oxford University Press.
de Finetti, B. 1972. Probability, Induction and Statistics. New York: Wiley.
de Finetti, B. 1974. Theory of Probability. New York: Wiley.
de Finetti, B. 1976. "Probability: Beware of Falsification!" Scientia 111:283-303.
Dempster, A.P. 1968. "A Generalization of Bayesian Inference." Journal of the Royal Statistical Society, Series B 30:205-247.
Giron, F., and S. Rios. 1980. "Quasi-Bayesian Behavior: A More Realistic Approach to Decision Making?" In J.M. Bernardo, M.H. De Groot, D.V. Lindley, and A.F.M. Smith (eds.), Bayesian Statistics. Valencia: University of Valencia Press.
Good, I.J. 1990. "Subjective Probability." In J. Eatwell, M. Milgate, and P. Newman (eds.), The New Palgrave: Utility and Probability. New York: Norton.
Hassl, D. 1965. "Advanced Concepts in Fault Tree Analysis." Paper presented at the System Safety Symposium, sponsored by the University of Washington and the Boeing Company, Seattle, Washington.
Kolmogorov, A.N. 1933. Foundations of the Theory of Probability. New York: Chelsea Publishing.
Kolmogorov, A.N. 1969. "The Theory of Probability." Pp. 229-264 in A.D. Aleksandrov, A.N. Kolmogorov, and M.A. Lavrentev (eds.), Mathematics: Its Content, Methods and Meaning, Vol. 2, Part 3. Cambridge, Mass.: MIT Press.
Lindley, D.V. 1982. "Scoring Rules and the Inevitability of Probability." International Statistical Review 50(1):1-26.
Lindley, D.V. 1985. Making Decisions, 2nd ed. New York: Wiley.
Lindley, D.V., and N.D. Singpurwalla. 1991. "On the Evidence Needed to Reach Agreed Action Between Adversaries, with Application to Acceptance Sampling." Journal of the American Statistical Association 86(416):933-937.
Lindley, D.V., and N.D. Singpurwalla. 1993. "Adversarial Life Testing." Journal of the Royal Statistical Society, Series B 55(4):837-847.
Morgeson, J.D., V.A. Utgoff, M.A. Fainberg, and M. Keleher. 2006. "National Risk Assessment Pilot Project." Document D-3309. Arlington, Va.: Institute for Defense Analyses.
Nayak, T.K., and S. Kundu. 2001. "Calculating and Describing the Uncertainty in Risk Assessment: The Bayesian Approach." Human and Ecological Risk Assessment 7(2):307-328.
Raiffa, H., and R. Schlaifer. 1961. Applied Statistical Decision Theory. Boston: Harvard University, Graduate School of Business Administration, Division of Research.
Ramsey, F.P. 1926. "Truth and Probability." In Foundations of Mathematics and Other Logical Essays. New York: Humanities Press.
Savage, L.J. 1972. The Foundations of Statistics, 2nd ed. New York: Dover.
Singpurwalla, N.D. 2006. Reliability and Risk: A Bayesian Perspective. Hoboken, N.J.: Wiley.
Singpurwalla, N.D. 2007. "Betting on Residual Life: The Caveats of Conditioning." Statistics and Probability Letters 77(12):1354-1361.
Singpurwalla, N.D., and J. Booker. 2004. "Membership Functions and Probability Measures of Fuzzy Sets." Journal of the American Statistical Association 99:867-877.
Singpurwalla, N.D., and A. Wilson. 2007. "Probability, Chance, and the Probability of Chance." IIE Transactions. Forthcoming.
von Neumann, J., and O. Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, N.J.: Princeton University Press.
Walley, P. 1991. Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.
Watson, H.A. 1961. Launch Control Safety Study, Section VII, Vol. 1. Murray Hill, N.J.: Bell Laboratories.
Zadeh, L. 1979. "Possibility Theory and Soft Data Analysis." Memorandum UCB/ERL M79/66. Berkeley, Calif.: University of California.