Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change

Appendix G

On the Quantification of Uncertainty and Enhancing Probabilistic Risk Analysis

Nozer D. Singpurwalla
Professor, Department of Statistics, George Washington University, Washington, D.C.

PREAMBLE

This appendix consists of two parts. In Part 1, we overview some commonly used approaches for quantifying uncertainty. The overview is necessarily terse, but adequate references are provided. Herein we introduce the notions of chance, probability, likelihood, belief, and plausibility, terms that commonly arise in the context of risk analysis. Also mentioned here are the notions of consequences and utilities, both of which are germane to risk analysis and risk management. Part 1 can serve as a supplement to the "Lexicon of Probabilistic Risk Assessment Terms" given in Appendix A of this report. In Part 2 we put forth some thoughts and ideas for enhancing PRA (probabilistic risk analysis) with some statistical and decision-theoretic methodologies that are available in the literature and that could be advantageously invoked. We close by alluding to the possibility of some new research in PRA, namely, the development of an architecture for adversarial risk analysis and decision making in vague (or fuzzy) environments. It is our hope that this appendix will fill in any gaps of interpretation of the Lexicon given in the text, so that this appendix and the Lexicon of Appendix A are linked. To better facilitate a broad-based appreciation of the material presented here, this appendix has been deliberately cast in a conversational style; that is, mathematical notation has been avoided.

PART 1. APPROACHES TO QUANTIFYING UNCERTAINTY

Introduction

From a layperson's point of view, the term "risk" connotes the possibility that an undesirable event will occur. However, the modern technical meaning of the term is different.
Here, risk is the sum, over all possible outcomes (also known as consequences) of an action, of the product of one's personal probability (or the objective chance) of each outcome and the utility of that outcome. Probabilities and chances are ways to quantify uncertainty (i.e., the possibility mentioned above), and quantification is a necessary step for invoking the logical argument. Utilities are numerical values of the consequences of each outcome, on a zero-to-one scale. Indeed, utilities are probabilities and must therefore obey the rules (or the calculus) of probability (cf. Lindley, 1985, p. 56). They quantify one's preferences between consequences. Thus the modern notion of risk entails the twin notions of probability (or chance) and utility. Its computation via the sum-of-products rule mentioned above (cf. Morgeson et al., 2006, for a detailed application of this principle to terrorist risk assessment) is a consequence of the calculus of probability. The quantification of uncertainty by probability is, according to de Finetti (1972) and Lindley (1982), the only satisfactory way. Alternatives to probability, like Zadeh's (1979) possibility, do not lead to a prescription for the quantification of risk; this is one of their biggest drawbacks.

Chance and Probability: Metrics for Quantifying Uncertainty

The use of probability as a metric for quantifying uncertainty dates back to the 16th century. However, discussions about its meaning and interpretation continue to this day. The distinction between chance and probability (cf. Good, 1990) is a consequence of such debates and discussions. In his review article, Kolmogorov (1969) wholeheartedly subscribes to probability as an objective chance that is agreed upon by all, even though it can never be observed. It is defined as the limit of a relative frequency, the operative word being "limit." To Kolmogorov, chance and probability were synonymous, and thus the word chance does not appear in his writings.
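The sum-of-products rule described in the Introduction — multiply each outcome's probability by its utility, sum over outcomes, and choose an action accordingly — can be sketched as follows. This is only an illustrative sketch: the actions, outcomes, probabilities, and 0-to-1 utilities are hypothetical, and the action maximizing expected utility is selected (equivalently, as Part 2 notes, one minimizes the calculated risk).

```python
# Sketch of the sum-of-products rule: for each action, sum probability x utility
# over its possible outcomes. All names and numbers below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    total_p = sum(p for p, _ in outcomes)
    # Coherence requires the probabilities of all outcomes to sum to one.
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to one"
    return sum(p * u for p, u in outcomes)

# Hypothetical decision problem: each action has possible consequences,
# each with a personal probability and an elicited utility in [0, 1].
actions = {
    "harden_facility": [(0.9, 0.8), (0.1, 0.1)],
    "do_nothing":      [(0.6, 1.0), (0.4, 0.0)],
}

scores = {a: expected_utility(o) for a, o in actions.items()}
best = max(scores, key=scores.get)  # action with maximum expected utility
```

With the hypothetical numbers above, "harden_facility" scores 0.73 against 0.6 for "do_nothing", so it would be chosen.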
To de Finetti (1976) and others, like Savage (1972), probability is subjective and personal, and encapsulates one's disposition toward a two-sided bet. De Finetti (1972) goes further by connecting chance and probability via his theorem on exchangeable sequences, with the thesis that probability is to be seen as a two-sided bet about the unknown chance. The algebra (or the calculus) of probability is subscribed to by all (save the axiom of countable additivity, which to de Finetti is unnecessary). Whereas an unobservable chance can be estimated via observed data (if available), probability can be made operational by monitoring one's disposition toward a series of bets. One needs to monitor a series of bets to ensure that the bettor adheres to the calculus of probability; i.e., the bettor needs to be coherent.

Likelihood: A Weighting Function

The term likelihood has often been used as a substitute for chance and probability. However, the technical meaning of the term is different. Indeed, a likelihood is not a probability (or chance), and a likelihood does not obey the calculus of probability. The notion of a likelihood arises in the context of making assessments of uncertainty in the light of new evidence (or data) using Bayes' Law. The likelihood is simply a weighting function that can be assigned either subjectively or via a probability model. The matter is subtle and warrants a detailed discussion that cannot be given here; we refer the reader to Singpurwalla (2006), Section 2.4.3, or to Singpurwalla (2007) for a more complete picture. The essence of this subsection is that, like chance and probability, the likelihood is, from a technical point of view, a distinct construct. Thus, caution should be exercised when it is used alongside the first two.

Probabilistic Risk Analysis

Probabilistic risk analysis—henceforth PRA—is a systematic way to assess probabilities and to invoke the calculus of probability. Its origins can be traced to the work done at Bell Telephone Laboratories on the launching of missiles (cf.
Watson, 1961), and to the work done at the Boeing Scientific Laboratories on assessing the reliability of airplanes (cf. Hassl, 1965). The prominence of PRA grew with the dawning of the nuclear reactor era, when it became the dominant tool for assessing the safety of nuclear reactors (cf. Barlow et al., 1975). The driving tools behind a PRA are event trees and fault trees, which are graphical portrayals of the causes that lead up (or down) to an event of interest. At the terminus of such trees are the causes that trigger the event of interest; these are called the basic events of the trees. PRA is attractive to engineers and other scientists because of its inherent graphical feature, just as Bayesian belief nets (BBNs) are attractive to computer scientists. When all is said and done, both the PRA and the BBN are simply tools for assessing probabilities and invoking the calculus of probability; they are devices for good bookkeeping in probability calculations. The distinction between chance and probability is germane to PRA, because each leads to a different paradigm for assessing risk. The former leads to the frequentist (or sample-theoretic) approach, the latter to the subjectivistic Bayesian approach. Under the frequentist approach, PRA can be done only when hard data on the basic events are at hand, and preferably a substantial amount. Such data could be easy to come by when one deals with conceptually repeatable events, like failures in a population of items such as valves, electronics, and other small gadgets. PRA under the frequentist paradigm is thus most suitable for engineered systems like airplanes, automobiles, tanks, and nuclear reactors. By contrast, under the Bayesian approach to PRA, probabilities of the basic events need to be subjectively obtained via the elicitation, codification, modulation, and fusion of expert testimonies (see, for example, Singpurwalla, 2006, Chapter 5).
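As a minimal illustration of the probability bookkeeping that fault trees support, the probability of the top event can be propagated from the probabilities of the basic events through AND and OR gates. The sketch below assumes the basic events are mutually independent; the tree structure and the numbers are hypothetical, not drawn from the report.

```python
# Minimal fault-tree evaluation sketch, assuming independent basic events.
# Event names and probabilities are hypothetical.

def p_and(probs):
    # AND gate: the output event occurs only if every input event occurs.
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(probs):
    # OR gate: the output event occurs if at least one input event occurs,
    # i.e., the complement of "no input event occurs."
    none = 1.0
    for p in probs:
        none *= (1.0 - p)
    return 1.0 - none

# Hypothetical tree: top event = (valve fails OR sensor fails) AND power lost.
basic = {"valve": 0.01, "sensor": 0.02, "power": 0.05}
top = p_and([p_or([basic["valve"], basic["sensor"]]), basic["power"]])
```

Here the Bayesian and frequentist paradigms differ only in how the basic-event probabilities are obtained (elicited subjectively versus estimated from hard data); the gate arithmetic is the same calculus of probability in either case.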
Because terrorist-risk-related events are not considered repeatable (i.e., they do not constitute an ensemble), PRA under the subjectivistic Bayesian paradigm appears to be relevant, not only in the contexts of biological agent risk analysis and other modes of terrorist risk (cf. Morgeson et al., 2006) but also for human health risk assessment from environmental hazards (cf. Nayak and Kundu, 2001, who also allude to a distinction between chance and probability vis-à-vis "variability" and "uncertainty").

The Dynamic Nature of Subjective Probability

With the above in place, some caveats about subjective probabilities and their assessment need to be stated. Unlike chance—an objective entity that is fixed for all time and agreed upon by all—subjective probability is personal to an individual (or a group acting as one) and can change from person to person. More important, it can change over time even for the same person. In other words, subjective probability is dynamic. It is assessed at some fixed point in time, and the assessment is presumably based on the information at hand at that point. As time marches on, new information could become available, and with it a possible change of probability. The position that subjective probability is dynamic takes a more dramatic stand via the claim that it is not merely the availability of new information over time that brings about a change in probability: a change in probability could also be the result of a change in the psychological disposition of the individual whose betting behavior is assessed (cf. Ramsey, 1926). It is because of the above caveats that de Finetti (1974), in the introduction of his famous two-volume book on probability, declares that "Probability Does Not Exist."

Alternatives to Chance and Probability

One of Kolmogorov's (1933) several notable achievements was that he freed probability from the debates and discussions of interpretation. He did this by axiomatizing probability.
(The call to axiomatize probability can be traced to the German mathematician David Hilbert and to Sergei N. Bernstein.) However, in order to axiomatize probability, Kolmogorov had to introduce an architecture, and it is aspects of this architecture that have paved the way for the entrance of alternatives to probability. The mathematical architecture upon which the axiomatization of probability rests consists of a sample space (i.e., the set of all possible outcomes of a random phenomenon) and a many-to-one mapping (or function) from the sample space to the real line. The mapping is known as a random variable. Probability is another mapping, defined on the subsets of the sample space; it takes values between 0 and 1, and it abides by the addition and multiplication rules of probability. Kolmogorov's architecture subscribes to the law of the excluded middle. The essence of this law is that every element of the sample space either belongs, or does not belong, to a particular subset of the sample space. In other words, no element of the sample space can simultaneously belong and not belong to any subset of the sample space. This holds when the subsets are sharp; that is, when their boundaries are well defined. Objections to Kolmogorov's architecture stem from two directions. The first is that in practice, especially when it comes to linguistic information, the law of the excluded middle turns out to be a restriction. In other words, requiring that subsets of the sample space have sharp boundaries is restrictive; one needs to entertain the possibility that the boundaries of the said subsets could be vague or fuzzy. The second objection pertains to the requirement that the mapping from the sample space to the real line be many to one. In practice, scenarios can arise wherein the said mapping needs to be one to many.
Such scenarios generally arise in the context of forensics, accident investigation, or failure diagnosis. The need to entertain fuzzy sets led Zadeh (1979) to propose an alternative to probability, namely, possibility theory. The calculus of possibility theory is different from that of probability theory; it parallels that of operations with fuzzy sets, and thus fuzzy set theory and possibility theory are often mentioned in the same vein. Regrettably, and despite Zadeh's persistent efforts, there has been no justification of the calculus of possibility theory. By contrast, the axioms of probability theory—the Kolmogorov axioms—have a foundation that is rooted in behavioristic phenomena. As a consequence, possibility theory has failed to provide a prescription for calculating risk. More important, it has recently been argued (cf. Singpurwalla and Booker, 2004) that it is possible to endow fuzzy sets with probability measures, which makes the role of possibility theory unnecessary. The need to entertain scenarios involving one-to-many mappings motivated Dempster (1968) to propose a generalization of probability measures, which he calls belief and plausibility; some details about these can be found in Singpurwalla and Wilson (2007) and the references therein. The net effect of these measures is that probability, instead of being a single number, is bounded above and below by what are known as upper and lower probabilities (see also Walley, 1991). A proposal for decision making based on upper and lower probabilities has been made by Giron and Rios (1980). Whereas this proposal lacks the force of coherence that decision making based on probabilities has, it may serve as a basis for risk analysis based on belief and plausibility. This possibility remains to be explored.

PART 2. ENHANCING PRA WITH BEST PRACTICES

The material of this part is linked with that of Part 1, wherein it was stated that probability and utility are the two components of risk analysis, and that PRA is a tool to facilitate the assessment of probabilities of certain events using the calculus of probability. A prescription for computing risk was also given, and it was stated that in the context of biological agent risk analysis, PRA under the subjectivistic Bayesian paradigm would be the desired approach. The dynamic nature of subjective probability was mentioned, and the need to ensure coherence of elicited probabilities was emphasized. The prescription for calculating risk as the sum of the products of probabilities and utilities is a consequence of the calculus of probability and of the fact that utilities are probabilities. In the context of managing risk, one chooses that action for which the calculated risk is a minimum. This prescription for taking actions constitutes the basis of decision making under uncertainty (cf. Raiffa and Schlaifer, 1961), wherein decision trees play a role analogous to that of fault and event trees; that is, decision trees facilitate good bookkeeping in the context of making decisions. Decision theorists are attracted to decision trees for the same reason that engineers like fault trees, event trees, and PRA; graphics is the virtue of both. The important point to note is that, generally, decision trees pertain to the flow of actions and events that are of relevance to a single decision maker. With the above as a perspective, the following enhancements to the current methods of using PRA for risk analysis and management can be suggested:

1. The elicited subjective probabilities should be tested to ensure coherence via more than a single query of the "expert."

2. The assessed subjective probabilities should be modulated to make adjustments for any inherent biases that the experts may have.
3. When the assessed subjective probabilities entail more than one expert—and this in principle should always be attempted—the expert testimonies should be fused in a manner that accounts for the correlations (positive or negative) among the experts.

Steps 2 and 3 above should be done formally via the calculus of probability. Details about how this can be done are given in Singpurwalla (2006, Chapter 5), wherein references to the original sources can be found. Some researchers (Cooke, 1991) argue strongly in favor of calibrating probabilities against empirical data as an alternative to modulation. The author disagrees with the implication that proper Bayesian methods for modulating assessed probabilities are not available. Philosophical issues aside, the calibration method suggested by Cooke requires empirical data; in the absence of such data, modulating the assessed probabilities based on one's assessment of the expertise of the experts is a desirable option. To many, a routine use of subjective probabilities and their accompanying paraphernalia of Bayesian methods in the context of PRA is objectionable; see, for example, Nayak and Kundu (2001). This is particularly acute when it comes to matters of public policy, wherein some sense of objectivity becomes paramount. Thus, whenever hard data on the basic events are available, frequentist methods should also be used, if for no other reason than as a means of calibrating the Bayesian results. Risk calculations based on subjective probabilities and Bayesian methods should be investigated for their robustness and sensitivity against the priors and the coding, modulating, and fusing mechanisms. Much of the current work in PRA uses stylized metrics, such as dollars or lives lost, for utilities; statisticians routinely use squared error or absolute error as the metric of utility. Such metrics, while easy to implement, may not reflect the true preferences of a decision maker. Thus, formal methods of utility elicitation, as prescribed in the von Neumann and Morgenstern (1944) interpretation of utility, should be considered. Endowing a PRA with utilities that are formally elicited will be a major step forward; this seems to be lacking.
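Formal utility elicitation in the von Neumann-Morgenstern tradition is commonly carried out with reference lotteries: the utility of a consequence c is taken to be the probability p at which the decision maker is indifferent between receiving c for certain and a gamble that yields the best consequence with probability p and the worst with probability 1 - p. The sketch below is a hypothetical illustration of this scheme; the consequences and the elicited indifference probabilities are invented for the example.

```python
# Sketch of reference-lottery utility elicitation. u(c) is the indifference
# probability between "c for certain" and a best/worst gamble. The
# consequences and elicited numbers below are hypothetical.

consequences = ["worst", "minor_loss", "status_quo", "best"]

# Indifference probabilities elicited (hypothetically) from a decision maker.
indifference_p = {"worst": 0.0, "minor_loss": 0.35, "status_quo": 0.7, "best": 1.0}

def utility(c):
    # By construction u(worst) = 0 and u(best) = 1, and every utility lies in
    # [0, 1] -- consistent with the text's remark that utilities are
    # probabilities and obey the calculus of probability.
    return indifference_p[c]

# Consistency check: utilities should respect the stated preference ordering.
utils = [utility(c) for c in consequences]
assert utils == sorted(utils), "elicited utilities violate the preference order"
```

Unlike stylized metrics such as dollars lost or squared error, utilities elicited this way reflect the decision maker's actual preferences, including attitude toward risk.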
In the context of terrorist risk assessment, be it biological or otherwise, the layered defense and attack concepts used in military science could be valuable; an inkling of these appears in Morgeson et al. (2006). Under a layered defense, the probability of penetration goes down with the number of layers, resulting in a lower probability of a successful attack on an asset. The effect of all this would be an expansion of the event and fault trees and the assessment of several conditional probabilities. Even though alternatives to probability have often been mentioned in the context of a PRA, there do not seem to be at hand concrete examples and illustrations demonstrating the viability of such alternatives. A possible reason behind this state of affairs could be a lack of awareness about the availability of some tools that are able to deal with decision making in a fuzzy environment and in the presence of a one-to-many map. Singpurwalla and Booker (2004) and Giron and Rios (1980) allude to such tools. These tools, albeit unproven, offer a pathway toward enhancing the current PRA technology, and are worth attempting given the repeated calls for PRA under alternatives to probability. It was mentioned before that the traditional decision trees, which provide a prescription for action to mitigate the possibility of an adverse outcome, pertain to a single decision maker. More important, the decision maker's opponent is considered to be nature, a benevolent adversary. The same is also true of fault trees and event trees, the staple tools of a PRA. Game theory comes into play when the adversary is not benevolent, like a terrorist. When such is the case, the static decision, fault, and event trees need to be enhanced to incorporate adversarial behavior. Thus, the graphics and the underlying mathematics of a PRA need to be modified so that they encapsulate adversarial actions. However, doing so under the umbrella of standard game theory would be problematic because of the matter of infinite regress (see, for example, von Neumann and Morgenstern, 1944). A possible compromise would be to consider the use of an adversarial decision tree. An adversarial decision tree (cf. Lindley and Singpurwalla, 1991, 1993) portrays the schemata of adversarial decision making when the actions of each adversary are sequential. The layered attack and defense scenario mentioned above would serve as a suitable model that calls for adversarial event, fault, and decision trees. Since the adversarial actions change over time, the underlying probabilities will need to be reassessed over time; the dynamic nature of subjective probability allows for this constant reassessment. To get some sense of what an adversarial decision tree would look like, consider Figures G.1 and G.2. The former has a single decision node, D1, which encapsulates the actions of a single decision maker, D1; it portrays the scenario of non-adversarial decision making. By contrast, Figure G.2, which consists of two decision nodes, D1 and D2, portrays the contemplated sequential actions of two decision makers: D1 and his/her adversary, D2. The latter will supposedly (to D1) act in the light of the actions of D1 and their possible consequences. However, the decision tree itself pertains to the actions that D1 should take, taking into consideration the possible actions of D2. The overall aim is for D1 to maximize his/her expected utility. Figure G.2 can be extended to cover the repeated actions of D1 and D2 over several cycles; however, the total number of cycles must be finite, or else the matter of infinite regress will begin to creep back. The decision nodes Di and the decisions di, i = 1, 2, and the random node R of Figures G.1 and G.2 are conventional (see, for example, Raiffa and Schlaifer, 1961).

FIGURE G.1 Non-adversarial decision tree of D1.

FIGURE G.2 Adversarial decision tree of D1.
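The layered-defense effect mentioned above — the probability of a successful attack falling as layers are added — can be sketched numerically. The sketch assumes the layers act independently and that the attacker must penetrate every layer; the per-layer penetration probabilities are hypothetical.

```python
# Layered-defense sketch: if an attacker must penetrate every one of a set of
# independent layers, the probability of a successful attack is the product of
# the per-layer penetration probabilities, so it decreases as layers are
# added. The per-layer probabilities below are hypothetical.

def p_successful_attack(p_penetrate_each_layer):
    prob = 1.0
    for p in p_penetrate_each_layer:
        prob *= p
    return prob

one_layer = p_successful_attack([0.5])                 # one layer alone
three_layers = p_successful_attack([0.5, 0.4, 0.3])    # three layers in series
```

In a full PRA, each added layer would expand the event and fault trees, with the per-layer penetration probabilities assessed as conditional probabilities given penetration of the preceding layers rather than assumed independent, as here.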
REFERENCES

Barlow, R.E., J.B. Fussell, and N.D. Singpurwalla (eds.). 1975. Reliability and Fault Tree Analysis: Theoretical and Applied Aspects of System Reliability and Safety Assessment. Philadelphia, Pa.: SIAM.

Cooke, R.M. 1991. Experts in Uncertainty: Opinion and Subjective Probability in Science. New York: Oxford University Press.

de Finetti, B. 1972. Probability, Induction and Statistics. New York: Wiley.

de Finetti, B. 1974. Theory of Probability. New York: Wiley.

de Finetti, B. 1976. "Probability: Beware of Falsification!" Scientia 111:283-303.

Dempster, A.P. 1968. "A Generalization of Bayesian Inference." Journal of the Royal Statistical Society, Series B 30:205-247.

Giron, F., and S. Rios. 1980. "Quasi-Bayesian Behavior: A More Realistic Approach to Decision Making?" In J.M. Bernardo, M.H. DeGroot, D.V. Lindley, and A.F.M. Smith (eds.), Bayesian Statistics. Valencia: University of Valencia Press.

Good, I.J. 1990. "Subjective Probability." In J. Eatwell, M. Milgate, and P. Newman (eds.), The New Palgrave: Utility and Probability. New York: Norton.

Hassl, D. 1965. "Advanced Concepts in Fault Tree Analysis." Paper presented at the System Safety Symposium, sponsored by the University of Washington and the Boeing Company, Seattle, Washington.

Kolmogorov, A.N. 1933. Foundations of the Theory of Probability. New York: Chelsea Publishing.

Kolmogorov, A.N. 1969. "The Theory of Probability." Pp. 229-264 in A.D. Aleksandrov, A.N. Kolmogorov, and M.A. Lavrentev (eds.), Mathematics, Its Contents, Methods and Meaning, Vol. 2, Part 3. Cambridge, Mass.: MIT Press.

Lindley, D.V. 1982. "Scoring Rules and the Inevitability of Probability." International Statistical Review 50(1):1-26.

Lindley, D.V. 1985. Making Decisions, 2nd ed. New York: Wiley.

Lindley, D.V., and N.D. Singpurwalla. 1991. "On the Evidence Needed to Reach Agreed Action Between Adversaries, with Application to Acceptance Sampling." Journal of the American Statistical Association 86(416):933-937.

Lindley, D.V., and N.D. Singpurwalla. 1993. "Adversarial Life Testing." Journal of the Royal Statistical Society, Series B 55(4):837-847.

Morgeson, J.D., V.A. Utgoff, M.A. Fainberg, and M. Keleher. 2006. "National Risk Assessment Pilot Project." Document D-3309. Arlington, Va.: Institute for Defense Analyses.

Nayak, T.K., and S. Kundu. 2001. "Calculating and Describing the Uncertainty in Risk Assessment: The Bayesian Approach." Human and Ecological Risk Assessment 7(2):307-328.

Raiffa, H., and R. Schlaifer. 1961. Applied Statistical Decision Theory. Boston: Harvard University, Graduate School of Business Administration, Division of Research.

Ramsey, F.P. 1926. "Truth and Probability." In Foundations of Mathematics and Other Logical Essays. New York: Humanities Press.

Savage, L.J. 1972. The Foundations of Statistics, 2nd ed. New York: Dover.

Singpurwalla, N.D. 2006. Reliability and Risk: A Bayesian Perspective. Hoboken, N.J.: Wiley.

Singpurwalla, N.D. 2007. "Betting on Residual Life: The Caveats of Conditioning." Statistics & Probability Letters 77(12):1354-1361.

Singpurwalla, N.D., and J. Booker. 2004. "Membership Functions and Probability Measures of Fuzzy Sets." Journal of the American Statistical Association 99:867-877.

Singpurwalla, N.D., and A. Wilson. 2007. "Probability, Chance, and the Probability of Chance." IIE Transactions. Forthcoming.

von Neumann, J., and O. Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, N.J.: Princeton University Press.

Walley, P. 1991. Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.

Watson, H.A. 1961. Launch Control Safety Study, Section VII, Vol. 1. Murray Hill, N.J.: Bell Laboratories.

Zadeh, L. 1979. "Possibility Theory and Soft Data Analysis." Memorandum UCB/ERL M79/66. Berkeley, Calif.: University of California.