Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change

Appendix I
Review of BTRA Modeling

Alan R. Washburn, Ph.D.
Distinguished Professor Emeritus of Operations Research
Naval Postgraduate School, Monterey, California
July 10, 2007

MEMORANDUM FOR THE NATIONAL ACADEMY OF SCIENCES (NAS)

Review of the Department of Homeland Security (2006) work on bioterrorism.

Background. The Department of Homeland Security (DHS) has produced a 2006 bioterrorism study, and is working on subsequent versions. DHS has asked NAS to assess that study, which I will refer to hereafter as "the 2006 work." I have become acquainted with the work through contacts with the NAS committee, and have been invited to provide a review. This is that review. It is intended for a scientific audience, so I will not hesitate to use the language of probability in describing what I think was done in 2006, or in how things might be handled differently in the future. Random variables are uppercase symbols; P() and E() are the probability and expected value functions, respectively.

My Qualifications. After working five years for the Boeing Company, I joined the Operations Research faculty at the Naval Postgraduate School in 1970, where I did the usual academic things until retiring in 2006. My teaching includes probability and decision theory, which are relevant here. See my resume at http://www.nps.navy.mil/orfacpag/resumePages/washbu.htm for details. I have no biological or medical qualifications. My acquaintance with the work is mainly through the references listed at the end of this review.

Event Trees. The fundamental idea behind the 2006 work is an event tree. As I will use the term in this review, an event tree is a branching structure whose root corresponds to the assertion that some event has occurred, the event in this case being what I will call an "incident." The tree branches repeatedly until a "scenario" is encountered, at which point one will find a probability distribution that determines the consequence of the incident, a random variable that I will call Y. I think of consequences as being "lives lost," but any other scalar measure would do. Each node of the tree has a set of successor arcs, and there is a given probability distribution over these arcs. One can imagine starting at the root and randomly selecting an arc at each node encountered until finally the consequence is determined. In addition to Y, the event tree involved in the 2006 work is such that every path from root to consequence also defines two other random variables: A, the biological agent, one of 28 possibilities, and S, the scenario. The scenario might be null in the sense that Y is 0 because the incident is terminated prematurely, but is nonetheless always defined.

DHS determines the consequence distributions through Monte Carlo simulation based on expert input. The results are collected into decade-width histograms. I will not comment further on the methodology for producing the consequence distributions, since I have not examined it in detail.

DHS has modified the above definition of an event tree in three respects. One is that the initial branches from the root are rates, rather than probabilities. Call the rate on branch i λi, and let the sum of all of these rates be λ. If one interprets these rates as independent Poisson rates of the various kinds of incident, then it is equivalent to think of incidents as occurring in a Poisson process with rate λ, with each incident being of type i with probability λi/λ.
These ratios can be the first set of branch probabilities, so this is all equivalent to the standard event tree definition, except that we must remember that incidents occur at the given rate λ. This first modification is thus of little import. The second modification is that an incident might involve multiple attacks, each with separate consequences. This is a more significant modification, and will be discussed separately below.
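To make the first modification's equivalence concrete, here is a minimal simulation sketch. The rates, horizon, and number of trials are illustrative values chosen for this sketch, not DHS numbers. It generates incident counts in two ways: as independent Poisson processes, one per incident type, and as a single Poisson process of rate λ in which each incident is labeled type i with probability λi/λ. The per-type counts have the same distribution either way.

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.3, 0.5, 0.2])   # illustrative per-type incident rates lambda_i (not DHS values)
lam = rates.sum()                   # total incident rate lambda
horizon, trials = 10.0, 50_000

# Construction 1: independent Poisson processes, one per incident type.
counts1 = rng.poisson(rates * horizon, size=(trials, len(rates)))

# Construction 2: a single Poisson process of rate lambda, with each incident
# labeled type i with probability lambda_i / lambda.
totals = rng.poisson(lam * horizon, size=trials)
counts2 = np.array([rng.multinomial(n, rates / lam) for n in totals])

# Per-type means and variances agree, and both equal rates * horizon,
# as a Poisson count requires.
print(counts1.mean(axis=0), counts2.mean(axis=0))
print(counts1.var(axis=0), counts2.var(axis=0))
```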
The third and most significant modification is that the branching probabilities (DHS on occasion also calls them "branch fractions") are not fixed, but are instead themselves determined by sampling from beta distributions provided indirectly by Subject Matter Experts (SMEs). Let θ be the collection of branching probabilities. In each incident we therefore observe (θ, A, S, Y), with θ determining the event tree for the other three random variables. This modification will also be discussed separately below.

The Second Modification: Repeated Attacks per Incident. The vision is that a cell or group of terrorists will not plan a single attack, but will plan to continue to attack until interrupted, with the entire group of attacks constituting an incident. The effect of this is to change the distribution of consequences of an incident, since a successful attack will be accompanied by afterattacks, the number of which I will call X. I believe that the formula used for calculating E(X) is incorrect. Specifically, let λ′ be the probability that any one of the afterattacks will succeed, assume that afterattacks continue until one of them fails, and assume that the failed afterattack terminates the process and itself has no consequences. Then the average value of X is E(X) = λ′/(1 − λ′), the mean of a geometric-type random variable. This is not the formula in use. Using the correct formula would be a simple enough change, but I believe the numerical effect might be significant.

Other changes may also be necessary to implement the original vision. If the afterattacks all have independent consequences, then the distribution of total consequences is the (1 + X)-fold convolution of the consequence distribution, a complicated operation that I see no evidence of. The documentation is mute on what is actually assumed about the independence of afterattacks, and on how the E(X) computation is actually used. Simply scaling up the consequences of one attack by the factor (1 + E(X)) is correct on the average, regardless of independence assumptions, but will not give the correct distribution of total consequences.
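Both points, the geometric mean formula and the difference between scaling and convolving, can be checked with a short simulation. The sketch below uses made-up numbers: a success probability of 0.6 for each afterattack (λ′ in the text) and a lognormal stand-in for the single-attack consequence distribution. It illustrates the argument and is not the DHS computation.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.6                  # illustrative afterattack success probability (lambda' in the text)
trials = 50_000

# X = number of successful afterattacks before the first failure.
# numpy's geometric counts draws up to and including the first "success"
# (here, the first failed afterattack, which has probability 1 - p), so subtract 1.
X = rng.geometric(1 - p, size=trials) - 1
print(X.mean(), p / (1 - p))          # both close to E(X) = lambda' / (1 - lambda') = 1.5

# Made-up single-attack consequence distribution (a lognormal stand-in).
def one_attack(size):
    return rng.lognormal(mean=3.0, sigma=1.0, size=size)

# Total consequences of an incident: the (1 + X)-fold convolution of the
# single-attack distribution, obtained here by direct simulation.
totals = np.array([one_attack(1 + x).sum() for x in X])

# Scaling one attack by (1 + E(X)) reproduces the mean but not the distribution.
scaled = one_attack(trials) * (1 + X.mean())
print(totals.mean(), scaled.mean())          # means agree, up to noise
print(np.quantile(totals, [0.5, 0.95]))      # quantiles of the convolution
print(np.quantile(scaled, [0.5, 0.95]))      # differ from the scaled version
```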
The Third Modification: "Random Probabilities." DHS has accommodated SME uncertainty by allowing the branch probabilities themselves to be random quantities, with the SMEs merely agreeing to a distribution for each probability, rather than a specific number. I will refer to each of these probability distributions as a "marginal" for its branch. If a node has N branches, the experts contribute N marginals, one for each branch. Except at the root, these marginals are all beta distributions on the interval [0, 1], and each therefore has two parameters, alpha (α) and beta (β). Each of these distributions has a mean, and since the probabilities themselves must sum over the branches to 1, the same thing must logically be true of the means. The same need not be true of the SME inputs, but DHS seems to have disciplined the elicitation process so that the SME marginal means actually do sum to 1. That is true in all of the data that I have seen. However, summing to 1 is not sufficient for the SME marginals to be meaningful. This is most obvious when N = 2. If the first branch has probability A, then the second must have probability 1 − A, and therefore the second probability distribution has no choice but to be the mirror image of the first. If the experts feel that the first marginal has α = 1 and β = 1, while the second has α = 2 and β = 2, then we must explain to the experts that what they are saying is meaningless, even though both marginals have a mean of 0.5. The second marginal has no choice but to be the mirror image of the first, and the mirror image of the symmetric Beta(1, 1) distribution is again Beta(1, 1), so the second marginal must equal the first. Any other possibility is literally meaningless, since there is no pair of random variables (A1, A2) such that Ai has the ith marginal distribution and also A1 + A2 is always exactly 1.

I think DHS recognizes the difficulty when N = 2, and has basically fixed it in that case by asking the SMEs for only one marginal, but the same difficulty is present for N > 2, and has not been fixed. The sampling procedure offered on page C-81 of Department of Homeland Security (2006) will reliably produce probabilities A1, …, AN that sum to 1, and which are correct on the average, but they do not have the marginal beta distributions given by the SMEs. This is most obvious in the case of the last branch, since the Nth marginal is never used in the sampling process, but I believe that the marginal distribution is correct only for the first branch. There is a multivariable distribution (the Dirichlet distribution) whose marginals are all beta distributions, but the Dirichlet distribution has only N + 1 parameters. The SME marginals require 2N in total, so the Dirichlet distribution is not a satisfactory joint distribution for A1, …, AN.

Estimation of the Spread in Agent-Damage Charts. I have defined Y to be the consequence and A to be the agent. Define Ya to be the consequence if A = a, or otherwise 0, so that the 28 random variables Ya sum to Y. Most of the DHS output deals with the random variable E(Ya | θ), the expected consequence contribution from agent a, given the sampled branch probabilities θ. This quantity is random only because of its dependence on θ, the natural variability of Ya having been averaged out. A sample E(Ya | θj), j = 1, …, 500, is produced by Latin Hypercube Sampling (LHS) of the branch probabilities, each sample including the standard average risk computations for the event tree. A sample mean estimate Ŷa of E(Ya) is then made by averaging over the sample: Ŷa = (1/500) Σj E(Ya | θj). The agents are then sorted in order of decreasing sample mean, and displayed in what I will call "agent-damage" charts showing the expected values and spreads as a function of agent. The sample means are normalized before being displayed, probably by forcing them to sum to 1. The normalization destroys information that is relevant to the decisions being made. I do not know the motivation for doing so. The spreads display the epistemic variability due to SME uncertainty about θ, but suppress all of the aleatoric variability implied by the event tree.
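The distinction between the two kinds of variability is easy to see on a toy tree with a single branch point and made-up numbers; this sketch illustrates the distinction only and does not model any agent. The spread of E(Y | θ) across sampled branch probabilities is far narrower than the spread of Y itself.

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 500                                # mirrors the 500 replications over theta

# Toy tree: one node, two branches, each leading to a scenario with a
# lognormal consequence distribution.  All parameters are made up.
m1, s1 = 7.0, 1.5                         # scenario 1: rare, high consequence
m2, s2 = 2.0, 0.5                         # scenario 2: common, low consequence
mean1 = np.exp(m1 + s1**2 / 2)            # exact lognormal means
mean2 = np.exp(m2 + s2**2 / 2)

thetas = rng.beta(2.0, 8.0, size=reps)    # epistemic uncertainty about the branch probability

# What the agent-damage charts display: the spread of E(Y | theta) over theta.
cond_means = thetas * mean1 + (1 - thetas) * mean2

# What an incident actually produces: a draw of Y, with both kinds of variability.
take1 = rng.random(reps) < thetas
Y = np.where(take1, rng.lognormal(m1, s1, reps), rng.lognormal(m2, s2, reps))

print(np.quantile(cond_means, [0.05, 0.95]))   # narrow band: epistemic spread only
print(np.quantile(Y, [0.05, 0.95]))            # much wider: aleatoric variability included
```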
If there were no uncertainty about θ, all of the spreads would collapse to a single point (the mean) for each agent. I am not sure how the variability displayed in agent-damage charts is supposed to relate to decision making, but I guess that the graphs are intended to support conclusions such as the following: "I know that the mean damage for agent 1 is larger than the mean damage for agent 2, but I still think that we ought to spend our money defending against agent 2 because of its high associated variability. Even a small prospect of the high damages associated with agent 2 is not acceptable." If that is the kind of logic that the agent-damage charts are intended to support, then they should include aleatoric variability. Without it, the spreads associated with each agent are too small. This issue affects infectious agents more than the other kind, since infectious diseases will have especially high damage variances.

The agent-damage charts are intended for a high-level decision-making audience, and devote considerable space (one of the two available dimensions) to showing the spread associated with each agent. Without the need to show spread, they could be replaced by bar charts or simple tables. If spread is important enough to be displayed, then it ought to be displayed in a manner that facilitates good decisions. I doubt that that is currently the case.

Even without the aleatoric issue, I still have concerns about the spread that is displayed. The object ought to be to display the mean and fractiles (the spread) of the random variable E(Ya | θ) for each value of a. The mean of E(Ya | θ) is simply E(Ya) by the conditional expectation theorem, and is estimated by Ŷa. DHS claims graphically that the LHS sample fractiles are also the fractiles of the random variable E(Ya | θ). I suspect that this claim is false. LHS is basically a variance reduction technique that makes the variance of Ŷa smaller than it would be with ordinary sampling. While this effect is welcome, LHS also has an unpredictable effect on variability. The spread that is shown for each agent may not be a good estimate of the spread of the random variable E(Ya | θ).

One final point on estimation. As long as there is no dependence between the branch probabilities at different nodes, as there is not in the 2006 work, it is characteristic of an event tree that P(Ya ≤ y) = E(P(Ya ≤ y | θ)) = P(Ya ≤ y | E(θ)). The first equality is due to the conditional expectation theorem, and the second is because no event tree probability enters more than once into calculating the probability of any scenario. In other words, all information pertinent to the distribution of Ya could be obtained without sampling error by simply replacing the marginal branch distributions by their means. This information includes E(Ya), which is currently being estimated (with sampling error) by Ŷa.

(Note added in June 2007. Let me expand the notation to clarify this final point, since it has caused some confusion. Let θ = (Q1, …, Qn), where n is the number of nodes and Qi is the collection of branch probabilities at node i. Also let Qij be the jth branch probability at node i. In the sampling procedure used by DHS to obtain θ, Qij and Qkl are independent random variables as long as i and k are not the same, which is all that is required for my conclusion to be true. While it is certainly true that the branches chosen at nodes i and k are in general dependent, the branch probabilities are not.)
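A small numerical check of this identity, using a two-node toy tree with made-up beta parameters: the probability of reaching a given scenario computed by Monte Carlo over the random branch probabilities matches the probability computed from a fixed tree in which each branch distribution is replaced by its mean, because independent branch probabilities enter each path probability multiplicatively and at most once.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 400_000

# Toy tree: node 1 branches to node 2, and the scenario of interest is reached
# by taking branch 1 at both nodes.  Branch probabilities are epistemically
# random and independent across nodes.  Beta parameters are made up.
a1, b1 = 2.0, 5.0        # node-1, branch-1 probability ~ Beta(2, 5)
a2, b2 = 4.0, 3.0        # node-2, branch-1 probability ~ Beta(4, 3)

# Monte Carlo over theta and over the tree.
q1 = rng.beta(a1, b1, size=trials)
q2 = rng.beta(a2, b2, size=trials)
reached = (rng.random(trials) < q1) & (rng.random(trials) < q2)
print(reached.mean())                        # P(scenario), estimated with sampling error

# The same quantity with each branch distribution replaced by its mean:
# no sampling is needed, and the answer agrees with the estimate above.
print((a1 / (a1 + b1)) * (a2 / (a2 + b2)))   # E(q1) * E(q2) = 8/49
```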
Use of SMEs. It is inevitable in a project like this that probabilities will have to be obtained from Subject Matter Experts, rather than experimentation. The important thing is that the SMEs at least know what they are estimating, and that estimates be used correctly once they are obtained. I have already mentioned that SME estimates of the marginal branch distributions are not reproduced by the sampling procedure. Another concern is at the third stage of the event tree, where SMEs are asked to deal with agent selection. At that stage there are 4 × 8 = 32 nodes in the event tree where an agent might be selected, each of which has 28 branches. I can certainly understand DHS's reluctance to conduct 896 interviews with SMEs, each to determine one of the needed beta distributions. Some kind of a shortcut is needed, but I wonder whether the one adopted is a good one. The SMEs are first asked to determine an "input regarding known preferences of terrorists" for each agent. If I were an SME and somebody asked me to determine the quoted expression for agent a, I would announce my estimate of P(A = a), the probability that agent a is actually selected in an incident. Given all of these SME inputs, DHS then goes over the 896 branches, some of which have a logical 0 for the agent, and assigns probabilities using the rule that the probability is either 0 or else proportional to the SME's agent input, the proportionality constant being selected in each of the 32 cases so that the probabilities sum to 1. My objections are these:

- The quoted expression above does not make it clear that the SME input is supposed to be P(A = a). There is a danger of every SME making a different interpretation of what is being asked for.
- If the SME does input the probabilities P(A = a), and if DHS applies the shortcut procedure to fill out the third stage of the event tree, and if the probabilities of the 28 agents are then computed from the tree, they will not necessarily agree with the SME's inputs (a small numerical illustration follows below). This would be true even without my next objection.
- The SME's inputs are subsequently modified by various formulas involving agent lethality, etc. What is an SME who is already acquainted with agent lethality to think of this? Should he adjust his input so that the net result of all this computation is the number that he wanted in the first place?

If one is going to elicit SME inputs on probabilities, then it seems to me that one ought to use them as they are intended. Given that the agent probabilities strongly influence the agent-damage charts, the procedure for eliciting and using them should be an object of concern in future work.
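The second objection can be demonstrated on a miniature version of the problem. The sketch below uses two agent-selection nodes instead of 32 and three agents instead of 28, with hypothetical weights and node probabilities; the zero-and-renormalize step alone is enough to make the agent probabilities implied by the tree disagree with the weights the SME supplied.

```python
import numpy as np

# Hypothetical SME inputs, read as intended agent probabilities P(A = a).
w = np.array([0.5, 0.3, 0.2])

# Two agent-selection nodes (instead of 32), with the probability of reaching each.
p_node = np.array([0.6, 0.4])

# Logical feasibility of each agent at each node; agent 3 has a logical 0 at node 2.
feasible = np.array([[1.0, 1.0, 1.0],
                     [1.0, 1.0, 0.0]])

# The shortcut: zero out infeasible agents, then renormalize the weights at each node.
branch_probs = feasible * w
branch_probs /= branch_probs.sum(axis=1, keepdims=True)

# Agent probabilities implied by the filled-out tree.
implied = p_node @ branch_probs
print(implied)    # [0.55, 0.33, 0.12]
print(w)          # [0.5, 0.3, 0.2]: the SME's intended P(A = a) is not reproduced
```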
Tree Flipping? The process described earlier for generating agent-damage charts may not be a correct statement of what DHS actually did in 2006. The DHS documentation in several places, after describing a single event tree with 17 ranks, states that a separate analysis was actually done for each agent (paragraph C.3.4.2 of Department of Homeland Security (2006), for example). Now, it is possible to end up with the single-tree analysis described earlier by doing that. The essential step is to first calculate P(A = a) for each agent, and then make a new tree where the agent is selected at the root, with the agent selection probabilities on the 28 branches from the root. The second and third ranks of the tree would then be what were originally the first and second, with new probabilities as computed by Bayes' theorem, and the rest of the tree would be unchanged. Since the agent is at the root of the resulting "flipped" tree, using the flipped tree is in effect doing a separate analysis for each agent. The flipped tree would lead to the same agent-damage charts described earlier; the two trees are stochastically equivalent.

But I don't see the motivation for doing all this extra work in flipping the tree, and I have some concerns about whether the flipping operation was actually done correctly, or done at all. One concern is that the thing being manipulated is not an ordinary event tree, and there is no reason to expect that beta distributions will remain beta distributions in the flipping process. Of course, the flipping could occur after the tree is instantiated in each of the 500 replications, but that would get to be a lot of work. I doubt that this has been the case. The documentation is mute about the tree flipping process. I can only hope that the method actually used for producing agent-damage charts is equivalent to analyzing the single event tree as described above.
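For a tree with fixed branch probabilities, the flipping operation is straightforward. The toy sketch below (two target types, two agents, made-up probabilities, not DHS data) computes P(A = a) and the flipped second-rank probabilities by Bayes' theorem and confirms that the original and flipped trees produce the same joint distribution. The complication noted above is that the actual tree's branch probabilities are beta-distributed, and the flip need not preserve that property.

```python
import numpy as np

# Toy original tree with fixed branch probabilities (made-up values, not DHS data).
# Rank 1 chooses a target type, rank 2 chooses an agent.
p_target = np.array([0.7, 0.3])                  # P(T = t)
p_agent_given_target = np.array([[0.6, 0.4],     # P(A = a | T = t), one row per target
                                 [0.2, 0.8]])

# Joint distribution implied by the original tree.
joint = p_target[:, None] * p_agent_given_target      # P(T = t, A = a)

# Flip the tree: put the agent at the root and recompute the next rank by Bayes' theorem.
p_agent = joint.sum(axis=0)                            # P(A = a), the new root probabilities
p_target_given_agent = joint / p_agent                 # P(T = t | A = a)

# The flipped tree reproduces the same joint distribution: stochastic equivalence.
joint_flipped = p_agent * p_target_given_agent
print(np.allclose(joint, joint_flipped))               # True
print(p_agent)                                         # [0.48, 0.52]
```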
Suggestions. My main suggestion for future work is that distributions for branch probabilities be abandoned in favor of direct branch probabilities, as in a standard event tree. In other words, keep it simple. SMEs will not be comfortable expressing definite values for the probabilities, but then they are probably not comfortable with expressing definite values for α and β, either. Most people are simply not comfortable quantifying uncertainty. There is very little to be gained by including epistemic uncertainty about the branch probabilities in an analysis like this, and much to be lost in terms of complication. Epistemic uncertainty is not even discussed in most decision theory textbooks. Standard software for handling decision trees would become applicable (event trees are just a special case where there are no decisions) if epistemic uncertainty were not present. There is also standard software for handling influence diagrams, which ought to be considered as an alternative to decision trees. Influence diagram software is sometimes used diagnostically, which might be of use in bioterrorism. One might observe that the agent is known to be anthrax, for example, and instantly recompute the target probabilities based on that known condition.

Another suggestion is to examine the potential for optimization. Given that the basic problem is how to spend money to reduce risk, it is too bad that a problem that simple in structure cannot be posed formally. It is possible that some actions that we might take would be effective for all contagious diseases. This should make them attractive, but the low rank of most contagious diseases individually in the agent-damage charts tends to suppress their attractiveness.

My last suggestion is to report future results in a scientific fashion that can be reviewed by scientists. English is a notoriously imprecise language for describing operations involving chance, so in making my way through the references I have repeatedly struggled to understand what was actually done. As a result, I may well have misinterpreted something above, which I hope DHS will correct. If I were reviewing the 2006 work for a journal, my first act would be to send the material back to the authors with a request that it be written up using mathematics embedded in English, instead of just English. I know that DHS has to communicate complicated ideas about risk to laypeople. That task should be in addition to reporting the results scientifically, not a replacement for it.

In summary, my opinion is that the 2006 DHS methodology is not yet the "rigorous and technically sound methodology" demanded by the 2004 Homeland Security Presidential Directive 10: Biodefense for the 21st Century. Let me also add that I consider the report as a whole to be a remarkable accomplishment, given the magnitude of the task and the time available to do it.

References. Materials that I have examined before writing this review include the following:

Department of Homeland Security. 2006. Bioterrorism Risk Assessment. Biological Threat Characterization Center of the National Biodefense Analysis and Countermeasures Center. Fort Detrick, Md.

I have also examined various drafts of the following:

Department of Homeland Security. 2007. "A Lexicon of Risk Terminology and Methodological Description of the DHS Bioterrorism Risk Assessment." April 16.

Of all the documents, this last one comes closest to the technical appendix that I recommend. It has been of considerable use to me, but even it does not address tree flipping.