
The third and most significant modification is that the branching probabilities (DHS on occasion also calls them “branch fractions”) are not fixed, but are instead themselves determined by sampling from beta distributions provided indirectly by Subject Matter Experts (SMEs). Let θ be the collection of branching probabilities. In each incident we therefore observe (θ, A, S, Y), with θ determining the event tree for the other three random variables. This modification will also be discussed separately below.


The Second Modification: Repeated Attacks per Incident. The vision is that a cell or group of terrorists will not plan a single attack, but will plan to continue attacking until interrupted, with the entire group of attacks constituting an incident. The effect of this is to change the distribution of consequences of an incident, since a successful attack will be accompanied by after-attacks, the number of which I will call X. I believe that the formula used for calculating E(X) is incorrect. Specifically, let λ′ be the probability that any given after-attack will succeed, assume that after-attacks continue until one of them fails, and assume that the failed after-attack terminates the process and itself has no consequences. Then the average value of X is E(X) = λ′/(1 − λ′), the mean of a geometric-type random variable. This is not the formula in use. Using the correct formula would be a simple enough change, but I believe the numerical effect might be significant.
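To make the claimed mean concrete, here is a minimal Monte Carlo check; the value λ′ = 0.7 and the sample size are purely illustrative, not taken from the BTRA:

```python
import numpy as np

# Monte Carlo check of E(X) = lam / (1 - lam), where X counts successful
# after-attacks and attacks continue until the first failure.
rng = np.random.default_rng(0)
lam = 0.7  # illustrative per-attack success probability

# numpy's geometric counts the Bernoulli(p) trials up to and including
# the first "success"; taking p = 1 - lam (success = the terminating
# failure), X is that count minus one.
x = rng.geometric(p=1 - lam, size=100_000) - 1

print(x.mean())          # simulated mean, close to 2.333...
print(lam / (1 - lam))   # claimed formula: 0.7 / 0.3
```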

Other changes may also be necessary to implement the original vision. If the after-attacks all have independent consequences, then the distribution of total consequences is the (1 + X)-fold convolution of the consequence distribution, a complicated operation of which I see no evidence. The documentation is silent on what is actually assumed about the independence of after-attacks, and on how the E(X) computation is actually used. Simply scaling up the consequences of one attack by the factor (1 + E(X)) is correct on the average, regardless of independence assumptions, but will not give the correct distribution of total consequences.
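The distinction between the (1 + X)-fold convolution and the (1 + E(X)) scaling can be seen in a small simulation; the lognormal per-attack consequence distribution below is an arbitrary stand-in, not anything from the BTRA:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n = 0.7, 50_000

# Arbitrary stand-in for the consequence of one successful attack;
# nothing here comes from the BTRA itself.
def consequences(size):
    return rng.lognormal(mean=0.0, sigma=1.0, size=size)

# Correct total: the (1 + X)-fold sum of independent consequences.
x = rng.geometric(p=1 - lam, size=n) - 1          # after-attack counts
totals = np.array([consequences(1 + k).sum() for k in x])

# Scaled alternative: one attack's consequence times (1 + E(X)).
scaled = consequences(n) * (1 + lam / (1 - lam))

print(totals.mean(), scaled.mean())   # the means agree
print(totals.std(), scaled.std())     # the spreads do not
```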


The Third Modification: “Random Probabilities.” DHS has accommodated SME uncertainty by allowing the branch probabilities themselves to be random quantities, with the SMEs merely agreeing on a distribution for each probability rather than a specific number. I will refer to each of these probability distributions as a “marginal” for its branch. If a node has N branches, the experts contribute N marginals, one for each branch. Except at the root, these marginals are all beta distributions on the interval [0, 1], and each therefore has two parameters, alpha (α) and beta (β). Each of these distributions has a mean, and since the probabilities themselves must sum to 1 over the branches, the means must logically sum to 1 as well. The same need not be true of the SME inputs, but DHS seems to have disciplined the elicitation process so that the SME marginal means actually do sum to 1. That is true in all of the data that I have seen.
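As a concrete check of this discipline, recall that the mean of a Beta(α, β) distribution is α/(α + β); the (α, β) pairs below are invented for illustration:

```python
# The mean of a Beta(alpha, beta) marginal is alpha / (alpha + beta).
# These (alpha, beta) pairs are invented; they only show what a
# disciplined elicitation looks like at a node with N = 3 branches.
marginals = [(2.0, 6.0), (3.0, 3.0), (6.0, 18.0)]

means = [a / (a + b) for a, b in marginals]
print(means, sum(means))   # [0.25, 0.5, 0.25] -> 1.0
```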

However, summing to 1 is not sufficient for the SME marginals to be meaningful. This is most obvious when N = 2. If the first branch has probability A, then the second must have probability 1 − A, and therefore the second probability distribution has no choice but to be the mirror image of the first. If the experts feel that the first marginal has α = 1 and β = 1 while the second has α = 2 and β = 2, then we must explain to the experts that what they are saying is meaningless, even though both marginals have a mean of 0.5: the second marginal must be the mirror image of the first, and since Beta(1, 1) is symmetric, that mirror image is Beta(1, 1) itself, not Beta(2, 2). Any other possibility is literally meaningless, since there is no pair of random variables (A1, A2) such that Ai has the ith marginal distribution and A1 + A2 is always exactly 1.
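The mirror-image fact is standard: if A1 ~ Beta(α, β), then 1 − A1 ~ Beta(β, α). A quick empirical check of the N = 2 example above:

```python
import numpy as np
from scipy import stats

# If A1 ~ Beta(alpha, beta), then A2 = 1 - A1 ~ Beta(beta, alpha):
# the second marginal is completely determined by the first.
rng = np.random.default_rng(2)
a1 = rng.beta(1.0, 1.0, size=100_000)   # the elicited first marginal
a2 = 1.0 - a1                           # the forced second probability

# a2 matches the mirror image Beta(1, 1), not the elicited Beta(2, 2).
print(stats.kstest(a2, stats.beta(1.0, 1.0).cdf).pvalue)  # large
print(stats.kstest(a2, stats.beta(2.0, 2.0).cdf).pvalue)  # essentially 0
```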

I think DHS recognizes the difficulty when N = 2, and has basically fixed it in that case by asking the SMEs for only one marginal, but the same difficulty is present for N > 2, and has not been fixed. The sampling procedure offered on page C-81 of Department of Homeland Security (2006) will reliably produce probabilities A1, …, AN that sum to 1, and which are correct on the average, but they do not have the marginal beta distributions given by the SMEs. This is most obvious in the case of the last branch, since the Nth marginal is never used in the sampling process, but I believe that the marginal distribution is correct only for the first branch.
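The C-81 procedure is not reproduced here; the sketch below is only my generic stick-breaking reconstruction of a scheme with the symptoms just described (sums exactly 1, means correct on average, first marginal exact, Nth elicited marginal never consulted), not the DHS algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def stick_breaking(marginals, size):
    """Sample rows (A1, ..., AN) that sum to 1, drawing each conditional
    fraction from a Beta rescaled so E(Ai) matches the elicited mean.
    A reconstruction for illustration only, not the C-81 algorithm."""
    means = [a / (a + b) for a, b in marginals]
    remaining, used, cols = np.ones(size), 0.0, []
    for (a, _), m in zip(marginals[:-1], means[:-1]):  # Nth never used
        cond_mean = m / (1.0 - used)          # mean of Ai / leftover mass
        b_adj = a * (1.0 - cond_mean) / cond_mean
        frac = rng.beta(a, b_adj, size=size)  # equals Beta(a, b) for i = 1
        cols.append(remaining * frac)
        remaining = remaining * (1.0 - frac)
        used += m
    cols.append(remaining)                    # last branch gets the rest
    return np.column_stack(cols)

# Invented marginals for N = 3 branches; their means sum to 1.
A = stick_breaking([(2.0, 6.0), (3.0, 3.0), (6.0, 18.0)], size=100_000)

print(A.sum(axis=1)[:3])   # each row sums to 1 exactly
print(A.mean(axis=0))      # roughly [0.25, 0.5, 0.25]: means are right
# A[:, 0] is Beta(2, 6) by construction, but A[:, 1] and A[:, 2] are
# products of betas, not the elicited Beta(3, 3) and Beta(6, 18).
```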

There is a multivariate distribution (the Dirichlet distribution) whose marginals are all beta distributions, but the Dirichlet distribution has only N parameters. The SME marginals involve 2N parameters in total, so the Dirichlet distribution is not a satisfactory joint distribution for A1, …, AN.
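For reference, the ith marginal of a Dirichlet(α1, …, αN) is Beta(αi, α0 − αi) with α0 = Σ αj, so the N Dirichlet parameters pin down all N beta marginals at once. A short check with illustrative parameters:

```python
import numpy as np
from scipy import stats

# Marginal i of Dirichlet(alpha_1,...,alpha_N) is Beta(alpha_i, a0 - alpha_i),
# where a0 = sum(alpha). The N Dirichlet parameters therefore fix all N
# beta marginals at once -- no room for 2N independently chosen ones.
rng = np.random.default_rng(4)
alpha = np.array([2.0, 3.0, 5.0])        # illustrative parameters
A = rng.dirichlet(alpha, size=100_000)

a0 = alpha.sum()
for i in range(len(alpha)):
    ks = stats.kstest(A[:, i], stats.beta(alpha[i], a0 - alpha[i]).cdf)
    print(i, ks.pvalue)                  # large p-values: marginals match
```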


Estimation of the Spread in Agent-Damage Charts. I have defined Y to be the consequence and A to be the agent. Define Ya to be the consequence if A = a and 0 otherwise, so that the 28 random variables Ya sum to Y. Most of the DHS output deals with the random variable E(Ya | θ), the expected consequence contribution from agent a, given the sampled branch probabilities θ. This quantity is random only because of its dependence on θ, the natural variability of Ya having been averaged out. A sample E(Ya | θj), j = 1, …, 500, is produced by Latin Hypercube Sampling (LHS) of the branch probabilities, each sample including the standard average risk computations for the event tree. A sample-mean estimate Ŷa of E(Ya) is then made by averaging: Ŷa = (1/500) Σj E(Ya | θj). The agents are then sorted in order of decreasing sample mean, and displayed in what I will call “agent-damage” charts showing the expected values and spreads as a function of agent. The sample means are normalized before being displayed, probably by forcing them to sum to 1. The normalization destroys information that is relevant to the decisions being made. I do not know the motivation for doing so.
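In outline, the computation just described looks like the following sketch. Everything in it is a stand-in: eval_tree is a hypothetical placeholder for the event-tree evaluation, plain beta draws replace LHS, and the 5th/95th percentile band is merely one plausible way to display the spread; only the counts 28 and 500 come from the text above.

```python
import numpy as np

rng = np.random.default_rng(5)
n_agents, n_samples = 28, 500

# Hypothetical stand-in for one event-tree evaluation: given sampled
# branch probabilities theta_j, return E(Y_a | theta_j) for each agent.
def eval_tree(theta_j):
    return theta_j * np.arange(1, n_agents + 1)

# theta_j should come from Latin Hypercube Sampling of the beta
# marginals; ordinary beta draws stand in for LHS here.
samples = np.array([eval_tree(rng.beta(2.0, 5.0, size=n_agents))
                    for _ in range(n_samples)])       # shape (500, 28)

y_hat = samples.mean(axis=0)              # sample-mean estimate of E(Y_a)
spread = np.percentile(samples, [5, 95], axis=0)   # epistemic spread only
y_norm = y_hat / y_hat.sum()              # the normalization criticized above

order = np.argsort(y_hat)[::-1]           # agents by decreasing sample mean
print(y_norm[order][:5])
```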

The spreads display the epistemic variability due to SME uncertainty about θ, but suppress all of the aleatoric variability implied by the event tree. If there were no uncertainty


