probability of S being true into two sources: (1) the prior likelihood of S, which encodes one's prior knowledge of S; and (2) the likelihood of S being true in light of the observed evidence, E. Pearl (1986) refers to these two sources of knowledge as the predictive (or causal) and diagnostic support for S being true, respectively.
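The combination of these two sources is ordinary Bayes' rule. A minimal sketch for a binary situation variable S, with illustrative numbers not drawn from the text:

```python
def posterior(prior_S, lik_E_given_S, lik_E_given_notS):
    """P(S | E) via Bayes' rule for a binary situation variable S.

    prior_S encodes the predictive (causal) support for S;
    the likelihood terms encode the diagnostic support given evidence E.
    """
    numerator = lik_E_given_S * prior_S
    denominator = numerator + lik_E_given_notS * (1 - prior_S)
    return numerator / denominator

# Example: a weak prior (0.2) combined with evidence that is nine times
# more likely under S than under not-S yields a posterior near 0.69.
p = posterior(0.2, 0.9, 0.1)
```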
The Bayesian belief network is particularly suitable for environments in which:
Evidence from various sources may be unreliable, incomplete, and imprecise.
Each piece of evidence contributes information at its own source-specific level of abstraction. In other words, the evidence may support a set of situations without committing to any single one.
Uncertainties pervade rules that relate observed evidence and situations.
These conditions clearly hold in the military decision making environment. However, one can reasonably ask whether human decision makers follow Bayesian rules of inference since, as noted in Chapter 5, many studies show significant deviations from what would be expected of a "rational" Bayesian approach (Kahneman and Tversky, 1982). We believe this question is an open one and, in line with Anderson's (1993) reasoning in developing adaptive control of thought (ACT-R), consider a Bayesian approach to be an appropriate framework for building normative situation awareness models—models that may, in the future, need to be "detuned" in some fashion to better match empirically determined human assessment behavior. For this brief review, the discussion is restricted to simple (unadorned) belief networks and how they might be used to model the military situation assessment process.
A belief network is a graphical representational formalism for reasoning under uncertainty (Pearl, 1988; Lauritzen and Spiegelhalter, 1988). The nodes of the graph represent the domain variables, which may be discrete or continuous, and the links between the nodes represent the probabilistic, and usually causal, relationships between the variables. The overall topology of the network encodes the qualitative knowledge about the domain.
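The qualitative part of a belief network, its topology, can be represented simply as a directed acyclic graph mapping each variable to its parents. The sketch below uses the node names of the five-variable example discussed next; the particular link structure shown is an assumption for illustration:

```python
# Directed acyclic graph: each node mapped to its parent nodes.
# The specific links (A,B -> X; X -> D; X -> E) are an assumed
# topology for illustration, not taken from the figure.
parents = {
    "A": [],          # root node: needs a prior belief distribution
    "B": [],          # root node: needs a prior belief distribution
    "X": ["A", "B"],  # X is probabilistically influenced by A and B
    "D": ["X"],       # each link requires a conditional probability table
    "E": ["X"],
}

def roots(net):
    """Nodes with no parents; these are the ones requiring priors."""
    return [n for n, ps in net.items() if not ps]
```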
For example, the generic belief network of Figure 7.2 encodes the relationships over a simple domain consisting of five binary variables. Figure 7.2a shows the additional quantitative information needed to fully specify a belief network: prior belief distributions for all root nodes (in this case just A and B) and conditional probability tables for the links between variables. Figure 7.2a shows only one of this belief network's conditional probability tables, which fully specifies the conditional probability distribution for D given X. Similar conditional probability tables would be required for the other links. These initial quantities, in conjunction with the topology itself, constitute the domain model, and in most applications are invariant during usage.
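How the priors and conditional probability tables jointly determine beliefs can be sketched by brute-force enumeration over a small assumed topology (A and B as roots influencing X, with D conditioned on X). All probability values below are hypothetical; the book does not supply them:

```python
# Hypothetical quantities in the spirit of Figure 7.2a.
P_A = 0.3            # prior P(A = true)
P_B = 0.6            # prior P(B = true)

# Conditional probability table P(X = true | A, B): one entry per
# combination of parent values (assumed link structure).
P_X = {(True, True): 0.9, (True, False): 0.7,
       (False, True): 0.4, (False, False): 0.1}

# Conditional probability table P(D = true | X), as for the D-X link.
P_D = {True: 0.8, False: 0.2}

def marginal_D():
    """P(D = true), by summing the joint distribution over A, B, X."""
    total = 0.0
    for a in (True, False):
        pa = P_A if a else 1 - P_A
        for b in (True, False):
            pb = P_B if b else 1 - P_B
            for x in (True, False):
                px = P_X[(a, b)] if x else 1 - P_X[(a, b)]
                total += pa * pb * px * P_D[x]
    return total
```

Enumeration is exponential in the number of variables; practical belief-network algorithms exploit the topology to propagate beliefs locally, which is what the initialization step described next does.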
The network is initialized by propagating the initial root prior beliefs downward through the network. The result is initial belief distributions for all variables