because of their gainful use of Bayes’ theorem. A probability model for an inferential problem consists of variables that characterize aspects of the problem and probability distributions that characterize the user’s knowledge about these aspects of the problem at any given point in time. The user builds a network of such variables, along with the interrelated probability distributions that capture their substantively important interrelationships. Conditional independence relationships introduced earlier in this chapter play a key role, both conceptually and computationally. The basic idea of conditional independence in Bayes nets is that the important interrelationships among even a large number of variables can be expressed mainly in terms of relationships within relatively small, overlapping subgroups of these variables.
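This idea can be sketched concretely. In the minimal example below (hypothetical numbers, not drawn from the text), a three-variable chain A → B → C is specified entirely by small local tables, and the full joint distribution is assembled from them as p(a, b, c) = p(a) p(b|a) p(c|b); the code then checks the conditional independence this structure implies, namely that C is independent of A given B.

```python
# A three-variable chain A -> B -> C, specified by small local tables
# (hypothetical numbers). The joint factors as p(a,b,c) = p(a) p(b|a) p(c|b).
p_a = {0: 0.6, 1: 0.4}                  # p(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},     # p(B | A = 0)
               1: {0: 0.2, 1: 0.8}}     # p(B | A = 1)
p_c_given_b = {0: {0: 0.9, 1: 0.1},     # p(C | B = 0)
               1: {0: 0.5, 1: 0.5}}     # p(C | B = 1)

def joint(a, b, c):
    """Joint probability assembled from the local conditional tables."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Verify the conditional independence the structure implies:
# p(c | a, b) equals p(c | b) for every combination of values.
for a in (0, 1):
    for b in (0, 1):
        p_ab = sum(joint(a, b, c) for c in (0, 1))
        for c in (0, 1):
            assert abs(joint(a, b, c) / p_ab - p_c_given_b[b][c]) < 1e-12
```

The payoff is exactly the one the text describes: no table over all three variables is ever written down; everything is carried by the small, overlapping pieces.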

Bayes nets are systems of variables and probability distributions that allow one to draw inferences within complex networks of interdependent variables. Examples of their use include calculating the probabilities of disease states given symptoms and predicting characteristics of the offspring of animals in light of the characteristics of their ancestors. Spurred by applications in such diverse areas as pedigree analysis, troubleshooting, and medical diagnosis, these systems have become an active topic in statistical research (see, e.g., Almond, 1995; Andersen, Jensen, Olesen, and Jensen, 1989; Pearl, 1988).

Two kinds of variables appear in a Bayes net for educational assessment: those that concern aspects of students’ knowledge and skill (construct variables in the terminology of this chapter) and those that concern aspects of the things students say, do, or make (observations). The nature and grain size of the construct variables are determined jointly by a conception of knowledge in the domain and the purpose of the assessment (see Chapter 2). The nature of the observations is determined by an understanding of how students display the targeted knowledge, that is, what students say or do in various settings that provides clues about that knowledge. The interrelationships are determined partly by substantive theory (e.g., a student who does not deeply understand the control-of-variables strategy in science may apply it in a near-transfer setting but probably not in a far-transfer setting) and partly by empirical observation (e.g., near transfer is less likely to be observed on task 1 than on task 2 at any level of understanding, simply because task 1 is more difficult to read).
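The two kinds of variables can be put together in a toy net. In the sketch below (all numbers and variable names are hypothetical, chosen only to mirror the paragraph's example), one construct variable, "understanding," takes values low/high, and two observations record success on a near-transfer and a far-transfer task; the observations are modeled as conditionally independent given the construct, and the net is used to update belief about the construct from the observed performances.

```python
# One construct variable and two observations (hypothetical numbers).
prior = {"low": 0.5, "high": 0.5}        # prior belief about understanding

# p(success | understanding); far transfer is harder at every level,
# echoing the substantive theory described in the text.
p_near = {"low": 0.3, "high": 0.8}
p_far = {"low": 0.1, "high": 0.6}

def posterior(near_success, far_success):
    """Posterior over the construct given both observed outcomes,
    treating the observations as conditionally independent given it."""
    def likelihood(theta):
        pn = p_near[theta] if near_success else 1 - p_near[theta]
        pf = p_far[theta] if far_success else 1 - p_far[theta]
        return pn * pf
    unnorm = {t: prior[t] * likelihood(t) for t in prior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

# A student who succeeds on the near-transfer task but not the far-transfer task:
belief = posterior(near_success=True, far_success=False)
```

With these numbers the evidence shifts belief toward "high" understanding, since success on the near-transfer task is much more probable under it; a different pattern of outcomes would shift belief the other way.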


Bayes’ theorem concerns the relationship between two variables, described famously in a posthumously published paper by the Reverend Thomas Bayes. It concerns how one should revise his or her beliefs about one variable upon obtaining information about another variable to which it is related. Let X be a variable whose probability distribution p(x|z) depends on the variable Z. Suppose also that prior to observing X, belief about Z can be expressed in terms of a probability distribution p(z). Bayes’ theorem says p(z|x) = p(x|z)p(z)/p(x), where p(x) is the expected value of p(x|z) over all possible values of Z.
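The theorem can be worked through with a small numeric sketch (the values below are hypothetical): Z has two states, p(x) is computed as the expectation of p(x|z) over the prior p(z), and the posterior follows from the formula just stated.

```python
# A worked numeric instance of Bayes' theorem (hypothetical values).
p_z = {"z1": 0.7, "z2": 0.3}           # prior belief p(z)
p_x_given_z = {"z1": 0.2, "z2": 0.9}   # p(x|z) for one observed value of X

# p(x) is the expected value of p(x|z) over all possible values of Z.
p_x = sum(p_x_given_z[z] * p_z[z] for z in p_z)

# Bayes' theorem: p(z|x) = p(x|z) p(z) / p(x).
p_z_given_x = {z: p_x_given_z[z] * p_z[z] / p_x for z in p_z}
```

Here the observation is far more probable under z2 than under z1, so the posterior shifts belief sharply toward z2 relative to the prior.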

Copyright © National Academy of Sciences. All rights reserved.