into preference at the level of the individual entity, and means of doing so are discussed in the next section.


Three stages in the development of decision models are reviewed in this section. The first-stage models—random-utility—are probabilistic within an entity, but static across time. They provide adequate descriptions of variability across episodes within an entity, but they fail to describe the dynamic characteristics of decision making within a single decision episode. The second-stage models—sequential sampling decision models—describe the dynamic evolution of preference over time within an episode. These models provide mechanisms for explaining the effects of time pressure on decision making, as well as the relations between speed and accuracy of decisions, which are critical factors for military simulations. The third-stage models—adaptive planning models—describe decision making in strategic situations that involve a sequence of interdependent decisions and events. Most military decision strategies entail sequences of decision steps to form a plan of action. These models also provide an interface with learning models (see Chapter 5) and allow for flexible and adaptive planning based on experience (see also Chapter 8).

Random-Utility Models

Probabilistic choice models have been developed over the past 40 years and have long been used successfully in marketing and the prediction of consumer behavior. The earliest models were proposed by Thurstone (1959) and Luce (1959); more recent models incorporate critical properties missing from these original, oversimplified models.

The most natural way of injecting variability into utility theory is to reformulate the utilities as random variables. For example, a commander trying to decide between two courses of action (say, attack or withdraw) does not know the precise utility of each action and may estimate these utilities in a way that varies across episodes. However, the simplest versions of these models, called strong random-utility models, predict a property called strong stochastic transitivity, a property researchers have shown is often violated (see Restle, 1961; Tversky, 1972; Busemeyer and Townsend, 1993; Mellers and Biagini, 1994). Strong stochastic transitivity states that if (1) action A is chosen more frequently than action B and (2) action B is chosen more frequently than action C, then (3) the frequency of choosing action A over C is at least as great as the greater of the two preceding frequencies. Something important is still missing.
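The prediction can be made concrete with a minimal sketch in Python, assuming a Thurstone-style strong random-utility model in which each action's utility is an independent normal random variable; the mean utilities below are illustrative values, not parameters from any cited study. Under these assumptions the model necessarily satisfies strong stochastic transitivity, which is why the empirical violations cited above count as evidence against it.

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution function via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def choice_prob(mu_x, mu_y, sigma=1.0):
    # P(choose X over Y) when the random utilities U_X, U_Y are
    # independent normals with means mu_x, mu_y and common sd sigma:
    # P(U_X > U_Y) = Phi((mu_x - mu_y) / sqrt(2) * sigma)
    return norm_cdf((mu_x - mu_y) / math.sqrt(2.0 * sigma ** 2))

# Hypothetical mean utilities for three courses of action
mu = {"A": 2.0, "B": 1.0, "C": 0.0}

p_ab = choice_prob(mu["A"], mu["B"])  # P(A chosen over B)
p_bc = choice_prob(mu["B"], mu["C"])  # P(B chosen over C)
p_ac = choice_prob(mu["A"], mu["C"])  # P(A chosen over C)

# Strong stochastic transitivity: given A beats B and B beats C
# more than half the time, P(A over C) must be at least the
# larger of the two pairwise frequencies.
assert p_ab > 0.5 and p_bc > 0.5
assert p_ac >= max(p_ab, p_bc)
```

With these means, P(A over B) and P(B over C) are each about .76, while P(A over C) is about .92, so the inequality holds; no choice of means or (common) variances in this model form can produce a violation.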

These simple probabilistic choice models fail to provide a way of capturing the effects of similarity of options on choice, which can be shown in the following

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.