actions and events across multiple stages (see, for example, the semiautomated force combat instruction set for the close combat tactical trainer). The models discussed next address this need.
Before a commander makes a decision, such as attacking versus withdrawing from an engagement, he or she needs to consider various plans and future scenarios that could occur, contingent on an assessment of the current situation (see also Chapters 7 and 8). For example, if the commander attacks, the attack may be completely successful and incapacitate the enemy, making retaliation impossible. Alternatively, the attack may miss completely because of incorrect information about the location of the enemy, and the enemy may learn the attacker's position, in which case the attacker may need to withdraw to cover to avoid retaliation from an unknown position. Or the attack may be only partially successful, in which case the commander will have to wait to see the enemy's reaction and then consider another decision to attack or withdraw.
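The attack-versus-withdraw scenario above has the structure of a small contingency tree: a decision node leads to chance outcomes, one of which leads to a further decision. The following is a minimal sketch of evaluating such a tree by expected value; all probabilities and utilities are hypothetical numbers chosen for illustration, not values from the text.

```python
def value(node):
    """Evaluate a contingency tree node.

    A leaf is a terminal utility (number); a decision node is a dict
    mapping actions to subtrees, evaluated by taking the best action;
    a chance node is a list of (probability, subtree) pairs, evaluated
    as a probability-weighted average.
    """
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, dict):
        return max(value(child) for child in node.values())
    return sum(p * value(child) for p, child in node)

# Illustrative tree: the attack may fully succeed, miss entirely
# (revealing the attacker's position and forcing a withdrawal), or
# partially succeed, in which case a second attack/withdraw decision
# follows. All numbers are assumptions for the sketch.
tree = {
    "attack": [
        (0.4, 10),   # full success; enemy incapacitated
        (0.3, -6),   # complete miss; position revealed, costly withdrawal
        (0.3, {      # partial success: second-stage decision
            "attack":   [(0.5, 8), (0.5, -4)],
            "withdraw": [(1.0, 1)],
        }),
    ],
    "withdraw": [(1.0, 0)],  # disengage without contact
}

best_action = max(tree, key=lambda a: value(tree[a]))
```

Under these assumed numbers, the recursion folds the second-stage decision back into the first, which is the essential feature of planning across multiple stages rather than evaluating a single decision in isolation.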
Adaptive decision making has been a major interest in the field of decision making for some time (see, e.g., the summary report of Payne et al., 1993). Decision makers are quite capable of selecting or constructing strategies on the fly based on joint consideration of accuracy (the probability of choosing the optimal action) and effort or cost (the time and attentional resources required to execute a strategy). However, most of this work on adaptive decision making is limited to single-stage decisions. Another line of empirical research uses dynamic decision tasks that entail planning across multiple decision stages (see Kerstholt and Raaijmakers, forthcoming, for a review), but this work is still at an early stage of development, and formal models have not yet been fully developed. Therefore, the discussion in this subsection is based on a synthesis of the previously described decision models and the learning models described in Chapter 5; such a synthesis provides a possible direction for building adaptive planning models for decision making. More specifically, the discussion examines how an exemplar- or case-based learning model can be integrated with the sequential sampling decision models (see Gibson et al., 1997).
The first component is an exemplar model for learning to anticipate consequences based on preceding sequences of actions and events. Each decision episode is defined by a choice among actions—a decision—followed by a consequence. Also, preceding each episode is a short sequence of actions and events that leads up to this choice point. Each episode is encoded in a memory trace that contains two parts: a representation of the short sequence of events and actions that preceded a consequence, and the consequence that followed.
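This exemplar component can be sketched as a memory of (context, action, consequence) traces, where the anticipated consequence of a candidate action is a similarity-weighted average over stored traces. This is a hedged illustration, not the chapter's formal model: the exponential similarity function, its sensitivity parameter, and the mismatch-count metric are all assumptions made for the sketch.

```python
import math

def similarity(seq_a, seq_b, c=1.0):
    """Assumed similarity measure: exponential decay in the number of
    mismatching elements between two short event/action sequences."""
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    mismatches += abs(len(seq_a) - len(seq_b))
    return math.exp(-c * mismatches)

class ExemplarMemory:
    """Stores decision episodes as memory traces with two parts: the
    short sequence preceding the choice point, and the consequence
    that followed the chosen action."""

    def __init__(self):
        self.traces = []  # each trace: (context, action, consequence)

    def encode(self, context, action, consequence):
        self.traces.append((tuple(context), action, consequence))

    def anticipate(self, context, action):
        """Similarity-weighted mean consequence of taking `action`
        in situations resembling `context`."""
        pairs = [(similarity(context, ctx), cons)
                 for ctx, act, cons in self.traces if act == action]
        total = sum(w for w, _ in pairs)
        if total == 0:
            return 0.0  # no relevant experience; assumed default
        return sum(w * cons for w, cons in pairs) / total

# Hypothetical usage: traces from earlier engagements inform the
# anticipated consequence of attacking in a similar situation.
mem = ExemplarMemory()
mem.encode(["scout_report", "enemy_near"], "attack", -5)
mem.encode(["scout_report", "enemy_far"], "attack", 8)
mem.encode(["scout_report", "enemy_near"], "withdraw", 1)
anticipated = mem.anticipate(["scout_report", "enemy_near"], "attack")
```

Because the first trace matches the current context exactly, it dominates the weighted average, so the anticipated consequence of attacking is pulled toward its negative outcome. Anticipations of this kind are what the decision process described next operates on.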
The second component is a sequential sampling decision process, with the choice among actions being based on the anticipated consequences of each action. When a