Rule-Based Models

Some of the most advanced systems for computer-generated agents used in military simulations are based on the Soar architecture (Laird et al., 1987; see also Chapter 3). Soar, originally constructed as an attempt to provide a unified theory of human cognition (Newell, 1991), is a production rule system in which all knowledge is represented in the form of condition-action rules. It provides a learning mechanism called chunking, although this option is not activated in the current military simulations. When the mechanism is activated, it works as follows. The production rule system takes input and produces output during each interval, defined as a decision cycle. Whenever the decision cycle reaches an impasse or conflict, a problem-solving process is activated. Eventually, this process produces a solution, and Soar overcomes the impasse. Chunking is applied at this point: a new rule is formed whose conditions encode the situation preceding the impasse and whose action encodes the solution. The next time the same situation is encountered, the solution can be provided immediately by the rule formed through chunking, and the problem-solving process can be bypassed. There are also other principles for creating new productions with greater discrimination (additional conditions in the antecedent of the production) or greater generalization (fewer conditions in the antecedent) and for masking the effects of older productions.
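The impasse-then-chunk cycle described above can be illustrated with a minimal sketch. This is not Soar code or the Soar API; the rule representation, the problem solver, and the state facts are all simplified assumptions made for illustration. Rules fire only when all of their conditions hold; when no rule fires, a slow fallback solver resolves the impasse, and its answer is stored as a new rule so the same situation is handled immediately the next time.

```python
# Illustrative sketch of Soar-style chunking (not actual Soar code).
# A rule pairs a set of condition facts with an action. When no rule
# matches the current state (an "impasse"), a deliberate problem-solving
# fallback is invoked, and its solution is encoded as a new rule (a
# chunk) so the problem solving can be bypassed on later encounters.

def make_rule(conditions, action):
    return (frozenset(conditions), action)

class ProductionSystem:
    def __init__(self, rules, problem_solver):
        self.rules = list(rules)
        self.problem_solver = problem_solver  # fallback used at an impasse

    def decision_cycle(self, state):
        """Return (action, hit_impasse) for the given set of facts."""
        state = frozenset(state)
        for conditions, action in self.rules:
            if conditions <= state:          # all conditions satisfied
                return action, False
        # Impasse: no rule fires; run the slow problem-solving process.
        action = self.problem_solver(state)
        # Chunking: encode the pre-impasse conditions and the solution
        # as a new production rule.
        self.rules.append(make_rule(state, action))
        return action, True

# Hypothetical usage: the first decision cycle hits an impasse and forms
# a chunk; the second fires the chunk directly.
solver = lambda state: "retreat" if "under-fire" in state else "advance"
ps = ProductionSystem([], solver)
ps.decision_cycle({"under-fire", "low-ammo"})  # impasse, chunk formed
ps.decision_cycle({"under-fire", "low-ammo"})  # chunk fires immediately
```

Note that the chunk encodes the entire pre-impasse state as its conditions; the discrimination and generalization principles mentioned above correspond to adding or dropping conditions from that antecedent.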

A few empirical studies have been conducted to evaluate the validity of the chunking process (Miller and Laird, 1996; Newell and Rosenbloom, 1981; Rieman et al., 1996). More extensive experimental testing is needed, however, to determine how closely this process approximates human learning, and there are reasons for questioning chunking as the primary model of human learning in military situations. One problem with the applications of Soar in military simulations thus far is that the system must select one operator to execute from the many that are applicable in each decision cycle, and this selection depends on preference values associated with each operator. Currently, these preference values are programmed directly into the Soar-IFOR models rather than learned from experience. In future military Soar systems, when new operators are learned, preferences for those operators will also need to be learned, and Soar's chunking mechanism is well suited for this purpose. Given the uncertain and dynamic environment of a military simulation, this preference knowledge will need to be continually updated as the system gains experience with the environment. Chunking can, in principle, continually refine and adjust preference knowledge, but this capability has yet to be demonstrated in practice for large military simulations and rapidly changing environments.
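The contrast between hard-coded and learned preferences can be sketched as follows. The numeric preference scheme and the delta-rule update below are assumptions for illustration, not Soar's actual preference syntax or the Soar-IFOR implementation; they show only the general idea of selecting the highest-preference applicable operator and adjusting preferences from experienced outcomes.

```python
# Sketch of preference-based operator selection with adaptive
# preferences (illustrative scheme, not Soar's preference mechanism).
# Each operator carries a numeric preference; the highest-preference
# applicable operator is executed, and its preference is nudged toward
# the observed outcome so the values adapt to a changing environment.

class PreferenceSelector:
    def __init__(self, operators, learning_rate=0.2):
        # Hard-coded initialization stands in for programmed preferences.
        self.prefs = {op: 0.0 for op in operators}
        self.lr = learning_rate

    def select(self, applicable):
        # Execute the applicable operator with the highest preference.
        return max(applicable, key=lambda op: self.prefs[op])

    def update(self, op, reward):
        # Move the preference toward the observed reward (delta rule),
        # so preference knowledge is continually refined by experience.
        self.prefs[op] += self.lr * (reward - self.prefs[op])
```

In a static system the `prefs` table would simply be fixed at authoring time, as in the current Soar-IFOR models; the `update` step is what a learning mechanism such as chunking would have to supply.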

A second potential problem is that the conditions forming the antecedent of a production rule must be matched exactly before the production will fire. If the current situation deviates slightly from the conditions of a rule because of noise in the environment or changes in the environment over time, the appropriate rule will not fire.
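This brittleness can be made concrete with a small sketch. The rule, the state facts, and the partial-matching relaxation below are hypothetical, chosen only to illustrate the point: an exact matcher fails when a single condition is missing, whereas a relaxed matcher that fires above a match threshold would still respond.

```python
# Sketch of the brittleness of exact condition matching (illustrative,
# not Soar code). A production fires only if every condition is present
# in the current state; a small perturbation (a dropped or noisy fact)
# prevents the rule from firing even when it remains the appropriate
# response. A hypothetical partial matcher is shown for contrast.

def fires_exact(conditions, state):
    # Standard production matching: every condition must hold.
    return set(conditions) <= set(state)

def fires_partial(conditions, state, threshold=0.8):
    # Hypothetical relaxation: fire if enough conditions match.
    matched = len(set(conditions) & set(state))
    return matched / len(conditions) >= threshold

# Hypothetical rule antecedent and a noisy state missing one fact.
rule = {"enemy-visible", "in-range", "weapon-armed", "daylight"}
noisy_state = {"enemy-visible", "in-range", "weapon-armed"}
```

Here `fires_exact(rule, noisy_state)` is false because the "daylight" condition was lost, while `fires_partial` still fires on a 3-of-4 match; approaches along these lines trade exactness for robustness to noise.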



Copyright © National Academy of Sciences. All rights reserved.