parameters that produce the behavior of concern. All of these procedures are relevant to the validation of emulation models.

Statistical or graphical comparisons between a model's results and those observed in the real world may be used to examine the model's predictive power. A key requirement for this analysis is the availability of real data obtained under comparable conditions. If a model is to be used to make absolute predictions, it is important not only that the mean of the model's output equal the mean of the real-world data, but also that the model's outputs be positively correlated with the real-world observations. However, if the model is to be used to make relative predictions, the requirements are less stringent: the means of the model and the real world need not be equal, but the outputs should still be positively correlated (Kleijnen and van Groenendaal, 1992).
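A minimal sketch of such a check, assuming paired model runs and real-world observations collected under comparable conditions; the synthetic data, variable names, and the 0.05 significance threshold are illustrative assumptions rather than anything specified in the text:

```python
import numpy as np
from scipy import stats

# Stand-in data: in practice these would be field observations and
# model runs obtained under comparable conditions.
rng = np.random.default_rng(0)
real_world = rng.normal(loc=10.0, scale=2.0, size=50)
model_output = real_world + rng.normal(0.0, 1.0, 50)

# Absolute prediction: the means should not differ significantly ...
t_stat, p_equal_means = stats.ttest_rel(model_output, real_world)

# ... and the outputs should track the real data (positive correlation).
r, p_corr = stats.pearsonr(model_output, real_world)

alpha = 0.05  # illustrative significance level
absolute_ok = (p_equal_means > alpha) and (r > 0)
relative_ok = r > 0  # relative prediction needs only positive correlation

print(f"equal-means test p = {p_equal_means:.3f}, correlation r = {r:.2f}")
print(f"suitable for absolute predictions: {absolute_ok}")
print(f"suitable for relative predictions: {relative_ok}")
```

The paired t-test addresses the equality-of-means requirement and the Pearson coefficient the correlation requirement; a fuller validation study might also examine variances and distributional shape.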

Since a model's validity depends on its assumptions, those assumptions should be stated explicitly in the model's documentation; unfortunately, in many cases they are not. According to Fossett et al. (1991), a model's documentation should provide an analyst not involved in the model's development with sufficient information to assess, with some level of confidence, whether the model is appropriate for the intended use specified by its developers.

It is important to point out that validation is a labor-intensive process that often requires a team of researchers and several years to complete. It is recommended that model developers be aided in this work by trained investigators not involved in developing the models. In the military context, the most highly validated models are physiological models and a few specific weapons models. Few individual combatant or unit-level models have been validated using statistical comparisons for prediction; in fact, many have only been grounded, that is, shown to have face validity. Validation is clearly a critical issue and is necessary if simulations are to be used as the basis for training or policy making.

Large models cannot be validated simply by exhaustively examining their predictions under all parameter settings and contrasting that behavior with experimental data. Basic research is therefore needed on how to design intelligent artificial agents for validating such models. Many of the more complex models can be validated only by examining the trends they predict, and additional research is needed on statistical techniques for locating patterns and examining those trends. There is also a need for standardized validation techniques that go beyond those currently in use; developing such techniques may in part involve building sample databases against which to validate models at each level. Sensitivity analysis may be used to distinguish parameters that strongly influence a model's results from those that are only indirectly or loosely coupled to its outcomes, as sketched below. Finally, it may be useful to establish a review board to ensure that standardized validation procedures are applied to new models and that new versions of old models are docked against the old versions (to ensure that the new versions still generate the same correct behavior as the old ones).
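As one concrete reading of the sensitivity-analysis step, here is a minimal one-at-a-time screening sketch; the model function, parameter names, baseline values, and 10 percent perturbation are hypothetical placeholders, not anything prescribed in the text:

```python
# One-at-a-time sensitivity screening: perturb each parameter in turn
# and rank parameters by how much the output moves.

def model(params: dict[str, float]) -> float:
    """Stand-in simulation: replace with the real model under study."""
    return params["a"] ** 2 + 0.01 * params["b"] + 0.0 * params["c"]

baseline = {"a": 1.0, "b": 1.0, "c": 1.0}
base_out = model(baseline)

# Perturb each parameter by 10% and record the absolute output change.
sensitivity = {}
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] *= 1.10
    sensitivity[name] = abs(model(perturbed) - base_out)

# Large changes mark influential parameters; near-zero changes mark
# parameters only loosely coupled to the outcome.
for name, delta in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: |change in output| = {delta:.4f}")
```

Under the same assumptions, a harness like this could be reused for docking: run the old and new versions of a model on identical parameter settings and compare their outputs.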


