Workshop participants noted the importance of sensitivity analysis in examining the effects of model assumptions on final results. Seglie (Appendix B) posed two key questions: How does one find the most sensitive assumptions? How does one define a robust measure of effectiveness? The second question might be reduced to the choice of a suitable parameterization. For example, should exchange ratios be considered only within a particular scenario instead of across a range of scenarios? These questions should ultimately be answered by military users who are informed by statistical thinking.

Staniec (Appendix B) cited the need for careful analysis of COEA sensitivity to changes in weapon system characteristics. The analysis of the Javelin system (see case study #1) involved many different scenarios and different combat simulation models, each of which is based on a series of assumptions. Introducing human beings into the operation leads to further complications. Closed-form simulations of combat typically do not model human decision processes well. However, with the recent development of “realistic” and real-time distributed simulation, it may be possible to conduct experiments using man-in-the-loop systems to quantify the benefits of various command-and-control system alternatives. Advanced distributed simulation methods will require more elaborate computer experiments, and sensitivity analysis will probably have to be built into the initial experimental design.

In addition to the variability in assessments of operational effectiveness, the uncertainties associated with cost estimates may be large but are typically not expressed in the COEAs performed for prospective defense systems. Defense analysts can provide useful information to decision makers by identifying key factors driving program costs and assessing the sensitivity of cost estimates to plausible changes in these key factors.
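One way to express such cost uncertainty, rather than reporting a single point estimate, is to propagate plausible ranges on the key cost drivers through the cost model by Monte Carlo simulation. The following sketch illustrates the idea; the cost model, driver names, and all dollar figures are invented for illustration and do not come from any actual COEA.

```python
# Hedged sketch (all figures invented): propagate plausible ranges on
# key cost drivers through a toy cost model via Monte Carlo, so the
# estimate can be reported as a range instead of a single number.
import random

random.seed(0)

def program_cost(unit_cost, quantity, support_factor):
    """Toy cost model: procurement cost plus a proportional support tail."""
    return unit_cost * quantity * (1.0 + support_factor)

draws = []
for _ in range(10_000):
    draws.append(program_cost(
        unit_cost=random.triangular(1.0e6, 1.5e6, 1.2e6),  # $ per unit
        quantity=random.choice([300, 400, 500]),           # buy size
        support_factor=random.uniform(0.25, 0.45),         # support tail
    ))

draws.sort()
lo, med, hi = draws[500], draws[5000], draws[9499]
print(f"cost estimate: {med/1e6:.0f} $M (90% range {lo/1e6:.0f}-{hi/1e6:.0f} $M)")
```

Sorting the simulated costs and reading off percentiles gives decision makers an explicit interval; comparing runs with one driver's range widened or narrowed identifies which factors drive the spread.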

Kathryn Laskey observed that statistical methods may play an important part in examining the sensitivity of conclusions to assumptions made in the modeling process. Most combat simulations use nonlinear, deterministic models, and relatively small changes in inputs might lead to large changes in results. Also, input parameter values, such as weapon ranges or kill probabilities, will vary greatly under different conditions, not all of which are included in the model. In general, the variation of results from combat models is due to aggregation over factors included in the study but not in the model, to random error, and to factors not included in the study.
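The decomposition of variation described above can be estimated by Monte Carlo using the law of total variance: the variance of a simulated outcome splits into a component explained by a modeled factor and a residual random-error component. The toy stochastic model below is invented for this sketch; "scenario intensity" stands in for any factor included in a study.

```python
# Hedged sketch (toy model, invented numbers): decompose the variance of
# a simulated outcome into a between-scenario component (explained by a
# modeled factor) and an average within-scenario component (random error).
import random

random.seed(1)

def simulated_losses(scenario_intensity):
    """Toy stochastic outcome: losses grow nonlinearly with intensity."""
    return scenario_intensity ** 2 + random.gauss(0.0, 1.0)

scenarios = [1.0, 2.0, 3.0]      # the factor included in the study
runs_per_scenario = 2000

means, within = [], []
for s in scenarios:
    draws = [simulated_losses(s) for _ in range(runs_per_scenario)]
    m = sum(draws) / len(draws)
    means.append(m)
    within.append(sum((d - m) ** 2 for d in draws) / len(draws))

grand = sum(means) / len(means)
between = sum((m - grand) ** 2 for m in means) / len(means)  # explained by scenario
avg_within = sum(within) / len(within)                       # random error

print(f"variance between scenarios:      {between:.2f}")
print(f"average within-scenario variance: {avg_within:.2f}")
```

A large between-scenario component relative to the within-scenario component would indicate that aggregating results across scenarios hides most of the real variation.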

Standard sensitivity analysis involves varying input parameters one at a time and monitoring the corresponding changes in model outputs. More sophisticated multivariate methods are also available. One such approach to sensitivity analysis is to use a variance components or a
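The one-at-a-time procedure just described can be sketched as follows. The "combat model" here is a made-up nonlinear function of weapon range and kill probability, chosen only to show how modest input perturbations can produce disproportionate output changes.

```python
# Hedged sketch of one-at-a-time sensitivity analysis. The model and
# nominal values are invented; a real COEA would substitute its own
# combat simulation for exchange_ratio().
import math

def exchange_ratio(weapon_range, kill_prob):
    """Toy nonlinear model: output responds sharply to kill probability."""
    return math.exp(3.0 * kill_prob) * (weapon_range / 2000.0) ** 2

nominal = {"weapon_range": 2000.0, "kill_prob": 0.7}
base = exchange_ratio(**nominal)

# Vary each input +/-10% while holding the others at nominal.
for name, value in nominal.items():
    for delta in (-0.10, 0.10):
        out = exchange_ratio(**dict(nominal, **{name: value * (1 + delta)}))
        print(f"{name:>12s} {delta:+.0%} -> output change {out / base - 1:+.1%}")
```

In this toy model a 10 percent increase in kill probability moves the output by more than 20 percent, the kind of amplification that flags an assumption as sensitive. One-at-a-time sweeps miss interactions between inputs, which is why the multivariate methods mentioned above are of interest.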

Copyright © National Academy of Sciences. All rights reserved.