In addition to their potential utility for expanding the scope of operational performance evaluations, modeling and simulation can also serve as valuable test planning tools—exploring test scenario options, addressing test sizing issues, identifying critical performance issues, and so on. Depending on the application, this can be accomplished with detailed physics-based models or with lower fidelity models that capture primary system performance without attempting to replicate the underlying physical processes that determine it. One advantage of simpler models is increased responsiveness and flexibility, both in initial availability to support operational test and evaluation planning and in turnaround time for detailed studies (e.g., comprehensive sensitivity analyses). However, without a physical justification, extrapolation beyond the conditions and circumstances of the conducted testing is generally not warranted.
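To illustrate the kind of low-fidelity analysis described above, the following sketch uses a simple binomial model of trial outcomes—an assumption made here purely for illustration, not a model presented at the workshop—to address a test sizing question and to run a quick sensitivity analysis on it.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def detection_power(n_trials, true_prob, threshold, n_sims=10_000):
    """Estimate the probability that a test of n_trials yields an observed
    success rate at or above the required threshold, assuming a simple
    binomial (low-fidelity) model of system performance."""
    successes = rng.binomial(n_trials, true_prob, size=n_sims)
    return np.mean(successes / n_trials >= threshold)

# Test sizing question: how many trials are needed so that a system whose
# true success probability is 0.90 will, with high probability, meet an
# observed-rate threshold of 0.85?  (All values are illustrative assumptions.)
for n in (10, 20, 40, 80):
    print(f"n = {n:3d}  power = {detection_power(n, 0.90, 0.85):.2f}")

# Simple sensitivity analysis: how does the answer change if the true
# success probability is lower than assumed?
for p in (0.95, 0.90, 0.85):
    print(f"true p = {p:.2f}  power at n = 40: {detection_power(40, p, 0.85):.2f}")
```

Because such a model runs in seconds, sweeps over assumed performance levels, thresholds, and test sizes can be completed quickly—precisely the responsiveness advantage noted above—but the results carry no weight outside the range of conditions the model was built to represent.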

As Art Koehler described in a presentation at the workshop, a particular application of modeling and simulation that is directly relevant to evolutionary acquisition is being developed by Procter & Gamble and Los Alamos National Laboratory. They have designed a simulation and analysis system that can examine the impacts on system performance and reliability of changing a major component of an existing complex system. Such a capability provides a key piece of what one would need to know in an evolutionary context in order to plan for and accommodate changes resulting from component upgrades, possibly through redesign of other parts of the existing system.
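The presentation did not describe the system at the level of code; the following is only a minimal sketch, assuming a simple series-system reliability model, of the kind of question such a tool addresses—how a change to one component, together with its side effects on neighboring components, propagates to system-level reliability.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def system_reliability(component_reliabilities, mission_draws=50_000):
    """Monte Carlo estimate of mission success for a series system:
    a mission succeeds only if every component works on that draw."""
    rels = np.asarray(component_reliabilities)
    draws = rng.random((mission_draws, rels.size)) < rels
    return draws.all(axis=1).mean()

# Baseline system with four components (reliabilities are assumed values).
baseline = [0.99, 0.97, 0.95, 0.98]

# Upgrade the third component: higher nominal reliability, but suppose the
# change also stresses an adjacent component, degrading it slightly.
upgraded = [0.99, 0.96, 0.99, 0.98]

print(f"baseline system reliability: {system_reliability(baseline):.3f}")
print(f"upgraded system reliability: {system_reliability(upgraded):.3f}")
```

A comparison of this sort—before and after a component change, with interactions among components represented explicitly—is what would allow planners to anticipate whether an upgrade requires compensating redesign elsewhere in the system.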

SOFTWARE DEVELOPMENT IN AN EVOLUTIONARY CONTEXT

Software development is one area in which evolutionary acquisition is expected to play a major role. Given the growing importance of software in complex defense systems, staged development should be carried out consistent with current best practices. The general approach described in Appendix C, sometimes referred to as the cleanroom process, is one of several similar approaches representative of current best practices. The software engineering process described in Appendix C, however, is designed to be carried out by contractors, and the extent to which it could (or should) be mandated in government contracts is unclear. Nevertheless, there is a need to explore how this might be done.
