can be viewed as new subsystems. If the effect of these new subsystems on system performance can be assumed to be additive (as in point 2 above), then the majority of the experimentation resources in the k-th stage should be devoted to the new subsystems, with limited resources reserved for system integration (interactions). More specifically, the following strategy should be useful in such situations:
Do full operational testing and integration testing only after stages at which the modifications are substantial.
Do limited integration testing at intermediate stages at which modifications are small to moderate.
Build realism into developmental tests and carry out full component testing in developmental testing.
As pointed out at the workshop by Steve Vardeman, one can use more formal decision-theoretic methods to combine both costs and data from test results at different stages of development, including results from developmental, operational, and field tests. Examples of such methods were described at the workshop (see, for example, Gaver et al., 2005). These techniques require inputs (typically costs) that are often difficult to estimate or quantify; examples include the cost of fielding a system late, the cost of having a fielded system perform poorly in operations, the benefit of deploying a good system earlier, the benefit of winning a battle faster as a result of a fielded new system, and so on. Nevertheless, analyses based on such approaches can provide useful insights into the trade-offs involved, especially when they are coupled with sensitivity analyses on the robustness of the conclusions to the inputs.
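The flavor of such an analysis can be sketched with a toy expected-cost comparison. Everything below is an illustrative assumption rather than the model of Gaver et al. (2005): a two-action decision (field the system now versus delay for more testing), a Beta-Binomial posterior for system reliability built from test outcomes, hypothetical cost and benefit inputs, and a grid-based sensitivity analysis over the hard-to-quantify inputs.

```python
# Illustrative sketch only -- the decision model, cost inputs, and
# function names are assumptions, not the method of Gaver et al. (2005).
from itertools import product


def posterior_mean(successes, failures, prior_a=1.0, prior_b=1.0):
    """Beta-Binomial posterior mean for the probability the system performs well,
    combining a (here uniform) prior with observed test outcomes."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)


def expected_cost_field_now(p_good, cost_poor):
    # Fielding now risks the poor-performance cost with probability 1 - p_good.
    return (1.0 - p_good) * cost_poor


def expected_cost_delay(cost_late, benefit_early):
    # Delaying incurs the late-fielding cost and forfeits the early-deployment benefit.
    return cost_late + benefit_early


def decide(successes, failures, cost_poor, cost_late, benefit_early):
    """Pick the action with the lower expected cost, given test data and inputs."""
    p = posterior_mean(successes, failures)
    now = expected_cost_field_now(p, cost_poor)
    delay = expected_cost_delay(cost_late, benefit_early)
    action = "field now" if now <= delay else "delay and test more"
    return action, now, delay


def sensitivity(successes, failures, cost_poor_grid, cost_late_grid, benefit_grid):
    """Tabulate the decision over a grid of the difficult-to-estimate inputs,
    mirroring the sensitivity analyses recommended in the text."""
    return {
        (cp, cl, b): decide(successes, failures, cp, cl, b)[0]
        for cp, cl, b in product(cost_poor_grid, cost_late_grid, benefit_grid)
    }


# With 18 successes and 2 failures, the decision hinges on the assumed
# poor-performance cost -- exactly the kind of input the text flags as
# hard to quantify and worth varying.
table = sensitivity(18, 2, cost_poor_grid=[50, 500],
                    cost_late_grid=[20, 40], benefit_grid=[5, 10])
```

Even this toy version shows the point made above: the recommended action can flip as an uncertain cost input moves across a plausible range, so the grid of decisions is often more informative than any single expected-cost number.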
Chaloner, K., and I. Verdinelli. 1995. Bayesian experimental design: A review. Statistical Science 10:273-304.
Gaver, D., P. Jacobs, and E. Seglie. 2005. Modern Military Evolutionary Acquisition and the Ramifications of "RAMS." Technical Report. Monterey, CA: Naval Postgraduate School.