An alternative approach to testing is to bring together experts knowledgeable in the system and experts in statistical experimental design to (a) model and analyze the system design at important levels of abstraction for important scenarios of use; (b) identify the most likely critical or vulnerable components, interfaces, or interactions, and design and conduct relatively cheap, focused tests of them; (c) present the results and case analysis to key people in the acquisition process; and (d) refocus development on the critical or vulnerable components.

Traditional operational test and evaluation can and should still be conducted for simpler systems to demonstrate that these systems are likely to work as intended in the field. But as the percentage of defense systems that are extremely complex appears to be increasing, the operational test and evaluation strategy must focus on the essential components, subsystems, and interfaces. Although such an approach will not guarantee that the comprehensive and complex system will work, it will provide useful evidence that certain scenarios and zones of use will work reliably. Analogous to mapping a minefield, testing can show “safe paths” of important use. For example, there are techniques for identifying small subsets of all possible combinations of components, subsystems, and environments, so that critical pairwise or higher-order combinations are tested; Cohen et al. (1994) discuss an application to software testing. Also, aspects of the “exploratory analysis under uncertainty” paradigm (e.g., Davis et al., 1999) may be applicable.
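To make the pairwise idea concrete, the following is a minimal greedy sketch of covering-array construction in the spirit of the approach Cohen et al. (1994) describe; it is not their algorithm. The factor names and levels are illustrative assumptions, and the exhaustive candidate search is only practical for small examples.

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedily build a test suite in which every pair of levels from
    every pair of factors appears in at least one test (a sketch of
    pairwise covering-array construction)."""
    names = list(factors)
    # Every (factor, level) pair combination that must be covered.
    uncovered = {
        ((f1, l1), (f2, l2))
        for f1, f2 in combinations(names, 2)
        for l1 in factors[f1]
        for l2 in factors[f2]
    }
    suite = []
    while uncovered:
        # Pick the candidate test that covers the most uncovered pairs.
        best, best_covered = None, set()
        for levels in product(*(factors[n] for n in names)):
            test = dict(zip(names, levels))
            covered = {
                p for p in uncovered
                if test[p[0][0]] == p[0][1] and test[p[1][0]] == p[1][1]
            }
            if len(covered) > len(best_covered):
                best, best_covered = test, covered
        suite.append(best)
        uncovered -= best_covered
    return suite

# Hypothetical factors for a fielded system.
factors = {
    "terrain": ["desert", "jungle"],
    "weather": ["clear", "rain", "fog"],
    "comms":   ["satellite", "line-of-sight"],
}
tests = pairwise_suite(factors)
```

For these three factors, exhaustive testing requires 2 × 3 × 2 = 12 combinations, while a pairwise suite covers all two-factor interactions with substantially fewer tests; the savings grow rapidly as factors and levels multiply.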

In summary, with unlimited time and test budgets, the preferred approach would be to test the system or its components in every operationally relevant scenario. However, given the large number of possible combinations of factors and test scenarios, testing will have to make use of clever strategies. These include testing only in selected scenarios to examine the performance of those components or interfaces that seem to be most problematic, testing only a subset of all possible interactions or interoperability features, testing those scenarios that correspond to the most frequent types of use, or some combination of these strategies. Clearly, there is a need to investigate various alternatives and develop specific proposals.
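The frequency-of-use strategy above can be sketched very simply: rank scenarios by expected usage and select the smallest set that accounts for a target share of operational use. The scenario names, frequencies, and the 90 percent target below are illustrative assumptions, not data from any actual system.

```python
def scenarios_by_usage(usage, coverage=0.90):
    """Select the most frequent scenarios until their combined share
    of expected use reaches the coverage target (a greedy sketch)."""
    total = sum(usage.values())
    chosen, accumulated = [], 0.0
    for name, freq in sorted(usage.items(), key=lambda kv: -kv[1]):
        if accumulated / total >= coverage:
            break  # target share of expected use already reached
        chosen.append(name)
        accumulated += freq
    return chosen

# Hypothetical usage frequencies (percent of expected operational use).
usage = {
    "patrol": 55,
    "convoy escort": 25,
    "urban ops": 12,
    "arctic": 5,
    "amphibious": 3,
}
chosen = scenarios_by_usage(usage)
```

In this illustration the three most frequent scenarios account for 92 percent of expected use, so the two rarest scenarios could be deferred or tested only for their most critical interfaces.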

Conclusion 5: The DoD testing community should investigate alternate strategies for testing complex defense systems to gain, early in the development process, an understanding of their potential operational failure modes, limitations, and level of performance.

Copyright © National Academy of Sciences. All rights reserved.