Thus, it is even more critical to view testing as a process of experimentation: one that involves continuous data collection and assessment, learning about the strengths and weaknesses of newly added capabilities or (sub)systems, and using all of this information to determine how to improve the overall performance of the system. Testing should not be viewed as an activity carried out solely by contractors early in system development or by DoD near its end; instead, it should become an intrinsic part of system development, with open communication and interaction between contractor and government testers throughout the developmental process.
In the evolutionary acquisition context, experimentation in early stages can be used to identify system flaws and to understand the limitations of system design. The focus in later stages should be on problems identified in the field and/or unresolved from earlier testing, evaluating the most recent modifications to the system, and assessing the maturity of a new component or subsystem design. This experimentation can be at the component level, at the subsystem level, or at the system level, with varying degrees of operational realism, depending on the goals.
Operational testing and evaluation (or testing for verification) will still have a major role to play, since it is the only way to verify that systems are in fact operationally effective and suitable. Indeed, it is critical that there be adequate oversight and accountability in this flexible environment. However, it is not realistic to undertake comprehensive operational tests at each stage of the development process. These should be undertaken only at stages encompassing major upgrades, the introduction of major new capabilities, or major new (sub)systems. At other stages, a combination of data and insights from component or subsystem testing and from developmental tests (reflecting operational realism where feasible) can be used instead, along with engineering and operational user judgment.
Conclusion 1: In evolutionary acquisition, the entire spectrum of testing activities should be viewed as a continuous process of gathering and analyzing data and combining information in order to make effective decisions. The primary goal of test programs should be to experiment, learn about the strengths and weaknesses of newly added capabilities or (sub)systems, and use the results to improve overall system performance. Furthermore, data from previous stages of development, including field data, should be used in design, development, and testing at future stages. Operational testing (testing for verification) of