ism to developmental testing is very likely to help discover these flaws earlier in the development process. Another benefit of adding operational realism to developmental testing is that it provides a closer connection between developmental and operational testing, thereby making it easier to combine information from the two forms of testing.
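One simple way to combine information from developmental and operational testing is Bayesian pooling of pass/fail results. The sketch below is purely illustrative and is not a method from the report: it assumes a common success probability across both test phases, a uniform Beta(1, 1) prior, and hypothetical counts.

```python
# Hypothetical pass/fail counts; all numbers are illustrative only.
dt_successes, dt_trials = 45, 50   # developmental testing (DT)
ot_successes, ot_trials = 18, 20   # operational testing (OT)

# With binomial data and a Beta(1, 1) prior, combining the two sources
# amounts to adding successes and failures into the posterior parameters.
alpha = 1 + dt_successes + ot_successes
beta = 1 + (dt_trials - dt_successes) + (ot_trials - ot_successes)

# Posterior mean of the success probability, using both data sources.
posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))
```

The pooled estimate is more precise than either source alone, which is the payoff of connecting the two test phases; the assumption of a common success probability is, of course, exactly what operational realism in developmental testing helps justify.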

We also note that operational testing as currently done is typically too short to discover many reliability deficiencies, such as those caused by fatigue. Developmental testing is likewise typically too short to find some of these flaws. These weaknesses in the current testing approach motivate the discussion below on accelerated testing, which, when properly implemented, can effectively expedite the discovery of design flaws.
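Accelerated testing compresses test time by applying elevated stress and translating observed life back to use conditions. As a minimal sketch, assuming a standard Arrhenius temperature-acceleration model with illustrative parameter values (none of which come from the report):

```python
from math import exp

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K


def acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor between use and stress temperatures.

    ea_ev is the activation energy in eV; temperatures are in Celsius.
    """
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))


# Illustrative values: 0.7 eV activation energy, 30 C use, 85 C stress.
af = acceleration_factor(0.7, 30.0, 85.0)
```

With these assumed values the factor is on the order of tens, meaning one hour at the stress temperature stands in for tens of hours of field use; this is how a properly designed accelerated test can surface fatigue-like failure modes within a short test period.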

A later report (National Research Council, 2006:15) noted that continuous testing is especially appropriate for systems that are acquired in stages, as one “learns about strengths and weaknesses of newly added capabilities or (sub) systems, and uses the results to improve overall system performance.” This report also recommended that DOD documents and processes be revised “to explicitly recognize and accommodate [this] framework” (p. 3) so that the testing community is engaged in a joint effort to learn about and improve a system’s performance. Although such formal changes have not been made, practices within DOD appear to be moving in this direction, one that is consistent with commercial industry practices.

There are a number of challenges in implementing the above paradigm. Test data from various sources need to be readily available, including field data from similar systems, data from previous stages of development, contractor data, developmental data, and data from modeling and simulation. Information from these sources can then be combined and exploited for effective test planning, design, analysis, and decision making. There are, however, major obstacles to accomplishing this approach in DOD: lack of data archives (see discussion below); use of multiple databases (with their own formats and incompatibilities); lack of access to data; and, perhaps most important, lack of an incentive structure that emphasizes early detection of faults and sharing of information. As noted in the NRC report (2006:19): “incentives need to be put in place to support the process of learning and discovery of design inadequacies and failure modes early.” In addition, the NRC recommended that DOD require contractors to share all relevant data on system performance and results of modeling and simulation developed under government contracts. Similarly, Adolph et al. (2008:219) noted: “Sharing and access to all appropriate system-level and selected component-level test and model data by government DT [developmental testing] and OT [operational testing] organizations” should be required in defense contracts. Despite these recommendations, there has been little progress in this key area.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.