While there are obvious and important differences between the private sector and the Department of Defense, some instructive comparisons emerged from the workshop.

One theme was the value of building a more neutral and cooperative decision-making environment. In the DoD setting, an important distinction emerged between the sharing of information to produce more informed decisions and the voluntary cooperation of military services engaged in a zero-sum game—that is, competing for identical missions during a period of declining budgets. Workshop participants believe that better archiving and sharing of information—for example, by constructing a component reliability database—could be achieved within the current constraints of an inherently adversarial system.

Several participants alluded to incentives for groups within DoD to play an advocacy role in the weapon acquisition process. For example, Meth observed that current reliability test planning and analysis approaches are influenced mostly by programmatic considerations. He believes there is a tendency to put the best face forward in presenting test results. Seglie also worried that models can be used for public relations purposes in making sales pitches. The General Accounting Office (GAO) has conducted numerous investigations of weapon system acquisitions, some of which similarly document the presence of an advocacy environment. The criticisms acknowledged by Duncan in his opening remarks at the workshop, and listed earlier, include overly optimistic evaluations of test results, inaccurate reports to Congress, and unrealistic testing. Seglie also pointed out that measures of reliability, such as mean time to failure, often differ disturbingly, by a factor of 2 or 3, between developmental and operational testing.

The costs and inefficiencies associated with advocacy in a manufacturing context also appear in defense analysis and testing. The program manager of a weapon system under development may not seriously contemplate the possibility that the system might be canceled on the basis of operational testing. This orientation is understandable in view of the large expenditures already sunk into the development of an expensive weapon system and the vested interest in its production. There may be an expectation that problems uncovered in testing can be corrected later, so that rather than a pass/fail testing regimen, DoD employs a test-fix-test-again cycle that is not, but could be, reflected in the statistical methods used in analysis (one possibility is sketched below). Seglie (Appendix B) observed that neither the program manager nor the test manager has an incentive to delay tests. But premature testing, operational testing in particular, may actually consume more resources in (typically) destructive tests than would be needed if operational testing were carried out somewhat later on a better-developed system. Samaniego (1993) expressed this theme more generally when he stressed that it is far
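To illustrate how a test-fix-test-again cycle could be reflected in the statistical analysis, the following minimal sketch fits a Crow-AMSAA (power-law NHPP) reliability growth model to a set of failure times. Neither the model choice nor the data comes from the workshop; the failure times, total test time, and function name are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch: tracking reliability growth across a test-fix-test cycle
# with the Crow-AMSAA (power-law NHPP) model. Hypothetical data; the
# workshop summary does not prescribe this (or any particular) model.
import math

def crow_amsaa_fit(failure_times, total_test_time):
    """Maximum-likelihood estimates for a time-truncated Crow-AMSAA test.

    failure_times   -- cumulative test times at which failures occurred
    total_test_time -- total accumulated test time T (>= last failure time)
    Returns (beta, lam, demonstrated_mtbf_at_T).
    """
    n = len(failure_times)
    # beta < 1 indicates reliability growth (declining failure intensity).
    beta = n / sum(math.log(total_test_time / t) for t in failure_times)
    lam = n / total_test_time ** beta
    # Instantaneous failure intensity at T and the demonstrated MTBF.
    intensity = lam * beta * total_test_time ** (beta - 1.0)
    return beta, lam, 1.0 / intensity

# Hypothetical failure times (hours) from a development test program.
times = [12.0, 35.0, 80.0, 160.0, 310.0, 640.0, 1100.0]
beta, lam, mtbf = crow_amsaa_fit(times, total_test_time=1500.0)
print(f"beta = {beta:.2f}, demonstrated MTBF ~ {mtbf:.0f} hours")
```

In such an analysis, an estimated beta below 1 would indicate that the fix phases are reducing the failure intensity, so the demonstrated (instantaneous) mean time between failures at the end of the test exceeds the naive cumulative estimate, which is one way a statistical method can credit the test-fix-test process rather than treat each test as a simple pass/fail event.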
