Data Storage and Use

Better use of data within DoD could improve the quality of decisions regarding weapon acquisition; since the workshop, DoD has drafted a directive on collection of data for weapon system development. Presentations and subsequent discussions at the workshop suggested that information from field and simulated testing (including associated costs) could be stored in more accessible records and shared more frequently across organizational lines. The creation of a new DoD statistical agency or less ambitious institutional reforms might facilitate efforts at quality improvement.

Hodges recommended that DoD establish and maintain strong links between data archiving and management operations and the groups engaged in statistical analysis of these data. With such organizational ties, statistical approaches might provide, as David Chu has suggested, leverage in determining what data to collect and how much testing is required. Hodges and others urged that DoD be open to possible new institutional arrangements to achieve its objectives.

Several participants pointed out that DoD testing data are expensive and hard to gather; they should therefore be valued highly. Even more scarce, as Chu noted in his remarks, are combat action data. Yet DoD agencies do not systematically build or use bodies of data on DoD systems. Hodges expressed the belief that, although some individuals value data and maintain archives, the importance of systematic accumulation and active use of data is not widely appreciated in DoD. He cited experiences involving missing documentation of helicopter operational tests and unused data
about the accuracy of artillery forward observers. Also, there appear to be only limited attempts to gain information in a systematic way from training exercises for the purpose of improving combat analyses. Hodges, for example, has concluded that the Army has been unwilling to commit analytically trained personnel on the ground at the National Training Center and other facilities commensurate with those facilities' importance as sources of data. Apparently, field test data are often put away and forgotten once the regulatory requirement is satisfied. Seglie described the prevailing Pentagon attitude as a belief that testing handicaps the forces by consuming money and equipment that could be directed elsewhere.

Numerous workshop participants called for linking data from operational and developmental testing and for drawing on experience with similar systems; both activities implicitly require good data storage and accessibility. Gaver noted the need to work with historical data on various categories of equipment. Both Gaver and Fries noted recent movement in this direction; Fries referred to the Army's current work to establish a master test and evaluation database and also called for comprehensive data collection in operational testing, with integrity checks to ensure data quality. Larry Crow also cited the need for a sound data collection system to assess the reliability growth of defense systems.

Regarding the cost and operational effectiveness analyses (COEAs) performed prior to concept demonstration approval (milestone 1) and development approval (milestone 2), Lese stressed the importance of developing validated databases. He also called for technical and operational corroboration of data through engineering assessments and/or performance tests.
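Fries's call for integrity checks to ensure data quality can be made concrete with a small sketch. The record layout and field names below are hypothetical; a real master test and evaluation database would impose its own schema.

```python
from datetime import date

# Hypothetical field layout for one operational-test record; a real
# master test and evaluation database would define its own schema.
REQUIRED_FIELDS = {"system_id", "test_date", "rounds_fired", "hits"}

def integrity_errors(record):
    """Return a list of integrity problems found in one test record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors  # value checks need a complete record
    if not isinstance(record["test_date"], date):
        errors.append("test_date is not a date")
    if record["rounds_fired"] < 0:
        errors.append("rounds_fired is negative")
    elif not 0 <= record["hits"] <= record["rounds_fired"]:
        errors.append("hits outside [0, rounds_fired]")
    return errors

# A record with an impossible hit count is flagged rather than archived.
bad = {"system_id": "SYS-1", "test_date": date(1992, 6, 1),
       "rounds_fired": 10, "hits": 12}
print(integrity_errors(bad))  # ['hits outside [0, rounds_fired]']
```

Checks like these are cheapest to run at ingest time, while the original test team can still resolve discrepancies.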
Lese also argued that, because of the attention given to cost estimates in COEAs, these estimates should be validated and should include uncertainty and sensitivity analysis. Others touched on the need for more careful collection and storage of cost data in other areas; Gaver, for example, called for a more explicit accounting of costs in reliability testing.

In their background paper, Peggy Mion and John Gehrig (Appendix B) identified seven problems associated with understanding and determining the cost of testing for major weapons programs: (1) the difficulty of resurrecting financial records over the long acquisition period (15 to 20 years); (2) the difficulty of determining boundaries between related programs; (3) accounting for development testing in view of contractor sensitivities about providing this information; (4) differentiating testing costs from the costs of tactics and doctrine development and training; (5) accounting for the institutional costs of test range use; (6) accounting for the use of resources in the program manager's offices; and (7) accounting for the cost of production testing. Mion and Gehrig concluded that, although they had achieved some success in determining test costs for the Army tactical missile
system, there are major difficulties in obtaining definitive test cost data for any major acquisition program. These difficulties arise because relevant cost information does not reside in a single repository but is divided between project management offices and test and evaluation agencies. The authors also cite variations in cost accounting across test and evaluation agencies and developmental testing organizations.

Hodges identified two institutional problems contributing to poor data utilization within DoD. First, no one has responsibility for accumulating test data in one place and ensuring its proper use. Second, competing objectives among the chief players (defense contractors, the services, and the Office of the Secretary of Defense) complicate the collection and interchange of data. A further, technical problem is that some of the needed statistical methods, such as Bayesian methods for incorporating prior information, are not trivial, which complicates their routine implementation. Hodges suggested, therefore, that DoD needs to develop a collection of methods for sensibly using an accumulated body of data.

Hodges and others believe that DoD should identify an agency to serve as the designated repository of data and the advocate of data collection and use. Such an agency could be a new office, an existing office, or a consortium of existing offices. To function effectively as a data advocate, this agency would need broad access to the data being collected, some role in deciding what data will be collected, and some role in how data are used. In closing remarks at the workshop, Donald Rubin echoed Hodges's call for a data advocate within DoD, and Samaniego emphasized the need for long-term data management.
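Lese's recommendation that COEA cost estimates carry uncertainty and sensitivity analysis can be illustrated with a minimal Monte Carlo sketch. The cost categories, distributions, and dollar figures below are invented for illustration, not drawn from any actual program.

```python
import random

# Hypothetical cost model (figures in $ millions), invented for
# illustration; a real COEA would draw distributions from validated data.
def total_cost(rng, testing_scale=1.0):
    development = rng.lognormvariate(4.0, 0.3)         # right-skewed R&D cost
    testing = testing_scale * rng.uniform(10.0, 30.0)  # test program cost
    production = rng.normalvariate(200.0, 25.0)        # production run cost
    return development + testing + production

def percentile(draws, q):
    """q-quantile by nearest rank on a sorted copy of the draws."""
    draws = sorted(draws)
    return draws[int(q * (len(draws) - 1))]

rng = random.Random(42)
base = [total_cost(rng) for _ in range(20_000)]
print(f"median {percentile(base, 0.5):.0f}M, 90th pct {percentile(base, 0.9):.0f}M")

# One-at-a-time sensitivity: double the testing cost with the same random
# seed and see how far the median estimate moves.
rng = random.Random(42)
doubled = [total_cost(rng, testing_scale=2.0) for _ in range(20_000)]
shift = percentile(doubled, 0.5) - percentile(base, 0.5)
print(f"doubling testing cost shifts the median by about {shift:.0f}M")
```

Reporting a percentile spread instead of a point estimate, and showing which inputs move it most, is the kind of validation step Lese described.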
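The Bayesian methods Hodges mentioned for incorporating prior information can be sketched in their simplest conjugate form, pooling archived developmental test results with a small operational test. The trial counts below are invented for illustration.

```python
# Conjugate (beta-binomial) pooling of prior test data with new results;
# all trial counts here are invented for illustration.
def posterior_reliability(prior_successes, prior_failures,
                          new_successes, new_failures):
    """Posterior mean success probability, starting from a flat Beta(1, 1)."""
    a = 1 + prior_successes + new_successes
    b = 1 + prior_failures + new_failures
    return a / (a + b)

# Developmental testing alone: 45 successes in 50 trials.
dev_only = posterior_reliability(45, 5, 0, 0)
# Pooled with a small operational test: 8 successes in 10 trials.
pooled = posterior_reliability(45, 5, 8, 2)
print(f"dev only: {dev_only:.3f}, pooled: {pooled:.3f}")  # 0.885 vs 0.871
```

The prior here does the work of the accumulated archive: ten operational trials alone would support only a rough estimate, but combined with the archived developmental data they sharpen it, which is one way statistical methods can reduce how much new testing is required.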