about the accuracy of artillery forward observers. In addition, there appear to have been only limited attempts to gather information systematically from training exercises for the purpose of improving combat analyses. Hodges, for example, has concluded that the Army has been unwilling to commit analytically trained personnel on the ground at the National Training Center and other facilities commensurate with those facilities' importance as sources of data. Apparently, field test data are often put away and forgotten once the regulatory requirement is satisfied. Seglie characterized the prevailing Pentagon attitude toward data as a belief that testing handicaps the forces by consuming money and equipment that could be directed elsewhere.

Numerous workshop participants called for linking data from operational and developmental testing and for drawing on experience with similar systems; both activities implicitly require good data storage and accessibility. Gaver noted the need to work with historical data on various categories of equipment. Both Gaver and Fries noted recent movement in this direction; Fries referred to the Army's current work to establish a master test and evaluation data base and also called for comprehensive data collection in operational testing, with integrity checks to ensure data quality. Larry Crow likewise cited the need for a sound data collection system to assess the reliability growth of defense systems.
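Crow's own power-law model of reliability growth (often called the Crow-AMSAA model) suggests what such a data collection system must support. The following is a minimal sketch, assuming a time-truncated development test; the failure times and test length are hypothetical illustrations, not data from any actual program:

    import math

    def crow_amsaa_fit(failure_times, total_time):
        """Fit the power-law NHPP (Crow-AMSAA) reliability growth model
        to cumulative failure times from a time-truncated test."""
        n = len(failure_times)
        # MLE of the shape parameter beta; beta < 1 indicates growth
        # (failure intensity declining as testing accumulates).
        beta = n / sum(math.log(total_time / t) for t in failure_times)
        # MLE of the scale parameter lambda.
        lam = n / total_time ** beta
        # Instantaneous failure intensity at the end of the test; its
        # reciprocal is the current MTBF estimate.
        intensity = lam * beta * total_time ** (beta - 1)
        return beta, lam, 1.0 / intensity

    # Hypothetical failure times (hours) over a 1,000-hour test.
    times = [33, 76, 145, 347, 555, 811]
    beta, lam, mtbf = crow_amsaa_fit(times, 1000.0)
    print(f"beta = {beta:.2f}, current MTBF = {mtbf:.0f} hours")

The point is not this particular model but that any such estimate is only as good as the failure data behind it, which is precisely why a sound collection system matters.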

Regarding the cost and operational effectiveness analyses (COEAs) that are performed prior to concept demonstration approval (milestone 1) and development approval (milestone 2), Lese stressed the importance of developing validated data bases. He called for technical and operational corroboration of data by engineering assessments and/or performance tests. He argued further that, because of the attention given to cost estimates in COEAs, these estimates should be validated and should include uncertainty and sensitivity analyses.
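To illustrate what uncertainty and sensitivity analysis of a cost estimate involves, the following Monte Carlo sketch propagates low/most-likely/high estimates for a few cost elements through to a distribution of total cost. All element names, distributions, and dollar figures are hypothetical, not drawn from any actual COEA:

    import random
    import statistics

    random.seed(1)  # reproducible illustration

    # Hypothetical cost elements ($ millions) as triangular
    # low / most-likely / high estimates -- placeholders only.
    ELEMENTS = {
        "development":        (120, 150, 210),
        "procurement":        (400, 480, 640),
        "operations_support": (250, 300, 450),
    }

    def draw_total(elements):
        """One Monte Carlo draw of total program cost."""
        return sum(random.triangular(lo, hi, mode)
                   for lo, mode, hi in elements.values())

    def summarize(elements, n=10_000):
        """Mean and 10th/90th percentiles of simulated total cost."""
        samples = [draw_total(elements) for _ in range(n)]
        deciles = statistics.quantiles(samples, n=10)
        return statistics.mean(samples), deciles[0], deciles[-1]

    mean, p10, p90 = summarize(ELEMENTS)
    print(f"mean total: {mean:.0f}M; 80% interval [{p10:.0f}M, {p90:.0f}M]")

    # One-at-a-time sensitivity: raise each element's high estimate
    # 20 percent and observe the shift in the mean total.
    for name, (lo, mode, hi) in ELEMENTS.items():
        bumped = dict(ELEMENTS, **{name: (lo, mode, hi * 1.2)})
        print(f"+20% high on {name}: mean -> {summarize(bumped)[0]:.0f}M")

Triangular distributions are used here only because they are the simplest way to encode three-point estimates; an actual analysis would choose distributions, and any correlations among elements, on substantive grounds.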

Others touched on the need for more careful collection and storage of cost data in other areas. Gaver called for a more explicit accounting of costs in reliability testing. In their background paper, Peggy Mion and John Gehrig (Appendix B) identified seven problems associated with understanding and determining the cost of testing for major weapons programs: (1) the difficulty of resurrecting financial records over the long acquisition period (15 to 20 years); (2) the difficulty of determining boundaries between related programs; (3) accounting for development testing in view of contractor sensitivities about providing this information; (4) differentiating testing costs from the costs of tactics development, doctrine development, and training; (5) accounting for the institutional costs of test range use; (6) accounting for the use of resources in program managers' offices; and (7) accounting for the cost of production testing.

Mion and Gehrig concluded that, although they had achieved some degree of success in determining test costs for the Army tactical missile


