gathering arm of a larger system that conceives of, develops, produces, and maintains military equipment in the most cost-efficient manner possible. Under the second description, Vardeman noted that the distinction between the developmental and operational testing offices would disappear.

Vardeman also suggested that an information-gathering office should consider methods for developing “soft edge” measures of military value or utility to be associated with various combinations of levels of equipment performance, deployment, and cost. This idea contrasts with the approach of an end-of-the-line inspection station that thinks in terms of critical thresholds and sharp boundaries between acceptable and unacceptable equipment capabilities. Vardeman also suggested that such an office should treat the various operational testing and evaluation strategies for all acquisitions simultaneously and in terms of their likely joint impact on the overall expected military value.
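The contrast between a sharp acceptability threshold and a "soft edge" measure of military value can be sketched numerically. The following is an illustrative example, not a detail from the workshop: a hard threshold assigns all-or-nothing value at a cutoff, while a hypothetical logistic curve lets value rise smoothly through the boundary region. The function names and parameter values are assumptions chosen for illustration.

```python
import math

def hard_threshold_utility(performance, threshold=0.7):
    """Sharp boundary: equipment is either acceptable or it is not."""
    return 1.0 if performance >= threshold else 0.0

def soft_edge_utility(performance, midpoint=0.7, steepness=10.0):
    """Logistic curve: value rises smoothly through the boundary region."""
    return 1.0 / (1.0 + math.exp(-steepness * (performance - midpoint)))

# Near the boundary, the two views diverge sharply: a small change in
# measured performance flips the hard-threshold verdict entirely, while
# the soft-edge measure changes only gradually.
for p in (0.65, 0.70, 0.75):
    print(f"performance={p:.2f}  "
          f"hard={hard_threshold_utility(p):.2f}  "
          f"soft={soft_edge_utility(p):.2f}")
```

Under the soft-edge view, marginal equipment retains partial value, which is what permits trading performance against deployment and cost rather than applying a pass/fail gate.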

Other workshop participants similarly questioned the realism and desirability of rigid specifications. Seglie noted that original requirements typically change and therefore should not be regarded as absolute. He suggested that modeling and simulation could be used to characterize the military utility of a given system as a function of its test measure (a measure of effectiveness or performance). Francisco Samaniego noted the relevance of statistical process control in addressing a wide range of quality issues. Both he and Donald Gaver commented on how little comparative experimentation is carried out under simultaneous conditions, and Samaniego called for expanded efforts in prototype construction.
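The statistical-process-control idea Samaniego raised can be sketched with a textbook Shewhart-chart calculation (a standard technique, not a detail from the workshop): baseline test measurements establish control limits, and a new measurement is flagged when it falls outside them. The data values and function names here are illustrative assumptions.

```python
import statistics

def control_limits(samples, k=3.0):
    """Shewhart-style limits: sample mean plus/minus k standard deviations."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - k * sd, mean + k * sd

def out_of_control(samples, new_value, k=3.0):
    """Flag a new test measurement that falls outside the control limits."""
    lo, hi = control_limits(samples, k)
    return not (lo <= new_value <= hi)

# Hypothetical baseline measurements from repeated tests of one system.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9]
print(out_of_control(baseline, 10.05))  # within limits: False
print(out_of_control(baseline, 12.0))   # far outside limits: True
```

The appeal of this framing is that it treats test results as an ongoing stream to be monitored, rather than as a one-time pass/fail inspection.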



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.