Appendix E
Role of Test and Evaluation in Department of Defense Acquisition

The Department of Defense (DOD) major system acquisition process is described briefly here to explain the role of test and evaluation (T&E). Further information is widely available; a recent Congressional Research Service report provides a good overview of defense acquisition (Chadwick 2007).

The acquisition of a new capability starts with the generation of a mission needs statement, which identifies and supports the need for a new or improved capability. The acquisition process is initiated on approval of the mission needs statement by the secretary of defense (referred to as Milestone A). On approval, a DOD component is assigned to run the program. The Joint Program Executive Office for Chemical and Biological Defense (JPEO CBD), which coordinates the biological defense efforts of the four services, is the DOD component responsible for the T&E of biological standoff detection systems.

The next phase in the acquisition process is a competitive exploration of alternative system concepts, followed by demonstration and validation of the approved concepts. Depending on the outcome of demonstration and validation, a preferred system is recommended. On approval by the secretary of defense (Milestone B), full-scale engineering development of the preferred system starts, and procurement of long-lead production items and limited production for operational test and evaluation (OT&E) are approved. On successful completion of full-scale engineering development, and on the basis of the OT&E results, JPEO CBD may recommend production of a detection system. If production is approved by the secretary of defense (Milestone C), production of the system begins and the services are authorized to deploy it.

Two types of measurements are required to verify and validate the performance of lidar systems.
First, metrics must be provided to establish the quality of the modeling and simulation products used to relate the performance of the lidar during T&E to its performance in later operation. These metrics identify the underlying assumptions that produce a positive detection event from a given type of data, and the confidence level and variability in that decision-making process. Second, metrics must be provided to yield the functional data that go into a model to produce a positive call. These measurements should identify the variation in each element of the data and explain how the uncertainties are compounded. Often, they are measures of performance, of effectiveness, and of the merit of the system under test (for more information, see http://vva.dmso.mil/Special_topics/Measures/default.htm).

Measures of merit (MOMs) relate the effects of a concept or system to the mission that the concept or system supports. MOMs measure concept or system capabilities in terms of the effects of those capabilities on the overall mission of which the concept or system is a part. They cover mission attributes that define the overall objectives of the simulation. For example, an attribute of a standoff detector is usability; measures of detector usability might include weight, power use, mean time between failures, and difficulty in reading data.

Measures of effectiveness (MOEs) assess a system's effectiveness in the accomplishment of a task. MOEs measure capabilities in terms of task accomplishment or system attributes. Tested capabilities should be related directly to operational capabilities in terms of engagement or battle outcome, and MOE evaluation criteria (acceptability criteria) should be quantitative if possible. For example, measures of standoff detection include accuracy, false-alarm rate, response time, reliability, detection range, and discrimination between target and threat.

Measures of performance (MOPs) gauge system or component capabilities or characteristics. MOPs are quantitative or qualitative measures of simulation capabilities and characteristics, based on capabilities and characteristics that are defined by the requirements of the intended application or that meet user-defined system performance requirements. Quantitative MOPs are used when it is difficult to assess an MOE directly or when quantitative criteria need to be established; they can frequently be related to a numerical scale. A quantitative MOP for a standoff detector would be how many simultaneous plumes can be detected, tracked, and analyzed. Qualitative MOPs are categorical measures that refer to the presence or absence of specified characteristics and are generally addressed with subjective measurement techniques; a qualitative MOP would be how much more rapidly a standoff detector allows a battlefield commander to decide to change a protective posture.

Associated with each measure is a criterion that shows how well the measure must be addressed by the simulation if it is to be acceptable for the intended use.
Those criteria are typically called acceptability criteria because they define a minimal level of performance, degree of effectiveness, level of success, or the like that the simulation needs to achieve to be acceptable to the user.
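To make the relationship between quantitative MOEs and acceptability criteria concrete, the sketch below evaluates a notional set of standoff-detector measures against pass/fail thresholds. This is a minimal illustration only: the measure names, observed values, and thresholds are hypothetical and are not drawn from any actual JPEO CBD requirement or test result.

```python
# Hypothetical sketch: checking quantitative MOEs for a standoff
# detector against acceptability criteria. All numbers are notional.

from dataclasses import dataclass


@dataclass
class Measure:
    name: str
    value: float           # value observed during OT&E
    threshold: float       # acceptability criterion
    higher_is_better: bool # direction in which the measure improves

    def acceptable(self) -> bool:
        # A measure passes when it meets or beats its criterion.
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold


# Entirely notional OT&E results.
moes = [
    Measure("probability of detection", 0.93, 0.90, True),
    Measure("false-alarm rate (per 24 h)", 1.2, 2.0, False),
    Measure("response time (s)", 45.0, 60.0, False),
    Measure("detection range (km)", 4.8, 5.0, True),
]

for m in moes:
    status = "meets" if m.acceptable() else "fails"
    print(f"{m.name}: {m.value} ({status} criterion {m.threshold})")

# A production recommendation might require every MOE to meet its
# acceptability criterion.
all_met = all(m.acceptable() for m in moes)
print("All acceptability criteria met:", all_met)
```

In this notional run the detector misses its detection-range criterion, so the overall check fails even though the other measures pass; this mirrors the way acceptability criteria define a minimal level of effectiveness that the system must achieve on every measure.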