7 Decision Making and Risk

A potential concern with the deployment of a standoff detector that has been tested only with simulants under realistic field conditions—never with “live” chemical warfare agents (CWAs)—is that this might lower the degree of trust in its operation. A lack of confidence in the detector’s performance, in turn, could reduce its value as a risk management tool for dealing with chemical threats to military operations. On the other hand, successful testing of a standoff detector with actual agents under a limited set of “open-air” or field conditions would not provide complete assurance that the detector would meet performance criteria under the myriad weaponization and delivery modes, as well as background conditions, found on the battlefield. In fact, the committee is concerned that successful performance of a standoff detector in limited outdoor tests with CWAs might lead to unwarranted confidence in detector operation under variable field conditions and with various interferents.

Part of the difficulty in the decision-making process for detector testing and validation stems from perceived differences between simulants and CWAs in terms of their inherent detectability in the atmosphere. Although simulants and agents differ in physicochemical properties and toxicity, from a detection standpoint what matters is that they have similar, measurable spectral signatures in the atmosphere. The principal detection challenge—for simulant or agent—is whether those spectral features can be distinguished unequivocally from the background radiation in the ambient environment and from the confounding spectral features of chemical interferents in the atmosphere (e.g., naturally occurring compounds as well as those associated with battlefield environments).

The committee has devised rigorous protocols for testing and evaluation of both active and passive standoff CWA detectors.
These protocols rely on a series of experiments and tests using simulants (in both chambers and open air) and live agents (only in chambers). These test protocols provide the data necessary to develop algorithms that will be able to detect CWA compounds in the complex spectra acquired by a given instrument in the field. Thus, the protocols explicitly address agent-specific spectra as well as the fundamental challenge encountered in instrumental detection of any airborne compound—the accurate identification of a given signal from background spectra. If a given standoff detector can achieve the demanding level of performance required by the applicable test
protocol, there is a high degree of confidence that it would also detect CWAs under actual field conditions. Consequently, there is no compelling or overriding requirement for live-agent testing in the field; such testing would provide minimal additional information for demonstrating detector effectiveness.

Another consideration affecting testing and validation is the anticipated role of the standoff detector in military operations. According to multiservice doctrine, risk management procedures should be used to guide the planning and execution of joint military operations.21 In this context, the operational role and performance of a standoff agent detector must be evaluated in terms of how it is used to manage risks to the successful completion of a given mission. The basic components of the risk management process are threat identification, threat/risk assessment, development and implementation of appropriate controls, and evaluation/revision of the risk management strategy. Implementing this methodology for a mission facing chemical warfare threats might include atmospheric dispersion modeling of possible CWA attacks, prediction of potential casualties, and development of risk management controls (e.g., modifying troop deployment, placement of detectors) to enhance the likelihood of mission success. In actual field implementation, additional information would come into play, such as measured meteorological conditions, intelligence on threats, and enemy location and activity. Moreover, detector performance can be addressed explicitly in the risk assessment in terms of the likelihood (probability) that the detector would fail to detect the presence of a CWA (i.e., a false negative) and the consequences of that failure for accomplishment of the mission.
We note that standoff detectors are not required to be 100% reliable across all combat conditions and that, consequently, the level of trust placed in their operation must always be tempered by their potential limitations. Similarly, validation and field testing will never produce an infallible detector, but as a secondary control the military’s risk management process should be sufficient to compensate for variable detector performance.

The ultimate proof of a standoff detector’s worth will be its performance under field conditions, either on the battlefield or in antiterrorism applications. A commander in a threat situation must make decisions about the use of protective equipment, the way forces are deployed, and the actions and precautions troops should take when faced with the possibility of a CWA attack. Protective measures are costly and hamper the fighting effectiveness of troops or the ability of a community to conduct its normal activities. In most settings a CWA attack is quite unlikely, so precautionary measures impose costs that are unlikely to provide benefits. On the other hand, if an attack should occur, the costs to unprotected troops or a civilian population could be major, including severe casualties and risks to the mission. The value of a detector lies in its ability to provide information about imminent exposures to CWAs, thereby allowing the commander to take avoidance or protective measures only when there is a high likelihood that they are actually necessary. The way to measure that value is to analyze the improvement in the decisions made when the detector provides information: the degree to which attacks on unprotected troops or civilian populations can be avoided, as well as the degree to which unproductive protective measures can be avoided when there is no attack.
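The improvement in decisions can be made concrete with a simple expected-cost comparison. The sketch below is purely illustrative; every probability and cost in it is an assumption invented for the example, not a figure from this report. It compares the policy "protect only when the detector alarms" against always protecting and never protecting.

```python
# Illustrative expected-cost comparison; every number here is an assumption
# for the sketch, not a figure from the report.

def expected_cost(p_attack, fn_rate, fp_rate, cost_protect, cost_casualties):
    """Expected cost of the policy 'protect only when the detector alarms'."""
    p_no_attack = 1.0 - p_attack
    p_alarm = p_attack * (1.0 - fn_rate) + p_no_attack * fp_rate
    # Any alarm (true or false positive) incurs the cost of protective measures.
    cost_alarms = p_alarm * cost_protect
    # A missed attack (false negative) leaves troops unprotected.
    cost_misses = p_attack * fn_rate * cost_casualties
    return cost_alarms + cost_misses

p_attack = 0.01          # prior probability of a CWA attack (assumed)
fn_rate = 0.05           # false negative rate (assumed)
fp_rate = 0.05           # false positive rate (assumed)
cost_protect = 1.0       # relative cost of protective measures (assumed)
cost_casualties = 100.0  # relative cost of an unprotected attack (assumed)

rely_on_detector = expected_cost(p_attack, fn_rate, fp_rate,
                                 cost_protect, cost_casualties)
always_protect = cost_protect                 # pay the protection cost regardless
never_protect = p_attack * cost_casualties    # absorb the full attack risk
```

Under these assumed numbers the detector-guided policy costs roughly a tenth of either blanket policy; the detector's value is precisely this gap, and it shrinks as the false positive and false negative rates grow.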
Any detector, no matter how well designed, will have some rate of failing to warn of actual attacks (false negatives) and some rate of sounding alarms when there is no attack (false positives). The false negative and false positive rates are key to determining the optimal decisions about adopting protective measures in a threat situation. The principal value of testing a detector beyond its design phase is to establish these rates. Variations in the setting, background, delivery mode, and CWA concentration pose key challenges to correct detector performance, and they will markedly influence the chances of false positives and false negatives. A valid testing protocol, therefore, must address the span of such conditions that deployed units are expected to encounter. The value of open-air testing of detectors with CWAs, if any, would lie in the degree to which false positive and false negative rates are better characterized and understood than could be achieved by testing with simulants alone.

Arguing for such live-agent testing is the fact that the operating characteristics of a detector toward real CWAs in the open air are less certain when testing is done only with simulants than when it also includes live CWAs. With simulants, the characterization of detector performance is always an inference, never a direct demonstration, and the possibility exists that some unanticipated difference between simulants and CWAs might invalidate the inferences drawn from simulant-only testing. This could make the detector’s apparent false reporting rates overly optimistic—most seriously, the detector might fail more often than estimated to sound alarms during real CW attacks in the field. Arguing against live-agent testing is the fact that any such testing must necessarily be limited to a fairly narrow range of conditions and CWAs, so it may be hard to characterize how backgrounds, temperature, humidity, and other factors affect the detector’s ability to sound alarms correctly. If the detector “works” (i.e., sounds an alarm when live CWA is released) under fairly favorable conditions, this may provide a false sense of security about its abilities when conditions are less favorable or significantly different from the setting in which the tests were carried out.

21 Risk Management. 2001. FM 3-100.12. Air Land Sea Application Center, Langley Air Force Base, VA.
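How the two error rates determine whether an alarm should be acted on can be seen with Bayes' rule. In the sketch below, the prior probability of attack and both error rates are assumptions chosen only for illustration.

```python
# Posterior probability of attack given the detector's output (Bayes' rule).
# The prior and the error rates are assumed for illustration only.

def posterior_attack(prior, fn_rate, fp_rate, alarm):
    """P(attack | detector output), given false negative/positive rates."""
    if alarm:
        num = prior * (1.0 - fn_rate)         # P(alarm | attack) * P(attack)
        den = num + (1.0 - prior) * fp_rate   # + P(alarm | no attack) * P(no attack)
    else:
        num = prior * fn_rate                 # P(no alarm | attack) * P(attack)
        den = num + (1.0 - prior) * (1.0 - fp_rate)
    return num / den

# With a 1% prior and 5% error rates of each kind (all assumed):
p_given_alarm = posterior_attack(0.01, 0.05, 0.05, alarm=True)
p_given_quiet = posterior_attack(0.01, 0.05, 0.05, alarm=False)
```

Even a rare-attack prior is raised roughly sixteen-fold by an alarm under these assumptions, while a quiet detector drives the probability well below the prior; this asymmetry is what makes selective protection pay off, and it degrades directly as the error rates worsen.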
The appropriate way to judge these issues is to carry out a value-of-information analysis. This decision-analytic technique aims to measure explicitly the improvement in the field commander’s decisions (and the consequent reduction of losses from inappropriate decisions) if additional open-air testing of the detector with live agent were done versus if it were not. A more detailed discussion of risk assessment and value-of-information analysis, its application to the testing of standoff instrumentation, and the issues a decision maker needs to consider in acting on field information from standoff detectors is given in Appendix C. The committee concluded that risk management science must be an integral part of utilizing the “hard” technical output from any monitoring instrumentation; one of the committee’s recommendations underscores the importance of this issue.
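A value-of-information calculation of the kind described above can be sketched numerically. Everything below, the two hypotheses about the detector's true false negative rate, their prior weights, and all costs, is an assumption invented for the sketch; it is not the committee's analysis.

```python
# Sketch of a value-of-information calculation for live-agent testing.
# All hypotheses, priors, and costs are assumptions for illustration.

# Two hypotheses about the detector's true false negative rate in the field:
scenarios = [
    {"p": 0.8, "fn_rate": 0.05},  # detector performs as designed
    {"p": 0.2, "fn_rate": 0.50},  # an unanticipated factor degrades detection
]
p_attack = 0.01          # prior probability of a CWA attack (assumed)
cost_protect = 1.0       # cost of blanket protective measures (assumed)
cost_casualties = 300.0  # cost of an unprotected attack (assumed)

def rely_cost(fn_rate):
    # Expected cost of relying on the detector: missed attacks cause casualties.
    # (False positive costs are omitted to keep the sketch minimal.)
    return p_attack * fn_rate * cost_casualties

# Without further testing, one policy must serve the averaged belief.
cost_rely = sum(s["p"] * rely_cost(s["fn_rate"]) for s in scenarios)
cost_without_test = min(cost_rely, cost_protect)

# A (hypothetically) perfect live-agent test reveals which scenario holds,
# letting the commander pick the better policy in each one.
cost_with_test = sum(s["p"] * min(rely_cost(s["fn_rate"]), cost_protect)
                     for s in scenarios)

value_of_testing = cost_without_test - cost_with_test
```

Testing is worthwhile only when its expected value (here 0.10 in the same relative cost units) exceeds the cost and risk of conducting the test; if learning that the detector is degraded would not change the chosen policy, the value of the test is zero, which mirrors the committee's caution about what limited open-air trials can actually buy.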