Testing and Evaluation of Standoff Chemical Agent Detectors (2003)

Suggested Citation:"7 Decision Making and Risk." National Research Council. 2003. Testing and Evaluation of Standoff Chemical Agent Detectors. Washington, DC: The National Academies Press. doi: 10.17226/10645.

7
Decision Making and Risk

A potential concern with the deployment of a standoff detector that has only been tested with simulants under realistic field conditions—never with “live” chemical warfare agents (CWAs)—is that this might lower the degree of trust in its operation. A lack of confidence in the detector’s performance, in turn, could reduce its value as a risk management tool for dealing with chemical threats to military operations. On the other hand, successful testing of a standoff detector with actual agents under a limited set of “open-air” or field conditions will not provide complete assurance that the detector would meet performance criteria under the myriad of weaponization and delivery modes as well as background conditions found on the battlefield. In fact, the committee is concerned that successful performance of a standoff detector with limited outdoor test sets with CWAs might lead to an unwarranted confidence in detector operation under variable field conditions and with various interferents.

Part of the difficulty regarding the decision-making process for detector testing and validation stems from perceived differences in the simulants and CWAs in terms of their inherent detectability in the atmosphere. Although simulants and agents have different physicochemical properties and toxicity, from a detection standpoint it is important that they have similar spectral signatures in the atmosphere that can be measured. The principal detection challenge—for simulant or for the agent—is whether those spectral features can be distinguished unequivocally from the background radiation in the ambient environment and from the confounding spectral features associated with chemical interferents in the atmosphere (e.g., naturally occurring compounds as well as those associated with battlefield environments). The committee has devised rigorous test protocols for testing and evaluation of both active and passive standoff CWA detectors. These protocols rely on a series of experiments and tests using simulants (in both chambers and open air) and live agents (only in chambers). These test protocols provide the data necessary to develop algorithms that will be able to detect CWA compounds in the complex spectra acquired by a given instrument in the field. Thus, the protocols explicitly address agent-specific spectra as well as the fundamental challenge encountered in instrumental detection of any airborne compound—the accurate identification of a given signal from background spectra. If a given standoff detector can achieve the demanding level of performance required by the applicable test
protocol, there is a high degree of confidence that it would also detect CWAs under actual field conditions. Consequently, there is no compelling or overriding requirement for open-air live-agent tests; such testing would provide minimal additional information for demonstrating detector effectiveness.

Another consideration affecting testing and validation is the anticipated role of the standoff detector in military operations. According to multiservice doctrine, risk management procedures should be utilized to guide the planning and execution of joint military operations.21 Given this context, the operational role and performance of a standoff agent detector must be evaluated in terms of how it is used to manage risks that are detrimental to the successful completion of a given mission. The basic components of the risk management process include threat identification, threat/risk assessment, development and implementation of appropriate controls, and evaluation/revision of the risk management strategy.

Implementing this methodology for a mission that faces chemical warfare threats might include such details as atmospheric dispersion modeling of possible CWA attacks, prediction of potential casualties, and development of controls for risk management (e.g., modifying troop deployment, placement of detectors) to enhance the likelihood of mission success. In actual field implementation, additional information would come into play, such as measured meteorological conditions, intelligence on threats, and enemy location and activity. Moreover, detector performance can be addressed explicitly in the risk assessment process in terms of the likelihood (probability) that a detector would fail to detect the presence of a CWA (i.e., a false negative) and the consequences of that failure for accomplishment of the mission. Standoff detectors are not required to be 100 percent reliable across all combat conditions, and consequently the level of trust placed in their operation must always be tempered by awareness of their potential limitations. Similarly, no amount of validation and field testing will produce an infallible detector, but as a secondary control the military's risk management process should be sufficient to compensate for variable detector performance.

The ultimate proof of a standoff detector's worth will be its performance under field conditions, whether on the battlefield or in antiterrorism applications. A commander in a threat situation must make decisions about the use of protective equipment, the way forces are deployed, and the actions and precautions that troops should take when faced with the possibility of a CWA attack. Protective measures are costly and hamper the fighting effectiveness of troops or the ability of a community to conduct its normal activities. In most settings a CWA attack is quite unlikely, so precautionary measures impose costs that are unlikely to provide benefits. On the other hand, if an attack should occur, the costs to unprotected troops or a civilian population could be major, including severe casualties and risks to the mission. The value of a detector lies in its ability to provide information about imminent exposure to CWAs, thereby allowing the commander to take avoidance or protective measures only when there is a high likelihood that they are actually necessary. That value can be measured by analyzing the improvement in the decisions made when the detector provides information: the degree to which attacks on unprotected troops or civilian populations can be avoided, as well as the degree to which unproductive protective measures can be avoided when there is no attack.

21. Risk Management. 2001. FM 3-100.12. Air Land Sea Application Center, Langley Air Force Base, VA.

Any detector, no matter how well designed, will have some rate of failing to warn of actual attacks (false negatives) and some rate of sounding alarms when there is no attack (false positives). The false negative and false positive rates are key to determining the optimal decisions about adopting protective measures in a threat situation. The principal value of testing a detector beyond its design phase is to establish the false positive and false negative rates. Variations in the setting, background, delivery mode, and CWA concentrations are key challenges to correct detector performance, and these will markedly influence the chances of false positives and false negatives. A valid testing protocol, therefore, must address the span of such conditions that are expected to be encountered by deployed units.
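The interplay between these error rates and the prior likelihood of attack can be made concrete with Bayes' rule. The sketch below is illustrative only and is not drawn from the report; all rates and probabilities are hypothetical.

```python
# Illustrative only: combining a detector's false negative and false positive
# rates with a prior attack probability via Bayes' rule. Numbers are hypothetical.

def posterior_attack_given_alarm(prior, p_false_neg, p_false_pos):
    """P(attack | alarm), given the detector's error rates."""
    p_alarm_given_attack = 1.0 - p_false_neg        # detector sensitivity
    p_alarm = (p_alarm_given_attack * prior         # alarm during an attack
               + p_false_pos * (1.0 - prior))       # false alarm, no attack
    return p_alarm_given_attack * prior / p_alarm

# Rare-attack scenario: even a good detector produces mostly false alarms,
# so a single alarm raises the attack probability from 1% to only about 32%.
p = posterior_attack_given_alarm(prior=0.01, p_false_neg=0.05, p_false_pos=0.02)
print(f"P(attack | alarm) = {p:.3f}")  # -> 0.324
```

The point of the sketch is that when attacks are rare, even a small false positive rate means most alarms are false, which is exactly why the report treats detector output as an input to risk management rather than an automatic trigger for protective action.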

The value of open-air testing of detectors with CWAs, if any, would lie in the degree to which false positive and false negative rates could be characterized and understood better than is possible by testing with simulants. Arguing for such live-agent testing is the fact that one is less sure of the operating characteristics of a detector toward real CWAs in the open air when testing is done only with simulants than when it also includes live CWAs. With simulants the characterization of detector performance is always an inference, never a direct demonstration, and the possibility exists that some unanticipated difference between the simulants and the CWAs might invalidate the inferences drawn from simulant-only testing. The result could be apparent false reporting rates that are overly optimistic, most seriously if the detector would fail more often than estimated to sound alarms in the presence of real CWA attacks in the field. Arguing against live-agent testing is the fact that any such testing must necessarily be limited to a fairly narrow range of conditions and CWAs, so it may be harder to characterize how backgrounds, temperatures, humidities, and other factors affect the detector’s ability to sound correct alarms. If the detector “works” (i.e., sounds an alarm when live CWA is released) under fairly favorable conditions, this may provide a false sense of security about its abilities when conditions are less favorable or significantly different from the setting in which the tests were carried out.

The appropriate way to judge these issues is to carry out a value-of-information analysis. This decision-analytic technique explicitly measures the improvement in the field commander’s decisions (and the consequent reduction in losses from inappropriate decisions) that would result if additional open-air testing of the detector with live agent were done versus if it were not.
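A minimal numerical sketch of such a value-of-information calculation follows. All loss values, the prior attack probability, and the error rates are invented for illustration and are not taken from the report; the calculation compares the commander's best decision without the detector against a policy of protecting only when the detector alarms.

```python
# Hypothetical value-of-information sketch (numbers invented for illustration).
# Losses are in arbitrary units; "protect"/"ignore" are the commander's actions.

def expected_loss_no_detector(prior, loss):
    """Best unconditional action: protect or ignore, whichever costs less in expectation."""
    protect = prior * loss["protect_attack"] + (1 - prior) * loss["protect_none"]
    ignore = prior * loss["ignore_attack"] + (1 - prior) * loss["ignore_none"]
    return min(protect, ignore)

def expected_loss_with_detector(prior, p_fn, p_fp, loss):
    """Policy: protect on alarm, stand down otherwise."""
    tp = prior * (1 - p_fn)          # attack, alarm   -> protected
    fn = prior * p_fn                # attack, silence -> caught unprotected
    fp = (1 - prior) * p_fp          # no attack, alarm   -> needless protection
    tn = (1 - prior) * (1 - p_fp)    # no attack, silence -> normal operations
    return (tp * loss["protect_attack"] + fn * loss["ignore_attack"]
            + fp * loss["protect_none"] + tn * loss["ignore_none"])

loss = {"protect_attack": 10, "ignore_attack": 100,   # casualties dominate
        "protect_none": 5, "ignore_none": 0}          # protection itself is costly
prior, p_fn, p_fp = 0.01, 0.05, 0.02

voi = (expected_loss_no_detector(prior, loss)
       - expected_loss_with_detector(prior, p_fn, p_fp, loss))
print(f"expected value of the detector's information: {voi:.3f}")  # -> 0.756
```

Under these assumed numbers the detector lowers expected losses because it lets the commander avoid both outcomes the chapter describes: unprotected exposure during an attack and unproductive protective measures when there is none. The same arithmetic, run with and without the hypothetical improvement in error-rate estimates from live-agent testing, is what the value-of-information comparison would rest on.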

A more detailed discussion of risk assessment and value-of-information analysis, its application to the testing of standoff instrumentation, and the issues that a decision maker needs to consider before acting on field information from standoff detectors is given in Appendix C. The committee concluded that risk management science must be an integral part of using the “hard” technical output from any monitoring instrumentation. One of the committee’s recommendations underscores the importance of this issue.

