
Appendix B: Performance Metrics for ASPs and PVTs
Pages 58-68



From page 58...
... , conventional measures such as sensitivity and specificity provide useful information, but do not directly assess test performance in actual field operation. The metrics of interest concern the probabilities of making incorrect calls, i.e., the probability that the cargo actually contained dangerous material when the test system allowed it to pass (a false negative call)
From page 59...
... is likewise of interest for purposes of evaluating costs and benefits: Too many false positive calls can also be costly by slowing down commerce, diverting CBP personnel from potential threats as they spend ... [Footnote 48: Some analyses refer to "false discovery rate" and "false non-discovery rate," which are related to (1 – PPV)
From page 60...
... First, our two-way table arises from a designed experiment where values of m0 and m are set by design. Second, our bigger concern lies not with false positive calls but rather with false negative calls; i.e., with the probability that a cargo declared "safe" (negative)
From page 61...
... Suppose we have 24 trucks, into 12 of which we place SNM, leaving only benign material in the remaining 12 trucks. We run all 24 trucks through two test systems and observe the following results:

                        Test System 1              Test System 2
                   Alarm  No Alarm  Total     Alarm  No Alarm  Total
                                    Runs                       Runs
  SNM in cargo       10       2       12        11       1       12
  Non-SNM in cargo    4       8       12         2      10       12
  Total              14      10       24        13      11       24

Sensitivity is the probability that the system alarmed, given the presence of SNM in the cargo: among the 12 trucks that contained SNM, 10 alarmed for test system 1 (estimated sensitivity S1 = 10/12)
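The estimates above can be sketched in a few lines of code; a minimal illustration, using only the counts from the two-way tables (the function names here are our own, not the report's):

```python
# Estimating sensitivity and specificity from the 24-truck experiment
# described above. Counts are taken directly from the two-way tables.

def sensitivity(alarms_with_snm, total_snm_runs):
    """P{alarm | SNM in cargo}, estimated as a simple proportion."""
    return alarms_with_snm / total_snm_runs

def specificity(no_alarms_benign, total_benign_runs):
    """P{no alarm | no SNM in cargo}, estimated as a simple proportion."""
    return no_alarms_benign / total_benign_runs

# Test System 1: 10 of 12 SNM trucks alarmed; 8 of 12 benign trucks did not.
S1 = sensitivity(10, 12)   # ≈ 0.833
T1 = specificity(8, 12)    # ≈ 0.667

# Test System 2: 11 of 12 SNM trucks alarmed; 10 of 12 benign trucks did not.
S2 = sensitivity(11, 12)   # ≈ 0.917
T2 = specificity(10, 12)   # ≈ 0.833

print(round(S1, 3), round(T1, 3), round(S2, 3), round(T2, 3))
```

On these counts, test system 2 dominates test system 1 on both metrics; as the page 63 excerpt notes, real systems often trade one metric against the other.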
From page 62...
... P{Bc|A} = probability that event Bc occurs, given confirmation that event A has occurred (here, P{test system does not alarm | cargo contains SNM} = 1 – S) P{Bc|Ac} = probability that event Bc occurs, given confirmation that event Ac has occurred (here, P{test system does not alarm | cargo contains no SNM} = T)
From page 63...
... In real situations, however, one test system may have a higher test sensitivity but a lower specificity. For example, if T1 = 0.70 and T2 = 0.80 (test system 2 is more likely to remain silent on truly benign cargo than test system 1)
From page 64...
... . Calculations for the probability of a false positive call (FPCP, 1-PPV)
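The false positive call probability (FPCP = 1 – PPV) follows from sensitivity S, specificity T, and the fraction of cargos that truly contain SNM via Bayes' rule. A sketch, assuming the designed-experiment fraction pi = 12/24 from the truck example above (the illustrative values are ours, not the report's tabulated ones):

```python
# Probability of a false positive call, FPCP = 1 - PPV, computed by
# Bayes' rule from sensitivity S, specificity T, and the fraction pi
# of cargos that actually contain SNM.

def ppv(S, T, pi):
    """P{SNM in cargo | alarm} via Bayes' rule."""
    return (S * pi) / (S * pi + (1 - T) * (1 - pi))

def fpcp(S, T, pi):
    """Probability that an alarmed cargo is actually benign."""
    return 1.0 - ppv(S, T, pi)

# Illustration with test system 1's estimates and pi = 12/24 = 0.5.
print(round(fpcp(10/12, 8/12, 0.5), 3))  # → 0.286
```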
From page 65...
... . P{Bc|A} = probability that event Bc occurs, given confirmation that event A has occurred (here, P{test system does not alarm | cargo contains SNM} = 1 – S)
From page 66...
... . Tables of values of the probabilities of both false negative calls and false positive calls were calculated when TA, SA, TP, and SP were set equal to 0.1, 0.2, ..., 0.8, 0.9, 0.95, 0.99; of the 11^4 = 14,641 combinations, only 858 satisfied criteria 4 and 5.
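The grid size is easy to verify: each of the four parameters ranges over 11 values, giving 11^4 combinations. A quick check (the report's criteria 4 and 5 are not reproduced in this excerpt, so only the total count is verified here):

```python
# Enumerate the parameter grid described above: each of TA, SA, TP, SP
# takes one of 11 values, so the full grid has 11**4 combinations.
from itertools import product

grid = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]
combos = list(product(grid, repeat=4))
print(len(combos))  # → 14641
```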
From page 67...
... . In each case, the absolute magnitudes of the false negative call probabilities are quite small, and the ratios of the false positive call probabilities are almost 1.
From page 68...
... 2006. False discovery and false non-discovery rates in single-step multiple testing procedures, Annals of Statistics 34(1)

