Laboratories must correctly identify all of the culture-negative fecal samples, all of the culture-positive TNT (too numerous to count) samples, and 70 percent of the remaining samples identified as positive by 70 percent of the participating laboratories. Laboratories must also correctly identify 90 percent of the serology samples. In the first year, only 5 of 35 laboratories met the fecal-culture approval standards, and none of 16 met those for serology. By the 2000 round of testing, these figures had risen to 35 of 41 for fecal culture, 2 of 6 for fecal PCR, and 61 of 63 for serology. Veterinary diagnostic laboratories have an incentive to participate in this program: the current U.S. voluntary cattle herd status program recommends that all tests be performed in an approved laboratory (USAHA, 1998). No such check-testing program exists for veterinary diagnostic laboratories that test JD specimens originating from other ruminant livestock species.
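The approval criteria above amount to a simple pass/fail rule over a panel of check samples. A minimal sketch of that rule follows; the data structure and field names are hypothetical illustrations, while the thresholds (100 percent on culture-negative and TNT samples, 70 percent on 70-percent-consensus positives, 90 percent on serology) come from the text.

```python
def meets_fecal_culture_standard(samples):
    """Return True if a lab's results satisfy the fecal-culture approval rule.

    `samples` is a list of dicts with hypothetical keys:
      'truth'              -- 'negative', 'tnt_positive', or 'positive'
      'consensus_fraction' -- share of participating labs calling it positive
      'lab_correct'        -- whether this lab identified the sample correctly
    """
    negatives = [s for s in samples if s["truth"] == "negative"]
    tnt_pos   = [s for s in samples if s["truth"] == "tnt_positive"]
    # "Remaining" positives count only if >= 70% of labs called them positive.
    consensus = [s for s in samples
                 if s["truth"] == "positive" and s["consensus_fraction"] >= 0.70]

    if not all(s["lab_correct"] for s in negatives):  # all negatives required
        return False
    if not all(s["lab_correct"] for s in tnt_pos):    # all TNT positives required
        return False
    if consensus:                                     # 70% of consensus positives
        hit_rate = sum(s["lab_correct"] for s in consensus) / len(consensus)
        if hit_rate < 0.70:
            return False
    return True


def meets_serology_standard(n_correct, n_total):
    """90 percent of serology samples must be correctly identified."""
    return n_correct / n_total >= 0.90
```

Note the asymmetry the rule encodes: a single missed negative or TNT sample fails the laboratory outright, whereas a minority of the consensus-positive samples may be missed without penalty.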
In contrast to the proficiency assessment procedures currently in use, and those recommended by OIE (OIE, 2000), considerable research has shown that assessment programs should use a process that is as blind as possible: the participating laboratory should not be able to distinguish check samples from routine samples. Open schemes, such as those currently in use, overestimate day-to-day laboratory proficiency, presumably because more care is taken with the check samples than with regular diagnostic samples. Rather than estimating routine laboratory proficiency, open schemes indicate the optimal proficiency of participating laboratories (Black and Dorse, 1976; Reilly et al., 1999). Libeer (2001) reported that assurance plans using identifiable samples overestimated clinical chemistry laboratory performance by 25 percent relative to blind samples. In a Centers for Disease Control and Prevention study (Hansen et al., 1985) of samples submitted for drug testing, the average error rate across six drug classes was 49 percent higher for blind positive samples than for mailed positive samples. In a study of mycology samples, the error rate on covert samples was as much as 25 percent higher than that on overt samples (Reilly et al., 1999).

This bias occurs even when both the laboratory director and the individuals who perform the testing are required by law to sign statements that the proficiency program samples are handled in the same manner as regular submissions. In a study of 42 laboratories in proficiency testing programs requiring such a statement, 18 percent of results from blind proficiency testing samples were unacceptable, compared with only 4 percent of results from open tests. In this study, 60 percent of the laboratories performed significantly better on the open samples than on the blind samples (Parsons et al., 2001).
Use of the appropriate sample matrix, in addition to blinding, is important: only 47 percent of laboratories properly identified E. coli in a blind urine specimen, whereas 94 percent did so in a lyophilized specimen (Black and Dorse, 1976). The conclusion is that the current USDA laboratory certification system considerably overestimates the actual performance of veterinary diagnostic laboratories on routine submissions, particularly for difficult, labor-intensive procedures, such as fecal culture for Map, or for procedures with many sources of variability, such as the bovine JD ELISA.