conditions. Some testing laboratories originally used matching rules that were based on the average spacing of fragment sizes in each region of the gel, rather than on actual studies of reproducibility. Other laboratories used purely visual matching criteria. Both approaches are inadequate. Each testing laboratory must carry out its own reproducibility studies, because reproducibility varies among laboratories. The precise match criterion of each laboratory should be made freely available to all interested persons and should be stated in forensic reports.

The match criterion is also used in the calculation of allele frequencies. To determine the probability that a matching allele was found by chance, one counts the number of matching alleles in an appropriately chosen reference population. For the calculation to be valid, the same match criterion must be applied in screening the population databank as in comparing the forensic samples. Some testing laboratories originally used less stringent rules for declaring a match between forensic samples and more stringent rules for counting matching alleles in the databank; because matches were declared more readily in casework than the databank count implied, the effect was an understatement of the probability of obtaining a match by chance.
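As a rough illustration of this counting procedure, the sketch below applies a single match window both to declare a match and to estimate how often a randomly chosen databank allele would satisfy it. The 2.5 percent window and the fragment sizes are invented for the example and do not come from any laboratory's protocol.

```python
# Illustrative sketch only: estimating the chance that a randomly chosen databank
# allele would be declared a match to an evidence allele, using the SAME match
# window that is used for forensic comparisons. Window and sizes are hypothetical.

def is_match(size_a, size_b, window=0.025):
    """Declare a match if two fragment sizes differ by no more than the window,
    expressed as a fraction of their mean size."""
    mean = (size_a + size_b) / 2.0
    return abs(size_a - size_b) <= window * mean

def chance_match_frequency(evidence_size, databank_sizes, window=0.025):
    """Fraction of databank alleles that the match rule would call a match."""
    hits = sum(1 for s in databank_sizes if is_match(evidence_size, s, window))
    return hits / len(databank_sizes)

# Hypothetical fragment sizes in base pairs.
databank = [4120, 4210, 3980, 5630, 4195, 6010, 4188, 3350]
print(chance_match_frequency(4200, databank))  # 0.5 with these made-up data
```

If a narrower window were used on the databank than in casework, the same code would return a smaller fraction, which is precisely the understatement described above.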

Some have advocated that testing laboratories, instead of using a match criterion, should report a likelihood ratio, defined as the ratio of the probability that the measurements would have arisen if the samples came from the same person to the probability that they would have arisen if the samples came from different persons. No testing laboratory in the United States now uses that approach. The committee recognizes its intellectual appeal, but recommends against it: using the approach accurately requires detailed information about the joint distribution of fragment measurements, and it is not clear that match information presented in that form could be understood easily by lay persons.
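For readers unfamiliar with the term, the sketch below computes a likelihood ratio for a single fragment under a deliberately simplified model in which measurement error is Gaussian with a known standard deviation. The model, the standard deviation, and the population density are assumptions made for illustration; they are not the committee's recommendation or any laboratory's procedure.

```python
# Hedged sketch of the likelihood-ratio idea, under an assumed Gaussian
# measurement-error model for a single fragment size. All numbers are invented.

import math

def normal_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(evidence_size, suspect_size, sigma, pop_density):
    """P(measurements | same source) / P(measurements | different sources)."""
    # Same source: the two measurements differ only by measurement error, so their
    # difference is Gaussian with variance 2 * sigma**2.
    numerator = normal_pdf(evidence_size - suspect_size, 0.0, math.sqrt(2) * sigma)
    # Different sources: the evidence size is treated as a random draw from the
    # population of fragment sizes.
    denominator = pop_density(evidence_size)
    return numerator / denominator

# Hypothetical numbers: sizes in base pairs, sigma = 25 bp, and a crude population
# density that treats sizes as uniform over a 2,000 bp range.
uniform_density = lambda size: 1.0 / 2000.0
print(likelihood_ratio(4200, 4215, 25.0, uniform_density))
```

Even this toy version shows why the committee hesitates: the answer depends entirely on the assumed error distribution and population density, information that is far harder to establish and to explain than a simple match window.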

A laboratory's level of reproducibility can improve or deteriorate over time. Reproducibility should therefore be measured not only when a laboratory first implements DNA typing, but also continually, on the basis of actual casework as well as external proficiency testing (see Chapter 4). One easy way is to record the fragment measurements from the control samples of known DNA included on each membrane and to examine the variability in those measurements regularly. A drawback of that approach is that the control pattern might become too well known to the examiners. A slight variation would eliminate the problem. Examiners would continue to use a fixed known control sample on every membrane, but would also be given a blind control sample as a bloodstain to analyze with each case. The latter sample would be randomly selected from a collection of a few dozen known samples. The examiners would not know its specific identity, but only a code number. They would compare the blind control sample against the known patterns to determine whether it matched to the expected extent. Such an internal test would provide a continuing internal measure of a laboratory's reproducibility in actual casework.
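A sketch of the bookkeeping such monitoring might involve appears below. The control measurements, the coded panel, and the match window are all hypothetical and are shown only to make the two steps, tracking control variability and checking blind controls against a coded panel, concrete.

```python
# Illustrative sketch of ongoing reproducibility monitoring; all values are invented.

from statistics import mean, stdev

def coefficient_of_variation(measurements):
    """Spread of repeated measurements of one control band, as a fraction of the mean."""
    return stdev(measurements) / mean(measurements)

def matches_expected(blind_measurements, known_panel, window=0.025):
    """Return the code numbers of panel samples whose every band lies within the
    match window of the corresponding blind-control measurement."""
    hits = []
    for code, expected in known_panel.items():
        if all(abs(m - e) <= window * e for m, e in zip(blind_measurements, expected)):
            hits.append(code)
    return hits

# Repeated measurements of one known control band across successive membranes (bp).
control_band_history = [4205, 4188, 4221, 4197, 4210]
print(coefficient_of_variation(control_band_history))  # should stay small over time

# Blind control measured in a case, compared against the coded panel of known samples.
panel = {"K-07": [4210, 6030], "K-12": [3350, 5120], "K-31": [4190, 7480]}
print(matches_expected([4202, 6015], panel))  # expected to identify "K-07"
```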


