and studies in the published literature. In testimony before the committee, it was clear that some members of the forensic science community will not concede the possibility of less-than-perfect accuracy, either in given laboratories or in specific disciplines, and experts testified to the committee that disagreement remains even about what constitutes an error. For example, if the limitations of a given technology lead an examiner to declare a “match” that subsequent technology (e.g., DNA analysis) shows to be a “mismatch,” there is disagreement within the forensic science community about whether the original determination constitutes an error.32 Failure to acknowledge uncertainty in findings is common: many examiners claim in testimony that others in their field would reach exactly the same conclusions about the evidence they have analyzed. Assertions of a “100 percent match” contradict the findings of proficiency tests, which document substantial rates of erroneous results in some disciplines (e.g., voice identification and bite mark analysis).33,34

As an example, in an FBI publication on the correlation of microscopic and mitochondrial DNA hair comparisons, the authors found that even competent hair examiners can make significant errors.35 In 11 percent of the cases in which hair examiners had declared two hairs to be “similar,” subsequent DNA testing revealed that the hairs did not match, a discrepancy that bears on the competency of the examiners, on the relative ability of the two divergent techniques to discriminate among hair samples, and on the probative value of each test.

The insistence by some forensic practitioners that their disciplines employ methodologies with perfect accuracy and no errors has hampered efforts to evaluate the usefulness of the forensic science disciplines. And although DNA analysis is considered the most reliable forensic tool available today, laboratories nonetheless can make errors working with either nuclear DNA or mtDNA—errors such as mislabeling samples, losing samples, or misinterpreting the data.

Standard setting, accreditation of laboratories, and certification of individuals aim to address many of these problems, and although many laboratories have excellent training and quality control programs, even

32. N. Benedict. 2004. Fingerprints and the Daubert standard for admission of scientific evidence: Why fingerprints fail and a proposed remedy. Arizona Law Review 46:519; M. Houck, Director of Forensic Science Initiative, West Virginia University. Presentation to the committee. January 25, 2007.

33. D.L. Faigman, D. Kaye, M.J. Saks, and J. Sanders. 2002. Modern Scientific Evidence: The Law and Science of Expert Testimony. St. Paul, MN: Thompson/West.

34. C.M. Bowers. 2002. The scientific status of bitemark comparisons. In: D.L. Faigman (ed.). Science in the Law: Forensic Science Issues. St. Paul, MN: West Publishing.

35. M. Houck and B. Budowle. 2002. Correlation of microscopic and mitochondrial DNA hair comparisons. Journal of Forensic Sciences 47(5):964-967; see also Bromwich, op. cit.


