paper by Haber and Haber29 presents a thorough analysis of the ACE-V method and its scientific validity. Their conclusion is unambiguous: “We have reviewed available scientific evidence of the validity of the ACE-V method and found none.”30 Further, they state:
[W]e report a range of existing evidence that suggests that examiners differ at each stage of the method in the conclusions they reach. To the extent that they differ, some conclusions are invalid. We have analysed the ACE-V method itself, as it is described in the literature. We found that these descriptions differ, no single protocol has been officially accepted by the profession and the standards upon which the method’s conclusions rest have not been specified quantitatively. As a consequence, at this time the validity of the ACE-V method cannot be tested.31
Recent legal challenges, New Hampshire vs. Richard Langill32 and Maryland vs. Bryan Rose,33 have also highlighted two important issues for the latent print community: documentation and error rate. Better documentation of each step in the ACE-V process, or its equivalent, is needed. At the very least, sufficient documentation is needed to reconstruct the analysis, if necessary. By documenting the relevant information gathered during the analysis, comparison, and evaluation of latent prints and the basis for the conclusion (identification, exclusion, or inconclusive), the examiner will create a transparent record of the method and thereby provide the courts with additional information on which to assess the reliability of the method for a specific case. Currently, there is no requirement for examiners to document which features within a latent print support their reasoning and conclusions.
Error rate is a much more difficult challenge. Errors can occur with any judgment-based method, especially when the factors that lead to the ultimate judgment are not documented. Some in the latent print community argue that the method itself, if followed correctly (i.e., by well-trained examiners properly using the method), has a zero error rate. Clearly, this assertion is unrealistic, and, moreover, it does not lead to a process of method improvement. The method and the performance of those who use it are inextricably linked, and both involve multiple sources of error (e.g., errors in executing the process steps, as well as errors in human judgment).
Some scientific evidence supports the presumption that friction ridge patterns are unique to each person and persist unchanged throughout a