Whether the practical results to be derived from his researches will repay the pains he has bestowed upon them we must take leave to doubt. It will be long before a British jury will consent to convict a man upon the evidence of his finger prints; and however perfect in theory the identification may be, it will not be easy to submit it in a form that will amount to legal evidence.
From an 1892 review in The Athenaeum of
Finger Prints, by Sir Francis Galton
DNA technology makes possible the study of human variability at the most basic level: the level of genetic material, DNA. Previous methods using blood groups and proteins have analyzed gene products, rather than DNA itself. In addition to providing more direct genetic information, DNA can withstand environmental conditions that destroy proteins, so old, badly degraded samples of bodily fluids still can provide abundant information. If the array of DNA segments (markers) used for comparison is large enough, the probability that two unrelated persons (or even close relatives, except identical twins) will share all of them is vanishingly small. The techniques for analyzing DNA are already very powerful; they will become more so.
DNA analysis is only one of a group of techniques that make use of new and increasingly sophisticated advances in science and technology. Some of the subjects involved are epidemiology, survey research, economics, and toxicology. Increasingly, the methods are technical and statistical, as with forensic DNA analysis. The issues are at the interface of science and law, and involve the difficult problem of accommodating the different traditions in the two areas. For
a discussion of scientific and legal issues involved in the use of scientific evidence in the courts, see Federal Judicial Center (1994).
The 1992 National Research Council Report
DNA techniques began to be used in criminal cases in the United States in 1988. The emergence of numerous scientific and legal issues led to the formation in 1989 of the National Research Council Committee on DNA Technology in Forensic Science. That committee's report, issued in 1992 (NRC 1992), affirmed the value of DNA typing for forensic analysis and hailed it as a major advance in the field of criminal investigation. In an introductory statement, the committee wrote:
We recommend that the use of DNA analysis for forensic purposes, including the resolution of both criminal and civil cases, be continued while improvements and changes suggested in this report are being made. There is no need for a general moratorium on the use of the results of DNA typing either in investigation or in the courts.
To improve the quality of DNA-typing information and its presentation in court, the report recommended various policies and practices, including
· Completion of adequate research into the properties of typing methods to determine the circumstances under which they yield reliable and valid results (p 8, 61-63).1
· Formulation and adherence to rigorous protocols (p 8, 97ff).
· Creation of a national committee on forensic DNA typing to evaluate scientific and technical issues arising in the development and refinement of DNA-typing technology (p 8, 70-72).
· Studies of the relative frequencies of distinct DNA alleles in 15-20 relatively homogeneous subpopulations (p 14, 90, 94).
· A ceiling principle using, as a basis of calculation, the highest allele frequency in any subgroup or 5%, whichever is higher (p 14, 95).
· A more conservative "interim ceiling principle" with a 10% minimum until the ceiling principle can be implemented (p 14, 91-93).
· Proficiency testing to measure error rates and to help interpret test results (p 15, 88-89).
· Quality-assurance and quality-control programs (p 16, 97-109).
· Mechanisms for accreditation of laboratories (p 17, 23, 100-101).
· Increased funding for research, education, and development (p 17, 153).
· Judicial notice of the scientific underpinnings of DNA typing (p 23, 133).
· Financial support for expert witnesses (p 23, 148-149).
· Databases and records freely available to all parties (p 23, 26, 93-95).
1 Page references indicate where the topics are discussed in the 1992 NRC report.
· An end to occasional expert testimony that DNA typing is infallible and that the DNA genotypes detected by examining a small number of loci are unique (p 26, 92).
Many of the recommendations of the 1992 NRC report have been implemented. Some of the perceived difficulties at the time, such as insufficient information on the differences among various population subgroups, have been largely remedied. Studies of different subgroups, although not done exactly in the manner advocated by the report, have been extensive. New techniques and improvements in old ones have increased the power and reliability of DNA data.
Nevertheless, controversy over the forensic applications of DNA has continued, and the report has been strongly criticized (Balazs 1993; Devlin, Risch, and Roeder 1993, 1994; Kaye 1993; Morton, Collins, and Balazs 1993; Collins and Morton 1994). The most contentious issues have involved statistics, population genetics, and possible laboratory errors in DNA profiling. In 1994, the National Research Council established the present committee to update the 1992 report.
The Committee's Task
The committee's task statement reads:
The committee will perform a study updating the previous NRC report, DNA Technology in Forensic Science. The study will emphasize statistical and population genetics issues in the use of DNA evidence. The committee will review relevant studies and data, especially those that have accumulated since the previous report. It will seek input from appropriate experts, including those in the legal and forensics communities, and will encourage the submission of cases from the courts. Among the issues examined will be the extent of population subdivision and the degree to which this information can or should be taken into account in the calculation of probabilities or likelihood ratios. The committee will review and explain the major alternative approaches to statistical evaluation of DNA evidence, along with their assumptions, merits, and limitations. It will also specifically rectify those statements regarding statistical and population genetics issues in the previous report that have been seriously misinterpreted or led to unintended procedures.
Thus, a number of issues addressed by the 1992 report are outside our province. Such issues as confidentiality and security, storage of samples for possible future use, legal aspects of data banks on convicted felons, non-DNA information in data banks, availability and costs of experts, economic and ethical aspects of new DNA information, accountability and public scrutiny, and international exchange of information are not in our charge.
The major issues addressed in this report are in three groups:
· The accuracy of laboratory determinations. How reliable is genetic typing? What are the sources of error? How can errors be detected and corrected? Can
their rates be determined? How can the incidence of errors be reduced? Should calculation of the probability that an uninvolved person has the same profile as the evidence DNA include an estimate of the laboratory error rate?
· The accuracy of calculations based on population-genetics theory and the available databases. How representative are the databases, which originate from convenience samples rather than random samples? How is variability among the various groups in the US population best taken into account in estimating the population frequency of a DNA profile?
· Statistical assessments of similarities in DNA profiles. What quantities should be used to assess the forensic significance of a profile match between two samples? How accurate are these assessments? Are the calculations best presented as frequencies, probabilities, or likelihood ratios?
Those three sets of questions are related. All fall within the committee's task of analyzing "statistical and population genetics issues in the use of DNA evidence," and of reviewing "major alternative approaches to statistical evaluation of DNA evidence." To help answer the questions, we discuss the current state of scientific knowledge of forensic DNA-typing methods (Chapter 2), ways of ensuring high standards of laboratory performance (Chapter 3), population-genetics theory and applications (Chapter 4), statistical analysis (Chapter 5), and legal considerations (Chapter 6).
In the remainder of this chapter, we elaborate on some of the developments that have occurred since the 1992 NRC report and on the scope of our review and recommendations. In addition, we attempt to clarify various preliminary points about forensic DNA typing before undertaking a more detailed analysis of the methodological and statistical issues in later chapters.
As will be seen in this report, we agree with many of the findings and recommendations of the 1992 report but disagree with others. Statements and recommendations on which we do not comment are neither endorsed nor rejected.
The Validity Of DNA Typing
The techniques of DNA typing outlined in Chapter 2 are fully recognized by the scientific community. To the extent that there are disagreements over the use of these techniques to produce evidence in court, the differences in scientific opinions usually arise when the DNA profile of an evidence sample (as from a crime scene) and that of a sample from a particular person (such as a suspect) appear to be the same. (Although much of DNA analysis involves comparing a sample from a crime scene with one from a suspect, useful comparisons can also be made with DNA from other sources, for example, a victim or a third party who happened to be present at the scene of a crime.) In general, there are three explanations for a finding that two profiles are indistinguishable: the samples came from the same person, the samples came from different persons who happen
to have the same DNA profile, and the samples came from different persons but were handled or analyzed erroneously by the investigators or the laboratory.
At the time of the 1992 NRC report, there were various approaches to assessing the first and second possibilities. Although current information is much more extensive, opinions still differ as to how best to make probability calculations that take advantage of the great power of DNA analysis while being scrupulously careful to protect an innocent person from conviction. We hope in this report to narrow the differences.
The Use Of DNA For Exclusion
The use of DNA techniques to exclude a suspect as the source of DNA has not been a subject of controversy. In a sense, exclusion and failure to exclude are two sides of the same coin, because the laboratory procedures are the same. But there are two important differences:
· Exclusion (declaring that two DNA samples do not match and therefore did not come from the same person) does not require any information about frequencies of DNA types in the population. Therefore, issues of population genetics are not of concern for exclusion. However, in a failure to exclude, these issues complicate the calculation of chance matches of DNA from different persons.
· Technical and human errors will occur no matter how reliable the procedures and how careful the operators. Although there are more ways of making errors that produce false exclusions than false matches, courts regard the latter, which could lead to a false conviction, as much more serious than the former, which could lead to a false acquittal.
There have been various estimates of the proportion of innocent prime suspects in major crimes. FBI (1993a) reports that in one-third of the rape cases that were examined, the named suspect was eliminated by DNA tests. Undoubtedly the true proportions differ for different crimes and in different circumstances. Nonetheless, DNA testing provides a great opportunity for the falsely accused, and for the courts, because it permits a prompt resolution of a case before it comes to court, saving a great deal of expense and reducing unnecessary anxiety. Furthermore, a number of convicted persons, some of whom have spent as long as 10 years in prison, have been exculpated by DNA testing.2
Because cases in which a suspect is excluded by nonmatching DNA almost never come to court, experts from testing laboratories usually testify for the prosecution. In exceptional cases, the prosecution, relying on other evidence,
2 Scores of convicted felons are petitioning courts to allow tests to be performed on preserved samples, and more than seventeen of those exonerated by post-conviction DNA testing have been released. See Developments . . . (1995).
proceeds in the face of nonmatching DNA profiles, and the laboratory experts testify for the defense.3 In all cases, the job of the laboratory is the same: to analyze the DNA in samples and to interpret the results accurately and without prejudice for or against either party.
Changes since the 1992 NRC Report
A major change in the last four years has been in the amount of available population data on DNA frequencies in different groups and different geographical regions (see Chapters 4 and 5). Although considerable information was available at the time of the 1992 NRC report, the writers of that report believed that the data were too sparse and the methods for detection of population subdivision too weak to permit reliable calculations of coincidental-match probabilities. In particular, they feared that subsets of the population might have unusual allele frequencies that would not be revealed in an overall population average or not be well represented in the databases used to estimate frequencies. The 1992 report therefore recommended the use of an ad hoc approach for the calculation of an upper bound on the frequencies that would be found in any real population; this approach used what was termed the "ceiling principle." The report recommended that population frequency data be collected on homogeneous populations from 15-20 racial and ethnic groups. The highest frequency of a marker in any population, or 5%, whichever was higher, was to be used for calculation. Until the highest frequencies were available, an "interim ceiling principle" was to be used. That would assign to each marker the highest frequency value found in any population database (adjusted upward to allow for statistical uncertainty) or 10%, whichever was higher. The result would be a composite profile frequency that did not depend on a specific racial or ethnic database and would practically always exceed the frequency calculated from the database of the reference populations.
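The interim ceiling calculation just described can be sketched for a single allele. The allele frequencies and database sizes below are hypothetical, and the upward adjustment shown is one simple choice (a normal-approximation upper confidence bound); the actual report left the details of the adjustment to practitioners.

```python
import math

def upper_95(p, n):
    """Upper 95% confidence bound on an allele frequency estimated
    from a database of n sampled alleles (normal approximation)."""
    return p + 1.645 * math.sqrt(p * (1 - p) / n)

def interim_ceiling(freqs_by_pop, n_by_pop, floor=0.10):
    """Interim ceiling frequency for one allele: the highest
    adjusted estimate across population databases, or the 10%
    floor, whichever is larger."""
    adjusted = [upper_95(freqs_by_pop[pop], n_by_pop[pop])
                for pop in freqs_by_pop]
    return max(max(adjusted), floor)

# Hypothetical frequencies of one VNTR allele in three databases
freqs = {"white": 0.05, "black": 0.08, "hispanic": 0.12}
sizes = {"white": 400, "black": 400, "hispanic": 400}

ceiling = interim_ceiling(freqs, sizes)
# Here the 12% estimate, adjusted upward, exceeds the 10% floor
```

Repeating this for each allele in a profile and multiplying the ceilings yields the composite profile frequency described above, which cannot fall below the product of the floors.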
The ceiling principles have been strongly criticized by many statisticians, forensic scientists, and population geneticists (Cohen 1992; Weir 1992a, 1993a; Balazs 1993; Devlin, Risch, and Roeder 1993, 1994; Morton, Collins, and Balazs 1993; Collins and Morton 1994; Morton 1994), and supported by others (Lempert 1993; Lander and Budowle 1994). Most courts that have discussed it have accepted it as a way of providing a "conservative" estimate. Conservative estimates deliberately undervalue the weight of the evidence against a defendant. Statistically accurate estimates, based as they are on uncertain assumptions and measurements, can yield results that overvalue the weight of evidence against
3 For example, State v. Hammond, 221 Conn. 264, 604 A.2d 793 (1992).
the defendant, even though on average they produce values that are closer to the true frequency than those produced by conservative estimates.
As detailed in Chapters 4 and 5, information is now available from a number of relevant populations, so that experts can usually base estimates on an appropriate database. Indeed, the 1992 committee might not have intended to preclude such estimates, at least if accompanied by interim ceiling figures. In this context, Lander (a member of that committee) and Budowle (1994) state:
Most importantly, the report failed to state clearly enough that the ceiling principle was intended as an ultra-conservative calculation, which did not bar experts from providing their own "best estimates" based on the product rule. The failure was responsible for the major misunderstanding of the report. Ironically, it would have been easy to correct.
A second change since 1992 is mainly incremental. Individually small but collectively important procedural modifications have improved the technical quality of the DNA-testing process. One has only to compare DNA autoradiographs (see Chapter 2) made five years ago with those of today. Computer analysis and better equipment improve efficiency and can increase measurement accuracy. Perhaps most important, DNA-laboratory analysts have gained experience, not just in individual laboratories but collectively across the field. A mistake whose cause is discovered is not likely to be repeated. Laboratory quality-assurance programs are better developed, and there are now organizations that provide standards and conduct proficiency tests. These are discussed in Chapter 3.
A common technique of forensic DNA testing uses loci that contain variable-number tandem repeats (VNTRs), explained in Chapter 2. These are still of primary importance and are the major topic of our discussion, although we discuss other kinds of genetic markers as well. The standard VNTR system entails data that are subject to imprecision of measurement, so that very similar DNA patterns cannot be reliably distinguished; we discuss these problems in Chapter 5. Furthermore, most current VNTR methods require radioactive materials, and the procedures are slow; it can take six weeks or more for a complete analysis. Chemiluminescent systems can reduce the time, since waiting for sufficient radioactive decay is unnecessary, and these systems are coming into use. Increasingly, more-rapid methods are being used, and these usually permit precise identification of genes. Although this change is gradual, we are approaching a time when analysis will be quicker, cheaper, and less problematic than current methods. We foresee a time when each person can be identified uniquely (except for identical twins).
Paternity testing has traditionally used blood groups and protein markers, but these have been supplemented if not largely supplanted by the much more
powerful DNA methods. The basic procedures are the same for paternity testing as for crime investigation (Walker 1983; AABB 1994), and the experience of paternity-testing laboratories can be valuable in the criminal context as well. Indeed, parentage testing sometimes provides evidence in a criminal proceeding.4 The laboratories can provide information of use in forensic analysis. For example, a discrepancy between mother and child can offer information about error rates or mutation (see Chapter 2). Many laboratories do both forensic and paternity analysis.
Nevertheless, the two applications are different in important respects. Paternity testing involves analysis of the genetic relations of child, mother, and putative father; crime investigations usually involve the genetically simpler question of whether two DNA samples came from the same person. Mutation (see Chapter 2) is a factor to be taken into account in paternity testing; it is not an issue in identity testing. In cases brought to establish paternity for child support, inheritance, custody, and other purposes, the law gives the claims of the parties roughly equal weight and uses a civil, rather than the higher criminal, standard of proof. The 1992 NRC report's recommendations for conservative population and statistical analyses of data were motivated by the legal requirement of proof beyond a reasonable doubt applied in criminal trials. Those recommendations are therefore inappropriate for civil cases. In particular, the report did not propose either of the ceiling principles for paternity testing, and their use in civil parentage disputes is inappropriate. Likewise, the recommendations in the present report apply to criminal forensic tests and not to civil disputes.
The 1992 NRC report (p 70-72) recommended the formation of a National Committee on Forensic DNA Typing (NCFDT), to provide advice on scientific and technical issues as they arise. The NCFDT would have consisted "primarily of molecular geneticists, population geneticists, forensic scientists, and additional members knowledgeable in law and ethics" to be convened under an appropriate government agency. Two suggested agencies were the National Institutes of Health and the National Institute of Standards and Technology.
Neither agency has accepted or been given the responsibility and funding. Instead, the DNA Identification Act of 1994 (42 USC §14131, 1995) provides for a DNA advisory board to be appointed by the Federal Bureau of Investigation from nominations submitted by the National Academy of Sciences and other organizations. The board, which is now in place, will set standards for DNA testing and provide advice on other DNA-forensic matters. This makes it unlikely that the proposed NCFDT will come into being. We expect the new DNA
4 E.g., State v Spann, 130 N.J. 484, 617 A.2d 247 (1993); Commonwealth v Curnin, 409 Mass. 218, 565 N.E.2d 440 (1991).
Advisory Board to issue guidelines for quality-assurance and proficiency tests that testing laboratories will be expected to follow. Laboratories will not be able to obtain federal laboratory-development funds unless they demonstrate compliance with the standards set by the advisory board.
Seemingly Contradictory Numbers
The uncertainties of assumptions about population structure and about population databases and a desire to be conservative have led some experts to produce widely different probability estimates for the same profile. In court one expert might give an estimate of one in many millions for the probability of a random DNA match and another an estimate of one in a few thousand, larger by a factor of 1,000 or more (for an example, see Weir and Gaut; other examples are given in Chapter 6). Such discrepancies have led some courts to conclude that the data and methods are unreliable. However, probability estimates, particularly the higher values, are intended to be conservative, sometimes extremely so.
Experts are likely to differ far more in their degree of conservatism than in their best (statistically unbiased) point estimates. If two experts give conservative estimates that differ widely, they might both be correct; they often differ not in their expertise, but in their conservatism. For instance, if A says that the distance from Los Angeles to New York is more than 1,000 miles and B says that it is more than 2,000 miles, both are correct; if C says that it is more than 100 miles, this, too, is correct, but excessively conservative and, as a result, much less informative. It might also be misleading, for example, if this gross underestimate led a person to think that he could drive from Los Angeles to New York on one tankful of gasoline. Extreme differences arise if one expert relies solely on direct counts of genetic types in the database and uses no population genetics theory whereas the other makes assumptions grounded in theory. The two experts' best estimates, if both were to use this theory, are likely to be fairly close.
In fact, some have proposed that profile probabilities should be estimated from direct counts of profiles in the database. One problem is that there are trillions of possible five-locus profiles, the overwhelming majority of which are not found in any database. How does one interpret all those zero profile frequencies? One suggestion is to assign an arbitrary value determined by an upper 95% confidence limit. For a database of 100 individuals, this leads to a value of 0.03 for this upper limit; for a database of 1,000, this upper limit is 0.003. The 1992 NRC report suggests that the upper 95% confidence limit be used not only for the zero class, but also instead of the face value estimate for other frequencies as well. However, the report goes on to say that ''such estimates do not take advantage of the full potential of the genetic approach." We emphatically agree with this statement.
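The quoted limits for the zero class follow from solving (1 - p)^n = 0.05 for p, the largest frequency consistent (at the 95% level) with seeing no copies of the profile in n individuals. A brief sketch:

```python
def upper_limit_zero_count(n, alpha=0.05):
    """Upper 95% confidence limit on a profile frequency when the
    profile appears 0 times in a database of n individuals: the
    largest p with (1 - p)**n >= alpha."""
    return 1.0 - alpha ** (1.0 / n)

limit_100 = upper_limit_zero_count(100)    # about 0.03
limit_1000 = upper_limit_zero_count(1000)  # about 0.003
```

These values match those given in the text; note that they shrink only in proportion to the database size, which is why the counting method cannot approach the very small frequencies that genetic calculations support.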
Even under the assumption that the database is a random sample, the direct-counting procedure is excessively conservative, giving values several orders of
magnitude greater than even the most conservative estimates based on genetic assumptions. It does not make use of knowledge of the nature of the markers, of standard population genetic theory, and of population data. It therefore throws out a great deal of relevant information that should be used. For these and other reasons, we reject the counting method (see Chapter 5).
Very Small Probabilities
If a testing laboratory uses genetic markers at four or five VNTR loci, the probability that two unrelated persons have identical DNA profiles might well be calculated to be one in millions, or billions, or even less. The smaller the probability, the stronger is the argument that the DNA samples came from the same person. Some have argued that such a small probability, much smaller than could ever be measured directly, lacks validity because it is outside the range of previous observations. Yet they might accept as meaningful the statement that the probability that two persons get the same bridge hand in two independent deals from a well-shuffled deck is about one in 600 billion, a number far outside anyone's bridge experience and 100 times the world population.
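The bridge-hand figure can be checked directly from the number of distinct 13-card hands that can be dealt from a 52-card deck:

```python
import math

# Number of distinct 13-card bridge hands from a 52-card deck
hands = math.comb(52, 13)

# Probability that a second, independent deal repeats a given hand
p_same = 1 / hands
# hands = 635,013,559,600, which the text rounds to "about 600 billion"
```

The point is that a perfectly meaningful probability can be computed from well-understood components (here, counting combinations) even though no one could verify it by repeated observation.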
The proper concern is not whether the probability is large or small, but how accurate it is. Probabilities are not untrustworthy simply because they are small. In most cases, given comparable non-DNA evidence, a judge or jury would probably reach the same conclusion if the probability of a random match were one in 100,000 or one in 100 million.
Because of the scientific approach of statisticians and population geneticists, treatment of DNA evidence has become a question of probabilities. But some other kinds of evidence are traditionally treated in absolute terms. The probative value of DNA evidence is probably greater than that of most scientific evidence that does not rely on statistical presentations, such as firearms, poisoning, and handwriting analysis. We urge that the offering of statistical evidence with DNA profiles not be regarded as something unusual and mysterious. In fact, because much of science is quantitative, the DNA precedent might point the way to more scientific treatment of other kinds of evidence.
Fingerprints and Uniqueness
The history of fingerprints offers some instructive parallels with DNA typing (Stigler 1995). Francis Galton, the first to put fingerprinting on a sound basis, did an analysis 100 years ago that is remarkably modern in its approach. He worked out a system for classifying, filing, and retrieving. He showed that a person's fingerprints do not change over time. He invented an analysis that circumvented the fact that small parts of a fingerprint are not strictly independent. He also found that fingerprints of relatives were similar, although not identical, and that there were no unique racial patterns.
Galton concluded that, given a particular fingerprint pattern on a specified digit, such as the left index finger, the probability that a second specified person would have a matching pattern on the same digit was less than the reciprocal of 40 times the world population at that time, and hence the probability that a pattern identical to the given one occurred on the same finger of anyone else in the population of the world would be less than 1/40. When prints from several fingers are compared, the probability that all will match becomes very small. This means, Galton said, that if two sets of prints are identical they must have come from the same person.
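Galton's reasoning is a union bound: if the per-person match probability is below the reciprocal of 40 times the world population, then summed over everyone else in the world the chance of any match is below 1/40. With an assumed world population of about 1.5 billion for his era (a rough figure, not from the text), the arithmetic can be checked as follows:

```python
world_pop = 1.5e9  # rough world population circa 1892 (assumption)

# Galton's bound on the match probability for one specified person
p_one = 1 / (40 * world_pop)

# Union bound over everyone else in the world
p_any = world_pop * p_one
# p_any = 1/40, regardless of the population figure used
```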
Although Galton paid careful attention to probabilities, his successors usually have not; but see Stoney and Thornton (1986). It is now simply accepted that fingerprint patterns are unique.
The 1992 NRC report (p 92) stated that "an expert should, given the relatively small number of loci used and the available population data, avoid assertions in court that a particular genotype is unique in the population." Yet, what meaning should be attached to a profile frequency that is considerably less than the reciprocal of the world population? Given a person with a profile the frequency of which is estimated at only one-tenth the reciprocal of the world population, the probability that no one else in the world has this profile is about 9/10. Should this person be regarded as unique? If not, how high should the probability be for the profile to be regarded as unique? That is for society or the courts, not the present committee, to decide, but we discuss these issues in Chapter 5. Given that such a decision might be made, we show how to do the requisite calculations.
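The 9/10 figure follows from the binomial model: with profile frequency p and N other people, the chance that none of them has the profile is (1 - p)^N, which is approximately exp(-Np). A sketch, with the world population taken as roughly 6 billion (an assumption for illustration):

```python
import math

N = 6e9      # world population, roughly (assumption)
p = 0.1 / N  # profile frequency: one-tenth the reciprocal of N

# Probability that none of the other N people share the profile
p_unique = (1 - p) ** N

# This is approximately exp(-N * p) = exp(-0.1), close to 9/10
approx = math.exp(-0.1)
```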
Designating Population Groups and Subgroups
There is no generally agreed-on vocabulary for treating human diversity. Major groups are sometimes designated as races, and at other times as ethnic groups. Ethnic group is also used to designate subgroups of major groups. The 1992 NRC report used ethnic group both ways. Furthermore, groups are mixed, all the classifications are fuzzy at the borders, and the criteria for membership are variable. For such reasons, some assert that the word race is meaningless (Brace 1995). But the word is commonly used and generally understood, and we need a vocabulary.
For convenience, uniformity, and clarity, in this report we designate the major groups in the United States, namely white (Caucasian), black (African American), Hispanic, east Asian (Oriental), and American Indian (Native American), as races or racial groups. We recognize that most populations are mixed, that the definitions are to some extent arbitrary, and that they are sometimes more linguistic (e.g., Hispanic) than biological. In fact, people often select their own classification. Nevertheless, there are reproducible differences among the races in the
frequencies of DNA profiles used in forensic settings, and these must be taken into account if errors are to be minimized.
Groups within the races, such as Finnish and Italian within whites and different tribes among American Indians, will be designated as subgroups. A subgroup can be small, such as the members of a small community descended from a handful of ancestors, or large, such as all those whose ancestors came from a large European country. Because it has different meanings, ethnic group will not be used unless its meaning is clear from context.
Today, there are extensive data on DNA-type frequencies in diverse populations around the United States and in many parts of the world. The data are divided by race and geography and sometimes by ancestry within a race. The sources are varied; they include blood banks, paternity-testing laboratories, hospitals, clinics, genetic centers, and law-enforcement agencies. Although the use of such convenience sampling has been questioned, the degree of similarity between data sets from different sources and different geographical regions supports their general reliability. Furthermore, the VNTR markers used for forensics have no known effects, so there is no reason to think that they would be associated with such characteristics as a person's occupation or criminal behavior.
As emphasized in the 1992 report, the United States is not a homogeneous melting pot. In Chapter 4, we specifically address the problems arising from the fact that the population is composed of local communities of different ancestries, not completely mixed. Because it is difficult to find pure local groups in the United States, we rely more on data from ancestral areas. For example, rather than looking for populations of Danish or Swiss Americans, which are mixed with other populations, we look at data from Denmark and Switzerland. These will differ more from each other than will their American relatives, who have to various degrees had their differences reduced by admixture. The study of European groups should lead to an overestimate of the differences among white ethnic groups in the United States and so permit conservative calculations.
The 1992 report assumed for the sake of discussion that population structure exists. We go further: We are sure that as population databases increase in numbers, virtually all populations will show some statistically significant departures from random mating proportions. Although statistically significant, many of the differences will be small enough to be practically unimportant.
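One reason larger databases will detect such departures is that the chi-square statistic for a fixed proportional deviation from random-mating (Hardy-Weinberg) proportions grows linearly with sample size. The sketch below uses an invented two-allele locus with the same proportional deficit of heterozygotes at two sample sizes; only the larger sample gives a statistically significant result:

```python
def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square statistic (1 df) for departure from
    Hardy-Weinberg proportions at a two-allele locus."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # frequency of allele A
    q = 1 - p
    observed = [n_AA, n_Aa, n_aa]
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Same 10% deficit of heterozygotes at two sample sizes;
# the 1-df significance threshold at the 5% level is 3.84
small = hwe_chi_square(55, 90, 55)        # n = 200: not significant
large = hwe_chi_square(5500, 9000, 5500)  # n = 20,000: highly significant
```

This illustrates the distinction drawn in the text: with enough data, statistical significance is nearly guaranteed, but the practical importance of the departure depends on its size, not its p-value.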
The Nature of Our Recommendations
To deal with uncertainties about population structure, the 1992 NRC report recommended a ceiling principle and an interim ceiling principle. We replace those ad hoc recommendations with the explicit assumption that population substructure exists and recommend formulae that take it into account. We consider special cases, such as relatives of a suspect or instances in which a suspect and
an evidence sample are known to come from the same subgroup. We also discuss the uncertainties of the various calculations.
We discuss but do not propose rules for addressing laboratory error. Laboratory procedures have become more standardized since the last report, largely because of the work of the Technical Working Group on DNA Analysis and Methods (TWGDAM 1991-1995). In addition, DNA-typing and proficiency tests are now common. TWGDAM and the FBI's new DNA Advisory Board can modify their recommendations as technical changes and experience warrant. Rather than make specific technical recommendations, and especially rather than try to anticipate changes, we prefer to leave the detailed recommendations to those groups and trust professional scrutiny and the legal system to call attention to shortcomings. Laboratories now use a variety of testing procedures; in particular, DNA-amplification methods are common and new markers are coming into use. We affirm the importance of laboratories' adhering to high standards, of following the guidelines, and of participating in quality-assurance and accreditation programs.
We make no attempt to prescribe social or legal policy. Such prescriptions inevitably involve considerations beyond scientific soundness. Nevertheless, we recognize the connection between our scientific assessments and the efforts of the legal system to develop rules for using forensic DNA analyses; we describe the relationship between our conclusions about scientific issues and the admissibility and weight of DNA evidence in Chapter 6.
Finally, we recognize that technical advances in this field are very rapid. We can expect in the near future methods that are more reliable, less expensive, and less time-consuming than those in use today. We also expect more rapid and more efficient development of population databases that makes use of DNA already in storage. We urge as rapid development of new systems as is consistent with their validation before they are put into general use.