Ballistic Imaging

9  Feasibility of a National Reference Ballistic Image Database

In the formative era of modern firearms examination, Hatcher (1935:291–292) noted a development that he interpreted to be suggestive of the adage that “a little knowledge is a dangerous thing”:

“Certain very well-intentioned individuals recently came very near having a federal law enacted to require every maker of a pistol or revolver to fire and recover a bullet from each gun made, and to mark that bullet with the number of the gun, and keep it for reference by the legal authorities in case a crime should later be committed with a gun of that caliber.”

Hatcher argued against this forerunner of a national ballistic toolmark database (if not a national reference ballistic image database), citing the complexity of the task and the workload burden it would create:

In the first place, it is by no means certain that a bullet fired through the same gun several years later would match the one kept for record, for the barrel may have rusted or otherwise changed during the interval. In the second place, the matter of the classification of bullets so as to lighten the labor of looking for the right one of the thousands of record bullets has not, and probably never can be, solved, for the fine scratches, parallel to the rifling marks, on which this identification depends, have nothing by which they can be sub-classified. [Although fingerprints can be classified by general shape patterns, bullets can] be roughly classified by caliber, number of grooves, direction of rifling, etc.; but there is no method of subclassification. Suppose, for example, that the maker produces only 1000 .38 Special caliber guns in the same year. There will be five or six grooves on each bullet, say 5000 grooves to be compared in trying to match the murder bullet to only one year’s production of guns of only one maker. It
may take from fifteen minutes to one hour to compare each groove, and looking searchingly into the comparison microscope is impossible for more than about three hours a day, otherwise the operator is likely to suffer severely from eye-strain, fatigue, and headache. At this rate, it would take one operator something like four or five years to search one manufacturer’s record bullets for one year’s production of one caliber of gun.

More than 70 years later, ballistic imaging technology has demonstrated its capacity to address some of these concerns, providing an initial analysis and sorting of massive volumes of evidence that—now, as then—are impossible for a human examiner to process. The question is whether the technology has advanced to the point that a massive, national database of exhibits and images from new and imported firearms is any more tractable than the collection Hatcher described as well intentioned but dangerous.

In this chapter, we present the argument from the preceding chapters in order to answer the primary, titular question of our study: Is a national reference ballistic image database (RBID) a feasible, accurate, and technically capable proposition? In Section 9–A, we discuss the basic question of how many guns would be included in a national RBID, followed in Section 9–B with an outline of other general assumptions on the shape and content of a national RBID. Subject to those assumptions, we consider in Section 9–C the technical aspects of establishing such a database from the information management and manufacturing perspectives, the statistical feasibility of such a database, and other perspectives on the issue. Section 9–D presents our general conclusions. We then discuss the implications of our conclusions for subnational, state-level RBIDs that currently exist or that may be created (Section 9–E).
This is important because conclusions for or against a national RBID affect not only state RBIDs but also—depending on the weight placed on supporting arguments—the long-term viability of a crime-evidence database like the National Integrated Ballistic Information Network (NIBIN). Some detailed probabilistic calculations related to the statistical feasibility of an RBID are laid out more fully in the appendix to this chapter, in Section 9–F.

9–A A NATIONAL REFERENCE DATABASE: HOW MANY GUNS?

An important consideration in evaluating the feasibility of a national RBID is the magnitude by which ballistic imaging workload would increase: How many guns would have to be entered into such a database? Yearly firearm production figures compiled by the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) reveal that domestic firearms manufacturers produce between 3 and 3.5 million firearms per year (see Table 9-1). Approximately one-third of these, on the order of 1 million,
are handguns; rifles are the modal category, constituting 35–40 percent of annual domestic firearms production. Relatively few of these firearms—only about 150,000—are exported from the United States.

TABLE 9-1 Firearms Manufactured in and Exported from the United States, 2002–2004

Firearms            2002        2003        2004*
Manufactured
  Handguns          1,088,584   1,121,024   1,022,610
    Pistols         741,514     811,660     728,511
    Revolvers       347,070     309,364     294,099
  Rifles            1,515,286   1,430,324   1,325,138
  Shotguns          741,325     726,078     731,769
  Miscellaneous     21,700      30,978      19,508
  Total             3,366,895   3,308,404   3,099,025
Exported
  Handguns          56,742      42,864      39,081
    Pistols         22,555      16,340      14,959
    Revolvers       34,187      26,524      24,122
  Rifles            60,644      62,522      62,403
  Shotguns          31,897      29,537      31,025
  Miscellaneous     1,473       6,989       7,411
  Total             150,756     141,912     139,920

*The cover sheet for the 2004 report indicates that 26 percent of manufacturers did not file reports for 2004. No such response or compliance rates are indicated in the 2002 and 2003 reports.

SOURCE: Data from Bureau of Alcohol, Tobacco, Firearms, and Explosives Annual Firearms Manufacturing and Export Reports, 2002–2004.

By comparison, tabulations from the U.S. Census Bureau’s Foreign Trade Division (see Thurman, 2006) indicate that 844,866 handguns were imported to the United States in 2004, most from Austria (29 percent), Brazil (24 percent), and Germany (17 percent). Nearly twice as many handguns were imported to the United States as rifles (489,740); an additional 71,625 shotguns and combination guns were imported in 2004 (Thurman, 2006).

However, the enabling action for entry in a national RBID is not the production of a firearm or its arrival in the United States; rather, it is the sale of a firearm. The previously cited firearms manufacture statistics do not directly correspond to annual sales to individual customers; they include production for military and law enforcement purposes, and they include guns that may sit in inventory rather than be quickly sold.
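As a rough consistency check (a sketch only, using the 2004 figures cited above from Table 9-1 and Thurman, 2006), the approximate net flow of new handguns into the U.S. market can be tallied:

```python
# Back-of-the-envelope check using the 2004 figures cited above
# (Table 9-1 and Thurman, 2006): approximate number of new handguns
# entering the U.S. market in a year.
manufactured = 1_022_610  # handguns manufactured domestically, 2004
exported = 39_081         # handguns exported, 2004
imported = 844_866        # handguns imported, 2004

net_new_handguns = manufactured - exported + imported
print(f"Approximate new handguns entering the U.S. market: {net_new_handguns:,}")
# About 1.8 million -- consistent with the 1-2 million per year entry
# workload range discussed in Section 9-A.
```

This tally ignores inventory effects and military/law enforcement production, which is why the text treats 1–2 million per year as a range rather than a point estimate.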
The ATF estimates that about 4.5 million “new firearms, including approximately 2 million handguns, are sold in the United States” each year (U.S. Bureau
of Alcohol, Tobacco, and Firearms, 2000:1). It is important to remember that these figures—and the coverage of a national RBID—include only the primary gun market, which covers sales from licensed dealers to consumers. Cook and Ludwig (1996) estimate that about 2 million secondhand guns are sold each year in the United States, from a mixture of primary and secondary sources (where the secondary gun market includes transactions by unlicensed dealers).

The answer to the question of how many guns would have to be entered into a newly established national RBID each year depends crucially on the exact specification of the content of the database—whether the database is restricted to handguns and whether imported firearms from foreign countries are required to be included. As we discuss further in the next section, we generally assume that a national RBID would—at least initially—focus on handguns, and hence an annual entry workload of 1–2 million firearms per year, depending on whether imports are included.

9–B ASSUMPTIONS

In Box 1-3, we describe some basic assumptions about the nature of a national RBID, with particular regard to the wording used in past legislation and in the enabling language of the currently operational state RBIDs. It is useful to begin the assessment of the feasibility of a national RBID by revisiting those assumptions. Fundamentally, we assume that a national RBID would—at least initially—be tantamount to a scaled-up version of the current state RBIDs.

First, we assume that the “ballistic sample” required for entry in the database would consist of expended cartridge cases and not bullets. Though the enabling legislation in Maryland and New York was vague on this point, the only operationally feasible approach was to restrict attention to casings.
It takes more operator time (and money) to enter bullet evidence into a system such as the Integrated Ballistics Identification System (IBIS) than casings, and requiring recovery of a bullet specimen at the end of the manufacturing process would be unduly burdensome. That would require firing into a water tank or other nondestructive trap; as in test firings conducted by the police, firings into a tank must be done one at a time—and the bullet retrieved from the tank between each firing—in order to prevent damage to the specimens and to ensure that recovered bullets are identified as coming from the proper gun. Collecting cartridge casings also involves additional time—the protocol must allow for a casing to be attributed to the correct gun source—but the ejected casing is more amenable to rapid recovery than spent bullets that must be separately fished from a tank. Second, we assume that the focus of a national RBID would be on handguns, as the major gun class used in crime. Expanding state RBIDs
to include long guns has been contemplated by legislation in Maryland but not enacted. These first two assumptions—cartridge cases only and a restriction to handguns—combine to limit the ability of the national RBID to generate “cold hits” for one group of firearms: revolvers, which do not automatically expel cartridge casings and, hence, would leave casings at a crime scene only if the gun user manually emptied them at the scene (e.g., to reload). However, we believe that these assumptions are realistic ones that make the program tractable at the outset.

Third, we assume that the actual process of generating samples and acquiring images from them would follow very closely the New York Combined Ballistic Identification System (CoBIS) model: that is, most of the burden of generating the sample of cartridge casings would fall on firearms manufacturers, who would include the sample in the firearm’s packaging. The burden of actually acquiring images and entering them in the database would fall on another entity; the envelope containing the sample would be sent for imaging (along with related information) at the point and time of sale.

In principle, images could be acquired by manufacturers, but the approach poses major problems both operationally and conceptually. In terms of operations, it would require the placement of at least one IBIS-type installation at every manufacturer’s location and require trained operators, a very costly proposition. Technology for mass batch capture of images from cartridge cases could be developed—Forensic Technology WAI, Inc. (FTI), continues to develop a prototype, which it dubs the Virtual Serial Number System—but the technology is not yet mature, and working with large batches of samples simultaneously exacerbates the problem of ensuring that the sample packaged with a gun was actually fired from that gun (see Section 9–B.2).
Conceptually, imaging by the manufacturer is problematic because it is a step removed from the objective of an RBID, connecting ballistics evidence with a point of sale and not the point of manufacture. Achieving the link to point of sale would require a further database of sales, presumably to be merged periodically with the image database using the firearm serial number and other data. Imported firearms are particularly tricky in this regard because they raise potential problems of differential compliance. U.S. legislation to establish a national RBID could compel manufacturers to include test-fired exemplars with newly shipped firearms, for entry into the database, but foreign manufacturers might not be so bound. Hence, imported firearms may involve the additional workload of test firing before sale, in addition to acquiring images. A critical assumption that underlies much of the political debate over a national RBID deals with the information entered into the database along with exhibit images: Should information on the firearm’s purchaser be logged
in the database, rather than just information on the firearm? The extent to which personal information is recorded raises the question of whether implementation of a national RBID is tantamount to establishing a national gun registry. Again, we assume that the New York CoBIS model would hold. In New York, licensing information completed at the time of sale is sent along with manufacturer-supplied casing samples to the state police headquarters for processing. However, that personal (purchaser) information is immediately separated from the ballistic image processing and forwarded to another agency, and it is not entered into the CoBIS database.

We interpret the goal of a national RBID as suggesting an investigative lead to the point of sale. This is obviously not as direct a lead as could be the case, and it requires that investigators follow up with seller records to progress further (akin to the standard gun tracing process described in Box 9-1), but it could still provide

BOX 9-1 Tracing Guns

The Gun Control Act of 1968 (18 U.S.C. 922(a)) established the legal framework for regulating firearms transactions in the United States, requiring that any individual engaged in the selling of guns in the United States must be a federal firearms licensee (FFL). Significantly, the act also established a set of requirements—a paper trail—designed to allow the tracing of the chain of commerce for any given firearm, from its manufacture or import through its first sale by a retail dealer. Each new firearm, whether manufactured or imported, must be stamped with a unique serial number (27 CFR 178.92; ATF Ruling 76-28). Manufacturers, importers, distributors, and FFLs are required to maintain records of all firearms transactions, including sales and shipments received; FFLs must also report multiple handgun sales and stolen firearms to ATF and provide transaction records to ATF in response to firearms trace requests.
When FFLs go out of business they are required to transfer their transaction records to ATF, which then stores them for use in tracing. Local law enforcement agencies may initiate a trace request by submitting a confiscated gun and associated information to the ATF’s National Tracing Center (NTC); in addition to descriptors of the gun itself, this associated information may include the location of the recovery of the gun, the criminal offense associated with the recovery, and the name and date of birth (if known) of the firearm’s possessor. The NTC searches this information against its in-house databases—the records of out-of-business FFLs and the records of multiple handgun sales. If no matching information is found from these queries, NTC agents contact the manufacturer or importer and begin following the chain of subsequent transfers until they identify the first retail seller and (through that FFL’s records) the first buyer of the gun. The table below summarizes gun trace results in 1999, omitting on the order of 11,000 trace requests from foreign agencies (summary counts and percentages are recomputed from the cell entries in the original table).
some spark to criminal investigations that may otherwise grow cold. The assumption that purchaser information would not be recorded in an RBID is consistent with the federal law that prohibits the establishment of “any system of registration of firearms, firearms owners, or firearms transactions or dispositions” by federal or state agencies (18 U.S.C. 926(a)).

We also assume that the user interface to a national RBID would mirror—and likely build on top of—the current interface of the NIBIN program. Specifically, we assume that queries on the database would be initiated by state and local law enforcement agencies, who would acquire images from evidence they wished to compare and send them over a network for comparison. (Doing this on NIBIN-supplied IBIS equipment, and effectively using the existing NIBIN terminals as the interface to the RBID, would obviously require changes in legislation—which currently limits

BOX 9-1 (continued)

Trace Result                          Count     Percent
Completed Traces (by method)          82,669    52.9
  Out-of-business FFL records         13,167    8.4
  Multiple sale reports               3,627     2.3
  FFL record                          60,526    38.7
  Other                               5,349     3.4
Incomplete/Not Traced (by reason)     73,690    47.1
  Too old                             16,192    10.4
  Serial number problem               16,920    10.8
  Error on trace request              17,588    11.2
  Dealer record problem               15,123    9.7
  Other                               7,867     5.0
Total                                 156,359   100.0

SOURCE: Cook and Braga (2001:Table 1).

Of the guns submitted for tracing in 1999, slightly more than half were successfully traced to the point of origin. Trace failures may be caused by the age of the gun (e.g., manufactured before 1968 and hence exempt from serial numbering and recordkeeping) or by problems with the serial number, the submission form, or the information on file with the FFL where the gun was first sold. “End to end” or investigative traces—completely documenting the chain of possession from manufacture or import through the most recent owner—are considerably more expensive and are not routine.
However, under the Youth Crime Gun Interdiction Initiative, ATF does perform “end to end” tracing for all firearms recovered from people under 21 years old.
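The summary rows of the trace-results table in Box 9-1 can be recomputed directly from the detail counts (a simple consistency check of the Cook and Braga figures):

```python
# Recompute the 1999 trace-results summary from the detail rows.
completed = {
    "Out-of-business FFL records": 13_167,
    "Multiple sale reports": 3_627,
    "FFL record": 60_526,
    "Other": 5_349,
}
incomplete = {
    "Too old": 16_192,
    "Serial number problem": 16_920,
    "Error on trace request": 17_588,
    "Dealer record problem": 15_123,
    "Other": 7_867,
}

n_completed = sum(completed.values())    # 82,669
n_incomplete = sum(incomplete.values())  # 73,690
total = n_completed + n_incomplete       # 156,359

print(f"Completed:  {n_completed:,} ({100 * n_completed / total:.1f}%)")
print(f"Incomplete: {n_incomplete:,} ({100 * n_incomplete / total:.1f}%)")
```

The recomputed shares (52.9 and 47.1 percent) match the table, as expected given that the published summary rows were themselves recomputed from the cell entries.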
NIBIN to crime-scene evidence—and in the memoranda of understanding with partner sites.) A partial explanation for the scarcity of hits from the current state RBIDs in Maryland and New York is a relative scarcity of searches performed on the system, and a key reason for that lack of queries is that questioned evidence must be transported to a specific site for entry on RBID-specific equipment. To promote usage of the system, we assume that ways would be found to allow local law enforcement to directly query the database without turning over the physical evidence to other agencies (a transfer that would raise concerns about the chain of custody of that evidence). In articulating this model, we further assume that possible high-probability matches on the national RBID would be returned to those localities for their review and, if desired, for them to subsequently request pieces of physical evidence to confirm a hit.

A technical assumption—and a difference between a national RBID and the existing NIBIN system—concerns the performance of automatic comparison requests. In the current NIBIN framework, any new piece of evidence entered into the system incurs an automatic comparison against all evidence entries within that NIBIN site’s partition, and the results of that comparison are returned to the local site after processing at one of the three ATF national laboratories. (Manual comparison requests can also be initiated.) This default behavior is sensible for a database like NIBIN, which is assumed to consist exclusively of case-related evidence and for which the interrelationship between entries is of interest.
In a national RBID, however, the interrelationships between entries in the database are not of direct interest (since there is no reason to expect a match between two newly manufactured or imported guns), and performing comparison requests as each new entry is added only serves to increase the computational demands on the system infrastructure.[1] What is interesting in the RBID setting is the set of comparison results obtained when a piece of crime scene evidence is entered and compared against the RBID. Hence, we assume that comparison requests in a national RBID would be generated manually, or automatically when a new image being acquired is known to come from crime scene evidence.

[1] This is not to say that interrelationships between RBID entries—and what comparison scores say about them—are uninteresting; indeed, an RBID provides ideal opportunities for studies of system performance in a large database of known nonmatches. Hence, comparison requests of RBID entries against the balance of the database are of great potential research interest, but they are logically unnecessary as part of the data entry process.
9–C TECHNICAL FEASIBILITY

9–C.1 Information Management Perspective

At one basic level, a national RBID is technically feasible: current and projected computer capabilities can handle the information flows associated with such a database. In our assessment, a national RBID would be a sizable but not insurmountable computational challenge and would be within the capacity of existing technology. The human workload necessary to process exhibits and acquire images would be formidable, but possible. In this section we describe this conclusion using basic calculations that—although “back of the envelope” in nature—are meant to be “worst case” projections. We include computational, networking, staffing, and physical requirements, and impose a number of stricter assumptions (beyond the general nature of the database) in making this analysis. These additional assumptions include:

• The work of collecting test-fired exhibits and acquiring images from them will be distributed across a small number of geographic sites. In this, we diverge from the New York CoBIS and Maryland MD-IBIS models, where routing of all database entries through a single site is tractable, and move toward the existing NIBIN model, where computational infrastructure is divided across three sites (and entry dispersed over more than 200 localities). Economies of scale are maximized if the workers and machines are clustered into a dozen or fewer geographic centers. We will assume that there are 10 such data acquisition centers.

• A data entry rate of samples from 1 million guns per year, with image acquisition itself taking approximately 5 minutes. The 5-minute mark follows from our high-level assumption that cartridge cases, and not bullets, are to be imaged into the system, and is a plausible assumption with the current two-dimensional imaging standard.
However, it may be an overly optimistic assumption for three-dimensional surface measurement, as it has developed to date (see Chapter 7), if that emerges as the imaging standard for the database. That said, the time needed to acquire three-dimensional measurement data has decreased significantly from the earliest efforts at imaging three-dimensional contours of bullets; with further refinement and automation, a 5-minute acquisition time is not unreasonable in the long run.

• Allow 5 minutes per entry for associated tasks, such as barcode reading, preparing and mounting the exhibits, and transporting exhibits between physical storage areas.

• Data collection for this national system would run 24 hours a day,
7 days a week. Timeliness of searches on the database requires round-the-clock operation.

Under these assumptions, six guns or exhibits can be processed by a human operator each hour. Multiplied by 2,000 hours per year, this implies 12,000 guns processed per operator per year, and hence a human staff of at least 84 operators. A three-shift staff of 84 requires 28 data entry terminals; to allow headroom for maintenance (or equipment failures), this could be expanded to 40–42 data entry terminals.

The rate at which queries are made of the national RBID—that is, the rate at which exhibits are entered by state and local law enforcement agencies for comparison against the database—will depend on local law enforcement acceptance and staff limitations. As described in previous chapters, large differences among jurisdictions in the effective use of the existing NIBIN system depend on differences in acceptance of the technology; hence the set of recommendations in Chapter 6 to enhance NIBIN by making it a more vital part of the investigative system. The actual use of New York’s CoBIS database, in terms of queries made, has fallen vastly short of expectations. Still, we have to assume that the presence of a national RBID would lead to the desire to conduct searches against it, as the technology is accepted and such searches become routine. Hence, for the purposes of this section, we assume 1,000 query exhibits are entered (nationwide) each day. It is expected that these searches will be done on an ad hoc basis, rather than in large batches.

A reference image will be sent in parallel to a collection of geographically dispersed servers, over conventional networking, for comparison against stored images. The system’s ability to handle this throughput depends on the speed of the comparison process and the size of the database against which the reference image is compared.
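The staffing arithmetic above can be sketched in a few lines (the parameter names are ours; the values are the section's stated assumptions):

```python
import math

# Stated assumptions for a national RBID data-entry operation.
guns_per_year = 1_000_000   # annual entry workload (handguns, no imports)
minutes_per_entry = 5 + 5   # image acquisition plus associated handling
hours_per_operator = 2_000  # working hours per operator per year
shifts_per_day = 3          # round-the-clock operation

guns_per_operator_hour = 60 / minutes_per_entry                       # 6 per hour
guns_per_operator_year = guns_per_operator_hour * hours_per_operator  # 12,000
operators = math.ceil(guns_per_year / guns_per_operator_year)         # 84
terminals = math.ceil(operators / shifts_per_day)                     # 28

print(f"{operators} operators, {terminals} data entry terminals")
```

Doubling the workload to cover imports (2 million guns per year) would simply double these figures, which is one reason the text treats the 40–42 terminal figure as headroom rather than slack.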
As we reiterate later in this chapter, a common logical flaw in considering a national RBID is looking at the large number of new guns produced annually (that would have to be entered in the database) and assuming that the system will automatically be swamped by the computational demands of performing one-against-millions comparisons. However, one would never do a straight comparison of one image against the entire database; like the current IBIS and NIBIN setup, some demographic filtering will inevitably be done to reduce the size of the comparison set. In addition to demographic filtering, similar subsetting may be done on the shape of the firing pin, gun entry and crime occurrence dates, gross features of the casing, and (perhaps) geographic region and proximity. Exactly how much of a reduction can be expected is an open question and would impact the computational requirements. If it can be assumed that reference images can be compared against stored images at a rate of 30 per second (on a PC-class machine), and that demographic subsetting can whittle down the comparison set of images to
1/20 of the full database size, then—in aggregate—comparing a reference image to 1 year’s worth of RBID data would mean performing 50 million pairwise comparisons per day. This would require 20 PC-class machines as comparison servers. If one plans for a factor of three in “headroom,” then 60 machines are required. Each year that the system is in operation, 60 additional machines must be purchased (or the original 60 replaced by ones that are twice as fast).

Storage space, both electronic and physical, is a significant “wild card” in implementing the technical infrastructure for a national RBID. In terms of electronic storage, the per-casing disk storage for two-dimensional greyscale images as currently done by the IBIS platform is on the order of 1 megabyte. At 1 million casings per year, the aggregate system must be capable of storing 1 terabyte of information during the first year, and then of adding 1 terabyte per year thereafter. Given modern computing environments, this is certainly feasible. However, these demands would have to be scaled upward with a change in imaging standard, either to finer-resolution two-dimensional photography or to three-dimensional imaging. The per-casing storage would also increase if practices such as those we recommend for the NIBIN program—entering more than one exemplar per gun, particularly one of a different ammunition type—are used as standard protocols for a new national RBID.

Physical storage of the casing exhibits is also an important consideration. We expect that human firearms examiners would still be needed to confirm “hits” on the national RBID through direct comparison; hence, the physical casings must be retained and must remain accessible. They must be filed in such a way that they can be retrieved with ease, that they are not damaged, and that there is minimal risk of their being exchanged or confused with exhibits from a different firearm.
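The comparison-server and electronic-storage arithmetic in this section can be checked with a short script (the rates and filtering factor are the chapter's assumed values):

```python
import math

# Assumed values from this section's worst-case projection.
queries_per_day = 1_000      # crime-scene exhibits entered nationwide
database_size = 1_000_000    # one year's worth of RBID entries
filter_fraction = 1 / 20     # reduction from demographic subsetting
comparisons_per_second = 30  # per PC-class comparison server
headroom_factor = 3

# 1,000 queries x 50,000 filtered candidates = 50 million comparisons/day.
comparisons_per_day = queries_per_day * database_size * filter_fraction

per_machine_per_day = comparisons_per_second * 86_400  # 2,592,000
machines = math.ceil(comparisons_per_day / per_machine_per_day)  # 20
machines_with_headroom = machines * headroom_factor              # 60

# Electronic storage: ~1 MB per casing image under the current 2-D standard.
storage_tb_per_year = database_size * 1_000_000 / 1_000_000_000_000  # 1 TB

print(machines, machines_with_headroom, storage_tb_per_year)
```

Note how sensitive the server count is to the filtering factor: if demographic subsetting achieved only a 1/10 reduction, the comparison load (and server count) would double.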
Hence, simply packing envelopes of exhibits in large boxes and warehousing them is not a viable option, and the physical structure would have to be designed accordingly. The computing and network assumptions sketched above suggest that the informational throughput in one direction—submitting an inquiry to the database for processing—is manageable. However, care would have to be taken in specifying the reciprocal flow of comparison results back to requesting sites. Though we critique the IBIS 20 percent threshold elsewhere in the report and recommend that it be revisited (Recommendation 6.15), the threshold does serve the purpose of limiting the amount of image and score data that must be pushed back from regional correlation servers to NIBIN partner agencies for every comparison request. Some limit on the number of results routinely returned on comparison requests would likely have to be established to keep transmission times in check. The preceding is a somewhat simplified list of concerns from the information management perspective; practically, the implementation of a national RBID would raise related—and complex—concerns. Of these,
9–E IMPLICATIONS FOR STATE REFERENCE BALLISTIC IMAGE DATABASES

Having concluded that a national RBID is inadvisable at this time, a natural follow-up question is what this conclusion means for the state-level RBIDs currently in operation in Maryland and New York and any that may be implemented by other states. Although the core arguments that can be made against a national RBID can be applied to a state RBID, we conclude that the smaller-scale state databases are critically important proving grounds for improvements in the matching and scoring algorithms used in ballistic imaging. Indeed, they provide an ideal setting for the continuing empirical evaluation of the underlying tenets of firearms identification in general. The state databases can be a critical, emerging testbed for research in ballistic imaging and firearms identification.

Early in ATF’s work with the IBIS platform, Masson (1997:42) observed that as ballistic image databases grew in size, the IBIS rankings tended to produce suggested linkages that might look promising on-screen—and might also be tricky to evaluate using direct microscopy:

As the database grew within a particular caliber, 9mm for instance, there were a number of known non-matched testfires from different firearms that were coming up near the top of the candidate list. When retrieving these known non-matches on the comparison screen, there were numerous two dimensional similarities. When using a comparison microscope, these similarities are still present and it is difficult to eliminate comparisons even though we know they are from different firearms.

Far from undermining the utility of the system, Masson (1997:43) argued that this finding presented a critical learning opportunity. “In the past, best examples of known nonmatched agreement were collected from casework and thus, surfaced sporadically;” in addition to the potential for generating hits, Masson suggested value in studying misses.
“Firearms examiners should take advantage of this current expanded database to fully familiarize themselves with the extent of similarities found in many non-identifications in order to hone their criteria for striae identification” because the “examiner’s power of discrimination can be heightened because of the experience.” Even in the best of operational circumstances, RBIDs should not be expected to produce torrents of hits or completed matches. They are, at root, exercises in detecting low-base-rate phenomena in large populations, a task that is particularly difficult because—by construction—such populations contain a great many elements that are virtually identical in all but the tiniest details. A major reason that the current state databases have underperformed in generating hits is that they have been undersearched. As
put most bluntly, in a discussion of the MD-IBIS hit that yielded a criminal conviction, by a critic of the current implementation, “If you don’t use the system … it isn’t going to work” (quoted in Butler, 2005). The utility of state-level RBIDs will depend on how often the database is actually queried in the conduct of investigations and how investigative leads are followed up. The design of the current databases, and the need to ensure a firewall from NIBIN data due to the legal restrictions on NIBIN content, have made the databases inconvenient to search: exhibits must be transported to specific facilities for acquisition and comparison. To that end, mechanisms for encouraging searches of state RBIDs by law enforcement agencies in the same state or region should be developed and the results evaluated. To the extent that law permits and arrangements can be made, broader research involving the merging and comparison of state-level RBID images with NIBIN-type evidence would also be valuable.

9–F APPENDIX: MODELS OF HYPOTHESIZED SYSTEM PERFORMANCE

Throughout this appendix, we restrict the discussion to cartridge casings; however, the same problem formulation would apply to bullets. Suppose one has a database that consists of N images of casings, where N is a large number. These images may correspond to D different types of (new) guns. For each gun type d, there are nd different images, from different guns of the same type, various gun and ammunition combinations, etc. So the database has a total of N = n1 + n2 + … + nD images. Consider now a newly acquired casing from a crime scene. One wants to compare the image of the new casing with the N images in the database and find the best K matches. The top K matches will then be scrutinized by a firearms examiner, and a direct physical comparison will be made to verify any hits. Assume that the database does in fact contain a casing fired from the particular crime gun.
Then, the statistical feasibility of the problem depends on whether the correct image will be among the top K matches, when K is a reasonably small number (top 10, top 50, or even top 100), even though N, the size of the database, is very large—on the order of millions. Specifically, some of the statistical questions of interest are:

What is the probability that the correct image from the database (the one that corresponds to the crime gun) will be in the top K? How does this probability decrease with N? What are the critical factors that affect it?

How large should K = K(α) be if we want to ensure that the correct image is in the top K with probability at least (1 − α)? How does this depend on the size of the database and other factors?
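These questions can be explored by direct simulation before any formal analysis. The sketch below is purely illustrative and not part of the committee's model: it assumes Gaussian comparison scores, with the true match scoring N(∆, 1) and each non-match N(0, 1), and estimates the probability that the true match lands in the top K.

```python
import random

def prob_in_top_k(delta, n_wrong, k, trials=500, seed=1):
    """Monte Carlo estimate of P(true match is among the top K scores).

    Hypothetical score model: the true match scores N(delta, 1); each of
    the n_wrong non-matching database entries scores N(0, 1).  The true
    match is in the top K exactly when T, the number of non-match scores
    exceeding it, is strictly less than K.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x1 = rng.gauss(delta, 1.0)  # score of the true match
        t = sum(rng.gauss(0.0, 1.0) > x1 for _ in range(n_wrong))
        if t < k:
            hits += 1
    return hits / trials
```

For example, with ∆ = 4 and 1,000 non-matches, the estimated probability of the true match appearing in the top 10 is high (roughly 0.9 in this toy model), and it falls sharply when ∆ shrinks to 2—previewing the sensitivity to mean separation developed below.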
9–F.1 A Simple Formulation

For a particular combination of image capture technology and algorithm, the comparison of a newly acquired casing with the N images in the database yields comparison scores X1, …, XN. (The scores themselves are functions of the comparison algorithm but are considered variable—and subject to a probability distribution—because of the variability in the markings of the newly acquired casing, because the arrival of a new casing can be seen as a draw from an underlying distribution, and because of variability in the image capture process.) Assume throughout that a high score implies a good match; furthermore, as stated above, assume that there is a casing in the database that corresponds to the crime gun (so that there is a true or “right” match). To be specific, let X1 be the score obtained for the “right” match. Suppose the scores X1, …, XN are independent. (See the end of this section for a discussion of this assumption.) Let Xi be distributed according to Fi(x), i = 1, …, N. Furthermore, let

Ij = I[Xj > X1], j = 2, …, N,

denote the indicator of the event that one of the wrong casings has a higher score than X1, the right match. Note that the Ij’s are dependent since X1 is common to all of them. Let pj = P(Ij = 1) = P(Xj > X1). One can compute pj using the expression

pj = ∫ P(Xj > x) dF1(x).

The key random variable of interest for our problem is

T = I2 + I3 + … + IN,

the number of scores that are ranked higher than the true match X1. The questions of interest can be answered if one can compute the distribution of T. For example, the probability that the score of the true casing is in the top-K matches is obtained by computing P(T < K): that is, the probability that the total number of wrong matches is strictly less than K. Similarly, the question of how large K should be chosen to ensure that this probability is at least (1 − α) is answered by choosing K so that P(T < K) ≥ 1 − α. Analyzing this distribution will also show how
the probability and K = K(α) vary with the size of the database and what other factors influence them. It is clear that they depend critically on the pj’s, the probabilities defined above. (Other important parameters are discussed below.) If the Ij’s were independent, then in the simple case where all the pj’s are the same and equal to p, T would have a binomial distribution with parameters N − 1 and p. However, the Ij’s are not independent; in this case, with a single p, T has a correlated binomial distribution with a simple correlation structure. In our application, however, the pj’s will all be different, and the distribution of T is more complicated. But one can still write down expressions for the distribution of T. For example, the probability that X1 is the top score is

P(T = 0) = ∫ [1 − p2(x)][1 − p3(x)] … [1 − pN(x)] dF1(x),

where pj(x) = P(Xj > x). Expressions for P(T = k) and P(T ≤ k) can be similarly written down. However, one will have to resort to numerical or other kinds of approximation to compute the required probabilities. Since N is very large, a normal approximation is the simplest and most natural. It is easy to see that the mean of T is

β = E(T) = p2 + p3 + … + pN.

For computing the variance, since the Ij’s are dependent (due to the common X1), we have to take the covariances into account. The variance of T is

γ² = Var(T) = ΣjΣk (pjk − pjpk),

with the sums running over j, k = 2, …, N, where pjk = pj if j = k and pjk = P(Xj > X1, Xk > X1) if j ≠ k. One can now approximate the distribution of T by a normal distribution with mean β and variance γ². Based on this, the probability of having the correct match in the top-K scores can be approximated as

P(T < K) ≈ Φ((K − β)/γ).

Furthermore, to ensure that this probability of the correct one being in the top-K scores is at least (1 − α), i.e., P(T < K) ≥ 1 − α, we must take K ≥ β + z1−α γ, where z1−α is the (1 − α) quantile of the standard normal distribution.
The key factors underlying these are β and γ, which depend on the pj’s and the pjk’s. To see more clearly what influences these pj’s and pjk’s, suppose the distributions of the Xi’s are all Gaussian, that is, Fi(x) is Φ((x − μi)/σi), i = 1, …, N. (One can just as easily consider any other parametric distribution.)

9–F.2 Calculations and Insights

In the rest of this appendix we take the Xi’s to be independent and normal with mean μi and variance σi². Then

pj = P(Xj > X1) = Φ((μj − μ1)/√(σ1² + σj²)).

Furthermore, for j ≠ k,

pjk = P(Xj > X1, Xk > X1) = ∫ Φ((μj − μ1 − σ1z)/σj) Φ((μk − μ1 − σ1z)/σk) φ(z) dz,

where φ is the standard normal density. These correspond to probabilities of quadrants of bivariate normal random variables and have to be calculated numerically. We offer two general observations. First, the Gaussian case is much more general than it seems at first. The rankings of the scores are invariant under any monotone transformations of the Xi’s, i.e., I[Xj > X1] = I[h(Xj) > h(X1)] for any monotone increasing, continuous function h(·). Thus, assuming a lognormal distribution, for example, is equivalent to assuming a normal distribution. Second, recall the assumption that the scores X1, …, XN are independent. Since these are all matches to the same casing from the crime scene, a natural question is whether this will induce dependence among the Xi’s and, if so, how the assumption of independence will affect the results. Statistically speaking, what is the difference between treating the image of the crime scene casing as fixed versus random? It turns out, however, that if the effect of the common source of dependence is the same on all the Xi’s, it does not matter. Specifically, suppose Xi = Yi + Z for i = 1, …, N, where the Yi’s are independent and Z is the common source of dependence for the Xi’s due to the crime scene casing. Then, it is easy to see that I[Xj > X1] = I[Yj > Y1], where the Yi’s are independent. The dependence can be more than additive, as long as it is additive up to a monotone increasing transformation.
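Both the closed form for pj and the invariance under monotone transformations are easy to check numerically. The following sketch uses illustrative parameter values (μ1 = 4, μj = 0, unit variances) that are our own, not drawn from the report:

```python
import math
import random
from statistics import NormalDist

# Illustrative (hypothetical) parameters: true-match score X1 ~ N(mu1, s1^2),
# one non-match score Xj ~ N(muj, sj^2).
mu1, s1, muj, sj = 4.0, 1.0, 0.0, 1.0

# Closed form: pj = P(Xj > X1) = Phi((muj - mu1) / sqrt(s1^2 + sj^2)),
# since Xj - X1 is normal with mean muj - mu1 and variance s1^2 + sj^2.
p_closed = NormalDist().cdf((muj - mu1) / math.sqrt(s1**2 + sj**2))

# Monte Carlo check, on both the raw scores and a monotone transform
# h(x) = exp(x) (i.e., lognormal scores): the event {Xj > X1} is unchanged.
rng = random.Random(7)
n = 100_000
raw = trans = 0
for _ in range(n):
    x1, xj = rng.gauss(mu1, s1), rng.gauss(muj, sj)
    raw += xj > x1
    trans += math.exp(xj) > math.exp(x1)  # same indicator after h
mc = raw / n  # should be close to p_closed, with raw == trans exactly
```

Here p_closed = Φ(−4/√2) ≈ 0.0023, the Monte Carlo frequency agrees within sampling error, and the transformed comparison matches the raw one draw by draw—which is why a lognormal score model is equivalent to a normal one for ranking purposes.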
More specifically, if Xi = h(Yi + Z) for a monotone increasing function h, then I[Xj > X1] = I[Yj > Y1], where the Yj’s are independent. It is possible that the effect of the common source (i.e., the crime scene casing) is not the same on the different images, in which case the analysis will be more complicated. We will not deal with this case here. For two cases, we compute the probabilities of interest under several scenarios to see how they vary with N and the parameter values μj’s and σj’s of the Gaussian distributions.

Case 1

We start with the simple case where there is only one gun type, D = 1, and all the images correspond to different guns of the same type. In some sense, this is the make-or-break case, since there has to be enough separation among the images that correspond to guns of the same type. One has the matching image X1 from the crime scene gun and the others X2, …, XN that are all from different guns but of the same type. To keep things simple, assume that X2, …, XN all have the same distribution with parameters μ2 and σ2. Let μ1 and σ1 be the mean and standard deviation of the matching image. The computations depend only on the difference μ1 − μ2, so one can assume without loss of generality that μ1 = 0. We consider different values for N and ∆ = μ1 − μ2 in the calculations. In this analysis, we address only the second question that is posed in the introduction: What are the values of K = K(α) needed to ensure a confidence level of at least 100(1 − α)%, that is, that a correct image is found in the top K with at least the specified probability? The tables below give the number K of matches we need to examine to ensure that the true casing is in the top K for a given size of the database and parameter configurations. We also give K corresponding to 50 percent even though a 50 percent confidence level would commonly be viewed as unacceptable; the main reason for giving it is that it corresponds to the mean of the random variable T.
It provides a (conservative) lower bound to the value of K under various assumptions about the variances of the Xj’s.

Optimistic Scenario

It turns out that the values of K(α) depend greatly on the ratio of σ1 to σ2, that is, the variability of the true match relative to that of the wrong matches. First take the extreme case where σ1 = 0, i.e., X1 has zero variance. Recall that one is interested in the random variables Ij = I[Xj > X1] and T = I2 + … + IN. If X1 has zero variance, then the Ij’s are independent. Furthermore, in this special case where X2, …, XN have the same distribution, T has a binomial distribution.
TABLE 9-2 Values of K(α) for Various Configurations of N and α for the Optimistic Scenario

                           Confidence Level
∆   N − 1 = n1 − 1      50%    75%    90%    99%
2            1,000       23     26     29     34
2           10,000      228    238    247    262
3           10,000       14     16     19     23
3          100,000      135    143    150    163
4          100,000        4      5      6      8
4        1,000,000       32     36     39     45
4       10,000,000      317    329    340    359
5       10,000,000        3      5      6      7
5      100,000,000       29     33     36     42

Table 9-2 gives the values of K(α) for various combinations of N and ∆ that might be of interest. For example, if ∆ = μ1 − μ2 = 4 and there are about 100,000 images from the same type of gun in the database, and one wants a 99 percent confidence level, then one needs to look at the top K = 8 matches. If N increases to about 1,000,000, then one needs to look at the top K = 45 matches. The situation considered here—that the variance of X1 is zero or very small relative to that of the other matches—is a very optimistic scenario. The required number of matches will be much larger when the variance of X1 is of the same order of magnitude as that of the other Xj’s. We turn to this comparison next. But a caveat is in order first: the confidence levels in Tables 9-2 through 9-5 refer only to the probability of the true match being in the top K. They do not say anything about the correct one being actually identified in practice, which would depend on a firearms examiner reviewing the results of all K matches and finding the correct one (retrieving the physical evidence for a direct comparison). This may or may not actually happen.

Pessimistic Scenario

This scenario considers exactly the same setup as before except that σ1 = σ2. The results depend only on the ratios, so one might as well take them to equal one. For the computations in Table 9-3, we used Monte Carlo simulation to approximate the required probabilities.
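The optimistic-scenario values can be reproduced with only Python's standard library. The sketch below assumes the normal approximation described above, with K rounded up; under that convention it matches the Table 9-2 entries.

```python
import math
from statistics import NormalDist

ND = NormalDist()  # standard normal

def k_alpha_optimistic(delta, n_wrong, conf):
    """K needed for the true match to be in the top K with probability conf.

    Optimistic scenario (sigma_1 = 0): the indicators Ij are independent,
    so T ~ Binomial(n_wrong, p) with p = Phi(-delta).  K comes from the
    normal approximation K = ceil(beta + z_conf * gamma).
    """
    p = ND.cdf(-delta)
    beta = n_wrong * p
    gamma = math.sqrt(n_wrong * p * (1 - p))
    return math.ceil(beta + ND.inv_cdf(conf) * gamma)

# First row of Table 9-2: delta = 2, N - 1 = 1,000
print([k_alpha_optimistic(2, 1000, c) for c in (0.50, 0.75, 0.90, 0.99)])
# -> [23, 26, 29, 34]
```

The same function reproduces the other rows, e.g. ∆ = 4 with N − 1 = 100,000 gives K = 4, 5, 6, and 8 at the four confidence levels.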
TABLE 9-3 Values of K(α) for Various Configurations of N and α for the Pessimistic Scenario

                           Confidence Level
∆   N − 1 = n1 − 1      50%     75%     90%     99%
2            1,000       79     165     245     380
2           10,000      787   1,660   2,450   3,815
3            1,000       17      50      80     130
3           10,000      169     500     800   1,310
4            1,000        2      12      20      33
4           10,000       23     110     190     325
4          100,000      233   1,110   1,900   3,255
5           10,000        2      18      33      60
5          100,000       20     190     340     600
5        1,000,000      203   1,875   3,380   5,970
6          100,000        1      23      43      76
6        1,000,000       11     230     430     770
6       10,000,000      110   2,310   4,280   7,680
7        1,000,000        1      25      45      80
7       10,000,000        4     230     430     775
8       10,000,000        1      10      20      40

Even though the simulation error was less than 10⁻⁸, the error in the standard error of T can be large when the database size N is of the order of 10⁶ or bigger. Recall that there are roughly N² covariance terms. So there is large variability in the values of K in Table 9-3 for large N, and for these cases, they should be interpreted only as providing approximate guidelines. Several features are of interest in Table 9-3. First, the values of K are much larger than in Table 9-2. The reason for the larger values of K is that the mean of T is larger, since each pj now equals Φ(−∆/√2) instead of Φ(−∆) in the earlier case. Furthermore, the variance of T is now much larger due to the positive correlation among the Ij = I[Xj > X1]’s. This dependence gets larger with the ratio σ1/σ2, i.e., the variance of X1 relative to the others. A particularly discouraging feature is that, for fixed ∆ and α, the values of K scale up almost linearly in the size of the database N. In the independent case of Table 9-2, the standard deviations were scaling up in terms of √N. But here they are scaling up linearly due to the covariances. More specifically, there are (N − 1)(N − 2) covariance terms, and these are
about the same order as the variance of Ij, so the standard deviation of T is now increasing linearly with N; this is troublesome as it leads to much larger values of K.

Case 2

We now consider situations in which there is more than one gun type in the database. The essence of the problem can be captured by just two types, so we restrict attention to this case. Again, assume that X1 has mean μ1 and variance σ1², that all the Xj’s corresponding to the same gun type as X1 have common mean μ2 and variance σ2², and finally that all the Xj’s corresponding to the second gun type have common mean μ3 and variance σ3². Tables 9-4 and 9-5 give the values of K = K(α) for various values of ∆1 = μ1 − μ2, ∆2 = μ1 − μ3, n1, and n2. Table 9-4 corresponds to the optimistic scenario where the variance of X1 is zero. Recall that the Ij’s are all independent in this case. Table 9-5 corresponds to the pessimistic case where the variance of X1 is the same as the variance of the other Xj’s.

TABLE 9-4 Values of K(α) for Various Configurations of n1 − 1, n2, ∆1, ∆2, and α for the Optimistic Scenario

                                       Confidence Level
∆1    n1 − 1     ∆2          n2      50%    75%    90%    99%
2      1,000      3       1,000       25     28     31     36
2      1,000      4      10,000       24     27     30     35
2      1,000      5     100,000       23     26     29     34
2      1,000      5   1,000,000       23     26     29     34
3     10,000      4     100,000       17     20     22     27
3     10,000      4   1,000,000       46     50     54     61
4  1,000,000      5   1,000,000       32     36     40     46
4  1,000,000      5  10,000,000       35     39     43     49

The calculations in Tables 9-4 and 9-5 suggest that—as in the simpler one-gun case—values of K can quickly grow to levels of practical implausibility from the perspective of reviewing database comparison reports, particularly for low ∆ values and less-clear separations between gun types. However, they also illustrate the importance of the degree of mean separation between the images from different gun types (akin to the discussion of overlap metrics in Section 9–C.3).
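The covariance inflation that drives the pessimistic results can be made concrete. The sketch below approximates the one-type pessimistic values of Table 9-3 (σ1 = σ2 = 1) by computing pjk = E[Φ(−∆ − Z)²] with a simple quadrature instead of Monte Carlo; since Table 9-3 was itself simulated, the two agree only up to small approximation error (for ∆ = 3 and N − 1 = 1,000 this returns values close to the tabled 17, 50, 80, 130).

```python
import math
from statistics import NormalDist

ND = NormalDist()  # standard normal

def k_alpha_pessimistic(delta, n_wrong, conf):
    """Approximate K for the pessimistic one-type scenario (sigma1 = sigma2 = 1).

    Here p = Phi(-delta / sqrt(2)), and because the indicators share X1 the
    variance of T picks up n(n - 1) covariance terms involving
    p_jk = E[Phi(-delta - Z)^2] for Z ~ N(0, 1), evaluated by quadrature.
    K again comes from the normal approximation ceil(beta + z_conf * gamma).
    """
    p = ND.cdf(-delta / math.sqrt(2))
    # p_jk = integral of phi(x) * Phi(-delta - x)^2 dx over [-8, 8]
    h, lo = 0.01, -8.0
    p_jk = sum(ND.pdf(lo + i * h) * ND.cdf(-delta - (lo + i * h)) ** 2
               for i in range(1601)) * h
    beta = n_wrong * p
    var = n_wrong * p * (1 - p) + n_wrong * (n_wrong - 1) * (p_jk - p * p)
    return math.ceil(beta + ND.inv_cdf(conf) * math.sqrt(var))
```

Evaluating the function across the Table 9-3 configurations also illustrates the near-linear growth of K with N noted in the text, since the covariance term contributes on the order of N² to the variance.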
Notice in Table 9-5 that if ∆2 is 2 units bigger than ∆1 and n1 = n2, the values of K are about the same as those in Table 9-3. A similar conclusion
holds if ∆2 is 3 units bigger than ∆1 and n2 = 10n1, or if ∆2 is 4 units bigger than ∆1 and n2 = 100n1. So, for instance, the ability to detect matches in a relatively small database containing equal numbers of moderately distinct images (∆1 = 4, ∆2 = 6; 10,000 each) is comparable to that when one small set of images (∆1 = 4; 10,000) is flooded with 1,000,000 images that are vastly different in mean (∆2 = 8).

TABLE 9-5 Values of K(α) for Various Configurations of n1 − 1, n2, ∆1, ∆2, and α for the Pessimistic Scenario

                                       Confidence Level
∆1    n1 − 1     ∆2          n2      50%    75%    90%    99%
3      1,000      5       1,000       17     52     82    135
3      1,000      6      10,000       17     51     82    135
3      1,000      7     100,000       17     51     81    133
4     10,000      6      10,000       24    113    192    330
4     10,000      7     100,000       24    112    192    330
4     10,000      8   1,000,000       24    112    190    325
5     10,000      7      10,000        3     20     35     60
5     10,000      8     100,000        3     20     35     62
5     10,000      9   1,000,000        3     19     35     61
6    100,000      8     100,000        2     25     50     85
6    100,000      9   1,000,000        2     24     44     76
6    100,000     10  10,000,000        2     26     50     80
7  1,000,000      9   1,000,000        1     30     50     90
7  1,000,000     10  10,000,000        1     26     48     85
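The optimistic two-type values in Table 9-4 follow the same pattern as the one-type case: with σ1 = 0 the indicators are independent, so T is a sum of two binomial counts, one per gun type, and the means and variances simply add. A sketch using the normal approximation described earlier in this appendix:

```python
import math
from statistics import NormalDist

ND = NormalDist()  # standard normal

def k_alpha_two_types(d1, n1, d2, n2, conf):
    """Optimistic two-type case: T is the sum of independent
    Binomial(n1, Phi(-d1)) and Binomial(n2, Phi(-d2)) counts, so the
    mean and variance add, and K = ceil(beta + z_conf * gamma) as before.
    """
    p1, p2 = ND.cdf(-d1), ND.cdf(-d2)
    beta = n1 * p1 + n2 * p2
    gamma = math.sqrt(n1 * p1 * (1 - p1) + n2 * p2 * (1 - p2))
    return math.ceil(beta + ND.inv_cdf(conf) * gamma)

# First row of Table 9-4: delta1 = 2, n1 - 1 = 1,000, delta2 = 3, n2 = 1,000
print([k_alpha_two_types(2, 1000, 3, 1000, c) for c in (0.50, 0.75, 0.90, 0.99)])
# -> [25, 28, 31, 36]
```

Setting n2 = 0 recovers the one-type optimistic case, and increasing d2 with n2 held large shows why a flood of images from a well-separated second gun type barely moves K—the pattern visible down the rows of Table 9-4.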