Ballistic Imaging

6 Operational and Technical Enhancements to NIBIN

As discussed in Chapter 1, the committee’s interpretation of our charge is focused on offering advice on three basic policy options: maintain the National Integrated Ballistic Information Network (NIBIN) system as it is, enhance the NIBIN system in several possible ways (without expanding its scope to include new and imported firearms), or establish a national reference ballistic image database as a complement or adjunct to the current NIBIN. The first of these options may readily be viewed as something of a “straw man,” particularly given the open-ended nature with which we were asked to consider enhancements or improvements to NIBIN. No program is perfect: there is always opportunity for refinement and improvement, and such is the case with NIBIN. The underlying concepts of NIBIN are sound—facilitating transfer of information between geographically dispersed law enforcement agencies and giving those agencies access to technology that could generate investigative leads that would otherwise be unavailable. However, the program falls short of its potential in several respects, and this chapter proposes some directions for improvement.

After briefly reviewing other perspectives that have been raised about improving the content and performance of the NIBIN system (Section 6–A), our comments focus on possible and suggested enhancements. The second section (6–B) considers operational enhancements, those that concern the administration of the program and the use of the system in general. The third section (6–C) considers technical enhancements, those that deal with the specific technology used by the NIBIN program; this section builds on Chapter 4’s discussion of the current Integrated Ballistics Identification System (IBIS) platform.
In phrasing some of our recommendations, we opt for generic descriptions—“ATF and its NIBIN contractors” or the “NIBIN technical platform”—since they describe functionality that should apply regardless of the specific platform or vendor. One major possible enhancement of interest to the committee—a change in the basic imaging standard from two-dimensional photography to three-dimensional topography—is not discussed here; instead, we give the topic more detailed examination in Chapters 7 and 8.

6–A OTHER PERSPECTIVES ON NIBIN ENHANCEMENT

The Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) and the NIBIN program have made strides to gather feedback on system procedures and performance from the user base, efforts for which they should be commended. Formal forums for gathering feedback have included periodic meetings of the ATF-established NIBIN Users Congress since November 2002; users are also asked to serve as regional outreach coordinators, providing a sounding board for comments both informally and through the user group sessions. Based on the user group meetings, ATF and Forensic Technology WAI, Inc. (FTI), periodically update (and describe progress in addressing) a “top 10” list of user concerns and suggestions for improving NIBIN and the IBIS platform. In addition, NIBIN program staff periodically collect reports from the regions on indicators of system usage—e.g., cross-regional searches and number of correlation requests that have not been reviewed by local sites—that go beyond the monthly operational statistics. The committee chair and staff attended the sixth NIBIN Users Congress meeting at FTI’s U.S. training center in Largo, Florida, in October 2004. That session suggested a strong commitment among program managers and local users to making the system work more effectively as a key part of routine investigations.
Concerns expressed at the meeting ranged from time-consuming software glitches (e.g., the focus jumping to the top of the list when an already-viewed comparison report is deleted rather than advancing to the next line) to serious interface issues (e.g., problems with the lighting filter on the microscope, particularly for side light images, that led some agencies to jury-rig fixes using Post-It notes to get acceptable images). This particular session came in the wake of the rollout of a new version of IBIS software meant to be compliant with federal government and Department of Justice cybersecurity requirements. The switch to the new version was problematic and debilitating in some sites, effectively shutting down evidence entry for days or weeks; user feedback helped assess the scope of the implementation problems and can suggest better practices for future major revisions. Some of the enhancements we suggest below reflect
comments from the Users Congress meeting, as well as other observations from committee member visits to local NIBIN installations.

Another source of commentary on specific enhancements to improve NIBIN is the operational audit of the program conducted by the U.S. Department of Justice, Office of Inspector General (2005). The audit offered 12 formal recommendations to ATF; see Box 6-1. The audit included examination of a complete snapshot of the NIBIN database and its attempt to link NIBIN data to Uniform Crime Reports data based on Originating Agency Identifier (ORI) codes: hence the specific recommendations to ensure ORI reporting (about 55,000 records in the databases had missing ORI codes) and the specific identification glitch detected for cases in Colorado and Rhode Island. The Inspector General report also offers sound advice to evaluate the user base for the portable Rapid Brass Identification (RBI) units, which have the potential for permitting cartridge case entries by other agencies without a full IBIS set-up but which have been found to be problematic by previous users. We generally concur with the Inspector General’s recommendations and advance some themes from those recommendations in our own guidance below. As noted in Box 6-1, ATF reviewed a draft of the Inspector General’s audit and was asked for comment; the agency indicated partial or full concurrence with all 12 specific recommendations.

BOX 6-1 Recommendations from 2005 U.S. Department of Justice Inspector General Audit of NIBIN Program

Based on its review of NIBIN practices, the U.S. Department of Justice, Office of Inspector General (2005) offered 12 specific recommendations to ATF in its audit report:

1. Determine whether additional IBIS equipment should be purchased and deployed to high-usage nonpartner agencies, or whether equipment should be redistributed from the low-usage partner agencies to high-usage nonpartner agencies.
2. Provide additional guidance, training, or assistance to the partner agencies that indicated they did not perform regional or nationwide searches because they either lacked an understanding of the process or lacked manpower to perform such searches.
3. Ensure that NIBIN partner agencies enter the [Originating Agency Identifier (ORI)] number of the contributing agency for all evidence entered into NIBIN.
4. Resolve the duplicate case ID number issue in the NIBIN database for the Colorado Bureau of Investigation–Montrose and the Rhode Island State Crime Laboratory.
5. Research the reasons why 12 agencies have achieved high hit rates with a relatively low number of cases entered into NIBIN and share the results of such research with the remaining partner agencies.
6. Establish a plan to enhance promotion of NIBIN to law enforcement agencies nationwide to help increase participation in the program. The plan should address steps to: (1) increase the partner agencies’ use of the system, (2) increase the nonpartner agencies’ awareness and use of the system, and (3) encourage the partner agencies to promote the NIBIN program to other law enforcement agencies in their area.
7. Determine whether new technology exists that will improve the image quality of bullets enough to make it worthwhile for the participating agencies to spend valuable resources to enter the bullet data into NIBIN, and deploy the technology if it is cost-effective.
8. Perform an analysis of the current [Rapid Brass Identification (RBI)] users, and any other potential users, to determine if they would use an improved system enough to warrant the additional cost. If the analysis concludes that another system would be cost-effective, then ATF should pursue funding to obtain the system.
9. Provide guidance to partner agencies on the necessity to view correlations in a timely manner and to ensure that correlations viewed in NIBIN are properly marked.
10. Monitor the nonviewed correlations of partner agencies and take corrective actions when a backlog is identified.
11. Research ways to help the partner agencies eliminate the current backlog of firearms evidence awaiting entry into NIBIN. The research should consider whether the partner agencies can send their backlogged evidence to the ATF Laboratories or to other partner agencies for entry into NIBIN, and whether improvements to the efficiency of NIBIN would facilitate more rapid and easy entry of evidence.
12. Coordinate with Department of Justice law enforcement agencies that seize firearms and firearms evidence to help them establish a process for entering the seized evidence into NIBIN.

Asked to review a draft of the audit report, ATF noted its partial or full concurrence with every recommendation; the ATF response comprises Appendix XV of the audit report.

SOURCE: Text of recommendations excerpted from U.S. Department of Justice, Office of Inspector General (2005).

6–B OPERATIONAL ENHANCEMENTS

Suggesting operational enhancements to the NIBIN program is a complicated task due to the program’s very nature. At its root, NIBIN is a grant-in-aid program that makes ballistic imaging technology available to law enforcement agencies to an extent that would not be possible if departments had to acquire the necessary equipment on their own. However, although ATF provides the equipment, the state and local law enforcement agencies must supply the resources for entering exhibits and populating the database. Accordingly, the incentive structures are complex: promoting top-down efforts by NIBIN administration to stimulate NIBIN entry necessarily imposes costs on the local departments. So, too, does suggesting that local NIBIN partners make concerted outreach efforts to acquire and process evidence from other agencies in their areas. The benefits that may accrue can be great, providing the vital lead that may put criminals in jail or generating the spark that may solve cold cases. Yet those benefits are not guaranteed, and the empirical data needed to inform the tradeoffs—on the number and nature of queries or on the success of NIBIN in making “warm” hits where there is some (but perhaps weak) investigative reason to suggest links between incidents—are not collected. Our suggested operational enhancements thus follow two basic themes. First, the process for acquiring evidence should be improved and, when possible, streamlined in order to promote active participation by NIBIN partners and to make ballistic imaging competitive with DNA and other types of analysis for scarce forensic laboratory resources.
Second, the NIBIN management must have the information and resources necessary to allocate and reallocate equipment to agencies in order to maximize system usage.

6–B.1 Priority of Entry

In suggesting ways to improve the entry of evidence, a natural place to start is to suggest a prioritization or a structure for entry: which types of ballistics evidence, generally or from specific types of crimes, should be given top priority in order to maximize chances of obtaining hits and generating leads? On this point, the current composition of the NIBIN database suggests preferences that have emerged among partner agencies: more cartridge casings are entered than bullets and, in both instances, exhibits from test firings of recovered weapons are more frequently entered than individual specimens recovered as evidence from crime scenes. Recommendation 7 of the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005) urges a general reconsideration of the imaging of bullets, motivated by survey responses from agencies about why they do not enter bullet evidence. Reasons cited for not entering bullets into NIBIN included the time-consuming and difficult nature of acquiring bullets, as well as a perceived low probability of success in generating hits. It has also become common practice among NIBIN users to acquire only firing pin and breech face images from cartridge casings and not the ejector marks when those are available. From observations of NIBIN sites, this seems to be largely due to the added time required to acquire that image (free-hand tracing of the region of interest), even though some research described in Section 4–E documents increased chances of generating hits when all three images are collected. Understanding that decisions on entry priorities must be made at the local level, as determined by available resources, we suggest one basic ordering.

Recommendation 6.1: In managing evidence entry workload, NIBIN partner sites should give highest priority to entering cartridge casings collected from crime scenes, followed by bullet evidence recovered from crime scenes.

This recommendation is based in part on the findings of our study of completed hits in 1 year’s worth of operational data from NIBIN; evidence suggests that the prompt acquisition and processing of cartridge case evidence results in the greatest number of hits. We do not discount the importance of the hits that arise from the entry of specimens test fired from firearms recovered by the police; links drawn to past cases (and past crimes) can be very useful in effective prosecution of criminal suspects.
However, we believe that the system’s greatest benefit may come from its use as a tool for working with active, open case files, generating investigative leads that may result in the apprehension of at-large suspects rather than confirming other offenses associated with a gun (and suspect) already in police custody. Though our committee’s focus on a national reference ballistic image database has led us to concentrate more on the imaging of cartridge cases than bullets, we give the entry of evidence bullets a slight edge in priority over the entry of nonevidence (test-fired) cartridge casings. This again favors emphasizing the use of NIBIN in the most active crime investigations. However, this choice will ultimately be contingent on continuing improvements to the technology, streamlining the image acquisition process and improving comparison results for bullets. (We discuss related concerns
on the tension between entering bullet and casing evidence in Section 6–B.3.) A rough priority order for the entry of evidence would be the following:

1. cartridge case evidence recovered at crime scenes,
2. bullet evidence recovered at crime scenes,
3. casings test fired from weapons recovered by police that will not be destroyed or removed from circulation (i.e., must be returned to owner),
4. bullets test fired from weapons recovered by police that will not be destroyed or removed from circulation,
5. casings from weapons recovered at crime scenes that are to be destroyed,
6. bullets from weapons recovered at crime scenes that are to be destroyed, and
7. evidence entries that are archival in nature (e.g., working through and modernizing a backfile).

6–B.2 Expanding System Usage

Hits are only possible in the NIBIN system if evidence is entered into the database, and local departments will put priority on entering evidence into NIBIN only if they see tangible benefit in the form of hits. Given this circularity, we believe it is important that the potential for NIBIN to generate active investigative leads be the primary emphasis; to the extent that NIBIN entry is viewed as drudgery or simply “feeding the beast” to no apparent end, participation will wane.

Recommendation 6.2: In order to promote wider use of NIBIN resources and to ensure that entry of ballistics evidence into NIBIN is a high priority, ATF should work with state and local law enforcement agencies to encourage them to incorporate ballistic imaging as a vital part of the criminal investigation process. This work should include early and continued involvement of agency forensic staff in working with detectives on cases involving ballistics evidence and regular department reviews of NIBIN-related cases.
This kind of promotion should include encouragement of programs like the Los Angeles Police Department’s “Walk-In Wednesdays,” a designated time for detectives to consult with firearms examiners and IBIS technicians, enter evidence into NIBIN, and analyze the resulting comparisons. The lessons learned in areas like Boston (as described in Appendix A), where cross-jurisdictional NIBIN searches have proven highly successful, should also be studied and disseminated to the broader NIBIN partner base. Through its “Hits of the Week” program, the central NIBIN program administration has provided limited anecdotal data on the system’s performance across jurisdictions and in solving a variety of crime types. These kinds of case stories can serve to instill confidence in the system and promote continued “buy-in” by NIBIN partner sites. As described in Chapter 5, though, the “Hits of the Week” most often chronicle cases in which NIBIN analysis is only brought into play when a firearm—and frequently a suspect—is in custody. The “Hits of the Week” that speak to links between evidence casings and bullets are less satisfying as short anecdotes because they typically have to be left unresolved, noting that “investigation is continuing” or that leads are being followed up. The NIBIN program would be well served by supplementing the staccato “Hits of the Week” with more detailed investigative studies of completed cases that describe the contribution of NIBIN-generated leads.

On the subject of hits, the NIBIN program has the capacity to make a simple change that may help participation by overcoming an odd quirk and subtle disincentive in the current structure.

Recommendation 6.3: A separate count variable of cross-jurisdictional hits should be added to the system’s basic operational statistics, crediting both the originating jurisdiction of linked evidence and the site that confirms the hit.

As described in Box 5-3, the NIBIN program currently credits completed “hits” to the site that actually completes the microscopic examination that confirms the match. In many cases, matches will be made between pieces of evidence within the same agency and the same NIBIN site. However, other hits may be made locally (including evidence from nonpartner agencies submitting evidence to a NIBIN site), regionally, or cross-regionally. Both agencies are instructed to mark completed hits in their system, but only the agency confirming the hit is supposed to report it to NIBIN management. Moreover, “if a hit occurs between two sites, the information is not transferred to the other site by the system. Rather, the other site must be [separately] notified to create the hit in its own database” (U.S. Department of Justice, Office of Inspector General, 2005:110).
It is a serious impediment that data on interagency hits are not automatically or systematically recorded as part of the NIBIN program’s default operational statistics; without that information, it is difficult to have a complete sense of the system’s usage. But the current asymmetric definition of a hit also sharply undercuts the “network” aspect of NIBIN: agencies that serve as good partners (or that take the trouble to route evidence to NIBIN partners in their area) by entering their data in a timely fashion should receive credit when their effort bears fruit, even if the hit is actually confirmed elsewhere. Ideally, tabulations should be made not only of hits across NIBIN sites but across different ORI codes as well, in order to better detect current nonpartners who might benefit from NIBIN equipment installation.
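The symmetric crediting called for in Recommendation 6.3 can be sketched in a few lines. This is a minimal illustration only—the site names, record fields, and function are our own inventions, not NIBIN data structures—showing how both the originating and confirming sites receive credit while each hit is still counted exactly once:

```python
# Hypothetical sketch of symmetric hit crediting (Recommendation 6.3).
# Site names and record fields are illustrative, not NIBIN's own schema.
from collections import Counter

def tally_hits(hits):
    """Return (total hit count, per-site credit) for a list of hit records.

    Each confirmed hit is counted once in the total, but both the site
    that entered the linked evidence and the site that confirmed the
    match receive credit for it.
    """
    total = 0
    credit = Counter()
    for hit in hits:
        total += 1  # the hit itself is counted once, avoiding double-counting
        credit[hit["originating_site"]] += 1
        if hit["confirming_site"] != hit["originating_site"]:
            credit[hit["confirming_site"]] += 1
    return total, credit

example_hits = [
    {"originating_site": "Site A", "confirming_site": "Site B"},
    {"originating_site": "Site A", "confirming_site": "Site A"},
]
total, credit = tally_hits(example_hits)
```

Under this scheme, a timely entry by Site A that is later confirmed by Site B shows up in Site A's credit as well, without inflating the system-wide count of investigative leads.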
Alternatively, the NIBIN definitions of a “hit” could be revised to be symmetric, crediting both the source(s) and the verifier of evidence matches. However, this change is undesirable because it would double-count (or more) the number of NIBIN-generated investigative leads.

6–B.3 Improving Image Entry Protocols

The acquisition of evidence into NIBIN can be very time consuming, particularly for bullet evidence. Even for cartridge casings, the mechanics of positioning evidence under the microscope and taking the images is only a part of the time demand. The time needed to collect the images may be topped by the time needed to clean, prepare, and mount the evidence; the time to prepare necessary paperwork, notes, and reports on entry; the time to prepare written reports on possible and completed hits; and the filing (or refiling) of evidence into storage. The acquisition process needs to be routinized and rigorous; analysis is for naught if anything in the acquisition process compromises the chain of evidence and renders the exhibits inadmissible in court. When local agencies have affirmed a commitment to ballistic imaging as part of their analyses and revised their procedures for the entry and filing of evidence, they have developed streamlined procedures that make NIBIN entry more rapid. A notable example of this type of procedural review was completed by the New York City Police Department (NYPD), which reviewed its evidence processing routines and revamped them into the “Fast Brass” system (see Box 6-2). Building from models like the New York example, other departments may find ways to work through existing backlogs and realize more benefits from their NIBIN participation.

Recommendation 6.4: State and local law enforcement agencies should be encouraged to streamline the ballistic image acquisition process and reporting requirements as much as possible, in order to facilitate rapid data entry and avoid evidence backlogs.
The California technical evaluation of a potential state reference ballistic image database made reference to low levels of bullet hits achieved by the NYPD. The ATF critique of the technical evaluation attributed this to one part of the Fast Brass process: the department’s policy of entering only casings if both bullets and casings are recovered from the same crime scene. Thompson et al. (2002:17) commented that “ATF utilizes both the bullet and cartridge casing entry aspects of IBIS, and we recommend that our NIBIN partner agencies do the same in entering their crime gun evidence.” They argue that the NYPD policy jeopardizes the chances to
make hits in crimes where casing evidence is not likely to be recovered: “drive-by shootings in which the bullets are found at the scene but the casings remain in the shooter’s vehicle, for example.” It is impossible to fully evaluate the tradeoff between entering bullets and entering casings without a line of empirical research that is lacking at present: when both casings and bullets are recovered from the same scenes or collected in test firings and both are entered into NIBIN, how do relative scores and ranks on the cartridge case markings compare to those for bullets? Further work in this area could also help finalize a priority order for exhibit entry, as described in Recommendation 6.1, suggesting how potential gains in generating hits compare with the resource efficiencies inherent in favoring the entry of casings over bullets.

BOX 6-2 New York City Police Department “Fast Brass” Processing

The New York City Police Department policy is to enter ballistics evidence into its IBIS within 24 to 48 hours of its delivery to the department’s crime lab. A typical IBIS entry workload is on the order of 10–40 bullets and 100–150 cartridge casings per week. In 2002, faced with an IBIS entry backlog of about 1,300 cases, the department sought to streamline its entry process to eliminate redundancy. The resulting “Fast Brass” process pared the inventory and case note report filed for ballistics evidence to a limit of one page and required a full report (of less than five pages) only for IBIS-generated hits. In cases in which multiple bullets or casings were recovered and all were of the same type and caliber, the Fast Brass rules put priority on immediately entering only one of the exhibits (presumably, the one judged to have the clearest toolmarks). Phased in over the course of 2003, the new Fast Brass protocols succeeded in eliminating the IBIS entry backlog; about 9,650 items were entered into IBIS, and 310 hits were achieved in 2003, compared with 8,400 items and 195 hits in 2002.

Another evidence protocol maintained by the NYPD is based on a prioritization of resources and an assessment of current system performance: if both bullets and casings are recovered from the crime scene and they are of the same caliber, only the casings are entered into IBIS. Of the nearly 1,400 IBIS hits obtained by the NYPD from October 1995 through December 2004, fewer than 10 were generated by bullet evidence—hence the higher priority on cartridge case entry.

SOURCE: McCarthy (2004).
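As a point of arithmetic on the Box 6-2 figures, the Fast Brass changes coincided with a higher hit rate per item entered, not merely a higher entry volume:

```python
# Hit rates implied by the Box 6-2 NYPD figures (hits and items entered per year).
hits_2002, items_2002 = 195, 8400
hits_2003, items_2003 = 310, 9650

rate_2002 = hits_2002 / items_2002  # about 2.3 hits per 100 items entered
rate_2003 = hits_2003 / items_2003  # about 3.2 hits per 100 items entered
```

Of course, many factors besides the protocol change could drive this difference; the computation simply restates the reported counts on a common per-item basis.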
6–B.4 Formalize Best Practices

One of our committee’s plenary meetings was held in the Phoenix metropolitan area, where several NIBIN sites at various levels of jurisdiction—state police, county sheriffs, and municipal police departments—are clustered. Another of our meetings included presentations by the NYPD and officials from the Boston area, commenting on usage of ballistic imaging technology in that area. In addition, each member of the committee and its staff visited at least one NIBIN site or IBIS installation. Our discussions at these sites corroborate what is evident from NIBIN operational data, including the analysis done in the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005): active participation in NIBIN and image entry into the system span a continuum, from vigorous users who put high priority on use of the system to agencies for which data entry (like the resulting number of hits) is much more limited. In the preceding sections we have touched on some of the reasons for this variability, including the time-consuming nature of bullet entry and perceptions of limited payoff in terms of confirmed hits; our recommendations in the rest of this chapter try to address some other points of aggravation for NIBIN users. As we noted above, ATF has done a commendable job in soliciting feedback from its users, and it is important that this continue. But we also believe that it is important that—drawing on local users’ experience—NIBIN management take a detailed look at the sites that have most successfully and productively used the system. Through such a review, it would be useful to distill “best practices” of high-achieving agencies—for example, means of obtaining high-level commitment by agency officials, methods for working through returned lists of comparison scores, or ways of interacting with detectives and beat officers—for dissemination to all NIBIN partners.
Recommendation 6.5: Local NIBIN experience should be a basis of research and development activities by ATF, its contractors, and the National Institute of Justice. Local experience could usefully contribute to such efforts as “best practices” for image acquisition, investigative strategies, data archiving standards, and the development and refinement of NIBIN computer hardware and software.

6–B.5 Entry of Multiple Exemplars

Although many of our recommendations are intended to make NIBIN image acquisition less burdensome, there is one point on which we believe that a slight loss in efficiency will ultimately lead to greater effectiveness in
[…]
to include more than one to maximize the chances of finding connections to other incidents that might involve the same gun. Likewise, in test firing a weapon in police custody, all manner of variations are possible, and we do not suggest that agencies try to anticipate every possible shooting condition. What we do suggest is that more than one exhibit be put into NIBIN, ideally representing some span of ammunition makes.

Recommendation 6.6: The NIBIN program should consider a protocol, to be recommended to partner sites, for the entry of more than one exhibit from the same crime scene or test firing when more than one is available. For crime scene evidence, more than one exhibit—but not necessarily all of them—should be entered, rather than having examiners or technicians select only the “best” exemplar. For test-fired weapons, it is particularly important to consider entering additional exhibit(s) using different ammunition brands.

To be truly effective, this recommendation requires a basic technical enhancement to the current IBIS platform (see Recommendation 6.10); some of the usability enhancements suggested in Recommendation 6.13 also complement the notion of multiple exemplars.

6–B.6 Reallocation of NIBIN Resources

The final operational enhancement we suggest is an echoing of Recommendation 1 in the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005). The NIBIN program does have procedures in place for monitoring low-usage sites and sending warning messages. As ATF commented in its reply to a draft of the audit report, “consideration must be given to the availability of IBIS technology to law enforcement agencies that reside in regions that historically have low usage based on the amount of firearms crimes” (U.S. Department of Justice, Office of Inspector General, 2005:131).
That is, ATF is aware that a strict quota of evidence entries per month is an unfair benchmark, since agencies vary in the number of gun crimes (and hence the number of possible NIBIN entries) they encounter. That said, systemic low usage should be grounds for reallocation of scarce program resources to other agencies that can be more effective partners in the system.

Recommendation 6.7: Priority for dispensing NIBIN system technology should be given to high-input environments. This entails adding machines (and input capacity) to sites that process large volumes of evidence and especially to sites that lack their own NIBIN installations but that routinely and regularly submit evidence to regional NIBIN
sites for processing. For NIBIN partner agencies with a low volume of entry of crime scene evidence, ATF should continue to develop its procedures for reallocating NIBIN equipment to higher performance environments.

6–C TECHNICAL ENHANCEMENTS

Several of the recommendations in this section deal with the specific functionality and interface of the current IBIS platform; others are broader in scope and speak to the type of information that should be recorded for the NIBIN system as a whole. Put another way, these recommendations are not a “to do” list for the current IBIS or its developers, but will require collaboration among system developers, NIBIN management, and the program’s user base. A common theme of our technical recommendations extends from our general assessment of the IBIS platform in Section 4–F: that it is a sorter and a tool for search that is commonly, and unfortunately, confused with a vehicle for verification; the two are very different functions. The recommendations we offer are meant to improve the system’s effectiveness as an engine to search and process large volumes of data and to give its users more flexibility to explore possible connections between cases.

6–C.1 The Language of “Correlation”

We begin with a matter that is inherently technical, even though it does not deal directly with computer hardware or software: it is an issue of nomenclature, of what to call the basic process performed by the IBIS technology. As described in Chapter 4, Forensic Technology WAI, Inc., and the IBIS user base describe the process as “correlation,” even though system training materials repeatedly stress that the actual correlation “scores” are of little consequence and that what matters is the rank of particular exhibits. We avoid using “correlation” throughout this report, describing the algorithm and process as “comparison” instead.
In statistics, and as the term has seeped into common parlance, the correlation coefficient measures the strength of linear association between two random variables. It ranges from −1 to 1: an absolute value of 1 indicates a perfect linear relationship and a value of 0 indicates no linear relationship, so the coefficient provides a clear and easily understood measure of association. That IBIS uses the same term in labeling its scores imparts to the process—however subtly—an undue degree of quantitative confidence. This is not to say that the IBIS procedures are either unreliable or unsophisticated; indeed, we argue quite the opposite in Chapter 4. To fully warrant the term correlation, the scores reported by ballistic imaging systems would have to have the same easily understood interpretation as a
correlation coefficient; this is almost certainly an unrealizable goal. Absent that, what would be helpful is some benchmark or context that can be attached to system-reported scores.

Recommendation 6.8: Normalized comparison scores—such as statistical correlation scores, which fall on a fixed and interpretable scale—are vital to assign meaning to candidate matches and to make comparisons across searches. Though current IBIS scoring methods may not lend themselves directly to mathematically normalized scores, research on score distributions in a wide variety of search situations should be used to provide some context and normalization for output correlation scores. Possible approaches could include comparing computed pairwise scores with assessments of similarity by trained firearms examiners or empirical evaluation of the scores obtained in previous IBIS searches and confirmed evidence "hits."

6–C.2 Collecting the Right Data

Audit Trail

As discussed in Chapter 5, it is impossible to make a full evaluation of the NIBIN program and its effectiveness because the data that are systematically collected on system performance are far too limited. The monthly operational reports that are reviewed by the NIBIN program consist of basic counts of evidence (entered that month and cumulative) and of completed hits. Even within this extremely limited set of variables, the information collected is not rich enough to answer important questions, such as whether hits are more often realized in connecting two pieces of crime scene evidence or in linking a crime scene exhibit to one test fired from a recovered weapon. Completely absent from the standard operational statistics are any indicators of the searches performed by the system (save for the fact that the entry of every piece of evidence should incur a local search by default).
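The kind of per-search record we have in mind could be quite compact. The sketch below is purely illustrative; the field names and structure are our own invention, not a depiction of any actual IBIS data structure:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SearchAuditRecord:
    """Hypothetical per-search log entry; all field names are illustrative."""
    timestamp: datetime
    requesting_agency: str
    exhibit_id: str
    search_scope: str            # e.g., "local", "regional", "national"
    manually_requested: bool     # False for the default search on entry
    candidates_scored: int       # effective pool after the coarse pass
    top_scores: dict = field(default_factory=dict)  # mark type -> (score, rank)
    physical_comparison: bool = False  # evidence pulled for examiner review?
    disposition: Optional[str] = None  # "hit", "nonhit", or None if pending

rec = SearchAuditRecord(
    timestamp=datetime(2007, 5, 1, 14, 30),
    requesting_agency="Anytown PD",
    exhibit_id="EX-0001",
    search_scope="regional",
    manually_requested=True,
    candidates_scored=2470,
)
rec.top_scores["breech_face"] = (315, 1)
```

Most of these fields could be filled automatically at search time; only the disposition fields would require manual entry, as hits do today.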
Certainly, some of the data that one would like to have to evaluate the system's effectiveness are not items that can or should be maintained within the IBIS platform; these items include any indications of the quality of the investigative leads generated by completed hits, whether an arrest was made in a particular case (or cases), and whether convictions are achieved. But we believe that IBIS at present is too "black box" in nature, not readily amenable to analysis or evaluation; the system should be capable of generating a fuller audit trail and operational database than the inadequate
monthly summaries currently generated and assembled by NIBIN program staff.

Recommendation 6.9: ATF should work with its NIBIN contractor to ensure that the system's hardware and software generate an audit trail that is sufficient to adequately evaluate system usage and effectiveness. In most cases, these data should be generated automatically by the software; in others, changes to the software will be needed so that data may be entered manually (as is currently the case with the recording of hits). The data items that should be routinely tallied and evaluated include (but are not limited to):

- counts of manually requested database searches, such as those against other regions or the nation as a whole;
- information on the origin of the case with which a hit is detected (not just the case number and agency that detects and verifies the hit); and
- characteristics of cases in which a possible match is deemed sufficiently strong to request the physical evidence for direct comparison by an examiner, including the "correlation" scores and ranks for the match, an indicator of which image(s) motivated the request, and an indicator of the disposition of the case (either a hit or a nonhit).

Ammunition Type

The previous recommendation addressed our concern that the NIBIN machinery does not currently produce the right operational data for effective analysis. We now turn to how the system could benefit from collection of a fundamental variable during the demographic entry stage of image acquisition. In our observations of IBIS at work, a major deficiency in the current set-up is the inability to specify what is known about the ammunition used in the exhibit. Some information about ammunition make can be entered in a "notes" field on the demographic entry screen, but ammunition brand and type should be a standard variable that agencies can use in filtering or sorting their comparison score reports (see Recommendation 6.13).
It could also be used as a presorting variable to narrow down the search space before initiating a manual search, as might be desirable in following up a series of shootings for which links and common features are suspected in advance. In Recommendation 6.6, we urge the entry of multiple exemplars, particularly involving the use of multiple ammunition types when test firings from a weapon are possible. Having ammunition as a viewable variable would be invaluable in interpreting the results of comparison runs in cases where multiple exemplars are in the database.
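With ammunition recorded as a structured field, filtering and sorting a returned score list would be a one-line operation on the client side. A minimal sketch, with a hypothetical record layout of our own devising (not the actual IBIS report format) and invented brand names and scores:

```python
# Each row stands in for one line of a hypothetical comparison-score report.
results = [
    {"exhibit": "EX-101", "ammo_brand": "BrandA", "breech_face": 212},
    {"exhibit": "EX-102", "ammo_brand": "BrandB", "breech_face": 188},
    {"exhibit": "EX-103", "ammo_brand": "BrandA", "breech_face": 305},
]

# Filter to the ammunition brand of interest, then sort by score, descending.
same_ammo = sorted(
    (r for r in results if r["ammo_brand"] == "BrandA"),
    key=lambda r: r["breech_face"],
    reverse=True,
)
print([r["exhibit"] for r in same_ammo])   # ['EX-103', 'EX-101']
```

The same field would serve equally well as a presort restriction before a manual search, as discussed above.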
In offering this recommendation, we recognize that it is not as simple a fix as it may appear. To promote more consistent entry, headstamp information would likely have to be entered using a drop-down list, which could be lengthy and would have to adjust to changes in the ammunition market (as is the case with built-in lists of firearms manufacturers). The best way to implement this change, including the easiest spot in the data entry process in which to insert the new item, should be determined on the basis of feedback from NIBIN users.

Recommendation 6.10: ATF and its technical contractors should facilitate the entry of ammunition brand information for exhibits, when it is known or apparent from the specimens. In consultation with its NIBIN user base, ATF should also consider allowing entry of other relevant fields, such as the composition of the primer and the nature of the jacketing of the bullet.

6–C.3 Improving Search Strategies and Server Workload

Refinement of the image acquisition process—making it more accurate and less burdensome—is critical to full use of NIBIN resources. So, too, are refinements to the nature of the searches conducted. To be most effective, searches have to be easy to specify (if they are not automatic) and must be relevant and important to the local law enforcement agencies using the system. We do not suggest that nationwide searches against the whole NIBIN database should be routine and default, but we do concur with the Inspector General audit of NIBIN that it is important that agencies have the knowledge and training to initiate nationwide searches if conditions in a case warrant a sweeping search. It is not surprising that agencies rarely conduct national searches given that, at present, a national search must be carried out by searching each NIBIN region separately.
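Because a national search today amounts to repeating the same query against each regional server, the client-side logic reduces to a fan-out followed by a merge of the ranked results. The sketch below is entirely hypothetical (there is no such public API; the function names and data are ours) and only illustrates the shape of the operation:

```python
def national_search(reference_exhibit, regions, search_region, top_n=10):
    """Fan a query out to each region and merge the ranked results.

    `search_region` is a caller-supplied function standing in for a
    per-region query; it returns a list of (exhibit_id, score) pairs.
    """
    merged = []
    for region in regions:
        merged.extend(search_region(region, reference_exhibit))
    # One combined ranking across all regions, highest score first.
    merged.sort(key=lambda pair: pair[1], reverse=True)
    return merged[:top_n]

# Toy stand-in: two regions, each returning fixed scores.
fake_db = {
    "northeast": [("NE-1", 290), ("NE-2", 140)],
    "southeast": [("SE-1", 310)],
}
top = national_search("EX-1", fake_db, lambda reg, ref: fake_db[reg], top_n=2)
print(top)   # [('SE-1', 310), ('NE-1', 290)]
```

The point of the sketch is that a single merged ranking is what an agency would actually want to see; repeating region-by-region searches by hand forces users to perform this merge themselves.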
What is disturbing about some agency responses to the Inspector General's survey is that some partners use only the default local search because they do not know how to initiate wider searches or because they consider those searches irrelevant. Accordingly, we echo the Inspector General's Recommendation 2 and amplify it. As a matter of routine, we believe that NIBIN management should periodically conduct national or multiregional searches on samples of evidence, both to get a sense of the ease with which those searches can be conducted and to determine whether the searches indicate possible (or spurious) matches.

Recommendation 6.11: Even though national or cross-regional searches against the NIBIN database may be rare, the capacity for such a search to be conducted should exist and should be well communicated
to NIBIN partner agencies. A protocol for national or multiregion searches, whether initiated by individual agencies or in regular system checks by ATF, should be promulgated, with an eye toward providing some investigative spark in open but cold crime investigations.

In consultation with its user base, the NIBIN program should also work to ensure that the default searches performed by the system are adequate for user needs. This entails periodically reviewing the region and partition structure of the NIBIN database; it may also involve working with IBIS developers to define easily accessible "shortcut" searches, rather than working through display maps and a drop-down list every time a certain search region is desired.

Recommendation 6.12: Based on information from NIBIN users, ATF and its technical contractors should:

- regularly review the partition structure of the NIBIN database (which defines the default search space for local agencies) for its appropriateness for partner agencies' needs, and
- develop methods for flexible and user-designed searches that may be more useful to local agencies than the default partitions. These types of searches could be based on the frequency of contacts between local law enforcement agencies or intelligence on the nature and dynamics of known gun market corridors, among other possibilities. Additional flexible search possibilities could include searches in areas of known gang activity or between jurisdictions where connections were successfully made in previous investigations.

A peculiar and disturbing finding from the U.S. Department of Justice Inspector General audit of NIBIN is that there are NIBIN partner agencies that enter exhibits into the database but do not regularly (or ever) review the comparison scores that are returned by the NIBIN regional servers. It is difficult to say why this is the case.
In part, though, it may be due to the structure of the NIBIN database itself, funneling all evidence and comparison requests through IBIS correlation servers at three ATF laboratories. It is unrealistic to expect completely instantaneous results, even if each site had its own servers (which we do not suggest). Yet the distributed nature of the network necessarily involves some considerable amount of waiting: waiting for new images and requests to be uploaded to the servers, waiting for comparison routines to be performed, and waiting for comparison scores and images to be pushed back to the local installations. Our committee and staff site visits included trips to two of the ATF laboratories; at both we saw the general slow-down at IBIS stations when
the local NIBIN sites were "polled" for new images and processing was being performed. Given the time involved, it is not difficult to imagine local agency staff moving on to other duties rather than waiting on returned results. Again, we do not suggest that there is necessarily anything wrong with the NIBIN program's strategy of consolidating servers at a limited number of sites, and we do not suggest that this strategy and the waiting time it incurs are the complete, direct cause of agencies not following up comparison score results. What we do suggest is that NIBIN management must also periodically consider whether the regional server workload is balanced so that the time from image acquisition to comparison score results is as small as possible for NIBIN users.

6–C.4 User Improvements for NIBIN as a Search Tool

As we discuss in Section 4–F and above in this chapter, we think that the NIBIN program and the IBIS platform would be best served by breaking away from a strict top-10, verification-focused posture; the system is best conceived as a tool for search, analysis, and discovery. The current IBIS is fairly rigid in its structure, affording users little or no flexibility in defining the reports that are generated by the system or the interface they view on screen. Comparison scores are reported in a basic spreadsheet layout, and users are effectively limited to choosing which column to sort, which row to highlight, and which row (exhibit-to-exhibit) comparison to pull up for viewing. No graphical indication of the distribution of scores is provided (as might be useful to see clear "breaks" or gaps in the scores), and it can be difficult to see where a particular exhibit (or set of exhibits) falls in the rankings across the different scores.
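A graphical display is one option; even a simple numerical check on the sorted score list would surface the kind of "breaks" examiners look for. A sketch of such a check, with scores invented for illustration:

```python
def score_gaps(scores, min_gap=30):
    """Return (rank, gap) positions where adjacent sorted scores differ
    by at least `min_gap`, i.e., candidate 'breaks' in the score list."""
    ranked = sorted(scores, reverse=True)
    return [
        (i + 1, ranked[i] - ranked[i + 1])
        for i in range(len(ranked) - 1)
        if ranked[i] - ranked[i + 1] >= min_gap
    ]

scores = [351, 340, 338, 255, 250, 248, 120]
print(score_gaps(scores))   # [(3, 83), (6, 128)]
```

Here the output flags a clear break after the third-ranked score and another after the sixth, exactly the sort of structure that is hard to see in a flat spreadsheet of numbers.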
As another example, the IBIS Multiviewer interface allows users to see several exhibit-to-exhibit comparisons at once, showing the images in an array; however, useful text or labels identifying the exhibits or cases currently shown in the Multiviewer are lacking. Moreover, the Multiviewer comparisons are anchored to the reference exhibit that was run in the comparison request; as examiners peruse multiple images, it would be useful to pull up pairs of nonreference exhibits from the score results for closer examination, to find possible "chains" of three or more same-gun exhibits found in the same set of scores. The enhancements we suggest include some user-interface modifications that would make the IBIS platform more useful for analysis; the list is not meant to be exhaustive of all such modifications.

Recommendation 6.13: To enhance the NIBIN technical platform as an analytical tool, ATF and its technical contractors should:
- allow users to filter and sort the returned lists of comparison scores and ranks by such variables as gun type, ammunition type, reporting agency, and date of entry;
- use persistent highlighting or coloring to allow users to readily see the relative positioning of specific exhibit(s) across the rankings for different marks (e.g., to be able to see where the top five exhibits by breech face score fall in the rankings by firing pin, or to see where multiple exhibits from the same case lie in any of the rankings);
- use visual cues to alert reviewers of comparison scores that exhibits have already been physically examined and deemed a hit (or examined and found not to be a hit);
- permit flexibility in the Multiviewer screen (on which multiple images can be displayed in an array) so that two nonreference exhibits can easily be compared side by side, thus permitting easier examination of chains of potentially linked exhibits; and
- permit flexibility in specifying the printed reports produced by the system so that listings of multiple exhibits are more informative than the current exhibit/case number and score layout.

6–C.5 Side Light Images

Although IBIS computes comparison scores for the breech face impression using an image taken with a center ring light, examiners generally prefer visually examining the alternative image taken with a side light when reviewing potential comparisons. The side light image is a representation more akin to what examiners are able to see looking directly at a cartridge casing through a microscope; the side light adds contrast that gives a better sense of depth and of the texture of the primer surface. Given this preference, George (2004a:288) argued for additional work on imagery akin to the side light image: "[FTI] needs to develop images which are more compatible with those the user actually views on the comparison microscope.
The user must be able to visually eliminate or associate candidates in order to have any level of confidence that a match is not being overlooked.” We agree that users should have a clearer visual benchmark to consider when examining comparison score results, even if the actual image acquired by the system for use in deriving signatures and computing scores is different and taken under conditions most favorable to the comparison process. However, we also suggest that IBIS developers explore ways to make use of the auxiliary information collected in the side light image: Methods for computing an alternative comparison score based on the side light image should be developed and tested to see how they perform relative to the IBIS-standard methodology using the center light image.
Recommendation 6.14: Because the side light image of the breech face impression area is more consistent with firearms examiners' usual view of ballistics evidence—and may be the basis for pulling potential matches for direct physical examination—the side light imagery should be a more vital part of the NIBIN process. Users should have the option to view (if not actually capture) the side light image before acquiring the center light image, for easier inspection of the casing's alignment and basic features. IBIS developers should experiment with comparison scores and rankings based on the side light image, and compare those with scores using the standard center light image.

6–C.6 Operator Variability

In the current IBIS system, users entering images into the system are confronted with several system-computed default suggestions—on image focus, image lighting, and the suggested placement of region-of-interest delimiters. Users have the capacity to adjust or override these defaults. In our site visits, we observed a variety of such adjustments, less often on image focus but much more frequently on the intensity of lighting. At some sites, operators would increase the lighting slightly because their firearms examiners found the slightly brighter images easier to work with; at other sites, operators would do exactly the opposite. The exact placement of region-of-interest delimiters is obviously crucial to subsequent comparisons, as it dictates the image content used to derive a mathematical signature, but the effects of, and tolerances on, the other user-adjustable parts of the acquisition process are not well documented.2 Research on these lines—for instance, looking at the impact on scores when comparison images are lightened or darkened by degrees—should be conducted and used to promulgate best practices throughout the NIBIN system.
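The kind of experiment we have in mind can be prototyped cheaply. The sketch below uses synthetic "pixels" and a simple normalized cross-correlation as a stand-in for the proprietary IBIS score; everything in it is our own illustration. It shows one effect such research would need to characterize: a score that is normalized for mean and contrast is unaffected by a modest, uniform brightness change, but degrades once pixels begin to saturate.

```python
import random

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def brighten(pixels, factor):
    """Scale pixel values, clipping at the 255 saturation ceiling."""
    return [min(255, p * factor) for p in pixels]

random.seed(1)
img = [random.randint(40, 200) for _ in range(1000)]

# An unclipped, uniform brightness change leaves the normalized score intact...
mild = ncc(img, brighten(img, 1.2))
# ...while a change strong enough to saturate many pixels degrades it.
harsh = ncc(img, brighten(img, 2.5))
print(round(mild, 3), round(harsh, 3))
```

This is consistent with what we observed on site (see footnote 2): small lighting adjustments left scores essentially unchanged, while an over-bright acquisition measurably lowered them.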
Footnote 2: On a site visit to the New York Police Department, we had the opportunity to try one such adjustment. We requested that an examiner acquire breech face and firing pin images from the same casing three times. Twice, the examiner entered the image as normal, adjusting the lighting slightly if he deemed it appropriate; this allowed us to see a near-perfect match (and resulting score). The third acquisition was set several steps brighter than the examiner would ordinarily prefer, though it was far short of complete saturation and a pure-white image. Both scores were fairly robust to the lighting change; the two normal-lighting images were returned as the top-ranked pair on both scores, with breech face and firing pin scores of 315 and 351, respectively. The scores against the over-bright image degraded only slightly for the breech face but more so for the firing pin—302 and 282, respectively—but that comparison was still comfortably ranked second.

FTI is continuing to develop a new system, dubbed BrassTRAX, that is very literally more of a "black box" than the current IBIS/BRASSCATCHER platform; as described in Box 4-1, the system is already being positioned as the next-generation IBIS. Physically, the unit is a box with only one spot for entry or adjustment: a cartridge casing is inserted into the tray at one corner of the box. The equipment then automatically handles all parts of image acquisition (save for demographic data entry), including the alignment and rotation of the casing. Development of such a platform is intriguing, but—consistent with Recommendation 6.14—it is important that users also be comfortable with viewing and interpreting the imagery generated by the system. As complete automation of the image acquisition process continues to evolve—reducing the effect of operator variability—it is particularly important that systems be developed with procedures for routine calibration and validation. System performance over time in processing known, standard exhibits should be a regular part of system monitoring, and the capacity for logging these calibration data in a simple and recoverable manner (for subsequent analysis) should be a priority. Further specification of calibration and validation routines should make use of exhibits that can be entered and compared at different points in time and at different NIBIN sites, including ongoing efforts by the National Institute of Standards and Technology to develop a "standard bullet" and a "standard casing" as known measurement standards.

6–C.7 Revisiting the Comparison Process and 20 Percent Threshold

Finally, we turn to a critical part of the current process: the coarse comparison pass, in which all eligible exhibits are compared with the reference exhibit using a rougher comparison score, and only the top 20 percent of scores (for any of the types of markings) are retained for subsequent processing.
As discussed in Chapter 4, this threshold was originally intended as a computational aid, restricting the pool of candidates for more detailed comparison beyond the prefiltering imposed by subsetting the database by demographic data (e.g., incident date and caliber family). However, the major analyses of IBIS performance described in Chapter 4—particularly the George (2004a, 2004b) studies, in which the coarse comparison step was completely waived—demonstrate that the sharp thresholding does cause known sister exhibits to be excluded from consideration. We see the same behavior in our own analyses in Chapter 8. In some of the experiments we performed, loss of potential matches was virtually guaranteed: The database was small and heavily concentrated with sister exhibits from the same guns, and so the imposition of any threshold or removal of exhibits from final consideration would incur some losses. But we also observed known sister exhibits to be screened out by the coarse comparison pass in runs against much larger segments of the New York CoBIS database.
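The mechanism behind such losses is easy to exhibit in a toy model: whenever the coarse score is only an imperfect proxy for the full score, a true match that would rank highly on the full comparison can still fall below the coarse cutoff. The simulation sketch below is ours, with all score distributions invented purely for illustration:

```python
import random

def coarse_pass_miss_rate(n_db=1000, n_trials=500, keep_frac=0.20, noise_sd=1.0):
    """Estimate how often a true match is dropped by a top-`keep_frac`
    coarse cut when the coarse score is a noisy proxy for the full score.
    All distributions here are invented for illustration only."""
    rng = random.Random(42)
    keep = int(n_db * keep_frac)
    missed = 0
    for _ in range(n_trials):
        # Coarse scores: many unrelated exhibits plus one true match, which
        # tends to score higher but is blurred by the proxy's noise.
        pool = [(rng.gauss(0.0, 1.0) + rng.gauss(0.0, noise_sd), False)
                for _ in range(n_db - 1)]
        pool.append((rng.gauss(2.5, 1.0) + rng.gauss(0.0, noise_sd), True))
        pool.sort(reverse=True)
        if not any(is_match for _, is_match in pool[:keep]):
            missed += 1
    return missed / n_trials

rate = coarse_pass_miss_rate()
print(rate)
```

Under these made-up parameters a nontrivial fraction of true matches never reaches the full comparison at all; the exact figure is meaningless, but the qualitative behavior matches what the George studies and our Chapter 8 analyses observed.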
Figure 4-2 shows the basic printed report generated by IBIS, the top 10 ranked pairings by the different cartridge case markings. Reported prominently on the sheet is a sample size of 12,353. In discussing this type of report with other parties—such as investigating detectives, departmental superiors, and legal counsel—the meaning of "sample size" can be explained relatively easily as (roughly) the subset of the database matching the reference exhibit in caliber. But no information is readily provided on the effective sample size that is most relevant to the scores presented on the page—the number of exhibits retained after the coarse pass, for which the full scores were computed. That this effective sample size can be as small as 2,470 would be surprising, and potentially misleading, to observers without a detailed knowledge of all the steps in the IBIS comparison process. We do not argue that there is anything inherently wrong with a first, coarse cut of the database or with the specific method used; however, research should be done to determine whether 20 percent is an appropriate cutoff, balancing gains in processing time against the potential to miss hits. We also believe that NIBIN users should have the capacity to easily adjust the threshold level in regenerating comparison score results. Particularly if circumstances lead to court trials in which an IBIS-suggested linkage is the primary (or very important) evidence, it would behoove agencies and examiners to be able to demonstrate that the suggested pairing came about in a process where all eligible exhibits were subjected to the same scoring and ranking, rather than roughly 20 percent of them. As with national and cross-regional searches, we also suggest that 100 percent full-comparison requests (that is, waiving the coarse comparison entirely) should be performed by NIBIN management as a matter of routine research and evaluation.
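The relationship between the two figures quoted above is nothing more than the coarse-pass retention rate at work, which is precisely why the effective sample size could be printed on the report at no analytical cost. The arithmetic, as a sketch:

```python
reported_sample_size = 12_353   # exhibits in the caliber-matched subset (Figure 4-2)
coarse_keep_fraction = 0.20     # share retained by the coarse comparison pass

# The pool that actually receives full comparison scores:
effective_sample_size = int(reported_sample_size * coarse_keep_fraction)
print(effective_sample_size)   # 2470
```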
Recommendation 6.15: In light of improvements in computer processing time, the relatively ad hoc choice of 20 percent of potential exhibit pairs from the coarse comparison step should be reexamined. IBIS developers should consider removing the 20 percent threshold restriction or revising the percentage cut if it does not seriously degrade search time over moderate database sizes. In any event, IBIS developers should make it easier for local agencies to adjust the threshold level or to waive the coarse comparison pass altogether if specific investigative cases warrant a full, unfettered regional search of evidence at the expense of some processing speed.