The themes raised in this appendix are vitally important concepts, based on our interactions with crime data users and stakeholders. Included among these themes are what is arguably the single most important and pervasive use of national crime statistics and what we suggest might be the single most important benefit of national crime statistics collection. These are weighty topics that underscore both the difficulty and the importance of modernizing crime statistics, and that warrant a fuller airing than brief mentions in earlier text. We provide details on these issues in this appendix because exploring them at appropriate detail in Chapters 2 and 3 would be unduly disruptive to the overall flow of arguments there.
In Section D.1, we review the historical and current implementation barriers—real and perceived—that hinder the assimilation of data (from state and local data resources or other sources) at the national level, focusing on the development of the National Incident-Based Reporting System (NIBRS). We also describe what has been done to remove those barriers to change in national crime statistics. In Section D.2, we posit that attention to data and information quality should be billed as the primary value added by national-level integration of crime statistics. In Section D.3, we suggest presenting crime information in a manner that allows for (rather than undercuts) the primary use to which stakeholders put crime statistics—comparisons in crime across
jurisdictions and over time. A final major theme in Section D.4 concerns the special problems in characterizing crime involving businesses or organizations as actors in crime, and how improvements in the documentation of crime affecting business might be obtained.
In Section B.6 (Appendix B), we mentioned early research that tried to articulate the barriers—real or imagined—that slowed adoption of the Uniform Crime Reporting (UCR) Program’s National Incident-Based Reporting System (NIBRS) after its introduction in the late 1980s, and we briefly described some of them in Section 2.2.3 (Chapter 2). But these issues demand fuller commentary for two basic reasons. First, the lessons learned from NIBRS’s development apply more widely—not just to incident-based reporting of crime data from local agencies, but also to other efforts that involve assimilating data from disparate state, local, private, and other sources. Although the language and examples used in this section are almost exclusively UCR/NIBRS-oriented, they serve as an important and telling case study. Many of the same barriers seen in NIBRS development are inevitable in working with administrative and other data sources to generate measures of the nontraditional offenses in the recommended crime classification—for example, records of environmental violations or child welfare investigations from state or municipal offices. Combining data from multiple state administrative departments, or from the courts, will involve managing patchworks of standards and sets of competing interests, as was the case in NIBRS. The second reason for fuller commentary is that we envision police-report data continuing to be a primary source of national crime statistics in our ideal, modernized crime data infrastructure. Accordingly, a pivotal question must be asked: How can there be confidence in implementing this long-term vision when 30 years of NIBRS development have resulted in participation levels shy of 40 percent, and when some federal agencies have not reported data to the UCR program (as required by federal law)?
For each of the barriers, we briefly suggest mitigating factors—generally, things that are different now and that increase the probability of successful adoption of incident-based reporting and of innovations in crime statistics as a whole. Two of these potential success factors are sufficiently important, and cut across so many of the historical barriers, that it is best to state them first and then refer to them as needed:
- The establishment of the National Crime Statistics Exchange (NCS-X) initiative by the Bureau of Justice Statistics (BJS), in cooperation with the
Federal Bureau of Investigation (FBI): The NCS-X work to stimulate NIBRS-format reporting by a carefully designed sample of jurisdictions, and to promote NIBRS planning and development more generally, is critically important in several ways. Methodologically, it is a partial return to first principles for incident-based police-report crime data—retrofitting national crime data collection with the start-from-sample approach that had been advocated by the Blueprint for the Future of the Uniform Crime Reporting Program (Poggio et al., 1985) at the outset. When finished, the NCS-X additions will finally give NIBRS data the opportunity to demonstrate their merits, in that they will be generalizable to the nation as a whole and to important subnational groups. The capacity of detailed incident-specific data to illuminate the dynamics of crime has heretofore been available on a national basis only for homicide (through the UCR’s Supplementary Homicide Report [SHR] collection), and the case for active NIBRS expansion has been weakened without it. The NCS-X work has also been important in level-setting and taking stock of the current information management systems capacity among many state and local law enforcement agencies. But most fundamentally, it represents the hint, and hopefully the start, of concerted national-level engagement in the compilation of national crime statistics—an active stance toward the production of crime statistics rather than a more passive, aggregative stance, and one consistent with our call for strong coordination and governance in Chapter 3.
- The FBI’s stated decision to retire UCR Summary format reporting in favor of NIBRS by 2021: In our discussions in 2014, before the decision began to take shape, the panel heard a range of viewpoints on how major changes in police-report data collection might be made, and might be made most effectively. Solid arguments could be made for or against either a top-down or a bottom-up approach. For instance, major changes made by fiat—from on high, as when new crimes (and new forms) have been added to UCR collection in response to legislative mandate—are prone to meet resistance among local law enforcement respondents, who might see the unfunded mandate placed upon them swell. But, on the other hand, the bottom-up approach—embodied, in part, by the elaborate FBI Criminal Justice Information Services (CJIS) Advisory Policy Board (APB) process—is properly grounded in recognition of the voluntary nature of data transfer to national collections. Yet it can be slow and cumbersome, and generally precludes the implementation of quick, major change to adapt to emerging problems. Accordingly, the FBI’s decision to retire a system that has held sway for nearly a century is remarkable in its own right. The manner in which it was made is also important, combining elements of both approaches—made a priority by FBI leadership, but predicated on increased recognition among the whole
stakeholder base of the inadequacy of UCR Summary counts going forward, done with the endorsement of major law enforcement support organizations, and coursing rapidly through the APB process. Publicly setting an ambitious deadline has associated risks, but—for purposes of overcoming resistance to change—positioning “NIBRS as the only choice” for national reporting is likely the most effective strategy.
Many of the barriers to more rapid NIBRS adoption can be most succinctly summarized as local agencies being—for a variety of reasons—simply unwilling or unable to make the switch. Taking the unwilling side first, the most commonly voiced barrier to NIBRS implementation, by the local law enforcement executives who would be responsible for it, is probably apprehension about the appearance of crime rate “spikes” in their jurisdictions due to the counting of multiple offenses in crime incidents: that is, through elimination of the Hierarchy Rule by which only a single offense is recorded. This concern is grounded in the undeniable fact that changing the yardstick of measurement will inevitably have some effect on the measure. The crime numbers will indeed be different for some offenses, with increases and offsets to be examined and reconciled. But the testimony we heard from incident-based reporting adopters in our meetings—coupled with the experience of changing crime statistics in other countries, as well as the recent U.S. experience in revising the UCR definition of rape earlier this decade—convinces us of a few key, related points. First, our strong impression from early adopters is that real spikes—major, substantive increases in particular offense rates—will likely be closer to the exception than the rule for local jurisdictions. Where they occur, they may usefully point to, and be explained by, other data collection issues, such as past misunderstanding or misapplication of relevant definitions. In any event, this is a barrier for which the logical (and, from past experience, effective) remedy is simple communication. Best practices for the first several releases of new-format data can and should be developed from previous and early NIBRS adopters, including the importance of building awareness of the forthcoming change among the public, the media, and state and local government officials.
This is a topic in which the NCS-X experience—a fairly large and diverse cohort of state and local jurisdictions going through the change simultaneously—will also be helpful, with more localities having managed any short-term spikes in crime rates to contribute to the best practices for conversion.
If the fear of spikes is the most vocalized barrier related to general unwillingness to change, then perhaps the most deeply felt barrier might be the lack of any clear incentive to change—in the sense that direct benefit to a local agency’s daily operations of this change is not clear. Put more simply, the completely understandable question that a local law enforcement agency would raise—laboring under what is effectively an unfunded mandate and possibly contributing to the UCR Summary solely to meet requirements under state
law—is, “what’s in it for me?” NIBRS benefits could always be described elegantly and compellingly—see Box B.3 for the benefits of participation as outlined in 1992—but only at a high, abstract level, without showing how the data supplied to the national collection would be returned in a way that could inform local policies and operations. The way in which NIBRS was rolled out—en masse for contribution by any agency, rather than emphasizing first adoption by a sample as recommended by the 1985 Blueprint—had consequences in this regard. The lack of national representativeness in the take-up meant that the compiled NIBRS data were neither published nor analyzed to the full extent possible, leaving the process without any visible feedback loop to the early-adopting local participants. They had no sense that any enhancement or value had been added to their contributed data. States with their own incident-based reporting systems were able to make use of those data—and generate interesting and useful findings—but not necessarily in a way that would convince other states or localities to see potential gains in efficiency or effectiveness from switching standards.
Related to this barrier, we will argue in Section D.2 for positioning attention to data quality and integrity as a critical part of the value-added benefits of contributing to national crime statistics. This, too, is an area in which the NCS-X sample—and, finally, analyses making use of real data—will help greatly. But, more fundamentally, what has changed in more recent years that makes this barrier more superable than in the past is increased appreciation of data analytics by law enforcement practitioners. In some cases, this is born of experience with COMPSTAT or related programs—COMPSTAT being the practice adopted by New York City and other cities of using locally available, point-specific crime data to hold precinct supervisors and other officials accountable for changes in crime in their areas. In others, it comes from the intelligence gained through the mapping of crime over time and, in better-resourced local departments, the establishment of crime analysis units to carry out the work. Still other departments have adopted the practice of publishing fairly detailed crime information on their websites, on a close to real-time basis, or working with online vendors to host such information resources. In short, our sense from interaction with law enforcement practitioners is that there is a greater recognition of crime data as a basic tool of accountability, internally within departments and externally to the public they serve. In this climate, then, we think that analyses coming from NCS-X NIBRS data will make it easier to make the case to law enforcement executives that incident-based reporting to a national program will accomplish the following:
- Support evidence-based policing and policymaking, by facilitating comparison of crime problems (and eventually the interventions to address them) across jurisdictions;
- Permit benchmarking, or comparison in proper context, to “communities like us” on factors other than raw population size, overcoming a critical limitation of aggregate-count-only UCR Summary data;
- Help in public policy formulation, particularly with respect to the “new” crime areas, in which answers to previously unanswered questions could spur new policies and practices; and
- Improve local law enforcement’s awareness and intelligence of their own areas, including by permitting analyses of offenses affecting detailed victim populations in other areas (or nationally) that may be useful in developing interventions in or programs for those population subgroups in the local area. (Through more research on NIBRS data, the broader public may also obtain benefits of added intelligence about crime—for instance, in the form of more useful demographic/geographic offense rates or the estimation of probabilities of victimization for at-risk populations.)
Agencies that have had the wherewithal to devote in-house data analyst time to their own data resources—in some cases, either directly publishing crime statistics on the Internet or enabling the open, public mapping of crime events—might be particularly well poised to exploit the benefits of comparison/contrast with peer agencies based on nationally compiled data. Sharing of those results and acquired expertise would further aid the cause of broader implementation.
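The "communities like us" benchmarking idea can be made concrete with a small sketch. All agency names, covariates, and rates below are invented for illustration; the point is simply that peers are chosen by similarity on multiple factors (here, population and density), not by raw population size alone.

```python
# Hypothetical benchmarking sketch: find peer agencies by similarity on
# several covariates, then compare a crime rate against the peer average.
# All agencies and figures are invented; real benchmarking would draw on
# nationally compiled incident-based data and richer covariates.
import math

agencies = {
    "Ourtown":     {"pop": 85_000,  "density": 2100, "burglary_rate": 6.1},
    "Rivercity":   {"pop": 90_000,  "density": 2300, "burglary_rate": 5.4},
    "Bigburg":     {"pop": 900_000, "density": 9800, "burglary_rate": 7.9},
    "Plainsville": {"pop": 80_000,  "density": 1900, "burglary_rate": 6.8},
}

def peers(target, k=2):
    """Return the k agencies most similar to `target` on pop and density."""
    t = agencies[target]
    def dist(a):
        # crude standardization: scale differences by the target's own values
        return math.hypot((a["pop"] - t["pop"]) / t["pop"],
                          (a["density"] - t["density"]) / t["density"])
    others = [(name, dist(a)) for name, a in agencies.items() if name != target]
    return [name for name, _ in sorted(others, key=lambda x: x[1])[:k]]

peer_names = peers("Ourtown")
peer_rate = sum(agencies[p]["burglary_rate"] for p in peer_names) / len(peer_names)
# Ourtown's rate can now be read against peer_rate rather than a statewide mean
```

Note that the much larger "Bigburg" is excluded as a peer despite being in the same data set, which is exactly the contextual comparison that aggregate-count-only UCR Summary data cannot support.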
A final barrier on the side of local agencies’ unwillingness to adopt a new system is a pure problem of political will, and again is partly a consequence of how NIBRS began on an open-to-all basis rather than targeting initial effort. Many of the same departments that the Blueprint authors suggested be emphasized first still remain NIBRS holdouts. Absent participation by large, major-city departments, smaller jurisdictions might resist change to NIBRS for lack of a demonstration or example to follow. Uniformly, the law enforcement agencies in the nation’s largest cities refrained from making the switch, even in states otherwise poised to make the conversion. Whatever the reasons (and they varied), the downstream effect on wider participation was undoubtedly considerable. This is an ongoing, lingering problem, and one that both cross-cutting success factors—the NCS-X work that is finally targeting several of the historical big-city “holdouts” and the strengthening of political will that may come from the 2021 UCR Summary sunset deadline—will hopefully help to address. Two other specific points should be made in this regard, the first of which is that—relative to the historical problem of landing large adopters for incident-based reporting—it is difficult to overstate the potential importance of California and Texas (as well as Indiana) committing to meet (or beat) the NIBRS-conversion deadline in enacted law (see Section 3.2 and Appendix E). Second, as we allude to later, this stands as a pivotal moment in time when some large cities and state programs will have to take on
systems modernization initiatives, because their in-house-developed systems have exceeded their practical lifetime and may be untenable to maintain in modern computing environments. Hence, this may be a uniquely appropriate time to influence the shape of replacement systems, ideally in a way that directly facilitates national-reporting extracts.
Turning to barriers related to local agencies being unable to change to NIBRS in the past, a major sticking point was and remains concern over the (high) costs of NIBRS implementation, both near-term (the start-up costs of implementation or conversion) and long-term (e.g., ongoing maintenance and training costs). These fundamental worries could apply with equal force to local agencies of different size and with different degrees of extant technical sophistication:
- Just given the numbers involved—of terminals (mobile or otherwise) needed for the capture of police reports, of software licenses needed, and so forth—major technology upgrades are a costly proposition for even large city police departments, much less smaller agencies. Those agencies that invested in home-grown, customized information management systems over the years faced extensive costs in overhauling systems—reengineering that is rarely if ever a simple matter of porting source code directly to a new computing environment—and could understandably balk, particularly without a compelling end benefit.
- As we suggested in Appendix B, a small law enforcement agency without a sophisticated information management system (and so wholly unprepared for rapid conversion) could fairly view the completion of even the simple UCR Summary forms as a major task of intricate bookkeeping. But the complement to that statement is also true, and partially speaks to cost-based concern in making the switch: Having been conditioned for so long to comply with the intricate UCR Summary calculations, a local agency might find a change to NIBRS potentially overwhelming simply because it tries to fix what isn’t (operationally) broken, and would instead upend decades-old administrative routines and work-arounds.
- Generally, the strong impression we took from our interactions with law enforcement practitioners is that the marketplace for law enforcement–specific automated information systems is growing in size but that the number of technical solution providers (i.e., systems vendors) is relatively small,1 with many of them listing NIBRS-type reporting capability as a selling point. Indeed, it is likely that numerous state and local departments that have bought information management solutions are, as a result, equipped with systems that have detailed NIBRS-type reporting built in, but the capability is simply not used—possibly because it is an additional-cost licensing add-on. Getting those departments and systems ready for NIBRS submissions is a markedly different proposition than one in which NIBRS-format reporting would have to be engineered anew in system code.
- Put most colloquially, the opening phase of NIBRS development came with neither carrot nor stick, in terms of suasion and incentive to participate. Substantial seed funding was not available to state or local departments, but neither was reporting in NIBRS format (or the development of plans to do so) linked in any way to fund awards under the Edward Byrne Memorial Justice Assistance Grant (JAG) or Community Oriented Policing Services (COPS) Office grant programs. As discussed in Chapter 3, the underlying reason for not employing a firmer hand—respecting the longstanding and inherently voluntary nature of data contribution to the national reporting program—is eminently reasonable, but the fact remains that the tactic left local agencies with no incentive of any form to adopt.
- As mentioned at the outset, costs after the initial conversion/adoption are a major concern. Computers age, systems require updating and maintenance, new and continuing staff require training, and so forth—all of which can be daunting to local law enforcement executives under perennial budgetary pressure to curb management costs rather than reduce the number of officers on the beat.
- Though it is instinctive to be drawn to the problem of systems at the local police level—contemplating systems development within and across thousands of agencies—the challenge of systems upgrade and maintenance at the state level should not be discounted. Changes to UCR/NIBRS content or to reporting requirements imposed by state law necessarily involve additional costs for the state coordinating programs, and the task is made more complicated because the state UCR coordinating body is typically also the group responsible for populating criminal history record databases for the state. As with the systems used by local agencies for records management, there is no direct incentive for the vendors of software used for state information repositories to change their software for free.

1 In our meetings and discussions, we heard references to numerous vendors or solution providers working in the area of law enforcement–specific information management, comprising both records management and computer-aided dispatch. The solution providers vary in the extent to which they enable mobile-based reporting or use cloud computing capabilities, and to which they explicitly list NIBRS-format output as a key feature. We did not examine the features or capabilities in depth, relying on the interpretations of the law enforcement practitioners involved in our panel’s meetings, and we do not claim to have performed an exhaustive inventory. But, as support for assertions such as that the law enforcement information management systems vendor space is complex yet still relatively small in size, it is appropriate to mention that solution providers that came up in our meetings include: Motorola (including subsidiary Spillman Technologies, which branches more into crime data analysis and mapping), Intergraph, COPLINK (originally a joint venture of the University of Arizona and the Tucson Police Department, now developed by IBM’s i2 subsidiary), OSSI/Open Software Solutions (now developed by SunGard Data Systems), TriTech, TylerTech, and New World Solutions. Inclusion in this listing does not constitute endorsement or approval of any sort, and exclusion is likewise not intended as a slight in any fashion.
The issue of start-up/conversion costs, in particular, is one that is directly addressed by the NCS-X initiative. After years of relative stagnation in NIBRS take-up, the more hopeful recent take-up has been concomitant with the show of national-level commitment to NIBRS that is embodied in the NCS-X work. To the extent that justice system improvement grants will have the resources to continue to provide material assistance to ongoing system maintenance and updating, the NCS-X work is also perhaps the start of a mitigation of the concern about ongoing costs. Another mitigation strategy related to managing overall costs is to focus additional effort on working with, and recommending minimum information system requirements for, the vendors providing hardware and software solutions to both state and local law enforcement agencies. We recognize that there remains heterogeneity in the adoption of information management systems across agencies and from different vendors, as well as the custom, legacy solutions and internal paper reporting that exist in some places. Indeed, our discussions with local agency representatives about the babel of reporting systems and styles that can exist within the same region made clear that, from the day-to-day investigative standpoint, the only real mechanism for direct information sharing by agencies—still today—is convening in-person case coordination meetings, simply because systems do not readily “talk” to each other. So there is a great distance to be traveled before national crime statistics collection through automated, direct harvest of state and local records is possible. Reducing the dimensionality of the problem by focusing on the relatively small set of information systems vendors, and the particular constraints and demands on them, is an important step, just as reducing dimensionality by coordinating with states is essential for managing the inflow of crime data.
Work with vendors/solution providers was absent in the early, formative days of NIBRS, but returning to it now might have the collateral benefit of shaping the kinds of approaches that might be necessary to bring the smallest and least-resourced law enforcement agencies into the NIBRS and attribute-based reporting fold, such as shared, regional reporting systems and the provisioning of crime reporting via cloud computing.
Another barrier on the “unable” side of the ledger is that—particularly in the late 1980s, but still true to a degree today—the format and structure of NIBRS records and data are complex, representing a daunting computational task and an inflexible design. Before relational databases and other computational innovations truly took hold in the broader market, law enforcement agencies could understandably blanch at suddenly being faced with record structures permitting up to 99 victims and offenders each—and intricately annotating victim-offender relationships between each possible pair. (We discuss a further
complication associated with inflexible structures—NIBRS’s historic tendency to rely perhaps too heavily on automated “edit checks” of input data—in Section D.2.2.) As tough as NIBRS data records might be to populate, they were also not especially amenable to analysis after the fact—again, particularly in the earliest days of NIBRS development. Lost opportunities for building the case for wider NIBRS adoption were a consequence. The design and structure of NIBRS are among the reasons we argue for viewing full-participation NIBRS as an intermediate step toward a more fully modernized attribute-based crime reporting system; it is, ultimately, a database structure developed in the 1980s and not scrutinized for operational effectiveness. This barrier is also partially mitigated—at least with regard to post hoc analysis—simply by advances in computing speed and power since the 1980s. At the same time, NIBRS coordinators may also be able to leverage and build from the work of the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan, which has manipulated NIBRS structures into “flat files,” or more conventional record layouts, to make them more amenable to analysis. A final mitigation strategy is to continue the work already done within the FBI’s UCR Modernization program to make UCR and NIBRS more usable for agencies and stakeholders alike, including the development of additional data-input format options (e.g., in XML) and the evolving Crime Data Explorer portal for dissemination.
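The "flattening" idea can be illustrated with a minimal sketch. The field names and nesting below are invented stand-ins, not the actual NIBRS segment layouts: the point is only how a hierarchical incident record (one incident, many victims, many victim-offender pairs) becomes conventional one-row-per-victim records that standard statistical software can consume.

```python
# Hypothetical sketch of flattening a nested, NIBRS-style incident record
# into victim-level "flat file" rows. Field names and structure are
# illustrative only and do not reproduce actual NIBRS segments.

def flatten_incident(incident):
    """Expand one nested incident record into one flat row per victim."""
    rows = []
    for victim in incident["victims"]:
        rows.append({
            "incident_id": incident["incident_id"],
            "offense": incident["offense"],
            "victim_id": victim["victim_id"],
            "victim_age": victim["age"],
            # collapse the victim-offender relationship pairs into one field
            "relationships": ";".join(
                f"{off_id}:{rel}" for off_id, rel in victim["relationships"].items()
            ),
        })
    return rows

incident = {
    "incident_id": "2021-000123",
    "offense": "aggravated assault",
    "victims": [
        {"victim_id": 1, "age": 34, "relationships": {1: "stranger"}},
        {"victim_id": 2, "age": 29, "relationships": {1: "acquaintance"}},
    ],
}

flat = flatten_incident(incident)  # two analysis-ready rows, one per victim
```

Real NIBRS records permit up to 99 victims and offenders per incident, so the practical payoff of this kind of restructuring is considerably larger than the toy example suggests.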
A final major barrier is related to the cost-benefit assessment that underlies the generation of call or incident reports by law enforcement officers in the first place: the perception that more detailed, NIBRS-type reporting would take more time than “what we do now” in local departments, keeping officers writing reports instead of attending to their duties on the streets. The adage typically attributed to Josiah Stamp in 1929—that “the Government are very keen on amassing statistics,” but “what you must never forget is that every one of those figures comes in the first instance from the [village watchman],3 who just puts down what he damn pleases”—is grounded in some truth. But so too is a theme that recurred throughout the panel’s discussions: It would also be a major error to conceptualize officers’ reports purely in the abstract as “data input” cogs in the system and to view officers as report writers who happen to have the power of arrest. Adding to the tension, as we have previously discussed, is that the same officer report that is the germ of police-report data can also be a sensitive legal document—the precursor to the investigative process and the documentation of attempted and completed offenses that may be held up to sharp scrutiny in the judicial process.

2 A related, sometimes expressed barrier—tied to the relative inflexibility of NIBRS standards—is concern that the process of becoming “certified” by the NIBRS program to submit data in the NIBRS format is overly time-consuming. Indeed, it has been stated that failure to achieve all of the requirements for NIBRS certification is the single largest factor that has held up direct connection between the Defense Incident-Based Reporting System (DIBRS) and NIBRS (U.S. Department of Defense, Inspector General, 2014).

3 The quotation—referring to the collection of information in India—used the term chowty dar.
This is a perpetual concern and one without a pat, easy remediation strategy; for local departments, the time management trade-off between an officer spending even 20 minutes writing out a report versus returning to active patrol after a quick half-page scribble is neither easy nor obvious. We took one important mitigation step in formulating our proposed classification and associated attributes; as described in Section 5.2.2 of Report 1, we very deliberately limited the range of attributes to things that would be relatively straightforward for a beat officer (or survey respondent, or administrative database recordkeeper) to objectively assess and document. In doing so, we excluded a great many things that would be useful or even important to know about crime offenses and kept the scope of the attribute collection to roughly the same level as the current NIBRS data elements. But, otherwise, the only mitigation of this concern can come through experience. Departments will rethink and retool their own procedures for reporting (and records management based on those reports), work through any complications that may arise, and build from lessons learned in pilot work and by NCS-X/NIBRS adopters adjusting to new work flows. In our discussions, the ideal for initial report generation that recurred throughout was, effectively, a software-solution model—one that, like tax preparation software, echoes concepts of good survey design by guiding people through simple, stepwise questions. In this structure, an officer might characterize a broad, general offense group, but the proper offense code(s) would ultimately be derived from the attributes and other information recorded in the report.
This remains a useful ideal vision that is likely already achieved in part in some agencies’ information management systems. But just as information systems vary across departments, so too will agencies’ proximity to this kind of ideal vary. Officer report generation (and the priority that local departments put on it) runs the gamut from hand-filled paper “top sheet” descriptions (with or without a dictated narrative) that may be keyed later by records clerks to records fully born in electronic form in officers’ Mobile Computer Terminal units in their vehicles. We are heartened, through our discussions with practitioners and with those who have gone through the conversion to fully electronic reporting, that this is a general area in which local agency culture is continuing to evolve: Our sense is that report writing is increasingly seen by local law enforcement agencies as an indicator of their quality and effectiveness, and so improving it and training for it is growing in importance (ideally, with generation of the material for crime statistics as a transparent by-product). When we asked our meeting participants about the difficulty in making the transition—whether there were loud complaints about the time investment and such—the collective answer was that there was early resistance to change (just as there is with any policy), and that the transition was easier
for newer personnel who were less steeped in older routines. Our discussions suggested other management strategies that might be helpful—striving always to make reporting as painless as possible (e.g., tailoring the reporting “form” to the data entry mode or minimizing the number of “clicks” needed to complete fields if a report is being completed on a mobile device), emphasizing the role of supervisors in encouraging accurate reporting and catching errors, and rewarding successes (e.g., when a crime is solved or a case is won based on precision in reporting, making those successes known). But, in the end, the only way to overcome resistance is by simply insisting on the policy change and just doing it.
As recounted in detail in Section B.4, data quality is a topic with which the UCR police-report data apparatus has struggled for some years. Researchers have long documented shortcomings in the UCR data, including how the UCR Program deals with missing data through processes like imputation. The first serious attention to the FBI’s imputation routines appears to have come about when BJS was required by a 1994 law to develop formulas for allocating Local Law Enforcement Block Grant Program funds tied to UCR totals. Following the law’s guidelines, BJS analyzed raw crime data provided to the national UCR program, but quickly found that 19 percent of the 18,413 “contributing” agencies in the relevant time period (36 months, 1992–1994) had not reported crime counts for any month in the range; an additional 17 percent missed at least one month of reporting (Maltz, 1999:7,9). Thereafter, BJS provided some technical assistance to the FBI on improving its imputation processes, including convening an expert conference on the topic.4 Concern over imputation routines—for UCR Summary counts, much less NIBRS-format data—exists for all offenses.5 To the extent that the FBI imputes for nonresponse now, it seems to be heavily if not exclusively based on stratification along a single variable—the size of the agency’s service population—which is overly simplistic; partial-year counts are commonly weighted up to a full year without reflecting any kind of seasonal component.
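To illustrate why ignoring seasonality matters when weighting partial-year counts up to a full year, consider the following sketch. This is not the FBI's actual routine; the function names, counts, and seasonal shares are invented for illustration, and in practice the shares would have to be estimated from fully reporting peer agencies.

```python
def naive_annualize(monthly_counts):
    """Scale the reported months up to a 12-month total, ignoring seasonality.

    monthly_counts is a 12-item list, with None for unreported months.
    """
    reported = [c for c in monthly_counts if c is not None]
    return sum(reported) * 12 / len(reported)

def seasonal_annualize(monthly_counts, seasonal_shares):
    """Weight up using each reported month's expected share of the annual total.

    seasonal_shares[i] is the fraction of a typical year's offenses occurring
    in month i (shares must sum to 1); a hypothetical input here, which would
    be estimated from fully reporting agencies in the same stratum.
    """
    reported_total = sum(c for c in monthly_counts if c is not None)
    reported_share = sum(s for c, s in zip(monthly_counts, seasonal_shares)
                         if c is not None)
    return reported_total / reported_share
```

An agency that reports only its high-crime summer months would be overestimated by the naive scale-up relative to the seasonally weighted version, and vice versa for winter-only reporting.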
4 In 2004, BJS issued a technical report (Bauer, 2004) summarizing BJS’s final allocation formula and the amounts of money disbursed to the states under the Local Law Enforcement Block Grant Program from 1996 to 2004. BJS was formally tasked with deriving the formula for the Edward Byrne Memorial Justice Assistance Grant program, the successor to the block grant program (Hickman, 2005).
5 The problem may be particularly acute for the offense of arson; though arson was added to the UCR in 1979, the extent of missing data in the arson series has been such that subnational estimates for arson have not been produced.
There is an unfortunate tendency to assume that simply getting information into a records management system is sufficient, or that the process of entering data into a system with numerous “edit checks” and the running of automated validation routines is adequate to alleviate error. It is still very much possible to have data submissions that meet all technical requirements and that pass any number of procedural edit checks but still suffer from major quality problems. At the national UCR-program level, the existing routines that have been deployed to detect problems in UCR Summary data (Akiyama and Propheter, 2005) are reasonable approaches:
- Cross-sectional outlier detection, or comparing an agency’s report to similar agencies based on population size, urbanization, agency type, and geographic location;
- Longitudinal outlier detection, or detecting seriously aberrant counts in an agency’s reports over time; and
- Proportionality outlier detection, or examination of deviations from stratum distribution on a number of issues (weapon distributions; proportion of simple and aggravated assault; distribution of monetary loss in property crimes; proportion of violent to property crimes).
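As a rough illustration (not the FBI's actual routine), the longitudinal check can be sketched as a z-score screen of an agency's monthly counts against its own reporting history; the threshold of 3 standard deviations is purely an assumption.

```python
from statistics import mean, stdev

def longitudinal_outliers(counts, threshold=3.0):
    """Return indices of monthly counts lying more than `threshold` sample
    standard deviations from the agency's own historical mean.

    The threshold is illustrative; a production routine would also need to
    handle short histories, trends, and seasonality.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # perfectly flat history: nothing can be flagged
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]
```

A stable series of about 50 offenses per month followed by a month of 500 would be flagged, while ordinary month-to-month variation would not.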
However, these checks are limited to outlier detection—finding numerical anomalies but not necessarily data quality problems due to misapplied definitions or the like. From our conversations with law enforcement practitioners, we heard multiple examples of agency staff who, pressed for time and under demand to show results in getting records entered into the system, responded to automated edit-check flags by simply changing relevant codes to “NULL”—enough to satisfy the edit check, but decidedly inaccurate. A U.S. Department of Defense, Inspector General (2014) report, exploring reasons why the Defense Incident-Based Reporting System (DIBRS) was not yet certified to contribute data to NIBRS, made the jarring discovery that at least one police agency within the Defense Department had effectively formalized the practice of entering “null” codes in DIBRS records that had been flagged as erroneous/anomalous by DIBRS edit checks. Functionally, this process had the effect of correcting errors through deletion of the data. Meanwhile, NIBRS itself—with its own numerous edit checks—has adopted a strategy about which we heard much from numerous stakeholders at our meetings: a curiously unforgiving “all or nothing” approach under which entire NIBRS incident records may be discarded if a relatively minor edit check is failed. The quintessential example is a homicide that occurs in conjunction with a robbery, in which the pure edit-check approach would reject the entire record if the “value of stolen property” element associated with the robbery is missing/out of bounds. Under this default approach, unless the record is corrected and resubmitted, the record of a homicide would be lost because the value of a watch stolen in the course of the robbery was not properly recorded.
In the past, and continuing today, there is no real substitute—for purposes of finding the root causes of data quality problems—for using multiple quality evaluation methods, because no single methodology can assure quality data. The most common of these methods is the direct audit, and our conversations with law enforcement executives suggested that audits are held in considerable esteem. However unusual it is to hear active requests for an audit of any kind, our discussions continually included the expressed hope that audits from the national UCR and state programs would happen more frequently and more intensively. Local departments that have experienced public challenges to their credibility over alleged miscounting of crime are particularly appreciative of the bolstered trust and accountability that can result from the audit process. Historically, the FBI’s process of audits—the Quality Assurance Review (QAR) process—has been conducted with the states on a triennial basis. Roughly speaking, the UCR Program would develop a list of agencies to be covered in the audit; this would be negotiated with the state program and, typically, 10–12 agencies would be selected for inclusion in the audit. Records and reports drawn from agency files would then be reviewed by FBI staff, either in person or remotely via mail. Budget constraints prompted a hiatus in the FBI’s QAR process, but the audits were set to begin again in mid-2016 in somewhat expanded format, based on feedback from a focus group of state UCR managers. Going forward, the FBI CJIS Audit Unit was set to offer three types of audit/review (Criminal Justice Information Services Division, 2016a:9–10):
- The successor to the traditional FBI spot-audit, a Statistical/Quantitative Review, would continue on the triennial cycle; “the national UCR Program staff will identify two offenses to be reviewed as of October 1 of each triennial cycle”; “up to 400 incidents per each offense” would be reviewed by FBI staff on a “simple random sample basis” to identify audit targets. For those sample incidents, the relevant records (such as case files and narrative reports) would be compiled by the state UCR program and reviewed by FBI staff “over a three-day period” at the state agency.6
- A Service/Qualitative Review would be an “optional add-on” to the first audit type, in which the state UCR program could designate up to two local agencies within the state to be audited—in the same manner as the Statistical/Quantitative Review—by FBI staff.
- Special Reviews would be permitted on a case-by-case and available-resource basis, and are audits done on an ad hoc basis off of the triennial cycle for “situations that require immediate FBI assistance.”
6 A later quarterly program update noted that a Probability Sampling Method had been added to this review, by which “the CAU staff will evaluate the local agency records from a sample of local agencies” in cases where “a state program is unable to collect local agency records in support of the Statistical Review methodology,” but the process was not clearly explained (Criminal Justice Information Services Division, 2016b:7–8).
There are any number of impediments to full, complete, and accurate reporting of crime and offense incidents by local law enforcement agencies—some malignant, some benign, and all potentially corrosive to an agency’s credibility. Some of these include the following:
- Varying standards across agencies for dealing with incidents, including what triggers completion of a report (i.e., whether citizen complaints/calls for service are routinely logged and start the reporting process versus having to physically report offenses at station);
- Design and implementation factors in the information management systems used at local level, including ease (or difficulty) of data entry for acceptance in system;
- State of police–community relations in the locality (poor relations may result in fewer crimes being reported to authorities);
- Operational siloes and rivalries within the agency that may impede data sharing, as may rules on victim confidentiality (e.g., domestic violence or sexual crime units may not routinely share data with records clerks);
- Inadequate training in report preparation or system usage; and
- Inaccuracy through inability to meet state or national data submission deadlines, including through shifting staff/resource priorities or software/network failures.
In addition to audits by either the national or state level, the other way in which multiple sets of eyes might be drawn to potential data quality problems is through the institution by local departments of multistage processes for the review of officer-filed reports—that is, trying to catch sources of error at the origin of the eventual crime data record. Multistage review, in which both a supervisor and records personnel would have the ability to suggest corrections—and the report/record would be opened for correction and resubmission—was expressed to the panel as an effective best practice model to consider. But it is also one, it was just as quickly noted, that smaller and less-resourced departments might simply be unable to do or afford (not least because dedicated, separate records staff might be beyond the agency’s resources).
Against the backdrop of all these issues, it is very deliberate that the ideal coordination and governance roles that we outlined for national crime statistics repeatedly reference data quality issues. There is much work to be done in bringing procedures for handling missing data up to expected standard practices of a statistical agency—and likewise for the development of edit checks and remediations that flag anomalies but that do not risk destruction of important partial data. Part of demonstrating national commitment to the collection of national crime statistics must be finding ways to continue to improve and expand systematic spot audits. And, in reframing national crime statistics data collection as a strong federal-state cooperative program, it should be possible to
converge on something close to this ideal, rough workflow for the police-report component:
- The national coordinating body/governance structure establishes uniform, national definitions of offenses (via the classification), supporting rules, and data quality procedures.
- Local law enforcement agencies submit offense reports (or transfer records) on a monthly basis (at least) to a state-level coordinator, which is enabled to serve as a check on data quality: vetting the data, converting codes (if not already done by local software) in records to meet the national definitions, flagging incident reports/records with errors, and returning them to the local agencies.
- Local agencies correct errors and resubmit to the state coordinator; data are checked again and, pending successful review, forwarded to the national coordinating body.
- The national coordinator vets the data, flagging incident reports/records with errors, and returns them to the state coordinator; in turn, the local agency is contacted and corrects and resubmits problematic reports.
- On a periodic basis, local agencies take stock of recurring problems, retraining data entry personnel as necessary and modifying software (or contacting vendor to modify software) to resolve issues.
- On a periodic basis, state coordinators take stock of recurring problems, modifying software (or contacting vendor to modify software) as needed to resolve issues.
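The flag-and-return principle running through the workflow above, in which records failing an edit check are returned for correction rather than discarded, can be sketched as follows. The field names and checks are hypothetical, chosen only to illustrate the partitioning step a state or national coordinator would perform.

```python
def vet_records(records, checks):
    """Partition incoming incident records into accepted and flagged sets.

    `checks` maps a check name to a predicate returning True if the record
    passes. Flagged records carry the list of failed checks, so the
    submitting agency can correct and resubmit; nothing is deleted.
    """
    accepted, flagged = [], []
    for rec in records:
        failures = [name for name, ok in checks.items() if not ok(rec)]
        if failures:
            flagged.append({"record": rec, "failed_checks": failures})
        else:
            accepted.append(rec)
    return accepted, flagged

# Hypothetical edit checks; a real suite would be far larger.
checks = {
    "offense_code_present": lambda r: bool(r.get("offense_code")),
    "property_value_in_range": lambda r: (
        r.get("property_value") is None or 0 <= r["property_value"] <= 10_000_000
    ),
}
```

The key design point is that a failed check annotates the record instead of rejecting the whole incident, in contrast to the "all or nothing" behavior described earlier.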
In this climate, we think local agencies would realize direct benefits—to their credibility and accountability in the eyes of the populations they serve—from a quality-oriented approach, and that this would make it easier to sustain the case for active participation in the national program.
In closing this section, we also wish to at least tentatively mention that new technologies may open additional avenues for improving overall data quality. Third-party triangulation is one approach that is done on a limited basis now, effectively matching offense-specific police-report data to incidents identified by other sources such as advocacy groups (e.g., for hate crime, domestic violence, and sexual assault cases). Such an approach might be applied for offense types for which incident-level data are available in both police-report data and from administrative/external resources. But, more generally, Web-scraping approaches that touch a multitude of Internet pages open up the possibility of such triangulation for some traditional and new offense types. In 2016, BJS briefly advanced a revision to its Arrest-Related Death reporting program that basically used Web-scraping (scouring open-source/media sites for reports of use-of-force incidents) that could then be compared to law enforcement agency-supplied data—essentially, asking local agencies how and whether they accounted for a specific sample of incidents in the media-source data. As
we noted in Section 1.3 (Chapter 1), we do not believe it to be our role to comment on use-of-force data and their development in this report, but the methodological idea is a powerful one; done well, it amounts to the dual-systems estimation technique that is now a principal source of estimation of differential undercount or overcount in the decennial census. It could be a very interesting methodology for studying the quality of data on sensitive offenses such as bias-motivated or hate crimes.
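In its simplest form, the dual-systems idea amounts to the Lincoln-Petersen estimator: incidents found in police records and in an independent, media-derived list are matched, and the size of the overlap is used to estimate the total number of incidents, including those missed by both sources. The numbers in the sketch below are purely illustrative.

```python
def dual_systems_estimate(n_source_a, n_source_b, n_matched):
    """Lincoln-Petersen estimate of the total number of incidents.

    Assumes the two sources capture incidents independently and that
    matching is accurate; both assumptions require care in practice.
    """
    if n_matched == 0:
        raise ValueError("estimator is undefined with no matched incidents")
    return n_source_a * n_source_b / n_matched

# e.g., 120 incidents in police data, 90 in media scraping, 60 matched
# implies an estimated 180 incidents overall, 30 seen by neither source.
```

The gap between the estimated total and the union of the two lists is precisely the measure of underreporting that neither source could reveal on its own.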
One of the historical themes in the development of national crime statistics that we noted in Appendix B was the escalation of warnings against simple, direct comparison between multiple city-level counts or rates of crime published in the UCR tables. The warnings were, and are, grounded in truth. A variety of external or contextual factors may partially explain differences in the figures across jurisdictions and over time, and so a simple difference in rates is not necessarily indicative of differences in criminality. In our panel’s workshop sessions, we heard from local agencies that appreciated the intent of the admonition against comparison and, indeed, had been burned in the past by overly simplistic comparisons drawn by city officials, by reporters, or by members of the public. Budgetary and resource assumptions were based on comparisons between jurisdictions similar to each other only (at best) by total population, ignoring any sociodemographic or geographic context. Yet, at the same time, it has long been recognized that benchmarking—comparison of how a particular agency’s offense patterns compare with peers—is the main (if not sole) purpose and perceived benefit of nationally compiled crime statistics to many crime data users/stakeholders. In our panel’s interactions with crime data users, local agencies noted the inherently contradictory nature of typical UCR data presentations: extensive tabular listings that practically beg the drawing of direct comparisons while simultaneously arguing against doing so.
An underlying problem is that inappropriate comparisons are often drawn, out of proper context, precisely because it is not easy to get the kind of supplemental, auxiliary data that could put crime figures in context and permit valid comparisons. In our discussions, we heard of local departments calling and working with their peers to identify jurisdictions that are similar with respect to poverty rates, extent of segregation, or other sociodemographic factors. Agencies and other data users that are more savvy to this process of digging for context wind up relying on U.S. Census Bureau data almost as heavily as the crime statistics themselves. Arguably, as we heard from other data users, it does not help matters that the exact manner by which the FBI computes the
agency service population totals that serve as denominators for calculated rates is fairly opaque to the public.
Concurrent with the beginning of our panel’s work, BJS convened several meetings of a Crime Indicators Working Group (CIWG) composed principally of local law enforcement executives and their support organizations, to which the basic question was put: What information about crime or related to crime could BJS supply that would be most useful to local law enforcement,
not only in day-to-day operations but in engagement with members of their community? The basic, ideal indicator data framework developed by the CIWG in its discussions is shown in Box D.1. Not surprisingly, CIWG participants expressed wishes for fine-grained crime data that could support studies of the temporal and spatial patterning of crime—ideally at the level of city neighborhoods, if not finer. Relative to response to crime, the group also saw a need for more longitudinal analysis of outcomes through later stages of the justice system (i.e., from arrests through prosecution/court filings through correctional disposition) in order for local agencies to measure their ultimate success or effectiveness in addressing crime problems. But the brief framework description in the box understates—in its reference to “indicators of public safety and disorder” as well as community sociodemographic information—the strong desire expressed by the group for the simultaneous display and dissemination of crime data and the contextual/noncrime data that could help explain variations therein.
Accordingly, this section appeals for a change in approach in the presentation of modern, national crime statistics: Give data users the necessary tools to make crime rate comparisons in proper context rather than dismissively advise against doing that one thing—drawing comparisons—that is the primary use of nationally compiled data for most stakeholders. Specifically, we suggest that the dissemination platform for new, national crime statistics should also provide straightforward access to auxiliary data resources and tabulations that may permit meaningful comparisons between jurisdictions and across time. The coordinating body for crime statistics that we recommend in Chapter 3 should work with the U.S. Census Bureau, possibly through the quinquennial Census of Governments and related geographic support services, to articulate (and update) geographic boundary files for law enforcement agency service areas—making it easier to generate special tabulations (of population estimates and data from other sources such as the American Community Survey) for more accurate rate calculation and fuller analysis of crime data in community context. To clarify and reiterate, this is not an argument that the national crime statistics program should be in the business of creating place-based reports and analysis on its own, but that it should make the construction of the same—by other stakeholders—feasible. As hinted at by the CIWG in its proposed framework, the generation of contextual variables on the community context of crime—relevant noncrime data that may usefully bear on crime levels and trends—will likely involve multiple sources. Accordingly, an important frontier for the NCVS is making more effective use of the survey’s completed interviews—including the survey’s “screener” portion and those questions asked of all households whether they have specific victimization incidents to report or not—to generate this kind of information.
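The rate calculation at issue is elementary; what is hard to obtain is an accurate, well-documented denominator. A minimal sketch of the conventional per-100,000 normalization, with an invented count and a stand-in for the service-area population estimate that boundary files would make easier to produce:

```python
def rate_per_100k(offense_count, service_population):
    """Offense rate per 100,000 residents of an agency's service area.

    The service_population argument stands in for the geographically
    derived population estimate discussed in the text; how that figure
    is computed is exactly what is currently opaque to data users.
    """
    if service_population <= 0:
        raise ValueError("service population must be positive")
    return offense_count * 100_000 / service_population

# e.g., 250 offenses in a service area of 500,000 residents
# is a rate of 50 per 100,000.
```

Because the formula itself is trivial, any difference between two agencies' published rates is driven as much by how each denominator was constructed as by the counts, which is the core of the comparability problem described above.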
A consequence of decades of heavy focus on a small number of violent and property crimes—and near-exclusive focus on interpersonal crime—is that the nation’s current crime statistics are particularly poor at addressing crime in which businesses, organizations, or government agencies are actors, whether as victims or offenders. Businesses’ reluctance to disclose their victimization illustrates a major underlying cause of this difficulty. For a business to admit that it has been the victim of a major credit card hack or extensive organized theft is an exposure of vulnerability and weakness that may put it at a competitive disadvantage with its peers. Big businesses and large retailers tend to have their own security and investigative personnel and their own protocols for sharing information on offenses; they may report some offenses (e.g., known shoplifting incidents involving goods above some threshold value), but otherwise, crime reporting may expose too much vulnerability. Hence, where others see “crime” and “policing,” businesses see “risk” and “risk mitigation”; where others see “theft,” “pilferage” (theft by an employee), or “shoplifting,” businesses see “stock loss” or “shrinkage.” The picture of crime in the United States is necessarily incomplete without more attention to businesses or non-person actors; it might be convenient to simply write off shrinkage, or fraud against the government, or like offenses as seemingly invisible offenses, but the sums of money involved are immense.7 Ultimately the costs associated with these offenses show up downstream in consumer prices or charges to the taxpayer.
Because the issue has parallels with crime data collection more broadly, a brief summary of efforts to assess retail theft is worth recounting:
- Federal law enacted in January 2006 (119 Stat. 3092) mandated that “the Attorney General and the Federal Bureau of Investigation, in consultation with the retail community,” create a national database “to track and identify where organized retail theft8 type crimes are being committed” in the nation. The law specifically directed that the database be “housed and maintained in the private sector,” but that it “allow Federal, State, and local law enforcement officials as well as authorized retail companies (and authorized associated retail databases)” to populate and access the database.9 The resulting database, the Law Enforcement Retail Partnership Network (LERPnet), was developed through a partnership of the FBI, the Retail Industry Leaders Association, the National Retail Federation, and the Food Marketing Institute (FMI); responsibility for LERPnet was passed to Verisk Retail (part of Verisk Analytics) in 2011, and Verisk announced a third release of the underlying database platform (dubbed LERPnet Falcon) in June 2013.10
8 The law defines “organized retail theft” as (a) violations of state theft/shoplifting law that involve both “quantities of items that would not normally be purchased for personal use or consumption” and “the purpose of reselling the items [or] reentering the items into commerce”; (b) the receipt, sale, disposal, or any handling of property “[known] or should be known to have been taken” in violation of condition (a); or (c) coordination/organization to do either of the preceding things (119 Stat. 3092).
However, what began as a clear legal mandate for data collection—one with a promise of funds attached, and one explicitly defined to be on business’s terms and in private-sector operation—seems to have unraveled. When the U.S. Government Accountability Office (2011) was tasked to report on the state of information on organized retail theft, the discussions with corporate security personnel summarized in the report were fairly scathing of LERPnet. The report found that businesses had found the system difficult to use and were finding considerably greater success in banding together in regional consortia, maintaining their own databases of organized retail theft occurrences. Meanwhile, as of August 2017 (and for some unknown length of time prior to then), the LERPnet database has at least been closed to new participants and is possibly shut down entirely; the website http://www.veriskretail.com/lerpnet/ returns an advisory that “Verisk Retail is not currently onboarding LERPnet clients as we re-envision our [organized retail crime]-oriented solutions for retailers and law enforcement,” and other commonly listed URLs for the LERPnet interface appear to be nonfunctional.
9 The law also called for creation of a joint task force to “combat organized retail theft,” and so the $20 million in authorized appropriations over four fiscal years was aimed principally at “educating and training federal law enforcement” and “apprehending and prosecuting individuals,” with the creation of the database as the third and final priority. The law included more generic assertions that the Justice Department “make available funds to provide for the ongoing administrative and technological costs to federal law enforcement agencies participating in the database project” and that it “may make grants to help provide for the administrative and technological costs to [participating] State and local law enforcement agencies.”
- To provide some general information on retail theft, the National Retail Federation has sponsored an annual National Retail Security Survey for just over 25 years; the survey is now administered as a web survey to a small set of company executives or security officers (80–100 in the 2015–2017 administrations of the survey, judging from press releases accompanying the survey results). The 2016 instance of the survey (covering retail results in calendar 2015) estimated inventory shrinkage levels as averaging 1.38 percent of total sales among respondents—extrapolating to a $45.2 billion national problem. The survey tends to show that the majority of that shrinkage is accounted for by two classes of theft—external theft (including shoplifting and organized retail theft, at 36.5 percent in the 2017 survey) and internal theft (pilferage/employee theft, at 30 percent)—with administrative or paperwork error, vendor fraud or error, and unknown sources comprising the rest. But, beyond a rough glimpse of the scope of the problem, the survey is not capable of providing additional detail or insights. The survey also asks about the presence of various security measures, but the univariate data from the survey are an incomplete measure of their potential effectiveness.11
- Finally, and directly akin to this panel’s work in trying to define the scope of crime in general: The Retail Industry Leaders Association (RILA) commissioned a study to go back to first principles in addressing the problem of retail loss generally. The resulting study, Beyond Shrinkage (Beck, 2016), does exactly what this study did as its first step: It defines a hierarchical classification structure that decomposes the concept of “total retail loss” as a prelude to suggesting pilot work and data collection to begin to fill in that classification with data. Specifically, the proposed taxonomy imposes a first-level split by retail setting (store, retail supply chain, e-commerce, and corporate); within each of these retail settings, known stock loss is further decomposed into final categories depending on whether they are malicious or nonmalicious in nature (each of the four settings are subject to unknown stock loss, as well). The full details and definitions of the classification are given in the study report but, illustrative of the range of crime types encompassed in this framework, the general malicious categories under the four retail settings are:
- In-store: external theft, internal theft, customer frauds, voucher/loyalty card scams, burglary/criminal damage/arson, cash theft
- Retail supply chain: internal theft, burglary/criminal damage/arson, in-transit losses
- E-commerce: customer frauds (including charges to false/stolen credit card or fraudulent claims of failed delivery)
- Corporate: corporate frauds
11 The full survey results are available to National Retail Federation members, but the basic shape of the surveys can be inferred from federation and related press releases, such as https://nrf.com/resources/retail-library/national-retail-security-survey-2017, http://losspreventionmedia.com/insider/loss-prevention/2017-national-retail-security-survey-now-available-for-data-submission/, and https://nrf.com/media/press-releases/retailers-estimate-shoplifting-incidents-of-fraud-cost-44-billion-2014.
We draw two principal messages from our discussions with stakeholders, our deliberations, and the preceding information on measuring retail loss. The first is that, fundamentally, collecting data on crimes affecting businesses largely amounts to trying to achieve information sharing in a culture where information sharing is anathema. Hence, in addition to our system’s data collection modes of police-report data, personal/household survey information, and administrative-type records, a fourth option involving the cultivation of “safe havens” for information sharing between organizations may need to be developed. The idea is that the shared risk of revealing and discussing crime in such a safe haven would be offset by the shared reward of picking up best practices for responding to offenses. Such safe havens might engender opportunities for anonymized data collection more effectively than police-report data or commercial victimization survey data ever could, though it may take time for culture change—and trust—to develop. One possible model here is the National Cyber-Forensics and Training Alliance (NCFTA), a Pittsburgh-based nonprofit with which we spoke about the challenges of defining cybercrime but that also provides a useful, broader organizational example.12 NCFTA effectively serves as a roundtable for companies, government agencies, and academia to work together to combat cybercrime. Its work involves identifying individual and collective risk, formulating strategies for mitigating cybercrime and stopping its spread, and ultimately neutralizing it (through law enforcement, as in efforts to shut down several “dark web” sites, or through shared software tools and solutions). Primarily, the alliance is meant to serve as a bridge for both resources and information—getting partners to band together and share information to make difficult problems tractable, for the good of the whole coalition, when individual companies’ risks/exposures might not be worth the cost of seeking law enforcement intervention. The second principal message is the related point that trade associations such as RILA might emerge as the source of data and key indicators for particular offenses, on the grounds that businesses might be more amenable to information sharing within an association than with law enforcement, survey researchers, and the like.
Moreover, undertakings by trade support organizations, such as the proposed classification system for total retail loss, should be strongly encouraged, as they foster principled efforts to define and measure crime-related phenomena.
12 In addition to general cybercrime, NCFTA currently does major work in the areas of cyberfinancial crime, consumer/e-commerce fraud, and malware deployment.