CRIME STATISTICS HAVE MANY USERS, and the panel elicited extensive input on the uses of crime data through a series of open meeting discussions with researchers, practitioners, advocates, business representatives, policy makers, and others. These discussions were designed to hear a wide range of views about current uses of data, gaps in users’ data needs, and what an ideal set of crime indicators for the nation would entail. In doing so, the panel also heard from numerous users who themselves gather such crime data, such as representatives from police departments and other investigative agencies, businesses, and researchers. These discussions provided additional information about the practical challenges involved in obtaining the kinds of data that are sufficient for different purposes. In this chapter we provide an overview of the broad range of uses of crime statistics discussed during these meetings so that the scope of crime information needs can be better understood in conjunction with the taxonomy that is proposed in this report. Issues associated with the implementation challenges of the proposed crime classification system raised by potential data collectors will be discussed in the final report.
In general, the uses of existing crime data include operational and resource allocation decisions by law enforcement, local and state government agencies, and businesses and other groups. Crime data are also a critical source of information for program and policy evaluations by researchers in government, academia, and the public and private sectors. They are also used by advocates of particular issues and by the public, and are often seen as measures of accountability. For some of these purposes, existing crime data appear to be adequate, though users often noted many ways that the available data could be
improved. For many types of crime, however, the data are incomplete, inconsistent, or simply unavailable.
We profile the demands for crime statistics, and summarize what we heard from a multitude of users and practitioners of crime data, for the obvious and important reason that those discussions are a major part of our evidentiary record and greatly informed our panel’s discussions. But we also do so because it is certainly rare for a review of U.S. crime statistics to tap so broad and diverse a range of perspectives. The International Association of Chiefs of Police (IACP) committee that created the Uniform Crime Reporting (UCR) Program was, by its nature, composed of 12 police chiefs or commissioners; its work was supported by an 11-member advisory committee whose membership included future Federal Bureau of Investigation (FBI) director J. Edgar Hoover and Census Bureau director William M. Steuart and consisted largely of federal, state, or local agency heads. The consultant committee engaged by the FBI in 1957 (Federal Bureau of Investigation, 1958:9) was chaired by an academic researcher and rounded out by two police chiefs. And the redesign consortium that worked on the revised National Crime Victimization Survey in the late 1980s drew extensively from academia, survey research organizations, and statistical agencies but—with its focus on the survey—did not extensively mine law enforcement practitioner input. What follows, then, in this brief overview of the users and uses of crime data, is a rare attempt to get all sides to the same table, so to speak, in the hopes of envisioning a more useful crime statistics system.
Law enforcement agencies are one of the major providers of crime data, and the ways in which different agencies in the country use their crime data differ considerably. Some of the smaller local police departments in the country, for example, simply record the crime incidents that come to their attention and forward their reports to their state’s Statistical Analysis Center or directly to the FBI’s UCR program. However, not all police departments do this on a regular basis, as participation in the national UCR program is voluntary. The reasons for non-regular participation are varied, but in some cases this is because the agency has relatively few crimes to report on a monthly basis and therefore reports are accumulated and then submitted periodically or annually. The majority of the nation’s approximately 18,000 law enforcement agencies, however, do report regularly to the UCR program, using either the Summary Reporting System (SRS) format or the National Incident-Based Reporting System (NIBRS). Many police departments use the data to issue their own reports on crime in their jurisdictions on an annual basis, and most states issue annual reports based on the compilations of local agency crime reports
that are sent to them. These reports are then used to inform the public and government officials about local and state levels of crime, and changes in the levels of crime over time.
Aside from serving as a general indicator of crime in their own communities, crime data compiled by state, local, and other law enforcement agencies are often used for strategic decision-making and operational or tactical purposes. Many police departments use what is referred to as a “CompStat” approach in which detailed departmental crime data are summarized by in-house crime analysis units and disseminated to police commanders (typically on a weekly basis). These data are used to discuss the nature of emerging and continuing crime problems in different areas of the jurisdiction. The purpose of these meetings is to track crimes and the efforts used to deal with these crimes, and to provide information that allows for better decision-making about tactical strategies for addressing these problems. Another important aspect of CompStat meetings is that they provide police commanders with greater managerial control over their field operations. However, it can be argued with similar strength that the CompStat approach to police management has drawbacks to temper its benefits—not the least of which is a sort of “negative quota” mentality that comes from managing to crime counts, creating at least the appearance of an incentive to manipulate or misreport crime incidents so as to curb the appearance of crime spikes (see, e.g., Eterno and Silverman, 2012). More fundamentally, not all police departments have the luxury of dedicated crime analysis units—and even those that do face the difficult problem of putting CompStat-type crime numbers in proper context, to understand the underlying dynamics behind upticks or downticks in some crime types.
An important concern that was raised about police-based crime statistics is the timeliness of their release from the FBI’s UCR program; pointedly, even participants from departments that reported making use of “evidence-based” approaches spoke of having little use for time-lagged counts that progressed through the entire UCR collection process. Crime statistics typically are released by the FBI in its annual publication Crime in the United States approximately 10 months after the collection year (for example, crime statistics for 2014 were released during the last week of September 2015). Although police departments have crime data for their jurisdictions as soon as they are compiled in their own data management systems, information about crime in other jurisdictions is not available to them through the UCR program until much later, thus precluding timely comparative assessments about how changes in their crime rates may be related to problems occurring elsewhere. Moreover, the information available in the UCR annual publication necessarily excludes details on the types of problems that may be emerging because the data are reported in summary form, primarily consisting of the total counts and rates for the eight index offenses (i.e., the eight major categories of violent and property crime), rather than with the more expansive detail that the NIBRS system can
provide (e.g., 23 offense categories, victim characteristics, etc.). The lack of detail in the annual report is largely due to the fact that NIBRS crime reporting is not used by the majority of police departments (approximately 6,300 agencies use NIBRS to report to the FBI) and therefore such detailed crime comparisons cannot be made across all agencies.
The importance of the delay in the release of crime statistics was most recently made evident in 2015 when police chief organizations and the U.S. Attorney General held meetings to discuss apparent homicide and crime increases across the country;1 a similar set of discussions was convened in 2006 (see, e.g., Police Executive Research Forum, 2006; Rosenfeld, 2007).2 During both of these periods, law enforcement agencies were in need of timely information about whether the increases in violence they were experiencing locally were unique to their own cities or part of a broader national pattern and trend. Such information informs police departments about the nature of their crime problems and their needs for resources, and also informs the public about whether and how increases in crime in their areas might be unique (for example, whether increases in homicides are limited to drug-related incidents or domestic violence). However, because the necessary crime data would not be available from the FBI until long after the apparent crisis period, police organizations, including the Police Executive Research Forum in 2006 and the Major Cities Chiefs Association in 2015, commissioned their own informal surveys of their membership in an attempt to obtain the data necessary to evaluate the problem. A critical limitation of these ad hoc surveys of crime is that they are based on information of unknown reliability, as cities experiencing crime increases are more likely to be over-represented in the data. Other examples of frustration over the lack of timely release of crime data are evident when advocacy groups, news media, and academic researchers compile their own city crime databases using the “real-time” data that are available on the large majority of urban police department websites.
Such ad hoc, nonsystematic gathering of crime data is problematic, leading to unproductive debates about resource needs, the causes of apparent increases and decreases in crime, and accountability. During crisis periods, an important problem with the current system noted by law enforcement agencies and others is the time delay between data submission to the UCR program and dissemination.
Policy makers at the local, state, and federal levels need accurate and timely data on crime to inform budgetary decisions about the amount of resources needed to address crimes of various types. Crime data are used to inform projections of the resources needed for criminal justice agencies to investigate cases, prosecute and defend arrestees, supervise persons on probation and parole, and incarcerate offenders in jails and prisons. In addition, policy makers may use crime and victimization data to estimate the amount of resources needed for specific types of crime victims (such as child abuse, intimate partner violence, and elder abuse victims), and grant agencies often require victim service providers to use such data to evaluate the effectiveness of their programs designed to reduce these crimes.

2See also the announcement of one such summit at http://mpdc.dc.gov/release/major-cities-chiefs-association-national-summit-violence-america-press-event.
Tasked by its legal authorizing language to “give primary emphasis to the problems of State and local justice systems” (42 USC § 3731), BJS has cultivated a network of Statistical Analysis Centers (SACs). The SACs have a coordinating and support link in the Justice Research and Statistics Association (JRSA), and BJS provides limited funding and technical assistance to SACs through the agency’s State Justice Statistics (SJS) grant program. Currently, there are 51 SACs in the United States that are responsible for collecting and distributing state and local crime and criminal justice data from the states and U.S. territories. The organizational characteristics and placement of SACs vary across states and territories, though most are housed within their State Administering Agency (typically located in the office of the Governor or Attorney General). In eight states, the SACs are housed in other state agencies (such as the state Highway Patrol) and in another seven states, they are located in universities. Only two states (Texas and North Carolina) do not have SACs.3 In some states, SACs play the role of, or are co-located with, the state UCR Program that relays police-report data to the FBI and the national UCR Program. However, the basic role of the SACs is not as an intermediate collector for any nationally compiled crime data but as a critical, research-oriented interpreter of justice-related data (including non-BJS crime statistics) for state policy makers. Our predecessor National Research Council (2009a:175) panel was highly complimentary of this “relatively low-cost activity on BJS’s part,” noting that it brings with it “great dividends in terms of outreach and feedback.”
SACs also play an important role in compiling information for state planning initiatives related to the criminal justice system. They use crime and criminal justice data to inform a variety of stakeholders about the nature of their data availability, collection processes and procedures, and promote the capacity of organizations to conduct evaluations of various criminal justice programs and public policies. As one example of such work, the New York
State Division of Criminal Justice Services (with approximately 30 personnel) provides tools to help facilitate and improve the automated transmission of crime data from police departments, and to reduce errors in the uploading of incident reports. It also develops special topic reports for the state on issues such as homicide and domestic homicide, and has developed tools to assess cost-benefit approaches to examine the state’s alternatives-to-incarceration programs. A second example comes from Arizona’s Criminal Justice Commission, which informs local, county, and state agencies about the strengths and limitations of the criminal history records data repository; its analysis of criminal history records has been used to assess the effectiveness of funds intended to reduce the amount of time necessary for criminal case processing. A third example is the Georgia SAC, which uses crime and criminal justice data to conduct needs assessments of state drug enforcement strategies by combining data from numerous sources, and also conducts victim service needs assessments by linking geographic crime and victim claims data. These examples of SAC activities are intended to illustrate some of the ways in which crime data are used to help inform state policy makers and the public about crime and responses to crime.
Federal, state, and local legislators often are provided with crime and justice data to assist them with efforts to identify priority areas, design responsive legislation, and help make budgetary decisions for law enforcement and justice agencies in specific locales. Reports based on these data may come from numerous sources, including members of their constituencies, advocacy groups, or research from state SACs or other crime analysts. Because of the overlap in data use by legislators and the other users noted here, only a few illustrative examples of how these officials use crime data are provided. It should be noted, however, that many meeting participants voiced concerns that legislators often fail to use crime data to inform their decisions and legislative actions.4
Not all issues of concern to legislators and their constituencies can be addressed with existing crime data. Consequently, in some of these instances legislative efforts have been made to require the collection of new crime data. For example, the Hate Crimes Statistics Act of 1990 ordered the Department of Justice to establish guidelines and gather data on crimes involving prejudice based on race, ethnicity, religion, or sexual orientation. The original act developed out of concern over the national coverage and accuracy of data compiled by third-party sources such as the Anti-Defamation League, as well as growing concern over perceived increases in such bias-motivated crime over the preceding decade. Independent of the statute, then-President Clinton announced in 1997 that the NCVS also would be used to produce estimates of hate crimes because of worries that hate crimes might be particularly underrepresented in reports to law enforcement. A second example of the use of legislation to spur the development of data on a specific form of crime was the Trafficking Victims Protection Reauthorization Act of 2005, which required biennial reporting on the scope and characteristics of human trafficking in the United States (Banks and Kyckelhahn, 2011). As discussed earlier in this report, a third example appeared in the final omnibus federal spending bill for 2015, which included provisions that required the NCVS to “include statistics relating to honor violence,” though this was done without specifying what is meant by the term or noting why the NCVS, as opposed to some other crime data collection effort, was considered to be a reliable tool for doing so (P.L. 113-235).

4Examples of such concerns also can be readily found in newspapers: see, for example, the Los Angeles Times editorial “Crime legislation: Focus on facts, not fear” (April 7, 2013).
Federal legislators often request crime information and related assessments from the Government Accountability Office (GAO) to inform legislative issues, and reports from these requests are made available on the GAO website. GAO reports cover a wide range of crime-related topics and include assessments of the availability of data on specific crimes (for example, on sexual assault, fraud risks in federal programs, and cybersecurity), the quality of some of the existing crime data, the rigor of the methodologies used in research evaluations of crime-related programs, and the state of the evidence about specific crime programs. Though these reports are often requested by federal legislators, it is challenging to determine whether and how the findings in these reports may have been used subsequently by legislators. It should be noted that when such assessments are completed, the results may lead to well-founded decisions to offer no legislative changes. However, evidence of such decisions is inherently more difficult to obtain.
Policy formulation requires identifying problems, weighing the importance of those problems based on their magnitude and impact, and developing policy approaches to address them. Policy implementation involves making decisions about the appropriateness of the policy, encouraging people to adopt that policy, and securing the resources necessary to carry it out. Accordingly, in the crime and justice area, crime statistics play vital roles in both policy justification and fund allocation. Participants in our meetings acknowledged that nationally compiled crime statistics are certainly not the only determinant of what policies are developed and put forward, in part because many crime-related problems currently do not have well-developed comparative data to support crime concerns. But the participants also noted that crime statistics are routinely used to make the case for the importance of the problem the policy
is designed to solve. Crime is a high-profile and sensitive issue, and there is still a tendency to “govern by anecdote” or formulate policy responses on the basis of a single, particularly dramatic incident. But the panel also heard that agencies’ ability to put such exceptional incidents into broader context through the use of crime statistics can prove particularly effective in initiating and evaluating policy changes or maintaining current policies.
The Bureau of Justice Assistance (BJA), a component of the Office of Justice Programs (OJP), is the largest public funding agency in the justice area, moving on the order of $2.1 billion annually. Much of this funding is awarded to state and local law enforcement agencies by legally defined formula, based on calculations by BJS that use UCR data. The largest component of BJA-distributed funds is the Edward Byrne Memorial Justice Assistance Grant (or Byrne JAG) program, which provides grants to state and local law enforcement departments for both planning and practical (e.g., procurement of new equipment) purposes. Overall funding levels for the Byrne JAG program have become a recurring point of contention in the annual congressional appropriations cycle—typically because some departments’ allocations are trimmed to be added back into the JAG pool of funds.
By law, JAG funds are explicitly tied to the proportion of UCR crime in a jurisdiction, and reporting of three years of data is a prerequisite for fund eligibility. This is implied by the allocation formula and made explicit in another passage in the law. One of three specified limitations on allocations to local governments is that “no allocation under this section shall be made to a unit of local government that has not reported at least three years of data on part 1 violent crimes of the Uniform Crime Reports to the Federal Bureau of Investigation within the immediately preceding 10 years” (42 USC § 3755(e)(3)).5 For funds going to states, the law directs that 50 percent of the pool be distributed proportionately to state population size, and 50 percent distributed based on the ratio of “the average number of part 1 violent crimes of the Uniform Crime Reports of the Federal Bureau of Investigation reported by such State for the three most recent years reported by such State” to “the average annual number of such crimes reported by all States for such years” (42 USC § 3755(a)(1)). For funds to local governments, there is no comparable population-based allocation, but rather the full pool is allocated “bear[ing] the same ratio to such share as the average annual number of part 1 violent crimes reported by such unit to the Federal Bureau of Investigation for the 3 most recent calendar years for which such data is available” to the total reported Part I crimes for the state (42 USC § 3755(d)(2)(A)). Even under some special circumstances, BJA grant allocations are made to correspond to UCR-reported numbers. Notably, an emergency supplemental appropriations act in 2007 dedicated $50 million in new state and local law enforcement grant funds “for local law enforcement initiatives in the Gulf Coast region related to the aftermath of Hurricane Katrina”—but subject to being “apportioned among the States in quotient to their level of violent crime as estimated” by the UCR Program “for the year 2005” (P.L. 110-28; 121 Stat. 152).

5As described earlier, there is no direct/explicit authorization of the UCR program in the U.S. Code, relying instead on general powers vested in the Attorney General to collect crime records. However, the use of UCR statistics for Byrne JAG grant allocation is sufficiently ubiquitous that the Byrne JAG passages of code sometimes become vehicles to adjust UCR code. For instance, the bill H.R. 906 in the 113th Congress sought to define “part 1 violent crimes” in the JAG statute to include human trafficking (commercial sex acts) and human trafficking (involuntary servitude)—and so requiring immediate elevation of those offenses to Part I status and collection of them.
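The statutory 50/50 state allocation just described is simple arithmetic, and can be sketched in a few lines of code. The state labels, population figures, and crime counts below are entirely hypothetical, not actual UCR or Census data:

```python
# Sketch of the Byrne JAG state-allocation arithmetic described in
# 42 USC § 3755(a)(1): half the pool is distributed by population share,
# half by each state's share of average Part 1 violent crimes over the
# three most recent reported years. All figures here are made up.

def jag_state_allocations(pool, populations, violent_3yr_avg):
    """Return each state's allocation from `pool` under the 50/50 split."""
    total_pop = sum(populations.values())
    total_crime = sum(violent_3yr_avg.values())
    return {
        state: 0.5 * pool * (populations[state] / total_pop)
             + 0.5 * pool * (violent_3yr_avg[state] / total_crime)
        for state in populations
    }

# Illustrative (hypothetical) figures for three states
alloc = jag_state_allocations(
    pool=1_000_000,
    populations={"A": 5_000_000, "B": 3_000_000, "C": 2_000_000},
    violent_3yr_avg={"A": 20_000, "B": 25_000, "C": 5_000},
)
```

One consequence of the formula, visible even in this toy example, is that a state with a disproportionate share of reported violent crime (state B here) draws more than its population share alone would yield, which is why complete and timely UCR reporting has direct fiscal stakes for jurisdictions.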
Other law enforcement and public safety grant-making programs administered by the BJA and other OJP units are not as explicitly based on UCR or other crime statistics. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) uses a relative rate index matrix that includes the ratios of UCR-measured arrest rates for delinquency among different racial and ethnic groups as a factor in some of its funding. The Office of Violence Against Women (OVW) generally uses population numbers instead of crime statistics, while the fixed disbursements and other grants from the Crime Victims Fund administered by the Office for Victims of Crime (OVC) are not directly tied to crime statistics information. It should also be noted that nationally compiled crime statistics typically are not used by the federal grant-making agencies for accountability purposes—that is, to judge the effectiveness of a previously issued grant. Rather, these numbers are used as indicators of resource need associated with the magnitude of the crime problem. That said, as grantmakers such as BJA shift toward the integration of evidence-based policies and research into their work, crime statistics and their analysis in community context would be expected to play a stronger role in grant application decisions.
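The relative rate index mentioned above is essentially a ratio of group-specific rates, with one group serving as the reference. A minimal sketch, using hypothetical group labels and counts (not actual OJJDP data):

```python
# Sketch of a relative rate index (RRI) of the kind OJJDP uses to compare
# UCR-measured delinquency arrest rates across racial and ethnic groups.
# Group names and all counts below are hypothetical.

def relative_rate_index(arrests, populations, reference):
    """Arrest rate per 1,000 youth for each group, divided by the
    reference group's rate; the reference group's RRI is 1.0."""
    rates = {g: 1000 * arrests[g] / populations[g] for g in arrests}
    return {g: rates[g] / rates[reference] for g in rates}

rri = relative_rate_index(
    arrests={"group_1": 400, "group_2": 300},
    populations={"group_1": 80_000, "group_2": 20_000},
    reference="group_1",
)
```

Here group_2's arrest rate (15 per 1,000) is three times the reference rate (5 per 1,000), giving an RRI of 3.0; such ratios, computed at various decision points, are what enter OJJDP's funding considerations.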
In at least one notable historic instance, the use of UCR data was required by law for the administration of grant funding. The Violent Crime Control and Law Enforcement Act of 1994 authorized the award of grant money to as many as 15 “chronic high intensive crime areas” in order to spur the development of “comprehensive model crime prevention programs.” No precise formula was assigned for deriving these areas except that “at a minimum,” such areas have “consistently high rates of violent crime as reported in the Federal Bureau of Investigation’s ‘Uniform Crime Reports”’ and “chronically high rates of poverty as determined by the Bureau of the Census” (P.L. 103-322; 108 Stat. 1844, 1846). In addition, other federal granting agencies outside of the Department of Justice may require UCR crime data to demonstrate need for funding. For example, the U.S. Department of Labor also required for its “Training to Work 2-Adult Reentry” grants that applicants demonstrate that the target areas for their programs are in areas of high poverty and high crime by “providing statistical data that shows that the felony crime rate of the target area is higher than the felony crime rate of one or more of the adjoining communities.” This comparative crime data can only be provided by the UCR.
Similar types of provisions hold in different states for the allocation of public safety grant funds. For instance, California’s Budget Act of 2014 (SB 852) mandated that local assistance grants from their Board of State and Community Corrections pool of $28 million include direct allocations to “be made available to the city in California with the highest rate” of particular crimes “as reported by city police departments in the most recent United States Department of Justice Uniform Crime Report”: specifically, $670,000 to the city with the highest murder rate and $665,000 each to the cities with the highest reported rape and robbery rates.
In sum, meeting participants revealed numerous ways in which crime data are required for purposes of obtaining federal or state assistance and resources for a variety of different types of programs. Meeting such requirements is not always possible, however, because data for some types of crime problems are not available from the UCR. This may be because the crime issue of concern is not recorded in police data because it lies outside the jurisdiction of local police departments (such as crimes covered by federal law), or because it is the responsibility of other state and local agencies (e.g., cases of child abuse and neglect), or because the potentially necessary details that would demonstrate high rates of need are not widely available in the UCR summary reports (such as violence against women or elder abuse).
While the state-level SACs described in Section 3.2.1 can play an important role in marshaling available data to analyze the impacts of crime-related policies, the past two decades have seen larger steps toward evidence-based policy-making in criminal justice. The nonpartisan Washington State Institute for Public Policy (WSIPP) is an early example, created at Evergreen State College in 1982 by the state legislature. State policy makers mandate that WSIPP update and maintain an inventory of evidence-based practices in a variety of policy areas, including use of a benefit-cost simulation model to estimate potential returns on investment of specified policy approaches. In the adult and juvenile criminal justice arenas, much of its cost-benefit analysis has concentrated on correctional and judicial processing levers (e.g., drug courts and rehabilitation/treatment diversion programs for offenders with addiction or mental health problems). In 2011, WSIPP took on a project to begin applying the same cost-benefit analyses to policing interventions (Aos and Drake, 2013). More recently, the Results First Initiative jointly funded by the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation has built partnerships with, at most recent count, just over 20 additional states to apply similar cost-benefit analysis in various policy arenas.6
Even for work focused on corrections and judicial processing issues, high-quality crime statistics are an important input for understanding inflows and trends in the justice system. And certainly, to the extent that these entities become more invested in applying evidence-based methods in crime prevention and policing, demand for complete and consistent crime data—capable of comparison across both time and location—will only escalate.
Use of crime data by researchers in both the public sector and academia is extensive and diverse. Because this research covers a very large range of data uses and approaches, the discussion below necessarily provides a very brief overview of its primary features with respect to currently available crime data and gaps in existing data. In addition, public-sector researchers (such as those in SACs and other research organizations) and academic researchers often work in collaboration with other users of crime data such as law enforcement agencies, local, state, and federal agencies, businesses, and other groups, so there is considerable overlap between their uses of crime data and uses by others.
Academic and public-sector research consists of both descriptive and multivariate analysis of crime and victimization problems and their outcomes. Crime data are used at numerous levels of analysis to describe the extent to which crime varies over time, across places (such as countries, states, cities, neighborhoods, and other areas), across organizations (such as schools, businesses, and sectors of the economy), between individuals and groups, and how individuals’ experiences with crime and victimization vary and change over the life-course. The type of data used for the descriptive analyses of these variations necessarily depends on the research question and the availability of crime data at the various levels of analysis. For example, studies of national-level crime trends for major categories of crime must use UCR or NCVS data as they are the nation’s two main indicators of crime, providing different types of information as well as distinct trends during some time periods. Studies of subnational variations in violent and property crime rates, however, have relied on the UCR because the current NCVS sample was not designed to produce reliable subnational estimates of crime.7 The demand for additional subnational information by researchers and others recently prompted BJS to redesign the sample in ways that will allow reliable multi-year state-level estimates of victimization for approximately half of the largest states in the United States.

7Currently, the publicly available NCVS data do allow for general comparisons between urban, suburban, and rural areas, and some ability to estimate victimization rates for the 40 largest (core county) MSAs.
Beyond describing trends in the major categories of violence and property crime, researchers often examine these data with additional information from other sources to assess the association of crime rates with social, demographic, and economic factors; criminal justice resources and practices; and changes in the law. Some researchers have also attempted to forecast future rates of crime, though this is an area fraught with significant challenges (see National Research Council, 2009b, for an overview of the literature on crime trends).
While studies of UCR (and NCVS) crime trends provide basic and essential information about levels and changes in violent and property crime over time, researchers noted a wide range of crimes that are not captured by these measurement systems. It is very difficult to determine, for example, whether crimes against businesses and other organizations, the environment, or government agencies have increased or decreased over time, and trends for some types of crimes against persons (e.g., human trafficking, fraud) are unknown as well. There are numerous reasons why such information is difficult to obtain, but the lack of this basic information means that current understandings of crime trends are incomplete and dominated by analyses of “street crimes,” for which data are more easily obtained because reports are initiated by victims and local police. Other types of crime (such as fraud) can have different detection rates and mechanisms, and data for these types of incidents may only become available after investigations are completed. When this is the case, the crime data depend on the level of investigation, and the incidents are only revealed when prosecutors proceed with charges of illegal activity. Without additional information about investigation resources and processes, charge count data provide information that may be misleading with respect to both the levels and trends of such crimes.
Another major component of public-sector and academic research combines data and statistical models to infer how different factors and policies affect crime rates, and how crime rates may, in turn, affect other important socioeconomic outcomes (such as neighborhood change and economic development). The unit of analysis for these types of studies also varies and includes highly aggregated rates for places such as states, cities, counties, and neighborhoods, but may also be based on lower levels of aggregation, or on persons when researchers seek to understand how different treatment policies affect individuals’ risk for future criminal involvement. Some examples of these aggregate rate studies include research on the effects of the death penalty on homicide rates (e.g., National Research Council, 2012), of gun legislation on county or state violence rates (National Research Council, 2005), and of policing strategies on neighborhood, block group, or street segment rates of crime (e.g., National Research Council, 2004b; Weisburd et al., 2012). In each of these types of studies, geographic information about the location of the incident is important, and the more targeted the intervention, the more precise the geographic data on incidents must be. Studies of program effects on individuals’ offending typically follow persons over time and use either arrest or other criminal justice system data as an indicator of criminal involvement. Alternatively, because such data only include information on detected criminal activity, some researchers track persons over time and administer self-report surveys to obtain information about offending. With either approach, the researcher must be able to link each person’s crime data with previous information about the individual and his or her participation in the program under evaluation. The more detailed and reliable the crime information, the more useful the results will be for policy evaluation purposes.
There are many policy advocates and issue constituencies that use crime and victimization data to advance their claims about the nature and extent of the problem they want to see addressed. Some of these groups may advocate for new data collections (as in the case of the previously discussed efforts to obtain hate crime statistics), while others may advocate for changes in existing data collections to better capture the problem of concern. A recent example of the latter can be found in the effort to redefine “rape” incidents in the UCR program. Advocates for this change argued that the long-standing definition used by the FBI was overly restrictive and did not capture the full range of sexual assaults, as it defined rape as “the carnal knowledge of a female forcibly and against her will.” Many police agencies interpreted this to exclude sexual offenses that were criminal in their own jurisdictions, such as those involving anal or oral penetration or penetration with objects. In addition, the definition excluded rapes committed against males. The new UCR definition of rape became effective on January 1, 2013, and states that rape is “penetration, no matter how slight, of the vagina or anus with any body part or object, or oral penetration by a sex organ of another person, without the consent of the victim.” Assessments of the difference in 2013 NIBRS counts of rape between the legacy and the revised definitions suggest that this change increased the number of incidents in that year by roughly 42 percent.8
Advocacy groups also were recently successful in their efforts to change the way animal abuse crimes are counted and presented in national FBI crime statistics. In particular, the Animal Welfare Institute and the National Sheriffs’ Association separately proposed the addition, and were later joined in the effort by the Association of Prosecuting Attorneys and the Animal Legal Defense Fund.9
Beginning in January 2016, these crimes will move from the NIBRS “group B” category of “other crimes” and be counted as a new “group A” crime of animal abuse. Under the new rules, animal abuse is defined to include incidents of simple or gross neglect, intentional abuse and torture, organized abuse, and animal sexual abuse (Criminal Justice Information Services Division, 2015a:9). For group A offenses in NIBRS, police agencies are asked to submit incident data, while for group B offenses, only arrest data are submitted. Therefore, this change will produce data that will allow for the monitoring of trends in animal abuse incidents that come to the attention of police.
Advocacy groups also request that other national data sources, such as the NCVS, be modified to obtain data on their issue of concern, particularly when victims of certain crimes are believed unlikely to report the incident to the police. However, because the NCVS is a self-report survey rather than a record-keeping mechanism operated by police departments, changes to the survey are often not easily accommodated, as each request requires unique considerations. For example, if a new victimization rate is desired for a subgroup that makes up a relatively small share of the population, the NCVS sample design necessarily limits the precision of any such rate and may make a reliable estimate infeasible. In addition, respondents may be unwilling to answer the questions needed to identify the subgroup, as would likely be the case in attempting to learn whether undocumented immigrants experience higher rates of crime than citizens do. For these reasons, the issues that must be considered in obtaining new crime and victimization data via the NCVS differ from those that arise when changes are proposed for the UCR.
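The precision constraint described above can be illustrated with a simple calculation. All figures here are hypothetical, and the simple-random-sampling formula ignores the complex, weighted design the NCVS actually uses; the point is only the order of magnitude.

```python
import math

def se_of_rate(p, n):
    """Simple-random-sampling standard error of an estimated rate p
    based on n respondents (illustrative; ignores survey design effects)."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical true victimization rate of 2 percent:
p = 0.02

# With 90,000 sampled persons nationally (hypothetical sample size):
national_se = se_of_rate(p, 90_000)

# A subgroup comprising 1 percent of the sample yields only 900 respondents,
# so its standard error is 10 times larger (square root of the 100x smaller n):
subgroup_se = se_of_rate(p, 900)

print(round(national_se, 5), round(subgroup_se, 5))
```

A 2 percent rate estimated with a standard error near half a percentage point cannot be reliably distinguished from the national rate, which is why requests for rates on small subgroups may simply be infeasible within the existing sample.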
The panel also obtained input about uses of crime data from the business sector, and these uses often differ from those of other groups. Businesses may use UCR crime data to learn about the nature and extent of problems in the cities or communities in which they operate or are considering for expansion or relocation opportunities. Some businesses may use local crime data to target sales of their products, such as burglar alarms or antitheft devices. But a large component of crime data use by businesses is focused on analyzing and responding to their own crime information collection systems to protect the businesses against thefts by customers and employees, as well as other crimes including cyberattacks of various types. Discussions with business representatives suggested that a large but unknown proportion of the crimes against their companies is not reported to police. Instead, the data are used to monitor losses, improve security, and thwart anticipated future
incidents. There appears to be growing coordination of these security efforts by businesses in related sectors and over common concerns.
One example of business “crime” data that contains information distinct from that provided by either the UCR or NCVS is the National Retail Federation’s annual National Retail Security Survey (NRSS). According to the 2015 survey of 100 senior loss-prevention executives, inventory shrinkage in 2014 due to shoplifting, employee and other internal theft, paperwork errors, and other factors amounted to approximately $44 billion. The two largest components of this loss were attributed to shoplifting (38%) and employee/internal theft (35%). However, unlike the UCR, which provides larceny incident counts, these data measure crime in terms of inventory loss amounts, which are more readily estimated than the number of distinct incidents or persons involved in retail inventory loss.
A very large amount of crime information appears daily in news media outlets, most often as descriptions of recent specific incidents, offenders, and victims, but also in the form of national and local crime statistics used to illustrate comparative crime rates and trends. For example, the release of annual statistics from the UCR and NCVS by the U.S. Department of Justice is typically covered in major news outlets, but local media outlets also increasingly turn to their local police departments for regular updates on recorded crimes. Several issues specific to media and public use of crime statistics are noted here, including efforts to improve the understanding of crime and the appropriate use of data to better inform the public about crime and related issues.
Journalists and other media personnel have often been criticized for misusing or misinterpreting crime statistics, and for failing to place recent unique or high-profile incidents in broader temporal context. Without such context, the most recent newsworthy crime is often seen as the indicator of a new trend, and continual coverage of crime in this way can contribute to the false impression that rates are continuously rising. Efforts to improve journalists’ coverage of crime and justice issues are being developed by the Center on Media, Crime and Justice at John Jay College in New York, including dissemination of handbooks on covering crime and justice issues, conferences on media, data, and substantive issues, and awards recognizing the best crime and justice coverage, among other activities.
Media coverage of crime has helped in some instances to spur public criticisms of gaps in data systems, and journalists have been responsible for producing pressure to make changes in crime data records. For instance, in several cities, such as Baltimore, St. Louis, and Philadelphia, journalists uncovered anomalous rates and trends in police records for rapes in these cities,
leading to questions about police “unfounding” of (or simply not recording) rapes that victims brought to their attention; these stories in turn pressured police departments to justify their numbers. Following the shooting death of Michael Brown in Ferguson, Missouri, many journalists from around the world reported that U.S. data on police killings of civilians are highly inadequate for analysis of trends or associated factors.10 The lack of data on these incidents led several media and other organizations to “crowd-source” and develop their own, often competing, counts of such incidents from online reports. Subsequently, federal legislation has been proposed to require states to report these and other police use-of-force incidents.11
The panel also heard about an additional media use of crime statistics that has prompted accusations of unfairness and negative stereotyping of cities and police departments: the reporting of simplistic crime “rankings” by some media outlets. Crime rankings simply list cities in order of their FBI-reported crime rates, typically using an index of crimes or of violent crimes. Cities may score near the top of such rankings because they actually have higher crime rates, but also because they are more likely to record their crimes, have higher crime-reporting rates by victims, or are geographically small relative to their surrounding metropolitan areas, thus capturing more incidents in the numerator of their crime rates without the corresponding population in the denominator. Cities that have not regularly participated in the UCR program (e.g., Chicago) are also excluded from these rankings and therefore benefit from their failure to provide data. Though the FBI website and other organizations have warned against such simplistic rankings, they persist, and cities ranked near the top of these lists report that this misuse of crime data harms their efforts to attract businesses, conventions, and other events, perhaps further perpetuating the very problems that those producing such rankings purport to highlight.
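The numerator/denominator issue can be made concrete with a small calculation; all figures here are hypothetical and are used only to show the mechanics of the per-capita rate.

```python
# Illustrative (hypothetical) figures: how city boundaries affect ranked rates.
# A geographically small core city records incidents involving commuters and
# visitors in its numerator, while only its residents form the denominator.

def rate_per_100k(incidents, population):
    """Crime rate per 100,000 residents, the convention used in UCR tables."""
    return incidents / population * 100_000

# Core city: 5,000 incidents among 200,000 residents.
core_city = rate_per_100k(5_000, 200_000)

# Same metro area including suburbs: 7,000 incidents, 1,000,000 residents.
whole_metro = rate_per_100k(7_000, 1_000_000)

# The core city's rate (2,500 per 100k) dwarfs the metro-wide rate (700 per
# 100k), even though most metro residents live where recorded crime is rare.
print(core_city, whole_metro)
```

The same underlying crime problem thus yields very different rankings depending on where the city line happens to fall, which is one reason simple city rankings mislead.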
Reflecting a similar concern, police departments and city officials report worries about damage to their cities when their data systems change over from the UCR Summary Reporting System (SRS) to the NIBRS system. NIBRS does not use the same hierarchical coding structure and therefore can count crimes differently from the SRS, for example in incidents that involve multiple offenses (such as a robbery and an aggravated assault). Officials noted the need for assistance in explaining to the media and the public why the new NIBRS
crime counts are likely to be higher than those provided by the traditional summary system.

10Of course, the lack of systematic, national collection of data on incidents involving (excessive) use of force also drew publicity when the shortcomings in existing data were publicly noted by FBI director James Comey; see https://www.fbi.gov/news/speeches/hard-truths-law-enforcement-and-race.

11See, e.g., the Police Reporting Information, Data, and Evidence (PRIDE) Act of 2015, introduced in the Senate by Sens. Barbara Boxer (D-CA) and Cory Booker (D-NJ) and in the House by Rep. Joaquin Castro (D-TX) as S.1476 and H.R.3481, respectively.
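The counting difference can be sketched as follows. The offense list and severity ranking below are simplified illustrations, not the full UCR hierarchy rule, which contains additional exceptions.

```python
# Simplified sketch of why NIBRS offense counts exceed SRS counts.
# Under the SRS hierarchy rule, only the most serious offense in a
# multiple-offense incident is counted; NIBRS records each offense.

# Hypothetical severity ranking (lower number = more serious), loosely
# patterned on the UCR hierarchy for illustration.
HIERARCHY = {"homicide": 1, "rape": 2, "robbery": 3,
             "aggravated_assault": 4, "burglary": 5, "larceny": 6}

incidents = [
    ["robbery", "aggravated_assault"],  # one incident, two offenses
    ["burglary"],
    ["larceny"],
]

# SRS-style counting: the single most serious offense per incident.
srs_offenses = [min(inc, key=HIERARCHY.get) for inc in incidents]

# NIBRS-style counting: every offense in every incident.
nibrs_offenses = [off for inc in incidents for off in inc]

print(len(srs_offenses), len(nibrs_offenses))  # NIBRS count is higher
```

Here the same three incidents yield three SRS offenses but four NIBRS offenses, because the aggravated assault accompanying the robbery is no longer suppressed, which is the mechanical reason a jurisdiction's reported counts can rise after conversion.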
Participants in the discussion meetings provided extensive feedback to the panel about how they did or did not use crime data, the perceived gaps in the data for their specific purposes, and the challenges involved in using crime data accurately and in a timely way. More importantly, they spoke to the schism between the current and ideal systems: what they would like to do with crime data, relative to what they actually do (or can do) with current data. There was also discussion about the various mechanisms by which crime data could be reported and released in ways that would best inform the public about crime in the nation, as well as in their communities. Annual national, state, and local reports were deemed useful, but participants also noted that additional information to better understand the contexts in which crime rates differ would be more helpful. It was suggested that national reports would be more complete if they included information about other crimes, such as federal crimes that are not recorded in the UCR system but are equally important to understanding the fuller nature and amount of crime in the nation. Suggestions were also made for more detailed subnational reports on specific types of crime, such as domestic violence and gun violence, rather than relying strictly on the broader crime categories typically used with UCR data. In addition, local reports were believed to be more useful if they could be compiled and released in a more timely way, so that community initiatives to reduce crime can be better monitored.
We are, purposefully, very sparing in designating formal recommendations and conclusions, because the major point of this report is to propose a single new classification for crime, in Chapter 5. That said, we think that our canvassing of crime data user needs supports a general conclusion—however blunt or “obvious” it may be—that merits explicit statement, as it informs the remainder of our work. That basic conclusion, put colloquially, is that there is no “magic bullet” for crime measurement and statistics; the terrain is too broad and the demands too diverse to be satisfied by any single, omnibus data system.
Conclusion 3.1: There is strong demand for comprehensive, yet detailed, information about crime by a broad range of users. No single data collection can completely fulfill the needs of every user and stakeholder, providing data with sufficient detail, timeliness, and quality to address every interest of importance. Any structure devised to measure “crime in the United States” should necessarily be conceptualized as a system of data collection efforts, and informative details about the collection and quality of the distinct
components in this data system should be included to help ensure proper interpretation and use of the data.