
6
Assessing Research Programs

The committee concluded in Chapter 5 that the National Institute of Justice (NIJ) lacks a culture of self-assessment. NIJ conducts very little transparent assessment, either periodically or systematically. This lack of assessment has limited our ability to evaluate not only NIJ's programs but also the influence of those programs. In this chapter, we discuss why assessing the impact of sponsored research is important and critique what NIJ has done to assess the efficacy of its research programs. We also summarize what we learned from our own limited efforts to examine the influence of NIJ-funded research and from our review of the assessment practices of other federal research agencies. All of this is presented to provide NIJ with initial guidance on establishing its own self-assessment practices.

THE IMPORTANCE OF ASSESSMENT

Like many federal research agencies, NIJ supports research to address a broad range of national needs and objectives. This is accomplished primarily by setting research agendas and then providing grants to investigators to pursue those agendas. It is also done by developing and funding the research infrastructure, which includes programs of education, training, and tools for succeeding cohorts of researchers. Federal research agencies serve the role of sustaining lines of research in areas of ongoing national need, and they also have the ability to redirect support when new opportunities for knowledge and societal benefit arise. Historically, the federal government can make the difference between development and stagnation of research fields (National Research Council, 2007).

Because priority-setting decisions can alter the vitality of research fields and because government has limited resources to support scientific activities, assessing progress in order to set priorities appropriately is essential. Since the need to understand progress and to set priorities is ongoing, the process of assessing progress needs to be continuous, consistent, and in accordance with an agency's mission and goals. If they were not mindful before, the passage of the Government Performance and Results Act (GPRA) in 1993 made all federal agencies sensitive to the importance of assessing the results of their activities (Sunley, 1998). Its enactment addressed demands for accountability and demonstrated accomplishments by requiring that all federal agencies, including research agencies, develop multiyear strategic plans and evaluate and report annually on their activities.

CURRENT PRACTICES

As this report illustrates, NIJ does very little to strategically assess its performance and even less to track the influence of its research on scholarship and practice. Very little internal management information that focuses on or assesses effectiveness is gathered by NIJ or its component offices, and what external assessments it does support are irregularly conducted, narrowly focused on a few programs,1 and often done in response to political criticisms. What internal assessments it does conduct, such as regular reviews of progress reports from grantees and ongoing dialogue with grantees and constituent groups, are rarely summarized for its constituents. In other words, NIJ's own processes for assessing the results of its activities are not transparent and are viewed by the committee as inconsistent or nonexistent.

PART Evaluation

The current NIJ approach to assessing the efficacy of its programs is based largely on the Program Assessment Rating Tool (PART) of the Office of Management and Budget (OMB).2 The PART is designed to help agencies identify a program's strengths and weaknesses in an effort to inform funding and management decisions aimed at making the program more effective. NIJ first completed the PART in 2005 and has updated it annually.3 NIJ acknowledges the limited number of specific long-term performance measures that focus on outcomes and reflect its mission. It attributes this to the nature of its program, consisting of basic and applied research, which have uncertain, long-term outcomes and impose measurement difficulties.4 It also acknowledges that its program assessments do not satisfy PART definitions of scope, quality, independence, or regularity (U.S. Office of Management and Budget, 2010).

1

NIJ's assessment efforts have been focused on its outreach activities and forensic capacity-building programs and not on its research portfolios.

2

For more information, see http://financingstimulus.net/part.html [accessed March 24, 2010].

In 2005, using PART, OMB provided an assessment of NIJ and of NIJ’s selection of performance measures in the areas of purpose/design, strategic planning, program management, and results/accountability. The detailed assessment is available online. OMB gave NIJ a rating of “Adequate” for its overall evaluation. Programs that are “performing” have ratings of Effective, Moderately Effective, or Adequate. Programs categorized as “not performing” have ratings of Ineffective or Results Not Demonstrated. The Adequate rating describes a program that needs to set more ambitious goals, achieve better results, improve accountability, or strengthen its management practices (see http://www.whitehouse.gov/omb/expectmore/rating.html [accessed March 17, 2010]).

Although the intent of PART is to demonstrate accountability for NIJ’s work and to provide information about the results of its work for later planning, the specific performance measures included do not give much meaningful information about NIJ’s operations or its influence on scholarship and practice because of the limitations described below.

Performance Measures

The PART assessment for NIJ includes the following program performance measures: average days until closed status for delinquent grants; number of new final grant reports, research documents, and grantee research documents published; total number of electronic and hard copy documents/publications/other requested; number of fielded technologies; and number of citations of NIJ products in peer-reviewed journals. See Table 6-1 for the recorded measures for the period 2003-2008. Note that these performance measures are all outputs and not outcomes, and the linkage between these and NIJ’s reported strategic goals is not apparent.

3

The results of NIJ’s assessment can be found at http://www.whitehouse.gov/omb/expectmore/summary/10003804.2005.html [accessed June 22, 2009]. The assessment details can be found at http://www.whitehouse.gov/omb/expectmore/detail/10003804.2005.html [accessed June 22, 2009].

4

The difficulties associated with applying PART to R&D agencies have received much attention (National Research Council, 2008b; Radin, 2008; Redburn, Shea, and Buss, 2008).


TABLE 6-1 Program Performance Measures as Reported in the PART by NIJ

Measure: Number of Fielded Technologies

  Year   Target         Actual
  2001   Baseline            5
  2002   N/A                 6
  2003   N/A                 5
  2004   N/A                 8
  2005   N/A                15
  2006   Baseline           26
  2007   25                 21
  2008   26                 17
  2009   28
  2010   32
  2011   35
  2012   37
  2013   39

Measure: Number of Citations of NIJ Products in Peer-Reviewed Journals

  Year   Target         Actual
  2003   Baseline           54
  2004   55                 53
  2005   60                 65
  2006   65                176
  2007   70                 96
  2008   70                259
  2009   70
  2010   Discontinued

Measure: Total Number of NIJ Electronic and Hard Copy Documents/Publications/Other Requested

  Year   Target         Actual
  2003   Baseline    5,416,579
  2004   5,600,000   5,616,648
  2005   5,850,000   7,327,961
  2006   6,080,000   3,568,919
  2007   6,310,000   3,070,622
  2008   7,500,000   6,953,762
  2009   4,000,000
  2010   4,500,000
  2011   4,500,000
  2012   4,750,000
  2013   4,750,000

Measure: Average Days Until Closed Status for Delinquent NIJ Grants

  Year   Target         Actual
  2003   Baseline          511
  2004   400               275
  2005   200                81
  2006   90                 80
  2007   90                 80
  2008   90                 88
  2009   90
  2010   80
  2011   80
  2012   70
  2013   60

Measure: Number of New NIJ Final Grant Reports, Research Documents, and Grantee Research Documents Published

  Year   Target         Actual
  2003   Baseline          328
  2004   182               226
  2005   192               325
  2006   258               257
  2007   258               178
  2008   259               171
  2009   300
  2010   300
  2011   300
  2012   300
  2013   300

NOTE: N/A = not applicable. Blank cells in the Actual column indicate years for which no value had been reported.

SOURCE: Created from information available: http://www.whitehouse.gov/omb/expectmore/detail/10003804.2005.html [accessed December 30, 2009].

Average Days Until Closed Status for Delinquent NIJ Grants. The first measure, days until closed for delinquent grants, is an indicator of management. Although it appears straightforward, it is unclear when a grant is considered delinquent. As long as NIJ is consistent in its definition of when a grant is delinquent across the years, this may be a useful performance measure. However, it would be more appropriate to define a management performance measure in terms that are more easily understood by a broad audience. For example, “number of months for release of data as measured by time from end of data collection to data release on Internet” or “percentage of grant award or funding decisions made available to applicants within 9 months of application receipt or deadline date” (see National Research Council, 2008b).


Number of New Research Publications. The number of new research publications seems straightforward. However, there is no information on what is meant by "NIJ and grantee research documents" or on how these figures are compiled (e.g., Are grantees reporting how many related research documents they produce? When and how often is this reported to NIJ?). Note also that NIJ-based production of research publications is declining, a trend discussed in Chapters 3 and 5.


Number of Publications Requested. The number of requested publications also seems straightforward. But again, there is little information for the reader as to how these data are compiled. Are these requests made to NIJ directly, through the National Criminal Justice Reference Service (NCJRS), or both? Do the numbers indicate the number of requests or the number of publications requested? There is a sizable drop in the totals for 2006 and 2007, raising the question of whether this reflects a decline in requests or a change in the way the figures were compiled for those years.


Number of Fielded Technologies. For the measure of fielded technologies, the committee obtained some information from NIJ on the nature of these data and how they were compiled. With this added information, the committee concludes that NIJ’s efforts to assemble this performance measure are inconsistent and ill defined.

The committee received a listing of the 26 fielded technologies for 2006 and the 21 fielded technologies for 2007. These listings contain brief descriptions of the technologies transferred, the award numbers, the performers of the awards, and the Office of Science and Technology (OST) divisions that managed each award. Upon examination, we noted that the technologies reported to be fielded in 2006 were products of grants originating in the period 1995-2005 and those in 2007 in the period 1998-2006, thus raising the question as to when a technology is considered “fielded.” Is it when the grant is closed or when the “transfer” to the field takes place? Furthermore, we determined from the lists that the fielded technologies cover a broad range of technology activities, from an actual developed and marketed product to a training CD. We were unable to ascertain what is meant by a “fielded technology.”

According to the fiscal year (FY) 2009 Performance Budget Report for the Office of Justice Programs (U.S. Department of Justice, 2008), the performance data item "number of fielded technologies" represents the NIJ-developed technologies that are transferred to the field for use by criminal justice practitioners. The original measure may have been limited to counting the number of technology prototypes produced for counterterrorism,5 interoperable communications, computer crimes, and protective equipment; however, this technology transfer measure has since been broadened and now includes publications, demonstrations, examples of commercialized technologies resulting from research, new DNA markers,6 and assistance for first adopters (U.S. Department of Justice, 2008).

5

It is important to note that the FY 2006 target was reset as the baseline because of the phase-out of counterterrorism funds from NIJ to the Department of Homeland Security (DHS).

When a performance measure covers such a broad array of outputs, it becomes difficult to demonstrate a program's efficiency and impact. An assessment of NIJ's role in preparing technologies for use by the criminal justice field and gauging their impact would require an in-depth examination of the technical aspects of these technologies as well as deliberations among researchers and criminal justice practitioners on their relevance. This kind of assessment was beyond the scope and resources of our study. From the committee's very limited review, it is fair to conclude that NIJ has initiated some work that is quite impressive7 and befitting its role of identifying and supporting research with specific forensic and law enforcement applications. However, NIJ has also supported some work that appears to be less cutting-edge or to have impact limited to a specific locale rather than the broader field.8

Although NIJ seems to be engaging in relevant work in transferring technologies to the field, its inclusion of so many different kinds of activities as fielded technologies is misleading and not a useful way of measuring program outcomes. For the measure to be useful, it should be clearly defined to include a timeframe and a categorization of what is being measured. Many readers will take "fielded technologies" to mean NIJ-sponsored projects that resulted in commercialized products currently on the market; if that is not the case, it should be clearly noted. As noted throughout this report, OST supports a wide range of research and other activities and often combines them without distinguishing among its products and efforts, making assessment of its research efforts difficult. This may be the result of a lack of the technology expertise necessary to appropriately filter and categorize technology-related outcomes and products.

6

A DNA marker is a gene or DNA sequence having a known location on a chromosome and associated with a particular gene or trait.

7

For example, Brijot Imaging Systems commercialized its passive millimeter wave weapons detection camera, which is based on technology developed by Lockheed Martin with NIJ funding through an interagency agreement with the U.S. Air Force Research Laboratory. In addition, NIJ support of mini-STR typing systems has fostered a unique forensic application. The mini-STR typing system is designed to provide a DNA profile on degraded DNA (often collected at crime scenes). The system is well designed and did well in beta testing, and its commercialization by Applied Biosystems Inc. qualifies as an important success.

8

For example, a grant was awarded and counted as a fielded technology for implementing AmberView in West Virginia school systems. AmberView was a program that assisted state law enforcement by quickly issuing a digital picture of a missing or abducted child; it has since been discontinued (Kasey, 2009).


Citations of NIJ Products. The performance measure "number of citations of NIJ products" is also problematic because it is unclear what is being measured. According to NIJ, the citation figures were obtained from different sources over the period 2003-2008. For example, in 2003-2005, the Social Science Citation Index (SSCI) was used; in 2006, Sociological Abstracts and Ebsco Academic Search Premier were used; in 2007, the SSCI was used; and in 2008, Sociological Abstracts, SocIndex, and the Sage Publications Database were used. According to NIJ, the citation information was derived from searches using variations of NIJ's name, and the results reflect the appearance of NIJ's name in reference lists of articles, books, etc. As a result, this citation count captures only the citations of publications authored or published by NIJ and significantly underestimates the influence of NIJ-supported research on scholarship, since searches were never conducted on the works of the numerous principal investigators whom NIJ supports.

Assessment Power of PART

It is impossible to draw any concrete or specific findings regarding NIJ's influence on research and practice from the PART assessment. It is unclear what the specific program performance numbers generated by NIJ indicate, since there is no information as to how the numbers were generated or whether they represent consistent figures from year to year. Criticisms have been leveled at PART, but the Obama administration is unlikely to abandon it, focusing instead on improving it.9 NIJ should give greater attention to the identification of appropriate performance measures and the data required to track them. The committee does not consider NIJ's current effort for the PART assessment a measure of the possible use or influence of its research, and it falls short of other approaches and efforts for generating some estimate of influence.

Other Assessments

Assessments of NIJ are acknowledged on its website (see Figure 6-1). However, the only identified ongoing effort to assess program performance is the reference to the PART. The only external assessment conducted at the request of NIJ that is identified to the public is this study by the National Research Council (NRC)—an assessment conducted 30 years after the first one by NRC in 1977. The other "assessments" have been audits conducted by the Office of the Inspector General, which evaluated NIJ's operations and accounting practices, and programmatic reviews by the U.S. Government Accountability Office, which assessed the methodological rigor of evaluations and the quality of program monitoring for specific programs during specific periods. These assessments do not represent efforts to routinely and consistently collect data in order to assess the quality and impact of research programs.

9

Statement of Jeffrey D. Zients, deputy director for management, OMB, before the Budget Committee, U.S. Senate, October 29, 2009.

FIGURE 6-1 The limited assessment of NIJ.

SOURCE: Available: http://www.ojp.usdoj.gov/nij/about/assessments.htm [accessed February 19, 2010].

THE COMMITTEE’S ANALYSES

Since NIJ has not conducted assessments of its research programs, we chose to make initial attempts to assess the impact of NIJ-funded research. Our data collection efforts began as a way to capture some otherwise unavailable information about NIJ's influence on scholarship and practice. Although resource constraints and the lack of information precluded more comprehensive analyses, these endeavors provide some information on NIJ's influence and, more important, illustrate what NIJ could carry out and what information could be obtained to gauge its influence on policy and practice. This section offers two exemplars of what can be learned from a limited effort to capture measures of impact of NIJ programs. It includes (1) citation analyses and (2) a survey of researchers and practitioners.

Citation Analyses

Citation analysis, although limited, offers an easily adoptable approach to gauging the influence of NIJ-sponsored research. Although it cannot tell much about effects on practice, citation analysis is a standard method for examining the impact of research literature. Because there is no clear consensus on how citation analyses should be carried out or on their precise meaning and interpretation, the committee provides examples that may serve as an illustrative approach (among many) for NIJ to gauge its influence on scholarship.

Citation analyses count the number of times an individual piece of literature has been cited by other literature. The premise is that literature with a higher volume of citations is more influential than literature with a lower volume of citations: an article with 20 citations has been drawn on more than an article with 1 citation and can thus be considered somewhat more influential. Of course, although citations to research can be both positive and critical, they provide one means of assessing the influence of that work. Citation analyses also help identify the origin of certain ideas or methods. For example, a single study may be cited by 2,000 publications that subsequently carry out theoretical and empirical research largely attributable to the original, influential study.

The committee carried out two different citation analyses. First, the committee selected prominent journals in the criminology and criminal justice area and journals in the forensic sciences area: Criminology, Justice Quarterly, Criminology and Public Policy, Forensic Science International, and Journal of Forensic Sciences. The committee then manually reviewed issues published from 1995 through 2008 and recorded the number of articles that mentioned NIJ-funded data or support from NIJ in some capacity (these are usually, though not always, listed in an acknowledgment section). The committee then counted citations to that subset of articles as one barometer of impact, as sketched below.
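To make the bookkeeping concrete, the sketch below tallies hand-coded article records into the columns of Table 6-2. It is a minimal illustration under stated assumptions, not the committee's actual instrument: the record layout and field names are hypothetical, and the citation counts are assumed to come from an index such as the SSCI.

```python
from collections import defaultdict

# Hypothetical hand-coded records, one per article reviewed (1995-2008).
# 'mentions_nij_funding' would come from the manual review of
# acknowledgments; 'citations' from a citation index such as the SSCI.
articles = [
    {"journal": "Criminology", "mentions_nij_funding": True, "citations": 52},
    {"journal": "Justice Quarterly", "mentions_nij_funding": False, "citations": 3},
    # ... one record per article in the five journals
]

totals = defaultdict(lambda: {"articles": 0, "nij_funded": 0, "nij_citations": 0})
for article in articles:
    row = totals[article["journal"]]
    row["articles"] += 1
    if article["mentions_nij_funding"]:
        row["nij_funded"] += 1                        # Table 6-2, column 3
        row["nij_citations"] += article["citations"]  # Table 6-2, column 4

for journal, row in sorted(totals.items()):
    print(f"{journal}: {row['articles']} articles, "
          f"{row['nij_funded']} mentioning NIJ funding, "
          f"{row['nij_citations']} citations")
```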

Table 6-2 presents the results of this analysis. In the criminology/criminal justice journals, 126 of the 1,051 articles published (12 percent) referenced funding support from NIJ. In the forensic sciences journals, 75 of the 6,119 articles published (1.2 percent) referenced funding support from NIJ. It is worth noting that none of the 7,170 articles across the five journals reported using data from an earlier NIJ-funded study.


TABLE 6-2 Publication/Citation Analysis of Articles Linked to NIJ Support in Five Journals

  Journal                          Total      Mentioning Use of    Mentioning     Total Citations for
                                   Articles   NIJ-Supported Data   NIJ Funding    Articles Mentioning NIJ
  Criminology                          439           0                  53                 854
  Justice Quarterly                    375           0                  52                 276
  Criminology & Public Policy          237           0                  21                   2
  Forensic Science International     3,135           0                  11                  15
  Journal of Forensic Sciences       2,984           0                  64                   0
  TOTAL                              7,170           0                 201               1,147

NOTE: With respect to the criminology/criminal justice journals, the analysis included articles, reaction essays, and comment-oriented papers. A decision was made not to include editorials or book reviews in the searches. For the forensic science journals, the following types of articles were included in the search: articles, short reports, case studies, experiments, comment papers, and announcements of population data. The analysis excluded editorials, errata, corrections, book reviews, letters to the editor, replies to letters to the editor, and technical notes.

This may be less a reflection of NIJ's impact and more a reflection of the field's limited reference to the agency originally funding the research that generated the data for additional or secondary data analysis. A citation analysis was performed for those articles that referenced funding support from NIJ. As the table shows, a total of 1,147 citations were identified for the 201 articles referencing support from NIJ. Articles appearing in the criminology/criminal justice journals were cited more often than those appearing in the two forensic science journals. Most of the highly cited articles appeared in Criminology. The most cited articles are identified in Box 6-1.

At first glance, this analysis indicates low visibility in these highly respected and established journals. The journals we selected undoubtedly present a limited sample, because some of NIJ's funded research may be published in other, more specialized outlets in the policing, courts, corrections, and forensic science areas or in other criminological/criminal justice/forensic science journals.


BOX 6-1

Highly Cited Articles

  • Mastrofski, S.D., Worden, R.E., and Snipes, J.B. (1995). Law enforcement in a time of community policing. Criminology, 33(4), 539-563 (52 citations).

  • Morenoff, J.D., Sampson, R.J., and Raudenbush, S.W. (2001). Neighborhood inequality, collective efficacy, and the spatial dynamics of urban violence. Criminology, 39(3), 517-559 (159 citations).

  • Reisig, M.D., and Parks, R.B. (2000). Experience, quality of life, and neighborhood context: A hierarchical analysis of satisfaction with police. Justice Quarterly, 17(3), 607-630 (58 citations).

Still, since NIJ is considered a leader in these areas, one would expect research projects and products emerging from a criminal justice research agency funding stream to have more visibility in these journals. However, without any baseline information for comparison or the resources and information available to carry out a similar search for an earlier time period, no definitive conclusions can be drawn from this analysis. Since there currently exists no other independent assessment of NIJ in this regard, this approach can be considered as providing a baseline for such an undertaking.

For our second analysis, the committee compiled a list of the principal investigators for NIJ's research grants awarded in 1995-2007 and used the National Archive of Criminal Justice Data (NACJD)/Inter-university Consortium for Political and Social Research (ICPSR) online bibliography10 to identify publications relevant to the respective grants with the associated principal investigator (PI) as an author. For the purpose of this analysis, the committee focused on grants that had the potential to result in peer-reviewed literature—that is, the research grants. The committee was able to generate a sample of 2,238 research grants with PIs identified for the period 1995-2007.

10

ICPSR is the host to NACJD, as discussed in Chapter 5. ICPSR maintains an online Bibliography of Data-related Literature, which is a searchable database that contains over 48,000 citations of known published and unpublished works resulting from analyses of data held in the ICPSR archive. The bibliography was developed with support from the National Science Foundation.

Because of missing information and limitations in matching grant titles with data titles, we are unable to report verifiable statistics on the publications and subsequent citation of NIJ-funded research. Of the 2,238 grants in the sample, only 130 of the associated principal investigators were available through the NACJD/ICPSR bibliography; that is, only 130 PIs had uploaded their data to NACJD, and, as one would expect from the discussion of the data archive in Chapter 5, most, if not all, of these emerged from Office of Research and Evaluation awards and not from the Office of Science and Technology. From there, we were able to accurately link only a sample of 46 grants to data and subsequent publications in the archive. We then identified a total of 113 related literature entries (i.e., journal articles published by the respective PIs, though not necessarily as first author, in relation to their NIJ-funded research)11 associated with those 46 grants. A citation count was then performed on these 113 related literature entries using the SSCI through Web of Science.12 The combined citation count was 571. Although we cannot draw any conclusions from this citation analysis because of known limitations in the sample information and the absence of baseline information for comparison, we found the publicly available NACJD/ICPSR online bibliography to be a very useful tool.
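The linking step just described can be sketched as follows. This is a simplified, hypothetical version: it joins grant records to bibliography entries on an exact PI-name match, whereas the committee's actual matching of grant titles to data titles was partly manual and could not be completed for most grants; all identifiers and fields here are made up for illustration.

```python
# Hypothetical records standing in for NIJ grant files and entries from
# the NACJD/ICPSR Bibliography of Data-related Literature.
grants = [
    {"grant_id": "95-IJ-CX-0001", "pi": "A. Researcher"},
    {"grant_id": "99-IJ-CX-0042", "pi": "B. Scholar"},
]
bibliography = [
    {"author": "A. Researcher", "title": "Findings from an NIJ-funded study", "citations": 17},
    {"author": "A. Researcher", "title": "A follow-up analysis", "citations": 9},
]

# Join grants to related literature by PI authorship (exact-match assumption).
linked = [(g["grant_id"], entry)
          for g in grants
          for entry in bibliography
          if entry["author"] == g["pi"]]

linked_grant_ids = {grant_id for grant_id, _ in linked}
combined_citations = sum(entry["citations"] for _, entry in linked)
print(f"{len(linked_grant_ids)} grants linked to {len(linked)} publications; "
      f"combined citation count: {combined_citations}")
```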

This searchable database has extensive information about each data set held in the archive, including a list of resulting literature. It also has an online citation reporting feature. This feature permits the generation of a report of related literature that relied on any NIJ-supported data held in the NACJD/ICPSR archive. According to this reporting, 989 unique NIJ data collections were cited a total of 4,373 times in 3,621 publications13 published between January 1, 2000, and August 18, 2009. This publicly available tool not only generates figures illustrative of scholarly use but also provides a bibliography of the literature citing NIJ-supported data and links to more information on those data sets. The tool can easily be used to compare data collection citations for different time periods after January 1, 2000. To our knowledge, this feature has not been used by NIJ to track the impact of its research. To the extent that more data can be archived, this will expand the pool of data available for others to use and will also improve NIJ's ability to track the use and influence of its awards.

11

On occasion, there were other individuals listed as having published work based on the data listed with NACJD/ICPSR, but this portion of our analyses focused solely on the work of the NIJ PIs.

12

The committee recognizes that the SSCI count via the Web of Science is entirely contingent on journal collection, abstraction, and reporting, and this varies across journal outlets. Thus, this count should be viewed as a lower-bound, rough approximation.

13

These publications include 320 books or book sections, 110 conference proceedings, 1,497 journal articles, 7 magazine and newspaper articles, 1,542 reports, 119 theses, and 16 documents.


Limitations of Citation Analysis

Citations mean something in assessing contributions to knowledge (and for gauging influence on academics), but they are far more problematic with regard to assessing impact on policy. They are also problematic for assessing adoption into practice; articles that describe a new technique or protocol may be rarely cited but nevertheless may find their way into many procedures manuals.

There are many more routes from research to policy and practice than publication in academic journals. As Weiss (1979) has observed, rational use of research evidence in a problem-solving mode is the least common type of use. More often, research evidence is used politically, to bolster a preexisting position. That is, policy makers and practitioners who are already partial to a particular course of action are apt to use new research findings to support their position. Much of the most common use of research comes about through what Weiss (1978) refers to as "enlightenment," in which policy makers and practitioners do not necessarily know the published research itself but hear the nature of its findings, such as "Scared Straight doesn't work." It becomes impossible to trace the path of transmission of the information, because it is largely informal, through conversations, trade publications, magazines, the office grapevine, etc.

Thus, citation counts are likely to underestimate the influence of research, so it is reasonable to conclude that any citation information presented here represents a lower bound of influence. The committee does not conclude that a potentially low documentation of NIJ-funded data or a low citation count indicates that NIJ was not influencing research or criminal justice policy. The committee had neither the information nor the resources necessary to compare data with other time periods, projected goals, or other agencies, so it is unable to draw conclusions.

When conducted accurately and consistently, citation analysis can be one barometer (among several) to track progress over time. Citation analysis can be worthwhile for evaluating publication records of individual scientists or research products, as long as some of the limitations are sufficiently considered. However, citation analysis should always be considered as one of several evaluation criteria (Schoonbaert and Roelants, 1996).

Survey of Criminal Justice Researchers and Practitioners

Another possible source of information for the purpose of assessment is periodic surveys of constituent communities aimed at identifying the consideration, use, and implementation of research findings. In November 2008, the committee conducted a survey to learn about the views of criminal justice researchers and practitioners. The committee wanted to know how familiar respondents were with NIJ's activities and what they thought about the quality and impact of these activities. The committee was also interested in overall perceptions of NIJ as an independent science agency. A summary of what was learned is presented below. More details on the survey can be found in Appendix B.

Survey Results

The survey sample, a quota sample, included 347 researchers and 162 practitioners (21 percent of those originally contacted by e-mail). The target researcher sample consisted of members of the American Society of Criminology. The target practitioner sample consisted of leaders and key staff in well-known organizations with an interest in criminal justice issues.14 Despite the limitations of the survey findings (presented below), the results gave the committee insight into areas of relative strength and weakness in NIJ performance as well as into differences in the perceptions of researchers and practitioners, and they offered an otherwise unavailable window on the views of the field.

The results paint a picture of respondents who feel strongly that there is a current need for a federal research agency dedicated to crime and justice issues. Researchers were more likely than practitioners to express a critical need for such an institution. NIJ can fill this need; however, the vast majority of respondents recognize that many of the agency’s operations must be improved.

The respondents were very familiar with NIJ. They report a high use of NIJ’s products, such as publications and websites, and events, such as conferences and workshops. The respondents gave high marks to the usefulness of NIJ resources. NIJ data resources (such as NCJRS and NACJD) were more widely used by the researchers. The practitioners were more likely to have attended NIJ workshops and conferences. About a third considered NIJ a primary funding source for their work or had served on one of NIJ’s peer-review panels or advisory groups. Nearly all of the researchers had used or had cited NIJ-sponsored research in their own work. This high level of familiarity underscores the importance of NIJ to the field and lends credibility to the survey findings.

14

See Chapter 4, footnote 1, for an explanation of how the terms "researcher" and "practitioner" are used in this report. We recognize that because of our available sample pool we were only able to reach a limited population of researchers. In addition, a small percentage of our practitioner sample (14 percent) would have been considered researchers if we had the ability to separate them out. They were affiliated with our target practitioner organizations but work as researchers in government agencies.

Respondents were asked to rate their satisfaction with NIJ's performance in a number of different areas. Satisfaction was rated on a five-point scale from very positive to very negative. Percentages report the share of respondents indicating positive or very positive satisfaction. In interpreting the responses, the committee relied on the 50 years of opinion research experience of the firm conducting the survey and its judgment of percentage scores: "Based on other studies of performance using similar measures, scores in the 90 percent or higher range are considered outstanding. Organizations with scores at this level are usually growing and have a high level of retention [in terms of staff and customers]. Scores in the 70-80 percent range indicate some good points and some areas for improvement. Scores below 70 percent are indicative of more serious problems" (see Appendix B). NIJ's satisfaction scores in many areas were 60 percent or less.
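A short sketch of the scoring convention just described follows, assuming responses coded 1 (very negative) through 5 (very positive); the interpretation bands follow the survey firm's guidance quoted above, with the numeric cutoffs read off that guidance rather than specified in the report.

```python
def percent_positive(responses):
    """Share of respondents answering positive (4) or very positive (5)."""
    return 100.0 * sum(1 for r in responses if r >= 4) / len(responses)

def interpret(score):
    # Bands paraphrased from the survey firm's guidance quoted above.
    if score >= 90:
        return "outstanding"
    if score >= 70:
        return "some good points and some areas for improvement"
    return "indicative of more serious problems"

ratings = [5, 4, 3, 2, 4, 1, 3, 4, 2, 3]  # hypothetical 5-point ratings
score = percent_positive(ratings)
print(f"{score:.0f}% positive: {interpret(score)}")
# -> 40% positive: indicative of more serious problems
```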

The results reveal a number of areas in need of improvement to increase respondent satisfaction, including qualifications of the staff, consultation with the researcher and practitioner communities, and NIJ leadership. In addition, many of the respondents pointed to the inadequacy of NIJ resources (i.e., only 27 percent rated the adequacy of resources positively). Researchers were less satisfied than practitioners in several areas: qualifications of the staff, NIJ leadership, adequacy of resources, and consultation with the researcher community. Practitioners reported more satisfaction than researchers with NIJ's overall performance; however, both ratings are still low (69 versus 50 percent).

Researchers and practitioners were asked about satisfaction with NIJ specifically in the areas of their particular interest. Among the researcher sample, satisfaction with NIJ performance is considered moderate to low in such areas as dissemination, funding, and research agenda setting. However, researchers tended to be more satisfied with the dissemination of findings to the research community than to policy makers and practitioners. Recipients of NIJ grants expressed low levels of satisfaction with the grant process and project monitoring in such areas as ease of applying and quality of feedback. Awardees are most satisfied with staff responsiveness, fairness, and competence. They expressed the least satisfaction with the transparency of the award process. Unsuccessful grant applicants in our sample expressed similarly low satisfaction with the ease of applying, the quality of feedback, the quality of funding decisions, and the transparency of the award process. Those in our sample who never applied to NIJ for funding most often refrained because they thought they were unlikely to be funded.

Among the practitioner sample, satisfaction with NIJ performance is considered moderate in the areas of disseminating relevant research knowledge to practitioners and policy makers, identifying research and technology needs, and maintaining fairness and openness in practices. Practitioners appear to be more satisfied than researchers with NIJ's dissemination as well as its commitment to fairness and openness. Practitioner satisfaction with NIJ performance is considered relatively low in other areas: development of affordable and effective tools and technologies, improvement of forensic laboratories, technical assistance, testing of existing and new technologies, and development of equipment standards.

Respondents were asked about the independence of NIJ. Our sample was split on the issue: a third indicated that NIJ does not have the necessary independence, a third indicated that it does, and another third was not sure. More practitioners than researchers believe it does have the necessary independence. Researchers, asked to judge how political considerations affect NIJ, believe it has been most affected in setting research priorities and selecting proposals for funding and less so in disseminating research findings.

The open-ended comments reflect what respondents believe to be the impact of political considerations, whether external or internal to the agency, on NIJ. Over a third of the 509 respondents offered comments. Areas of concern described in the mostly negative comments include inappropriate political influence on NIJ (6 percent), lack of continued research funding (4 percent), the need for NIJ to operate independently (3 percent), the desire for NIJ to develop an unbiased grant process (3 percent), and an interest in diversifying the research to include topics other than DNA analysis, technology, and terrorism (3 percent).

Limitations of Survey Findings

To accomplish its mission, NIJ needs to interact effectively with two key audiences: (1) criminal justice practitioners and (2) researchers conducting studies related to crime and justice. A web-based survey was chosen as a cost-effective strategy for collecting information on NIJ’s effectiveness from large numbers of these key stakeholders. This survey has some limitations that must be considered in interpreting the findings.

One limitation derives from the population eligible for interview. For this survey, the researchers eligible for the sample were limited to members of the American Society of Criminology, and such membership hardly covers all researchers conducting studies related to crime and justice. Scientists whose primary focus is on the hard sciences, such as forensic sciences or equipment development, may be less likely to join than social scientists. Some researchers choose not to join, or join other criminology professional groups instead. It was even more challenging to define operationally the population of criminal justice practitioners, which encompasses truly vast numbers of individuals engaged in a wide range of jobs. For this survey, eligible practitioners were limited to individuals who held leadership roles in a range of national professional organizations during the prior decade. As leaders of national organizations, eligible respondents may have had more exposure to NIJ's role at a broad level but perhaps less experience with day-to-day effects on practice. Despite these exclusions, the sampled populations constitute audiences for large numbers of NIJ presentations and key target audiences for NIJ.

More serious limitations stem from the low response rate that characterizes this survey, as it does most web-based surveys. The reader is urged to remember that the views expressed are those of respondents who have e-mail addresses, were motivated to respond, and completed the survey in time to be included in the results. Thus it is difficult to know, or even estimate, the extent to which different results would have been obtained from interviews with all individuals in the population. To assist in the proper interpretation of this web-based survey, the results are presented in terms of ranges and compared with results of similar surveys.

BEST PRACTICES OF OTHER AGENCIES

The regular review of research programs is important not only to retrospectively determine an agency's accomplishments but also to inform the strategic planning of its research programs. Other granting agencies employ both internal and external mechanisms to assess their influence on scholarship and practice and use these data to advance their goals (U.S. General Accounting Office, 2003c). In reviewing how other federal research agencies15 address the issue of assessing influence, the committee reached a number of conclusions. First, many research agencies are struggling with defining quality and then constructing a process to assess it (National Research Council, 2007). This is especially true in assessing the impact of research or research outcomes as a measure of quality. Second, scientists outside the agency are heavily relied on, either individually or as peer reviewers, to assess the impact of research. Third, a combination of approaches for collecting information is used to define an agency's influence from its investment in research.

There are two generic strategies for assessing scientific progress and setting research priorities: (1) analytic techniques, such as citation analysis, and (2) deliberative processes, such as those of expert review panels (National Research Council, 1999a, 2000, 2001a, 2005d, 2008b, 2009b). Often it is the information and data from the former that inform the latter, resulting in a report of accomplishments and strategic recommendations.


15

The committee received briefings from agency directors and program division directors of several federal research agencies (see Chapter 1).


BOX 6-2

Examples of Performance Measures

Quantifiable

  • Research grants awarded

  • Percentage of awards peer reviewed

  • Time between awards and final reports

  • Data sets produced by supported PIs

  • Publications and products produced by the agency, staff, and supported PIs

  • Presentations by staff and supported PIs

  • Distributions and citations of such works as a proxy for use of publications and products

  • Tools, technologies, and models produced

  • Conferences and workshops supported by the agency and attendance at such

  • The demographics of participants

Qualitative

  • PI-submitted descriptions of contributions and collaborations within and across disciplines

  • Testimonial letters

  • Surveys of constituents

  • External reviews, which can take the form of workshop summaries, program evaluations, or advisory reports

SOURCES: National Research Council (2008b), National Science Foundation (2009).

Information and data in the form of metrics and performance measures are indicators and statistics that are used to gauge progress with respect to stated goals. The associated metrics and performance measures may be quantitative or qualitative (National Research Council, 2005d). Box 6-2 lists a number of examples of quantifiable and qualitative information that have been collected to help assess the impact of research.
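As a rough illustration of what routine tracking of such measures might look like, the sketch below records a program year's quantitative counts alongside qualitative entries. The fields are drawn loosely from Box 6-2, and the structure is hypothetical rather than any agency's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class ProgramYearMetrics:
    """One year of Box 6-2-style indicators for a research program."""
    year: int
    # Quantifiable measures
    grants_awarded: int = 0
    percent_peer_reviewed: float = 0.0
    data_sets_produced: int = 0
    publications: int = 0
    citations: int = 0
    # Qualitative measures
    pi_contribution_summaries: list[str] = field(default_factory=list)
    external_reviews: list[str] = field(default_factory=list)

# Illustrative entry; the publication and citation counts echo the 2008
# actuals in Table 6-1, while the remaining values are made up.
fy2008 = ProgramYearMetrics(
    year=2008,
    grants_awarded=120,          # hypothetical
    percent_peer_reviewed=95.0,  # hypothetical
    data_sets_produced=30,       # hypothetical
    publications=171,            # Table 6-1, 2008 actual
    citations=259,               # Table 6-1, 2008 actual
    external_reviews=["External program review (hypothetical entry)"],
)
print(fy2008.year, fy2008.publications, fy2008.citations)
```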

Although space constraints preclude a listing of all assessment efforts of other research agencies, the committee does highlight some efforts made by the Division of Behavioral and Social Research of the National Institute on Aging and the National Science Foundation (NSF) that appear promising.16

16

The National Institutes of Health (NIH) also undertakes several different types of internal and external evaluations regarding its research program. Specifically, it has a Performance Assessments Office, in the Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI), which coordinates all NIH program performance activities, including monitoring and assessing NIH-level program performance through several federally mandated reporting mechanisms. These mechanisms include GPRA, OMB PART, and the Performance and Accountability Report (PAR). Although several reporting mechanisms exist, many assess progress and performance through specific GPRA goals (see http://nihperformance.nih.gov/ [accessed July 2, 2009]).


National Institute on Aging

The National Institute on Aging conducts periodic and broad reviews of its divisions to assess their overall performance and the appropriateness of the future research being considered. Its Division of Behavioral and Social Research (BSR) was reviewed in 2008 by a committee of 16 distinguished scientists. In conducting its review, the committee received background material17 to assist it in its deliberations, participated in three 90-minute conference calls prior to the full 2-day review meeting, and held a 45-minute conference call to finalize its report.

The review was guided by the five overarching areas for consideration (National Institute on Aging, National Advisory Council on Aging, Division of Behavioral and Social Research Review Committee, 2009):

  1. What promising areas for future research should be encouraged?

  2. Has BSR been supporting a balanced, high-quality, and innovative portfolio of research? Are there significant gaps? What areas are weaker than they should be, and which, if any, might now be deemphasized?

  3. Is the branch structure appropriate to the science? Is BSR adequately staffed?

  4. How can BSR promote training and development of new scholars in fields that are becoming increasingly interdisciplinary? Is BSR attracting adequate numbers of high-quality individuals to pursue research careers in fields of relevance to BSR, and can their professional development be sustained?

  5. What can be done to ensure appropriate review of high-risk, interdisciplinary research projects and program projects?

In addition, nine subcommittees were formed to consider research areas that are important or burgeoning or that need extra attention because they may be perceived as deficient or as potentially critical for progress.


17

Such background materials include the 2004 review committee report, the BSR response to the 2004 review, the BSR organizational chart, BSR funding trends over time (as proportion of NIA total, by mechanism, by portfolio area, and by branch), a BSR memorandum on structure and staffing, BSR-relevant press releases for the period 2004-2008, background briefing papers by staff on certain research areas, and BSR media mentions for the period 2006-2008.


Each subcommittee was tasked with preparing a short briefing paper highlighting achievements and shortcomings in its respective research area. These subcommittee papers and the materials provided by BSR staff served as the primary basis for deliberations on scientific directions.

The review committee found that BSR has been highly responsive to the recommendations in a previous 2004 review report and has made excellent progress. The review committee felt that BSR has been substantially transformed in just 4 years with a number of notable accomplishments (National Institute on Aging, National Advisory Council on Aging, Division of Behavioral and Social Research Review Committee, 2009). Although there is seldom a clear line between performance and congressional action, it appears possible that as a result of this positive review, the FY 2010 budget request for BSR reflected an increase of $1.9 million from its FY 2009 funding level (Consortium of Social Science Associations, 2009).

National Science Foundation

NSF also uses committees of outside advisers, referred to as Committees of Visitors, to review its programs every three years. Among the activities such a committee may examine are the program's health, its direction, the responsiveness of the funding process to the agency's goals and mission, research results, and future research ideas. A Committee of Visitors carries out its responsibilities by closely examining a random sample of grants and all information related to those grants: the processing routines, the average number of reviews, and the success rate. After the review, the written report is posted on the NSF website.18 This process is noteworthy because of its level of detail and because it examines not only the quality of research portfolios but also the process around creating those portfolios. It assesses staff as well as researchers and is a very transparent process.

In addition, NSF performs three ongoing tasks aimed at yielding information from which to assess the influence of its research program. First, NSF tracks "development of people outcomes," following the research careers (publications, citations, and awards) of the individuals it funds. Second, like the National Institutes of Health, NSF has developed a "top discoveries" feature on its website that highlights "discoveries that began with NSF support."19

18

See http://www.nsf.gov/od/oia/activities/cov/covs.jsp [accessed July 10, 2009].

19

The Discoveries tab on the NSF website lists the top discoveries resulting from NSF investment (see http://www.nsf.gov/discoveries/ [accessed June 25, 2009]). NIJ does identify significant impacts that have emerged from its research, but these accomplishments are scattered throughout its website, products, and publications and are not contained in one easily accessible location.


Third, NSF has a performance accountability tab on its website that provides information on internal and external performance assessments designed to provide "stakeholders and taxpayers with vital information about the return on their investments." These performance assessments are guided by GPRA and NSF's own strategic plan, which includes tracking performance and impact and developing various metrics. NSF has an oversight group, the Advisory Committee for GPRA Performance Assessment, formed in 2002 to provide advice and recommendations to the NSF director regarding the Foundation's performance. It meets annually and assesses NSF's overall performance with respect to its strategic outcome goals.

CONCLUSIONS

The committee concludes that NIJ does not have any mechanism in place for monitoring on a regular basis the impact of the research it funds or the accumulation of useful knowledge for science and for affecting public policy. It has not adopted an assessment approach by qualified staff that integrates quantitative metrics (e.g., number of citations, publications produced and requested, and technologies commercialized) and qualitative reviews (e.g., narrative descriptions of research accomplishments and survey results). We found no evidence of deliberative and transparent assessment practices. We found no record of systems appropriate for tracking accomplishments and performance measures. The performance measures NIJ currently compiles are vague, inconsistent, and largely incomplete. They are not easily understood by a broad audience or sufficiently accurate to be credible, and they have not been durable enough to remain relatively constant over the years. As such, they are not easily linked to NIJ's mission and strategic goals.

We recognize that there are no easy ways to measure the extent to which research has informed what is known about crime and crime prevention. It is even more difficult to determine the impact of research on how researchers conduct future inquiries and on how policy makers and practitioners think about criminal justice issues. Because research rarely exhibits immediate, short-term effects, its impact is usually measured and assessed over the long term. In addition, research does not operate in a vacuum; it is often influential only when timing, political agendas, financial resources, and empirical facts converge. Hence, establishing the influence of research is a difficult and imprecise undertaking (Petersilia, 1987).

NIJ’s task is further complicated by the lack of clarity regarding the kinds of research it supports. As discussed in Chapters 2 and 3, NIJ’s research program has swung back and forth between basic and applied research. Sometimes applied research is characterized as policy-relevant research; however, even basic research can have implications for policy and practice. The committee recognizes that all of these kinds of research fit within NIJ’s research mission, particularly when one considers the specific needs of the criminal justice community that might not be addressed in research funded by other federal research agencies. Even though most scientists “know it when they see it” (National Research Council, 2005d), the definitions of basic, applied, and policy-relevant research are not consistently articulated. NIJ can make a significant contribution by promoting the dialogue necessary to clarify these kinds of research in the context of criminal justice needs. This in turn will help it determine appropriate metrics for measuring its impact. The adoption of thoughtful metrics or performance measures, as part of a routine approach to assessing the progress of its scientific investments, is essential for a stronger, more viable research organization.

Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×

This page intentionally left blank.

Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 187
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 188
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 189
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 190
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 191
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 192
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 193
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 194
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 195
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 196
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 197
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 198
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 199
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 200
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 201
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 202
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 203
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 204
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 205
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 206
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 207
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 208
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 209
Suggested Citation:"6 Assessing Research Programs." National Research Council. 2010. Strengthening the National Institute of Justice. Washington, DC: The National Academies Press. doi: 10.17226/12929.
×
Page 210
Next: 7 Recommendations »