Improving Evaluation of Anticrime Programs (2005)

Suggested Citation:"Executive Summary." National Research Council. 2005. Improving Evaluation of Anticrime Programs. Washington, DC: The National Academies Press. doi: 10.17226/11337.

Executive Summary

Effective guidance of criminal justice policy and practice requires evidence about the effects of policies and practices on the populations and conditions they are intended to influence. The role of evaluation research is to provide that evidence in a form that is accessible and informative to policy makers. Recent criticisms of evaluation research in criminal justice point to a need for greater attention to the quality of evaluation designs and the implementation of evaluation plans.

In the context of concerns about evaluation methods and quality, the National Institute of Justice asked the Committee on Law and Justice of the National Research Council to conduct a workshop on improving the evaluation of criminal justice programs and to follow up with a report that extracts guidance for effective evaluation practices from those proceedings.

The workshop participants presented and discussed examples of evaluation-related studies that represent the methods and challenges associated with research at three levels: interventions directed toward individuals; interventions in neighborhoods, schools, prisons, or communities; and interventions at a broad policy level.

This report highlights major considerations in developing and implementing evaluation plans for criminal justice programs. It is organized around a series of questions that require thoughtful analysis in the development of any evaluation plan.


WHAT QUESTIONS SHOULD THE EVALUATION ADDRESS?

Program evaluation is often taken to mean impact evaluation—assessing the effects of the program on its intended outcomes. However, the concepts and methods of evaluation research include evaluation of other aspects of a program such as the need for the program, its design, implementation, and cost-effectiveness. Questions about program effects are not necessarily the evaluation questions most appropriate to address for all programs, although they are usually the ones with the greatest generality and potential practical significance.

Moreover, evaluations of criminal justice programs may have no practical, policy, or theoretical significance if the program is not sufficiently well developed for the results to have generality or if there is no audience likely to be interested in the results. Allocating limited evaluation resources productively requires careful assignment of priorities to the programs to be evaluated and the questions to be asked about their performance.

  • Agencies that sponsor and fund evaluations of criminal justice programs should assess and assign priorities to the evaluation opportunities within their scope. Resources should be directed mainly toward evaluations with the greatest potential for practical and policy significance from expected evaluation results and for which the program circumstances are amenable to productive research.

  • For such public agencies as the National Institute of Justice, that process should involve input from practitioners, policy makers, and researchers about the practical significance of the knowledge likely to be generated and the appropriate priorities to apply.

WHEN IS IT APPROPRIATE TO CONDUCT AN IMPACT EVALUATION?

A sponsoring agency cannot launch an impact evaluation with reasonable prospects for success unless the specific program to be evaluated has been identified; background information indicating that evaluation is feasible has been gathered; and the key issues shaping the design of the evaluation have been laid out.

  • The requisite background work may be done by an evaluator proposing an evaluation prior to submitting the proposal. To stimulate and capitalize on such situations, sponsoring agencies should consider devoting some portion of the funding available for evaluation to support (a) researchers proposing early stages of evaluation that address issues of priority, feasibility, and evaluability and (b) opportunistic funding of impact evaluations proposed by researchers who find themselves in those fortuitous circumstances that allow a strong evaluation to be conducted of a significant criminal justice program.

  • Alternatively, the requisite background work may be instigated by the sponsoring agency for programs judged to be of high priority for impact evaluation. To accomplish this, agencies should undertake feasibility or design studies that will assess whether an impact evaluation is likely to be successful for a program of interest.

  • The preconditions for successful impact evaluation are most easily attained when they are built into a program from the start. Agencies that sponsor program initiatives should consider which new programs may be significant candidates for impact evaluation. The program initiative should then be configured to require or, as much as possible, encourage well-defined program structures, record keeping and data collection, documentation of program activities, and other components supportive of an eventual impact evaluation.

HOW SHOULD AN IMPACT EVALUATION BE DESIGNED?

Evaluation design involves many practical and technical considerations related to sampling and the generalizability of results, statistical power, measurement, methods for estimating program effects, and information that helps explain those effects. There are no simple answers to the question of which designs best fit which evaluation situations, and all choices inevitably involve trade-offs between what is desirable and what is practical and between the relative strengths and weaknesses of different methods. Nonetheless, some general guidelines can be applied when considering the approach to be used for a particular impact evaluation.
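
The statistical-power consideration above can be made concrete with a back-of-the-envelope sample-size calculation. The sketch below uses the standard normal approximation for a two-group comparison of means; the function name and default values are illustrative, not drawn from the report.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate units needed per group to detect a standardized mean
    difference (Cohen's d) in a two-group, two-sided comparison, using
    the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance test
    z_beta = z.inv_cdf(power)           # quantile corresponding to the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# Small program effects demand large samples: detecting d = 0.2 requires
# roughly 400 units per group, while d = 0.5 requires about 60.
print(sample_size_per_group(0.2), sample_size_per_group(0.5))
```

Calculations of this kind, done before an evaluation is commissioned, show quickly whether a proposed site can supply enough cases to detect a policy-relevant effect.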

  • A well-developed and clearly stated Request for Proposals (RFP) is the first step in guarding against implementation failure. When requesting an impact evaluation for a program of interest, the sponsoring agency should specify as completely as possible the evaluation questions to be answered, the program sites expected to participate, the relevant outcomes, and the preferred methods to be used. Agencies should devote sufficient resources during the RFP-development stage, including support for site visits, evaluability assessments, pilot studies, pipeline analyses, and other preliminary investigations necessary to ensure strong guidance to the field in RFPs.

  • Development of the specifications for an impact evaluation (e.g., an RFP) and the review of proposals for conducting the evaluation should involve expert panels of evaluators with diverse methodological backgrounds and sufficient opportunity for them to explore and discuss the trade-offs and potential associated with different approaches.

  • To strengthen the quality of application reviews, a two-stage review is recommended: the policy relevance of the programs under consideration for evaluation should first be judged by knowledgeable policy makers, practitioners, and researchers. Proposals that pass this screen should then receive a scientific review from a panel of well-qualified researchers, focused solely on the scientific merit and the likelihood of successful implementation of the proposed research.

  • Given the state of criminal justice knowledge, randomized experimental designs should be favored in situations where it is likely that they can be implemented with integrity and will yield useful results. This is particularly the case where the intervention is applied to units for which assignment to different conditions is feasible, e.g., individual persons or clusters of moderate scope such as schools or centers.

  • Before an impact evaluation design is implemented, the assumptions on which the validity of its results depends should be made explicit, the data and analyses required to support credible conclusions about program effects should be identified, and the availability or feasibility of obtaining the required data should be demonstrated.
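
As an illustration of the randomized designs favored above, random assignment of units (individual persons, schools, or centers) to conditions is simple to implement and audit. This is a generic sketch, not a procedure specified in the report; the function name and seed value are illustrative.

```python
import random

def randomize(units, seed=None):
    """Randomly split units into two equal-size conditions
    (treatment, control). Supplying a seed makes the assignment
    reproducible, which supports auditing of the design."""
    rng = random.Random(seed)
    pool = list(units)
    rng.shuffle(pool)          # unbiased random ordering of all units
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Assign 100 hypothetical units to the two conditions.
treatment, control = randomize(range(100), seed=2005)
```

Recording the seed and the assignment roster at the outset is one concrete way to demonstrate later that the design was implemented with integrity.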

HOW SHOULD THE EVALUATION BE IMPLEMENTED?

High-quality evaluation is most likely to occur when (a) the design is tailored to the program's circumstances in ways that facilitate adequate implementation, (b) the program being evaluated understands, agrees to, and fulfills its role in the evaluation, and (c) problems that arise during implementation are anticipated as much as possible and dealt with promptly and effectively.

  • Plans and commitments for impact evaluation should be built into the design of programs during their developmental phase whenever possible.

  • A detailed management plan should be developed for implementation of an impact evaluation that specifies the key events and activities and associated timeline for both the evaluation team and the program.

  • Knowledgeable staff of the sponsoring agency should monitor the implementation of the evaluation.

  • Especially for larger projects, implementation and problem solving may be facilitated by support of the evaluation team through such activities as meetings or cluster conferences of evaluators with similar projects for the purpose of cross-project sharing or consultation with advisory groups of veteran researchers.

WHAT ORGANIZATIONAL INFRASTRUCTURE AND PROCEDURES SUPPORT HIGH-QUALITY EVALUATION?

The research methods for conducting an impact evaluation, the data resources needed to adequately support it, and the integration and synthesis of results for policy makers and researchers are all areas in which the basic tools need further development to advance high-quality evaluation of criminal justice programs. Agencies with a major investment in evaluation, such as the National Institute of Justice, should devote a portion of available funds to methodological development in areas such as the following:

  • Research aimed at adapting and improving impact evaluation designs for criminal justice applications; for example, development and validation of effective uses of alternative designs such as regression-discontinuity, selection bias models for nonrandomized comparisons, and techniques for modeling program effects with observational data.

  • Development and improvement of new and existing databases in ways that would better support impact evaluation of criminal justice programs.

  • Measurement studies that would expand the repertoire of relevant outcome variables and knowledge about their characteristics and relationships for purposes of impact evaluation (e.g., self-report delinquency and criminality; official records of arrests, convictions, and the like; measures of critical mediators).

  • Synthesis and integration of the findings of impact evaluations in ways that would inform practitioners and policy makers about the effectiveness of different types of criminal justice programs and the characteristics of the most effective programs of each type and that would inform researchers about gaps in the research and the influence of methodological variation on evaluation results.
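
To make the regression-discontinuity design mentioned above concrete, the simplest (sharp) version estimates a program's effect as the jump in outcomes at an eligibility cutoff. The following is an illustrative sketch with separate linear fits on each side of the cutoff; it is not a method prescribed by the report, and real applications require attention to bandwidth and functional form.

```python
def rd_estimate(scores, outcomes, cutoff):
    """Sharp regression-discontinuity sketch: fit an ordinary
    least-squares line on each side of the eligibility cutoff and
    take the gap between the two fitted lines at the cutoff as the
    local estimate of the program effect."""
    def fit(points):
        xs, ys = zip(*points)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in points)
                 / sum((x - mx) ** 2 for x in xs))
        return my - slope * mx, slope  # intercept, slope
    below = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    above = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a0, b0 = fit(below)   # line for units below the cutoff (untreated)
    a1, b1 = fit(above)   # line for units at or above the cutoff (treated)
    return (a1 + b1 * cutoff) - (a0 + b0 * cutoff)
```

Because treatment is determined entirely by the eligibility score, the design recovers a credible local effect without random assignment, which is why the report lists it among the alternatives worth further development for criminal justice applications.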

To support high-quality impact evaluation, the sponsoring agency must itself incorporate and maintain sufficient expertise to set effective and feasible evaluation priorities, manage the background preparation necessary to develop the specifications for evaluation projects, monitor implementation, and work well with expert advisory boards and review panels.

  • Agencies that sponsor a significant portfolio of evaluation research in criminal justice, such as the National Institute of Justice, should maintain a separate evaluation unit with clear responsibility for developing and completing high-quality evaluation projects. To be effective, such a unit will generally need a dedicated budget, some authority over evaluation research budgets and projects, and independence from undue program and political influence on the nature and implementation of the evaluation projects undertaken.

  • The agency personnel responsible for developing and overseeing impact evaluation projects should include individuals with relevant research backgrounds who are assigned to evaluation functions and retained in those positions in ways that ensure continuity of experience with the challenges of criminal justice evaluation, with methodological developments, and with the community of researchers available to conduct quality evaluations.

  • The unit and personnel responsible for developing and completing evaluation projects should be supported by review and advisory panels that provide expert consultation in developing RFPs, reviewing evaluation proposals and plans, monitoring the implementation of evaluation studies, and other such functions that must be performed well in order to facilitate high-quality evaluation research.
