7
Summary, Conclusions, and Recommendations: Priorities and Focus

Effective policy in many areas of criminal justice depends on the ability of various programs to reduce crime or protect potential victims. However, evaluations of criminal justice programs will have little practical or policy significance if the programs are not sufficiently well-developed for the results to generalize, or if no audience is interested in those results. Moreover, questions about program effects, which are usually those with the greatest generality and potential practical significance, are not necessarily appropriate for all programs. Allocating limited evaluation resources productively therefore requires careful prioritizing of the programs to be evaluated and the questions to be asked about their performance. This observation leads to the following recommendations:

  • Agencies that sponsor and fund evaluations of criminal justice programs should routinely assess and prioritize the evaluation opportunities within their scope. Resources should be directed mainly toward programs for which (a) the knowledge expected to result has the greatest potential for practical and policy significance and (b) the circumstances are amenable to research capable of producing the intended knowledge. Priorities for evaluation should also include consideration of the evaluation questions most important to answer (e.g., process or impact) and the aspect(s) of the program on which to focus the evaluation.

  • For public agencies such as the National Institute of Justice, that process should involve input from practitioners and policy makers, as well as researchers, about the practical significance of the knowledge likely to be generated from evaluations of various types of criminal justice programs and the appropriate priorities to apply. However, this is distinct from the assessment of specific proposals for evaluation that respond to those priorities, a task for which the expertise of practitioners and policy makers is poorly suited relative to that of experienced evaluation researchers.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

BACKGROUND CHECK FOR PROGRAMS CONSIDERED FOR EVALUATION

There are many preconditions for an impact evaluation of a criminal justice program to have a reasonable chance of producing valid and useful knowledge. The program must be sufficiently well-defined to be replicable; the program circumstances and personnel must be amenable to an evaluation study; the requirements of the research design must be attainable (appropriate samples, data, comparison groups, and the like); the political environment must be stable enough for the program to be maintained during the evaluation; and a research team with adequate expertise must be available to conduct the evaluation. These preconditions cannot be safely assumed to hold for any particular program, nor can an evaluation team be expected to locate and recruit a program that meets them if it has not been identified in advance of commissioning the evaluation. Moreover, once the program to be evaluated has been identified, certain key information about its nature and circumstances is necessary to develop an evaluation design that is feasible to implement. It follows that a sponsoring agency cannot launch an impact evaluation with reasonable prospects for success unless the specific program to be evaluated has been identified and background information has been gathered about the feasibility of evaluation and the considerations that must be incorporated into the design.

Recommendations:

  • The requisite background work may be done by an evaluator proposing an evaluation prior to submitting the proposal.
Indeed, evaluators occasionally find themselves in fortuitous circumstances where conditions are especially favorable for a high-quality impact evaluation. To stimulate and capitalize on such situations, sponsoring agencies should devote some portion of the funding available for evaluation to support (a) researchers proposing early stages of evaluation that address issues of priority, feasibility, and evaluability and (b) opportunistic funding of impact evaluations proposed by researchers who find themselves in circumstances where a strong evaluation of a significant criminal justice program can be conducted.

  • The requisite background work may be instigated by the agency sponsoring the evaluation of selected programs. To accomplish this, agencies should support feasibility or design studies that assess the prospects for a successful impact evaluation of each program of interest. Appropriate preliminary investigations might include site visits, pipeline studies, piloting of data collection instruments and procedures, evaluability assessments, and the like. The results of these studies should then be used to identify program situations where funding a full impact study is feasible and warranted.

  • The preconditions for successful impact evaluation can generally be attained most easily when they are built into a program from the start. Agencies that sponsor program initiatives should consider which new programs may be significant candidates for impact evaluation. The program initiative should then be configured to require or encourage, as much as possible, the inclusion of well-defined program structures, record keeping and data collection, documentation of program activities, and other such components supportive of an eventual impact evaluation.

SOUND EVALUATION DESIGN

Within the range of recognized research designs capable of assessing program effects, there are inherent trade-offs that keep any one from being optimal for all circumstances. Careful consideration of the match between the design, the program circumstances, and the evaluation purposes is required. Moreover, that consideration must be well informed and thoughtfully developed before an evaluation plan is accepted and implemented. Although there are no simple answers to the question of which designs best fit which evaluation problems, some guidelines can be applied when considering the approach to be used for a particular impact evaluation.
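Feasibility and design studies of the sort recommended above often turn on a concrete statistical question: whether the sample a program site can supply gives adequate power to detect a plausibly sized effect. A minimal sketch of that calculation, using the standard normal-approximation sample-size formula for a two-arm comparison (the significance level, power, and effect sizes below are illustrative assumptions, not figures from this report):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for a two-group comparison of means, using the
    normal approximation:  n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = z.inv_cdf(power)          # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "small" effect (d = 0.2) demands far larger samples than a "medium" one (d = 0.5):
small = n_per_group(0.2)   # 393 per arm
medium = n_per_group(0.5)  # 63 per arm
```

A site that can enroll only a few dozen participants per arm is therefore a poor candidate for detecting small effects, which is exactly the kind of finding a design study should surface before a full impact evaluation is funded.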
  • When requesting an impact evaluation, the sponsoring agency should specify as completely as possible the evaluation questions to be answered, the program sites expected to participate, the outcomes of interest, and the preferred methods to be used. These specifications should be informed by background information of the type described above.

  • Development of the specifications for an impact evaluation (e.g., an RFP) and the review of proposals for conducting it should involve expert panels of evaluation researchers with diverse methodological backgrounds, with sufficient opportunity for them to explore and discuss the trade-offs and potential associated with different approaches. The members of these panels should be selected to represent evaluators whose own work represents high methodological standards, to avoid perpetuating the weaker strands of evaluation practice in criminal justice.

  • Given the state of criminal justice knowledge, randomized experimental designs should be favored in situations where it is likely that they can be implemented with integrity and will yield useful results. This is particularly the case where the intervention is applied to units for which assignment to different conditions is feasible, e.g., individual persons or clusters of moderate scope such as schools or centers.

  • Before an impact evaluation design is implemented, the assumptions upon which its validity depends should be made explicit, the data and analyses required to support credible conclusions about program effects should be identified, and the availability of the required data should be demonstrated. This is especially important when observational or quasi-experimental studies are used. Meeting the assumptions required to produce results with high internal validity in such studies is difficult and requires statistical models that are poorly understood by laypeople and, indeed, by many evaluation researchers.

  • Research designs for assessing program effects should also address, when feasible, such related matters as the generalizability of those effects, the causal mechanisms that produce them, and the variables that moderate them.

SUCCESSFUL IMPLEMENTATION OF THE EVALUATION PLAN

Even the most carefully developed designs and plans for impact evaluation may encounter problems during implementation that undermine their integrity and the value of their results. Arguably, implementation is a greater barrier to high-quality impact evaluation than the difficulties associated with formulating a sound design.
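The randomized designs favored above have a simple analytic payoff: because assignment is independent of unit characteristics, the difference in mean outcomes between arms is an unbiased estimate of the program effect. A minimal simulation sketch of that logic (the rearrest rates, sample size, and effect size are hypothetical illustrations, not findings):

```python
import random

random.seed(1)

def run_trial(n_units: int = 10_000,
              control_rate: float = 0.50,
              treated_rate: float = 0.40) -> float:
    """Randomly assign units to arms, simulate a binary outcome
    (e.g., rearrest within a year), and return the difference in
    outcome rates (treated minus control)."""
    outcomes = {"treated": [], "control": []}
    for _ in range(n_units):
        arm = random.choice(["treated", "control"])  # the randomization step
        rate = treated_rate if arm == "treated" else control_rate
        outcomes[arm].append(random.random() < rate)
    return (sum(outcomes["treated"]) / len(outcomes["treated"])
            - sum(outcomes["control"]) / len(outcomes["control"]))

effect = run_trial()  # should land near the true effect of -0.10
```

With no randomization, by contrast, the same difference in means would mix the program effect with whatever selection process sorted units into the program, which is why the report treats observational designs as requiring much stronger, explicitly stated assumptions.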
High-quality evaluation is most likely to occur when (a) the design is tailored to the respective program circumstances in a way that facilitates adequate implementation, (b) the program being evaluated understands, agrees to, and fulfills its role in the evaluation, and (c) problems that arise during implementation are anticipated and dealt with promptly and effectively.

Recommendations:

  • A well-developed and clearly stated RFP is the first step in guarding against implementation failure. An RFP that is based on solid information about the nature and circumstances of the program to be evaluated should encourage prospective evaluators to plan for the likely implementation problems. If the necessary background information to produce a strong RFP is not readily available, agencies should devote sufficient resources during the RFP-development stage to generate it. Site visits, evaluability assessments, pilot studies, pipeline analyses, and other such preliminary investigations are recommended.

  • The application review process can also be used to enhance the quality of implementation of funded evaluations. Knowledgeable reviewers can contribute not only to the selection of sound evaluation proposals but also to improving the methodological quality and potential for successful implementation of those selected. To strengthen the quality of application reviews, a two-stage review is recommended, whereby the policy relevance of the programs under consideration for evaluation is first judged by knowledgeable policy makers, practitioners, and researchers. Proposals that pass this screen then receive a scientific review from a panel of well-qualified researchers. The review panels at this second stage focus solely on the scientific merit and likelihood of successful implementation of the proposed research.

  • The likelihood of a successful evaluation is greatly diminished when it is imposed on programs that have not agreed, voluntarily or as a condition of funding, to participate. Plans and commitments for impact evaluation should be built into the design of programs during their developmental phase whenever possible. When the agency sponsoring the evaluation also provides funding for the program being evaluated, the terms associated with that funding should include participation in an evaluation if selected, as well as specification of the recordkeeping and other program procedures necessary to support the evaluation.

  • Commissioning an evaluation for which the evaluator must then find and recruit programs willing to participate should be avoided. This practice not only compromises the generalizability of the evaluation results but also makes the success of the evaluation overly dependent on the happenstance circumstances of the volunteer programs and their willingness to continue their cooperation as the evaluation unfolds.
  • A detailed management plan should be developed for implementation of an impact evaluation, specifying the key events and activities and the associated timeline for both the evaluation team and the program. To ensure that the role of the program and other critical partners is understood and documented, memoranda of understanding should be drafted and formally agreed to by the major parties.

  • Knowledgeable staff of the sponsoring agency should monitor the implementation of the evaluation, e.g., through conference calls and periodic meetings with the evaluation team. Where appropriate, the agency may need to exercise its influence directly with local program partners to ensure that commitments to the evaluation are honored.

  • Especially for larger projects, implementation and problem solving may be facilitated by support to the evaluation team in such forms as meetings or cluster conferences of evaluators with similar projects, for the purpose of cross-project sharing and learning, or consultation with advisory groups of veteran researchers.

  • When arranging funding for impact evaluation projects, the sponsoring agency should set aside an emergency fund to be used on an as-needed basis to respond to unexpected problems and maintain the implementation of an otherwise promising evaluation project.

IMPROVING THE TOOLS FOR EVALUATION RESEARCH

The research methods for conducting impact evaluation, the data resources needed to support it adequately, and the integration and synthesis of results for policy makers and researchers are all areas in which the basic tools need further development to advance high-quality evaluation of criminal justice programs.

Recommendations: Agencies such as NIJ with a major investment in evaluation should devote a portion of available funds to methodological development in areas such as the following:

  • Research aimed at adapting and improving impact evaluation designs for criminal justice applications; for example, development and validation of effective applications of alternative designs such as regression-discontinuity designs, selection bias models for nonrandomized comparisons, and techniques for modeling program effects with observational data.

  • Development and improvement of new and existing databases in ways that would better support impact evaluation of criminal justice programs, and measurement studies that expand the repertoire of relevant outcome variables and knowledge about their characteristics and relationships for purposes of impact evaluation (e.g., self-reported delinquency and criminality; official records of arrests, convictions, and the like; measures of critical mediators).

  • Synthesis and integration of the findings of impact evaluations in ways that inform practitioners and policy makers about the effectiveness of different types of criminal justice programs and the characteristics of the most effective programs of each type, and that inform researchers about gaps in the research and the influence of methodological variation on evaluation results.
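Of the alternative designs named above, regression-discontinuity is perhaps the easiest to illustrate: when program assignment is determined by a cutoff on a running variable (say, a risk score), the program effect appears as a jump in outcomes at the cutoff. A minimal sketch with simulated data and local linear fits on each side of the cutoff (the cutoff, effect size, and bandwidth are all hypothetical choices for illustration):

```python
import random

random.seed(7)

def fitted_value_at(xs, ys, x0):
    """Fit y = a + b*x by ordinary least squares; return the fitted value at x0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return (my - b * mx) + b * x0

# Simulate a sharp design: units with risk score >= 0 receive the program,
# which shifts the outcome by a true effect of 2.0.
cutoff, true_effect = 0.0, 2.0
data = []
for _ in range(4000):
    x = random.uniform(-1, 1)  # running variable (e.g., a risk score)
    y = 1.0 + 0.5 * x + true_effect * (x >= cutoff) + random.gauss(0, 0.5)
    data.append((x, y))

# Local linear fits within a bandwidth on each side of the cutoff; the
# estimated effect is the gap between the two fitted lines at the cutoff.
bw = 0.5
left = [(x, y) for x, y in data if cutoff - bw <= x < cutoff]
right = [(x, y) for x, y in data if cutoff <= x <= cutoff + bw]
estimate = (fitted_value_at([x for x, _ in right], [y for _, y in right], cutoff)
            - fitted_value_at([x for x, _ in left], [y for _, y in left], cutoff))
# estimate should land near the true effect of 2.0
```

The design's credibility rests on units just below and just above the cutoff being comparable, which is precisely the kind of assumption the report recommends making explicit and checking before such a study is funded.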
ORGANIZATIONAL SUPPORT FOR HIGH-QUALITY EVALUATION

To support high-quality impact evaluation, the sponsoring agency must itself incorporate sufficient expertise to help set effective and feasible evaluation priorities, accomplish the background preparation necessary to develop the specifications for evaluation projects, monitor implementation, and work well with expert advisory boards and review panels. Maintaining such resident expertise, in turn, requires an organizational commitment to evaluation research and evidence-based decision making within a culture of respect for these functions and the personnel responsible for carrying them out.

Recommendations:

  • Agencies such as NIJ that sponsor a significant portfolio of evaluation research in criminal justice should maintain a separate evaluation unit with clear responsibility for developing and completing high-quality evaluation projects. To be effective, such a unit will need a dedicated budget, a certain amount of authority over evaluation research budgets and project selection, and independence from undue program and political influence on the nature and implementation of the evaluation projects undertaken.

  • The agency personnel responsible for developing and overseeing impact evaluation projects should include individuals with relevant research backgrounds who are assigned to evaluation functions and maintained in those positions in ways that ensure continuity of experience with the challenges of criminal justice evaluation, with methodological developments, and with the community of researchers available to conduct quality evaluations.

  • The unit and personnel responsible for developing and completing evaluation projects should be supported by review and advisory panels that provide expert consultation in developing RFPs, reviewing evaluation proposals and plans, monitoring the implementation of evaluation studies, and other such functions that must be performed well in order to facilitate high-quality evaluation research.