data to permit comparison of results across sites, but these must usually be developed ad hoc because there are few standards and little basis for identifying the most relevant measures.

There is much that the agencies that sponsor criminal justice evaluations could do to help alleviate these problems. Most directly, they could support work on outcome measurement aimed at improving program evaluation and, where possible, establishing cross-project comparability. It would be especially valuable if a compendium of scales and items for measuring criminal justice outcomes, along with the intermediate variables frequently used in criminal justice evaluations, were developed or identified and promoted for general use. Grantees could then be asked to select measures from this compendium when appropriate to the evaluation issues. In addition, delivery of public-use datasets could be incorporated into grant and contract requirements, and existing datasets could be expanded to include replication at other sites. Small-scale data augmentation and measurement development projects could also be added to large evaluation projects.

The other area in which significant methodological development is needed is the research design component of impact evaluations. For the crucial issue of estimating program effects, randomized designs can be difficult to use in many applications and impossible in some, while observational studies depend heavily on statistical modeling and on assumptions about the influence of uncontrolled variables. Improvements are possible on both fronts. Creative adaptations of randomized designs to operational programs, and fuller development of strong quasi-experimental designs such as regression discontinuity, hold the potential to greatly improve the quality of impact evaluations. Similarly, improvements in statistical modeling, and in the related area of selection modeling for nonrandomized quasi-experiments, could significantly advance evaluation practice in criminal justice.
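To make the regression discontinuity idea concrete, the following minimal sketch simulates a program assigned by a risk-score cutoff and recovers the program effect with a local linear fit on each side of the threshold. The data-generating process, cutoff, bandwidth, and effect size are all invented for illustration; nothing here comes from the report.

```python
import numpy as np

# Minimal regression-discontinuity sketch (illustrative assumptions only):
# a program is assigned to cases whose risk score is at or above a cutoff,
# and the outcome otherwise varies smoothly with the score.
rng = np.random.default_rng(0)
n = 2000
risk_score = rng.uniform(0, 100, n)           # assignment variable
treated = (risk_score >= 50).astype(float)    # program assigned above the cutoff
# Outcome: smooth in the score, plus an invented -0.10 treatment effect.
recidivism = 0.6 - 0.003 * risk_score - 0.10 * treated + rng.normal(0, 0.05, n)

# Local linear fit within a bandwidth, with separate slopes on each side.
bandwidth = 10.0
centered = risk_score - 50.0
window = np.abs(centered) <= bandwidth
X = np.column_stack([
    np.ones(window.sum()),                 # intercept
    treated[window],                       # jump at the cutoff (the estimate)
    centered[window],                      # slope below the cutoff
    treated[window] * centered[window],    # slope change above the cutoff
])
beta, *_ = np.linalg.lstsq(X, recidivism[window], rcond=None)
print(f"Estimated effect at the cutoff: {beta[1]:.3f} (true value: -0.10)")
```

The key design feature the sketch exhibits is that, because assignment is determined entirely by the observed score, cases just on either side of the cutoff are comparable, and the discontinuity in the fitted lines at the threshold estimates the program effect.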

As with measurement issues, there is much that agencies interested in high-quality impact evaluations could do to advance methodological improvement in evaluation design, and at relatively modest cost. Design-side studies could be added to large evaluation projects: for instance, small quasi-experimental comparison groups of various sorts to set against randomized controls, or supplementary data collections that allow exploration of potentially important control variables for statistical modeling. Where small-scale or pilot evaluation studies are appropriate, innovative designs could be tried out to build more experience with them and a better understanding of their properties. Secondary analysis of existing data and simulations with contrived data could also be supported to explore certain critical design issues; one such simulation is sketched below. In a similar spirit, meta-analysis of existing studies could be undertaken with a focus on methodological influences, in contrast to the typical meta-analytic orientation toward program effects.
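As one illustration of the kind of contrived-data simulation mentioned above, the sketch below compares a randomized design with a nonrandomized comparison in which higher-risk cases select into the program, showing how selection on an uncontrolled variable biases the naive difference in means. Every parameter value and the selection mechanism are assumptions made up for this example.

```python
import numpy as np

# Contrived-data simulation sketch (all parameters invented): contrast a
# randomized design with a self-selected comparison where higher-risk cases
# are more likely to enroll, so risk confounds the naive estimate.
rng = np.random.default_rng(1)
n, true_effect, n_reps = 1000, -0.10, 500
rand_est, naive_est = [], []
for _ in range(n_reps):
    risk = rng.normal(0, 1, n)                    # uncontrolled variable
    # Randomized assignment: independent of risk, so the estimate is unbiased.
    t_rand = rng.integers(0, 2, n)
    y_rand = 0.5 + 0.08 * risk + true_effect * t_rand + rng.normal(0, 0.1, n)
    rand_est.append(y_rand[t_rand == 1].mean() - y_rand[t_rand == 0].mean())
    # Self-selected assignment: enrollment probability rises with risk.
    t_sel = (risk + rng.normal(0, 1, n) > 0).astype(int)
    y_sel = 0.5 + 0.08 * risk + true_effect * t_sel + rng.normal(0, 0.1, n)
    naive_est.append(y_sel[t_sel == 1].mean() - y_sel[t_sel == 0].mean())

print(f"randomized mean estimate: {np.mean(rand_est):+.3f}")
print(f"naive selected estimate:  {np.mean(naive_est):+.3f} (true: {true_effect:+.3f})")
```

Under these invented parameters, the selection bias roughly cancels the true effect, so the naive comparison suggests the program does nothing; this is the sort of design question that cheap simulation studies can probe before committing to an expensive field evaluation.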


