
4 How Should an Impact Evaluation Be Designed?
Pages 34-44



From page 34...
... This situation requires that the design provide some basis for constructing a credible estimate of the outcomes for the counterfactual conditions. Another fundamental characteristic of impact evaluation is that the design must be tailored to the circumstances of the particular program being evaluated, the nature of its target population, the outcomes of interest, the data available, and the constraints on collecting new data.
From page 35...
... The job of a good impact evaluation design is to neutralize or rule out such threats to the internal validity of a study. Although numerous research designs are used to assess program effects, it is useful to classify them into three broad categories: randomized experiments, quasi-experiments, and observational designs.
From page 36...
... In the most common type, an intervention group is compared with a control group that has been selected on the basis of similarity to the intervention group, a specific selection variable, or perhaps simply convenience. For example, researchers might compare offenders receiving intensive probation supervision with offenders receiving regular probation supervision who are matched to them on prior offense history, gender, and age.
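A minimal sketch of this kind of matched selection (field names such as `priors`, `male`, and `age` are invented for illustration; a real study would standardize the variables or use propensity scores rather than raw distances):

```python
# Hypothetical illustration: match each intensive-supervision case to its
# nearest regular-supervision case on prior offense history, gender, and age.
def match_comparison(treated, pool, keys=("priors", "male", "age")):
    """1-nearest-neighbor matching by unscaled squared distance,
    without replacement."""
    available = list(pool)
    matched = []
    for t in treated:
        best = min(available,
                   key=lambda c: sum((t[k] - c[k]) ** 2 for k in keys))
        matched.append(best)
        available.remove(best)  # each comparison case is used once
    return matched

treated = [{"priors": 3, "male": 1, "age": 24}]
pool = [{"priors": 0, "male": 0, "age": 50},
        {"priors": 2, "male": 1, "age": 25}]
matched = match_comparison(treated, pool)  # selects the similar 25-year-old
```

The design choice that matters here is matching without replacement: once a comparison case is matched, it is removed from the pool, so later treated cases may receive poorer matches.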
From page 37...
... The greatest threat to the internal validity of quasi-experimental designs, therefore, is usually uncontrolled extraneous influences whose differential effects on the outcome variables are confounded with the true program effects. Simply stated, the equivalence that one can assume from random allocation of subjects into intervention and control conditions cannot be assumed when allocation into groups is not random.
From page 38...
... The major threat to the internal validity of observational designs used for impact evaluation is failure to adequately model the processes influencing variation in the program and the outcomes. This problem is of particular concern in criminal justice evaluations because theoretical development in criminology is less advanced than in disciplines, like economics, that rely heavily on observational modeling (Weisburd, 2003).
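The stakes of this modeling failure can be illustrated with a small simulation on fabricated toy data (not from the report): when an unmodeled risk factor both steers cases into the program and worsens the outcome, a naive comparison is biased, while a model that conditions on the selection process (here, crude stratification) recovers the true effect:

```python
import random

random.seed(1)
# Toy data: high-risk cases are steered into the program (selection), and
# risk independently raises the outcome. True program effect: -1.0.
cases = []
for _ in range(20_000):
    high_risk = random.random() < 0.5
    in_program = random.random() < (0.8 if high_risk else 0.2)
    outcome = 2.0 * high_risk - 1.0 * in_program + random.gauss(0, 0.5)
    cases.append((high_risk, in_program, outcome))

def mean_outcome(group):
    return sum(o for _, _, o in group) / len(group)

# Naive comparison confounds selection with the program effect
# (it even gets the sign wrong):
naive = (mean_outcome([c for c in cases if c[1]])
         - mean_outcome([c for c in cases if not c[1]]))

# Modeling the selection variable, by stratifying on it, recovers -1.0:
adjusted = sum(
    mean_outcome([c for c in cases if c[0] == hr and c[1]])
    - mean_outcome([c for c in cases if c[0] == hr and not c[1]])
    for hr in (True, False)
) / 2
```

The catch, as the passage notes, is that this adjustment only works when the selection process is known and measured; in the simulation we granted ourselves the risk variable, which real observational studies often lack.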
From page 39...
... Different designs are more or less difficult to implement well in different situations and may provide different kinds of information about program effects. Well-implemented randomized experiments can be expected to yield results with more certain internal validity than quasi-experimental and observational studies.
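A small simulation of why random allocation underwrites internal validity: even a confounder the evaluator never measures ends up balanced across groups in expectation (the "risk" score here is hypothetical):

```python
import random

random.seed(0)
# An unmeasured continuous risk score for each of 10,000 subjects.
population = [random.gauss(0, 1) for _ in range(10_000)]

# Random allocation into intervention and control groups.
random.shuffle(population)
treat, control = population[:5_000], population[5_000:]

mean = lambda xs: sum(xs) / len(xs)
gap = abs(mean(treat) - mean(control))  # small purely by randomization
```

No matching variables had to be chosen and no selection model had to be specified; balance on observed and unobserved characteristics alike follows from the allocation mechanism itself.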
From page 40...
... In criminal justice, however, essential data are often not available and theory is often underdeveloped, which limits the utility of quasi-experimental and observational designs for evaluation purposes. As this discussion suggests, the choice of a research design for impact evaluation is a complex one that must be based in each case on a careful assessment of the program circumstances, the evaluation questions at issue, practical constraints on the implementation of the research, and the degree to which the assumptions and data requirements of any design can be met.
From page 41...
... GENERALIZABILITY OF RESULTS As mentioned in the discussion above, one important aspect of an impact evaluation design may be the extent to which the results can be generalized beyond the particular cases and circumstances actually investigated in the study. External validity is concerned with the extent to which such generalizations are defensible.
From page 42...
... Even when statistical power is examined in criminal justice evaluations, the approach is frequently superficial. For example, it is common for criminal justice evaluators to estimate statistical power for program effects defined as "moderate" in size on the basis of Cohen's (1988)
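For reference, a power calculation of the kind at issue can be sketched with a normal approximation for a two-sample comparison, using only the standard library (real evaluations would use exact methods or dedicated power-analysis software):

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for
    standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality parameter
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# Cohen's "moderate" standardized effect (d = 0.5), 64 cases per group:
power = two_sample_power(0.5, 64)  # close to the conventional 0.80
```

The superficiality the passage criticizes lies not in the arithmetic but in the choice of `d`: a generic "moderate" effect may be far larger than any effect the program could plausibly produce, making the study underpowered in practice.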
From page 43...
... It is thus important that a careful process evaluation accompany an impact evaluation to provide descriptive information on what happened during a study. Process evaluations should include both qualitative and quantitative information to provide a full picture of the program.
From page 44...
... This does not mean that single-site studies cannot be useful for drawing conclusions about program effects or developing policy, only that caution must be used to avoid overgeneralizing their significance.

