curriculum has been discontinued but the behavioral program has been continued. For the school district, the benefits of this trial were more immediate.

In another example, a study of young adolescents at risk for delinquency tested three active preventive intervention conditions against a control: a parent intervention alone, a peer-based intervention, and a combined peer and parent intervention. The parent condition alone produced a beneficial outcome; the combined peer–parent intervention produced results similar to the control; and the peer-based intervention produced more delinquency than the other conditions did (Dishion, Spracklen, et al., 1996; Dishion, Burraston, and Poulin, 2001; Dishion, McCord, and Poulin, 1999). Detailed examination revealed that the at-risk adolescents were learning deviant behavior from the more deviant peers in their group before, during, and after the program. This adverse, or iatrogenic, effect, which arises when a peer group includes a high proportion of delinquent youth, is thought to be a major factor in explaining why boot camps and similar programs often show a negative impact (Welsh and Farrington, 2001). In this way, analysis of intervention failures can be highly informative in guiding new prevention programs.

Testing Whether a Program’s Population Effect Can Be Improved by Increasing the Proportion Who Participate

In randomized trials with individual- or family-level assignment, a large fraction of those randomly assigned to a particular intervention often never participate in it, even after consenting (Braver and Smith, 1996). Because they never attend intervention sessions, these individuals receive minimal exposure and cannot benefit from the intervention. Would the intervention be more effective if one could increase participation? Or would outreach to a more difficult-to-engage portion of the population be counterproductive, either because those individuals already have the skills or resources that the intervention develops or because the intervention does not meet their needs? Given the generally low levels of participation in many effective interventions, it is increasingly important to identify ways to extend a program’s reach in a community to those who could benefit (Glasgow, Vogt, and Boles, 1999).
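As a rough illustration of why low participation matters, the following sketch (a hypothetical simulation, not drawn from any of the trials cited here; all parameter values are assumptions) shows how partial participation dilutes a trial’s population-level, intent-to-treat estimate toward the product of the participation rate and the true effect among participants.

```python
import random

def simulate_trial(n_per_arm=50_000, participation=0.4, true_effect=1.0, seed=0):
    """Simulate a two-arm randomized trial in which only a fraction of those
    assigned to the intervention actually attend sessions; return the
    intent-to-treat estimate (mean treated-arm outcome minus mean control)."""
    rng = random.Random(seed)
    treated_total = 0.0
    control_total = 0.0
    for _ in range(n_per_arm):
        attended = rng.random() < participation  # self-selection into sessions
        treated_total += rng.gauss(0, 1) + (true_effect if attended else 0.0)
        control_total += rng.gauss(0, 1)
    return (treated_total - control_total) / n_per_arm

# With a 40% participation rate, the intent-to-treat effect is diluted to
# roughly 0.4 * 1.0, even though those who attend gain the full effect of 1.0.
itt_estimate = simulate_trial()
```

Under these assumed numbers, raising participation moves the population-level estimate closer to the effect experienced by participants, which is why outreach strategies of the kind discussed below are of interest.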

Some designs help evaluate these self-selection effects. One option is an “encouragement design,” in which individuals are randomly assigned to receive different invitation strategies, reinforcers, or messages intended to encourage acceptance of an intervention. This approach can be seen in an evaluation of the impact of Head Start programs by the Administration for Children and Families (2005). Because these programs were already available in most counties in the United States, and the program is viewed as a valuable resource, especially for poor families, it was considered unethical
