Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities
assumption that the estimate is unbiased depends on the following conditions being met (Rubin, 1974):
The sample selected for study is representative of the population.
As a whole, the participants assigned to the two intervention conditions are equivalent to one another.
The intervention received is the same as the one randomly assigned.
Any differences in assessment are unrelated to the intervention condition.
Attrition or loss to follow-up is unrelated to the intervention condition.
Each individual’s response under the assigned intervention is unaffected by the intervention conditions assigned to all others in the sample.
Adhering to a specified study protocol for maintaining equivalence goes a long way toward satisfying many of these criteria. For example, when assignment to an intervention is truly random, or follows a stratified random process, the second condition, equivalence of the intervention groups, is satisfied. Likewise, attrition bias and assessment bias can both be minimized if the procedures for recontacting and reassessing participants during the follow-up period are carried out blind to intervention condition (Brown and Liao, 1999; Brown, Indurkhya, and Kellam, 2000) or if corrections are made for missing data at baseline.
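The logic behind these conditions can be illustrated with a small simulation. This is a hypothetical sketch, not an analysis from the report: the effect size, sample size, and variable names are invented. It shows that when assignment is truly random (so baseline risk is balanced across arms in expectation), a simple difference in means recovers the true intervention effect.

```python
import random
import statistics

# Illustrative sketch only: numbers and variable names are hypothetical.
# A randomized trial of 10,000 people whose outcome depends on a baseline
# risk factor plus a true intervention effect of -2.0 points.
random.seed(0)
TRUE_EFFECT = -2.0

treated, control = [], []
for _ in range(10_000):
    risk = random.gauss(0, 1)          # baseline risk factor
    assigned = random.random() < 0.5   # random assignment to condition
    outcome = (5.0 + 2.0 * risk
               + (TRUE_EFFECT if assigned else 0.0)
               + random.gauss(0, 1))   # measurement noise
    (treated if assigned else control).append(outcome)

# Because assignment is random, baseline risk is balanced between arms
# in expectation, so the simple difference in means is an unbiased
# estimate of the true effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(round(estimate, 2))  # close to -2.0
```

If assignment instead depended on `risk` (violating the second condition), the same difference in means would absorb the baseline imbalance and no longer estimate the intervention effect.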
Possible Inferences in Response to Self-Selection
One innovative change in how prevention trials are now analyzed is to account for self-selection factors that differentiate those who choose to participate in a prevention program from those who do not. Consideration of these factors is critical in examining the effects of prevention programs aimed at individual young people or families. Some decline to participate at all, others participate in the intervention initially but drop out before the study is completed, and still others continue to participate throughout the intervention period.
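This kind of self-selection can also be illustrated with a small simulation; again, this is a hypothetical sketch with invented numbers, not material from the report. Here a program genuinely helps those who receive it, yet higher-risk families are more likely to participate fully, so a naive comparison by participation level makes participants look worse off.

```python
import math
import random
import statistics

# Illustrative sketch only: all numbers are hypothetical. The program
# truly HELPS those who receive it (effect = -1.0 symptom points), but
# higher-risk families are more likely to participate fully.
random.seed(1)
TRUE_EFFECT = -1.0

full, nonparticipants = [], []
for _ in range(20_000):
    risk = random.gauss(0, 1)
    # Probability of full participation rises with baseline risk.
    p_participate = 1.0 / (1.0 + math.exp(-2.0 * risk))
    participates = random.random() < p_participate
    outcome = (3.0 * risk
               + (TRUE_EFFECT if participates else 0.0)
               + random.gauss(0, 1))
    (full if participates else nonparticipants).append(outcome)

# Naive comparison by participation level: participants appear worse on
# average, even though the program benefits everyone who receives it,
# because participation is confounded with baseline risk.
naive_diff = statistics.mean(full) - statistics.mean(nonparticipants)
print(naive_diff > 0)  # True: participants look worse despite a helpful program
```

The positive naive difference reflects confounding by baseline risk, not a harmful program, which is exactly why participation-level comparisons alone cannot support causal claims.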
It is tempting to compare outcomes by level of participation and interpret any differences as effects of the intervention. For example, one might find that, on average, those exposed to the full intervention had poorer outcomes than those who did not participate, which might suggest that the intervention was harmful. However, these observed differences alone are not a sufficient basis for statements about program effect or causality; indeed, such an intervention could well be beneficial for those who participate, despite the