deciding when an intervention’s early results are sufficiently promising to support additional funding for a long-term follow-up study. A limited number of preventive interventions have now received funding for long-term follow-up, and many of these have demonstrated effects that appear stronger over time (Olds, Henderson, et al., 1998; Wolchik, Sandler, et al., 2002; Hawkins, Kosterman, et al., 2005; Kellam, Brown, et al., 2008; Petras, Kellam, et al., 2008; Wilcox, Kellam, et al., 2008). It is difficult for reviewers to assess whether an intervention’s relatively modest early effects are likely to improve over time or diminish, and therefore some of the most promising prevention programs may miss an opportunity for long-term funding.

NONRANDOMIZED EVALUATIONS OF INTERVENTION IMPACT

Conducting high-quality randomized trials is challenging, but the effort and expense are necessary to answer many important questions. However, many critical questions cannot be answered by randomized trials (Greenwald and Cullen, 1985; Institute of Medicine, 1994). For example, Skinner, Matthews, and Burton (2005) examined how existing welfare programs affected the lives of families. Their ethnographic data demonstrated that many families cannot obtain needed services because of enormous logistical constraints in reaching the service locations.

In other situations, there may be no opportunity to conduct a true randomized trial to assess the effects of a defined intervention: the community may be averse to the use of a randomization scheme, ethical considerations may preclude conducting such a trial, or funds and time may be too limited. Even so, many opportunities remain to conduct careful evaluations of prevention programs, and much can be gained from such data if they are carefully collected. Indeed, much has been written about the limits of the knowledge that a standard randomized trial can provide, and natural experiments can sometimes supply complementary information (West and Sagarin, 2000).

When a full randomized trial cannot be used to evaluate an intervention, an alternative study should be designed so that the participants in the intervention and comparison conditions differ as little as possible on characteristics other than the intervention itself. For example, it will be difficult to distinguish the effect of an intervention from other factors if a community that has high readiness is compared with a neighboring community that is not at all ready to provide the intervention. It may be necessary to work with both communities to ensure that they receive similar attention before the intervention starts as well as similar follow-up efforts.

Copyright © National Academy of Sciences. All rights reserved.