random assignment and to revise the program in response to these results using methods described by Collins, Murphy, and Bierman (2004) and West, Aiken, and Todd (1993).
Internet-based programs are also likely to present methodological challenges. First, a randomized trial would typically depend on self-report data collected through the Internet, and uncertainty about the validity of these data, as well as about the proportion of participants willing to respond to long-term evaluations, could weaken the evaluation plan. It may be necessary to use a multistage follow-up design (Brown, Indurkhya, and Kellam, 2000; Brown, Wang, et al., 2008), in which a stratified sample of study participants completes a phone or face-to-face interview.
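As a rough illustration of such a multistage design, the sketch below draws a stratified second-stage interview sample from a simulated Internet panel, oversampling stage-one non-responders. All names, strata, and sampling fractions here are illustrative assumptions, not details from the cited studies.

```python
import random

random.seed(0)

# Simulated stage-1 panel of Internet self-reporters; every fourth
# participant is treated as a stage-1 non-responder (an assumption
# made purely for illustration).
panel = [{"id": i,
          "stratum": "nonresponder" if i % 4 == 0 else "responder"}
         for i in range(1000)]

# Stage-2 sampling fractions per stratum: non-responders are
# oversampled so their outcomes can be checked via interview.
fractions = {"responder": 0.10, "nonresponder": 0.50}

stage2 = []
for stratum, frac in fractions.items():
    members = [p for p in panel if p["stratum"] == stratum]
    stage2.extend(random.sample(members, round(frac * len(members))))

# Inverse-probability weights let stage-2 interview estimates
# generalize back to the full stage-1 panel.
weights = {p["id"]: 1.0 / fractions[p["stratum"]] for p in stage2}

print(len(stage2), round(sum(weights.values())))  # weights sum to panel size
```

Because each sampled participant carries the inverse of their stratum's sampling fraction as a weight, the weighted stage-2 sample represents the full panel of 1,000, even though only 200 interviews are conducted.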
In most health research, trials are staged in a progression from basic to clinical investigations to broad application in target populations, allowing for an ordered and predictable expansion of knowledge in specific areas (e.g., Greenwald and Cullen, 1985). In the prevention field, rigorous evaluations of the efficacy of a preventive intervention can be lengthy, as are studies of replication and implementation. However, opportunities exist for strategic shortcuts. One approach is to combine several trials sequentially. For example, in a school-based trial, consecutive cohorts can serve different purposes. The first cohort of randomly assigned students and their teachers would constitute an effectiveness trial. In the second year, the same teachers continue in the same intervention condition with a second cohort of new students, providing a test of sustainability. Finally, a third student cohort can be used to test scalability to a broader system, with the teachers who originally served as controls now also trained to deliver the intervention.
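The three-cohort sequence described above can be laid out schematically. The sketch below is purely illustrative; the labels are assumptions for exposition, not elements of an actual trial protocol.

```python
# Schematic of the sequential three-cohort school-based design:
# one trial structure reused across years to answer three questions.
design = [
    {"year": 1, "cohort": 1,
     "teachers": "randomized to intervention or control",
     "purpose": "effectiveness trial"},
    {"year": 2, "cohort": 2,
     "teachers": "retain year-1 assignment; new students",
     "purpose": "sustainability test"},
    {"year": 3, "cohort": 3,
     "teachers": "former controls now trained to deliver",
     "purpose": "scalability test"},
]

for row in design:
    print(f"Year {row['year']}: cohort {row['cohort']} "
          f"({row['teachers']}) -> {row['purpose']}")
```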
A related issue involving the staging of trials is determining when there is sufficient scientific evidence to move from a pilot trial of an intervention to a fully funded trial. In the current funding climate, researchers often design a small pilot trial to demonstrate that an intervention is sufficiently strong to warrant a larger trial. Reviewers of applications for larger trials want confidence that the intervention is sufficiently strong before recommending expanded funding. However, as Kraemer, Mintz, and colleagues (2006) point out, the effect size estimate from a pilot trial is generally too variable to serve as a reliable decision-making tool for distinguishing weak from strong interventions. There is a need for alternative sequential design strategies that lead to funding of the most promising interventions.
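A brief numerical sketch helps convey why pilot effect size estimates are so variable. Using the standard large-sample approximation to the standard error of Cohen's d (the specific effect size and sample sizes below are illustrative assumptions, not figures from Kraemer, Mintz, and colleagues), a pilot with 20 participants per arm yields a 95 percent confidence interval wide enough to span both trivial and strong effects.

```python
import math

def se_cohens_d(d, n_per_arm):
    """Approximate standard error of Cohen's d for a balanced
    two-arm trial (standard large-sample formula)."""
    n1 = n2 = n_per_arm
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

# Assumed observed effect d = 0.35 at pilot, mid, and full-trial sizes.
for n in (20, 100, 400):
    se = se_cohens_d(0.35, n)
    lo, hi = 0.35 - 1.96 * se, 0.35 + 1.96 * se
    print(f"n/arm={n:4d}  d=0.35  95% CI ({lo:+.2f}, {hi:+.2f})")
```

At 20 per arm the interval runs from roughly -0.27 to +0.97, covering everything from a null effect to a very strong one, which is exactly why an observed pilot effect size is a poor basis for a go/no-go funding decision.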
Another methodological challenge involving the review process is