Many evidence-based prevention programs are delivered to only small portions of the population. Few state agencies, schools, communities, or families select the programs with the highest levels of evidence; most opt instead for programs with less evidence, or for no program at all. One promising approach to improving program reach to individual families is to integrate business models into prevention so that consumer needs are addressed from the beginning (Rotheram-Borus and Duan, 2003). By following a prevention service development model that incorporates consumer preferences from the outset (Sandler, Ostrom, et al., 2005), a research team can aim for both effectiveness and large-scale implementation from the start of the product development cycle.
Similarly, there is a need for greater consideration of the most effective metrics for reporting outcomes to the public. Although effect size may be the most appropriate metric for studies of indicated interventions, in which all participants begin with a substantial level of symptoms, it may be a poor metric for universal interventions. In a universal intervention, a large percentage of the population typically begins with low levels of symptoms, so it is unlikely (at least in the short term) that much of this population will benefit from the intervention. In most cases, larger effect sizes will be obtained only in the higher-symptom segment of the population (Wilson and Lipsey, 2007). Thus, for universal interventions, alternative methods are needed to convey practical and social policy significance (Davis, MacKinnon, et al., 2003; McCartney and Rosenthal, 2000). Cost-effectiveness is one such metric, as universal interventions may achieve more benefit relative to their cost given their large reach.
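The dilution of effect size in universal samples can be illustrated with a small numerical sketch. The figures below are purely hypothetical (90 low-symptom and 9 high-symptom participants per arm, with the intervention reducing symptoms only in the high-symptom subgroup); they are not drawn from any study cited here, and serve only to show how a standardized mean difference that is large within the affected subgroup can appear small when computed over the whole sample.

```python
import statistics

def cohens_d(control, treated):
    """Standardized mean difference (pooled-SD Cohen's d).

    Positive values indicate lower symptom scores in the treated group.
    """
    n1, n2 = len(control), len(treated)
    m1, m2 = statistics.mean(control), statistics.mean(treated)
    v1, v2 = statistics.variance(control), statistics.variance(treated)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical universal sample: 90 low-symptom people (scores 0-2)
# and 9 high-symptom people (scores 7-9) per arm.
control = [0, 1, 2] * 30 + [7, 8, 9] * 3
# Intervention benefits only the high-symptom subgroup (scores drop by 3).
treated = [0, 1, 2] * 30 + [4, 5, 6] * 3

d_full = cohens_d(control, treated)          # whole universal sample
d_sub = cohens_d([7, 8, 9] * 3, [4, 5, 6] * 3)  # high-symptom subgroup only

print(f"full-sample d:  {d_full:.2f}")  # ≈ 0.15 (small)
print(f"subgroup d:     {d_sub:.2f}")   # ≈ 3.46 (very large)
```

The same absolute benefit yields a "small" effect in the full sample and a very large one in the subgroup that actually had room to improve, which is why metrics such as cost-effectiveness may better convey the value of wide-reach universal programs.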
Although their internal validity makes them valuable science, randomized controlled trials do not always have good external validity. Furthermore, much academic research never reaches day-to-day practice. Science can often benefit from the experience of everyday clinical observation. For example, when clinical observations in a community mental health setting in 1982 revealed an extraordinary number of children exposed to violence, a wave of scientific research projects confirmed the observation, culminating in several large-scale strategies to prevent these children from developing mental health sequelae (Jenkins and Bell, 1997; Bell, 2004).
In addition, communities often implement programs because they are based on extensive clinical wisdom and have widespread community support. Research designed to empirically test programs being implemented in naturalistic environments could identify approaches that are readily imple-