ferent levels of benefits compared with costs, this uncertainty may be moot, but in other cases, it is important to consider carefully.
Perhaps the most important source of uncertainty pertains to longer term outcomes. In many economic evaluations, longer term outcomes of participants in an intervention are not observed and instead must be projected on the basis of other data. Many long-term benefits of early prevention programs cannot be measured until middle childhood and adolescence (e.g., juvenile crime). Longitudinal data used to make projections, such as correlations between the incidence of MEB disorders in childhood and in adulthood, do not necessarily represent accurate causal estimates, as Foster, Dodge, and Jones (2003) note. Another important source of uncertainty is a lack of statistical power. As Mrazek and Hall (1997) observe, many studies in this literature have modest sample sizes and are not sufficiently powered to detect effects on key measures of effectiveness; typically, adequately powered estimates of cost-effectiveness require even larger samples than estimates of effectiveness per se (Ramsey, McIntosh, and Sullivan, 2001). A third, related source of uncertainty concerns the outcomes measured: interventions that appear cost-effective in reducing risk factors closely linked to MEB disorders, but that do not measure the disorders themselves as an outcome, may or may not actually prevent the incidence of those disorders.
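The point about statistical power can be illustrated with a simple Monte Carlo sketch. All parameter values below are assumptions chosen for illustration (they do not come from any study cited here): because costs are typically far more variable than clinical outcomes, a trial adequately powered to detect an effect on the outcome alone can be underpowered to detect a difference in net monetary benefit at the same sample size.

```python
import numpy as np

# Illustrative simulation (assumed parameters, not from the report):
# compare power to detect an outcome effect vs. power to detect a
# difference in net monetary benefit (NMB), where cost noise dominates.

rng = np.random.default_rng(0)
n_per_arm = 100          # participants per arm (assumption)
n_trials = 2000          # number of simulated trials
effect = 0.4             # true effect on outcome, in SD units (assumption)
lam = 1.0                # willingness to pay per unit of outcome (assumption)
cost_saving = 0.4        # true average cost reduction (assumption)
cost_sd = 3.0            # costs are much noisier than outcomes (assumption)


def z_stat(a, b):
    """Two-sample z statistic for the difference in means."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se


sig_outcome = 0
sig_nmb = 0
for _ in range(n_trials):
    # Outcomes: treatment arm shifted up by `effect`
    y_t = rng.normal(effect, 1.0, n_per_arm)
    y_c = rng.normal(0.0, 1.0, n_per_arm)
    # Costs: treatment arm cheaper on average, but highly variable
    c_t = rng.normal(-cost_saving, cost_sd, n_per_arm)
    c_c = rng.normal(0.0, cost_sd, n_per_arm)
    # Net monetary benefit per participant: lam * outcome - cost
    b_t = lam * y_t - c_t
    b_c = lam * y_c - c_c

    sig_outcome += abs(z_stat(y_t, y_c)) > 1.96
    sig_nmb += abs(z_stat(b_t, b_c)) > 1.96

power_outcome = sig_outcome / n_trials
power_nmb = sig_nmb / n_trials
print(f"power (outcome only):         {power_outcome:.2f}")
print(f"power (net monetary benefit): {power_nmb:.2f}")
```

Under these assumed numbers, the outcome comparison is reasonably well powered while the net-benefit comparison is not, which is one way of seeing why cost-effectiveness estimates tend to require larger samples than effectiveness estimates alone.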
Another source of uncertainty is the potential difference between cost-efficacy and cost-effectiveness. Evaluations of interventions conducted in research settings (efficacy studies) may yield different results when conducted in real-world settings (effectiveness studies), raising questions about whether the cost-effectiveness (or more accurately, cost-efficacy) would be realized if the intervention were implemented in a nonresearch environment (see Foster, Dodge, and Jones, 2003, for a brief discussion). Similarly, the costs of interventions implemented in real-world settings may differ from the costs in a research setting.
In addition, as discussed in more detail in Chapter 11, a major challenge in prevention research, particularly when dealing with whole communities, is that preventive interventions are likely to have differential impact on individuals in different contexts because (a) participants have different risk and protective factors that cause different responses to the intervention; (b) the level of participation in interventions varies; and (c) interventions are routinely delivered with varying levels of fidelity and adoption. These factors can reduce overall impact compared to that seen in efficacy trials; thus some analyses of behavioral or economic outcomes in community implementation studies may not find significant effects.
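A hypothetical back-of-the-envelope calculation shows how the factors above can attenuate the effect seen in community implementation relative to an efficacy trial. The numbers and the simple multiplicative model are assumptions for illustration only, not figures from the report:

```python
# Hypothetical attenuation sketch (all values are illustrative assumptions):
# incomplete participation and imperfect fidelity each scale down the
# population-average effect relative to the efficacy-trial estimate.

efficacy_effect = 0.50   # effect size in the efficacy trial, SD units (assumed)
participation = 0.60     # fraction of the target population participating (assumed)
fidelity = 0.70          # average delivery fidelity, scaling the effect (assumed)

# Under a simple multiplicative model:
realized_effect = efficacy_effect * participation * fidelity
print(f"realized population-average effect: {realized_effect:.2f}")  # 0.21
```

An effect of roughly 0.21 SD is far harder to distinguish from zero than the original 0.50 SD, which is consistent with community implementation studies sometimes failing to find significant behavioral or economic effects.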
Measuring the cost of the time contributed by children and other people involved in interventions is also challenging. These challenges can lead to poor cost estimates, producing either an over- or underestimate. Often, however, analysts omit such time costs altogether, introducing a clear bias toward