much harder to conclude that the intervention was responsible for the observed positive outcome. A challenge of the logic model approach is that it requires an accurate assessment of the amount and kind (dose) of changes in the community/environment (e.g., Collie-Akers et al., 2007). Intervention intensity, duration, and fidelity have been found to be associated with the size of effects in other evaluation fields, and they are widely recognized as important concepts, although the concept of reach is not always addressed (Hulleman and Cordray, 2009).
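To make the dose idea concrete, the sketch below computes a simple dose score for a handful of strategies, assuming one operationalization sometimes used in this literature: dose as reach (the fraction of the target population exposed) multiplied by strength (the expected change per exposed person). The strategy names and numbers are hypothetical.

```python
# Illustrative "dose" calculation for a set of community strategies.
# Assumes dose = reach x strength; all names and values are hypothetical.

strategies = [
    # (name, reach: share of target population, strength: per-person effect)
    ("school menu change",      0.60, 0.05),
    ("corner-store produce",    0.20, 0.10),
    ("safe-routes media blitz", 0.80, 0.01),
]

doses = {name: reach * strength for name, reach, strength in strategies}

for name, dose in doses.items():
    print(f"{name:25s} dose = {dose:.3f}")
print(f"{'combined (additive)':25s} dose = {sum(doses.values()):.3f}")
```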

By adding even a few design features, evaluations become better able to assess effectiveness. Even two pre-intervention measurements, rather than a single baseline measurement, can help to reduce uncertainty about secular trends in behavior or health outcomes and increase the reliability of measures. With local- or state-level surveillance systems, it may even become feasible to use short interrupted time series (or multiple-baseline designs), far preferable designs that help to control for several alternative explanations (Shadish et al., 2002).

Causal modeling, also called path analysis, builds on the logic model approach by establishing that an intervention precedes the outcomes in time and then applying regression analysis to examine the extent to which the variance in outcomes is accounted for by the intervention rather than by other forces. The “population dose” approach also uses causal modeling in analysis, but causal modeling can be used independently of dose measurement; it is a statistical control concept (see Appendix H for additional information on the “population dose” approach). Although it confines itself to examining associations, the Healthy Communities Study is an especially rigorous example of causal modeling in that it includes measures of both the amount and intensity of community programs/policies (the dose) and childhood obesity rates (the intended outcome) (see Appendix H).

Finally, the regression-discontinuity design rules out most alternative explanations and yields estimates similar to those of experimental designs, provided that its assumptions are met (Shadish et al., 2002). Yet it remains underutilized in prevention research (see Appendix H for an example of a regression-discontinuity design). Illustrative sketches of the interrupted time series and regression-discontinuity designs follow below.
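The interrupted time series logic can be made concrete with a short segmented regression. The sketch below is illustrative only: it assumes simulated monthly surveillance counts rather than real data, and it fits a pre-existing trend, an immediate level change at the intervention, and a post-intervention slope change. A real analysis would also address autocorrelation and seasonality.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly surveillance series: 24 pre- and 24 post-intervention
# months with a modest secular trend plus a post-intervention level drop.
rng = np.random.default_rng(0)
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(int)                  # 1 after the intervention
time_since = np.where(post == 1, t - n_pre, 0)   # months since intervention
y = 50 + 0.1 * t - 4.0 * post - 0.2 * time_since + rng.normal(0, 1.5, t.size)

df = pd.DataFrame({"y": y, "t": t, "post": post, "time_since": time_since})

# Segmented regression: secular trend (t), immediate level change (post),
# and change in slope after the intervention (time_since).
fit = smf.ols("y ~ t + post + time_since", data=df).fit()
print(fit.params)   # 'post' estimates the level shift at the intervention
```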
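Similarly, a minimal regression-discontinuity sketch, assuming a hypothetical setting in which communities receive the intervention only when a continuous needs index crosses a cutoff; the estimated effect is the jump in the outcome at the cutoff. The variable names and effect sizes are invented for illustration, and a real application would restrict the fit to a bandwidth around the cutoff and test the design's assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated sharp regression-discontinuity: units with a needs index at or
# above the cutoff (0) receive the intervention. Hypothetical data only.
rng = np.random.default_rng(1)
n = 400
score = rng.uniform(-1, 1, n)            # eligibility score, centered at cutoff
treated = (score >= 0).astype(int)       # sharp assignment at the cutoff
outcome = 30 - 2.0 * score - 3.0 * treated + rng.normal(0, 1.0, n)

df = pd.DataFrame({"outcome": outcome, "score": score, "treated": treated})

# Linear fit with separate slopes on each side of the cutoff; the 'treated'
# coefficient estimates the discontinuity (the intervention effect).
fit = smf.ols("outcome ~ score * treated", data=df).fit()
print(fit.params["treated"])             # should be near the simulated -3.0
```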

Synthesis and Generalization

Disseminating and Compiling Studies

Understanding the extent of community-level change required to bring about health outcomes is the first step toward generalized knowledge and the spread of effective prevention. Local evaluations are vital to this process because there will be some overlap in the mix of intervention components, creating the potential to identify the ones with the power to effect change. Yet compiling and synthesizing the results of local evaluations is challenging, for at least two reasons. First, data on policy, environmental, and even behavioral changes are not yet collected using commonly accepted measures that can be compared and synthesized. Cost information is rare, although recent federal efforts in Community Transformation Grants (CTGs) and Communities Putting Prevention to Work (CPPW) may soon cast light on the resources necessary for these efforts. Second, the websites where an end user of evaluation can find the desired information are in flux: the Cochrane Collaboration and the Task Force on Community Preventive Services are the main repositories for systematic reviews, but their emphasis on strength of evidence tends to underrate the weight of evidence from evaluations conducted under less controlled conditions. Evaluation results are scattered across peer-reviewed and non-peer-reviewed publications, many websites, and presentations at multiple conferences. In the interest of generalized knowledge, more needs to be done to aggregate study findings about what combinations of strategies work and under what conditions.


