Bridging the Evidence Gap in Obesity Prevention: A Framework to Inform Decision Making
methods used. A simple intervention that reaches the entire population may have substantial impact; in the case of tobacco control, cigarette taxes proved to have an impact due in large part to their ability to reach all smokers. The impact of other universal tobacco control interventions was achieved with more complex strategies.
It is important to point out that some of the early policy victories in tobacco control did not have a large impact, but they paved the way for subsequent policy actions that had a much greater impact and thus were critical to the success of those actions. For example, school policies prohibiting smoking on school property by teachers and staff affected small numbers of smokers, but set an example for subsequent clean air legislation, smoke-free worksites, and eventually smoke-free restaurants. The policies also helped denormalize smoking in the eyes of school-age children. Another example is the mounting of mass media campaigns that initially had limited impact on smokers themselves, but helped create an informed electorate that would support ballot initiatives and legislative acts for higher tobacco taxes, more clean air controls, and more constraints on advertising and the sale of tobacco to minors. Understanding the context of an intervention and the preliminary actions that preceded it can be key in determining its effectiveness and potential impact.
How Do We Implement This Information for Our Situation?
When making choices about how to address public health problems such as obesity, decision makers should consider whether the relationships identified in research studies will hold up in their own state, locality, or setting. This is especially so if the data on which best-practice evidence is based come from academic settings, college towns, university-affiliated hospitals or clinics, or artificially simulated settings, as is often the case with published evidence. Decision makers should also consider whether the resources and oversight of interventions in the evidence-producing studies are reproducible in their local settings, and whether the best-practice interventions can be taken to scale with larger numbers of organizations or staff lacking the training and close supervision that typically characterize the protocols of scientific studies. Were the extensive training and close supervision of those conducting the intervention related only to the research functions, or are they essential to successful replication of the intervention’s effectiveness in any setting? Was the formally controlled trial artificially complex for research purposes, or is its protocol intrinsically high maintenance?

In addition, decision makers need to consider what degree of latitude their policies, program plans, or instructions and guidelines can allow in applying the tested interventions to the variety of subpopulations or individual cases that may be encountered. Although the scientific view of many evidence-based guidelines is that they should be applied faithfully and rigorously as in the protocol tested in efficacy trials or effectiveness studies, that is, applied with “fidelity,” deviations from such strict adherence to protocol may be necessary for adaptation to local circumstances. Yet if an intervention then does not work—if the results in the application fall short of those in the scien-