. "5 Specifying Questions and Locating Evidence: An Expanded View." Bridging the Evidence Gap in Obesity Prevention: A Framework to Inform Decision Making. Washington, DC: The National Academies Press, 2010.
term. This is a core issue in the selection of an intervention or set of interventions. Decision makers need to understand how interventions work to change environments or behavior and whether they have additional indirect effects, positive or negative. A key aspect of “What” questions is whether a particular intervention is sufficient on its own or requires other interventions at the same or different levels to achieve any effect, or the maximum effect.
The most informative study designs for generating evidence of impact are comparative experiments, or approximations of experiments, that allow for evaluation of the effects of an intervention against a comparison condition or control group. The control group provides a reference point for what might have occurred in the absence of the intervention. Without a comparison condition of some type, one cannot determine whether observed changes were actually due to the intervention or might have occurred anyway as a result of other influences that coincided with it. Outcomes may be compared against those of another intervention thought to have no effect on the problem being studied or against an alternative intervention designed to address the same problem. Comparisons may be made for a population overall but may also focus on specific subgroups, for example, to assess whether the effects of the intervention and the alternative are similar in higher- versus lower-risk groups. Outcomes at one point in time may also be compared with outcomes assessed in the same population previously (e.g., a historical comparison or time series approach).
Studies that generate answers to “What” questions are referred to as impact assessments or outcome evaluations in the evaluation literature (Fitzpatrick et al., 2004; Rossi et al., 2004); they fall within the general category of effectiveness research in the social and behavioral sciences. The interventions evaluated might include programs, policies, laws, or some combination thereof operating at different levels of a region, community, organization, or institution. Kuo and colleagues (2009) provide a good example of a health impact assessment of a state law. Using published and unpublished data to model consumer response to point-of-purchase calorie postings at large chain restaurants, the authors quantified the potential impact of California’s menu labeling law on population weight gain in Los Angeles County.
Additional evidence of interest for “What” questions can be obtained by study designs that examine multiple pathways to outcomes (various causal mechanisms with direct and indirect effects), ripple effects (effects of an intervention on secondary outcomes that are linked to the main outcomes of interest), and unintended consequences (either positive or adverse effects that can be attributed to the intervention). For example, Schwartz and colleagues (2009) surveyed school students in Connecticut before and after low-nutrition snacks were removed from their schools to address concerns about potential unintended adverse effects (e.g., compensatory eating); these effects were not found after the intervention was implemented. A similar finding was reported from Arkansas based on surveys conducted after the statewide body mass index (BMI) screening and related school-based obesity prevention policies were implemented (Thompson and Card-Higginson, 2009) (see the discussion of this initiative later in this chapter).