• Instrumentation—A change in the measuring instrument could lead to increases or decreases in the level of the outcome variable in the absence of a treatment effect. Sometimes these changes can be quite subtle, as when a well-validated instrument measures different constructs in children of different ages (e.g., standardized math tests).

  • Testing—Simply taking a test, or knowing which specific dimensions of one’s behavior are being monitored, could lead to increases or decreases in scores in the absence of a treatment effect.

  • Statistical regression—Participants could be selected for treatment because their scores are extreme relative to the mean of their group or because treatment is initiated at particular crisis periods in their lives (e.g., a period of binge eating during the holidays). Unreliability in measurement or the passage of the crisis with time could lead to scores that are closer to the mean upon retesting in the absence of a treatment effect. Again, these effects can be quite subtle (see Campbell and Kenny, 1999, for a review); the sketch following this list illustrates the artifact.
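
This regression artifact can be made concrete with a small simulation. The following sketch is illustrative only: the sample size, score scale, and measurement reliability are assumptions rather than values from any study cited here. It selects participants whose noisy pretest scores fall in the top decile and shows their retest scores drifting back toward the group mean even though no treatment is ever applied.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 100_000
    true_score = rng.normal(50, 10, n)           # stable underlying trait
    pretest = true_score + rng.normal(0, 5, n)   # unreliable measurement
    posttest = true_score + rng.normal(0, 5, n)  # retest; no treatment given

    # Select the "extreme" group on the pretest, as a crisis-driven
    # referral process might.
    extreme = pretest >= np.quantile(pretest, 0.90)

    print(f"extreme group pretest mean:  {pretest[extreme].mean():.1f}")
    print(f"extreme group posttest mean: {posttest[extreme].mean():.1f}")
    print(f"full group mean:             {pretest.mean():.1f}")
    # The extreme group's posttest mean falls between its pretest mean and
    # the full-group mean: an apparent change with no treatment effect.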

Any pretest–posttest design may be subject to plausible versions of one or more of these threats to its level of certainty (internal validity), depending on the specific research context, making it very difficult to conclude that a causal effect of the treatment has occurred. Once again, following Campbell’s tradition, the strategy of adding design elements to address specific plausible threats can help provide more confidence that the treatment, rather than other confounding factors, has had the desired effect. For example, replicating the pretest–posttest design at different times with different cohorts of individuals can help rule out history effects, while taking several pretest measures to estimate the maturation trend in the absence of treatment can help rule out maturation (sketched below). Box 8-4 in Chapter 8 identifies many of the design elements that can be employed; Shadish and Cook (1999) and Shadish and colleagues (2002) present fuller discussions. The design-element approach does not provide the certainty about causal inference that an RCT does, but it can often greatly improve the evidence base on which decision makers rely when choosing whether to implement interventions.
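
The maturation check mentioned above can be sketched in the same spirit. The scores, measurement occasions, and straight-line trend in this example are all hypothetical; the point is the logic: fit the maturation trend to the pretest series alone, extrapolate it to the posttest occasion, and treat the gap between the observed posttest and that extrapolation as the candidate treatment effect.

    import numpy as np

    # Hypothetical measurement occasions (months) and scores; the
    # treatment is introduced between month 6 and month 8.
    pre_t = np.array([0.0, 2.0, 4.0, 6.0])
    pre_y = np.array([20.1, 21.0, 21.8, 22.9])  # pretest series
    post_t, post_y = 8.0, 27.5                  # single posttest

    # Estimate the maturation trend from the pretests alone
    # (ordinary least squares fit of a straight line).
    slope, intercept = np.polyfit(pre_t, pre_y, 1)
    expected = slope * post_t + intercept  # counterfactual: maturation only

    print(f"maturation trend: {slope:.2f} points/month")
    print(f"expected posttest under maturation alone: {expected:.1f}")
    print(f"observed posttest: {post_y}")
    print(f"candidate treatment effect: {post_y - expected:.1f}")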

Economic Cost Analysis

Studies that assess the economic costs of obesity can differ in terms of their breadth and perspective. Differences in breadth will be reflected in choices of the population(s) covered (e.g., defined by age, gender, race/ethnicity, socioeconomic status), the range of diseases considered, and the types of costs to include. These decisions will be driven by the perspective taken in the study, as well as by available data.

One key distinction is the methodological approach employed. In conducting economic cost studies, researchers choose between a “prevalence-based” approach and an “incidence-based” approach (Lightwood et al., 2000). The prevalence-based approach (also referred to as an “annual cost” or “cross-sectional” approach) estimates the costs attributable to all existing cases of a condition in a given year; the incidence-based approach instead estimates the lifetime costs attributable to new cases arising in a given year.
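
The arithmetic behind the two approaches can be sketched with hypothetical figures; every number below is invented for illustration, and none comes from Lightwood et al. (2000) or any other source cited here. A prevalence-based estimate multiplies the number of existing cases in a year by an annual excess cost per case, while an incidence-based estimate multiplies the number of new cases by the discounted lifetime excess cost each new case is expected to generate.

    # Prevalence-based (annual, cross-sectional): costs attributable to
    # all existing cases in a single year. All inputs are hypothetical.
    prevalent_cases = 1_200_000
    annual_excess_cost_per_case = 1_500.0  # dollars per case per year
    prevalence_based_cost = prevalent_cases * annual_excess_cost_per_case

    # Incidence-based (lifetime): discounted stream of excess costs for
    # each new case arising this year.
    incident_cases = 90_000
    discount_rate = 0.03
    years_of_excess_cost = 40
    lifetime_cost_per_case = sum(
        annual_excess_cost_per_case / (1 + discount_rate) ** t
        for t in range(years_of_excess_cost)
    )
    incidence_based_cost = incident_cases * lifetime_cost_per_case

    print(f"prevalence-based annual cost: ${prevalence_based_cost:,.0f}")
    print(f"incidence-based cost of this year's new cases: "
          f"${incidence_based_cost:,.0f}")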


