Community Perspectives on Obesity Prevention in Children: Workshop Summaries
permit cross-program analysis. One participant pointed out the significant investments being made by communities in developing their own evaluation tools and assessment instruments, and suggested that the development of common measures would not only save money, but also help consolidate the experience of communities in complementary areas. To date, that kind of exchange has been difficult to achieve.
On the other hand, it was noted that individual obesity prevention programs use different evaluation measures for good reasons. An important benefit of community-based programming is that it can be targeted to a local population and environment, and evaluation measures need to be tailored to this context. Participants raised concerns about applying a one-size-fits-all measure to very different and often diverse communities. Even within the same community, programs serve different needs and answer to different policies, initiatives, and funders, so a variety of measures may be necessary. Some participants suggested that the lack of common measures is a consequence of the multiple factors involved in obesity, including the community environment, physical activity and the availability of safe areas for play, eating behaviors, and access to healthful foods. This multifactorial nature of obesity necessitates multiple interventions within a community, which in turn may require multiple evaluation approaches.

A further challenge to the development and use of common measures arises in multisite studies, where the cross-site comparisons made possible by such measures may be divisive, creating an atmosphere of competition rather than collaboration. Another issue is how to compare two programs with the same measures when the communities involved started from different baselines. For example, community A may already have obesity prevention policies and strategies in place at the start of an intervention, while community B has none. If both communities are evaluated using the same measures, how does the evaluator account for the fact that community B has made fewer gains in obesity prevention because it started from a different point? The end result may be that communities with the greatest need are evaluated as less successful than their counterparts, and their programs are therefore discontinued.
To address this tension between the need for common measures and the pitfalls of using them, several evaluators suggested developing such measures as a starting point, while giving evaluators the flexibility to incorporate additional factors that tailor the evaluation to the specific program, population, and community environment. It was further suggested that a working group of experts in the field and community-based evaluators, perhaps supported by regular workshops and seminars, could bring a range of evaluator perspectives and experiences into the development of common measures.