Progress in Preventing Childhood Obesity: How Do We Measure Up?

2 Framework for Evaluating Progress

The nation is in the midst of initiating changes in policies and actions that are intended to combat the childhood obesity epidemic across many sectors, including government, relevant private-sector industries, communities, schools, work sites, families, and the health care sector. Active evaluation of these efforts is needed. The Health in the Balance report (IOM, 2005) noted that

    As childhood obesity is a serious public health problem calling for immediate reductions in obesity prevalence and in its health and social consequences, the committee strongly believed that actions should be based on the best available evidence—as opposed to waiting for the best possible evidence (p. 111).

The challenge presented in this report is to take the next step toward developing a robust evidence base of effective obesity prevention interventions and practices. Evaluation is central to identifying and disseminating effective initiatives—whether they are national or local programs or large-scale or small-scale efforts. Once effective interventions are identified, they can be replicated or adapted to specific contexts[1] and circumstances, scaled up, and widely disseminated (IOM, 2005).

This chapter discusses the challenges and opportunities for evaluating childhood obesity prevention efforts. Key questions and principles designed to direct and guide evaluation efforts are presented. Furthermore, the committee introduces an evaluation framework that can be used by multiple stakeholders to identify the necessary resources and inputs, strategies and actions, and range of outcomes that are important for assessing progress toward childhood obesity prevention. Subsequent chapters provide specific examples to illustrate the use of the framework in conducting program evaluations in a variety of settings. The chapter concludes with the committee’s recommendations that establish the foundation for the implementation actions discussed in subsequent chapters of the report.

[1] In this report, context refers to the set of factors or circumstances that surround a situation or event and give meaning to its interpretation.

OVERVIEW OF EVALUATION

Evaluation is an important component of public health interventions because it helps decision makers make informed judgments about the effectiveness, progress, or impact of a planned program. The committee defines evaluation as the systematic assessment of the quality and effectiveness of a policy,[2] program,[3] initiative, or other action to prevent childhood obesity. It is an effort to determine whether and how an intervention meets its intended goals and outcomes. Evaluations produce information or evidence that can be used to improve a policy, a program, or an initiative in its original setting; refine those that need restructuring and adaptation to different contexts; and revamp or discontinue those found to be ineffective. Evaluation fosters collective learning, supports accountability, reduces uncertainty, guides improvements and innovations in policies and programs, may stimulate advocacy, and helps to leverage change in society. Many types of evaluations can contribute to the knowledge base by identifying promising practices and helping to establish causal relationships between interventions and various types of indicators and outcomes.
Evaluations can also enhance understanding of the intrinsic quality of the intervention and of the critical context in which factors can moderate[4] or mediate[5] the intervention’s effects in particular ways. Evaluations are needed to demonstrate how well different indicators predict short-term, intermediate-term, and long-term outcomes. An indicator (or set of indicators) helps provide an understanding of the current effect of an intervention, future prospects from the use of the intervention, and how far the intervention is from achieving a goal or an objective. Indicators are used to assess whether progress has been made toward achieving specific outcomes. An outcome is the extent of change in targeted policies, institutions, environments, knowledge, attitudes, values, dietary and physical activity behaviors, and other conditions between the baseline measurement and measurements at subsequent points over time.

[2] Policy is used to refer to a written plan or a stated course of action taken by government, businesses, communities, or institutions that is intended to influence and guide present and future decisions.

[3] Program is used to refer to what is being evaluated and is defined as “an integrated set of planned strategies and activities that support clearly stated goals and objectives that lead to desirable changes and improvements in the well-being of people, institutions, environments, or all of these factors.” See the glossary in Appendix B for additional definitions.

[4] A moderator is a variable that changes the impact of one variable on another.

[5] A mediator is the mechanism by which one variable affects another variable.

Evaluations can range in scope and complexity from comparisons of pre- and postintervention counts of the number of individuals participating in a program to methodologically sophisticated evaluations with comparison groups and rigorous research designs. All types of evaluations can make an important contribution to the evidence used as the basis on which policies, programs, and interventions are designed. A major purpose of this Institute of Medicine (IOM) report is to encourage evaluation of all childhood obesity prevention interventions and to demonstrate its value.

The committee strongly encourages stakeholders responsible for childhood obesity prevention policies, programs, and initiatives to view evaluation as an essential component of the program planning and implementation process rather than as an optional activity. If an effort is considered valuable enough to invest the time, energy, and resources of a group or organization, then it is also worthy of the investment necessary to carefully document its success. The committee emphasizes the need for a collective commitment to evaluation by those responsible for funding, planning, implementing, and monitoring obesity prevention efforts. Evaluation is the critical step in the identification of both successful and ineffective policies and interventions, thus allowing resources to be invested in the most effective manner.
Because sufficient outcome data are not yet available in most cases to evaluate the efficacy, effectiveness, sustainability, scaling up, and systemwide implementation of policy and programmatic interventions, the committee uses the term promising practices in this report to refer to efforts that are likely to reduce childhood obesity and that have been reasonably well evaluated but that lack sufficient evidence to directly link the effort with reducing the incidence or prevalence of childhood obesity and related comorbidities. They are not characterized as best practices, as they have not yet been fully evaluated. Furthermore, the term best practices has inherent limitations in its conceptualization and application to health promotion and health behavior research. Green (2001) suggests that clinical interventions are typically implemented in settings with a great deal of control over the dose, context, and circumstances. The expectation that health promotion research will produce interventions that can be identified as best practices in the same way that medical research has done with efficacy trials should be replaced with the concept of identifying the most appropriate interventions for the setting and population. Thus, best practices resulting from health promotion research focus on effective processes for implementing action and achieving positive change. These may include effective ways of engaging communities; assessing the needs and circumstances of communities and populations; assessing resources and planning programs; or connecting needs, resources, and circumstances with appropriate interventions (Green, 2001).

As described throughout this report, childhood obesity prevention efforts involve a variety of different interventions and policy changes occurring in multiple settings (e.g., in the home, school, community, and media). This “portfolio approach” to health promotion planning may be compared with financial investment in a diversified portfolio of short-term, intermediate-term, and long-term investments with different levels of risk and reward. This type of approach encourages the classification of obesity prevention interventions on the basis of their estimated population impact and the level of promise or evidence-based certainty around these estimates (Gill et al., 2005; Swinburn et al., 2005).

Evaluations are conducted for multiple stakeholders, and the findings are broadly shared and disseminated (Guba and Lincoln, 1989). These audiences include policy makers, funders, and other elected and appointed decision makers; program developers and administrators; program managers and staff; and program participants, their families, and communities. Moreover, these diverse evaluation audiences tend to value evaluations for different reasons (Greene, 2000) (Table 2-1).
TABLE 2-1 Purposes of Evaluation for Different Audiences

Purpose: Inform decision-making and provide accountability.
Audience: Policy makers, funders, and other elected and appointed decision makers.

Purpose: Understand how a program or policy worked as implemented in particular contexts and the relative contribution of each component, to improve the intervention for replication, expansion, or dissemination or to advance scientific knowledge.
Audience: Program developers, researchers, and administrators.

Purpose: Improve the program, enhance the daily program operations, or contribute to the development of the organization.
Audience: Program managers and staff.

Purpose: Promote social justice and equity in the program.
Audience: Program participants, their families, and communities.

SOURCE: Greene (2000).

Evaluations inform decision-making, provide accountability for policy formulation and reassessment, and enhance understanding of the effectiveness of a program or policy change. Evaluations are also used to improve or enhance programs and promote the principles of social justice and equity in the program. Encouraging the dissemination of evaluation results to a broad audience is an important element of developing a “policy-shaping community” that serves as a critical constituency for further implementation and evaluation efforts (Cronbach and Associates, 1980).

Evaluation should also be conducted with appropriate respect for diverse cultural practices, traditions, values, and beliefs (Pyramid Communications, 2003; Thompson-Robinson et al., 2004; WHO, 1998). As discussed further below and in Chapter 3, it is important in conducting evaluations of childhood obesity prevention actions and programs to be particularly attentive to existing health and economic disparities, as reflected in the types of evaluation questions asked and the criteria used to make judgments about program quality and effectiveness.

Different types of evaluations (e.g., formative, process, and outcome evaluations; Box 2-1) are conducted depending on the stage of the intervention and the purpose of the evaluation. In addition, an impact evaluation may be conducted to examine effects that extend beyond specific health outcomes, which may include economic savings and benefits, cost-utility, and improved quality of life (CGD, 2006). Large-scale interventions often build in multiple evaluations from the outset of the project so that, at each step of development and implementation, data are collected and analyzed to assess the best use of resources and to make refinements as needed.
Evaluations provide data that are interpreted to generate judgments about the quality and the impact of the program experience for its participants and about whether the planned and desired outcomes have been achieved. These judgments often rest on established standards and criteria about educational quality and nutritional, dietary, or physical activity requirements, among other criteria. Too often an evaluation is focused on a narrow set of objectives or criteria, and the broader policy or program goals may not be adequately considered. Additionally, stakeholders may vary in their judgments about how much improvement is sufficient for a program to be viewed as high quality and effective (Shadish, 2006).

A comprehensive review of childhood obesity prevention interventions examined a variety of selection criteria, including methodological quality, outcome measures, robustness in generalizability, and adherence to the principles of population health (e.g., assessments of the upstream determinants of health, multiple levels of intervention, multiple areas of action, and the use of participatory approaches). Of the 13,000 recently reviewed programs that promote a healthy weight in children, only 500 provided adequate information about their implementation that could be used to identify promising practices for childhood obesity prevention based on the chosen criteria (Flynn et al., 2006).

BOX 2-1
Types of Program Evaluations

The following are different types of evaluations that are conducted. A large-scale and more complex or sophisticated evaluation may include all of these types and assess a variety of outcomes, as well as explain how they were achieved.

Formative Evaluation: A method of assessing the value of a program and shaping the features of an intervention at an early stage of development, before the program or intervention is implemented. A formative evaluation focuses on issues such as understanding how a program works, whether a target audience understands messages, or whether it is feasible to implement an intervention in a new setting or context.

Process Evaluation: A means of assessing strategies and actions to reveal insights about the extent to which implementation is being carried out in accordance with expected standards and the extent to which a given action or strategy is working as planned.

Outcome Evaluation: An approach for assessing whether or not anticipated changes or differences occur as a result of an intervention. This type of evaluation assesses the extent of change in targeted attitudes, values, behaviors, policies, environments, or conditions between baseline measurement and subsequent points of measurement over time.

The committee has identified several relevant criteria that can be used to judge the design and quality of interventions and encourages funders and program planners to consider the following actions:

- Include diverse perspectives (House and Howe, 1999) and attend to the subpopulations in the greatest need of prevention actions—particularly underserved, low-income, and high-risk populations that experience health disparities;
- Use relevant empirical evidence related to the specific context when designing and implementing an intervention;
- Connect program efforts with the efforts of similar or potentially synergistic programs, including a concerted effort to develop cross-sectoral connections and sustained collaborations; and
- Link interventions that aim to produce structural, environmental, and behavioral changes in individuals and populations relevant to childhood obesity prevention.

The committee developed six overriding principles to guide the approach to program evaluation. First, evaluations of all types—no matter the scale or the level of complexity—can contribute to a better understanding of effective actions and strategies. Localized and small-scale obesity prevention efforts can be considered pilot projects, and their evaluation can be modest in scope.

Second, defensible and useful evaluations require adequate and sustained resources and should be a required component of budget allocations for obesity prevention efforts—for both small local projects and large extensive projects. The scope and scale of evaluation efforts should be appropriately matched to the obesity prevention action.

Third, evaluation is valuable in all sectors of obesity prevention actions. It is important to recognize that effective action for obesity prevention will not be achieved by a single intervention. However, an intervention or a set of interventions that produces a modest or preliminary change may contribute importantly to a larger program or effort. Multisectoral evaluations that assess the combined power of multiple actions can be especially valuable for informing what might work in other settings.

Fourth, evaluation is valuable at all phases of childhood obesity prevention actions, including program development, program implementation, and assessment of a wide range of outcomes.
In particular, evaluation can contribute to an improved understanding of the effects of different types of strategies and actions—leadership actions, augmented economic and human resources, partnerships, and coalitions—on short-term, intermediate-term, and long-term outcomes.

Fifth, useful evaluations are contextually relevant, are culturally responsive, and make use of the full repertoire of methodologies and methods (Chapter 3). Evaluations may need to be modified, depending on how programs evolve, the evidence collected, and shifts in stakeholder interests.

Sixth, evaluation should be a fundamental component of meaningful and effective social change, with stakeholders engaging in a range of dissemination and information-sharing activities through diverse communications channels to promote the use and scaling up of effective policies and interventions. Because evaluation offers opportunities for collective learning and accountability, widespread dissemination of evaluation findings can lead to policy refinements, program improvements, community advocacy, and the strategic redirection of investments in human and financial resources.

EVALUATION CAPACITY

Insights obtained during the committee’s three regional symposia suggest that there is a substantial gap between the opportunity for state and local agencies and organizations to implement obesity prevention activities and programs and their capacity to evaluate them (Figure 2-1). It was not surprising to find that at the community level, where the great majority of obesity prevention strategies are expected to be carried out, the capacity for conducting comprehensive program evaluations is limited. Research conducted in academic settings is the principal source of in-depth scientific evidence for specific intervention strategies. Existing public-sector surveillance systems and special surveys serve as critical components for the ongoing monitoring and tracking of a wide range of childhood obesity-related indicators. Although more comprehensive evaluations are needed and surveillance systems need to be expanded or enhanced, especially for the monitoring of policy, system, and environmental changes, the gap between the opportunity for evaluations and the capacity to conduct evaluations at the local level appears to be a significant impediment to the identification and widespread adoption of effective childhood obesity prevention programs.

Three strategies might be helpful in addressing this opportunity-capacity evaluation gap. First, and most important, local program managers should be encouraged to conduct, for every activity and program, an evaluation of a reasonable scale that is commensurate with existing local resources. The evaluation should be sufficient to determine whether the program was implemented as intended and to what extent the expected changes actually occurred.
For most programs for which strategies and desired outcomes are adequately described, a careful assessment of how well those strategies are carried out (also called fidelity) and modest assessments of outcomes after the program is implemented, compared with the situation at baseline, are sufficient. In these contexts, obtaining baseline measures at the outset of a program is critical. As noted above, every program deserves an evaluation, but not every intervention program needs to, or has the capacity to, undertake a full-scale and comprehensive evaluation. Second, government and academic centers can increase the amount of guidance and technical assistance concerning intervention evaluations that they provide to local agencies (Chapter 4). Third, government and academic agencies and centers conducting comprehensive evaluations can more quickly identify activities and programs that deserve more extensive evaluation if they communicate frequently with local agencies and each other about their interventions.

FIGURE 2-1 Closing the evaluation gap at the local level.

The two-way arrows highlighted in Figure 2-1 symbolize the dual benefit that is likely to result when the academic and governmental sectors partner with local programs to enhance evaluation capacity at the local level. The arrow tips marked “A” connote the delivery of local-level evaluation capacity building through the planned efforts of the academic and the governmental sectors. The arrow tips marked “B” reflect the opportunities for those in the academic and governmental sectors to work with and expand upon local pilot programs that show promise for attaining measurable health benefits and merit consideration for diffusion and replication. Although it may be unrealistic to expect local-level program personnel to have the capacity to conduct full-scale comprehensive evaluations, it is not at all unreasonable for local-level programs to have in place practical mechanisms that will enable them to detect, record, and report on reasonable indicators of the progress and the impact of a program. Issues and examples related to who will pay for evaluation efforts and the roles of government and foundations are discussed throughout the report.

Training opportunities to enhance the ability of stakeholders to conduct evaluations are needed. As indicated above, evaluation is often viewed as primarily being within the purview of foundations, government, and academic institutions. Evaluation is, however, a basic function and integral element of public health programs, and the core competencies related to conducting community evaluations should be widely disseminated to staff members of nonprofit organizations, schools, preschools, after-school programs, faith-based organizations, child-care programs, and many others.
The full utilization of the expertise of academic institutions, foundations, and public health departments in partnership with community and school groups will provide the knowledge base for well-designed evaluation strategies. Tools such as distance learning can further support the dissemination of this information. As discussed in Chapter 4, the Centers for Disease Control and Prevention (CDC)’s Nutrition and Physical Activity Program to Prevent Obesity and Other Chronic Diseases is focused on state capacity building, implementation, and enhanced training opportunities. Several practitioner-focused training programs have been developed through the CDC Prevention Research Centers (Chapters 4 and 6). Further, evaluation training for teachers and school staff can be included as a component of school wellness plans, providing another opportunity to enhance evaluation capacity.

EVALUATION FRAMEWORK

Children and youth live in environments that are substantially different from those of a few decades ago. Many environmental factors substantially increase their risk for obesity. Efforts to evaluate obesity prevention programs should take into account the interconnected factors that shape the fabric of the daily lives of children and youth. Experienced evaluators have long acknowledged the importance of identifying and understanding the key contextual factors (e.g., the environmental, cultural, normative, and behavioral factors) that influence the potential impact of an intervention (Tucker et al., 2006).

The evaluation framework that the committee developed offers a depiction of the resources, strategies and actions, and outcomes that are important to childhood obesity prevention. All are amenable to documentation, measurement, and evaluation (Figure 2-2). The evaluation framework also illustrates the range of important inputs and outcomes while giving careful consideration to the following factors:

- The interconnections and quality of interactions within and among the multiple sectors involved in childhood obesity prevention initiatives;
- The adequacy of support and resources for policies and programs;
- The contextual appropriateness, relevance, and potential power of the planned policy, intervention, or action;
- The relevance of multiple levels and types of outcomes (e.g., structural, institutional, systemic, environmental, and behavioral outcomes for individuals and the population, as well as health outcomes);
- The potential impact of interventions on adverse or unanticipated outcomes, such as stigmatization or eating disorders (Doak et al., 2006); and
- The indicators used to assess progress toward each outcome; selection of the best indicators will depend on the purpose for which they are intended (Habicht and Pelletier, 1990; Hancock et al., 1999) and the resources available to program staff to collect, analyze, and interpret relevant data.
CDC has developed three guides for evaluating public health and other programs relevant to obesity prevention: Framework for Program Evaluation in Public Health (CDC, 1999), Introduction to Program Evaluation for Public Health Programs (CDC, 2005a), and Physical Activity Evaluation Handbook (DHHS, 2002). The guides offer six steps for evaluating programs: (1) engage stakeholders, (2) describe the plan or program, (3) focus the evaluation design, (4) gather credible evidence, (5) justify conclusions, and (6) share lessons learned. Other important elements for program development and evaluation emphasized by the guides include the documentation of alliances, partnerships, and collaborations with those in other sectors; the establishment of program goals and objectives; the assessment of the available human and economic resources; and the selection of specific
Attributing Causation and Effects to Interventions

Because of the numerous and intertwined determinants of changes in dietary intake and physical activity, it is often difficult for a single intervention, especially if it is modest in scope, to have a measurable impact (Swinburn et al., 2005). In addition, the impact of targeted programmatic interventions is difficult to determine when other, often broader population-level interventions, such as media campaigns or increases in opportunities for physical activity in the community, are going on at the same time. The effectiveness of targeted programmatic interventions may also be obscured by gaps or barriers elsewhere in the chain. For example, classroom instruction on the value of physical activity, even if it is effective, may be irrelevant to children or youth who do not have safe places to be active outside or who are more strongly attracted to sedentary pursuits, such as watching television shows or DVDs, playing video games, or using the Internet.

Furthermore, behavioral improvements resulting from an intervention in one setting may be offset by compensation or regression in other settings. For example, an increase in activity level during a physical education class may be counterbalanced by a change in after-school activities that increases the time that a child or adolescent spends in sedentary activities, such as more recreational screen time. Similarly, reducing calorie intake with a more nutritious lunch purchased in the school cafeteria may be offset by the increased consumption of high-calorie snacks after school. Finally, interventions addressing systemic and environmental precursors of childhood obesity and other factors early in the causal chain cannot be expected to demonstrate changes in the prevalence of childhood obesity in the short term.
The difficulties inherent in assessing the contribution, if any, of a single intervention, plus the multiplicity of interventions currently being implemented at the individual, family, community, state, and national levels, elevate the need for summary evaluation methods. Thus, in addition to evaluations of specific policies and programs, there is a need for population-wide assessment that examines the overall progress of childhood obesity prevention. Surveillance and monitoring will provide the data needed for these types of assessments.

Measuring Dietary Patterns and Physical Activity Behaviors

Current methods of measuring dietary patterns and activity behaviors are insufficiently precise to accurately detect the subtle changes in energy balance that can influence body weight (IOM, 2005, 2006; NRC, 2005). The difficulties lie in measuring the energy involved in a specific exposure (e.g., the number of calories consumed at lunch or expended in physical education class) and the full range of places, times, and ways in which energy expenditure occurs (e.g., at home or school, during or after school, or during free time or scheduled activities). All currently available measures of dietary and physical activity behaviors capture only a portion of daily behaviors and do so, at best, with only modest accuracy.

Similar problems exist for the measurement of the determinants of excess energy consumption or insufficient energy expenditure. For example, accurate methods of measuring access to fruits, vegetables, and other low-calorie, high-nutrient foods and beverages or to places for engaging in physical activity are still being developed. In addition, the specific characteristics of the built environment that are instrumental in making foods and beverages that contribute to a healthful diet accessible, or that encourage play and physical activity on a regular basis, are not known. Therefore, the evaluation tools, indicators, or performance measures[10] that are available may lack sufficient specificity (i.e., precision) or sensitivity (i.e., the ability to measure incremental change at appropriate levels) to relate specific behaviors to specific outcomes. The task of evaluation will be greatly facilitated by research on and the development of more accurate measurement tools, indicators, and performance measures.

CDC is in an early phase of developing the Obesity Prevention Indicators Project, which will identify and select potential indicators and performance measures for the evaluation of obesity prevention programs and interventions.
The project proposes to provide a forum for the sharing of information among funding agencies about current and future strategies and initiatives on program funding, monitoring, and evaluation; the development of criteria for the identification and selection of common indicators that can be shared across national-, state-, and community-level programs; and the summary and dissemination of selected indicators for program evaluation by intervention setting and according to the recommendations presented in Healthy People 2010 and the Health in the Balance report (IOM, 2005) (Laura Kettel-Khan, CDC, personal communication, May 27, 2006).

10 A performance measure links the use of resources with health improvements and the accountability of programs or partners. Performance measures are used to ensure the efficient and effective use of resources, especially financial resources.

Developing Interventions

Interventions pertaining to the structural and systemic causes of childhood obesity, such as those focused on overcoming the paucity of public parks and playgrounds in high-risk neighborhoods or providing easy access
to fresh fruits and vegetables, are recognized as important contributors to the prevention of childhood obesity. However, there is limited empirical evidence to guide their development or their evaluation. Innovative approaches to evaluation design are needed that measure the relative impact of multiple changes to the built environment on a population's behaviors; for example, methods that assess the collective impact of sidewalks, walking trails, and public parks on physical activity levels (TRB and IOM, 2005). Analyses of the contribution of community change to population health outcomes in other areas of public health, such as automobile and highway safety, may offer insight into methods for assessing the contribution of environmental changes (e.g., amount, intensity, duration, and level of exposure) to long-term population-level outcomes. Modifications of highways, intersections, and pedestrian crossings, in conjunction with changes in the use of seatbelts, child safety seats, and other interventions, have been extensively evaluated (Economos et al., 2001; NTSB, 2005).

Translating and Transferring Findings to Diverse Settings and Populations

The social and cultural diversity within the United States precludes assumptions about the transferability of interventions from one subsector of the population to another. A program that succeeds in Oakland, California, may not do well in Birmingham, Alabama. This does not mean that interventions are not transferable; despite the cultural and racial/ethnic diversity of the U.S. population, Americans share many common characteristics. It does mean, however, that transferability is not assured and should be assessed. As new information and evidence are generated, it will be important to ensure that they are promptly incorporated into ongoing interventions (Chapter 3).
Table 2-3 describes three main areas (knowledge generation, knowledge exchange, and knowledge uptake) and 12 interdependent elements of evidence-based policymaking (Choi, 2005). The knowledge generation, exchange, and uptake sequence can be used to generate, translate, and transfer or adapt the results of obesity prevention research and program evaluations to present promising practices to different target audiences.

TABLE 2-3 Three Areas and 12 Components of Evidence-Based Policymaking

Knowledge Generation      Knowledge Exchange        Knowledge Uptake
Credible design           Relevant content          Accessible information
Accurate data             Appropriate translation   Readable message
Sound analysis            Timely dissemination      Motivated user
Comprehensive synthesis   Modulated release         Rewarding outcome

SOURCE: Choi (2005).

Surveillance and Monitoring: Data Sources and Measurement Tools

Surveillance and monitoring activities generally do not provide an adequate evaluation of any single intervention effort. They do, however, provide an essential assessment of the progress of the overall effort to prevent childhood obesity. Current surveillance systems are primarily designed to monitor the health and behavioral components of the obesity epidemic. Surveillance systems that monitor the precursors of changes in dietary and physical activity behaviors, such as policy change or alterations in the built environment, need to be expanded or developed.

A variety of surveillance systems are useful sources of data, and several types of measurement tools can be used to monitor and evaluate childhood obesity prevention policies and interventions. Appendix C provides a detailed summary of many available data sources and outcome indicators that may be used, with the evaluation framework, to assess progress in different sectors. A brief summary is provided below.

Several organizations monitor policies as well as proposed or enacted state legislation related to obesity prevention (Appendix C; Chapters 4 and 7).
Examples include CDC's Nutrition and Physical Activity Legislative Database (CDC, 2005b), the National Conference of State Legislatures' summary of childhood obesity policy options (NCSL, 2006), the Trust for America's Health annual report of federal and state policies and legislation (TFAH, 2004, 2005), and NetScan's Health Policy Tracking Service for state legislation on school nutrition and physical activity (NetScan, 2005). However, there is great variability within and across these legislative databases. The committee noted that a central database or information repository that tracks obesity-related legislation and that can be periodically updated is needed.

Several national and state cross-sectional and longitudinal surveys provide indicators and outcomes related to childhood obesity prevention surveillance and monitoring (Appendixes C and D). Different types of proprietary data sources could also inform the monitoring and evaluation of childhood obesity prevention policies and interventions. However, because proprietary data are collected for commercial purposes by private companies, either the data are not publicly available or the cost of obtaining them is prohibitive (IOM, 2006). Nevertheless, marketing research data about children and adolescents, trends in the marketing of consumer food and beverage products, and food retailer supermarket scanner point-of-sale data (IOM, 2006; NRC and IOM, 2004) (Chapter 5) can be purchased from marketing and media companies and analyzed, and the results can be published in peer-reviewed publications. Moreover, because marketing data become less commercially useful over time, older data may be donated for public use.

A comprehensive inventory of the available databases and measurement tools is beyond the scope of this study. However, programs may use different tools in various settings to obtain results about indicators and outcomes. These include the BMI report card and the FITNESSGRAM®/ACTIVITYGRAM® used in the school setting (Chapter 7); health impact assessments used for environmental changes (Chapter 6); mobilizing action through community planning and partnerships (Chapter 6); and system dynamics simulation modeling (Homer and Hirsch, 2006). This type of modeling can be used to understand a variety of factors: obesity trends in the United States, the types of interventions needed to alter those trends, the subpopulations that should be targeted by specific interventions, and the length of time needed for actions to generate effects (Laura Kettel-Khan, CDC, personal communication, May 27, 2006).

Support Needed from Research and Surveillance

The evaluation framework presented in this report is offered in direct support of U.S. childhood obesity prevention efforts. As obesity prevention programs, strategies, and actions continue to be initiated around the country, evaluation can play a critical role in furthering our collective understanding of the complex character and contours of the obesity problem and of meaningful and effective ways to address it.
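The system dynamics simulation modeling mentioned earlier (Homer and Hirsch, 2006) can be illustrated with a minimal stock-and-flow sketch: a single stock (the obese fraction of a population) changed by an incidence inflow and a remission outflow, integrated with Euler steps. The rates below are invented placeholders, not empirical estimates; a real public health model would include many more stocks, age cohorts, and feedback loops.

```python
# Minimal system dynamics sketch: one stock (obese fraction of a population)
# with an inflow (incidence among the non-obese) and an outflow (remission).
# All rates are hypothetical placeholders chosen only to show the mechanics.

def simulate_prevalence(p0: float, incidence: float, remission: float,
                        years: int, dt: float = 0.1) -> float:
    """Euler-integrate dP/dt = incidence*(1 - P) - remission*P; return final P."""
    p = p0
    for _ in range(int(years / dt)):
        p += dt * (incidence * (1 - p) - remission * p)
    return p

# Compare a baseline scenario with a hypothetical intervention that lowers
# annual incidence; the model shows how long the stock takes to respond.
baseline = simulate_prevalence(p0=0.17, incidence=0.02, remission=0.05, years=20)
with_intervention = simulate_prevalence(p0=0.17, incidence=0.012, remission=0.05, years=20)
print(baseline, with_intervention)  # intervention yields a lower 20-year prevalence
```

Even this toy model makes the chapter's point about time horizons: because the stock adjusts slowly toward its equilibrium, an intervention's effect on prevalence emerges over many years, not months.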
The committee emphasizes that program evaluations of varying scope and size, at all levels and within all sectors, have a vital role to play in addressing the childhood obesity epidemic. Evaluation can help to document progress, advance accountability, and marshal the national will to ensure good health for all of our children and youth. In support of this commitment to evaluation, targeted research is also needed to, first, develop better methods to measure all components of the evaluation framework and to strengthen available data sources so that more complete and accurate information is available for those components (this is especially relevant for methods to accurately assess the eating patterns and physical activity behaviors of children and youth); second, enable ongoing assessment of and research into the complex dynamics of childhood obesity, especially those that cross disciplinary
boundaries such as marketing and community change, policy analysis, or economics and education; and, finally, sustain efforts in this area so that long-term health outcomes can be achieved; these include reducing population BMI levels, reducing the prevalence of obesity and of at-risk obesity, and reducing obesity-related morbidity in children and youth.

The translation of evaluation and research findings into promising and best practices constitutes the primary means of accelerating national efforts to reverse the epidemic of childhood obesity. Because the need for effective evaluation is ongoing, both the capacity and the quality of evaluation will benefit from a national commitment to support obesity prevention research and to rapidly disseminate research findings, across the geographical landscape, to stakeholders involved in prevention efforts in states and communities.

These long-term health outcomes will not be achieved by any single obesity prevention program or action and, consequently, fall outside the boundaries of most program and policy evaluations; the same is true for certain behavioral outcomes. At the same time, vigilant and credible monitoring of these population-level indicators and indices is critical to the overall national plan to prevent childhood obesity and is a responsibility that should be assumed by the health agencies of the government (Chapter 4). Vertically integrated surveillance systems that offer usable data at the local, state, and national levels are particularly important.

Implications of the Issues and Challenges

The comprehensive data needed to inform large and complex evaluations will come, in large part, from more modest assessments. From these evaluations will flow the information required to improve knowledge about which activities and programs should be examined more fully and which should be recommended for more expansive implementation.
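As a concrete note on the BMI-based outcome indicators named above, childhood obesity status is conventionally classified from BMI-for-age percentiles on sex- and age-specific growth charts rather than from BMI alone. The sketch below assumes the percentile has already been looked up (a real implementation would use the CDC LMS reference tables, which are not reproduced here) and applies the 85th/95th-percentile cutoffs in the terminology of this report's era.

```python
# Sketch of how BMI-based outcome indicators are computed for children.
# BMI = weight (kg) / height (m)^2; classification uses BMI-for-age
# percentiles from sex- and age-specific growth charts. The percentile is
# taken as a given input here; real use requires the CDC reference data.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def classify(bmi_for_age_percentile: float) -> str:
    """Cutoffs as used in this era: >=95th percentile 'obese';
    85th to <95th 'at risk' (later relabeled 'overweight')."""
    if bmi_for_age_percentile >= 95:
        return "obese"
    if bmi_for_age_percentile >= 85:
        return "at risk"
    return "not at risk"

print(round(bmi(45.0, 1.50), 1))  # 20.0 for a 45 kg, 1.50 m child
print(classify(96))               # "obese"
```

Population-level indicators such as obesity prevalence and at-risk prevalence are then simply the fractions of surveyed children falling into each category.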
Unfortunately, there has been limited recognition of the value of carefully designed but modest evaluations. This chapter has discussed issues that may be useful in guiding the design and implementation of practical evaluations. The evaluation framework portrays the wide range of strategies and outcomes that can be evaluated and the need for evaluations with various levels of methodological complexity. Much can be learned from more modest assessments that use quantitative and qualitative methods. The four key evaluation questions may also point to innovative approaches to evaluation, such as mixed-method designs (Greene and Caracelli, 1997, 2003; Tashakkori and Teddlie, 2003).
SUMMARY AND RECOMMENDATIONS

A meaningful evaluation of a nationwide childhood obesity prevention effort is possible only through the collective commitment of the stakeholders responsible for planning, implementing, and monitoring prevention actions. The evaluation framework presented in this chapter will assist in identifying relevant contexts and sectors; necessary resources and inputs; effective strategies and actions; and explicit institutional, environmental, behavioral, and health outcomes that signify meaningful changes in obesity-related indicators. Evaluation efforts should focus on the specific characteristics of the contexts being served; the rationale and supporting evidence for a particular action matched to particular contexts; the quality and reach or power of the action implemented; and the difference that the action makes in preventing childhood obesity, especially for those most at risk.

The committee's four recommendations emphasize the need for leadership, evaluation, and dissemination across all relevant sectors: government, industry, communities, schools, and families. Each of these recommendations is further expanded upon in Chapters 4 through 8, in which the committee recommends specific implementation actions that should be taken to ensure the availability of adequate resources and a focus on strengthening childhood obesity prevention efforts and their evaluation. An exception is Chapter 3, which includes only Recommendations 2 and 3 because they explicitly identify the need to account for diverse perspectives when designing culturally relevant interventions that address the special needs of diverse populations and high-risk groups. All the recommendations and implementation steps are summarized in Appendix E. These recommendations are consistent with those of many other reports (CDC, 1999, 2005a; CGD, 2006; DHHS, 2002; WHO, 1998).
They collectively call attention to the urgent need to provide more and better information, through evaluation, to improve people's lives. The recommendations emphasize the importance of dedicating significant resources to the evaluation of interventions. They also advance an evaluation process that meaningfully engages diverse stakeholders in the evaluation design and process and that legitimizes the multiplicity of stakeholder perspectives, notably those of program recipients along with funders, administrators, and professional staff.

Recommendation 1: Government, industry, communities, schools, and families should demonstrate leadership and commitment by mobilizing the resources required to identify, implement, evaluate, and disseminate effective policies and interventions that support childhood obesity prevention goals.
Recommendation 2: Policy makers, program planners, program implementers, and other interested stakeholders, within and across relevant sectors, should evaluate all childhood obesity prevention efforts, strengthen evaluation capacity, and develop quality interventions that take into account diverse perspectives, that use culturally relevant approaches, and that meet the needs of diverse populations and contexts.

Recommendation 3: Government, industry, communities, and schools should expand or develop relevant surveillance and monitoring systems and, as applicable, should engage in research to examine the impact of childhood obesity prevention policies, interventions, and actions on relevant outcomes, paying particular attention to the unique needs of diverse groups and high-risk populations. Additionally, parents and caregivers should monitor changes in their family's food, beverage, and physical activity choices and their progress toward healthier lifestyles.

Recommendation 4: Government, industry, communities, schools, and families should foster information-sharing activities and disseminate evaluation and research findings through diverse communication channels and media to actively promote the use and scaling up of effective childhood obesity prevention policies and interventions.

There will be a greater likelihood of success when public, private, and voluntary organizations purposefully combine their respective resources, strengths, and comparative advantages to ensure a coordinated effort over the long term. Evaluations will contribute to building a strong and diverse evidence base on which promising and best practices can be identified, scaled up, and institutionalized across different settings and sectors.

REFERENCES

Alliance for a Healthier Generation. 2006a. What Is the Healthy Schools Program? [Online]. Available: http://www.healthiergeneration.org/ [accessed May 8, 2006].
Alliance for a Healthier Generation. 2006b. Alliance for a Healthier Generation and Industry Leaders Set Healthy School Beverage Guidelines for U.S. Schools. [Online]. Available: http://www.healthiergeneration.org/beverage.html [accessed May 8, 2006].
Best A, Moor G, Holmes B, Clark PI, Bruce T, Leischow S, Buchholz K, Krajnak J. 2003. Health promotion dissemination and systems thinking: Towards an integrative model. Am J Health Behav 27(Suppl 3):S206–S216.
Bobbitt-Cooke M. 2005. Energizing community health improvement: The promise of microgrants. Prev Chronic Dis [Online]. Available: http://www.cdc.gov/pcd/issues/2005/nov/05_0064.htm [accessed March 4, 2006].
CDC (Centers for Disease Control and Prevention). 1999. Framework for program evaluation in public health. MMWR 48(RR-11):1–42.
CDC. 2005a. Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. [Online]. Available: http://www.cdc.gov/eval/evalguide.pdf [accessed April 13, 2006].
CDC. 2005b. State Legislative Information: Search for Bills. [Online]. Available: http://apps.nccd.cdc.gov/DNPALeg/ [accessed April 24, 2006].
CGD (Center for Global Development). 2006. When Will We Ever Learn? Improving Lives Through Impact Evaluation. Washington, DC: CGD. [Online]. Available: http://www.cgdev.org/content/publications/detail/7973 [accessed June 6, 2006].
Choi BCK. 2005. Twelve essentials of science-based policy. Prev Chronic Dis [Online]. Available: http://www.cdc.gov/PCD/issues/2005/oct/05_0005.htm [accessed March 4, 2006].
Cronbach LJ and Associates. 1980. Toward Reform of Program Evaluation. San Francisco, CA: Jossey-Bass.
DHHS (U.S. Department of Health and Human Services). 2002. Physical Activity Evaluation Handbook. Atlanta, GA: CDC, DHHS. [Online]. Available: http://www.cdc.gov/nccdphp/dnpa/physical/handbook/pdf/handbook.pdf [accessed June 1, 2006].
Doak CM, Visscher TL, Renders CM, Seidell JC. 2006. The prevention of overweight and obesity in children and adolescents: A review of interventions and programmes. Obes Rev 7(1):111–136.
Economos CD, Brownson RC, DeAngelis MA, Foerster SB, Foreman CT, Gregson J, Kumanyika SK, Pate RR. 2001. What lessons have been learned from other attempts to guide social change? Nutr Rev 59(3):S40–S56.
Fawcett SB, Sterling TD, Paine-Andrews A, Harris KJ, Francisco VT, Richter KP, Lewis RK, Schmid TL. 1995. Evaluating Community Efforts to Prevent Cardiovascular Disease. Atlanta, GA: National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention.
Fawcett SB, Schultz J, Carson V, Renault V, Francisco V. 2002.
Using Internet-based tools to build capacity for community-based participatory research and other efforts to promote community health and development. In: Minkler M, Wallerstein N, eds. Community Based Participatory Research for Health. San Francisco, CA: Jossey-Bass. Pp. 155–178.
FHWA/DoT (Federal Highway Administration/Department of Transportation). 2006. Safe Routes to Schools. [Online]. Available: http://safety.fhwa.dot.gov/saferoutes/ [accessed June 10, 2006].
Flynn MA, McNeil DA, Maloff B, Mutasingwa D, Wu M, Ford C, Tough SC. 2006. Reducing obesity and related chronic disease risk in children and youth: A synthesis of evidence with "best practice" recommendations. Obes Rev 7(Suppl 1):7–66.
Gill T, King L, Webb K. 2005. Best Options for Promoting Healthy Weight and Preventing Weight Gain in NSW. Sydney, Australia: NSW Centre for Public Health Nutrition and NSW Department of Health. [Online]. Available: http://www.cphn.biochem.usyd.edu.au/resources/FinalHealthyWeightreport160305.pdf [accessed May 23, 2006].
Goldman KD, Schmalz KJ. 2005. "Accentuate the positive!": Using an asset-mapping tool as part of a community-health needs assessment. Health Promot Pract 6(2):125–128.
Gortmaker SL, Peterson K, Wiecha J, Sobol AM, Dixit S, Fox MK, Laird N. 1999. Reducing obesity via a school-based interdisciplinary intervention among youth: Planet Health. Arch Pediatr Adolesc Med 153(4):409–418.
Green L. 2001. From research to "best practices" in other settings and populations. Am J Health Behav 25(3):165–179.
Green LW, Glasgow RE. 2006. Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Eval Health Prof 29(1):126–153.
Green LW, Mercer SL. 2001. Can public health researchers and agencies reconcile the push from funding bodies and the pull from communities? Am J Public Health 91(12):1926–1943.
Greene JC. 2000. Understanding social programs through evaluation. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research, 2nd ed. Thousand Oaks, CA: Sage Publications. Pp. 981–999.
Greene JC, Caracelli VJ, eds. 1997. Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. New Directions for Evaluation No. 74. San Francisco, CA: Jossey-Bass.
Greene JC, Caracelli VJ. 2003. Making paradigmatic sense of mixed methods practice. In: Tashakkori A, Teddlie C, eds. Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage Publications. Pp. 91–110.
Guba EG, Lincoln YS. 1989. Fourth Generation Evaluation. Thousand Oaks, CA: Sage Publications.
Habicht JP, Pelletier DL. 1990. The importance of context in choosing nutritional indicators. J Nutr 120(Suppl 11):1519–1524.
Hancock T, Labonte R, Edwards R. 1999. Indicators that count! Measuring population health at the community level. Can J Public Health 90(Suppl 1):S22–S26.
Homer JB, Hirsch GB. 2006. System dynamics modeling for public health: Background and opportunities. Am J Public Health 96(3):452–458.
Hopson R. 2003. Overview of Multicultural and Culturally Competent Program Evaluation. Oakland, CA: Social Policy Research Associates. [Online]. Available: http://www.calendow.org/reference/publications/pdf/evaluations/TCE0509-2004_Overview_of_Mu.pdf [accessed April 18, 2006].
House ER, Howe KR. 1999. Values in Evaluation and Social Research. Thousand Oaks, CA: Sage Publications.
HWVA (Healthy West Virginia Act). 2005. Partnership for a Healthy West Virginia. [Online]. Available: http://www.healthywv.com/ [accessed April 13, 2006].
IOM (Institute of Medicine). 2005. Preventing Childhood Obesity: Health in the Balance. Washington, DC: The National Academies Press.
IOM. 2006. Food Marketing to Children and Youth: Threat or Opportunity? Washington, DC: The National Academies Press.
Kansas State Department of Education. 2004. Kansas Coordinated School Health. [Online]. Available: http://www.kshealthykids.org/ [accessed February 12, 2006].
Kretzmann JP, McKnight JL. 1993. Building Communities from the Inside Out. Chicago, IL: ACTA Publications.
Midgley G. 2006. Systemic intervention for public health. Am J Public Health 96(3):466–472.
NCSL (National Conference of State Legislatures). 2006. Childhood Obesity—2005 Update and Overview of Policy Options. [Online]. Available: http://www.ncsl.org/programs/health/ChildhoodObesity-2005.htm [accessed April 24, 2006].
NetScan. 2005. School Nutrition & Physical Education Legislation: An Overview of 2005 State Activity. [Online]. Available: http://www.rwjf.org/files/research/NCSL%20-%20April%202005%20Quarterly%20Report.pdf [accessed April 24, 2006].
NRC (National Research Council). 2005. Improving Data to Analyze Food and Nutrition Policies. Washington, DC: The National Academies Press.
NRC and IOM. 2004. Children's Health, The Nation's Wealth. Washington, DC: The National Academies Press.
NTSB (National Transportation Safety Board). 2005. We Are All Safer: Lessons Learned and Lives Saved, 1975–2005. 3rd edition. Safety Report NTSB/SR-05/01. Washington, DC: NTSB. [Online]. Available: http://www.ntsb.gov/publictn/2005/SR0501.pdf [accessed August 14, 2006].
Parisi Associates. 2002. Transportation Tools to Improve Children's Health and Mobility. California Office of Traffic Safety; Safe Routes to School Initiative, California Department of Health Services; Local Government Commission. [Online]. Available: http://www.dot.ca.gov/hq/LocalPrograms/SafeRTS2School/TransportationToolsforSR2S.pdf [accessed June 3, 2006].
Potter LD, Duke JC, Nolin MJ, Judkins D, Huhman M. 2004. Evaluation of the CDC VERB Campaign: Findings from the Youth Media Campaign Longitudinal Survey, 2002–2003. Rockville, MD: WESTAT.
Public Health Institute. 2004. Fact Sheet. [Online]. Available: http://www.calendow.org/news/press_releases/2004/03/CalTEENSpresskit.pdf [accessed June 3, 2006].
Pyramid Communications. 2003. Communities Helping Children Be Healthy: A Guide to Reducing Childhood Obesity in Low-Income African-American, Latino and Native American Communities. [Online]. Available: http://www.rwjf.org/files/publications/HealthyChildren.pdf [accessed February 20, 2006].
Ruiz-Primo MA, Shavelson RJ, Hamilton L, Klein S. 2002. On the evaluation of systemic science education reform: Searching for instructional sensitivity. J Res Sci Teach 39(5):369–393.
Shadish WR. 2006. The common threads in program evaluation. Prev Chronic Dis [Online]. Available: http://www.cdc.gov/Pcd/issues/2006/jan/05_0166.htm [accessed August 7, 2006].
Siegel JE, Weinstein MC, Russell LB, Gold MR. 1996. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. J Am Med Assoc 276(16):1339–1341.
Staunton CE, Hubsmith D, Kallins W. 2003. Promoting safe walking and biking to school: The Marin County success story. Am J Public Health 93(9):1431–1434.
Swinburn B, Gill T, Kumanyika S. 2005. Obesity prevention: A proposed framework for translating evidence into action. Obes Rev 6(1):23–33.
Tashakkori A, Teddlie C, eds. 2003.
Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage Publications.
TFAH (Trust for America's Health). 2004. F as in Fat: How Obesity Policies Are Failing America. Washington, DC: Trust for America's Health. [Online]. Available: http://healthyamericans.org/reports/obesity/ObesityReport.pdf [accessed July 23, 2006].
TFAH. 2005. F as in Fat: How Obesity Policies Are Failing America 2005. Washington, DC: Trust for America's Health. [Online]. Available: http://healthyamericans.org/reports/obesity2005/Obesity2005Report.pdf [accessed July 23, 2006].
Thompson-Robinson M, Hopson RK, SenGupta S, eds. 2004. In Search of Cultural Competence in Evaluation: Towards Principles and Practices. New Directions for Evaluation No. 102. San Francisco, CA: Jossey-Bass.
TRB (Transportation Research Board) and IOM. 2005. Does the Built Environment Influence Physical Activity? Examining the Evidence. TRB Special Report 282. Washington, DC: The National Academies Press. [Online]. Available: http://books.nap.edu/htmlSR282/SR282.pdf [accessed December 29, 2005].
Tucker P, Liao Y, Giles W, Liburd L. 2006. The REACH 2010 logic model: An illustration of expected performance. Prev Chronic Dis [Online]. Available: http://www.cdc.gov/pcd/issues/2006/jan/05_0131.htm [accessed March 4, 2006].
Wang LY, Yang Q, Lowry R, Wechsler H. 2003. Economic analysis of a school-based obesity prevention program. Obes Res 11(11):1313–1324.
WHO (World Health Organization). 1998. Health Promotion Evaluation: Recommendations to Policy-Makers. Report of the WHO European Working Group on Health Promotion Evaluation. Copenhagen, Denmark: WHO.