8

Monitoring and Summative Evaluation of Community Interventions1

Why: Why develop a Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan? Monitoring and summative evaluation of local interventions are critically important both to guide community action and to inform national choices about the most effective and cost-effective interventions for funding, dissemination, and uptake by other communities.

What: What is a Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan? Complementary to the Community Obesity Assessment and Surveillance Plan (in Chapter 7), a Monitoring and Summative Evaluation Plan for community-level obesity interventions is a template to help communities to monitor implementation of the intervention and evaluate the long-term outcomes and population impacts such as behavior change, reduced prevalence of obesity, and improved health.

How: How should a Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan be implemented? A template for customizing plans for monitoring and summative evaluation identifies priorities to accommodate local differences in terms of opportunities for change, context, resources available for evaluating strategies recommended in the Accelerating Progress in Obesity Prevention report, and stakeholder input. Because innovations in obesity prevention often receive their initial test at the community level, rigorous and practical methods are desirable to build national knowledge. Combining knowledge from both experimental studies and practice experience can inform national evaluation by casting light on the prevalence of strategies, their feasibility, and their ease of implementation.

_____________

1 A portion of this chapter content was drawn from commissioned work for the Committee by Allen Cheadle, Ph.D., Group Health Cooperative; Suzanne Rauzon, M.P.H., University of California, Berkeley; Carol Cahill, M.L.S., Group Health Cooperative; Diana Charbonneau, M.I.T., Group Health Cooperative; Elena Kuo, Ph.D., Group Health Cooperative; and Lisa Schafer, M.P.H., Group Health Cooperative.




This chapter presents guidance to develop plans for monitoring and evaluating2 community-level obesity prevention interventions.3 Flexibility in developing community-level monitoring and summative evaluation plans is appropriate given the variety of user needs (as summarized in Chapter 2), local context, and available resources.

Monitoring and evaluating community-level efforts to prevent obesity is critical for accelerating national progress in obesity prevention and for providing evidence to inform a national plan for evaluation. Community-level evaluation encompasses the issues of learning not only "what works," but also the relative feasibility of implementing interventions in different situations and the comparative effectiveness of various strategies—the extent to which they work. This information is essential to improving a national plan for evaluation. In line with "what works," monitoring of the implementation of interventions also informs local implementers on how to improve and manage interventions. It casts light on how and why these interventions may prevent obesity. Finally, it encompasses translating effective interventions for implementation on a broader scale and determining the contexts in which they are and are not effective (i.e., generalizability). This learning will allow greater return on national investments in obesity prevention.

Definition of Community-Level Interventions

As described in Chapter 7, the Committee defines community level as activities conducted by local governmental units (e.g., cities, counties), school districts, quasi-governmental bodies (e.g., regional planning authorities, housing authorities), and private-sector organizations (e.g., hospitals, businesses, child care providers, voluntary health associations). Communities vary widely with respect to population size, diversity, context, and impact of obesity. Community capacities for monitoring and summative evaluation are also highly variable, with a wide range of expertise and resources for collecting and using data to evaluate the implementation and effectiveness of interventions.

Community intervention monitoring and summative evaluation can be focused on programs, systems, policies, or environmental changes, or any combination of these in multi-faceted initiatives.

• A local program focuses on a specific sub-population of a community, most often takes place in a single setting or sector (e.g., schools), is usually administered by a single organization, and deploys a limited set of services or health promotion strategies. In the past, local efforts focused mostly on counseling, education, and behavior-change programs delivered directly to individuals, as well as some broader school-based and community-based programs. Published reports showed modest effects of these programs when done alone (e.g., Anderson et al., 2009; Waters et al., 2011), so the field has moved to incorporating them into more comprehensive or multi-level interventions.

• A community-level initiative is a multi-level, multi-sector set of strategies focused on a defined geographic community or population, and it typically includes policy, program, and environmental changes in different parts of the community (e.g., government, business, schools, community organizations). Multi-component mass media campaigns, such as the Home Box Office (HBO) Institute of Medicine (IOM) campaign The Weight of the Nation (TWOTN), that utilize community screenings, learning toolkits, and local events also fall into this category. Based on experience with control of tobacco and other drugs, multi-component initiatives hold greater potential to prevent obesity than do programs or individual strategies by themselves (IOM, 2012a).

_____________

2 As defined in Chapter 1, monitoring is the tracking of the implementation of interventions compared to standards of performance. Evaluation is the effort to detect changes in output, outcomes, and impacts associated with interventions and to attribute those changes to the interventions.

3 Interventions refer to programs, systems, policies, environmental changes, services, products, or any combination of these in multi-faceted initiatives.

This chapter covers some important considerations for monitoring and summative evaluation that exist across obesity prevention programs and community-level initiatives. The chapter emphasizes the particular challenges and opportunities of community-level evaluation, for which evaluation methods are less well established and still evolving.

The Special Challenges of Community-Level Initiatives

Evaluators have less control over community-level initiatives than they do over research-based programs or nationally guided efforts such as the U.S. Department of Agriculture's (USDA's) feeding programs and federal transportation initiatives. This makes the monitoring of implementation essential and the use of rigorous evaluation methods more challenging (Hawkins et al., 2007; Sanson-Fisher et al., 2007). Any evaluation must weigh trade-offs between internal and external validity, feasibility and ethics versus experimental control, and intrusiveness versus free choice among participants. These decisions become more difficult for initiatives that arise from community decision making (Mercer et al., 2007). For example, communities will institute their own mix of local policies and environmental changes, making random assignment to a particular intervention (program or policy), and thus attribution of cause and effect with outcomes, more difficult. Exposure to certain elements of a community initiative can sometimes be determined by random assignment, but exposure to the entire "package" usually cannot.4 Characteristics of a community influence both the implementation and the outcome of the intervention being evaluated, requiring assessment of community contextual influences (IOM, 2010; Issel, 2009). In general, the field needs to develop efficient and valid methods for community evaluations, including the documentation of the unfolding, sequencing, and building of multiple changes in communities and systems over time (Roussos and Fawcett, 2000) and synergies among these changes (Jagosh et al., 2012).

Community-level intervention on policy, environment, and systems is a relatively new approach, and therefore evidence of the effectiveness of most of these strategies is limited. In particular, more empirical evidence is needed about whether, and to what extent, changing food environments promotes healthier eating (Osei-Assibey et al., 2012). There also is some uncertainty about which specific changes in the built environment will lead to increases in physical activity (Heath et al., 2006; Kahn et al., 2002). Appropriate methods are emerging to evaluate community-level impact, but most studies continue to be cross-sectional (an observation made at one point in time or interval) (Booth et al., 2005; Heath et al., 2006; Papas et al., 2007).

_____________

4 Although random assignment of communities to entire policies and systems has not, to the Committee's knowledge, been attempted in the obesity prevention field, both the United States and other countries have randomly assigned places and people to policies in the past, as in the case of the RAND Health Insurance Experiment (Brook et al., 2006) and Mexico's Seguro Popular experiment (King et al., 2009). Since the 1980s, researchers have randomized entire communities to multi-faceted prevention initiatives, but the experiments are often costly and relatively rare, with limited generalizability (COMMIT Research Group, 1995; Farquhar et al., 1990; Merzel and D'Afflitti, 2003; Wagner et al., 2000).

Several strategies with evidence of effectiveness are listed in the Centers for Disease Control and Prevention (CDC) Community Guide (Community Preventive Services Task Force, 2013), as well as in "What Works for Health," a resource associated with the County Health Rankings model of assessing community needs (County Health Rankings, 2013). However, to date, CDC and IOM recommendations for strategies to include in community-level initiatives tend to rely on expert opinion (IOM, 2009; Khan et al., 2009). Evidence points to comprehensive, community-level initiatives as the most promising approach to promote and sustain a healthy environment (Ashe et al., 2007; Doyle et al., 2006; Glanz and Hoelscher, 2004; IOM, 2009; Khan et al., 2009; Ritchie et al., 2006; Sallis and Glanz, 2006; Sallis et al., 2006), particularly when supported by state or national mass media and other components that communities cannot afford (CDC, 2007). Related work on tobacco control programs, notably from the California and Massachusetts model programs, demonstrated how national and state mass media can support local programs with resources (Koh et al., 2005; Tang et al., 2010).

To address the special monitoring and summative evaluation challenges of community-level initiatives, the Committee commissioned5 a review of published literature, as well as unpublished evaluation studies and online descriptions, to identify initiatives that have been or are currently being evaluated. Cheadle and colleagues (2012) conducted a search for the years 2000-2012 using PubMed and websites of agencies that aggregate reports on obesity prevention interventions, such as the Agency for Healthcare Research and Quality Innovations Exchange and the Robert Wood Johnson Foundation's (RWJF's) Active Living Research program. The review found 37 community-level initiatives that included sufficient detail concerning their intervention and evaluation methods. These included 17 completed initiatives that reported population-level outcome results (3 negative studies, 14 positive) (see Table H-1 in Appendix H). Another 20 initiatives are either in process or do not measure population-level behavior change (see Table H-2 in Appendix H). Some of the largest and potentially most useful evaluations are in progress. In particular, many independent evaluations of CDC's Communities Putting Prevention to Work initiatives are being conducted, and a large-scale National Institutes of Health-funded Healthy Communities Study is doing a retrospective examination of associations between the intensity of more than 200 community programs and policies and community obesity rates in more than 200 areas across the United States (see Appendix H) (National Heart, Lung, and Blood Institute, 2012).

Toward a Community-Level Monitoring and Summative Evaluation Plan

As noted in Chapter 2 and the L.E.A.D. (Locate evidence, Evaluate it, Assemble it, and Inform Decisions) framework (IOM, 2010), local monitoring and summative evaluation plans should be driven by the information needs of end users and the contexts of decisions, not by preconceptions of what evaluation is about. Common measures of progress are highly desirable, because they permit comparison of interventions and aggregation of studies into a body of evidence. However, uniformity of methods is not desirable, because the contexts of local interventions are so diverse. Moreover, available resources dictate the types of data collection and analysis that are appropriate and feasible. This chapter discusses the choices possible within available resources. With the pursuit of more universal agreement on, and provision for, the indicators and surveillance measures recommended in earlier chapters, more options would become available and feasible.

_____________

5 Commissioned for the Committee by Allen Cheadle, Ph.D., Group Health Cooperative; Suzanne Rauzon, M.P.H., University of California, Berkeley; Carol Cahill, M.L.S., Group Health Cooperative; Diana Charbonneau, M.I.T., Group Health Cooperative; Elena Kuo, Ph.D., Group Health Cooperative; Lisa Schafer, M.P.H., Group Health Cooperative.

Tailoring the Plan to End-User Needs

To establish "what works" (effectiveness), outcomes need to be attributed to the community intervention. This requires high-quality measurement and design, consistent with resources and logistical constraints (Shadish et al., 2002). Although rigorous methods are more common in research projects, they are also feasible for community evaluations, and some examples of best practices when conducting evaluations are described in Appendix H. On the other hand, to demonstrate local progress, stakeholders may be satisfied with intervention monitoring and summative evaluation that measures good implementation and an improvement in outcomes, without worrying much about causal attribution to a specific obesity prevention effort. Yet other purposes lie somewhere between these, as with measures of progress in specific settings and population segments, and these are important for generalizable knowledge. For example, by knowing the particular combination of interventions in particular communities and observing relative improvements in those communities, without being overly strict about causal attribution, the field can better understand the types of interventions (or combinations of interventions) that are most likely to be associated with desired outcomes, their prevalence and feasibility nationally, as well as the dose of environmental change (i.e., strength of intervention, duration, and extent of reach to affect the target population) likely required to achieve them. This information can then inform the priorities for more rigorous tests of effectiveness.
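The notion of dose can be made concrete with a simple calculation. The sketch below is illustrative only and is not drawn from the report: it treats the dose of each strategy as reach (the share of the target population exposed) times strength (the expected behavior change among those reached), in the spirit of the "population dose" concept cited in this chapter (Cheadle et al., 2012). All strategy names and numbers are hypothetical.

```python
# Illustrative sketch (not from the report): dose = reach x strength,
# loosely following the "population dose" idea (Cheadle et al., 2012).
# All figures below are hypothetical.

def strategy_dose(people_reached, target_population, expected_effect):
    """Reach (share of the target population exposed) times strength
    (expected behavior change among those reached)."""
    reach = people_reached / target_population
    return reach * expected_effect

# A hypothetical initiative with three strategies in a community of 20,000:
strategies = {
    "school lunch standards": strategy_dose(4_000, 20_000, 0.10),
    "corner-store produce": strategy_dose(1_500, 20_000, 0.05),
    "safe routes to school": strategy_dose(6_000, 20_000, 0.02),
}

for name, dose in strategies.items():
    print(f"{name}: dose = {dose:.4f}")
print(f"total initiative dose: {sum(strategies.values()):.4f}")
```

Rough dose estimates of this kind can suggest which components plausibly carry enough weight to produce a measurable population-level change, and thus where rigorous evaluation is worth the cost.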
Tailoring the Plan to Available Resources

Almost universally, local monitoring and summative evaluation has limited resources (Rossi et al., 2004). Therefore, evaluation needs to tailor the methods to answer the highest-priority questions. Infrastructure improvements as outlined in Chapter 3 may alleviate the situation, but even then most local evaluation budgets are likely to be quite small without the assistance of outside funders.6 Rigorous methods may seem out of reach for many local evaluations, and the cost of data collection can be daunting given the scarcity of local surveillance information. Still, useful evaluation can be conducted, even when expensive data collection is not feasible and methods have limited rigor. As seen below, some relatively simple additions to design and measurement can greatly improve the monitoring and summative evaluation plan, thus adding to national knowledge about community interventions.

Tailoring the Plan to the Intervention Context and Logic

In community-level interventions, the number and kind of strategies are highly diverse and may vary substantially from one initiative to another, as communities implement programs, policies, and environmental changes that address their specific issues and context. Also, there is potential for community engagement to increase over time after community changes take place, thus leading to more community changes. For evaluation, this situation is radically different from conventional programs, in which (ideally) a well-defined linear set of activities is tested, improved, and disseminated for adoption in other locations. This situation poses special issues for planning, design, measurement, and analysis.

_____________

6 In 2012, the Community Transformation Grant awards ranged from $200,000 to $10 million (CDC, 2012). CDC recommended that 10 percent be used for evaluation (Laura Kettel-Khan, Office of the Director, Division of Nutrition, Physical Activity and Obesity, CDC, April 2013).

Components of a Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan

The components of a community-level monitoring and summative evaluation plan are seen in a proposed template (see Box 8-1). Within those components, considerable flexibility is needed. The core of any plan includes engaging stakeholders, identifying resources, having a logic model or theory of change, selecting the right focus, using appropriate measures, collecting quality data, using appropriate analytic methods, interpreting or making sense of the data, and disseminating the findings.

There are many good resources on monitoring and summative evaluation methods, so this chapter does not repeat them (Cronbach, 1982; Fetterman and Wandersman, 2005; Fitzpatrick et al., 2010; Patton, 2008; Rossi et al., 2004; Shadish et al., 2002; Wholey et al., 2010). For example, this report does not include a discussion on analytic methods. Certain issues, however, are central to developing an effective local evaluation of obesity prevention. For this reason, the chapter devotes a good bit of attention to stakeholder involvement, emerging methods, and interpretation of findings.

BOX 8-1
Components of a Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan

Purpose: To guide community action and to inform national choices about the most effective and cost-effective strategies identified in the Accelerating Progress in Obesity Prevention report for funding, dissemination, and uptake by other communities.

1. Design stakeholder involvement.
   a. Identify stakeholders.
   b. Consider the extent of stakeholder involvement.
   c. Assess desired outcomes of monitoring and summative evaluation.
   d. Define stakeholder roles in monitoring and summative evaluation.
2. Identify resources for monitoring and summative evaluation.
   a. Person-power resources
   b. Data collection resources
3. Describe the intervention's framework, logic model, or theory of change.
   a. Purpose or mission
   b. Context or conditions
   c. Inputs: resources and barriers
   d. Activities or interventions
   e. Outputs of activities
   f. Intended effects or outcomes
4. Focus the monitoring and summative evaluation plan.
   a. Purpose or uses: What does the monitoring and summative evaluation aim to accomplish?
   b. Priorities by end-user questions, resources, context
   c. What questions will the monitoring and summative evaluation answer?
   d. Ethical implications (benefit outweighs risk)
5. Plan for credible methods.
   a. Stakeholder agreement on methods
   b. Indicators of success
   c. Credibility of evidence
6. Synthesize and generalize.
   a. Disseminate and compile studies
   b. Learn more from implementation
   c. Ways to assist generalization
   d. Shared sense-making and cultural competence
   e. Disentangle effects of interventions

SOURCE: Adapted from A Framework for Program Evaluation: A Gateway to Tools. The Community Tool Box, http://ctb.ku.edu/en/tablecontents/sub_section_main_1338.aspx (accessed November 12, 2013).

Designing Stakeholder Involvement

Some commonly identified stakeholder groups include those operating the intervention, such as staff and members of collaborating organizations, volunteers, and sponsors, and priority groups served or affected by the intervention, such as community members experiencing the problem, funders, public officials, and researchers. Some stakeholder groups are not immediately apparent, and guidance on the general subject is available (e.g., Preskill and Jones, 2009). Two aspects are specifically important for planning community-level obesity prevention monitoring and evaluation: community participation and cultural competence.

Community Participation in Obesity Monitoring and Summative Evaluation Plans

Community participation is beneficial for the planning of most program monitoring and summative evaluation; it is essential for the evaluation of community-level initiatives. Yet, in the commissioned literature review of 37 community-level evaluations, only 6 mentioned participation at all, and that was in the context of the intervention rather than the evaluation (see Appendix H). As seen in Chapter 2, community coalitions are often the driving force behind community-level initiatives. Community engagement and formative evaluation are critically linked. Without community engagement, the community may have inadequate trust in the evaluation process to make strategy improvements based on evaluation findings and recommendations. Community participation may also facilitate access to data that evaluators would otherwise not be aware of or able to collect, including not only qualitative data but also quantitative data kept by organizations and not available to the public. Other benefits have been well described. The primary disadvantages include the time burden on community members and a lack of skill in community engagement on the part of many evaluators (Israel et al., 2012; Minkler and Wallerstein, 2008).

Participatory approaches to community monitoring and summative evaluation reflect a continuum of community engagement and control—from deciding the logic model and evaluation questions to making sense of the data and using them to improve obesity prevention efforts. In less participatory approaches, the evaluator has more technical control of the evaluation (Shaw et al., 2006). In more participatory approaches, communities and researchers/evaluators share power to a greater extent when posing evaluation questions, making sense of results, and using the information to make decisions, although there may be trade-offs with this approach, too (Fawcett and Schultz, 2008; Mercer et al., 2008).

The Special Role of Cultural Competence in Obesity Monitoring and Summative Evaluation Plans

As noted in Chapter 5, there is a national urgency to evaluate and address the factors that lead to racial and ethnic disparities in obesity prevalence. Community interventions to address such disparities require cultural competence in both the interventions and their evaluations. Participatory methods facilitate the use of cultural competence. The American Evaluation Association (2011) states: "Evaluations cannot be culture free. Those who engage in evaluation do so from perspectives that reflect their values, their ways of viewing the world, and their culture. Culture shapes the ways in which evaluation questions are conceptualized, which in turn influence what data are collected, how the data will be collected and analyzed, and how data are interpreted" (Web section, The Role of Culture and Cultural Competence in Quality Education).

Ethical, scientific, and practical reasons call for culturally competent evaluation: ethical, because professional guidelines specify evaluation that is valid, honest, respectful of stakeholders, and considerate of the general public welfare; scientific, because misunderstandings about cultural context create systematic error that threatens validity; and cultural assumptions, because the theories underlying interventions reflect implicit and explicit assumptions about how things work. The practical reason to consider culture in evaluating obesity prevention efforts is that the record is mixed on the effectiveness of cultural competence in health promotion programs (e.g., Robinson et al., 2010). Culturally competent evaluation can help the field to address this mixed result by assuring that interventions are, in fact, consistent with a population's experience and expectations. Evaluation has demonstrated the effectiveness of cultural tailoring in some areas (Bailey et al., 2008; Hawthorne et al., 2008). Culturally tailored media materials and targeted programs reach more of the intended population (Resnicow et al., 1999). Culturally competent evaluation can assess whether interventions focus on issues of importance to the cultural group; whether interventions address where and how people eat, shop, and spend recreational time; and which environmental changes produce the most powerful enablers for more healthful nutrition and physical activity.

Identifying Resources for Monitoring or Summative Evaluation

Monitoring and summative evaluation plans can maximize resources in two areas: person-power and data collection. Regarding person-power, evaluations can draw on the expertise of local colleges and universities and of health departments, which will generally improve evaluation quality and potentially lower the cost. Faculty in schools offering degrees in health professions are often required or encouraged by accrediting bodies to provide community service, which they often do through evaluations. Students will find evaluation projects suitable for service-learning opportunities and internships. For example, the Council on Education for Public Health requires that tenure and promotion strongly consider community service and that student experiences include service learning with community organizations (Council on Education for Public Health, 2005). Free services are not always high-quality services, however, and may lack consistency and follow-up. The Community-Campus Partnerships for Health offers useful guidance for maximizing the quality of evaluation activities provided as service (Community-Campus Partnerships for Health, 2013). The guiding principles for evaluation outlined in Chapter 3, which are endorsed by researchers' professional associations, can also help.

Data collection is generally the highest-cost component of evaluations. Using available information where applicable, such as local surveillance and other community assessment and surveillance (CAS) data, can minimize the cost. Making data collection a by-product of prevention activity can also lower cost, as in the collection of participation rosters, media tracking, and public meeting minutes. Community resident volunteers can collect data using methods such as photovoice (see Chapter 7 and Appendix H) and environmental audits,7 thus adding both person-power and data.

_____________

7 Observations to identify interventions being implemented in a particular area.

Describing the Intervention Framework, Logic Model, or Theory of Change

Frameworks, logic models, and theories of change are heuristics—experience-based techniques for problem solving, learning, and discovery designed to facilitate and guide decision making. A logic model is not a description of the intervention itself, but rather a graphic depiction of the rationale and expectations of the intervention. A theory of change is similar to a logic model except that it also describes the "mechanisms through which the intervention's inputs and activities are thought to lead to the desired outcomes" (Leviton et al., 2010b, p. 215).

For the monitoring and summative evaluation plan, one ideally can turn to the logic model or theory used in the planning of the program, but often this was not developed or made explicit in the earlier program planning and must be constructed retrospectively. There are many options to choose from among formats for logic models and theories of change. The choice depends on what will have the most clarity and ease of presentation for the user audience (Leviton et al., 2010b).

Figure 8-1 illustrates a graphic depiction of the presumed components and causal pathways in local-level obesity prevention efforts. Not all evaluations will include all the elements or all the pathways, which is to be expected in areas with such diversity of local initiatives.

FIGURE 8-1 Generic logic model or theory of change for community obesity prevention. [The figure's boxes are: Inputs; Programs; Policy, Environment, System Changes; Community Engagement; Initial Change Conditions; Diet Change; Physical Activity Change; Reduce Obesity Prevalence.] NOTES: Not all interventions will include programs, policies, and environmental changes or systems changes. Not all interventions will focus on both diet and physical activity. Dashed lines indicate potential for interventions to increase community engagement over time.
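To make the components concrete, the sketch below (not part of the report) records a logic model as structured data so that monitoring indicators can later be attached to each component. The component names follow Figure 8-1 and Box 8-1; the example entries are hypothetical.

```python
# Minimal sketch, assuming the component names of Box 8-1 and Figure 8-1;
# all example entries are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    purpose: str      # problem or goal the initiative addresses
    context: str      # conditions that may affect outcomes
    inputs: list      # resources (and barriers)
    activities: list  # what the intervention will do
    outputs: list     # direct evidence the activities were performed
    outcomes: dict = field(default_factory=dict)  # keyed by time horizon

model = LogicModel(
    purpose="Reduce childhood obesity prevalence in the district",
    context="Urban school district with few safe play spaces",
    inputs=["coalition staff", "grant funding", "school partnerships"],
    activities=["adopt healthy vending policy", "build walking trails"],
    outputs=["policies adopted", "miles of trail completed"],
    outcomes={
        "short term": ["increased public support"],
        "intermediate": ["changes in diet and physical activity"],
        "ultimate": ["reduced obesity prevalence"],
    },
)

# Each output or outcome can then be paired with a measure and a data source.
for horizon, expected in model.outcomes.items():
    print(f"{horizon}: {', '.join(expected)}")
```

Writing the model down in this form makes gaps visible (for example, an ultimate outcome with no intermediate pathway leading to it) before any data are collected.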

Building on Figure 8-1, Table 8-1 provides some detail on generic logic model components, with the potential program components listed in the first row and potential community-level components in the second row. Outputs and outcomes resulting from programs are also commonly seen in multi-faceted community initiatives.

TABLE 8-1 Generic Logic Model for Community-Level Initiatives to Prevent Obesity

Program components:
• Inputs (initial): resources; staff; public opinion
• Outputs: program activities; outreach; media messages
• Short-term outcomes: target group program participants
• Intermediate outcomes: changes in knowledge, attitudes, and community norms
• Ultimate outcomes (impact): improved physical activity and diet

Multi-faceted initiativesa:
• Inputs (initial): public opinion; community norms; policies identified; policy opportunities; advocacy allies
• Outputs: decision makers engaged; public meetings attended; community organized; advocates recruited and trained; enforcement of changes monitored
• Short-term outcomes: increases in public support, resources, advocacy, allies, and power
• Intermediate outcomes: changes in policies, environments, and systems; changes in community norms; change sustained in environment, policy, and systems
• Ultimate outcomes (impact): improved prevalence of obesity and overweight; improved population health

a Outputs and outcomes resulting from program components are also commonly seen in the multi-faceted initiatives.

In building logic models, the components must be clarified. Although not appearing in the table, the purpose or mission describes the problem or goal to which the program, effort, or initiative is addressed, and context or conditions mean the situation in which the intervention will take place and factors that may affect outcomes. Inputs represent resources such as time, talent, equipment, information, and money. Inputs also include barriers such as a history of conflict, environmental factors, and economic conditions. The activities are the specifics of what the intervention will do to effect change and improve outcomes, while outputs are direct evidence of having performed the activities, such as products or participation in services by a target group. Activities and outputs are logically connected to short-, intermediate-, and long-term outcomes: for example, engagement of local decision makers is presumed to help to achieve changes in policy and environment, which are presumed to change diet or physical activity and, therefore, help to achieve healthy weight for a greater portion of the population.

Logic models and theories of change help greatly to assess the plausibility that particular interventions can achieve their goals. Is it plausible—believable—that the connecting arrows of a logic model or the assumptions of a theory are likely to produce the outcomes predicted? Evaluating implausible interventions wastes resources and is needlessly discouraging for the field. Logic models and theories of change also cast light on the "dose" of intervention (i.e., intensity, duration, and reach) that is likely to be necessary to achieve change. The low-cost technique of evaluability assessment helps to establish plausibility, indicates which intervention components are ready for evaluation, and pinpoints areas for improvement in implementation or the mix of strategies involved (Leviton et al., 2010b).

Focusing the Obesity Monitoring and Summative Evaluation Plan

The framework, logic model, or theory of change helps to focus the monitoring and summative evaluation plan: what the evaluation aims to accomplish. By prioritizing based on user needs, resources, and context, the choices often become very clear. Limited resources do not have to imply reduced rigor; below, in the section titled "Planning for Credible Methods," some suggestions are offered to improve rigor.

combinations of interventions and outcomes in any given community. However, new patterns can be seen when one steps back from complexity and looks at differences and similarities across community initiatives. Looking across many communities, it may become possible to identify the interventions that are consistently associated with improvements in outcomes. It may even be possible to derive theories to explain the patterns after the fact, a practice that has become very useful for evaluation in complex areas such as quality improvement in medicine (Dixon-Woods et al., 2011). Theories of importance to community-level initiatives might include organizational change theory, for example (Glanz and Bishop, 2010). Such patterns might be identified, provided that the outcomes of local community-level evaluations become more readily available through fully published details of the interventions and their implementation and context (Green et al., 2009). For example, Philadelphia instituted major changes in the public schools and in the communities surrounding the schools. If these features are consistently seen in urban communities where obesity has declined, then at a minimum they rise to the top of priorities for further study. This is yet another way that single-site, pre-post evaluations (perhaps complemented with logic model designs) can have value. Combined with research projects that improve measurement of the community intervention and introduce a variety of controls (perhaps using nonequivalent control, regression discontinuity, or interrupted time series designs), such instances reduce uncertainty about the best investments for scarce prevention resources. However, it will not always be possible to detect which intervention made the most difference. It is important to keep documenting the outcome of interest, as would a historian, documenting key events and contextual changes that occur on the timeline.

A Better System to Identify Interventions That Are Suitable for Evaluation

Across communities and interventions, the wealth of potential leverage points to intervene is daunting—an "embarrassment of riches" thanks to the social ecological model. In addition to disentangling the powerful leverage points in existing evaluations, it may be possible to approach the problem differently, through the Systematic Screening and Assessment (SSA) Method (Leviton et al., 2010a). Whereas synthesis relies on collecting the results of existing evaluations, the SSA Method collects promising programs and then determines whether they are worth evaluating. The SSA Method was initially used in collaboration among RWJF, CDC, and the ICF Macro contract research firm to screen 458 nominations of policy and environmental change initiatives to prevent childhood obesity. An expert panel reviewed these nominations and selected 48 that underwent evaluability assessments to assess both their potential for population-level impact and their readiness for evaluation. Of these, 20 were deemed to be both promising and ready for evaluation, and at least 6 of the top-rated innovations have now undergone evaluation. Byproducts of this process included some insights about the combinations of program components that were plausible to achieve population-level outcomes. Out of the array of potential leverage points, at least some were identified as having more payoffs, in advance of costly evaluation.
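A minimal sketch of this cross-community pattern-finding, with invented data: given a summary intervention dose and an observed outcome change for each community, one can check whether higher-dose communities improved more.

```python
# Minimal sketch with invented data: do communities with a higher summary
# intervention "dose" show larger declines in obesity prevalence?

doses = [0.2, 0.5, 0.9, 0.4, 0.7, 0.1]         # per-community dose summaries
changes = [-0.1, -0.6, -1.2, -0.3, -0.8, 0.1]  # prevalence change (pct points)

n = len(doses)
mean_d = sum(doses) / n
mean_c = sum(changes) / n
cov = sum((d - mean_d) * (c - mean_c) for d, c in zip(doses, changes)) / n
sd_d = (sum((d - mean_d) ** 2 for d in doses) / n) ** 0.5
sd_c = (sum((c - mean_c) ** 2 for c in changes) / n) ** 0.5

r = cov / (sd_d * sd_c)  # Pearson correlation
print(f"dose-outcome correlation: r = {r:.2f}")
```

A strongly negative correlation in such a compilation would not establish causation, but it would move the higher-dose strategy mixes to the top of the priorities for further study.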
EXAMPLE: Opportunities and Challenges of Evaluating Community-Level Components of The Weight of the Nation

Some of the opportunities and challenges for measuring progress in obesity prevention at the community level can be illustrated using TWOTN as an example.

The TWOTN video and collateral campaign—a nationally developed program—can be employed locally to engage stakeholders to take action as part of a multi-component awareness, advocacy, and action strategy (see Chapter 1 for description). One approach to assessing the local contributions of TWOTN, as distinguished from national contributions (see Chapter 6 for description of measurement opportunities of the national components), is to evaluate such local efforts consistent with their stated aims and an articulated logic model or theory of change. The following describes current community-level evaluations that are in process and how the use of a logic model as described in this chapter could focus the analysis and improve the evaluation information.

Two local-level evaluations of TWOTN are in process. First, Kaiser Permanente surveyed people who conducted small-group screenings of TWOTN and planners and supporters of community-level activities. The surveys focused on participation, usefulness of media and written materials, and intended changes (Personal communication, Sally Thompson Durkan, Kaiser Permanente, April 29, 2013). Second, CDC Prevention Research Centers (PRCs), led by the University of North Carolina, Chapel Hill, are identifying locally hosted screenings, conducting a pretest and immediate posttest, and following up with 6-week Web surveys of participants willing to be contacted by e-mail. They ask about message credibility, self-efficacy for both individual- and community-level change, community capacity for change, intention to make individual change as well as influence policy and environmental change, and support for three obesity-related policies. The follow-up survey queries respondents about action taken on the single item they identified as a focus of their activity in the posttest. These CDC PRC efforts will provide some information about community-level activities subsequent to screenings.

The community-level evaluations could be more useful if they analyzed their data using the logic model design described above. For example, if schools utilize TWOTN-derived products, such as the three follow-on children's movies released in May 2013, then one might assess changes in knowledge about obesity before versus after viewing the movies. Lacking a logic model, or even in addition to the logic model, content analysis of the movies could provide an indication of the particular themes and information that are being emphasized. Any other specific objectives of the children's movies would need to be specified in advance and measured before and after their viewing. This would be strengthened if measured for comparison in nearly identical classrooms, schools, or other units not exposed, with the pre-post differences between units the measure of effect. This, in turn, would be further strengthened if multiple units exposed and not exposed were randomly assigned to receive or not receive the exposure to the video and other TWOTN components. Implementing these steps will require a sustained commitment of resources to support measurement of the community components of the campaign. Other approaches recommended at the 2012 IOM Workshop (IOM, 2012b) (see Chapter 5) might also be considered.
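The pre-post comparison between exposed and unexposed units described above amounts to a simple difference-in-differences, sketched below with invented classrooms and knowledge scores.

```python
# Illustrative sketch with invented scores: the effect estimate is the
# difference between exposed and unexposed classrooms in their pre-to-post
# change in an obesity-knowledge score (a difference-in-differences).

def mean(values):
    return sum(values) / len(values)

exposed_pre = [52, 48, 55, 50]     # classrooms shown the movies
exposed_post = [61, 58, 63, 60]
unexposed_pre = [51, 49, 53, 50]   # comparable classrooms not shown
unexposed_post = [53, 50, 55, 52]

change_exposed = mean(exposed_post) - mean(exposed_pre)
change_unexposed = mean(unexposed_post) - mean(unexposed_pre)
effect = change_exposed - change_unexposed  # pre-post difference between units

print(f"exposed change: {change_exposed:+.2f}")
print(f"unexposed change: {change_unexposed:+.2f}")
print(f"estimated effect: {effect:+.2f} points")
```

Randomly assigning which units receive the exposure, as suggested above, strengthens the causal interpretation of this difference.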
Regardless of research design, the Committee would emphasize the importance of

• utilizing strong theoretical or logic models (Cheadle et al., 2003; Julian, 1997);
• monitoring reach or dosage, which is actually a critical step in the logic model for any health promotion program or mass media campaign (Cheadle et al., 2012; Glasgow et al., 2006; Hornik, 2002);
• conducting multiple waves of measurement, the more the better, preferably both before and after a campaign (Shadish et al., 2002) (see the sketch following this list); and
• replicating and more structured reporting on the reach, effectiveness (with whom), adoption by organizations, implementation, and maintenance to enhance external validity or adaptability to other settings (Glasgow et al., 1999).
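The recommendation to conduct multiple waves of measurement can be sketched as follows, with invented numbers: with several pre-campaign waves, the evaluator can project the pre-existing trend forward and compare post-campaign observations against that projection rather than against a single baseline point.

```python
# Minimal sketch with invented data: project the pre-campaign trend forward
# and compare post-campaign waves against it (simple interrupted
# time-series logic).

waves_pre = [31.0, 31.4, 31.9, 32.3]  # e.g., % of youth meeting a guideline
waves_post = [33.8, 34.4, 34.9]

# Least-squares linear trend fitted to the pre-campaign waves
xs = list(range(len(waves_pre)))
mean_x = sum(xs) / len(xs)
mean_y = sum(waves_pre) / len(waves_pre)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, waves_pre))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

for i, observed in enumerate(waves_post, start=len(waves_pre)):
    projected = intercept + slope * i
    print(f"wave {i}: observed {observed:.1f} vs projected {projected:.1f} "
          f"(excess {observed - projected:+.1f})")
```

With only one pre-campaign measurement, the upward secular trend in this example would be mistaken for campaign effect; the extra waves separate the two.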

The mass media literature emphasizes the importance of exposure to the message (Hornik, 2002; IOM, 2012b), which is closely associated with or equivalent to reach and dose; the literature on small-group and community-based interventions emphasizes the parallel concept of participation in the intervention (Glasgow et al., 2006). It is inherently obvious that an intervention, whether it is a mass media campaign or a community-based intervention, cannot affect people's attitudes or behaviors unless they are exposed to and participate in it. The reach or exposure might amount to as little as a touch, with the associated outcome being the person's or group's awareness or recognition of some feature of the event or message, or as much as intensive engagement, measured by a higher level of recall, knowledge, and reaction to the event or message, discussion with others, engagement in new behaviors, and possibly attribution of a behavior change to the intervention.

Summary

Community-level monitoring and summative evaluations are vital for guiding local action and informing national choices about the most effective and cost-effective obesity prevention interventions recommended in the IOM Accelerating Progress in Obesity Prevention (APOP) report for funding, dissemination, and uptake by other communities. The depth and rigor of the evaluations should depend on user needs, the resources available, and the context. Although the highest-quality designs and measurement are always helpful, resources may not be available to use them and user questions may not require them. If the monitoring and summative evaluation plan considers resource levels in the context of end-user needs, then key outcomes are likely to be addressed, as summarized in Table 8-3.

Even a few modest additions can greatly improve the credibility and quality of community monitoring and summative evaluation measurement and designs. Yet even the less preferred and less rigorous evaluation designs and measurement can be helpful in aggregate, at the national level, to reduce uncertainty about priority strategies for adoption and further study. Monitoring and summative evaluation plans should at a minimum incorporate the elements of stakeholder involvement; identify and leverage resources; describe the intervention's framework, logic model, or theory of change; focus the monitoring and evaluation plan; use credible methods; and synthesize and generalize the findings.

Given the existing gaps in the current infrastructure for monitoring and summative evaluation of APOP report strategies identified by the Committee, Chapter 10 provides seven recommendations (and a set of potential actions and actors) to support the successful implementation of the components of the Community-Level Obesity Intervention Monitoring and Summative Evaluation Plan.

TABLE 8-3 Recommended Approaches for Key Outcomes of Community Monitoring and Summative Evaluation, by Level of Evaluation Resources

Low (5-10%)a:
• Documenting environmental change: oral and written progress reporting annually from community coordinators; observation of selected key strategies.
• Estimating the dose: intensity (strength, duration, reach) estimates based on progress report information and the literature when available.
• Measuring population-level impact: secondary data, when available at an appropriate geographic level.

Medium (10-15%):
• Documenting environmental change: oral and written progress reporting at regular intervals jointly by evaluators and community coordinators; use of environmental and policy assessment tools for selected key strategies.
• Estimating the dose: intensity (strength, duration, reach) estimates based on progress reporting information, literature when available, and program evaluations of selected key strategies.
• Measuring population-level impact: secondary data, if available; school-based surveys of youth food and physical activity attitudes and behaviors.

High (>15%):
• Documenting environmental change: oral and written progress reporting at regular intervals jointly by evaluators and community coordinators; use of comprehensive and validated environmental and policy assessment tools for all key strategies.
• Estimating the dose: intensity (strength, duration, reach) estimates based on progress reporting information, literature when available, and program evaluations of all key strategies.
• Measuring population-level impact: secondary data, if available; school-based surveys of youth; mail/phone surveys of adults.

a Percentages indicate the amount of resources for evaluation, as a percentage of the intervention budget.
SOURCE: Adapted from Community Tool Box (http://ctb.ku.edu/en/default.aspx, accessed November 12, 2013).

References

American Evaluation Association. 2011. American Evaluation Association public statement on cultural competence in evaluation. http://www.eval.org/p/cm/ld/fid=92 (accessed June 3, 2013).
Anderson, L. M., T. A. Quinn, K. Glanz, G. Ramirez, L. C. Kahwati, D. B. Johnson, L. R. Buchanan, W. R. Archer, S. Chattopadhyay, G. P. Kalra, D. L. Katz, and Task Force on Community Preventive Services. 2009. The effectiveness of worksite nutrition and physical activity interventions for controlling employee overweight and obesity: A systematic review. American Journal of Preventive Medicine 37(4):340-357.
Ashe, M., L. M. Feldstein, S. Graff, R. Kline, D. Pinkas, and L. Zellers. 2007. Local venues for change: Legal strategies for healthy environments. Journal of Law, Medicine, and Ethics 35(1):138-147.
Atienza, A. A., and A. C. King. 2002. Community-based health intervention trials: An overview of methodological issues. Epidemiologic Reviews 24(1):72-79.
Bailey, E. J., S. G. Kruske, P. S. Morris, C. J. Cates, and A. B. Chang. 2008. Culture-specific programs for children and adults from minority groups who have asthma. Cochrane Database of Systematic Reviews (2):CD006580.

Berkson, S. S., J. Espinola, K. A. Corso, H. Cabral, R. McGowan, and V. R. Chomitz. 2013. Reliability of height and weight measurements collected by physical education teachers for a school-based body mass index surveillance and screening system. Journal of School Health 83(1):21-27.
Booth, K. M., M. M. Pinkston, and W. S. Poston. 2005. Obesity and the built environment. Journal of the American Dietetic Association 105(5 Suppl 1):S110-S117.
Brook, R. H., E. B. Keeler, K. N. Lohr, J. P. Newhouse, J. E. Ware, W. H. Rogers, A. Ross Davies, C. D. Sherbourne, G. A. Goldberg, P. Camp, C. Kamberg, A. Leibowitz, J. Keesey, and D. Reboussin. 2006. The health insurance experiment: A classic RAND study speaks to the current health care reform debate. Santa Monica, CA: RAND Corporation.
Brownson, R. C., C. M. Hoehner, K. Day, A. Forsyth, and J. F. Sallis. 2009. Measuring the built environment for physical activity: State of the science. American Journal of Preventive Medicine 36(4 Suppl):S99-S123, e1-e12.
Bunin, G. R., L. G. Spector, A. F. Olshan, L. L. Robison, M. Roesler, S. Grufferman, X. O. Shu, and J. A. Ross. 2007. Secular trends in response rates for controls selected by random digit dialing in childhood cancer studies: A report from the children's oncology group. American Journal of Epidemiology 166(1):109-116.
CDC (Centers for Disease Control and Prevention). 2007. Best practices for comprehensive tobacco control programs—October 2007. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.
CDC. 2012. Community transformation grants (CTG). http://www.cdc.gov/communitytransformation/index.htm (accessed April 10, 2013).
Cheadle, A., E. Wagner, T. Koepsell, A. Kristal, and D. Patrick. 1992. Environmental indicators: A tool for evaluating community-based health-promotion programs. American Journal of Preventive Medicine 8(6):345-350.
Cheadle, A., W. L. Beery, H. P. Greenwald, G. D. Nelson, D. Pearson, and S. Senter. 2003. Evaluating the California Wellness Foundation's Health Improvement Initiative: A logic model approach. Health Promotion Practice 4(2):146-156.
Cheadle, A., P. M. Schwartz, S. Rauzon, E. Bourcier, S. Senter, and R. Spring. 2012. Using the concept of "population dose" in planning and evaluating community-level obesity prevention initiatives. Clinical Medicine & Research 10(3):184-185.
Chinman, M., S. Hunter, P. Ebener, S. Paddock, L. Stillman, P. Imm, and A. Wandersman. 2008. The Getting to Outcomes demonstration and evaluation: An illustration of the prevention support system. American Journal of Community Psychology 41(3-4):206-224.
Chriqui, J. F., J. C. O'Connor, and F. J. Chaloupka. 2011. What gets measured, gets changed: Evaluating law and policy for maximum impact. Journal of Law, Medicine & Ethics 39:21-26.
Collie-Akers, V. L., S. B. Fawcett, J. A. Schultz, V. Carson, J. Cyprus, and J. E. Pierle. 2007. Analyzing a community-based coalition's efforts to reduce health disparities and the risk for chronic disease in Kansas City, Missouri. Preventing Chronic Disease 4(3):A66. http://www.cdc.gov/pcd/issues/2007/jul/06_0101.htm (accessed June 4, 2013).
COMMIT Research Group. 1995. Community intervention trial for smoking cessation (COMMIT): I. Cohort results from a four-year community intervention. American Journal of Public Health 85(2):183-192.
Community-Campus Partnerships for Health. 2013. Community-campus partnerships for health. Service learning. http://depts.washington.edu/ccph/servicelearningres.html (accessed June 3, 2013).
Community Preventive Services Task Force. 2013. The guide to community preventive services: What works to promote health. http://www.thecommunityguide.org/index.html (accessed April 10, 2013).
Council on Education for Public Health. 2005. Accreditation criteria schools of public health. http://www.ceph.org/assets/SPH-Criteria.pdf (accessed June 3, 2013).

County Health Rankings. 2013. What works for health. http://www.countyhealthrankings.org/programs (accessed May 31, 2013).
Cronbach, L. J. 1982. Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.
Curtin, R., S. Presser, and E. Singer. 2005. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69(1):87-98.
Dietz, W. H., M. T. Story, and L. C. Leviton. 2009. Issues and implications of screening, surveillance, and reporting of children's BMI. Pediatrics 124(Suppl 1):S98-S101.
Dixon-Woods, M., C. L. Bosk, E. L. Aveling, C. A. Goeschel, and P. J. Pronovost. 2011. Explaining Michigan: Developing an ex post theory of a quality improvement program. Milbank Quarterly 89(2):167-205.
Doyle, S., A. Kelly-Schwartz, M. Schlossberg, and J. Stockard. 2006. Active community environments and health: The relationship of walkable and safe communities to individual health. Journal of the American Planning Association 72(1):19-31.
Economos, C. D., R. R. Hyatt, J. P. Goldberg, A. Must, E. N. Naumova, J. J. Collins, and M. E. Nelson. 2007. A community intervention reduces BMI z-score in children: Shape Up Somerville first year results. Obesity (Silver Spring) 15(5):1325-1336.
Farnham, P. G., D. R. Holtgrave, S. L. Sansom, and H. I. Hall. 2010. Medical costs averted by HIV prevention efforts in the United States, 1991-2006. Journal of Acquired Immune Deficiency Syndromes 54(5):565-567.
Farquhar, J. W., S. P. Fortmann, J. A. Flora, C. B. Taylor, W. L. Haskell, P. T. Williams, N. Maccoby, and P. D. Wood. 1990. Effects of communitywide education on cardiovascular disease risk factors. The Stanford Five-City Project. Journal of the American Medical Association 264(3):359-365.
Fawcett, S. B., and J. A. Schultz. 2008. Supporting participatory evaluation using the Community Tool Box online documentation system. In Community-based participatory research for health, edited by M. Minkler and N. Wallerstein. San Francisco, CA: Jossey-Bass. Pp. 419-424.
Fawcett, S. B., R. Boothroyd, J. A. Schultz, V. T. Francisco, V. Carson, and R. Bremby. 2003. Building capacity for participatory evaluation within community initiatives. Journal of Prevention and Intervention in the Community 26(2):21-36.
Fetterman, D. M., and A. Wandersman, eds. 2005. Empowerment evaluation: Principles in practice. New York: The Guilford Press.
Fielding, J. E., and T. R. Frieden. 2004. Local knowledge to enable local action. American Journal of Preventive Medicine 27(2):183-184.
Fishbein, M., and I. Ajzen. 2010. Predicting and changing behavior: The reasoned action approach. New York: Taylor & Francis Group.
Fitzpatrick, J. L., J. R. Sanders, and B. R. Worthen. 2010. Program evaluation: Alternative approaches and practical guidelines. 4th ed. Upper Saddle River, NJ: Pearson.
Glanz, K. 2009. Measuring food environments: A historical perspective. American Journal of Preventive Medicine 36(4 Suppl):S93-S98.
Glanz, K., and D. B. Bishop. 2010. The role of behavioral science theory in development and implementation of public health interventions. Annual Review of Public Health 31:399-418.
Glanz, K., and D. Hoelscher. 2004. Increasing fruit and vegetable intake by changing environments, policy and pricing: Restaurant-based research, strategies, and recommendations. Preventive Medicine 39(Suppl 2):S88-S93.
Glasgow, R. E., T. M. Vogt, and S. M. Boles. 1999. Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health 89(9):1322-1327.
Glasgow, R. E., L. M. Klesges, D. A. Dzewaltowski, P. A. Estabrooks, and T. M. Vogt. 2006. Evaluating the impact of health promotion programs: Using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Education Research 21(5):688-694.
Green, L. W., and R. E. Glasgow. 2006. Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation and the Health Professions 29(1):126-153.
Green, L. W., R. E. Glasgow, D. Atkins, and K. Stange. 2009. Making evidence from research more relevant, useful, and actionable in policy, program planning, and practice: Slips “twixt cup and lip.” American Journal of Preventive Medicine 37(6 Suppl 1):S187-S191.
Haddix, A. C., S. M. Teutsch, and P. S. Corso. 2002. Prevention effectiveness: A guide to decision analysis and economic evaluation. New York: Oxford University Press.
Hannay, J., R. Dudley, S. Milan, and P. K. Leibovitz. 2013. Combining photovoice and focus groups: Engaging Latina teens in community assessment. American Journal of Preventive Medicine 44(3 Suppl 3):S215-S224.
Hawkins, N. G., R. W. Sanson-Fisher, A. Shakeshaft, C. D’Este, and L. W. Green. 2007. The multiple baseline design for evaluating population-based research. American Journal of Preventive Medicine 33(2):162-168.
Hawthorne, K., Y. Robles, R. Cannings-John, and A. G. Edwards. 2008. Culturally appropriate health education for type 2 diabetes mellitus in ethnic minority groups. Cochrane Database of Systematic Reviews (3):CD006424.
Heath, G. W., R. C. Brownson, J. Kruger, R. Miles, K. E. Powell, L. T. Ramsey, and Task Force on Community Preventive Services. 2006. The effectiveness of urban design and land use and transport policies and practices to increase physical activity: A systematic review. Journal of Physical Activity and Health 3(Suppl 1):S55-S76.
HHS (Department of Health and Human Services). 2008. Physical activity guidelines for Americans. Washington, DC: Department of Health and Human Services.
Hoelscher, D. M., S. H. Kelder, A. Perez, R. S. Day, J. S. Benoit, R. F. Frankowski, J. L. Walker, and E. S. Lee. 2010. Changes in the regional prevalence of child obesity in 4th, 8th, and 11th grade students in Texas from 2000-2002 to 2004-2005. Obesity (Silver Spring) 18(7):1360-1368.
Homer, J., B. Milstein, K. Wile, J. Trogdon, P. Huang, D. Labarthe, and D. Orenstein. 2010. Simulating and evaluating local interventions to improve cardiovascular health. Preventing Chronic Disease 7(1):A18. http://www.cdc.gov/pcd/issues/2010/jan/08_0231.htm (accessed June 4, 2013).
Hornik, R. C. 2002. Exposure: Theory and evidence about all the ways it matters. Social Marketing Quarterly 8(3):31-37.
Hulleman, C. S., and D. S. Cordray. 2009. Moving from the lab to the field: The role of fidelity and achieved relative intervention strength. Journal of Research on Educational Effectiveness 2(1):88-110.
IOM (Institute of Medicine). 2009. Local government actions to prevent childhood obesity. Washington, DC: The National Academies Press.
IOM. 2010. Bridging the evidence gap in obesity prevention: A framework to inform decision making. Washington, DC: The National Academies Press.
IOM. 2012a. Accelerating progress in obesity prevention: Solving the weight of the nation. Washington, DC: The National Academies Press.
IOM. 2012b. Measuring progress in obesity prevention: Workshop report. Washington, DC: The National Academies Press.
Israel, B. A., E. Eng, A. J. Schulz, and E. A. Parker. 2012. Methods for community-based participatory research for health. 2nd ed. San Francisco, CA: Jossey-Bass.
Issel, L. M. 2009. Health program planning and evaluation: A practical systematic approach for community health. 2nd ed. Sudbury, MA: Jones and Bartlett Publishers.
Jagosh, J., A. C. Macaulay, P. Pluye, J. Salsberg, P. L. Bush, J. Henderson, E. Sirett, G. Wong, M. Cargo, C. P. Herbert, S. D. Seifer, L. W. Green, and T. Greenhalgh. 2012. Uncovering the benefits of participatory research: Implications of a realist review for health research and practice. Milbank Quarterly 90(2):311-346.
Julian, D. A. 1997. The utilization of the logic model as a system level planning and evaluation device. Evaluation and Program Planning 20(3):251-257.
Kahn, E. B., L. T. Ramsey, R. C. Brownson, G. W. Heath, E. H. Howze, K. E. Powell, E. J. Stone, M. W. Rajab, and P. Corso. 2002. The effectiveness of interventions to increase physical activity: A systematic review. American Journal of Preventive Medicine 22(4 Suppl):73-107.
Khan, L. K., K. Sobush, D. Keener, K. Goodman, A. Lowry, J. Kakietek, and S. Zaro. 2009. Recommended community strategies and measurements to prevent obesity in the United States. Morbidity and Mortality Weekly Report 58(RR-7):1-26.
King, G., E. Gakidou, K. Imai, J. Lakin, R. T. Moore, C. Nall, N. Ravishankar, M. Vargas, M. M. Tellez-Rojo, J. E. Avila, M. H. Avila, and H. H. Llamas. 2009. Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme. Lancet 373(9673):1447-1454.
Koepsell, T. D., E. H. Wagner, A. C. Cheadle, D. L. Patrick, D. C. Martin, P. H. Diehr, E. B. Perrin, A. R. Kristal, C. H. Allan-Andrilla, and L. J. Dey. 1992. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health 13:31-57.
Koh, H. K., C. M. Judge, H. Robbins, C. C. Celebucki, D. K. Walker, and G. N. Connolly. 2005. The first decade of the Massachusetts tobacco control program. Public Health Reports 120(5):482-495.
Leviton, L. C., L. Kettel Khan, and N. Dawkins, eds. 2010a. The Systematic Screening and Assessment Method: Finding innovations worth evaluating. New Directions for Evaluation (125):1-118.
Leviton, L. C., L. Kettel Khan, D. Rog, N. Dawkins, and D. Cotton. 2010b. Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health 31:213-233.
Mercer, S. L., B. J. DeVinney, L. J. Fine, L. W. Green, and D. Dougherty. 2007. Study designs for effectiveness and translation research: Identifying trade-offs. American Journal of Preventive Medicine 33(2):139-154.
Mercer, S. L., L. W. Green, M. Cargo, M. Potter, M. Daniel, R. S. Olds, and E. Reed-Gross. 2008. Reliability-tested guidelines for assessing participatory research projects. In Community-based participatory research for health: From process to outcomes. 2nd ed., edited by M. Minkler and N. Wallerstein. San Francisco, CA: Jossey-Bass. Pp. 407-418.
Merzel, C., and J. D’Afflitti. 2003. Reconsidering community-based health promotion: Promise, performance, and potential. American Journal of Public Health 93(4):557-574.
Minkler, M., and N. Wallerstein. 2008. Introduction to CBPR: New issues and emphasis. In Community-based participatory research for health: From process to outcomes. 2nd ed., edited by M. Minkler and N. Wallerstein. San Francisco, CA: Jossey-Bass. Pp. 5-24.
National Heart, Lung, and Blood Institute. 2012. Healthy communities study. http://www.nhlbi.nih.gov/hcs/index.htm (accessed April 10, 2013).
Osei-Assibey, G., S. Dick, J. Macdiarmid, S. Semple, J. J. Reilly, A. Ellaway, H. Cowie, and G. McNeill. 2012. The influence of the food environment on overweight and obesity in young children: A systematic review. BMJ Open 2:e001538. doi: 10.1136/.
Papas, M. A., A. J. Alberg, R. Ewing, K. J. Helzlsouer, T. L. Gary, and A. C. Klassen. 2007. The built environment and obesity. Epidemiologic Reviews 29:129-143.
Pateriya, D., and P. Castellanos. 2003. Power tools manual: A manual for organizations fighting for justice. Los Angeles, CA: Strategic Concepts in Organizing and Policy Education.
Patton, M. Q. 2008. Utilization-focused evaluation. 4th ed. Thousand Oaks, CA: Sage Publications.
Preskill, H., and N. A. Jones. 2009. Practical guide for engaging stakeholders in the evaluation process. Princeton, NJ: Robert Wood Johnson Foundation.
Resnicow, K., T. Baranowski, J. S. Ahluwalia, and R. L. Braithwaite. 1999. Cultural sensitivity in public health: Defined and demystified. Ethnicity and Disease 9(1):10-21.
Ritchie, L. D., P. B. Crawford, D. M. Hoelscher, and M. S. Sothern. 2006. Position of the American Dietetic Association: Individual-, family-, school-, and community-based interventions for pediatric overweight. Journal of the American Dietetic Association 106(6):925-945.
Robbins, J. M., G. Mallya, M. Polansky, and D. F. Schwarz. 2012. Prevalence, disparities, and trends in obesity and severe obesity among students in the Philadelphia, Pennsylvania, school district, 2006-2010. Preventing Chronic Disease 9. http://dx.doi.org/10.5888/pcd9.120118 (accessed June 4, 2013).
Robinson, T. N., D. M. Matheson, H. C. Kraemer, D. M. Wilson, E. Obarzanek, N. S. Thompson, S. Alhassan, T. R. Spencer, K. F. Haydel, M. Fujimoto, A. Varady, and J. D. Killen. 2010. A randomized controlled trial of culturally tailored dance and reducing screen time to prevent weight gain in low-income African American girls: Stanford GEMS. Archives of Pediatrics and Adolescent Medicine 164(11):995-1004.
Rossi, P. H., M. W. Lipsey, and H. E. Freeman. 2004. Evaluation: A systematic approach. 7th ed. Thousand Oaks, CA: Sage Publications.
Roussos, S. T., and S. B. Fawcett. 2000. A review of collaborative partnerships as a strategy for improving community health. Annual Review of Public Health 21:369-402.
Rundle, A. G., M. D. Bader, C. A. Richards, K. M. Neckerman, and J. O. Teitler. 2011. Using Google Street View to audit neighborhood environments. American Journal of Preventive Medicine 40(1):94-100.
Saelens, B. E., and K. Glanz. 2009. Work group I: Measures of the food and physical activity environment: Instruments. American Journal of Preventive Medicine 36(4 Suppl):S166-S170.
Saelens, B. E., J. F. Sallis, J. B. Black, and D. Chen. 2003. Neighborhood-based differences in physical activity: An environment scale evaluation. American Journal of Public Health 93(9):1552-1558.
Sallis, J. F. 2009. Measuring physical activity environments: A brief history. American Journal of Preventive Medicine 36(4 Suppl):S86-S92.
Sallis, J. F., and K. Glanz. 2006. The role of built environments in physical activity, eating, and obesity in childhood. Future of Children 16(1):89-108.
Sallis, J. F., and K. Glanz. 2009. Physical activity and food environments: Solutions to the obesity epidemic. Milbank Quarterly 87(1):123-154.
Sallis, J. F., R. B. Cervero, W. Ascher, K. A. Henderson, M. K. Kraft, and J. Kerr. 2006. An ecological approach to creating active living communities. Annual Review of Public Health 27:297-322.
Sanson-Fisher, R. W., B. Bonevski, L. W. Green, and C. D’Este. 2007. Limitations of the randomized controlled trial in evaluating population-based health interventions. American Journal of Preventive Medicine 33(2):155-161.
Sayers, S. P., J. W. LeMaster, I. M. Thomas, G. F. Petroski, and B. Ge. 2012. Bike, walk, and wheel: A way of life in Columbia, Missouri, revisited. American Journal of Preventive Medicine 43(5 Suppl 4):S379-S383.
Shadish, W. R., T. D. Cook, and L. C. Leviton. 1991. Foundations of program evaluation: Theorists and their theories. Newbury Park, CA: Sage Publications.
Shadish, W. R., T. D. Cook, and D. T. Campbell. 2002. Experimental and quasi-experimental designs for generalized causal inference. 2nd ed. Boston, MA: Houghton Mifflin.
Shaw, I. F., J. C. Greene, and M. M. Mark. 2006. The SAGE handbook of evaluation. London: Sage Publications.
Tang, H., E. Abramsohn, H. Y. Park, D. W. Cowling, and W. K. Al-Delaimy. 2010. Using a cessation-related outcome index to assess California’s cessation progress at the population level. Tobacco Control 19(Suppl 1):i56-i61.
Taylor, B. T., P. Fernando, A. E. Bauman, A. Williamson, J. C. Craig, and S. Redman. 2011. Measuring the quality of public open space using Google Earth. American Journal of Preventive Medicine 40(2):105-112.
Wagner, E. H., T. M. Wickizer, A. Cheadle, B. M. Psaty, T. D. Koepsell, P. Diehr, S. J. Curry, M. Von Korff, C. Anderman, W. L. Beery, D. C. Pearson, and E. B. Perrin. 2000. The Kaiser Family Foundation Community Health Promotion Grants Program: Findings from an outcome evaluation. Health Services Research 35(3):561-589.
Wang, C. C., S. Morrel-Samuels, P. M. Hutchison, L. Bell, and R. M. Pestronk. 2004. Flint photovoice: Community building among youths, adults, and policymakers. American Journal of Public Health 94(6):911-913.
Wang, Y. C., C. T. Orleans, and S. L. Gortmaker. 2012. Reaching the Healthy People goals for reducing childhood obesity: Closing the energy gap. American Journal of Preventive Medicine 42(5):437-444.
Waters, E., A. de Silva-Sanigorski, B. J. Hall, T. Brown, K. J. Campbell, Y. Gao, R. Armstrong, L. Prosser, and C. D. Summerbell. 2011. Interventions for preventing obesity in children. Cochrane Database of Systematic Reviews (12):CD001871. doi: 10.1002/14651858.CD001871.pub3.
Weinstein, M. C., and W. B. Stason. 1976. Hypertension: A policy perspective. Cambridge, MA: Harvard University Press.
Wholey, J. S., H. P. Hatry, and K. E. Newcomer. 2010. Handbook of practical program evaluation. 3rd ed. San Francisco, CA: Jossey-Bass.
Yin, R. K. 2008. Case study research: Design and methods. 4th ed. Thousand Oaks, CA: Sage Publications.
Yin, R. K., and K. A. Heald. 1975. Using the case survey method to analyze policy studies. Administrative Science Quarterly 20:371-381.