economic, and other social factors that modify the observed relationships) in models that predict outcomes is limited. The experimental evidence is scarcer still and more limited in generalizability. Because of these limitations of the evidence-based practice literature, decision makers must turn to practice-based evidence and ways of pooling evidence from various extant or emerging programs and practices.

Pooling refers to consultation with decision makers who have dealt with the problem of obesity in a similar population or setting. After matching and mapping the evidence and theories, local decision makers will still be uncertain about how well the evidence applies to each of the mediators (i.e., mechanisms or intermediate steps) and moderators (i.e., conditions that make an association stronger or weaker) in their logic model for local action. At this point, they should turn to the opinions of experts and experienced practitioners in their own or similar settings (e.g., Banwell et al., 2005; D’Onofrio, 2001). Methods exist for pooling these opinions and analyzing them in ways ranging from systematic and formal to unsystematic and informal. For example, Banwell and colleagues (2005) used an adapted Delphi technique (the Delphi Method, described in Chapter 6) to obtain the views of obesity, dietary, and physical activity experts about social trends that have contributed to an obesogenic environment in Australia. Through this semistructured process, they were able to identify trends in expert opinion, as well as rank the trends to help inform public policy.

Practice-based evidence is evidence that comes primarily from practice settings, in real time, and from typical practitioners, as distinct from evidence generated in more academically controlled settings by highly trained and supervised practitioners conducting interventions under strict protocols. Such tacit evidence, often unpublished, draws on the experience of those who have grappled with the problem and/or the intervention in situations more typical of those in which the evidence would be applied elsewhere. Even when evidence from experimental studies is available, decision makers often ask, understandably, whether it applies to their context—in their practice or policy setting, circumstances, and population (Bowen and Zwi, 2005; Dobbins et al., 2007; Dobrow et al., 2004, 2006; Green, 2008). They want to weigh what the experimental evidence shows, with its strong certainty about the causal relationship between the intervention and the observed outcomes (internal validity), against the experience of their own and similar practices and practitioners, with its possibly stronger generalizability (external validity).

Finally, the use of pooling in weighing and supplementing evidence becomes an important negotiating process among organizations cooperating in community-level and other broad collaborative programs and policies. Each participant in such collaborations will weigh different types of evidence differently, and each will have an idiosyncratic view of its own experience and what it says about the problem and the proposed solutions (Best et al., 2003). This recognition of complexities in the evidence and multiplicities of experience has led to a growing interest in systems theory or systems thinking (Green, 2006) (see Chapter 4).

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001