driven process for generating evidence for problem solving. Taking full advantage of this lesson will require building an infrastructure of surveillance capacity and tools for self-monitoring of the implementation and effects of interventions.

Continuous Quality Assessment of Ongoing Programs

Once interventions have been established as evidence-based (in the larger sense suggested in this chapter of a combination of evidence, theory, professional experience, and local participation), continuous assessment of the quality of implementation is necessary. Such assessment ensures that interventions do not slip into a pattern of compromised effort and attenuated resources, and that they build on experience with changing circumstances, clientele, and personnel. Methods for continuous quality assessment have been borrowed from industry (e.g., Edwards and Schwarzenberg, 2009) and adapted to public health program assessment to support the development of practice-based evidence in real time with real programs (Green, 2007; Katz, 2009; Kottke et al., 2008).

Continuous quality assessments of individual interventions can be pooled and analyzed for their implications for the adjustment of programs and policies to changing circumstances. These findings, in turn, can be pooled and analyzed at the state and national levels to derive guidelines, manuals, and interactive online guides for a combination of best practices and best processes. What meta-analyses and other systematic reviews may be telling us, with their inconsistent results or variability of findings over time, is that there is nothing inherently superior about most intervention practices. Rather, social, epidemiological, behavioral, educational, environmental, and administrative diagnoses lead to the appropriate application of an intervention to suit a particular purpose, population, time, setting, and other contextual variables. Such was the conclusion of a comparative meta-analysis of patient education and counseling interventions (Mullen et al., 1985).

Similarly, in clinical practice, where the application of evidence-based medicine (EBM) is generally expected, “Physicians reported that when making clinical decisions, they more often rely on clinical experience, the opinions of colleagues, and EBM summarizing electronic clinical resources rather than refer directly to EBM literature” (Hay et al., 2008, p. 707). These alternatives to the direct and simple translation of EBM guidelines, as in other fields of practice, should not be surprising, given the limited representation of patient types, populations, and circumstances in EBM studies. The challenge, then, is how to use the experience and results from these combinations of explicit scientific evidence and tacit experiential evidence to enrich the evidence pool. Hay and colleagues (2008) suggest methods of “evidence farming” to systematize the collection of data from clinical experience to feed back into the evidence–practice–evidence cycle.

Copyright © National Academy of Sciences. All rights reserved.