Dean Fixsen, Ph.D., Karen Blase, Ph.D., Melissa Van Dyke, M.S.W., and Allison Metz, Ph.D. National Implementation Research Network, Frank Porter Graham Child Development Institute, University of North Carolina at Chapel Hill
Much of the violence prevention literature is about interventions to prevent or treat violent behavior in individuals or groups. This work is to be applauded! Interventions have advanced a long way from the declaration a few decades ago that “nothing works” (Martinson, 1974). So many interventions exist now that there are reviews of reviews and meta-analyses across studies (Lipsey and Cullen, 2007). This is good news for violence prevention and treatment globally.
The next task is to develop evidence-based approaches to implement evidence-based programs. The complexities and difficulties encountered when attempting to use programs and interventions on purpose were documented in the 1970s (Pressman and Wildavsky, 1973; Fairweather et al., 1974; Van Meter and Van Horn, 1975; Fixsen et al., 1978). As noted then (Hough, 1975) and now (Kessler and Glasgow, 2011), people cannot benefit from interventions they do not experience.
Dobson and Cook (1980) documented “Type III errors” in research where outcomes were attributed to programs that did not exist in practice. Schoenwald et al. (2011) have outlined the key factors related to assessing the presence and strength of interventions in practice. Assessments of the independent variable (Naleppa and Cagle, 2010) are important to help discriminate implementation problems from intervention problems.
The purpose of this paper is to outline some key factors related to implementation that have emerged in the past several decades.
Applied Implementation Science
Implementation is the link between science and practice. Implementation is an active process that is designed to put into practice an activity or program of known dimensions (Fixsen et al., 2005). According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the “specific set of activities” related to implementation (implementation fidelity). In addition, the activity or program being implemented is described in sufficient detail so that independent observers can detect its presence and strength (intervention fidelity).
Applied implementation science is evidence based and mission driven (Fixsen et al., 2013). Applied research is done to help accomplish a goal, not just to satisfy an investigator’s curiosity or advance knowledge in a general way. Applied implementation science is focused on real issues that arise in the course of attempting to use evidence-based interventions in practice (Fixsen et al., 2001). Research done in support of the National Aeronautics and Space Administration space mission to land people on the moon and return them safely did not set out to investigate merely interesting variables; it set out to solve real problems such as heat-shield tiles falling off under the extreme temperatures encountered during re-entry. Similarly, attempts to use evidence-based interventions in practice on a socially significant scale encounter problems that require research-based implementation solutions (Nzinga et al., 2009; Glisson et al., 2010). Research related to these real problems has produced a good foundation for applied implementation science and helps accomplish the mission of using research in practice.
When thinking about implementation, the observer must be aware of two sets of activities (intervention-level activity and implementation-level activity) and two sets of outcomes (intervention outcomes and implementation outcomes).
A formula for successful uses of evidence-based programs in typical human service settings can be characterized as follows:
Effective innovations × Effective implementation × Enabling contexts = Socially significant outcomes
The formula for success involves multiplication (for more information, see http://nirn.fpg.unc.edu). If any component is weak, then the intended outcomes will not be achieved, sustained, or used on a socially significant scale. Like a serum and a syringe, innovations are one thing and implementation is something entirely different. Doing more research on a serum will not produce a better syringe; doing more research on an innovation will not produce better implementation methods or create more supportive organizations and systems (Blase et al., 2012).
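The multiplicative logic of the formula can be sketched as a toy calculation. The function name and the 0–1 effectiveness scores below are hypothetical, introduced purely for illustration; the point is that a near-zero value on any component drives the overall result toward zero, whereas an additive model would not behave this way.

```python
def expected_outcome(innovation, implementation, context):
    """Each argument is a hypothetical 0-1 effectiveness score.

    The product mirrors the formula: effective innovations x
    effective implementation x enabling contexts.
    """
    return innovation * implementation * context


# A strong innovation with weak implementation still yields a weak outcome:
strong_program_poorly_implemented = expected_outcome(0.9, 0.1, 0.8)  # 0.072

# Moderate strength on every component outperforms it:
balanced = expected_outcome(0.7, 0.7, 0.7)  # 0.343

# And if any component is absent entirely, the outcome is zero:
no_implementation = expected_outcome(1.0, 0.0, 1.0)  # 0.0
```

This captures the serum-and-syringe point in the paragraph above: improving the innovation score alone cannot compensate for a missing implementation component.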
The Active Implementation Frameworks help define WHAT needs to be done (effective interventions), HOW to establish what needs to be done in practice (effective implementation), WHO will do the work of implementation, and WHERE the innovation and implementation processes will be supported and improved to accomplish socially significant outcomes in typical human service settings.
WHERE evidence-based interventions can be or need to be used has been a vexing problem. This is especially true in global health applications