As Dean Fixsen from the University of North Carolina at Chapel Hill reminded workshop participants, the goal is not building the evidence or entertaining researchers through debate and development of methodologies. The goal is to prevent violence and improve lives for the benefit of individuals and communities. Evidence-based programming is an experiment in the effort toward this goal, and Fixsen stressed that disseminating the evidence is not enough to reach it; the evidence must be implemented. He commented on the transition toward effective implementation—an ongoing movement that began with letting information trickle down to practice; moved toward facilitating the flow of information to practice through manuals, videos, workshops, and websites; and now concentrates on making implementation happen through focused and purposeful strategies. Speaker Brian Bumbarger from The Pennsylvania State University called for this movement from the science of prevention to what he called the service of prevention, recognizing that implementation of best practices is as important as studying best practices.
The discussions during the workshop demonstrated that there is evidence for violence prevention, reflected Forum co-chair and workshop speaker Jacquelyn Campbell from Johns Hopkins University School of Nursing. There are both systems and programs that work, and resources and databases have been created and updated with the goal of providing easy access to this evidence. However, to move the field of violence prevention forward, a greater focus is needed on the other part of the workshop’s charge—that is, how to implement evidence and how to translate programs that have been proven to work so that they are culturally appropriate and contextually relevant.
APPLYING EVIDENCE TO PROGRAM DESIGN
Workshop speakers discussed several factors to be considered when designing evidence-based programs: theoretical frameworks, surveillance data, characteristics of target populations, risk and protective factors, and the larger context.
Theoretical Frameworks
Several speakers suggested that, to maintain clear programmatic focus and better identify intended goals and outcomes, interventions could be designed on the basis of well-supported underlying theories. Building programs on assumptions that are not grounded in established theory can lead to programs that inadequately address their intended goals. For example, Forum member and workshop speaker Michael Phillips from Shanghai Jiao Tong University School of Medicine mentioned several suicide prevention training programs for mental health practitioners that are based on the presumption that practitioners can identify high-risk individuals. However, this presumption is not supported by the current body of evidence, which suggests that although high-risk groups can be identified, available tools are inadequate to determine suicide risk or predict suicidal behavior in individuals (IOM, 2002).
Phillips also stressed that, given the dynamic contextual factors related to violence prevention, programs based on theoretical frameworks need to be flexible and able to adjust as the underlying models change. For example, current models of suicidal behavior and prevention focus mostly on the individual and do not always capture the contextual nature of suicide, such as changes in culture, risk factors, and environmental influences. Programs based on current theories might not work in 5 years, and thus consistent monitoring and adjustment informed by evolving theory can be valuable. If practitioners and planners consistently readjust their approaches to align with strongly supported models, they are more likely to remain focused on improving the outcomes they initially identified.
Surveillance Data
Once a program has an established focus and theoretical basis, appropriate surveillance data could inform the design of targeted interventions. Workshop speaker Mark Bellis from Liverpool John Moores University noted that surveillance data will be used differently by programs depending
on their intended goals and outcomes. As an example, he discussed a study conducted by the Trauma and Injury Intelligence Group in the United Kingdom which found that most violent incidents in one city happened near bars. A program invested in environmental control might target the bar area with a criminal justice response, but programs that emphasize primary prevention would instead target interventions in neighborhoods of the offenders to address the underlying causes of violence.
Target Populations
Several speakers discussed how data can be used to determine the target populations for violence prevention interventions. Evidence on program effectiveness in different cultures, age groups, and socioeconomic groups can inform how programs for specific populations are designed. For example, Bellis explained that socioeconomic status can affect the onset of some developmental experiences, indicating that the focus age group may need to differ among populations even for the same intervention. He cited a study in the United Kingdom that found that use of emergency services peaks at age 13 among deprived females and at age 20 among affluent females (Bellis et al., 2012). Similar differences in demand for emergency services are found between deprived and affluent elderly populations. Key populations will differ depending on the type of intervention and desired outcomes, and the evidence base can help to identify the individuals to target for more effective and efficient interventions.
Risk and Protective Factors
Workshop speakers discussed the importance of identifying risk and protective factors in communities and opportunities for interventions that address them. Bumbarger described the work of Communities That Care (CTC) as an example for determining appropriate interventions based on risk and protective factors. CTC determines these factors in a neighborhood using local epidemiological data, which are used to create community-specific profiles in which CTC compares the community risk and protective factor rates to controls. CTC then works with communities to develop a community action plan based on evidence-based programming that best focuses on decreasing the community’s most prevalent risk factors and increasing the least prevalent protective factors (Hawkins et al., 2012).
Bumbarger shared a study that showed that Pennsylvania communities implementing programs using CTC’s approach of addressing risk and protective factors reported lower rates of risk factors, substance abuse, and delinquency, and higher levels of protective factors. A 5-year longitudinal study found that children in CTC-supported communities had different
developmental trajectories from children in similar Pennsylvania communities that were not using the CTC model. Children in CTC-supported communities had lower rates of delinquency, less self-reported negative peer influence, better school engagement, and better academic achievement. Bellis pointed out that this was a crime prevention initiative funded by the state crime commission, yet the largest outcome was increased academic achievement. By targeting interventions based on data on risk and protective factors rather than outcomes, communities were able to see results in areas beyond crime prevention.
Bumbarger added that this study also looked at the impact of CTC programs on the juvenile justice system. In 2010, there was a 3 percent lower rate of youth placed in delinquent correctional centers in communities that adopted the evidence-based programs recommended by CTC compared with communities that did not. Bumbarger noted that a 3 percent difference is equivalent to $3 million in Pennsylvania, an amount significant enough to catch a policy maker’s eye. This study was offered as an example of research that leads to action and impact: the governor of Pennsylvania announced his decision to save $10 million by closing a 100-bed juvenile correctional facility and reinvesting that amount in evidence-based prevention programs.
The Larger Context
The larger context in which an intervention is being implemented may be considered in the intervention design. For example, is there access to a functioning health care system or criminal justice system, both of which are critical components of violence prevention? These challenges are particularly relevant in low- and middle-income countries (LMICs). Workshop speaker Alys Willman elaborated on this point from her work at the World Bank. Some components of domestic violence prevention programs, such as restraining orders and shelters, do not exist in LMICs. Such gaps illustrate how context affects considerations for program design and implementation.
IMPLEMENTING EVIDENCE-BASED PROGRAMS
In addition to using theory, prevalence data, and program and strategy effectiveness data to determine program focus and goals, program designers may use knowledge obtained from implementation research to design interventions that function well in the real world. Fixsen reiterated that successful implementation will result in socially significant outcomes—changes that people notice and feel directly in their communities. It is through implementation research that practitioners may ensure programs
that are being carried out are contextually appropriate and culturally relevant. Speakers discussed several areas of consideration when implementing programs:
- program fidelity,
- role of practitioners and practitioner training,
- incentives for evidence-based programming,
- replacing ineffective programs,
- implementation teams, and
- improvement considerations.
Program Fidelity
Fixsen noted that many studies are carried out with the assumption that the program being evaluated followed its stated description, but the study does not actually measure the program’s fidelity to this description. Fixsen and colleagues reviewed 1,200 outcome studies and found that only 18 percent of them actually assessed the independent variable to determine its fidelity to the defined intervention. The remaining 82 percent of studies purported to measure outcomes of an intervention described in their methods sections, but they more likely measured a variation of the intervention that had adapted to real-world challenges and changes (Fixsen et al., 2005). Fixsen cautioned that unless program fidelity is also measured in evaluations, there can be no certainty that the measured outcomes are the result of the intervention as defined.
Fixsen also suggested that competency, strong leadership, and effective organization are the factors that drive successful implementation and ultimately support the practitioner in direct delivery of services. He encouraged workshop participants to develop practical fidelity assessments in order to ensure that practitioners and managers are using innovations correctly and consistently to strengthen these drivers. If these drivers are strong and practitioners use innovations with fidelity, a proven program is more likely to reliably produce benefits.
Bumbarger added that the quality of implemented programs is more important than implementing a large quantity of prevention programs. The goal is not to bring evidence-based programs to as many places as possible, but to carefully choose the programs that will be the best fit and most effective in a certain context. He pointed out that dissemination is sometimes at odds with high-quality implementation—programs are more likely to be implemented with increases in dissemination of program effectiveness evidence, which makes it harder to ensure program quality and fidelity. Instead of pushing for large dissemination campaigns, he suggested that the violence prevention community focus on correctly implementing a small
number of interventions likely to work in a certain context. Identifying tipping points at which to intervene might be more effective than saturating the market with programs.
Roles of Practitioners and Practitioner Training
Several speakers commented that there is value for organizations to select and train good practitioners when implementing programs. Understanding specific strategies that make programs successful allows implementers to focus resources on specific components and then adapt programs for their local communities. Workshop speaker Harriett MacMillan from McMaster University provided an example of such data being used to improve replications of the Nurse–Family Partnership, which provides new mothers with home visits from nurses to provide early parenting support. To test the initial model of this program, David Olds led three randomized controlled trials that found the Nurse–Family Partnership model was indeed effective in a variety of populations. The third trial, in Denver, Colorado, included another element of comparison, which looked at the difference in effectiveness of nurse visitors and paraprofessional visitors. They found that nurses were much more effective than paraprofessionals, and the program evolved to focus only on using nurse visitors rather than both types (Olds et al., 2007). Campbell suggested that nurses have been effective implementers in this model because they have flexibility in delivery of the program and a professional knowledge base from which to draw. When they enter a family’s house, even though it may be the day to address nutritional adequacy or feeding practices, they also are there to address whatever problems that family is facing, such as mental and physical health and whether there is money to pay the rent.
Speaker Tammy Mann from the Campagna Center noted that some managers do not want to rigidly prescribe program methods to practitioners for fear of limiting their critical thinking. She noted that managers and direct practitioners often have a good understanding of theoretical frameworks. Implementing evidence-based programming does not mean replacing the discretion and knowledge practitioners have gained; instead, researchers and practitioners can use their judgment and knowledge together to determine how to best integrate evidence into practice and decision making.
Mann and Virginia Dolan from Anne Arundel County Public Schools both mentioned the need to provide practitioners with follow-up coaching and support after conferences and training. Mann noted that teachers often go to conferences where they are immersed in new and exciting ideas that they would like to use in their own teaching practices; however, when they return to their schools, they are faced with work demands and limited
support and resources for implementing new ideas. Mann suggested there may be value in the networking that happens at conferences, but without direct follow-up with practitioners much of the information is lost.
Fixsen noted that fidelity of training components to the original design is imperative if the training is to lead to the use of new practices. He mentioned a study by Joyce and Showers (2002), which showed that teaching the theory of educational practices together with discussion and training resulted in no use of the practices taught in the classroom. When trainers added opportunities for feedback and practice, 5 percent of trainees implemented new practices in the classroom. When coaching in the classroom was also added to the training regimen, the number of teachers who then used these practices in their teaching rose to 95 percent.
Incentives for Evidence-Based Programming
Several speakers mentioned the need to make use of evidence more desirable to practitioners. Workshop speaker Jim Bueermann from the Police Foundation pointed out that decisions of practitioners are often based not on evidence but rather on situational and community-specific motivations. Dolan added that pressures to generate and apply more evidence can often seem tedious and time-consuming for practitioners who typically already have heavy workloads. She noted that teachers, for example, often have negative associations with the term “evidence.” New strategies for measuring student performance, building new responses to findings, and increasing testing can be viewed as adding more work to their busy schedules and taking time away from teaching.
Leaders could add incentives and change organizational policies to encourage practitioners to apply evidence. As one example, Dolan suggested that practitioners and managers recognize and celebrate positive outcomes, and others discussed using political and professional rewards to encourage evidence use. Bueermann’s agency, for example, started adding questions about theories and research to interviews for competitive police positions. Because of this, interviewees started preparing for their interviews with research and literature reviews, and the dialogue in police offices started to include a broader knowledge base. In the late 1990s, Bueermann’s police department reorganized around an evidence-based approach that focused on addressing risk and protective factors. The department changed internal incentives to align with the new focus and conducted five randomized controlled trials to assess its work. Bueermann has called for nationally mandated reward systems that bring research to police and other practitioners to at least engage everyone in the same conversation.
Another example of an organizational effort to emphasize the importance of evidence is the Evidence Integration Initiative in the Office of
Justice Programs (OJP) that aims to equip all employees with an appreciation of evidence. Mary Lou Leary from the Department of Justice OJP explained that the objectives of the program are to improve the quantity and quality of evidence that the OJP generates, to effectively integrate evidence into policy and program decisions, and to improve the translation of evidence so that practitioners understand its relevance and application.
Replacing Ineffective Programs
Fixsen mentioned his colleague George Sugai’s rule: “For every new initiative started, two current initiatives should be stopped in order to maintain efficiency and focus.” However, he added that programs cannot be terminated overnight because people and communities depend on their services. He suggested that more effective interventions should be established and able to absorb clients from the old programs before the closings occur. Bumbarger noted that program implementation can be a competitive practice, and it is prudent for implementers to be sensitive about the replacement of programs and others’ ideas for effective methods.
Implementation Teams
Fixsen noted that researchers disseminate a lot of information to practitioners with little guidance on how to apply it. Implementation teams can step in to provide technical assistance and work simultaneously at multiple levels of the intervention to ensure that programs are implemented with fidelity. Fixsen said that, ideally, implementation teams have at least three people with expertise in innovation, implementation, and organizational change. The teams would be sustainable and able to tolerate member turnover. Fixsen said implementation teams with the correct expertise can get 80 percent of their partner organizations to implement their programs well within 3 or 4 years. If information is merely disseminated without the work of these teams, about 14 percent of organizations succeed, taking about 17 years (Fixsen et al., 2005). He added that implementation teams also increase the likelihood the programs will be sustainable and have long-lasting effects.
Improvement Considerations
Fixsen noted that program development is an ongoing practice of self-evaluation and improvement. The state of violence prevention is much better than in the past, but it is still not enough to address the magnitude of the problem, and, because of changing societal and environmental factors, interventions that are effective now might not work in a few years. The violence prevention community thus is cautioned not to become unconditionally committed to certain interventions, but always to look for a better way. Organizations could use a variety of methods to determine improvement opportunities, such as rapid-cycle, plan-do-study-act problem-solving models; usability testing for new products and programs; and practice–policy communication loops. Leary discouraged researchers, practitioners, and policy makers from thinking that they have the final answer and are done. Rather, determining what works, and what works better, is an ongoing process.
IMPLEMENTING EVIDENCE-BASED PROGRAMS ACROSS COUNTRIES
Jennifer Matjasko of the Centers for Disease Control and Prevention described health as “a dynamic state of well-being characterized by a physical, mental, and social potential that satisfies the demands of a life commensurate with age, culture, and personal responsibility.” According to this perspective, health by definition depends on context. She explained that when determining how to best implement violence prevention initiatives in various contexts, it is important to understand how evidence translates in different cultures and systems. She suggested that the violence prevention community leverage translational work used in areas such as HIV prevention. An understanding of how evidence applies to different contexts will lead to improved program scale-up and sustainability.
An understanding of the context-specificity of violence prevention is important because research conclusions from studies in one country cannot necessarily be applied to another country. Phillips said studies show that in China, only 37 percent of women who make serious suicide attempts have an active mental illness, but in Western countries, about 95 percent do. Strengthening mental health systems might be an effective response to suicide in Western countries, but perhaps not in China, where access to pesticides seems to drive suicide more than poor mental health does. Phillips elaborated by explaining that pesticide ingestion accounts for about a third of suicides globally, and research shows that suicide rates in China decrease as people move to urban areas and have less access to lethal means of suicide.
Phillips gave more examples of important differences in suicide data among countries: in Latin America, religion is a protective factor against suicide; in China, it is a risk factor. In Western countries, the ratio of male to female suicides is 3:1; in China it is 1:1. Again, the contextual factors are important for determining programmatic focus.
Phillips noted that 84 percent of suicides happen in LMICs, yet 95 percent of suicide research is conducted in high-income countries (HICs). Similarly, workshop speaker Neil Boothby from the U.S. Agency for
International Development and Daniela Ligiero from the Office of the U.S. Global AIDS Coordinator both mentioned the abundance of programs that have been proven effective in HICs but are not supported by any studies from LMICs, where systems and contexts are very different. Workshop speaker Catherine Ward from the University of Cape Town, South Africa, did point out that many of the challenges and solutions that exist in LMICs are similar to those in low-income areas of high-income countries, so when determining how to translate information and programs, it may be helpful to look more closely at the types of communities being researched rather than comparing entire countries. Currently there is little understanding of how evidence from one context can be applied to another, and workshop participants discussed ways of developing programs for a place (especially in LMICs) when little context-specific evidence is available.
Discussions on implementing evidence-based programs across countries focused on several key areas:
- barriers to consider,
- translatable theories,
- cost-effectiveness of programs,
- cultural adaptation, and
- local research and programming.
Key messages specifically from workshop breakout group discussions on applying knowledge to effective action in LMICs are summarized in Box 5-1.
Barriers to Consider
Speakers discussed barriers to consider before beginning program implementation in an LMIC. Several speakers pointed out that certain resources and infrastructure are missing from some countries and thus cannot be relied upon to support violence prevention. Willman noted that some components of domestic violence prevention programs, such as restraining orders and shelters, do not exist in LMICs, and Campbell mentioned that some countries do not have developed health care or criminal justice systems. Workshop speaker Julie Meeks Gardner from the University of the West Indies, Open Campus, noted that when establishing programs in LMICs, it is important for implementers to consider how to address space constraints, environmental issues, and deficiencies in human, material, and financial resources. When practitioners and program developers in LMICs need to access research and informational material, they often face obstacles such as high journal costs and language barriers.
Willman also pointed out that the political structure of some countries can interfere with advances in violence prevention. For example, it might be more difficult in a country that elects leaders for 18-month terms to secure political commitment to a program that only shows effects after several years. Meeks Gardner added that some countries have policies that are not aligned with one another. For example, the Jamaican government’s national security plan focuses on addressing gang-related problems, whereas its Peace Management Initiative focuses more on community development and initiatives to prevent gang involvement before it becomes a problem.
Translatable Theories
Several speakers discussed theories that are universal and can be applied in LMICs as a basis for violence prevention programs. For example, Ward mentioned that programs based on social learning theory work well across many different contexts. Therefore, to expand programming in a certain area, implementers could scale up existing programs based on this theory or use the theory to design new programs. Matjasko noted that the developmental tasks and needs of adolescence are also universal, despite different cultures having varying definitions and levels of recognition of adolescence. Implementers across the globe can then build youth-focused programs to address the needs of adolescents using this framework.
Cost-Effectiveness of Programs
Ward noted that programs are often too costly to implement in LMICs and that more understanding of the true costs and benefits of violence prevention in these countries would be valuable. She noted that program advocates often point out the long-term cost-effectiveness of their programs, but this might not be a very influential consideration in countries with much lower per capita spending. For example, U.S. estimates of the cost of the Nurse–Family Partnership are about $8,000 to $9,000 per child. In 2009, the average low-income country spent about $25 per child on health. Regardless of the cost-effectiveness of the program, the initial costs are enormous and prohibitive for countries spending so little per capita on health. Furthermore, she noted that even a cost-effective program will not save money in the short term for a department of social services that currently spends nothing on such programs; relative to doing nothing, any new program is an added cost.
BOX 5-1
Discussions from the Workshop Exercise in Applying Knowledge into Effective Action in Low- and Middle-Income Countries

During the workshop, participants were divided into four breakout groups and asked to respond to a scenario regarding violence in Nairobi, Kenya. The breakout groups were assigned topics for discussion: evidence-based programming and decision making based on community needs; identification and engagement of key stakeholders to be involved in selecting, planning, and implementing evidence-based interventions; adaptation of evidence-based programs to local conditions and culture; and evaluation and sustainability. After the sessions, breakout participants provided their insights and thoughts from the discussions.

Forum member Colleen Scanlon from Catholic Health Initiatives provided comments on evidence-based programming and decision making based on community needs. She noted that the violence discussed in the case study could perhaps be addressed with programs focusing on education, income generation, job creation, and education for males on gender violence. Whatever the response, she noted that one component could be the creation of a coordinating network to promote and nurture partnerships with various stakeholders throughout the program. Scanlon added that in order to determine the community’s needs and plan the appropriate response, more data might be needed on the realities and conditions that prompted violence, the community’s assets and existing resources, the relationship of the government with the community, the population subsets that could be targeted, programs in similar regions that could be replicated, and surveillance information on daily movement and active social areas.

Workshop planning committee co-chair and Forum member Katrina Baum from the Department of Justice provided comments on key stakeholders to involve in identifying the problem and planning and implementing the response. A wide range of stakeholder involvement is necessary to help identify other potential public and private partners, identify community priorities, identify other communities with similar problems, and encourage community support of the project. Baum noted that in addition to the obvious interested stakeholders, it would be prudent for program developers to involve groups of people who might potentially show resistance to the project. For example, implementers could consult police who might themselves be involved in the violence, and attempt to understand and find overlap with the goals of policy makers who will be instrumental to the program’s success. The program could be implemented in a partnership with community leaders in a language that they understand, and implementers could acknowledge the expertise of stakeholders and continually ask for their advice throughout the process. Baum emphasized that the establishment of trust between implementers and other stakeholders is vital to the success of the project, and program developers might focus on spending as much time as necessary to build strong relationships with the community and other partners.

Breakout group facilitator Dina Deligiorgis from United Nations Women commented on adapting violence prevention programs to local conditions and cultures. She said that before beginning to implement the program, stakeholders might focus their efforts on analyzing the community situation and local context. It would be useful for them to identify the prevalent risk factors and the cultural, religious, and institutional context; this will help planners to identify multiple points at which to intervene as well as structural and legal barriers they might face. For example, a situational analysis might expose that alcohol in Kibera is made, distributed, and consumed in different ways than nonlocals are used to, which could potentially inform new methods of implementing the prevention program. Phillips added some other considerations: ensuring that the local community identifies with and respects the cultural background of the program implementers; creating opportunities for bidirectional learning with other low- and middle-income countries that are similar to Nairobi and are implementing violence prevention programs; and providing affected communities with hope that their situation can change.

Breakout group facilitator Patricia Campie from the American Institutes for Research provided comments on considerations for evaluating and sustaining the response that might be implemented as a result of the Nairobi violence described in the case study. She noted the need to first assess the validity of data that are already available and then build on currently existing data systems to find new information. Evaluators may face certain challenges when working with available data systems, however; for example, police who are perpetrators of the violence might not report incidents, or victims might not want to identify their situations for fear of retaliation. Campie added that evaluators could engage the community and invest in qualitative research to comprehensively understand the program’s effects. She also noted that it is important for program developers to earn the trust of the community and be clear that community members are not merely study subjects but partner creators of hope and change. Campie mentioned that a focus on sustaining the belief that change is possible, rather than merely focusing on sustaining the program, can promote long-lasting violence prevention efforts.

Cultural Adaptation

Many speakers pointed out that culture plays an important role in how programs affect communities. Ward noted that cultures have varying values, literacy levels, beliefs, family structures, and child-rearing traditions. She mentioned that implementers in countries with no local research on program effectiveness face the challenge of trying to maintain the right amount of fidelity to the original program while adapting it to the local culture. Ward observed that some parenting programs seem to be more successful outside of their original contexts than other types of programs. She speculated that this is because the programs were designed to be collaborative and flexible, working with parents to reach family goals rather than instructing them.
Local Research and Programming
Though studies take money and time, several speakers commented that LMICs could move forward with local research on violence prevention. Boothby noted that LMICs could better understand the magnitude and causes of violence by developing and using active surveillance systems. Ward added that LMICs would benefit from knowing which risk and protective factors are prevalent in an area. She noted that countries do not have to spend resources establishing what the risk and protective factors are, because this information already exists; instead, they can concentrate on baseline studies to determine local prevalence. Phillips cautioned that even surveillance research requires thoughtful consideration of culture. For example, monitoring suicide rates would be difficult in some Islamic countries where suicide is illegal and people resist reporting incidents.
Matjasko commented that LMICs could use more information on which prevention delivery systems work best for them. In some countries, for example, schools are the best way to deliver prevention programs, but this is not necessarily universal. Phillips added that individuals and institutions can better support programs if they are aware of the resources available for violence prevention; developing systematic methods for performing situational analyses across various settings could therefore be useful.
Ward added that despite limited resources, researchers can still use the best available methods to test program effectiveness in all countries. She emphasized that this is perhaps most important in LMICs because they do not have money to waste on ineffective interventions.
Value of Multisectoral Efforts
Willman from the World Bank commented that throughout the 2 days of discussion, breaking down silos and working together was a recurring theme. She noted that violence has long been a public health issue and a criminal justice issue, but it is only more recently being recognized as a development issue. Violence has important economic impacts and dimensions, both at the individual and systems levels. Intimate partner violence has massive economic dimensions—it is difficult for someone to leave an abusive relationship without somewhere to go or the income to sustain them when they get there. She also noted the role of infrastructure in violence prevention; for example, city streets wide enough for police cars to access them and parks with streetlights so activities can be monitored at night.
Willman noted that some of the interventions that have shown the most promise for violence prevention in LMICs are ones with a strong economic dimension, such as microcredit financing. In the beginning, violence and violence prevention were not considered in the design of microcredit programs. However, implementers realized that some women participating in the programs were being battered when they came home with money. She explained that many of "the men did not understand where they [the women] were going when they were going to these community meetings and why they were coming home with money. What we have learned is that when you engage men in a positive way and they see that as family income and they see their wives as partners, they can do amazing things with that money." Willman noted that there is still a lot of work to be done in this area of microcredit programs, but it is showing promise. Other economic interventions in low- and middle-income settings, such as conditional cash transfers and youth employment, are now being designed and monitored in terms of their outcomes on the prevalence of violence.
Several important but often overlooked stakeholders were suggested for involvement in implementing violence prevention interventions: state health departments, policy makers and their staff, and city council members and managers.
Mark Rosenberg, Forum co-chair and workshop planning committee member, stated:
We have struggled for such a long time in violence [prevention] against the notion of fatalism. It is the counter-point to the idea of hope, that violence is evil. There has always been evil in the world—you are not going to do anything about it; it has always been with us; it will always stay with us, so why even try? This notion of fatalism unfortunately is still alive and well.… But I think this [workshop] was a tremendous effort toward overcoming fatalism and understanding ways forward.
Several workshop speakers commented that implementation of evidence-based violence prevention programs can provide individuals with the hope that a horrible situation in which they find themselves can change; that their situation will be better, both at an individual level and at a systemic level. Implementing interventions has the potential to directly create change and is the critical component that links research to real community outcomes and violence reduction.
Key Messages Raised by Individual Speakers
- Through implementation science, practitioners can ensure programs are contextually appropriate and culturally relevant (Fixsen).
- Practitioners who are implementing evidence-based programs are a key component of overall success (Campbell, Dolan, MacMillan, Mann).
- Implementation science can move the dissemination of evidence-based information toward effective application and the ultimate goal of improved well-being and safer communities (Bumbarger, Fixsen).
- Implementing effective evidence-based programs is a process of constant evaluation and adjustment to current context and state of the knowledge base (Fixsen, Leary, Phillips).