3
Using Prevention Science and Implementation Science to Better Evaluate Sexual Harassment Prevention Efforts
This chapter summarizes the presentations on the principles of prevention science and implementation science, including how they might be applied to the evaluation of sexual harassment prevention efforts.
APPLYING PREVENTION SCIENCE
Cindy Crusto, Yale University School of Medicine, and Lisa Hooper, University of Northern Iowa, presented key tenets of prevention science and program evaluation as a proposed framework for preventing sexual harassment in higher education. This information was based on their paper outlining how prevention science has been and is currently being applied to sexual harassment evaluation in higher education, commissioned by the workshop planning committee.1 The paper also offers an organizing prevention evaluation framework for sexual harassment for use in diverse higher education contexts.
___________________
1 Available at: https://www.nap.edu/catalog/26279.
Hooper noted that prevention science focuses on the development, implementation, and evaluation of evidence-based programs and strategies that reduce risk factors and enhance protective factors to improve the health and well-being of individuals, families, communities, and organizations. Drawing from a diverse range of disciplines, prevention science aims to understand the determinants of societal-, community-, and individual-level problems, such as sexual harassment. Prevention science also examines risk factors, which increase the likelihood of developing the priority problem; protective factors, which reduce that likelihood; and promotive factors, which are associated with optimizing health rather than merely protecting it.
To select and develop a prevention framework, Hooper discussed the need to consider:
- accumulated evidence, by identifying prevention programs and interventions with empirical support that meet the organizational need to prevent the problem;
- conceptual fit, through a review of available programs and interventions whose underlying theory appears to match the institutional need to prevent the problem; and
- practical fit, by reviewing available programs and interventions that appear to align with the targeted population and type of institution.
Commonly used principles to guide prevention program development and process include:
- assessing available empirical support,
- understanding historical efforts,
- establishing a theory of change or logic model,
- clearly identifying the priority problem,
- identifying risk and protective factors and prevention targets, and
- developing short- and intermediate-term outcomes.
Hooper presented a prevention science program development framework developed by the Substance Abuse and Mental Health Services Administration (see Figure 3-1). The agency describes two relevant guiding principles for the framework: (1) cultural competence, defined as “the ability of an individual or organization to understand and interact effectively with people who have different values, lifestyles, and traditions based on their distinctive heritage and social relationships,” and (2) sustainability, defined as “the process of building an adaptive and effective system that achieves and maintains desired long-term results.”
Hooper also presented another prevention framework for sexual harassment (see Figure 3-2). Factors that form the foundation of the framework include a focus on well-being and a culturally responsive, safe, and supportive climate and organization. Other key features include assessment, evaluation, prevention, and response practices; leadership and organizational support; and stakeholders who are trained to be culturally responsive, among other areas.
The program evaluation process should begin at program development and continue through program implementation and beyond, stated Crusto. Developing appropriate outcomes is a critical part of the evaluation process. Outcomes can reflect changes in learning, action, and condition. Not all outcomes occur at the same time, and some are necessary before others can happen; there are, for example, short-term, intermediate, and long-term outcomes.
To gain buy-in from participants during an evaluation, Crusto recommended including participants in the process from the beginning and asking evaluation questions that are important to them. The goal is to build an ongoing learning program or culture; the capacity to engage participants is key to achieving this goal. It is also critical to build in equity from the beginning of the program. Crusto added that a prevention lens can be helpful for understanding protective factors and determining when to intervene, whether at the individual or systems level, and can help shape decisions about how to modify or adapt a program.
The Community Readiness Model was also discussed as a tool to assess whether a community is prepared to take action on an issue. Crusto described nine stages of community readiness, ranging from no awareness (Stage 1) to a high level of community ownership (Stage 9). There are also opportunities to “create” readiness by carrying out community training before conducting the intervention.
Crusto also presented the application of a logic model to an initiative at Rutgers University aimed at significantly increasing faculty and staff education and skill development.2 The case study captured the project’s goals, outputs, and outcomes. Considering inputs and outputs can support training that is trauma-informed and focused on faculty, staff, and students. As was discussed, the case study illustrates the importance of clearly communicating expectations through university policies.
___________________
2 Available at: https://www.nap.edu/catalog/26279.
Regarding the development of training programs, Crusto stated that we know what works in prevention: a “one and done” model does not suffice to produce long-term knowledge or behavior change. At a minimum, booster sessions and follow-up assessments are needed over time.
APPLYING IMPLEMENTATION SCIENCE
Raechel Soicher and Kathryn Becker-Blease, Oregon State University, provided an overview of the field of implementation science, including how it can be applied to sexual harassment prevention efforts in institutions of higher education. Their presentation was based on a paper commissioned by the workshop planning committee.3 The paper includes a summary of the organizational barriers to preventing sexual harassment; defines implementation science, including a comparison to other intervention research fields; outlines a subset of the research methods, designs, and models used in implementation science; and provides examples of models of implementation science that may be relevant to evaluating sexual harassment prevention efforts. The paper also includes an overview of potential barriers and next steps for approaching sexual harassment prevention from an implementation science lens.
Soicher discussed multiple implementation outcomes of interest and how they could be applied to the evaluation of sexual harassment prevention: acceptability, adoption, appropriateness, feasibility, cost, reach, sustainability, and fidelity. (See Appendix E for worksheets that demonstrate how these outcomes of interest can be applied to the evaluation of sexual harassment prevention efforts.) Fidelity, she noted, is at the forefront of implementation science. Implementation science can be used to understand implementation processes, identify contextual influences, and assess external validity. It can also be used to identify the critical components of a program and describe what factors may influence successful implementation. Implementation science design methods, including those used for within-site and between-site analyses, are also discussed in the commissioned paper.
___________________
3 Available at: https://www.nap.edu/catalog/26279.
One of the strengths of the implementation science approach is that it can support and facilitate evaluation and can be readily adapted and applied in varying contexts. Foundational to this approach, Soicher stressed, is the need to engage the community in the evaluation. Partnerships with the community and between researchers and practitioners are critical. Soicher noted that effective practices, effective implementation, and enabling contexts (for example, collaborating with teams) can result in improved outcomes (see Figure 3-3).
Soicher presented the Practical, Robust Implementation and Sustainability Model (PRISM) as it applies to Vanderbilt University’s effort to alter departmental admission policies. The effort was designed to diffuse dependent relationships between graduate students and their advisors (see Box 3-1 and Figure 3-4; also see Case Study C in Appendix D). PRISM is an extension of a widely used planning and evaluation framework known as Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM). Soicher noted that this model can be used to improve implementation efforts as well as develop potential intervention and implementation questions.
Becker-Blease provided an overview of the Consolidated Framework for Implementation Research (CFIR) as applied to an initiative of the
Massachusetts Institute of Technology to offer lab-based inclusive culture workshops (see Figure 3-5 and Box 3-2; also see Case Study E in Appendix D). This framework synthesizes common constructs from across multiple implementation theories and provides a consistent taxonomy for building a knowledge base around what works where and why. The CFIR provides detailed construct definitions and steps for research processes and consists of five major domains:
- intervention characteristics, or the core components and aspects that should be preserved to maintain the effectiveness of the intervention;
- outer setting, or the economic, political, and social contexts that influence the implementation of an intervention within an organization;
- inner setting, or the local culture, climate, and structure of the organization that affect implementation;
- individual characteristics, which refer to recipients of the intervention and their knowledge and beliefs about the intervention; and
- process, or the details of the active change process.
The key challenge, stated Becker-Blease, is assessing whether gaining buy-in from faculty will ultimately have an impact on the effectiveness of the program. The framework may be used to assess training impact for this case example. (See Chapter 6 and the associated commissioned paper4 for a discussion of resources to support the implementation of this framework.)
___________________
4 Available at: https://www.nap.edu/catalog/26279.