2
Setting the Stage: Evaluating Efforts to Prevent and Address Sexual Harassment
The workshop began with a discussion about what is currently known about how higher education institutions are approaching sexual harassment prevention and evaluation, including the reasons that an institution would or should evaluate its prevention efforts.
Elissa Perry, Columbia University, provided an overview of the evaluation of sexual harassment prevention efforts in higher education, based on a paper commissioned from her by the workshop planning committee.1 Her paper describes sexual harassment prevalence and the importance of prevention and evaluation work, addresses how higher education institutions are approaching sexual harassment prevention work, identifies barriers to the evaluation of sexual harassment prevention efforts, and offers evidence-based suggestions for the evaluation process. Importantly, the paper highlights that strategies for addressing sexual harassment extend beyond individual- and group-level efforts and beyond traditional approaches to prevention, such as training and education. For example, institutional factors, including leadership, organizational structure, practices, and systems (not directly related to victim support or complaint procedures), may also play a role in preventing sexual harassment.
Research supports the idea that prevention programs are most effective when they are continuously evaluated, stated Perry. Evaluations yield information that can be used to justify allocations of resources and time, assess whether the intervention is having the intended effects, and determine whether it should be revised or discontinued. Training transfer, or applying knowledge acquired during training to a targeted job or role, is also more likely when the training is frequently evaluated and its effects on behavior and organizational performance are assessed.
___________________
1 Available at: https://www.nap.edu/catalog/26279.
Perry noted that higher education institutions and other organizations that have not engaged in evaluation initiatives often cite a lack of resources. Other reasons for not conducting evaluation have included the assumption that the intervention is effective, a lack of agreement on what should be evaluated, and limited knowledge about how to conduct evaluations. Importantly, there is often a disconnect between what is measured and the outcomes the intervention is intended to affect.
STEPS IN PROGRAM EVALUATION
Perry also provided an overview of a generalized program evaluation process. The first step is to conduct a needs assessment, which involves diagnosing who needs to be trained, for what, and when. A needs assessment can provide an understanding of the specific needs, the needs the intervention is designed to address (e.g., greater awareness, changed behavior, improved climate), and clarity around the outcomes the intervention should be designed to affect. It should be tied directly to the evaluation plan, stated Perry.
The second step is to select appropriate program measures. Formative evaluations focus on improving the quality of the program, including its delivery and design, while summative evaluations, which are typically conducted later in the life cycle of a program, can be used to provide evidence of the effects of the program. Perry noted that the selection of appropriate program outcome measures should be based on needs assessments. One should also consider how quickly evaluation information is needed and how long it will take for the program to affect the outcomes of interest, stated Perry.
The third step is to select an appropriate evaluation design. While randomized controlled trials are considered the “gold standard” of evaluation, Perry noted, they can be difficult, if not impossible, to implement in the real world. Instead, quasi-experimental designs (e.g., pre-test/post-test, time-series designs) can be used. She added that it is important to use a confirmatory evaluation approach. As described in her paper, a confirmatory evaluation approach “is based on a clear theory of how the program is expected to impact the outcomes of interest and looks for and interprets patterns of relationships based on the theory,” and it may help strengthen support for a causal relationship between program participation and targeted outcomes.
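As a hypothetical illustration (not part of Perry's paper or the workshop materials), the pre-test/post-test design mentioned above reduces to comparing outcome measures collected before and after the intervention. A minimal sketch, using made-up survey scores:

```python
# Illustrative pre-test/post-test comparison for a quasi-experimental design.
# Data values are invented for this sketch; they are not from any study.
from statistics import mean, stdev

def pre_post_effect(pre, post):
    """Return the mean change and a standardized effect size
    (mean paired difference divided by the SD of the differences)."""
    diffs = [after - before for before, after in zip(pre, post)]
    return mean(diffs), mean(diffs) / stdev(diffs)

# Hypothetical climate-survey scores (1-5 scale) for five respondents.
pre = [2.1, 3.0, 2.5, 2.8, 3.2]
post = [2.9, 3.4, 3.1, 3.0, 3.6]
change, effect = pre_post_effect(pre, post)
```

A design like this cannot by itself rule out other explanations for the change, which is why Perry emphasizes pairing it with a confirmatory, theory-driven interpretation.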
Evaluating complex social programs, such as sexual harassment interventions, requires a different approach than evaluating less complex programs, because complex programs include multiple intervention strategies for multiple stakeholders, are delivered by human agents, and are implemented in complex institutions, Perry stated. To address this, one approach may be to develop a logic model or program theory that identifies program-related constructs and maps relationships between these constructs and program outcomes. The logic model should include context, inputs, processes, and outcomes (see example in Figure 2-1). Another approach is to employ a multi-phased, mixed-methods approach.
Many institutions are collecting an extraordinary amount of data related to sexual harassment, including data on the impact of prevention interventions. Climate surveys are used frequently to assess the prevalence of sexual harassment, students’ knowledge of and attitudes toward policies and resources, and student satisfaction after using campus resources, but many other tools can also be used to support program evaluation. Perry stated that higher education institutions should consider whether data collection and evaluation are occurring in a systematic fashion, for example, by conducting needs assessments. Another consideration, she noted, is whether program interventions and evaluations are being driven by the logic model or program theories that were designed to outline the key program-related activities and outcomes.
EXISTING MEASURES AND METRICS FOR EVALUATING CHANGE IN ORGANIZATIONAL CLIMATE
Several presenters noted that having measures and metrics that align with program goals is essential to the evaluation of sexual harassment prevention programs, and speakers discussed examples of existing measures that might be used in such evaluations.
Emily Huang, Oregon Health and Science University, presented an overview of safety climate as an indicator of workplace safety (see Box 2-1). She began by defining safety culture as shared core values and beliefs that interact with an organization’s structures and control systems to produce behavioral norms. Safety climate is a measurable aspect of safety culture: employees’ perceptions of an institution’s safety policies, procedures, and practices. It represents the overall importance and “true” priority of safety at work. Studies show that when employees perceive that a company cares about them, they report higher job satisfaction. The key dimension is managerial commitment to safety, which requires internal consistency among policies, procedures, and practices. Safety climate can serve as a robust predictor of future injury, Huang noted, and it provides a baseline from which to measure impact. Reducing sexual harassment in the workplace contributes directly to workplace safety for employees; as one workshop participant noted, “I see diversity and inclusion as a form of safety.”
Huang provided an example of a scale that captures safety culture from the perspectives of management and employees (see Figure 2-2). The scale measures how management may or may not be working to improve safety levels or providing information on safety issues. It also assesses the extent to which a direct supervisor engages in discussions with employees about how to improve safety and whether employees are following safety rules. The scale may be applicable to efforts to evaluate sexual harassment. Huang added that people want to see leaders leading by example to ensure inclusive workplaces.
Another example of a measure that could be applied to the evaluation of sexual harassment prevention is the masculinity contest culture (MCC) scale (see Figure 2-3). Introducing this measure, Jennifer Berdahl, University of British Columbia, said that research indicates that men are most likely to harass when masculinity is highly valued in the culture and is threatened. Harassment targets are disproportionately those who threaten the perpetrator’s masculinity. Work environments that motivate men to value, prove, and defend their masculinity also motivate them to harass others based on sex.
To develop this measure, Berdahl explained, experts from a range of fields generated over 130 potential items to capture masculinity culture and refined the items with online samples. A four-factor scale was developed to address identified ways to prove manhood at work: dog-eat-dog (ruthless competition); show no weakness (emotions); strength and stamina (physicality); and put work first. Respondents were asked to rate their work environment on a scale from 1 (not at all true of my work environment) to 5 (entirely true of my work environment). MCC scores were found to be higher in more male-dominated organizations.
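Scoring a multi-factor Likert scale of this kind typically amounts to averaging item responses within each factor and then across factors. The following is an illustrative sketch only: the item-to-factor assignments and response values are hypothetical, not the published MCC items.

```python
# Hypothetical sketch of scoring a four-factor, 1-5 Likert scale such as the
# MCC scale. Item names and factor assignments are invented for illustration.
from statistics import mean

FACTORS = {
    "dog_eat_dog": ["item1", "item2"],
    "show_no_weakness": ["item3", "item4"],
    "strength_and_stamina": ["item5", "item6"],
    "put_work_first": ["item7", "item8"],
}

def score_scale(responses):
    """Average responses within each factor, then across the four factors."""
    factor_scores = {
        factor: mean(responses[item] for item in items)
        for factor, items in FACTORS.items()
    }
    factor_scores["overall"] = mean(factor_scores[f] for f in FACTORS)
    return factor_scores

scores = score_scale({
    "item1": 4, "item2": 5, "item3": 3, "item4": 4,
    "item5": 2, "item6": 3, "item7": 5, "item8": 4,
})
```

Higher factor and overall scores would indicate a work environment the respondent rates as more of a masculinity contest culture.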
The MCC scale also predicts negative organizational and individual outcomes for both men and women. Berdahl noted that MCC scores correlate strongly (0.5 to 0.7) with organizational outcomes such as few women in management, toxic leadership, and identity-based harassment with low psychological safety, and moderately (0.3 to 0.5) with individual outcomes such as poor personal performance, poor mental health, burnout, and job dissatisfaction (see Figure 2-4; also Glick et al., 2018).
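Correlations of this kind are straightforward to compute once scale scores and outcome measures are collected. A minimal sketch of a Pearson correlation, using made-up data rather than Berdahl's:

```python
# Pearson correlation between hypothetical scale scores and an outcome.
# All data values are invented for illustration.
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

mcc_scores = [1.5, 2.0, 3.5, 4.0, 4.5]   # hypothetical MCC scale scores
burnout = [1.0, 2.5, 3.0, 4.5, 4.0]      # hypothetical burnout measures
r = pearson_r(mcc_scores, burnout)
```

In practice an evaluation would also report statistical significance and sample size, not the coefficient alone.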
Alec Smidt, Yale University School of Medicine, discussed measures of institutional betrayal, defined as an institution’s failure to prevent harm or to respond supportively to wrongdoing when those within the institution (such as staff, students, and faculty) have a reasonable expectation of not being harmed. Smidt described studies indicating that college women who were sexually assaulted experienced negative psychological and physical health outcomes following the trauma (see Cortina et al., 1998; Huerta et al., 2006). Sexual harassment by those in powerful positions was more likely to result in institutional betrayal. By contrast, institutional courage is defined as action taken on behalf of an institution to address harassment and support those affected; as one participant noted, institutional courage implies institutional risk. Smidt discussed the institutional betrayal and support questionnaire (IBSQ), which was used in 2014 at the University of Oregon (see Figure 2-5). As described above, existing measures and predictors of organizational outcomes could be used when evaluating efforts to address sexual harassment and to understand their effectiveness and impact on organizational climate.
Following the presentations, participants discussed how these and other measures might be applicable in their work and the importance of aligning measures with program goals. For example, participants noted the importance of ongoing stakeholder engagement throughout an evaluation, and the capacity of stakeholders to implement the program was identified as something that might be measured. Regarding the MCC scale, Diana Lautenberger, Association of American Medical Colleges, noted that it would be helpful to assess the willingness of men and others to intervene, act as allies, and challenge other men’s behavior. Additionally, Vicki Magley, University of Connecticut, noted that while measures of institutional betrayal and support or courage appear to be focused on those who are directly targeted by sexual harassment, perhaps even requiring reporting, these could also be applied more broadly at the institutional level.