5  Evaluation of Training Efforts

As summarized in the earlier chapters, there has been some increased attention paid to training health care providers about child abuse and neglect, intimate partner violence, and, to a lesser extent, elder maltreatment. Descriptions of family violence curricula and training models for health professionals and experiences with their implementation have been published (e.g., Dienemann et al., 1999; Ireland and Powell, 1997; Spinola et al., 1998; Thompson et al., 1998; Wolf and Pillemer, 1994).

Attempts have been made to document the extent to which clinicians actually receive instruction in how to identify and respond to patients involved in these situations. Surveys of practicing clinicians have found that considerable segments of health professionals have had little or no training in this area. Some studies have found modest positive correlations between individuals' reported involvement in training and family violence assessment and management practices (Currier et al., 1996; Flaherty et al., 2000; Lawrence and Brannen, 2000; Tilden et al., 1994). Although this observed relationship cannot be mistaken for evidence that these practices are a direct product of training, it does suggest the value of examining more carefully what is known about the effectiveness of family violence curricula and other training strategies on clinician behaviors, and it indicates the need for more explicit examination of causation.

At present, claims regarding what training is needed and how it should be carried out far outnumber the studies that provide empirical evidence to support them. As in many other areas of health provider training, several factors most likely contribute to this shortage of information. For example, accreditation criteria and other pressures on health professional schools place constraints on curricular content; limited funding interferes with evaluation; and legal, ethical, and patient barriers complicate evaluation efforts (e.g., Gagan, 1999; Sugg et al., 1999; Waalen et al., 2000).

Although this lack of evaluation is not unique to family violence training, increasing the number and quality of training opportunities in family violence has consistently been cited as central to narrowing the gap between recommended practices and professional behavior. To understand what improvements should be made, a strong evidential base for deciding how best to educate providers in this area is needed.

This chapter examines the available research base concerning the outcomes and effectiveness of family violence training. First, we summarize the search strategy used to locate and include evaluations of training interventions and then describe the characteristics of the training strategies and models that have been assessed, along with the basic features of the evaluation measures and designs. Finally, we discuss the inferences we can confidently draw from these studies so as to guide future training efforts. Due to the dearth of published studies on elder abuse training, the focus is on the outcomes and effectiveness of child abuse and intimate partner violence training.

SEARCH STRATEGY

Four bibliographic databases were systematically searched for studies that evaluated training efforts in family violence and were published prior to November 2000. These included MEDLINE, PsycInfo, ERIC, and Sociological Abstracts. Search terms included family violence, domestic violence, intimate partner violence, elder abuse/neglect, and child abuse/neglect coupled with training, assessment, evaluation, detection, and identification as both subject terms and text words. These searches were augmented by published bibliographies (i.e., Glazer et al., 1997). The reference lists of all chosen articles also were screened for additional studies.[1]

[1] The unpublished literature was also examined for evaluation efforts, including formal committee requests to outside groups (e.g., relevant professional associations, government agencies, foundations, and advocacy groups). This uncovered the recent evaluation of the WomanKind program sponsored by the Centers for Disease Control and Prevention (Short et al., 2000), which was included in the set of studies reviewed. The evaluation of the Family Violence Prevention Fund training initiative has not yet been completed.

This strategy identified 64 potential studies, the majority of which focused on intimate partner violence training (n = 38, or 59 percent). Another 31 percent (n = 20) addressed training efforts in child abuse and neglect, while only 9 percent (n = 6) focused on elder maltreatment training.[2] Each was then reviewed to determine whether it met three inclusion criteria:

1. Relevant training population. Training participants had to include students pursuing degrees or practitioners in one or more of the six health professions chosen by the committee, i.e., physicians, nurses, dentists, psychologists, social workers, and physician assistants.

2. Formal training effort. The training evaluated had to be a formal educational intervention. This includes degree-related and continuing education courses, modules, clinical rotations, seminars, workshops, and staff training sessions but excludes training that was explicitly focused on clinical audits, feedback, or detailing. Also included were studies that assessed the use of a formal screening protocol, given that these efforts often involved highly organized training regarding information about family violence and were grounded in explicit models of instruction and behavior change (e.g., Harwell et al., 1998; Short et al., 2000; Thompson et al., 2000).

3. Quantitative outcome measure(s). A key requirement was that data were collected and reported on one or more quantitative measures of relevant outcomes related to responding to family violence. Outcome domains included: (a) knowledge, attitudes, beliefs, and perceived skills concerning family violence; (b) behaviors and performance associated with screening for abuse and case finding; and (c) practices and competencies needed to provide abuse victims with appropriate care (e.g., information, referrals, or case management).[3] Studies that focused on examining participant satisfaction were excluded, as were evaluations that employed only qualitative approaches.

[2] One study (Currier et al., 1996) evaluated trauma training, which included both intimate partner violence and child abuse, and the Thompson et al. (2000) evaluation of training for primary care providers assessed identification and management of violence for adults 18 or older, including elderly patients. Given that more attention was paid to intimate partner violence, both studies were assigned to this category.

[3] Measures of identification and intervention were limited to those that did not rely on provider self-report surveys. Studies using diaries completed by providers on a daily basis, however, were included.

Application of these criteria resulted in a pool of 44 articles.[4] Because three reported additional follow-up data on interventions included in this group, a slightly smaller number of training efforts were actually evaluated (n = 41). Supporting the relative recency of interest in family violence training is the fact that only 7 percent (n = 4) appeared prior to 1990.

[4] Two studies (Seamon et al., 1997; Weiss et al., 2000) dealt with the training of emergency medical technicians, a population that was not one of the professions targeted by the committee. Two studies of child abuse and neglect training programs were excluded, based on their training interventions. One involved a statewide educational program of mailings, workshops, and other activities for dentists, but the analysis did not distinguish between those who actually reported receiving materials and participating in the workshops (Needleman et al., 1995). Socolar et al.'s (1998) randomized trial evaluated the impact of feedback and audit strategies on physicians participating in a statewide child abuse program, which fell outside the definition of formal training that was used. Another 14 studies either did not provide any evaluative data concerning the program, restricted their examination to qualitative observations, or collected information on such outcomes as participant satisfaction (Bullock, 1997; Delewski et al., 1986; Gallmeier and Bonner, 1992; Hansen, 1977; Ireland and Powell, 1997; Krell et al., 1983; Krenk, 1984; Nelms, 1999; Pagel and Pagel, 1993; Reiniger et al., 1995; Thurston and McLeod, 1997; Venters and ten Bensel, 1977; Wielichowski et al., 1999; Wolf and Pillemer, 1994). Finally, two studies had as their focus the development and assessment of new measures for assessing training rather than the observed outcomes of the training itself (Dorsey et al., 1996; Kost and Schwartz, 1989).

The final set of 41 evaluations resulted in a pool that was even more heavily populated by studies of intimate partner violence training. This area has received the most attention, with 30 (73 percent) of the studies assessing such programs.[5] With the exception of four studies that reported outcomes of an elder abuse training session, the remainder (n = 7, or 17 percent) examined child maltreatment training efforts. The lack of evaluative information on elder abuse training may be partly a function of the relatively recent emphasis placed on the need for screening and the lack of available training opportunities. However, the reasons underlying the limited attention paid to evaluating child abuse training efforts are less clear. Descriptions of training strategies appeared in the literature more than 20 years ago (e.g., Hansen, 1977; Venters and ten Bensel, 1977), although published research on training did not surface until much later (1987). Despite the results of surveys conducted in the late 1990s that continued to report noticeable numbers of health care professionals who felt ill-equipped to fully address child abuse cases and labeled their training in this area as insufficient (e.g., Barnard-Thompson and Leichner, 1999; Biehler et al., 1996; Wright et al., 1999), efforts to assess training remain few in number. For example, in our search, we found 20 studies that described some type of training effort in child maltreatment for health professionals, but only 7 studies met the committee's criteria for selection.

[5] Summaries of these studies in terms of training characteristics, outcomes assessed, evaluation designs, measurement strategies, and major results are provided in Appendix F for intimate partner violence training and Appendix G for child abuse training evaluations. For each type of outcome, studies are ordered by training target population (e.g., medical students, residents and fellows, emergency room staff, and providers in other health care settings).

TYPES OF TRAINING EFFORTS EVALUATED

Selected characteristics of the training efforts evaluated in the 37 studies of intimate partner violence and child abuse training are summarized in Table 5.1. Overall, training programs on intimate partner violence that were subjected to some formal evaluation targeted a more diverse group of training populations. For example, no study examined outcomes of child abuse training efforts for medical students; in contrast, 13 percent of the intimate partner violence evaluations examined formal medical school courses, modules, and other intensive instructional strategies (Ernst et al., 1998, 2000; Haase et al., 1999; Jonassen et al., 1999; Short et al., 2000).[6]

[6] A study by Palusci and McHugh (1995) did include medical students, but they accounted for a small proportion of the participants (2 individuals, or 13 percent of the 15 participants).

TABLE 5.1 Overview of Training Interventions Assessed in the Evaluations of Intimate Partner Violence and Child Abuse Training (entries are N (%) of evaluations in each area of family violence addressed)

Characteristic                                         Intimate Partner     Child Abuse    Total
                                                       Violence (n = 30)    (n = 7)        (n = 37)
Training population:
  Medical students                                     4 (13.3)             0 (0.0)        4 (10.8)
  Residents or fellows                                 6 (20.0)             3 (42.9)       9 (24.3)
  Emergency department staff (e.g., nurses,
    physicians, and social workers)                    13 (43.3)            0 (0.0)        13 (35.1)
  Staff in other health care settings (e.g.,
    primary care and maternal health clinics)          7 (23.3)             0 (0.0)        7 (18.9)
  Other (e.g., child protective services workers
    and participants from several disciplines)         0 (0.0)              4 (57.1)       4 (10.8)
Length of training:
  Less than 2 hours                                    9 (30.0)             0 (0.0)        9 (24.3)
  2-4 hours                                            7 (23.3)             0 (0.0)        7 (18.9)
  5-8 hours                                            2 (6.7)              4 (55.1)       6 (16.2)
  More than 8 hours                                    5 (16.7)             3 (42.9)       8 (21.6)
  Not specified                                        7 (23.3)             0 (0.0)        13 (35.1)
Training strategy:
  Didactic only                                        15 (50.0)            0 (0.0)        15 (40.5)
  Didactic and interactive                             11 (36.7)            7 (100.0)      18 (48.6)
  Not specified                                        4 (13.3)             0 (0.0)        4 (10.8)
Training included screening form                       13 (43.3)            0 (0.0)        13 (35.1)
Training included other enabling devices (e.g.,
  local resources list, checklists, and
  anatomically correct dolls)                          12 (40.0)            1 (12.5)       13 (35.1)
NOTE: Percentages are column percentages and may not total 100.0 percent due to rounding.

Providers in emergency departments and general health care settings are typically among the first points of contact for abuse victims, and professional organizations have stressed the need to improve the identification and management of intimate partner violence (e.g., American College of Emergency Physicians, 1995; American College of Nurse Midwives, 1997; American Medical Association, 1992). Consequently, a substantial portion of intimate partner violence training evaluations have examined programs for emergency department staff (43 percent), and nearly one-quarter have involved providers in other organized health care settings (23 percent). Five (71 percent) of the seven training evaluations in child abuse were directed at professionals who are most likely to encounter child maltreatment cases, namely, pediatric residents and child protective services workers (Cheung et al., 1991; Dubowitz and Black, 1991; Leung and Cheung, 1998; Palusci and McHugh, 1995; Sugarman et al., 1997). No assessments of intimate partner violence or child maltreatment training efforts designed for the dental or physician assistant professions have been conducted.

Previous research on continuing medical education (e.g., Davis et al., 1999) has shown that if training is to have any impact on behavior, strategies that involve interaction among trainers and participants are important (see Chapter 6). Such strategies have been a part of all child abuse training that has been subjected to any formal assessment (see Table 5.1). In contrast, only about 37 percent of the intimate partner violence training programs incorporated interactive instructional strategies, ranging from practice interviewing to group development of appropriate protocols and strategies for their implementation (e.g., Campbell et al., 2001).

Providing participants with materials that they can use in their clinical practice (e.g., assessment forms and diagnostic aids) also has been shown to facilitate the translation of what was learned from training into specific behaviors in the health care setting (see Chapter 6). A noticeable portion of the intimate partner violence evaluations was targeted at assessing outcomes associated with the introduction of a screening protocol that also involved training staff in its use. Approximately two-fifths of evaluated intimate partner violence training efforts provided additional "enabling" materials for use in clinical practice. Examples include posters or pocket-sized cue cards with screening questions or other checklists that were part of the materials provided to residents (Knight and Remington, 2000) and emergency department or health clinic staff (Fanslow et al., 1998; Roberts et al., 1997; Thompson et al., 2000). The dissemination of assessment forms and other materials by child abuse training efforts was much less common. Of the seven training evaluation studies, one program provided participants with anatomically correct dolls for use in assessment (Hibbard et al., 1987).

ASSESSING THE AVAILABLE EVIDENCE

Understanding the effectiveness of family violence training programs necessitates estimating the unbiased effects of training (i.e., the impact of training above and beyond the influence of other variables that may have contributed to the observed outcomes). It is well known that this is best achieved by randomized field experiments in which individuals are randomly assigned to groups. This design, if successfully executed, controls nearly all common threats to internal validity (e.g., selection, history, and maturation).

However, randomization alone is not sufficient if these efforts are to be truly informative. Evaluation designs must also: (1) use outcome measures that are reliable, valid, and sensitive to change over time; (2) demonstrate that the training intervention was implemented as planned and that participants' experiences differed noticeably from those who did not receive such training; and (3) have sufficient sample sizes to allow statistical detection of group differences if they exist.[7]

[7] Statistical pooling of outcome results was not performed. Although such meta-analyses have provided valuable insight into the impact of problem-based learning and continuing education in medicine (e.g., Davis et al., 1999; Vernon and Blake, 1993), the small number of rigorous studies precluded this. In addition, data were not always reported for use in calculating effect sizes.
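As a rough, general illustration of requirement (3), not a calculation drawn from the studies reviewed here, the conventional large-sample approximation for the number of participants needed per group to detect a standardized mean difference d with a two-sided test at significance level \(\alpha\) and power \(1-\beta\) is

\[
n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{d^{2}},
\]

where z denotes the standard normal quantile. Detecting a moderate difference of d = 0.5 at \(\alpha\) = .05 with 80 percent power requires roughly 63 participants per group by this approximation (about 64 in standard power tables), which helps explain why small training cohorts may fail to yield statistically reliable group differences even when training has some effect.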

Despite the strengths of randomized designs in determining program effectiveness, their execution in the field is not easy, and problems that are likely to introduce unexpected threats to internal validity can occur. For example, extended follow-up measurement waves, a desirable design component for examining how long training outcomes are sustained, also increase the chances that some study participants will not respond to later assessments. The resulting attrition may differ among study groups. Depending on its nature and magnitude, this differential attrition can either exaggerate or diminish the observed group differences.

Historical threats to internal validity can be introduced by unanticipated events, such as the introduction of new reporting requirements, the enactment of laws that mandate education, or increased attention by the media to family violence, all of which are beyond the control of the evaluator (see Campbell et al., 2001, for examples of these). Another problem occurs when settings permit interaction and contact between training group participants and their counterparts who did not receive such training (e.g., sharing of what was learned), a problem known as contamination. Members of the "no training" or "usual circumstances" comparison groups also may actually receive some relevant training through professional organizations or their own reading. Especially when the training is lengthy and involves multiple components, training participants themselves may not attend all sessions, complete homework assignments, and so forth (see Short et al., 2000). All of these circumstances narrow the difference that is likely to be found between groups and can lead to misleading conclusions when training is not monitored for both the intervention and comparison groups. Essentially, randomized designs then end up as quasi-experiments, and the ability to determine the net impact of training is reduced.

In some circumstances, randomization may not even be feasible, and quasi-experimental designs are the only alternative. These can involve the use of a comparison group that was not constructed by random assignment or the assessment of outcomes only for a training intervention group before training and at multiple points thereafter (time-series or cohort designs). Although these are unlikely to provide unbiased estimates of intervention effects, sophisticated statistical modeling procedures now exist for taking into account some pretreatment and post-treatment selection biases, given that the necessary information is collected as part of the study (e.g., Lipsey and Cordray, 2000; Murray, 1998). Along with other design features, such nonexperimental studies, if well done, can add to the knowledge base about training (e.g., evidence for a relationship between training and the observed outcomes). For this reason, these were included in our review of evaluation studies.

The most common training evaluation has involved the assessment of changes for the training participants only, typically before and immediately after training. Unfortunately, this is the weakest quasi-experimental design, as it yields little information on either the net effects of training or its relationship to outcomes. However, these studies can address the question "Did the expected improvements in knowledge, attitudes, and/or beliefs occur?" For example, did individuals who participated in the training show an increase in knowledge and self-confidence about treating family violence? This might be viewed as the first question of interest in any causal assessment. Results from such studies also may partly inform expectations about where improvements in performance may or may not be reasonable to expect and how long any observed gains might be sustained. Furthermore, if reliable change in outcomes is repeatedly not found, attention can be directed at understanding the reasons for these no-difference findings (e.g., poor engagement of participants, unreliable or insensitive measures, loss of organizational support for identification and management of family violence, or poorly designed training curricula) so as to improve the development of training strategies and the choice and measurement of outcomes in the future. It also is possible that some of these studies were less subject to competing rival explanations for the observed changes due to other design features (e.g., very short pretest/posttest intervals and multiple pretest observations). Thus, the committee reviewed studies of this type to identify whether any general conclusions about the expected outcomes of training could be drawn.[8]

[8] Admittedly, this group would be skewed toward those studies that observed the expected changes. Even with this limitation, however, it would have been useful to derive average effect sizes for these observed changes and compare their magnitude with that obtained in more rigorous studies. If similar magnitudes for these two groups had been found, this would have been informative. However, such an analysis was precluded once again by the lack of necessary information (e.g., some studies reported means but no standard deviations, and others reported only overall statistical significance levels but no other statistics on group performance). Although not peculiar to this literature (e.g., Gotzsche, 2001, and Orwin and Cordray, 1985), this reporting gap prevents this type of quantitative comparison.
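The effect size referred to in this context is most commonly the standardized mean difference (Cohen's d) between a training group (T) and a comparison group (C),

\[
d = \frac{\bar{X}_{T} - \bar{X}_{C}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{T}-1)\,s_{T}^{2} + (n_{C}-1)\,s_{C}^{2}}{n_{T} + n_{C} - 2}},
\]

which makes plain why reports that provide group means without standard deviations, or only overall significance levels, cannot contribute to this kind of quantitative comparison.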

CHARACTERISTICS OF THE EVALUATION AND RESEARCH BASE

As previously noted, the outcomes of interest to the committee included those related to knowledge, attitudes, and beliefs; outcomes associated with screening and assessment of family violence (e.g., rates of asking about abuse, percentages of cases identified, and adequacy of documentation); and other patient outcome indicators (e.g., referrals made for individuals who were victims of violence). Table 5.2 summarizes the degree to which the 37 evaluations assessed each of these outcomes.

TABLE 5.2 Outcomes Examined in Evaluations of Intimate Partner Violence and Child Abuse Training (entries are N (%) of evaluations in each area of family violence addressed)

Characteristic                                         Intimate Partner     Child Abuse    Total
                                                       Violence (n = 30)    (n = 7)        (n = 37)
Outcome domain:(a)
  Knowledge, attitudes, or beliefs (KAB)               17 (56.7)            6 (75.0)       23 (62.2)
  Screening and identification of abuse                21 (70.0)            0 (0.0)        21 (56.8)
  Other clinical skills (e.g., appropriate
    documentation and referrals)                       8 (26.7)             2 (25.0)       10 (27.0)
Number of different outcome domains assessed:
  KAB only                                             9 (30.0)             5 (71.4)       14 (37.8)
  Screening and detection of abuse only                9 (30.0)             0 (0.0)        9 (24.3)
  Other clinical skills or outcomes only               0 (0.0)              1 (14.3)       1 (2.7)
  KAB and screening/detection only                     4 (13.3)             0 (0.0)        4 (10.8)
  Screening/detection and other clinical only          5 (16.7)             0 (0.0)        5 (13.5)
  KAB and other clinical only                          0 (0.0)              1 (14.3)       1 (2.7)
  KAB, screening, and other clinical                   3 (10.0)             0 (0.0)        3 (8.1)
NOTE: Percentages are column percentages. (a) Because a study can assess multiple outcomes, the percentages do not total 100.0 percent.

There was a clear difference in the attention paid to the three outcome domains, depending on the type of training. About 57 percent of intimate partner violence training evaluations measured improvements in knowledge, attitudes, and beliefs. Given that a frequent goal of training was to implement a standard assessment protocol successfully, the evaluations paid considerable attention to determining changes in the frequency of screening and case finding (70 percent). A much smaller proportion of studies (27 percent) attempted to assess other changes in clinical practices. For example, the extent to which patient charts included a safety assessment and body map completed by emergency department staff was examined by Harwell et al. (1998), and changes in information and referral practices were assessed by Shepard et al. (1999) for public health nurses, Wiist and McFarlane (1999) for prenatal health clinic staff, and Short et al. (2000) for emergency department, critical care, and perinatal staff. Using an index for rating quality of care by medical record review, Thompson et al. (2000) tracked changes in both training intervention and comparison sites. The evaluation conducted by Campbell et al. (2001) was unique in attempting to assess quality of care in terms of both medical record review and patient satisfaction ratings. Moreover, this was one of the few studies to measure the extent of organizational support (e.g., commitment) for detecting and treating victims of intimate partner violence.

In contrast, evaluations of child abuse training focused primarily on investigating whether knowledge, attitudes, and beliefs improved. Assessment of other outcomes was not only infrequent but also more indirect. Cheung et al. (1991) used vignettes to rate the competency of trained protective services workers in case planning, goal formulation, and family contract development. These same researchers also assessed overall competency as indicated by supervisor job ratings (Leung and Cheung, 1998).

Evaluators of training efforts on intimate partner violence were more likely to measure multiple outcomes in the same study: 30 percent of the evaluations in this area reported findings on two outcomes, and another 10 percent assessed outcomes in all three domains. In contrast, only one (14 percent) of the seven child abuse studies gathered data on outcomes in more than one domain (Cheung et al., 1991).

Measurement of Outcomes

How outcomes are measured can influence what can be learned from evaluations. For example, unreliable measures can reduce the ability to detect intervention effects and therefore effectively decrease the power of a design (Lipsey, 1990). Even when gains among training participants and group differences are found, the measures used may have poor construct validity, serving as only pale surrogates of the relevant outcomes. These issues are especially relevant to research on family violence, given that study authors have frequently developed their own knowledge tests, attitude questionnaires, and chart review forms to assess practitioner attitudes and practices but either failed to assess their psychometric properties or reported marginal results, e.g., internal consistencies of less than 0.70 (e.g., Finn, 1986; Saunders et al., 1987).

Among the 16 evaluations that examined improvements in knowledge, attitudes, and beliefs about intimate partner violence, all but two developed their own measures. However, slightly less than half of these presented no data on the reliability (e.g., internal consistency) of their instruments, even though total scores and subscale scores were derived. The remainder either referred readers to previously published data on the measures or provided their own assessments of internal consistency (the preferred strategy), which were generally at acceptable levels (Cronbach α = 0.70 or higher).
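For reference, the coefficient alpha reported in these studies is the standard internal-consistency statistic for a scale of k items,

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{\text{total}}^{2}}\right),
\]

where \(\sigma_{i}^{2}\) is the variance of item i and \(\sigma_{\text{total}}^{2}\) is the variance of the total score; values of 0.70 or higher are conventionally treated as acceptable for group-level research.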

The most concerted efforts at instrument development have been carried out by Short et al. (2000), Maiuro et al. (2000), and Thompson et al. (2000). In Short et al.'s (2000) evaluation of the domestic violence module for medical students at the University of California, Los Angeles, not only were the internal consistency and test-retest reliability of the knowledge, attitudes, beliefs, and behaviors scale examined, but attention was also paid to assessing the construct validity of the intervention itself (i.e., expert ratings of whether it contained the appropriate content and utilized a problem-based approach and varied training methods). Maiuro and colleagues (2000) developed a 39-item instrument to assess practitioner knowledge, attitudes, beliefs, and self-reported practices toward family violence identification and management. This instrument exhibited internal consistency (α = 0.88), content validity, and sensitivity to change and was later used by Thompson et al. (2000) to assess training outcomes for primary health clinic staff.

When protocols for asking individuals about intimate partner violence were utilized, Campbell et al. (2001), Covington, Dalton, et al. (1997), and Covington, Diehl, et al. (1997) used items from the Abuse Assessment Screen, which has been investigated as to its validity (Soeken et al., 1998). Thompson et al. (2000) used items that had been validated by McFarlane and Parker. Clinical skills (e.g., asking about intimate partner violence or correctly diagnosing abuse) in medical students and residents were assessed with standardized patient visits and case vignettes, with two exceptions: Knight and Remington (2000) used a patient interview to determine whether trained residents had asked the woman about intimate partner violence, and Bolin and Elliott (1996) had residents report daily on the number of conversations they had about intimate partner violence with the patients seen.

With regard to measuring screening prevalence, identification rates, documentation, and referrals, evaluations of intimate partner violence training relied on reviewing patient charts. The typical practice was to use standardized forms developed by the researchers for collecting baseline and follow-up data. When multiple coders were used, the degree to which information was provided on the blinding of raters, how disagreements were resolved, and interrater reliability varied. For example, Tilden and Shepherd (1987) provided little information on intercoder reliability, whereas both Thompson et al. (2000) and Short et al. (2000) reported initial agreement levels (which ranged from 0.80 to 0.96) and a detailed description of the record review process.

In the area of child abuse training, all studies developed their own assessment instruments, but only three provided any data on the quality of their measures. Palusci and McHugh (1995) reported on the internal consistency of their 30-item knowledge test, which was only marginally acceptable (α = 0.69). Leung and Cheung (1998) also reported coefficient alphas for their measures of the amount learned and supervisor ratings of job performance, and interrater reliability was provided for the grading of trainees' responses to case vignettes (Cheung et al., 1991).[9]

[9] Coefficient alpha, or Cronbach's alpha, is a statistic that measures the reliability of a test, scale, or measure in terms of its internal consistency. It is obtained from the correlations of each item with each other item. Like a correlation, it ranges from 0 to 1, with 0 meaning complete unreliability (the responses are essentially unrepeatable random responses) and 1 meaning that all the items measure exactly the same thing.

Timing of Measurement

An important question regarding training program outcomes involves the "half-life" of any observed improvements and whether changes are sustained or degrade over time. Studies that employ multiple and extended follow-up assessments are critical to informing this issue. The most common strategy has been to measure immediate improvement in knowledge, attitudes, and beliefs, typically within the first month after completion of training (see Figure 5.1). A handful of intimate partner violence training evaluations also assessed knowledge levels after a much longer time had elapsed (e.g., Campbell et al., 2001; Ernst et al., 2000; Short et al., 2000; Thompson et al., 2000). This is not true for child abuse training evaluations, for which there are no available data on the degree to which participants retained what they learned more than six months after training.

[FIGURE 5.1 Timing of measurement in evaluations by type of training and outcome domain. The figure shows the number of studies with assessments at 0-1 months, 2-6 months, 7-12 months, and 13 or more months for each outcome category: IPV (KABB), IPV (Screening), IPV (Other clinical), CA (KABB), and CA (Other clinical). IPV = intimate partner violence; CA = child abuse and neglect; KABB = knowledge, attitudes, beliefs, behaviors. NOTE: Because a study can have both a posttest and follow-ups, the numbers do not total to the number of studies in each training area.]

Outcomes associated with screening, case finding, and other clinical indicators have been assessed at much longer intervals for intimate partner violence training interventions. Nearly half (48 percent) of the evaluations measured screening and identification rates more than six months after the intervention, and four of these included data more than one year after training. For other clinical practices, approximately two-thirds attempted to measure outcomes 7 or more months after training had been completed. Once again, the situation is much different for child abuse, for which the attention paid to measuring clinical outcomes has been infrequent, and follow-up data for extended periods were available in only one study.

Type of Design

The large majority of training program evaluations relied on some type of quasi-experimental design. The typical practice was to assess individual outcomes (e.g., knowledge levels of residents) with one-group, before-after designs and to measure the health care outcomes of patients who were seen by the training recipients (e.g., the percentage of patients who were screened by emergency department staff and identified as victims of family violence) before and one or more times after training. The most sophisticated design for determining causality, the individual or group randomized experiment, was employed in only six studies, and all of these were directed at training on intimate partner violence. Even here, however, some may be labeled more accurately as quasi-experiments due to problems encountered when sites varied in the degree to which they successfully implemented the training intervention or were possibly vulnerable to competing hypotheses related to self-selection caused by attrition from measurement and the occurrence of other events that affected the magnitude of group differences.

Because individual studies may examine different outcomes with different designs (e.g., incorporate a comparison group for selected outcomes only), Table 5.3 describes the evaluations in terms of the design used and the outcome of interest.

TABLE 5.3 Designs Used by Evaluations to Assess Training Outcomes by Type of Training and Outcome Domain (entries are N (%) of evaluations; intimate partner violence (IPV) training columns cover knowledge, attitudes, and beliefs (KAB; n = 16), screening and detection (n = 21), and other clinical outcomes (n = 8); child abuse (CA) training columns cover KAB (n = 6) and other clinical outcomes (n = 2))

Type of Design                                  IPV KAB     IPV Screening    IPV Other    CA KAB      CA Other
                                                            and Detection    Clinical                 Clinical
Two-group, randomized (individual patients or
practitioners or group randomization):
  Posttest only                                 0 (0.0)     3 (14.3)         1 (12.5)     0 (0.0)     0 (0.0)
  Pretest and posttest                          2 (12.5)    1 (4.8)          1 (12.5)     0 (0.0)     0 (0.0)
  Pretest, posttest, and follow-up              1 (6.3)     1 (4.8)          1 (12.5)     0 (0.0)     0 (0.0)
Two- or three-group, nonequivalent comparison group:
  Posttest only                                 1 (6.3)     2 (9.5)          0 (0.0)      0 (0.0)     0 (0.0)
  Pretest and posttest                          2 (12.5)    1 (4.8)          0 (0.0)      1 (16.7)    0 (0.0)
  Pretest, posttest, and follow-up              1 (6.3)     1 (4.8)          2 (25.0)     1 (16.7)    1 (50.0)
Cohort (cohorts of patients):
  Pretest and posttest                          0 (0.0)     7 (33.3)         1 (12.5)     0 (0.0)     0 (0.0)
  Pretest, posttest, and follow-up              0 (0.0)     2 (9.5)          1 (12.5)     0 (0.0)     0 (0.0)
One group:
  Pretest and posttest                          4 (25.0)    2 (9.5)          0 (0.0)      2 (33.3)    1 (50.0)
  Pretest, posttest, and follow-up              6 (37.5)    1 (4.8)          0 (0.0)      2 (33.3)    0 (0.0)
NOTE: Percentages are column percentages. Because a few studies employed different designs for different outcomes (e.g., a one-group pretest and posttest to measure knowledge in training participants and a two-group nonequivalent comparison group design to assess clinical skills), the design used to assess each outcome domain rather than the design for the study itself was reported.

As noted earlier, one-group, pretest-posttest designs were the most common when knowledge, attitudes, and beliefs were of interest. Approximately 63 percent of the studies assessing knowledge in both intimate partner violence and child abuse training relied on this design, and of this group, two-fifths limited their study to examining only changes that occurred immediately after the training session had concluded. The remaining studies were more ambitious, incorporating a comparison group that received either no training or a different type of training, although assignment to these comparison groups was nonrandom. In addition, the most rigorous studies randomly assigned individuals or training sites (e.g., clinics or hospitals) to receive or not to receive the training of interest.

For outcomes involving the identification of abused women, approximately one-third of the studies measured rates of screening, case finding, or both before and between 4 days and 12 months after staff training had occurred. Another 10 percent included lengthier follow-ups. Slightly more than two-fifths of the evaluations also collected similar screening and case-finding data from one or more comparison sites where staff did not receive such training; this group was nearly equally split between studies that managed to randomly assign sites to either a training or no-training group and those that did not randomize. A similar pattern pertained to evaluations that tracked other types of clinical outcomes.

TRAINING OUTCOMES AND EFFECTIVENESS

In general, the designs for most evaluations have effectively limited their contributions to enhancing the knowledge base with regard to the impact of training on health professionals' responsiveness to family violence. The variation in sophistication and rigor previously described must be taken into account when summarizing what is known about the effectiveness of family violence training. Because the large majority have used weak quasi-experimental designs (i.e., one-group, pretest and posttest), they can at best provide information on the much simpler question of whether the outcomes expected by training faculty actually occurred.

The remaining paragraphs attempt to summarize the evidence provided by the evaluations conducted to date. The majority of attention is paid to evaluations of intimate partner violence training, given the greater amount of available information. Outcomes for knowledge, beliefs, and attitudes; screening and identification; and other clinical outcomes are summarized separately. Because of the small number of child abuse evaluation studies and the even smaller set of elder abuse training evaluations, brief summaries of what can be gleaned from published efforts in these two areas are presented below.

Child Abuse Training

Summarizing what is known about child abuse training efforts is difficult, given the small number of studies (n = 7) and their heterogeneity in terms of the professionals trained, the type of training delivered, and the designs themselves. The majority of evaluative data focuses on improvements in knowledge but restricts examination to training participants only (see Appendix G). In all cases, individuals who attended the training (whether they were residents in pediatrics, physicians, nurses, or caseworkers) exhibited increased knowledge levels, more appropriate attitudes, and greater perceived self-competency to manage child abuse cases. Such gains were typically measured immediately after training completion. For example, in two studies of child protective services workers, trainees' perceptions about their ability to identify abuse and risk, along with attitudes about the value of family preservation and cultural differences, improved after enrolling in a 3-month training program (Leung and Cheung, 1998), and greater ability in case planning, goal formulation, and family contract development was observed for individuals who had attended a 6-hour seminar in these skills (Cheung et al., 1991).

Relative to a comparison group, Dubowitz and Black (1991) found stronger improvement in knowledge, attitudes, and skills (including perceptions about their competency to manage child abuse cases) among pediatric residents who had attended several 90-minute sessions on child abuse, measured immediately after training. However, with the exception of perceived self-competency, these differences were no longer evident at the 4-month follow-up. Palusci and McHugh (1995) also found that medical students, residents, fellows, and attending physicians who participated in a clinical rotation on child abuse had higher knowledge scores, on average, than their counterparts in other rotations. Once again, however, assessment was limited to immediately after the rotation had ended. In both these cases, the degree to which pretest differences between the training and comparison groups may have contributed to these group differences was not well examined.

For more direct indicators of clinical competency, Leung and Cheung (1998) found that child protective services workers who had received three months of focused training on child abuse improved between their six-month, nine-month, and first annual evaluations and between their first and second annual evaluations, as measured by supervisor job performance ratings that covered such behaviors as case interviewing and documentation. At the same time, no significant differences between their performance and that of more seasoned workers without such formal training were found.

The above set of findings provides neither a broad nor a strong evidence base on which to understand the outcomes and effects of child abuse training. Although training in this area appears to instill greater knowledge, appropriate attitudes, and perhaps self-efficacy for dealing with child abuse cases, these within-group changes have mainly been observed immediately after the conclusion of training. The extent to which they are sustained or can confidently be attributed to the training interventions themselves is unclear.

Training on Intimate Partner Violence

The degree to which health professionals involved in training on intimate partner violence actually change their knowledge, attitudes, and beliefs was addressed by 13 of the 15 training evaluations.[10] Typically, these evaluations did not go beyond examining changes before and after training, and posttests were usually administered immediately upon training completion or shortly thereafter (within one month).[11] In all but one evaluation (Knight and Remington, 2000),[12] statistically reliable differences between the pretest and posttest were found. Such gains were observed across a wide range of training interventions (ranging from a one-hour lecture to one or more days), questionnaires, and populations (medical students, residents, hospital staff, and community providers). Apparently, participants take something away from even a relatively brief exposure to material on family violence, but what that something is, how it changes with the content, nature, and length of training, how long it remains with them, and whether it was a direct result of training are not clear.

[10] One study (Varvaro and Gesmond, 1997) involving emergency department house staff did not perform statistical analyses due to small sample sizes. Another study (Ernst et al., 1998) did report pre-post differences on 2 of 14 knowledge items but did not consider that this may be associated with the number of comparisons that were performed.

[11] Appendix F lists the evaluation studies and their characteristics regarding knowledge, attitudes, and belief outcomes for training on intimate partner violence.

[12] This "no-difference" finding in attitudes is most likely attributable to significant problems with respondent carelessness and a desire to complete the surveys quickly.

More informative are the seven evaluations that paid some attention to measurement issues (e.g., multi-item scales with acceptable levels of internal consistency) and had complete assessment data on the majority of training participants (70 percent or more). Many of these studies also had multiple or extended post-baseline assessments, and three collected outcome and other relevant data on comparison groups, two of which were designed as randomized field experiments. On the whole, pretest-posttest gains similar to those previously described were observed. In the Jonassen et al. (1999) study, medical students who completed an intensive interclerkship module (2 or 3.5 days) showed increases in knowledge, attitudes, and perceived skills at the time of completing the module, and these gains had not significantly eroded six months later. With the exception of perceived skills, similar results were found by Kripke et al. (1998) for a 4-hour workshop attended by internal medicine residents. Nearly 2 years after a focused, 2-day training workshop, emergency department staff evinced less blaming attitudes toward victims and were more knowledgeable about intimate partner violence and their role in addressing this problem than prior to training (Campbell et al., 2001). Furthermore, this group, which worked in hospitals that were randomly assigned to the training intervention, outperformed their counterparts at other hospitals who had not received the training.

Evaluations conducted by Short et al. (2000) and Thompson et al. (2000) provide a more differentiated picture of which attitudes and beliefs undergo the most modification. Medical students at the University of California, Los Angeles, who enrolled in a 4-week domestic violence module showed statistically reliable gains in knowledge, attitudes, and beliefs at the completion of the module and also improved more than medical students enrolled at a nearby school who did not have any organized opportunities for training on intimate partner violence. Further analyses highlighted that this improvement was primarily a function of increases in perceived self-efficacy, namely, the ability to identify a woman who had been abused and intentions to screen regularly upon becoming practicing clinicians. No such change was observed in other knowledge and attitude domains (e.g., how appropriate it is for physicians to intervene in these situations).

Similarly, primary care team members also experienced increased feelings of self-efficacy with regard to treating intimate partner violence victims both 9 and 21 months after an intensive training session (Thompson et al., 2000). This was in sharp contrast to staff in other clinics who had been randomly assigned not to receive the workshop and whose self-confidence in handling this problem decreased between the baseline and the nine-month follow-up period. Training participants also changed markedly and outperformed their comparison group counterparts in two other attitude domains: fear of offending victims and provider or patient safety concerns in their interactions, and stronger feelings that necessary organizational supports were in place.

Improvements in Screening and Identification Rates

Increased knowledge and more appropriate attitudes are important, but the ultimate goal is for professionals to translate these into their daily practice. Of the 18 evaluations that examined one or more of these behaviors, 7 collected data on variables related to asking or talking about intimate partner violence with their patients, and 11 monitored changes in case finding (e.g., the percentage of patients seen who were positively identified as victims of intimate partner violence).[13]

[13] Appendix F summarizes the studies on intimate partner violence screening and identification rates that included some type of training intervention.

In terms of explicitly inquiring about intimate partner violence, three of the five studies with data on this outcome found significantly higher percentages of patients asked about intimate partner violence after staff had participated in workshops or other staff training. For example, Knight and Remington (2000) observed that four days after hearing a lecture, internal medicine residents more frequently asked patients about intimate partner violence, based on reports of patients seen in their practice. Such changes do not seem limited to the short term but were also found 6 to 9 months later for community health center staff (Harwell et al., 1998) and primary care team members (Thompson et al., 2000). Moreover, this latter study demonstrated that such improvement did not occur among staff in teams that were randomly assigned not to receive such training. The training in all three studies provided either formal assessment protocols or a laminated cue card with screening questions. The other randomized field experiment (Campbell et al., 2000) found promising gains among emergency department staff 24 months after a training intervention, in contrast to their comparison group counterparts. The one study that showed no differences at a 6-month posttest involved a 4-hour training strategy aimed at internal medicine residents, but it provided no protocol or other enabling materials (Kripke et al., 1998).

The evidence on whether more frequent screening by practitioners is accompanied by increased case finding, however, is somewhat more mixed. A total of 13 evaluations monitored changes in relevant variables. Based on follow-ups conducted anywhere between 1 and 12 months after training, 7 (or 54 percent) of the evaluations found that the percentage of women who were positively identified as abused increased significantly in those emergency departments or clinics in which staff had received intimate partner violence training. In all these efforts, a protocol again was included as part of the training.

Four evaluations found no reliable change, and both programmatic and methodological factors most likely contributed to these results. In the evaluation of training for internal medicine residents, identification rates did not change, and no protocol or screening materials were provided as part of the training (Kripke et al., 1998). The other three evaluations did involve such forms. Among community health center staff, Harwell et al. (1998) found no change in the proportion of cases that were confirmed as intimate partner violence, but they did find that a greater percentage was suspected of it. Thompson et al. (2000) also found a 30 percent improvement in case finding, but this was not statistically reliable, most likely because of low statistical power and problems in medical record review. Finally, Campbell et al. (2001) found no statistically significant gains in the proportion of patients who self-reported intimate partner violence and had it documented in their charts; at the same time, this also may have been because of small sample size, events that may have increased relevant practices in the comparison sites (e.g., legislation on mandatory reporting and education), and the time required for modification of chart forms to facilitate reporting.

Furthermore, among the five studies that employed comparison hospitals or clinics, only two evaluations reported greater case finding in the intervention groups, but these increases may have been due to selection bias. No differences surfaced in those with randomized controls.

Overall, the above results suggest that if training is to result in increased screening for intimate partner violence, it must include instruction in and use of screening protocols and other types of standardized assessment materials. Clearly attesting to this are the findings from McLeer et al. (1989), who reported that the sizable increase in screening that followed training and protocol use essentially disappeared eight years later when administrative policy changed and the necessary infrastructure to support screening no longer existed. In addition, Larkin and his colleagues (2000) found dramatic improvements in screening rates by nursing staff, but only after disciplinary action for not screening was instituted as emergency department policy; neither training nor the availability of a protocol had previously enhanced screening in this site. Finally, Olson et al.’s (1996) work, while revealing a rise in domestic violence screening after a stamped query was placed on each patient’s chart, also found that the addition of formal training following chart stamping produced no further improvement. Essentially, the net contribution made by training itself to screening and identification is less clear.

Improvements in Clinical Outcomes

Other clinical outcomes associated with identification include such behaviors as assistance in planning a course of action, providing referrals, and providing appropriate and quality care. Seven evaluations included measures relevant to these outcomes.14

14 Appendix F describes the evaluations that examined these outcomes.

In general, there is some suggestion that training may be associated with staff’s more frequently providing referrals for abused women. Harwell et al. (1998) found that trained community health center staff more often completed safety assessments (which had been provided as part of the training) and referred individuals to outside agencies. Wiist and McFarlane (1999) found similar results with regard to referrals for pregnant women who had been identified as intimate partner violence cases, as did Fanslow et al. (1998, 1999) with emergency department staff. Although Shepard et al. (1999) did not find such gains with regard to trained public health nurses, the percentage of intimate partner violence cases that were provided information did significantly increase. In their randomized group trial, Campbell et al. (2001) found that patients were more satisfied with the care that they had received from trained emergency department staff compared with those in clinics in which staff had not received the training. This study also was unique in its measurement of institutional change: an index assessing departmental commitment to detecting intimate partner violence victims was stronger in the departments that participated in the training. Thompson et al. (2000), however, found no differences between intervention and comparison primary care teams with regard to ratings of the quality of management as determined by record review. Saunders and Kindy (1993) also found no improvement among internal medicine and family practice residents in terms of history taking and planning.

In general, it may be that the materials provided do assist, particularly in terms of referrals. The reasons underlying the lack of differences in other variables may be several. These include site variation in implementing the necessary supports for system change, other events that may have contributed to increases in appropriate practices and weakened the difference between the training and comparison groups (Campbell et al., 2001), and problems in accurately measuring certain outcomes such as quality of care (Thompson et al., 2000).

Training on Elder Abuse

As previously noted, the training of health professionals to identify elder abuse and neglect and intervene appropriately has received little attention in the literature. Descriptions of formal curricula and training models are few in number. Thus, it is not surprising that formal published evaluations of training efforts are also lacking.

The committee’s literature search uncovered only four studies that explicitly provided any evaluative information on the outcomes of such training. These efforts were quite heterogeneous in terms of the recipients of training, the training provided, and the way in which outcomes were examined. Both Jogerst and Ely (1997) and Uva and Guttman (1996) reported data on the outcomes of resident training in elder abuse screening and management. Each study focused on a different specialty and training strategy. Whereas a home visit program to improve the skills of geriatric residents for carrying out elder abuse evaluations was the focus of Jogerst and Ely’s work, Uva and Guttman provided data associated with a 50-minute didactic session for emergency medicine residents. Training for diverse groups of professionals was described and assessed by Vinton (1993) in her study of half-day training sessions for caseworkers, and Anetzberger et al. (2000) reported on the use of a 2.5-day training program that involved a formal curriculum—A Model Intervention for Elder Abuse and Dementia—that was delivered to adult protective services workers and Alzheimer’s Association staff and volunteers.

Although all authors interpreted their findings as highlighting the benefits of training in terms of improved knowledge, level of comfort in handling elder abuse and neglect, and other outcomes (e.g., self-perceived competence), none of the four studies provided clear evidence regarding training effectiveness. For example, Vinton (1993) and Anetzberger et al. (2000) restricted their assessment to only pretest and posttest measurement of training participants immediately after training. Jogerst and Ely (1997) did employ a comparison group consisting of an earlier cohort who had not participated in the home visit program. With the exception of age, these groups were similar in terms of gender, type of practice, age of patients, and number of patients seen per week. Residents who had participated in the home visit rotation were more likely to rate their abilities to diagnose elder abuse and evaluate other important aspects (home environment) higher than the earlier cohort who did not have these training experiences. However, the latter group was more likely to have made home visits and to have provided statements regarding guardianships for their patients. Whether this was due simply to the added time in practice or to differences in patient mix or clinician skills cannot be determined from this design and its execution, and thus the effects of training (or lack thereof) remain ambiguous.

Uva and Guttman (1996) randomly assigned emergency medicine residents to one of two groups: either (a) to take a 10-item survey addressing their confidence in accurately recognizing elder abuse, level of comfort, and knowledge of how to report suspected cases and then attend a 50-minute educational session, or (b) to participate in the session and then complete the survey. The two groups noticeably differed in terms of their confidence about detection and knowledge of reporting. While less than one-quarter of the residents who were administered the pretest trusted their skills in identification and knew to whom reports should be made, all residents who completed the questionnaire after training did so. Twelve months later, residents in both groups who responded to a follow-up survey all believed that they could identify and report elder abuse. Although a randomized design was used, this study is not very informative due to the lack of a comparison or control group and quite limited outcome measurement (i.e., assessments of knowledge and perceived self-confidence were each limited to one item).

Consequently, the knowledge base about the outcomes and effects of elder abuse training is sparse. Although these four studies conclude that training is beneficial, more comprehensive and rigorous assessments are needed in order to determine the types of training that are effective. Moreover, efforts to examine training for other populations, including medical students, nurses, and others, remain to be carried out.

QUALITY OF THE EVIDENCE BASE

A previous National Research Council and Institute of Medicine report (1998) concluded that the quality of the existing research base on family violence training interventions is “insufficient to provide confident inferences to guide policy and practice, except in a few areas. Nevertheless, this pool of studies and reviews represents a foundation of research knowledge that will guide the next generation of evaluation efforts and allows broad lessons to be derived” (p. 68). Unfortunately, the situation with regard to our evidence base on the associated outcomes and effectiveness of family violence training interventions is no different.

The research and evaluation base on family violence training interventions is mixed in terms of potentially contributing to understanding training effectiveness and the relationship between training and outcomes. This is especially true with regard to elder abuse training (for which there were too few studies to review systematically) and child abuse training. In terms of the latter, although descriptions of training strategies are available, there have been only a handful of attempts to provide corresponding evaluative information. When assessments have occurred, they have nearly all focused on gains in knowledge, and the majority have employed designs that cannot speak even to how training and outcomes may be related. Furthermore, no study was conducted in such a way that confident inferences could be made about the training intervention’s effectiveness on patient outcomes.

The picture is somewhat more promising with regard to training on intimate partner violence. More than two dozen evaluation studies were located, although their methodological quality varied enormously. Again, assessing changes in knowledge, attitudes, and beliefs received the most attention, but concerted attempts have also been made to document changes in screening, identification, and other relevant clinical outcomes that are associated with training, particularly training that accompanies or includes the use of a screening protocol and other forms. Moreover, a small number of randomized field experiments have been conducted that can be used to address questions surrounding the effectiveness of training, and when such designs were not logistically possible (e.g., randomizing medical students to courses), there are notable instances of quasi-experimental designs that employ strong measurement strategies, measure differences in training participation, and attempt to address how well rival explanations are ruled out.

As previously noted, several factors work against launching a concerted effort to improve the number of evaluations that can be conducted and to enhance how they are done. However, it is important to continue evaluating family violence training in ways that can contribute to the knowledge base about the outcomes of these efforts (even if in small increments). Clearly, these must include efforts to document outcomes and the effectiveness of training in child and elder maltreatment. The topic of child abuse and neglect offers an instructive example of evaluation needs. Training efforts for child abuse began to be described in the late 1970s, mandatory reporting requirements now exist, a handful of states require mandatory education in these reporting requirements and child abuse, and there is a national center devoted to addressing child abuse and neglect. However, only seven formal assessments, all of which suffered from methodological weaknesses, could be found.

Training efforts in intimate partner violence also can benefit from more serious scrutiny. The available evidence appears reasonably consistent in suggesting that training is positively associated with greater knowledge about family violence, stronger feelings of comfort and self-efficacy about interacting with battered women, and greater intentions to screen for intimate partner violence. When training is grounded in models of behavior change and how individuals learn, the data allow more confident determination of a link between training and increases in knowledge, attitudes, and behavioral intentions. Furthermore, for those training efforts aimed at practitioners, participants typically outperform their counterparts who did not receive such training in terms of increased rates of screening and identification—at least in the short term and up to two years after training. The same can be said for outcomes associated with safety planning, referrals for necessary services, and other clinical variables (e.g., patient satisfaction).

The available evidence also strongly indicates, however, that training by itself is not sufficient to produce the desired outcomes. Unless clinical settings display commitment to having their staff address the problem of family violence and provide the resources to do it, the effects of training will be short-lived and may erode over time. This suggests that training cannot be seen as a one-shot endeavor (e.g., a course in medical or social work school) and must include those who are responsible for creating the necessary infrastructure to support and reward practitioners for paying attention to identifying and intervening with family violence victims. Although the evidence for this conclusion derives mostly from evaluations of intimate partner violence training efforts, it is likely that the same could be said about child and elder maltreatment training activities.

CONCLUSIONS

• Evaluation of the impact of training in family violence on health professional practice and effects on victims has received insufficient attention.
• Few evaluative studies indicate whether the existing curricula are having the desired impact.
• When evaluations are done, they often do not utilize the experimental designs (randomized controlled trials and group randomized trials) necessary to determine training effectiveness. Also lacking are high-quality quasi-experimental designs necessary to provide a more complete understanding of the relationship of training to outcomes.
• In addition to effective training on family violence, a supportive environment appears to be critically important to producing desirable outcomes.

Next: 6 Training Beyond the State of the Art »

As many as 20 to 25 percent of American adults—or one in every four people—have been victims of, witnesses to, or perpetrators of family violence in their lifetimes. Family violence affects more people than cancer, yet it's an issue that receives far less attention. Surprisingly, many assume that health professionals are deliberately turning a blind eye to this traumatic social problem.

The fact is, very little is being done to educate health professionals about family violence. Health professionals are often the first to encounter victims of abuse and neglect, and therefore they play a critical role in ensuring that victims—as well as perpetrators—get the help they need. Yet, despite their critical role, studies continue to describe a lack of education for health professionals about how to identify and treat family violence. And those who have been trained often say that, despite their education, they feel ill-equipped or lack support from their employers to deal with a family violence victim, sometimes resulting in a failure to screen for abuse during a clinical encounter.

Equally problematic, the few curricula in existence often lack systematic and rigorous evaluation. This makes it difficult to say whether the existing curricula even work.

Confronting Chronic Neglect offers recommendations, such as creating education and research centers, that would help raise awareness of the problem on all levels. In addition, it recommends ways to involve health care professionals in taking some responsibility for responding to this difficult and devastating issue.

Perhaps even more importantly, Confronting Chronic Neglect encourages society as a whole to share responsibility. Health professionals alone cannot solve this complex problem. Responding to victims of family violence and ultimately preventing its occurrence is a societal responsibility.
