surrogates of the relevant outcomes. These issues are especially relevant to research on family violence, given that study authors have frequently developed their own knowledge tests, attitude questionnaires, and chart-review forms to assess practitioner attitudes and practices but have either failed to assess the psychometric properties of these instruments or reported marginal results, such as internal consistencies below 0.70 (e.g., Finn, 1986; Saunders et al., 1987).

Among the 16 evaluations that examined improvements in knowledge, attitudes, and beliefs about intimate partner violence, all but two developed their own measures. Slightly less than half of these presented no data on the reliability (e.g., internal consistency) of the instruments, even though total and subscale scores were derived from them. The remainder either referred readers to previously published data on the measures or provided their own assessments of internal consistency (the preferred strategy), which were generally at acceptable levels (Cronbach's α = 0.70 or higher).
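The 0.70 threshold cited here refers to Cronbach's alpha, a statistic computed directly from item-level responses: α = k/(k−1) · (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch in Python may make the calculation concrete; the response data below are purely illustrative and not drawn from any of the cited studies.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of questionnaire items.

    items: list of equal-length sequences, one sequence per item,
    each holding the scores of all respondents on that item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))


# Hypothetical 4-item questionnaire answered by 5 respondents (1-5 scale).
responses = [
    [3, 4, 4, 2, 5],
    [3, 5, 4, 2, 4],
    [4, 4, 5, 1, 5],
    [2, 4, 4, 2, 4],
]
print(round(cronbach_alpha(responses), 2))  # → 0.93, above the 0.70 benchmark
```

Higher alpha indicates that the items vary together, i.e., that they plausibly measure a single underlying construct; values below 0.70 (as in the early studies cited above) suggest the total score mixes unrelated item content.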

The most concerted efforts at instrument development have been carried out by Short et al. (2000), Maiuro et al. (2000), and Thompson et al. (2000). In Short et al.'s (2000) evaluation of the domestic violence module for medical students at the University of California, Los Angeles, the authors not only examined both the internal consistency and the test-retest reliability of the knowledge, attitudes, beliefs, and behaviors scale they developed but also assessed the construct validity of the intervention itself (i.e., expert ratings of whether it contained the appropriate content, used a problem-based approach, and employed varied training methods). Maiuro and colleagues (2000) developed a 39-item instrument to assess practitioner knowledge, attitudes, and beliefs, and self-reported practices regarding the identification and management of family violence. This instrument exhibited internal consistency (Cronbach's α = 0.88), content validity, and sensitivity to change and was later used by Thompson et al. (2000) to assess training outcomes for primary health clinic staff.

When protocols for asking individuals about intimate partner violence were used, Campbell et al. (2001), Covington, Dalton, et al. (1997), and Covington, Diehl, et al. (1997) drew items from the Abuse Assessment Screen, whose validity has been investigated (Soeken et al., 1998). Thompson et al. (2000) used items that had been validated by McFarlane and Parker. Clinical skills (e.g., asking about intimate partner violence or correctly diagnosing abuse) in medical students and residents were assessed with standardized patient visits and case vignettes, with two exceptions: Knight and Remington (2000) used a patient interview to determine whether trained residents had asked the woman about intimate partner violence, and Bolin and Elliott (1996) had residents report daily on the number of conversations about intimate partner violence they had with the patients they saw.

To measure screening prevalence, identification rates, documentation, and referrals, evaluations of intimate partner violence training relied on reviews of patient charts. The typical practice was to use standardized forms

Copyright © National Academy of Sciences. All rights reserved.