"4 Strategies for Incorporating the Behavioral and Social Sciences into Medical School Curricula." Improving Medical Education: Enhancing the Behavioral and Social Science Content of Medical School Curricula. Washington, DC: The National Academies Press, 2004.
main, formed a course committee, and recruited tutors for the small groups. As a result, many more students now attend the class than was the case when it was the Medical Humanities course (Doug Post, Ohio State University, personal communication, September 2003).
A critical component of any curriculum development project is its evaluation. A well-conducted evaluation serves to legitimize the innovation process, provides feedback to stakeholders, refines the program, and maintains faculty enthusiasm for the change (Bland et al., 2000b; Henry, 1996; Robins et al., 2000). The University of Rochester, for example, formally evaluates its fully integrated curriculum by examining data from the Association of American Medical Colleges’ Graduation Questionnaire. The university also developed a questionnaire given to students at the end of years 1, 2, and 4 that addresses issues relevant to the behavioral and social sciences. This questionnaire includes student perceptions of how well the six curricular themes (many of which relate to the behavioral and social sciences) are taught. As another example, the Social Medicine Department at the University of North Carolina, Chapel Hill (UNC) evaluates its Medicine and Society course in two ways: (1) a centrally distributed course evaluation given to all students at the medical school, and (2) a customized evaluation designed by the Social Medicine Department to address its specific concerns. More empirically, a prospective pretest–posttest controlled trial—the strongest study design for determining the effect of a curriculum intervention (Campbell and Stanley, 1966; Fitz-Gibbon and Morris, 1987; Green, 2001)—has been used to evaluate the impact of education in the behavioral and social sciences on students’ attitudes toward sociocultural issues in medicine (Tang et al., 2002).
Accurate evaluation of curriculum innovation requires time to assess important outcomes, which in some cases may necessitate separate funding dedicated to the completion of the evaluation (Wartman et al., 2001). In fact, any curriculum change effort comes at a cost. Acquiring the funds needed to launch the change may require the establishment of partnerships with external organizations, such as foundations, or the identification of internal sources of funding (Bland et al., 2000b). As the curriculum innovation progresses, care must be taken to ensure that sufficient funds are available to support the change effort so it can continue once the initial grant support has expired.
Evaluating the effectiveness and impact of the innovation can prove useful in leveraging funds for continuing the effort. In addition, funders should consider whether their support for curriculum innovations would have the most impact if directed toward specific departments or a more centralized source. For example, the Interdisciplinary Generalist Curriculum project of the federal Health Resources and Services Administration found that providing money to the dean’s office was probably the best means of effecting a multidisciplinary curriculum change because the curriculum as a whole was affected, and centralized leadership was required.
Because curriculum change is a labor-intensive process, faculty members