Systemic Change: Barriers and Opportunities
In the final sessions of the workshop, speakers offered systemic perspectives on the issue of changing undergraduate education in science, technology, engineering, and mathematics (STEM). Small groups and committee members also reflected on the workshop’s proceedings to identify future directions and next steps.
DIFFUSION OF PROMISING PRACTICES
Melissa Dancy (Johnson C. Smith University) and Charles Henderson (Western Michigan University) discussed their research on reform in undergraduate science education. Dancy began by noting that the research clearly shows that the traditional lecture-based method is ineffective and that alternative methods yield better outcomes. Although there is still room for additional research and development, Dancy said the problems generally are well documented and solutions are available to address them. However, anecdotal evidence suggests that the impact of this research has been minimal in undergraduate science classrooms and that typical classroom practice remains largely lecture-based.
According to Dancy, change is not happening quickly because change strategies are based largely on a development and dissemination model. With this model, education researchers develop and test specific innovations and disseminate the results to instructors. Typically, this model involves telling instructors that the methods they currently use are ineffective and introducing the evidence for alternative practices in the hopes that instructors will adopt them in their classrooms. Dancy said this approach fails to consider contextual factors that influence practice and the ability to change.
The development and dissemination model, in Dancy’s view, also ignores instructors as an important part of the development process, creating fractious relationships between researchers and instructors. Change agents blame instructors for the lack of change. They assume instructors do not realize that their methods are ineffective, are unaware of alternative options, or do not value effective teaching. For their part, instructors blame the change agents. Interviews with five tenured physics faculty who are considered by their peers to be effective teachers revealed high levels of frustration with the research community (Henderson and Dancy, 2008a). Those faculty members reported that education research is dogmatic and sends the message that everything faculty members are doing is wrong and detrimental to student learning. They expressed a desire to be part of the solution, rather than mere targets of the research.
To improve these relationships and accelerate the change process, Dancy offered several ideas. First, she said curriculum developers can provide easily modifiable materials that instructors can adapt to their own situations as their professional judgment warrants. Second, dissemination can focus on the principles behind a curriculum, not just the curriculum itself. And finally, to acknowledge the constraints faculty face at different institutions, she is in favor of conducting explicit research on the conditions for transferring a reform to different environments.
Dancy presented a model to explain the discontinuity between beliefs and actions regarding implementing reformed instruction (see Figure 8-1). The model shows how individual beliefs interact with context to influence practice. When the two are aligned, belief and action are consistent; when they are not aligned, actions are less consistent with beliefs. For example, faculty members who have progressive beliefs about instruction might teach in environments that do not support innovation—the chairs are bolted down, large numbers of students have expectations for traditional instruction, or their colleagues do not use innovative instructional strategies. Because of contextual constraints, these instructors are likely to use more traditional methods than they otherwise might, according to Dancy. For this reason, she said, any change strategies need to consider the context.
In studying the implementation of promising practices, the research community has focused more on the individual than the environment. However, in Dancy’s view, the individual might not represent the greatest point of leverage. Instead, she argued, it would be fruitful to direct more attention to structural changes that could remove barriers to progressive instruction. She also recommended that the research community intensify its efforts to develop models of change beyond the development and dissemination model.
Building on Dancy’s points, Henderson discussed the literature on undergraduate STEM reform. He began by identifying three stakeholder groups: disciplinary STEM education researchers (generally in STEM departments), faculty development researchers (generally in centers for teaching and learning), and higher education researchers (generally in schools of education). Each group has its own journals, conferences, and professional societies. According to Henderson, the literature from all three stakeholder groups is similar and reflects a shift toward a focus on student learning and away from instructors and instruction. However, these groups are conducting their research in isolation from each other, with no overlapping references.
Henderson and his colleagues conducted a systematic study of the literature of the three stakeholder groups and other relevant literature bases (Henderson, Finkelstein, and Beach, 2010). From this review, they developed four categories of change strategies along two dimensions: the focus of the research (individual change versus environmental or structural change) and the extent to which the measure of success is prescribed in advance (prescribed versus emergent outcome).
As Figure 8-2 shows, each category has a different change strategy. For the first category, a prescribed final condition and a focus on changing individuals, the change strategy is to teach or tell individuals about new teaching ideas or practices. This category represents the development and dissemination model that is common to the STEM education research community and to faculty development researchers. In the second category, the focus remains on changing individuals, but the final condition is emergent. The change strategy is to encourage or support individuals to develop new teaching practices; faculty developers are the primary community employing this strategy. Third, with a prescribed final condition and a focus on changing environments or structures, the strategy is to develop new environmental features that require or encourage new teaching conceptions or practices (e.g., policy change, strategic planning). Higher education researchers are doing most of the work in this area. The fourth category combines a focus on changing the environment with an emergent final condition. Higher education researchers are the primary change agents in this category, and the strategy is to empower the collective development of environmental features that support new teaching ideas or practices (e.g., institutional transformation and learning organizations).
In closing, Henderson underscored Dancy’s point that STEM change agents primarily use a development and dissemination model to effect change. They do not draw on approaches from other groups or other disciplines, and they rarely test the effectiveness of the development and dissemination approach. A more fruitful approach, he said, would be to use knowledge from both inside and outside the STEM community to develop better change models and collect empirical data on their effectiveness. In short, he said, such an approach would more closely follow the scientific method.
REFLECTIONS ON LINKING EVIDENCE AND PROMISING PRACTICES IN STEM
James Fairweather (Michigan State University) observed that most efforts to reform undergraduate STEM education start from a presumptive model based on classroom innovation and the teaching and learning process. The premise, he explained, is that hundreds, if not thousands, of individual faculty improvements will lead to a substantial aggregate change. He pointed out, however, that the aggregate effect has not yet reached desired levels, which underscores the need to advance the conversation about reform.
Fairweather labeled the existing body of reforms as a collection of solutions in search of problems. He identified some common goals that are targeted by reforms:
Increasing public awareness of STEM or generally improving STEM literacy.
Stoking the STEM pipeline by attracting K-12 students into STEM, recruiting college students into STEM majors, and improving retention in the majors.
Enhancing the preparation of STEM college students for their professions.
Improving various student learning outcomes, including increased content knowledge, longer-term retention of knowledge, and improved application, synthesis, and problem solving.
Reforming the curriculum.
These goals are divergent and necessitate different approaches, said Fairweather. For example, an effort in one classroom may increase students’ retention of content knowledge, but it might not improve their problem-solving skills or stimulate interest in the field or retention in the major. It would be useful, he said, to identify what each innovation is trying to achieve. Such an analysis would uncover redundancies and gaps and would make it easier to target additional reform efforts that address those gaps.
Fairweather observed that researchers make several assumptions about the nature of evidence in reforming STEM undergraduate education. First, they assume that STEM faculty and administrators require empirical evidence to convince them of the success of education reforms. Second, they assume that the quality of empirical evidence will be judged according to scientific standards in STEM rather than in education. Third, they assume that the demonstration of evidence alone is sufficient to prompt change; in reality, Fairweather said, empirical evidence is necessary but not sufficient.
Fairweather went on to observe that evaluation practices themselves sometimes confound the ability to truly determine the effectiveness of innovative practices. For example, most evaluation in undergraduate STEM education focuses on in-class events, making it difficult to compare and characterize the entire body of knowledge. In addition, researchers rely more on self-report data than on the gold standard of pre-post comparisons. It is also relatively uncommon to link learning objectives, instructional approaches, and evaluation tools. Finally, said Fairweather, although longitudinal studies, in-depth studies, and studies of systemic reform would yield more nuanced understandings, they are the exceptions rather than the rule.
These observations prompted him to list some useful steps related to evaluating promising practices:
Distinguish what is required for any effective teaching or learning environment (e.g., having clear objectives) from what is required to implement a particular pedagogical innovation (e.g., group work).
Recognize that initial results from studies of innovative practices might not be positive, especially if students are engaging in practices that are new to them.
Describe the context, with case studies, in sufficient detail so readers can determine whether the results are applicable to them.
Identify statistical measures (e.g., effect sizes, significance levels) that reflect reasonable and meaningful changes in outcomes.
Distinguish between evaluations for different audiences and purposes, such as helping a faculty member implement an innovation, helping a faculty member document the effects of a classroom innovation, or convincing other faculty members to try the new instructional approach.
Recognize that curriculum reform involves political and cost-effectiveness concerns as well as evidence of impact.
Fairweather also identified some factors that influence the success of innovative strategies. First, he noted that focusing on future rather than current faculty seems to be an effective way to promote reform (see Chapter 7). It is also important, he said, to understand the implicit change model involved with any innovation. Specifically, it is important to recognize whether the change is expected to happen in a linear or nonlinear way; to identify structural impediments to reform; to understand the role of professional societies and accreditation; and to take into account the role of available institutional resources, including professional development. He concluded by emphasizing that “more effort needs to be expended on strategies to promote the adoption and implementation of STEM reforms rather than on assessing the outcomes of these reforms. Additional research can be useful, but the problem in STEM education lies less in not knowing what works and more in getting people to use proven techniques” (Fairweather, 2008, p. 28).
FUTURE DIRECTIONS AND NEXT STEPS
After the presentations, participants broke into small groups to reflect on the two workshops and identify future directions for promoting innovations in undergraduate STEM education. Committee members offered some final thoughts.
Reports from Small-Group Discussions
All of the small groups emphasized the importance of increasing collaboration among the various stakeholders in undergraduate STEM education. They cited the need to forge stronger connections among discipline-based instructors, discipline-based education researchers, education researchers, cognitive scientists, higher education policy researchers, and disciplinary societies. Strengthening these connections, they said, would further scholarship with respect to STEM education and provide opportunities for professional development targeted at implementing research-based practices. Some groups saw value in jointly identifying an umbrella set of challenges that faculty in the STEM disciplines could tackle as a united community.
All of the small groups mentioned the importance of research. Some favored drawing more heavily on existing research. Specifically, they mentioned the extensive literature from other disciplines on faculty development and the idea of requiring National Science Foundation grantees to base curriculum proposals on existing research. Several groups identified the need for additional research, particularly on institutional change and its relation to STEM education. Ideas in this regard included a concerted research initiative around the broad question of what influences faculty members’ teaching decisions; research that examines the drivers of change, the sources of resistance to change, and strategies for overcoming that resistance; the role of influential leaders in promoting change; and a deeper analysis of change strategies that do not work.
Finally, the groups mentioned the importance of disseminating research in a way that makes it enticing and easy for “hungry adopters” to change their practice. The process would take into account the role of textbooks and textbook developers and would involve understanding why more faculty are not adopting innovations and identifying those who might be amenable to changing their practice. According to the small groups, dissemination efforts might include a design manual articulating research-based guidelines for structuring courses and mechanisms for sharing information about innovations within and across disciplines.
Kenneth Heller observed that many of the teaching strategies discussed during the workshop (e.g., case-based learning, problem-based learning, using closed-ended problems or context-rich problems) involve a common set of elements: cooperative group learning, connection to a real problem, and coaching. These methods, he said, seem to be effective.
David Mogk focused on next steps. He cited a need for resources and networks that will engage more faculty in the scholarship of learning and help them become agents of change in their classes, departments, and institutions. Drawing parallels between the scientific method and education research and assessment, he encouraged workshop participants to help their colleagues engage in assessment for the betterment of STEM education and for the health of science and society.
Melvin George remarked on the dearth of discussion about the purpose of improving STEM education, stressing the need to identify a compelling sense of purpose that will generate support for reforms. He also agreed with the need to create a design manual for “hungry adopters.” He concluded by underscoring the points made by Fairweather, Dancy, and Henderson about directing more resources to understanding the factors that influence change versus continuing to study which practices are effective.
Building on George’s points, William Wood added that it is important to understand the role students play, positive and negative, in the change process. He noted that students’ facility with technology and access to information have required instructors to shift away from teaching facts (Prensky, 2001). However, in his experience, students pose barriers to reform because they often resist new pedagogies and are unfamiliar with how to learn. For this reason, in addition to educating instructors about better instruction, Wood stressed a need to educate students about how to learn.
Susan Singer commented that several participants appeared to view further research on effective practices and further research on implementing change as mutually exclusive. She observed that, similar to scientific research, the process of change is iterative and requires both types of research. She also cited a need to develop a broader theoretical framework to guide STEM education research within and across disciplines, expressing the hope that this workshop series is the beginning of a conversation along those lines, rather than the end.