In addition to the strategies described in Chapters 4 and 5 to promote conceptual change and improve students’ problem solving and use of representations, scientists and engineers want to provide the most effective overall learning experiences to help students acquire greater expertise in their disciplines. To some extent, those experiences are constrained by institutional context. Undergraduate lecture halls and laboratories provide much of the infrastructure for teaching students in science and engineering. One compelling question is how best to use those resources. An undergraduate course may be structured around traditional lectures offered two or three times weekly along with a laboratory experience. Some scientists and engineers want to explore alternatives to this traditional format. If they were to depart from the lecture-plus-laboratory format, which teaching options does discipline-based education research (DBER) identify as most promising? More important, which options are backed by evidence of their effectiveness in fostering student learning?
A significant portion of DBER focuses on measuring the impact of instructional strategies on student learning and understanding. In this chapter, we summarize that research, discussing the three most common settings for undergraduate instruction—the classroom, the laboratory, and the field—and the effects of instructional strategies on different student groups.
As stated in Chapter 1, two long-term goals of DBER are to help identify and measure appropriate learning objectives and the instructional approaches that advance students toward those objectives, and to identify approaches that make science and engineering education broad and inclusive. This research is motivated, in part, by ongoing concerns that undergraduate science and engineering courses are not providing students with high-quality learning experiences or attracting students to science and engineering degrees (President’s Council of Advisors on Science and Technology, 2012). Indeed, a seminal three-year, multicampus survey examined the reasons undergraduate students switch from science, mathematics, and engineering majors to nonscience majors (Seymour and Hewitt, 1997). The survey revealed that nearly 50 percent of undergraduates who began in science and engineering shifted to other majors. Their reasons for doing so were complex and numerous, but pedagogy ranked high among their concerns. In fact, poor faculty pedagogy was identified as a concern by 83 percent of all science, mathematics, and engineering students. Forty-two percent of white students cited poor pedagogy as the primary factor in their decision to shift majors, compared with 21 percent of non-Asian students of color, who tended to blame themselves and suffered a substantial loss of confidence in leaving the sciences (Seymour and Hewitt, 1997).
Recognizing these challenges, many institutions are working to identify effective approaches to improve undergraduate science and engineering education (Association of American Universities, 2011). DBER, by systematically investigating learning and teaching in science and engineering and providing a robust evidence base for new practices, is playing a critical role in these efforts.
Most DBER studies on instructional strategies are predicated on the assumption that students must build their own understanding in a discipline by applying its methods and principles, either individually or in groups (Piaget, 1978; Vygotsky, 1978). Consequently, with some variations, these studies typically examine student-centered approaches to learning, often asking whether student-centered classes are more effective than traditional lectures in promoting students’ understanding of course content.
A student-centered instructional approach places less emphasis on transmitting factual information from the instructor, and is consistent with the shift in models of learning from information acquisition (mid-1900s) to knowledge construction (late 1900s) (Mayer, 2010). This approach includes
• more time spent engaging students in active learning during class;
• frequent formative assessment to provide feedback to students and the instructor on students’ levels of conceptual understanding; and
• in some cases, attention to students’ metacognitive strategies as they strive to master the course material.
The extent to which DBER on instructional practices is explicitly grounded in broader research on how students learn varies widely. The committee’s analysis revealed that either implicitly or explicitly, the principle of active learning has had the greatest influence on DBER scholars and their studies. With a deep history in cognitive and educational psychology, this principle specifies that meaningful learning requires students to select, organize, and integrate information, either independently or in groups (Jacoby, 1978; Mayer, 2011; National Research Council, 1999). In addition, the framework of cognitive apprenticeship drives many instructional reforms in physics and thus can help to explain research findings about the success of those reforms. As described in Chapter 5, cognitive apprenticeship is based on the idea that complex skills depend on an interlocking set of experiences and instruction whose efficacy, in turn, depends on the learner and the community of practitioners with whom the learner interacts (Brown, Collins, and Duguid, 1989; Yerushalmi et al., 2007).
Although some DBER is guided by learning theories and principles, reports of DBER studies are typically organized around instructional setting. Following that convention, we organize our synthesis of DBER on instruction by setting—classroom, laboratory, and field—before considering the effects of instructional strategies on different groups.
Most of the available research on instruction is conducted in introductory courses. Sample sizes range from tens of students to several hundred students. The preponderance of this research is conducted in the context of a single course or laboratory—often by the instructor of that course, and sometimes comparing outcomes across multiple sections of that course. Fewer studies are conducted across multiple courses or multiple institutions.
Many studies use pre- and post-tests of student knowledge (often with a comparison or control group) to assess some measure of learning gains for one course, typically lasting one semester. These gains often are measured with concept inventories developed for aspects of the discipline or other specialized assessments (see Chapter 4 for a discussion of concept inventories), or with course assignments or exams. Fewer studies measure longer-term gains, or other outcomes such as student attitudes and motivation to study the discipline.
Understandably, most DBER on instructional strategies centers on the classroom setting. The reviews of DBER commissioned for this study (Bailey, 2011; Dirks, 2011; Docktor and Mestre, 2011; Piburn, Kraft, and Pacheco, 2011; Svinicki, 2011; Towns and Kraft, 2011), along with other syntheses (e.g., Allen and Tanner, 2009; Hake, 1998; Handelsman, Miller, and Pfund, 2007; Prince, 2004; Ruiz-Primo et al., 2011; Smith et al., 2005; Wood, 2009) consistently support the view that adopting various student-centered approaches to classroom instruction at the undergraduate level can improve students’ learning relative to lectures that do not include student participation. A limited amount of research suggests that even incremental changes toward more student-centered approaches can enhance students’ learning (Derting and Ebert-May, 2010; Knight and Wood, 2005).
Research from the different fields of DBER reveals some nuances and variations on this theme, which we explore in this section. We have organized this discussion by instructional strategy rather than by discipline because these strategies in themselves are not discipline-specific, and most are implemented in similar learning environments. We include discipline-specific discussions under each strategy where that research was available.
Making Lectures More Interactive
Most undergraduate science and engineering classes are taught in a lecture format. Although traditional lectures can be effective for some students (Schwartz and Bransford, 1998), instructors have a variety of options at their disposal to make lectures more interactive and enhance their effectiveness. These options range in scope and complexity from slight modifications of instructional practice—such as beginning a lecture with a challenging question for students to keep in mind—to devoting most of the instructional time to collaborative problem solving. Research on making lectures more interactive is a significant focus of DBER. Overall, the committee has characterized the strength of the evidence on making lectures more interactive as strong because the findings converge to a high degree, albeit across many studies that were each conducted in the context of a single course and that used a wide variety of measurement tools. This section discusses several options for making lectures and small discussion groups more interactive. Most of these approaches involve enhancing or refining—rather than completely eliminating—the lecture format.
Encouraging Student Participation
Interactive lectures involve students in learning the material, often requiring them to think and apply the content that is covered during class. Several
geoscience education research studies have examined the effectiveness of interactive lectures. One study (Clary and Wandersee, 2007) tested a model of integrated, thematic instruction in the introductory geology lecture. Students in the experimental condition did an in-lecture “mini-lab” with petrified wood and discussed their observations in online discussion groups. Pre-test/post-test application of a researcher-developed survey showed significantly greater gains in the experimental group than in two control groups. Other research examining the use of ConcepTests (short, formative assessments of a single concept), Venn diagrams constructed with student input, and analysis of geologic images during lecture has shown significant differences between control and experimental groups; students who experienced the interactive strategies earned higher exam scores (McConnell, Steer, and Owens, 2003).
Interactive lecture demonstrations are another strategy for encouraging student participation. With this approach, students (1) make predictions about the outcome of a physical demonstration that the instructor conducts in class, (2) explain this prediction with peers and then with the class, (3) observe the event, and (4) compare their observations to their predictions (Sokoloff and Thornton, 2004). Some research on interactive lecture demonstrations indicates that they can improve students’ understanding of foundational physics concepts as measured by the Force and Motion Conceptual Evaluation (Sokoloff and Thornton, 1997). Other research suggests that the prediction phase (consistent with conceptual-change models) is particularly important to the success of an interactive lecture demonstration (Crouch et al., 2004). Similarly, chemistry education research shows that students who worked in small groups to make predictions about lecture demonstrations showed significantly greater improvement on tests than students who merely observed the demonstrations (Bowen and Phelps, 1997).
Another approach is to adapt lectures based on student responses to pre-class or in-class work. The most familiar pre-lecture method is Just-in-Time Teaching. With this approach, students read and answer questions or solve homework problems before class and submit their work to the instructor electronically, with enough time for the instructor to modify the lecture to target student weaknesses or accommodate their interests (Novak, 1999). A moderate amount of evidence suggests that Just-in-Time Teaching is effective in teaching some physics concepts, such as Newton’s Third Law (Formica, Easley, and Spraker, 2010), and is associated with positive attitudes about introductory geology (Linneman and Plake, 2006; Luo, 2008). In biology, Just-in-Time Teaching has been associated with improved student preparation for classes and more effective study habits; students also preferred this format to traditional lectures (Marrs and Novak, 2004).
Other versions of pre-lecture assignments have been associated with gains in student learning. As one example, Multimedia Learning Modules have been associated with improved course performance in physics (Stelzer
et al., 2009). In a large introductory biology course for majors, students who participated in Learn Before Lecture (a simpler approach than Just-in-Time Teaching) performed significantly better than students in traditional courses on Learn Before Lecture-related exam questions, but not on other questions (Moravec et al., 2010).
Although arguably less common, approaches that involve real-time adjustment of instruction also appear to have the potential to improve student learning and performance. In a quasi-experimental study in the geosciences, students in interactive courses were given brief introductory lectures followed by formative assessments that triggered immediate feedback and adjustment of instruction. These students showed a substantial improvement in Geoscience Concept Inventory scores (McConnell et al., 2006).
Audience response systems (“clickers”) are a different approach to encouraging greater student participation in large-enrollment courses. Clickers are small handheld devices that allow students to send information (typically their response to a multiple-choice question provided by the instructor) to a receiver, which tabulates the classroom results and displays the information to the instructor. The value of clickers for in-class formative assessment has been debated. Some biology instructors have reported high student approval and enhanced learning using clickers (e.g., Smith et al., 2009; Wood, 2004), while others have found them less useful and have discontinued their use (Caldwell, 2007). Research in chemistry and astronomy suggests that learning gains are only associated with applications of clickers that incorporate socially mediated learning techniques, such as those discussed in the next section (Len, 2007; MacArthur and Jones, 2008). Overall, the research on clickers indicates that the technology itself does not improve learning outcomes; what matters is how the technology is used (e.g., Caldwell, 2007; Keller et al., 2007; Lasry, 2008).
Regarding clickers—as with instruction more broadly—DBER has not yet systematically used learning theory principles to examine whether certain strategies are more effective for different populations of students, or analyzed the conditions under which those strategies are successfully implemented. However, several authors have offered suggestions for best practices with clicker technology (Beatty et al., 2006; Caldwell, 2007; Smith et al., 2009; Wieman et al., 2008), including posing formative assessment questions at higher cognitive levels and creating socially mediated conditions for learning, such as allowing students to discuss their responses in groups before the correct answer is revealed.
Involving Students in Collaborative Activities
Many transformed courses (i.e., courses in which instructors are using student-centered approaches) incorporate in-class activities where
students collaborate with each other. Consistent with research from science education and educational psychology, DBER has shown that these activities enhance the effectiveness of student-centered learning over traditional instruction (e.g., Armstrong, Chang, and Brickman, 2007; Johnson, Johnson, and Smith, 1998; Smith et al., 2009, 2011; Springer, Stanne, and Donovan, 1999). Moreover, collaborative learning has been shown to improve student retention of content knowledge (Cortright et al., 2003; Rao, Collins, and DiCarlo, 2002; Wright and Boggs, 2002). However, it is important to remember that collaborative learning is not inherently effective, and this approach can be implemented ineffectively (Slavin, Hurley, and Chamberlain, 2003). In this vein, DBER does not yet provide conclusive evidence about the conditions under which these strategies are effective, and for which students.
Think-Pair-Share is a straightforward form of in-class collaborative activity—widely used in K-12 education—that is also referred to as informal cooperative learning (Johnson, Johnson, and Smith, 2011; Smith, 2000). With this approach, the instructor poses a question, often one that has many possible answers; asks students to formulate answers, share their answers, and discuss the question with their group; elicits answers again; and engages in a class-wide discussion. The use of informal groups in this way has been associated with improvements in a variety of outcomes, including achievement, critical thinking and higher-level reasoning, students’ understanding of others’ perspectives, and attitudes about their fellow students, instructors, and the subject matter at hand (Johnson, Johnson, and Smith, 2007, 1998; Smith et al., 2005). Instructors adapt Think-Pair-Share in various ways. Some geoscience education researchers have followed brief introductory lectures with interactive sessions during which students discussed ideas in groups and completed worksheets based on the misconceptions literature. On average, students who participated in the interactive sessions scored higher on tests than students who received only lecture, even when taught by the same instructor during the same semester (Kortz, Smay, and Murray, 2008).
In chemistry, a number of initiatives that stress socially mediated learning have been widely adopted and adapted. In POGIL (Process-Oriented Guided Inquiry Learning),1 students work together in small groups on guided inquiry activities to learn content and science practices. PLTL (Peer-Led Team Learning)2 uses peer team leaders in out-of-class team problem-solving sessions. Both POGIL and PLTL have developed large communities of practice, and there is some evidence that they can improve student outcomes. One mixed-methods study reported significantly improved
outcomes for organic chemistry students in PLTL sections on all course exams and finals, compared with students who learned through traditional lecture courses (Tien, Roth, and Kampmeier, 2002). Other studies have shown that a combination of PLTL and POGIL improved test scores for a cohort of students in general chemistry (Lewis and Lewis, 2005). However, much more research remains to be done to investigate how these pedagogies can best be implemented, how different student populations are affected, and how the fidelity of implementation—that is, the extent to which the experience as implemented follows the intended design—affects outcomes.
To explore the common view that group learning is pragmatically impossible in large-enrollment courses, some astronomy education researchers created and systematically studied a series of collaborative group activities modified specifically for large-enrollment introductory astronomy courses, commonly known as ASTRO 101. We have characterized the strength of this evidence as limited because relatively few studies exist and the results have not been independently replicated. Studies of these activities reveal that students can learn more when collaborative group activities are added to traditional lecture, and that they enjoy the collaborative learning experience more than they enjoy traditional courses (Adams and Slater, 1998, 2002; Skala, Slater, and Adams, 2000). In addition, female-only learning groups performed better than heterogeneous groups in these activities (Adams et al., 2002). Survey responses, course evaluations, and exam performance in large-enrollment (600 students) oceanography courses have also revealed an increased interest in science as well as improvements in subject-matter learning, information recall, analytical skills, and quantitative reasoning for students who were taught with cooperative learning and collaborative assessments (Yuretich et al., 2001).
In addition to being used in large lectures, collaborative activities also are used to make smaller discussion sections more interactive. In physics, Cooperative Group Problem Solving requires students to work in formal, structured groups on specifically designed tasks called context-rich problems (Heller and Heller, 2000; Heller and Hollabaugh, 1992). The design of this highly structured approach is based on research on cooperative learning, a popular method in K-12 education (Johnson, Johnson, and Holubec, 1990; Johnson, Johnson, and Smith, 1991). A limited amount of evidence at the undergraduate level suggests that this approach can contribute to improved conceptual understanding and problem-solving skills (Heller and Hollabaugh, 1992; Heller, Keith, and Anderson, 1992; Hollabaugh, 1995) (see Box 6-1 for a description of other collaborative models used in physics in which a key feature is changing the learning space). Findings from a study in chemistry also indicated that cooperative group problem solving improved students’ problem-solving abilities by about 10 percent, and that this improvement was retained when students returned to individual problem-solving activities (Cooper et al., 2008). In that study, the only students who did not benefit from this activity were students with the lowest scores on a logical thinking test who were paired with students of similar ability.

BOX 6-1
Changing the Learning Space: Some Examples from Physics

Several physics education reforms have involved redesigning the learning space. Based on the model of cognitive apprenticeship (see Chapter 5), these redesigns also involve dramatic changes to the way physics is taught, reducing the amount of lecturing and often integrating laboratory and lecture. Some examples include the following:

Workshop Physics. Developed at Dickinson College, Workshop Physics taught university physics entirely within the laboratory, using the latest computer technology. Students preferred workshop courses, and students in these courses generally outperformed students in traditional courses on conceptual exams but not in problem solving (Laws, 1991, 2004).

Studio Physics. Developed at Rensselaer, Studio Physics redesigned teaching spaces to accommodate an integrated lecture/laboratory course. Early studies showed little improvement in students’ conceptual understanding or problem-solving skills, despite the popularity of the innovation. Later implementations, which added research-based curricula, resulted in improved learning of content over traditional courses (Cummings et al., 1999; Sorensen et al., 2006) but not always improvements in problem solving (Hoellwarth, Moelter, and Knight, 2005).

SCALE-UP. Developed at North Carolina State University, the Student-Centered Active Learning Environment for Undergraduate Programs (SCALE-UP) begins with a redesign of the classroom. Each room holds approximately 100 students, with round tables that accommodate 3 laptops and 9 students, whiteboards on several walls, and multiple computer projectors and screens so every student has a view. Students engage in hands-on activities and with computer simulations, work collaboratively on problems, and conduct hypothesis-driven experiments. SCALE-UP students have better scores on problem-solving exams and concept tests, slightly better attitudes about science, and less attrition than students in traditional courses (Beichner et al., 2007; Gaffney et al., 2008).
Teasing apart the benefits of collaborative group versus individual problem-solving practice is difficult, as is following changes in problem-solving ability over time, particularly in large classes. Some recent work has been done on the development and validation of tools for comparing collaborative and individual problem-solving strategies in large (60-100 students) biochemistry courses, with students discussing ill-defined problems in small online groups (Anderson, Mitchell, and Osgood, 2008), and then working through individual electronic exams based on similar, but not identical, problems (Anderson et al., 2011).
Other Instructional Strategies
Some DBER exists on other popular instructional strategies that are not necessarily interactive. We have characterized the strength of conclusions that can be drawn from this evidence as limited because relatively few studies exist and the findings across disciplines are contradictory. For example, in traditional and student-centered classes alike, analogies and explanatory models are widely used pedagogical tools to help students see similarities between what they already know and unfamiliar, often abstract concepts (Clement, 2008). Some physics education research suggests that use of analogies during instruction of electromagnetic waves helped students generate inferences, and that students taught with the help of analogies outperformed students who were taught traditionally (Podolefsky and Finkelstein, 2006, 2007a). Further research indicates that blending multiple analogies to convey wave concepts can lead to better student reasoning than using single analogies or standard abstract representations (Podolefsky and Finkelstein, 2007b). A possible explanation for this finding is that using multiple analogies may have helped learners to see the general pattern across the separate analogies (Gentner and Colhoun, 2010), rather than becoming overly attached to the specific features of any one analogy. This result echoes findings from cognitive science that multiple analogies facilitate problem solving because they help solvers to construct a general schema for the common underlying solution procedure (Catrambone and Holyoak, 1989; Gick and Holyoak, 1983; Novick and Holyoak, 1991; Ross and Kennedy, 1990).
In contrast to findings from physics education research, a series of chemistry education research studies identifies the challenges of using analogies for college students who had successfully completed at least one biochemistry course (Orgill and Bodner, 2004, 2006, 2007). In those studies, faculty used analogies to identify similar features between the already-known concept and the concept to be learned, with the goal of facilitating the transfer of knowledge from one setting to another. However,
the instructors often did not identify where the analogy broke down or failed to be useful. As a result, students overgeneralized the features of the known situation, thinking that all features were represented in the target. This overgeneralization impaired student learning.
Another approach in teaching science and engineering is to present abstract concepts and then follow them with a specific worked example (sometimes called a “touchstone example”) to illustrate how the concepts are applied to solve problems. With this approach, students’ understanding of the concept often becomes conflated with the particulars of the example that is used. As a result, students may have difficulty separating the solution from the specifics of a particular problem, which may limit their ability to apply knowledge of the concept in other settings. This phenomenon is known as the “specificity effect” and has been demonstrated in several physics education research studies (Mestre et al., 2009) as well as basic studies in cognitive science.
Supplementing Instruction with Tutorials
The tutorial approach is a common instructional innovation in physics and astronomy, and represents a significant area of research and development for physics and astronomy education research. With a tutorial approach, instructors are provided with a classroom-ready tool to target a specific concept, elicit and confront tenacious student misconceptions, create learning opportunities, and provide formative feedback to students.
The University of Washington physics education research group has developed several Tutorials in Introductory Physics (McDermott and Shaffer, 2002), and numerous studies have demonstrated that these tutorials significantly improve student understanding of the targeted concepts and of scientific reasoning more generally (see review by Docktor and Mestre, 2011, for a detailed listing of relevant publications). The success of the University of Washington tutorials has inspired other research groups to create and evaluate tutorial-style learning interventions (e.g., Elby, 2001; Steinberg, Wittmann, and Redish, 1997; Wittmann, Steinberg, and Redish, 2004, 2005). In physics, these adaptations are predominantly used in a recitation or discussion section.
Astronomy education researchers have successfully modified the tutorial approach to be used in a lecture classroom environment. For example, Lecture-Tutorials for Introductory Astronomy (Prather et al., 2004, 2007) is a widely used series of short-duration, highly focused, highly structured learning activities. Instructors lead students through a purposeful sequence of carefully constructed questions designed to move the learner toward a more expert-like understanding. Several studies have shown that the lecture-tutorial approach is more effective than lecture-dominated courses
in improving students’ understanding in astronomy (Alexander, 2005; Bailey and Nagamine, 2009; Lopresto, 2010; Lopresto and Murrell, 2009). One study of multiple introductory science courses across multiple institutions revealed that adaptations of the astronomy approach for introductory geoscience courses improved students’ test scores in those courses (Kortz, Smay, and Murray, 2008).
Learning science and engineering takes place not just in classrooms, but also in laboratories3 and in the field. Well-designed laboratories can help students to develop competence with scientific practices such as experimental design; argumentation; formulation of scientific questions; and use of discipline-specific equipment such as pipettes, microscopes, and volumetric glassware. However, laboratories that are designed primarily to reinforce lecture material do not necessarily deepen undergraduate students’ understanding of the concepts covered in lecture (Elliott, Stewart, and Lagowski, 2008; Herrington and Nakhleh, 2003; Hofstein and Lunetta, 1982; Kirschner and Meester, 1988; Lazarowitz and Tamir, 1994; White, 1996). Indeed, a 2004 review of more than 20 years of research on laboratory instruction found “sparse data from carefully designed and conducted studies” to support the widely held belief that laboratory learning is essential for understanding science (Hofstein and Lunetta, 2004, p. 46).
Relatively few DBER studies focus on the laboratory environment. We have characterized the strength of evidence as moderate in physics because the research base includes a combination of smaller-scale studies (e.g., a single course or section) and studies that have been conducted across multiple courses or institutions, with general convergence of findings. In chemistry, engineering, biology, the geosciences, and astronomy, the strength of the conclusions that can be drawn from this research is limited.
One of the criticisms of traditional laboratory manuals is that they do not reflect what scientists actually do: develop hypotheses, design and conduct experiments, make decisions about measurement error versus equipment sensitivity, and report their findings. Several reformed physics curricula include laboratory experiences that are aligned with scientific practices (see, for example, Investigative Science Learning Environment [Etkina and Van Heuvelen, 2007], Physics by Inquiry [McDermott et al., 1996a, 1996b], and Modeling Instruction [Brewe, 2008]). In these laboratory exercises, students record observations, develop and test explanations, refine existing models, and build and refine their own causal models through experimentation.

3It was beyond the scope of this committee’s charge to define what constitutes a laboratory course (see National Research Council  for a definition of laboratory experiences for K-12 education). Recognizing the wide range of laboratory experiences—and the variations within and across disciplines—in this report, we describe what is commonly practiced in each discipline by using the operational definitions of laboratory employed in the research we reviewed.
Studies of specific curricular innovations show that these types of laboratories are more effective than traditional laboratories for developing students’ ability to design experiments, collect and analyze data, and engage in more authentic scientific communication (Etkina et al., 2006, 2010; Karelina and Etkina, 2007). These laboratories also contribute to positive attitudes about introductory physics, as measured by the Colorado Learning Attitudes about Science Survey (Brewe, Kramer, and O’Brien, 2009), in contrast to most other introductory physics courses (Redish, Steinberg, and Saul, 1998). A limited amount of evidence suggests that some of these benefits may extend beyond the laboratory setting. For example, one study showed that the skills learned in a reformed physics laboratory can transfer to novel tasks in biology (Etkina et al., 2010). In another study, students in a reformed laboratory outperformed their peers from traditional laboratories on course exam problems (Thacker et al., 1994).
Some physics education research has examined the use of technology in the laboratory setting. One curriculum, RealTime Physics Active Learning Laboratories, targets known misconceptions by using microcomputer-based technologies to instantly analyze formative data and provide immediate feedback to the student. Studies of RealTime Physics show gains on the Force and Motion Conceptual Evaluation (Sokoloff and Thornton, 1997) over traditional laboratories, although the contribution of the instantaneous feedback to students’ learning is debated (Beichner, 1990; Brasell, 1987; Brungardt and Zollman, 1995). A limited amount of evidence also suggests that video-based laboratories, in which students either create their own videos of motion in the laboratory or use provided videos, such as of a space-shuttle launch, and then analyze the videos using specific software programs, can improve students’ understanding of kinematics and kinematics graphs (Beichner, 1996). In addition, interactive computer simulations of physical phenomena can lead to improved student performance on laboratory reports, exam questions, and performance tasks (e.g., assembling real circuits) over traditional instruction (Finkelstein et al., 2005).
The chemistry laboratory is where the properties of chemicals and the reactions between them become visible, and where chemists relate the observable properties of compounds to their molecular structure. For chemistry faculty, the laboratory is integral to learning chemistry. Given the expense of laboratory instruction, however, department chairs and faculty administrators increasingly ask whether students can learn chemistry without laboratories.
Despite its importance in the curriculum, the role of the chemistry laboratory in student learning has gone largely unexamined. The research that has been done has investigated faculty goals for laboratory learning, the role of graduate students as teaching assistants in the laboratory, experiments to restructure the laboratory with an inquiry focus, and students’ interactions with instrumentation in the laboratory.
An interview study of chemistry faculty revealed that faculty goals vary in how much they emphasize connecting laboratory to lecture, promoting students’ critical thinking, providing experiences with experimental design, and teaching students about uncertainty in measurement (Bruck, Towns, and Bretz, 2010). Research on students’ experiences in general chemistry (Miller et al., 2004) and analytical chemistry (Malina and Nakhleh, 2003) suggests that such variation can influence students’ views of laboratory learning. Depending on how faculty members structure the laboratory experiment and assess student learning, students can view instruments simply as objects, without any knowledge of their internal workings, or as useful tools for collecting evidence about the behavior of molecules and their properties.
Domin (1999) has characterized inquiry in chemistry laboratories as ranging from deductive experiences (“explain, then experiment”) to inductive experiments (“experiment, then explain”). To explore learning along this continuum, Jalil (2006) designed a laboratory course with both kinds of experiments, finding that although students initially preferred deductive experiments, they eventually came to value the inductive approach because the experiments provided them with knowledge for subsequent learning in lecture. Although the label “inquiry” is often synonymous with inductive experiments, one analysis (Fay et al., 2007) found that neither commercially published laboratory manuals nor peer-reviewed manuscripts that self-identify as “inquiry” score very high on Lederman’s rubric of scientific inquiry, which was designed to assess the level of scientific inquiry occurring in high-school science classrooms. This research has been extended to other disciplines with similar results (Whitson et al., 2008).
Regarding the effect of laboratories on learning, emerging evidence suggests that students in an open-ended, problem-based laboratory format improve their problem-solving skills (Sandi-Urena et al., 2011, in press). The science writing heuristic—which combines an instructional technique to improve the flow of activities during an experiment with an alternative format for writing laboratory reports—is another approach to improve student learning. Research has shown that students who were taught by
teaching assistants who implemented the science writing heuristic appropriately showed significant improvements on their lecture exam scores (Rudd, Greenbowe, and Hand, 2007). In contrast, traditional laboratories that confirm the knowledge students may already possess do not appear to increase their understanding or retention (Gabel, 1999; Hart et al., 2000; Hofstein and Mamlok-Naaman, 2007).
Biology education research studies on instruction in the laboratory setting typically examine the outcomes of inquiry-based laboratories, often in comparison to traditional laboratories. The design of inquiry-based laboratories is based on the concept of the learning cycle, in which students pose questions, confront their misconceptions, develop hypotheses, and design experiments to test them (Johnson and Lawson, 1998; Lawson, 1988). In the best of these laboratories, students answer research questions using online datasets (e.g., genomic sequence data) (Shaffer et al., 2010) or even contribute to such datasets by isolating and characterizing previously undiscovered life forms (e.g., Hanauer et al., 2006). This work can lead to research publications with students as co-authors (e.g., Hatfull et al., 2010).
Although the committee has characterized the strength of the findings as limited, the evidence from biology education research suggests that when compared with traditional laboratory exercises, inquiry-based laboratories can improve students’ learning and their short-term retention of biology content (Halme et al., 2006; Lord and Orkwiszewski, 2006; Rissing and Cogan, 2009; Simmons et al., 2008). Inquiry-based laboratories also can improve students’ competency with science practices and confidence in their ability to do science (Brickman et al., 2009), and may increase retention of students in the major (Seymour et al., 2004). It is not clear, however, whether inquiry-based laboratories are more effective in dispelling common misconceptions on such topics as the nature of cellular respiration and the origins of plant biomass.
As one example of an inquiry-based laboratory, the Genomics Education Partnership used the Classroom Undergraduate Research Experience and pre- and post-test assessments to evaluate the impact of an authentic Drosophila genome annotation project on learning in 472 students at 46 participating institutions (Shaffer et al., 2010). The experimental design allowed for comparisons in knowledge gains between students who identified elements on the genome and engaged in more extensive characterization and students who only identified elements on the genome. For the latter group, pre- and post-test scores were the same. In contrast, the post-test scores of students who engaged in both tasks were nearly twice as high as their pre-test scores. This effort stands out in the biology education research
literature because of the scale of the study and the range of institutions involved.
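Pre- and post-test comparisons of this kind are often summarized in DBER using Hake's normalized gain, which expresses the observed improvement as a fraction of the improvement the pre-test left room for. The sketch below is purely illustrative; the function name and the 100-point maximum are our assumptions, not details of the studies cited above.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max_score - pre).

    A gain of 1.0 means students learned everything they had not
    already mastered; 0.0 means flat pre/post performance.
    """
    if pre >= max_score:
        raise ValueError("pre-test score is already at the ceiling")
    return (post - pre) / (max_score - pre)

# A class moving from 40% to 70% achieves a normalized gain of 0.5;
# identical pre- and post-test scores give a gain of 0.
```

Because the denominator conditions on the pre-test score, normalized gain allows comparisons between groups that start at different achievement levels, which is one reason it is widely used with concept inventories.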
Unique among DBER fields, engineering is an externally accredited practice-based profession. As a result, undergraduate engineering education involves developing technical competencies and preparing graduates for practice (Lynch et al., 2009). Engineering educators are therefore concerned with both affective and cognitive outcomes of laboratory experiences (Feisel and Rosa, 2005). Along these lines, recent efforts to develop inquiry-based engineering laboratories to foster student engagement seem promising (Kanter et al., 2003), although this research is at an early stage and the committee’s review revealed little research on how such laboratories affect students’ learning. A follow-up paper to a colloquy on the role of laboratory instruction in engineering noted “the lack of coherent learning objectives for laboratories and how this lack has limited the effectiveness of laboratories and hampered meaningful research in this area” (Feisel and Rosa, 2005, p. 121).
As with the other fields of DBER, the laboratory is understudied in the geosciences. One study of an introductory geoscience laboratory showed that students who completed the optional laboratory in conjunction with an introductory-level, lecture-based course earned higher final exam scores than students who completed only the lecture course (Nelson et al., 2010). Students over age 25 benefitted much more from the laboratory than students of conventional college age. Older students who took the laboratory option performed 21 percent higher than older students in the lecture-only course, whereas college-age students performed about 3 percent higher than their lecture-only counterparts. Students over age 25 and of conventional college age had similar GPAs and course grades, on average.
A limited amount of research on the introductory astronomy laboratory suggests that online datasets might have some benefits for undergraduate students. For example, the highly structured task of repeatedly querying large online datasets can enhance students’ understanding of the nature of scientific inquiry (Slater, Slater, and Lyons, 2010; Slater, Slater, and Shaner, 2008). In addition, undergraduate students’ understanding of the difference between data and evidence can be enhanced when they are explicitly
taught to develop their own research questions and conduct investigations over the duration of a course (Lyons, 2011). One study has shown that this approach works equally well for students in face-to-face collaborative groups and individually in the relatively isolated environment of an internet-delivered astronomy course (Sibbernsen, 2010).
For some disciplines, learning in the field is just as important as learning in the classroom or laboratory. The geoscience curriculum, for example, has had field instruction at its core for more than a century (Mogk and Goodwin, 2012). Field learning in the geosciences encompasses a variety of activities, ranging in scale from a single outdoor class activity (perhaps with a duration of only an hour or two), to sustained individual or group projects, short- or long-term residence programs, capstone field camps at the undergraduate level, and group or individual field projects at the undergraduate or graduate level (Butler, 2008; Mogk and Goodwin, 2012; Whitmeyer, Mogk, and Pyle, 2009).
The geoscience education literature is replete with descriptions of instructional activities in the field. However, reports of the efficacy of these activities are largely observational and anecdotal. We have characterized the strength of this evidence as limited because few studies exist and they have typically been conducted in the context of a single field course. The available research measures a variety of outcomes, and suggests that field courses can positively affect the attitudes, career choices, and lower- and higher-order cognitive skills of student participants as measured by survey instruments designed to assess these outcomes (Huntoon, Bluth, and Kennedy, 2001); improve introductory students’ understanding of concepts in the geosciences as measured by the Geoscience Concept Inventory (Elkins and Elkins, 2007); and contribute to the development of teamwork, decision-making, autonomy, and interpersonal skills (Boyle et al., 2004; Stokes and Boyle, 2009). Several scoring rubrics are helping to standardize the assessment of learning outcomes in the field (e.g., Pyle, 2009).
Some studies have used GPS tracking devices to monitor students at work in the field. Building on the cognitive science field of naturalistic decision making (Klein et al., 1993; Lipshitz et al., 2001; Marshall, 1995; Zsambok and Klein, 1997), some geoscience education research has analyzed the navigational choices of students who were engaged in independent field work and correlated those choices with performance (Riggs, Balliet, and Lieder, 2009; Riggs, Lieder, and Balliet, 2009). That research reported an optimum amount of relocation and backtracking in field geology: too much retracing indicates confusion, and too little reoccupation of key areas appears to accompany a failure to recognize important geologic features.
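The published analyses relied on expert interpretation of students' movements; purely as a hedged illustration of how "relocation and backtracking" might be quantified from raw GPS fixes, one could compute the fraction of fixes that fall near ground the student covered earlier. The function name, radius, and lag below are our assumptions, not parameters from the cited studies.

```python
import math

def backtrack_fraction(track, radius=10.0, lag=5):
    """Fraction of GPS fixes that revisit ground covered earlier.

    track:  list of (x, y) positions in metres (projected coordinates)
    radius: distance (m) within which a fix counts as a revisit
    lag:    number of immediately preceding fixes to ignore, so that
            slow continuous movement is not counted as backtracking
    """
    if not track:
        return 0.0
    revisits = 0
    for i, (xi, yi) in enumerate(track):
        earlier = track[:max(0, i - lag)]
        if any(math.hypot(xi - xj, yi - yj) <= radius for xj, yj in earlier):
            revisits += 1
    return revisits / len(track)
```

In practice the radius and lag would need tuning to GPS accuracy and sampling rate, and a metric like this could only complement, not replace, field observation of what students actually did at each location.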
Most of the studies the committee reviewed were not designed to examine differences in terms of gender, ethnicity, socioeconomic status, or other student characteristics. However, physics education research has explored the impact of instructional innovations on females and minorities. For example, the positive impacts of SCALE-UP appear to be even greater for females and minorities (Beichner et al., 2007). In contrast, researchers studying the early implementation of Workshop Physics discovered that the attitudes of females about the course were significantly worse than those of males, and that females’ dissatisfaction arose from the alternative format of Workshop Physics, difficult laboratory partners, and time demands (Laws, Rosborough, and Poodry, 1999).
Some physics education researchers designed a course called Extended General Physics specifically for students whom they identified as likely to struggle with college physics. Nearly 70 percent of the students who enrolled were female, and the course included greater proportions of underrepresented minorities than traditional physics courses. Among other features, the course incorporated several student-centered pedagogies, including collaborative activities. Students in this course had a higher retention rate, higher grades, and better attitudes than their peers in the traditional section, and these differences were particularly pronounced for females and minorities. Moreover, students in Extended General Physics and traditional courses scored similarly on common exam questions, indicating that Extended General Physics was at least as rigorous as the traditional physics course (Etkina et al., 1999).
Along similar lines, a handful of biology education research studies suggest that first-year students from underrepresented groups perform better in biology courses that offer supplemental instruction (Barlow and Villarejo, 2004; Dirks and Cunningham, 2006; Matsui, Lui, and Kane, 2003). This effectiveness might be at least partially attributed to the cooperative learning that is typically included in supplemental instruction (Rath et al., 2007).
A few astronomy education research studies also have examined differences between males and females. One study showed that males outperform females on the Astronomy Diagnostic Test, leading the study’s authors to conclude that the concept inventories developed for astronomy (see Chapter 4) might have some inherent biases (Brogt et al., 2007; Hufnagel, 2002; Hufnagel et al., 2000). In a separate study, female students in ASTRO 101 started at lower achievement levels than their male counterparts, but the use of curriculum materials designed to improve quantitative reasoning skills closed those initial gaps (Hudgins et al., 2006).
• Across the science and engineering disciplines in this study, DBER clearly indicates that student-centered instructional strategies can positively influence students’ learning, achievement, and knowledge retention, as compared with traditional instructional methods. DBER does not yet provide evidence on the relative effectiveness of different student-centered strategies, whether different strategies are differentially effective for learning different types of content, or the effectiveness of strategies for subgroups of learners.
• Research on the use of various learning technologies suggests that technology can enhance students’ learning, retention of knowledge, and attitudes about science learning. However, the presence of learning technologies alone does not improve outcomes. Instead, those outcomes appear to depend on how the technology is used.
• Despite the importance of laboratories in undergraduate science and engineering education, their role in student learning has largely gone unexamined. Research on learning in the field setting is similarly sparse.
Despite the large body of DBER on the benefits of student-centered instruction and of instruction that involves the use of technology, important gaps remain. With some exceptions, the studies the committee reviewed measure learning within the context of a single course. Multi-instructor, multi-institutional studies are needed to move beyond the idiosyncrasies of instructional approaches that work well only in the presence of certain instructors or with students who fit a particular profile. More work also is needed on large-scale projects such as POGIL, to better understand the conditions under which its materials are successfully implemented and to provide insights into how the effective use of these materials and associated pedagogy can be reliably supported. Additional research examining the influence of student-centered instruction on other types of outcomes, such as declaring a major, retention in the major, and pursuit of further study, also would be helpful. And finally, longitudinal studies are needed to gauge the effects of student-centered instruction on the long-term retention of conceptual knowledge and on the application of foundational skills and knowledge to progressively more challenging tasks.
Most of the research on instructional strategies has been conducted in introductory courses. Less evidence exists regarding the efficacy of different
instructional approaches in upper-division courses, although some has been conducted (see, for example, Chasteen and Pollack, and Smith et al.). Within introductory courses it is unclear whether student-centered learning environments affect different student populations differently, because DBER scholars rarely compare the effects of a given strategy for different student populations. Populations of interest for future study include students who are underrepresented in science, including students for whom English is a second language, females, and ethnic/racial minorities. It also would be useful to explore the dimensions of overall science performance, quantitative skills, and spatial ability. Further study is needed on strategies to accommodate students with disabilities into the full suite of instructional opportunities, especially laboratory and field-based learning.
Across the disciplines in this study, the role of the laboratory class is poorly understood. It would be helpful for scientists, engineers, and DBER scholars to identify the most important outcomes of a well-designed laboratory course, then to design instruction specifically targeted at those outcomes and instruments for routinely assessing those outcomes. Future DBER might compare learning outcomes associated with different types of laboratory instruction (e.g., free-standing versus laboratory activities that are integrated into the main course) and compare outcomes in courses where laboratories are required, optional, or not offered. In addition, laboratory activities in which students conduct inquiry on large, professionally collected data sets (such as genomics data and geoscience datasets served by the U.S. Geological Survey, the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and various university consortia) have grown in prominence in recent years (Hays et al., 2000), but have been little studied.
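Comparisons of the kind proposed here (for example, required versus optional versus no laboratory) are easiest to aggregate across courses and institutions when reported as standardized effect sizes. As a sketch of one common convention, and not a method attributed to any study cited above, Cohen's d with a pooled standard deviation can be computed as follows.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference with pooled SD.

    Positive values indicate group_a outperformed group_b, in units
    of the pooled within-group standard deviation.
    """
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Reporting effect sizes alongside raw score differences would make the multi-course and multi-institution syntheses called for in this chapter considerably more tractable.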
Additional research also is needed on field-based learning. Specifically, which types of field activities promote different kinds of learning and which teaching methods are most effective for different audiences, settings, expected learning outcomes, or types of field experiences? The research base is particularly sparse regarding the degree of scaffolding needed for different types of field activities, and which types of field projects are optimal for a given learning goal (Butler, 2008). Given the expense and logistical challenges of field-based instruction, it is important to identify which learning goals (if any) can only be achieved through field-based learning, and which (if any) could be achieved through laboratory or computer-based alternatives. These studies also should explore affective dimensions of field learning, including motivations to learn science and cultural and other barriers to learning.
In studying the efficacy of different instructional approaches, DBER scholars must take into account the time constraints of instructors. Future DBER studies might document the time associated with different
instructional approaches and explore which approaches are most efficient for supporting students’ learning in terms of faculty effort. At the same time, research into enhancing the effectiveness of graduate teaching assistants and paraprofessionals such as full-time laboratory instructors can explore ways to make student-centered instruction an economically viable approach, even at a time of shrinking funding for higher education.