Linking Learning Goals and Evidence
This chapter and the next summarize the June workshop, which focused on different learning goals for undergraduate students in science, technology, engineering, and mathematics (STEM) and different types of evidence related to those goals.
EXAMPLES FROM THE DISCIPLINES
In the first session related to this topic, moderator Adam Gamoran (University of Wisconsin, Madison) introduced three panelists who used examples from chemistry, evolutionary ecology, and physics to address the following questions:
What are, and what should be, some of the most important learning goals for science students in lower division courses?
What types of evidence would be needed to conclude that a specific goal had been achieved?
Are there some types of evidence that carry more weight? If so, what makes that evidence particularly compelling?
Cathy Middlecamp (University of Wisconsin, Madison) explained that the American Chemical Society sponsors Chemistry in Context, a long-term curriculum development project. The curriculum breaks the mold of traditional general chemistry courses by integrating key chemistry concepts
within a coherent framework focused on real-world issues. The placement of chemical principles and concepts is driven by what students need to know in order to understand the science related to each real-world issue (Middlecamp, 2008).
The curriculum targets two types of learning goals: (1) goals for student attitudes and motivation and (2) goals for student knowledge. The motivation goals are to give students a positive learning experience in chemistry and to motivate them to learn chemistry. The specific goals for student knowledge are to promote broader chemical literacy; to help students better meet the challenges of today’s world; and to help students make choices, informed by their knowledge of chemistry, to use natural resources in wise and sustainable ways.
Middlecamp then turned to the evidence. She noted that there has been no formal evaluation of Chemistry in Context, and there is no ongoing assessment of student learning. In addition, no evidence has been collected on the number of faculty members using the curriculum or about why they select it. Most of the available evidence related to the motivation goals and student knowledge goals is gathered locally by instructors for the purpose of improving instruction and is not disseminated beyond the department or campus. Evidence of progress toward motivation goals includes student attitude surveys, evaluations of the instructor, and student behaviors after taking the course (such as taking further chemistry courses or participating in discussions of chemistry in informal settings). As an example, Middlecamp presented survey data from more than 2,000 students she taught using Chemistry in Context.
Evidence of student knowledge goals includes direct measures of student performance in class (tests, demonstrated skills), student surveys, and course-level data (e.g., class completion rate). To illustrate, Middlecamp presented a breakdown of responses from 1,172 students who had taken the course. When asked about the extent of their learning gains in “connecting chemistry to your life,” more than 450 students (38 percent) responded that they had gained “a lot” and another 400 (34 percent) reported “a great deal.” In response to the statement, “the lecturer makes the course interesting,” 74 percent strongly agreed, and 16 percent agreed. Reflecting on the quality of this evidence, Middlecamp noted that, while compelling to individual instructors, it is local, anecdotal, and nonsystematic.
Middlecamp argued that, despite the weakness of the evidence collected to date, Chemistry in Context is successful in terms of two larger goals of the project—to be adopted and adapted widely and to catalyze development of STEM curricula that take a similar approach. Success in achieving these goals is measured by different types of evidence, including the number of textbooks sold, the continued attendance at faculty workshops, and the translation of the book into other languages. For example, data indicating that sales have risen from about 6,000 for the first edition, published in 1994, to an estimated 23,000 for the sixth edition, published in 2008, show that adoption of the curriculum is growing. Translations into other languages and other regional and cultural contexts are evidence that the curriculum is adaptable.
Middlecamp suggested that two factors—the role of professional societies and the sustainability challenge—have helped advance the goals of wide adoption and catalyzing development of similar curricula. The American Chemical Society’s sponsorship of Chemistry in Context, including its active role in dissemination, played a role in the early success of the project, she said. In addition, an initiative on liberal education by the American Association of Colleges and Universities calls for undergraduates to develop science knowledge through engagement with “big questions, both contemporary and enduring” (American Association of Colleges and Universities, 2008). By recommending this learning outcome, the professional society supports the adoption of Chemistry in Context and also encourages development of other science curricula that take a similar, real-world approach.
At the same time, the global challenge of sustainability drives a need for scientifically and technologically informed citizens and encourages higher education institutions and professional societies to focus STEM curricula on this real-world challenge. For example, the Curriculum for the Bioregion Initiative of the Washington Center for Improving the Quality of Undergraduate Education has engaged STEM faculty to define sustainability learning outcomes (see http://www.evergreen.edu/washcenter/project.asp?pid=62). The American Association for the Advancement of Science focused its 2009 annual meeting on sustainability with the theme Our Planet and Its Life: Origins and Futures.
Middlecamp closed by proposing that STEM higher education faculty target curriculum and instruction to the areas of intersection among their own vision of teaching and learning, what students care about, and the challenges facing the planet.
Bruce Grant (Widener University) began his presentation by emphasizing the importance of addressing students' alternative conceptions of evolution. He noted that the United States ranked near the bottom in a recent comparative international study on the proportion of the public that accepts the theory of evolution (Miller, Scott, and Okamoto, 2006). Grant suggested that this lack of acceptance of a well-supported theory reflects a larger ideological struggle in American society over the basic concept that evidence matters. He explained that he was motivated to change his teaching approach because of these concerns and because a large proportion of students fail introductory biology classes or drop biology as a major field of study.
Grant then described his practitioner research, arguing that it has improved his freshman students' conceptual acceptance of evolution by natural selection. He has conducted research on student learning among eight cohorts of freshmen enrolled in an evolutionary ecology course each year from 2000 to 2007, revising the course based on his research. He observed that, because practitioner research incorporates many aspects of traditional scientific epistemology but excludes other aspects, it constitutes a unique and complementary "way of knowing" that can improve science teaching and student learning.
Grant said he administered a standardized final examination at the end of the course each year to assess student learning and their response to his course revisions. The examination includes the prompt, “Please offer a brief and concise definition of evolution.” Since 2005, he has also used this prompt as a pretest. In addition, he has administered a standardized assessment item designed to measure students’ conceptions about evolution (Ebert-May, 2000).
Beginning in fall 2005, Grant conducted frequent short-answer surveys of students’ preconceptions about key topics before they were discussed in class, but the assessment results showed only slight improvement in the learning of basic concepts. Beginning in fall 2006, Grant directly confronted his students with their alternative conceptions, as indicated by their responses to the short-answer surveys and the pretests. He presented students with histograms of their responses and, at the same time, revised the course syllabus to address the alternative conceptions. In addition, he asked them in guided discussions to reflect on the kinds of evidence and arguments he should present that would help them understand the key topics. Finally, he substantially reduced the content and shifted class time toward increased writing and classroom discourse.
These changes yielded significant gains in student learning in the more recent classes, in comparison with earlier classes. The fraction of correct responses to the prompt, “Please offer a brief and concise definition of evolution” rose from about 50 percent in the period 2000 to 2005 to 90 percent in December 2006 and 80 percent in December 2007. Students’ mean scores on the standardized final exam went from 6.44 in December 2002 to 9.51 in December 2006 and 8.79 in December 2007. Grant also found large gains in student scores on the standardized question on evolution. From 2000 through 2005, only about 3 percent of students scored 8, 9, or 10 on this 10-point question, but in 2006 and 2007, about 54 percent achieved a score of 8, 9, or 10. The mean scores on this item also improved significantly, from 4.38 to 7.36.
Grant concluded that the revisions he instituted in fall 2006 significantly decreased students’ misconceptions and improved their learning about the concept of evolution and the process of evolution by natural selection. In addition, he learned new approaches to teaching that rely on the evidence generated by his practitioner research. He promised to continue to redesign and improve the course and described plans to increase his use of published concept inventories and to engage students in research on their own learning. He encouraged other STEM faculty to engage in practitioner research.
Jose Mestre (University of Illinois, Urbana-Champaign) presented his perspective on learning goals and evidence. He explained that his view of important learning goals reflects the current problem that, because of the explosion of scientific and technological knowledge, students in introductory courses are asked to learn an increasing body of knowledge, only to forget it weeks after the course is over. He suggested three learning goals:
Structure instruction to help students learn a few things well and in depth.
Structure instruction to help students retain what they learn over the long term.
Help students build a mental framework that serves as a foundation for future learning.
Mestre proposed that evidence of achievement of the first goal would include understanding of concepts underlying problem solutions (depth) and the ability to apply concepts within and across domains (breadth). Measures of students' ability to understand and apply concepts obtained months after the course was over would provide evidence of achieving the second goal (retention). Finally, students' ability to learn new material more efficiently would constitute evidence of achievement of the third goal.
Mestre views these types of evidence as most compelling, and he argued against using evidence of student gains in factual or procedural knowledge to demonstrate that an instructional practice is effective. He noted that such gains do not indicate that students have developed a conceptual organizing framework, nor do they reflect flexible, durable learning. However, current assessment practices emphasize short-term recall of facts and procedures. Few studies have been conducted on transfer or retention of STEM knowledge months after a course is over. As a result, there are gaps in the available evidence related to the three student learning goals he listed.
Mestre said that the quality of evidence related to learning goals has an important effect on the adoption of promising practices. In physics, the development of the Force Concept Inventory (Hestenes, Wells, and Swackhamer, 1992), which provides high-quality evidence of student misconceptions, led to dramatic increases in the use of new teaching and learning approaches designed to engage students and eliminate misconceptions (Mestre, 2005). Mestre described as good news the development of similar tests of misconceptions in other disciplines (see Chapter 6).
In the discussion following the presentations, Kimberly Kastens (Columbia University) asked whether the goals of the approaches described by the speakers included changing student behavior related to societal issues, such as global warming. Middlecamp responded that Chemistry in Context aims to influence students' behavior in making choices; specifically, the goal is to help them make informed choices about issues that affect themselves and others. Mestre said that, although his physics classes do not focus on societal issues, he does seek to change students' behavior in constructing scientific arguments and responding to other students' arguments. He noted that it is difficult to change students' behavior in this area, as they want him to simply present the scientific reasoning that leads to the correct answer.
Edward (Joe) Redish (University of Maryland) asked Grant whether he had evidence to support his claim that student scores improved because he had acknowledged and validated their struggles with learning the concepts and had made learning more personal, relevant, and accessible to them. Grant acknowledged that he lacked evidence for the claims and called for research on how students develop a learning community and become motivated to learn science.
Gamoran noted that Grant used an interrupted time-series research design. He pointed out that although this design is useful to demonstrate that a change occurred, it cannot determine whether the “interruption” (i.e., the change in instruction) caused the outcome. Other factors that may have caused test scores to increase cannot be ruled out. In addition, because Grant introduced a package of changes, including eliminating some of the content, instituting short-answer surveys at the beginning of class, and confronting students with their misconceptions, it is difficult to untangle the specific changes that may have caused the gains. Nevertheless, Gamoran described Grant’s research as a valuable “existence proof,” demonstrating that it is possible to reduce the level of students’ alternative conceptions.
Committee chair Susan Singer (Carleton College) asked the speakers about their use of cognitive research and theory, such as research on development of expertise. Mestre replied that some of his early research focused on the differences between novices and experts in physics thinking and problem solving and indicated that he frequently draws on the cognitive research when investigating and revising his teaching practices. Middlecamp said she had learned more from her own experiences of people being intimidated by chemistry or wondering about its relevance than from the research. These experiences, she said, increased her awareness of the problems of traditional chemistry teaching and motivated her to develop a different approach. Grant said that he has made concerted efforts to learn from the cognitive research, despite the difficulty of deciphering the jargon in this rapidly developing field.
Robin Wright (University of Minnesota) said she has been surprised and frustrated by the hesitation of faculty to accept research evidence supporting new teaching methods. She asked how to improve transfer of this research evidence to STEM faculty. Mestre suggested inviting a skeptical faculty member to test his or her students' understanding of basic concepts in the discipline. He predicted that this approach would demonstrate that students in lecture courses do not understand these concepts as well as those who have been taught using active learning methods. Middlecamp responded to Wright by proposing a different question, "What … might tip change more quickly than the evidence?" Mestre suggested that it is important to ask not only how to transfer research findings to faculty, but also how to transfer the findings from faculty. He noted that faculty members require training in how to collect valid evidence to measure the effects of instruction, but administrators place a low priority on this type of research on instruction.
In response to a question about his goal of having students develop a mental framework to serve as a foundation for future learning, Mestre described a study showing that children who knew a lot about spiders could more easily recall new information about spiders than other children with less background knowledge. He speculated that helping students develop mental frameworks in physics would make it easier to teach them new material in physics. He noted that it would be difficult to assess whether learning really becomes more efficient if students develop such mental frameworks.
Carol Snyder (American Association of Colleges and Universities) observed that change in undergraduate STEM might be supported more effectively by collaborative work in departments than by the efforts of individual faculty members. Heidi Schweingruber (National Research Council) asked what the speakers are doing to measure the goals of developing positive attitudes toward science, noting that increasing students’ science knowledge will not necessarily lead to changes in their behavior or degree of motivation to learn. For example, some physicians smoke, despite their
knowledge of the overwhelming evidence of the health dangers of smoking. Middlecamp responded that she believes that the Chemistry in Context goal to develop students’ motivation for lifelong science learning is more important than the goal to help them learn specific chemistry content. This is why the Chemistry in Context team selects chemistry content that matters to people for inclusion in the curriculum, she said.
Linda Slakey (National Science Foundation) asked Middlecamp about directly introducing chemistry concepts related to real-life issues, without first introducing students to the basic topics of general chemistry, which Slakey considers to be the scaffolding on which to build new understanding. Celeste Carter (National Science Foundation), who teaches a 9-month course in biotechnology at a community college, said that many of her students, including some with advanced degrees, do not understand basic concepts in biology. She said she tries to build conceptual understanding through discussion of scientific methods and through laboratory activities. James Stith (American Institute of Physics) asked how departments can be held accountable for ensuring that prerequisite courses provide the basic understanding students need to benefit from more advanced courses.
Middlecamp responded to Slakey that a scaffold in Chemistry in Context would be the real-life issue, such as “the air we breathe” or “the water we drink.” Grant said that people have very different definitions of the word “scaffold”; he thinks of it as an awareness of one’s own learning process and how one builds understanding in response to instruction. Mestre responded that he uses class time to scaffold student learning, drawing on his own expertise, and asks students to read more basic content material outside class. Responding to the question about holding departments accountable, Mestre observed that new doctoral graduates lack knowledge of active learning strategies and proposed that departments should be held accountable for bringing their newly hired faculty up to speed on the findings of cognitive research and their implications for instruction.
Moderator Adam Gamoran offered three concluding remarks. First, he observed that the evidence underlying promising practices in STEM is thin, as each speaker had described local, anecdotal evidence. Second, he suggested that cognitive scientists, educational testing experts, and disciplinary experts collaborate to develop new forms of assessment to guide STEM teaching and learning. Third, he called for increasing the scope and scale of research to support development of approaches that are useful across different faculty members, departments, and institutions.
Workshop participants then formed small groups for further discussion of learning goals and evidence. In these groups, participants discussed the learning goals in the STEM disciplines, their views about the most important of these goals, and the types of evidence needed to establish effectiveness in terms of the most important goals. They also considered whether the desired learning goals and associated types of evidence differ across the STEM disciplines. Following the discussions, session moderator Susan Singer invited a reporter from each group to briefly describe that group's response to these questions.
James Stith reported that his group explored the following issues:
Should there be different goals for students majoring in a STEM discipline and for other students, who require only a general knowledge of the subject matter?
Although professional societies have promulgated science learning goals, faculty members may not understand or even be aware of these goals.
Expert faculty members find it challenging to represent the material in their discipline to the novice learner and help him or her make the connection between representations and the real world.
It is important to help students understand that a STEM field has an underlying structure and is not simply a collection of facts.
What are the best ways to teach students about the nature of science, including the role of experimental methods and the relationships between facts and theory?
Robin Wright explained that her group focused on three types of learning goals for students: (1) core concepts and ways of knowing in the particular STEM discipline; (2) skills in communication, critical thinking, and asking good questions; and (3) positive attitudes toward STEM. She reported several group observations related to these goals:
What counts as evidence of learning outcomes differs across STEM disciplines.
Test questions should be aligned with specific desired learning outcomes.
Surveys can be helpful to assess the development of positive attitudes toward STEM.
Assessment should take place not only within a single course, but also across courses, levels of education, and even lifetimes.
Brock Spencer (Beloit College) shared the following points from his group’s discussion:
The goals for general education students may include more emphasis on societal issues than the goals for STEM majors.
Important goals related to student understanding of the nature of science include knowledge of experimental methods, the ability to make judgments and deal with uncertainty, the capacity to build a scientific argument based on physical evidence, and understanding the explanatory power of scientific models.
The Force Concept Inventory may be more effective in changing faculty behavior than in creating evidence of student learning. It is easier to administer than other, more labor-intensive assessments, but it also provides a less detailed view of students’ thinking and learning.
Current efforts to develop new assessments of students’ skills and attitudes will provide new types of evidence in the future.
Scientists are sometimes skeptical of evidence obtained using qualitative or ethnographic methods.
It is valuable to identify common, cross-disciplinary goals and also to identify important learning goals in each discipline.
Dexter Perkins (University of North Dakota) reported that members of his group discussed the following ideas:
Cross-disciplinary goals, such as problem solving, communication, and critical thinking, are important, in addition to more specific goals for what students should know and be able to do after completing a particular class.
What are the best ways to build instruction to achieve these cross-disciplinary goals?
Assessing student progress toward cross-disciplinary goals is difficult.
There are many different kinds of evidence related to learning goals and no single best way to collect these kinds of evidence.
Pre- and posttests are valuable to measure change in specific abilities or attitudes, and grading rubrics are very helpful to ensure that pre- and posttests are graded consistently.
It would be valuable to obtain evidence of students’ later learning and performance, after they leave a particular STEM class.
One way to demonstrate the effectiveness of instructional changes is to obtain multiple measures on a cohort of students as they progress through the STEM curriculum.
Much more evidence is needed, but it is difficult to obtain.
Perkins concluded that, despite the skepticism of their STEM colleagues about new types of teaching, the group members are motivated by the fun and satisfaction of researching student learning and revising instruction to improve learning.
THE STATE OF EVIDENCE IN DISCIPLINE-BASED EDUCATION RESEARCH
Opening a second session on learning goals and evidence, session moderator Kenneth Heller (University of Minnesota) introduced three panelists who had been invited to summarize the major findings from discipline-based education research in their respective disciplines and to identify the most promising directions for future research.
Physics Education Research
Edward (Joe) Redish opened his remarks by using examples from the established field of physics education research to disagree with Adam Gamoran’s earlier observation that the evidence underlying promising practices in STEM is primarily local and anecdotal. Redish said research in physics education has been under way for 30 years and that the physics education research community includes a literature base and regular conferences. He pointed out that the online peer-reviewed journal Physical Review Special Topics-Physics Education Research has been available since 2005. In addition, a 1999 bibliography cites more than 200 papers in physics education research conducted at the university level (McDermott and Redish, 1999).
Redish said that physics education researchers frequently rely on interviews as a source of evidence of effectiveness, asking students to explain the process they used to solve a physics problem. Researchers also use pre- and posttests, and for the past 10 years they have collected ethnographic data, including videotapes of students at work in the physics classroom.
Turning to his summary of findings from physics education research, Redish said that the findings support constructivist theories of education, indicating that students assemble their responses to instruction from what they already know. In the process, students sometimes develop incorrect, but robust, alternative conceptions. A relatively small number of alternative conceptions dominate students’ responses to instruction. These alternative conceptions may exist even among students who are successful in using algorithms to solve problems.
The research shows that physics learning is highly dependent on context. A student may develop alternative conceptions on the fly in response to new information. The existing knowledge he or she draws on when developing either a correct conception or an alternative conception can
be dramatically affected by how he or she perceives contextual factors. A student may even hold contradictory ideas about a phenomenon without noticing the contradiction.
Other findings illuminate how students compile their understanding of physics. When they have learned a concept well, they develop automatic thought patterns and may no longer be aware of the components in these patterns. Similarly, students may hold intuitions that they find hard to explain. Instructors, who have also developed automatic thought patterns, may find it difficult to understand why students do not "just see it," as they do. To support students in this situation, instructors must reverse-engineer their own knowledge to identify its components and the relationships among them.
Finally, the research findings address instructional reform. First, the research has demonstrated that it is possible to create instructional environments that substantially improve student performances on tests of conceptual understanding. Second, research has shown that these instructional environments can be transferred to other institutions and implemented successfully. Third, the evidence suggests that a critical element in successfully implementing these instructional environments appears to be getting students mentally engaged.
Cautioning that much more research is needed to understand the specific factors involved in student learning of physics, Redish (2008) identified the following four promising areas for future research:
Investigate what prior knowledge, expectations, and attitudes students bring to physics class and when and how they apply prior knowledge, expectations, and attitudes in response to instruction.
Deconstruct students’ alternative conceptions and identify underlying components that may be easier to realign than to replace.
Study how students come to understand their own construction of their mental structures of interrelated concepts and principles—which are fundamental to learning physics—and learn when to apply knowledge they already possess.
Conduct interdisciplinary research that carefully links physics education research with cognitive and neuroscience research.
Life Sciences Research
William Wood (University of Colorado, Boulder) opened his remarks by describing the context for life sciences education research—the discipline of biology—as fragmented into subfields. Many of the professional societies associated with these subfields have begun to conduct research on teaching and learning and establish education research journals; however, most
faculty members read only the education journal that is specific to their own professional society.
Wood said that, with growing awareness and interest, life sciences education research is where physics education research was in the late 1980s. Active learning strategies for teaching large biology classes, based partly on this research, are being actively disseminated. For example, the National Academies Summer Institutes on Undergraduate Education in Biology drew more than 200 participants in 2004-2008, including faculty representing 65 institutions in 36 states. Wood said that these participants, in turn, have applied their learning, impacting an estimated 80,000 students.
New approaches to biology instruction are informed by several types of evidence. First, life scientists depend heavily on evidence that has emerged from physics education research. Second, they often conduct “design research,” testing the effectiveness of their own changes in instruction over time, but often without a control group for comparison. There have been only a handful of quasi-experimental studies in life sciences education research and no controlled experimental studies that randomly assign students to different types of instruction.
Wood presented findings from a quasi-experimental study he conducted with a colleague, focusing on upper level undergraduates enrolled in a required course in developmental biology (Knight and Wood, 2005). Over the course of two successive semesters, the authors presented the same course syllabus using two different teaching styles: in fall 2003, the traditional lecture format; and in spring 2004, decreased lecturing and increased student participation and cooperative problem solving during class time, including frequent in-class assessment of understanding. They found significantly higher learning gains and better conceptual understanding in the more interactive course; when they repeated the interactive course in spring 2005, they found similar results.
Wood raised several important questions for the future of life sciences education research. First, the field lacks a strong theoretical framework that integrates and interprets the research to date, comparable to the synthesis available in physics education research (Redish, 2004). This gap leads to two questions:
Does all physics research apply to learning life sciences, or does the higher requirement for factual knowledge in the life sciences require new research models?
Under what circumstances does student-centered instruction result in more learning than traditional lecture classes, and under what circumstances does it not?
Second, Wood highlighted an important question about the practical impact of life sciences education research: What kinds of evidence, interventions, and interactions result in meaningful change in the way postsecondary institutions and their faculties view student learning and design their instructional practices?
Addressing this final question, Wood suggested that discipline-based education researchers in all STEM disciplines could learn from the study by Henderson and colleagues (2008) of change in STEM higher education. That study identified four integral elements of undergraduate STEM education: (1) teachers, (2) culture, (3) curriculum/pedagogy, and (4) policy. The authors propose that an effective change strategy would address all four elements, but they found that most change strategies emerging from discipline-based STEM education research address only the element of curriculum/pedagogy (Henderson et al., 2008).
Geosciences Education Research
Helen King (Helen King Consultancy) opened her remarks with a description of the current context supporting education research in the geosciences. Although geosciences education research is a relatively young subdiscipline, it includes a strong and growing community of researchers and practitioners at all levels of education. Knowledge is shared in the Journal of Geoscience Education and within and across national and international professional associations. The community is beginning to establish research methodologies, and the field is gaining legitimacy, as evidenced by the rapidly growing number of tenure-track education positions in geosciences departments.
In this context, King said, faculty members are developing new teaching practices based partly on general cognitive research and partly on findings from research in other STEM disciplines, as well as on findings emerging from geosciences education research. These new teaching practices include promoting active learning, deploying an array of assessment strategies, engaging students in problem solving while in the field, using visualizations and other applications of computer technology, and creating relevant case studies.
King cited a study of teaching practices employed by geology faculty in the United States, which stated, “there is no question that research on learning and resulting recommendations for best classroom practice … have had an impact on geosciences classes” (Macdonald et al., 2005, p. 237). She then identified the major themes of geosciences education research, including how students learn important concepts and skills, the nature of discovery in geosciences, and students’ alternative conceptions of the discipline and of particular topics.
The research on geosciences education has identified several sticking points in student learning, including the development of systems thinking,
understanding complexity and uncertainty, and transfer of knowledge from mathematics and physics to solve problems in the geosciences. Research on the development of expertise in the geosciences, including the difficult process of developing spatial thinking and the ability to think about geological time, has potential to help novices advance toward such expertise. Finally, researchers are beginning to gain understanding of how different learning environments and contexts, including the classroom, laboratory, the field, and the workplace, affect students’ learning. This has included investigations of how contexts influence students’ values, beliefs, and feelings, and how these influences may, in turn, affect learning.
King concluded with an outline of progress in geosciences education research. This progress includes professional development for faculty, with training in important findings from geosciences education research; research funding and collaboration across institutions, disciplines, and nations; and dissemination of research findings to raise the profile of the research and encourage application of its findings.
In the discussion following the presentations, Adam Fagen (National Research Council) asked Heller and Redish about the applicability of what has been learned in physics education research to the other science disciplines. Redish and Heller agreed that, across the disciplines, there are both real differences and genuine similarities. Heller said he reminds his physics colleagues that learning is a biological process, and that content, skills, and attitudes are inseparable from a biological perspective.
Ginger Holmes Rowell (National Science Foundation) asked what can be learned from recent learners (i.e., students who have taken a course in the previous semester) and how they might help to design more effective learning environments. Addressing the second question, Redish cited the University of Colorado’s Learning Assistant Program as an interesting and exciting use of recent learners: the program gives them instruction in pedagogy and folds them back into the classroom. He explained that many students from that program are recruited to become K-12 science teachers.
In response to a question from Robin Wright, Wood said that the field needs better assessments to measure higher-order thinking skills, such as problem solving. Redish agreed and described his own efforts to create assessments that address the higher levels of Bloom’s taxonomy, such as by including essay questions and multiple-choice problems that are difficult to answer without a solid conceptual understanding of a physical system.
Wood and King discussed the importance of being transparent with students about learning goals and methods as a way to promote learning. Wood noted that Dee Silverthorn (2006) has written beautifully on the need
to let students know why inquiry and problem solving are important for their futures before suddenly requiring them to do things in biology courses that they have never before been asked to do. King referred to the phenomenon, discussed earlier in the day, of sharing exam questions with students, which she said can be a strong motivator to learn the required content.
Heidi Schweingruber asked about the relative emphasis in the disciplines on deep conceptual knowledge versus thinking about how students understand inquiry and the nature of science. Redish responded that, although there is strong agreement about the importance of conceptual knowledge, it is integrated differently into the different epistemologies of the disciplines. Wood added that teaching conceptual knowledge is relatively similar across the disciplines, but inquiry within each discipline is more specialized.
Responding to another question, Wood said that inquiry is probably not as much a tradition in lower-level courses as it should be. He explained that biologists teach more about facts because they think students have to know the facts before they can start thinking about inquiry. King added that, when she was pursuing a degree in geology, no one explicitly told her about the nature of knowledge and inquiry in geology. She suggested that it is important to help students better understand the nature of the discipline they are studying and the role of inquiry.