3 Evaluating Effective Instruction

To develop a list of criteria and benchmarks for evaluating effective science, technology, engineering, and mathematics (STEM) instruction, educators must agree on the characteristics of such instruction. If the aim is not only to teach science content but also to foster inquisitiveness, cognitive skills of evidence-based reasoning, and an understanding and appreciation of the processes of scientific investigation, then—as noted in Chapter 2—courses that consist solely of traditional lectures and laboratory sessions may be inadequate. Moreover, if introductory science courses are expected to play even broader roles—such as increasing the likelihood that nonmajors and preservice teachers will choose to take additional science courses, and expanding the number of students who go into science and engineering careers—evaluation of introductory science instruction must be improved and appropriate criteria developed.

In this chapter, evidence is considered that defines the characteristics of effective instruction. In the workshop, participants were asked to enumerate features that could be included in a comprehensive evaluation instrument as indicators for rating exemplary STEM programs. While workshop breakout groups specified characteristics of programs of effective instruction, two presenters offered instructional techniques that exemplify many of these characteristics. Paula Heron, University of Washington, described the tutorial program at her university, which is designed to address students’ preconceptions, and detailed evidence that students gain improved conceptual understanding. Brian Reiser, Northwestern University, described a scaffolding tool and classroom environment that gradually builds students’ skills for multidirectional instruction (i.e., teacher to student, student to teacher, student to student) and independent learning. Other workshop participants described additional instructional strategies that would achieve desired learning outcomes.

To begin to shape how criteria would be used to evaluate such instruction and programs, two presenters offered examples of assessment strategies and tools. Gloria Rogers, Rose-Hulman Institute of Technology, described the fundamentals of evaluation for any educational program. Anton Lawson, Arizona State University, illustrated an instructional assessment tool and the contexts in which it has been used as an example of how to measure the effectiveness of instruction. Expanded summaries of the presentations, the learning outcomes proposed by workshop participants, and additional ideas and cautions put forward by participants during plenary discussions are detailed in this chapter.

CHARACTERIZING EFFECTIVE INSTRUCTION WITH RESEARCH EVIDENCE

Recent evidence suggests that, for many students, traditional didactic lectures that promote memorization of factual information may be unexpectedly ineffective for eliciting learning of more complex concepts when applied as the primary instructional method in science courses (Terenzini and Pascarella, 1994; Honan, 2002; Loverude, Kautz, and Heron, 2002). Although direct instruction is useful in some settings (e.g., Klahr, Chen, and Toth, 2001), and the lecture format can be improved by allowing learners to grapple with an issue on their own before they are provided with answers (Schwartz and Bransford, 1998) or by other modifications that add an element of interactivity (NRC, 2000; Laurillard, 2002), accumulating research indicates that the traditional approach with no additional cognitive assistance leads to memorization of facts rather than understanding of concepts for a majority of students (Wright et al., 1998; Loverude et al., 2002; see Appendix B, this volume).
The evidence indicates, moreover, that most students who sit passively in lectures for an entire course are unlikely to appropriately link their prior conceptions to the new knowledge being presented. The conceptual misunderstandings they have when they enter a course are likely to persist if instruction does not address their difficulties specifically (King, 1994; Mestre, 1994; Loverude et al., 2002; Marchese, 2002). Even students who receive good grades and persist in science courses often gain little understanding of the basic science concepts (see Appendix B, this volume; Sundberg, 2002). The broadened roles of introductory science courses, plus recent gains in
understanding how people think and learn, have forced a reconsideration of what is meant by effective science teaching. Two recent NRC volumes mentioned earlier, How People Learn: Brain, Mind, Experience, and School (2000) and Evaluating and Improving Undergraduate Teaching in Science, Technology, Engineering, and Mathematics (2003), describe numerous research findings that have powerful implications for how instruction might be organized and implemented to elicit learning. One of the hallmarks of the emerging science of learning is its recognition that students bring with them diverse learning styles and that they learn in different ways under different circumstances. This finding suggests that there is not likely to be one best mode of instruction for all purposes. Precollege educators recognize this need by utilizing a “learning cycle” (Stephans, Dyche, and Beiswanger, 1988) that engages students’ intellectual curiosity before introducing formalisms. To be effective, undergraduate teaching faculty must also have at their command a repertoire of instructional strategies and be prepared to use combinations of inquiry-based, problem-solving, information-gathering, and didactic forms of instruction under appropriate classroom circumstances that promote conceptual understanding and students’ ability to apply knowledge in new situations (Stephans et al., 1988). The evidence suggests that, when implemented properly, inquiry-based instruction and problem-solving strategies engage the learner in developing the mental models required for conceptual understanding (NRC, 2000, pp. 239–241). These strategies assist students in assimilating new information through interaction with their prior concepts and knowledge of the world outside the classroom.
When these strategies are combined appropriately with direct instruction and other teaching approaches, students are motivated to identify and gather relevant factual content and integrate that new knowledge with their preconceptions (Loverude et al., 2002; Marchese, 2002). With such instruction, these studies show, learners can be helped to absorb new facts and concepts, to devise and carry out scientific investigations that test their ideas, and to understand why such investigations are uniquely powerful as a way of learning.

DEFINING CHARACTERISTICS OF EFFECTIVE INSTRUCTIONAL PROGRAMS

One of the goals of the workshop participants was to determine if it was possible to reach agreement on the
characteristics of an effective instructional program. In the first breakout session, the disciplinary groups were assigned three tasks: (1) on the basis of the knowledge and expertise of the assembled group, develop a list of highly regarded courses/programs within the discipline; (2) consider the characteristics of each entry that justified its selection for the list; and (3) extract from those characteristics a list of criteria or indicators that would enable an observer to assess programs in that discipline.

The Summaries of Breakout Groups

Priscilla Laws, Dickinson College, summarized the discussions of the physics working group. The group made a list of thirteen curricular materials and approaches in physics1 that the members believed from personal experience had exemplary characteristics. They then set out to define explicitly the characteristics of these courses or course materials in physics that justified their designation as “exemplary.” The identified characteristics are described below, following the summaries of the breakout groups, and reflect the importance of developing students’ understanding of underlying concepts. As noted above, many of these curricula utilize a process known as learning cycles (Kolb, 1984; Healey and Jenkins, 2000). A learning cycle is designed to engage students’ curiosity about a phenomenon, to elicit their thoughts and preconceptions about the phenomenon, then to provide them with opportunities for direct observations and problem-solving experiences so they may judge the validity of their ideas, and finally to provide opportunities to resolve any discrepancies between their preconceptions and canonical concepts.
1 The exemplary programs and approaches identified by the physics working group included context-rich cooperative problem solving (Heller, Keith, and Anderson, 1992; Heller and Hollabaugh, 1992), Explorations in Physics (http://physics.dickinson.edu/~EiP_homepage/html), Interactive Physics (http://www.interactivephysics.com), Just in Time Teaching (http://webphysics.iupui.edu/jitt/jitt.html), lecture demonstrations (Sokoloff and Thornton, 1997), Models in Physics Instruction (http://www.physics.umd.edu/perg/papers/redish/jena/jena.html), Peer Instruction (Mazur, 1997), Physics by Inquiry (McDermott, Shaffer, and Rosenquist, 1996), Powerful Ideas in Physical Science (http://www.psrc-online.org/classrooms/papers/layman.html), Real Time Physics (http://physics.dickinson.edu/~wp_web/Introduction/FAQ/Real_Time_Physics), Studio Physics (http://www.rpi.edu/dept/phys/education.html), Tutorials in Introductory Physics (McDermott, Shaffer, and the Physics Education Group, 2002), and Workshop Physics Activity Guide (Laws, 1997).

The group pointed out that
many students of different majors and diverse needs are required to take physics. The physics community generally requires students to learn what the experts feel is important for them to know without seeking input from students. All too often, this is achieved by simply telling students what to memorize through a series of traditional didactic lectures. In contrast, Laws noted, “One of the things most of the reform curricula have in common is…a great emphasis on [the need for] students understanding the underlying…concepts [while recognizing that] one of the real weaknesses of traditional [instruction] is that students…memorize how to solve categories of problems…without understanding the fundamental underlying concepts.”

Marshall Sundberg, Emporia State University, summarized the life sciences group’s discussions of a range of examples, such as those listed in the NRC report BIO2010 (2002a), in which instructors provide class-based investigative opportunities, both for students majoring in the sciences and for nonmajors. The characteristics representative of a number of successful programs are described below following the summaries. In all cases, an important feature was that the instructor sought to build learning communities that engaged students and encouraged further study.

David Gosser, City College of New York, and Ishrat Khan, Clark Atlanta University, summarized the discussions of the chemistry group. This was a small group, with most members currently involved in the same projects—New Traditions (http://newtraditions.chem.wisc.edu) and Peer-Led Team Learning (PLTL) (http://www.pltl.org). The noteworthy characteristics of these programs are that they engage faculty in teaching, require incremental changes, and take into account forces that drive institutions.
The group identified additional characteristics that make these programs exemplary and discussed other methods to engage faculty, outside of those immediately invested in educational reform, to adopt effective instructional strategies. These characteristics and methods are described below following the summaries. Khan cited New Traditions and PLTL as efforts that are designed to be readily adaptable to different settings but which also allow faculty ownership of material and assessment. Gosser added that with PLTL, faculty are asked to evaluate students and report findings to the parent program; he believed this data collection convinces faculty that the model is working and provides them with resources to support the program at their institutions. He also noted that students are involved in efforts to
disseminate PLTL. “They…actually conduct workshops and show faculty how poised and how capable they can be. This is more of a selling point than if I talk about it for an hour.” In response to questions by Richard McCray, University of Colorado, Gosser and Khan confirmed that the PLTL program could be adapted for other disciplines. Susan Singer, Carleton College, added that the program has already been modified and is being used in history and introductory biology.

Bonnie Brunkhorst, California State University, San Bernardino, summarized the discussions of the geosciences group, which included representatives from the fields of geosciences, space science, and astronomy. The group chose to look at NSF-funded projects and examined the criteria that had been developed out of those projects. McCray added that since the group had a hybrid representation, their discussions were not discipline specific.

The Characteristics of Effective Programs

The characteristics of exemplary programs identified by the geosciences group and the other breakout groups are described below. The characteristics were very similar across the disciplinary groups. Recognized exemplary instructional programs can be characterized as follows. They:

- Provide experiences for students to develop functional understanding. These programs emphasize students’ understanding of science concepts and their ability to apply those concepts to new situations. Less emphasis is placed on end-of-chapter problems and exams as the bottom line in grading, to discourage memorizing and pattern matching that bypass functional understanding. The programs often outline the concepts they expect students to learn as explicit learning outcomes (see the partial list of concepts for biology education in Table 2-1 for an example). Opportunities for undergraduate research were identified as appropriate experiences for developing an understanding of the scientific process.

- Have strategies for iterative evaluation. These strategies should include faculty self-assessment of instruction and program effectiveness, mechanisms for identifying instructor expertise both conceptually and pedagogically, assessment of student learning, and procedures for learning from failure through formative evaluation. Exemplary programs often take risks, learn from failure, reevaluate, and try again. Summative evaluation is needed to demonstrate effectiveness to developers or institutions.

- Invest in training and mentoring of instructors, both faculty and teaching assistants. Training should assess, in a nonthreatening manner, the instructors’ competencies and comfort levels with the program. Training should encourage instructors to take ownership of the program and its materials and to adapt them as necessary to suit their own contexts.

- Include efforts to collect and disseminate information about the program, applying accepted principles of research on teaching and learning.

- Make interdisciplinary connections. Instructors should make students aware that methods or concepts from other sciences are often used in the context of one discipline.

- Foster independent learning skills. These programs strive for the outcomes regarding learning skills (i.e., “learning to learn”) defined in Chapter 2.

- Promote students’ ability to work cooperatively and to communicate orally and in writing.

- Address materials’ relevance to students. The working group agreed that material should be relevant to students’ lives, but pointed out that sometimes the topics with the biggest impact are not perceived as relevant initially yet become surprisingly significant.

- Become institutionalized and self-sustaining. Strategies exist for departments to take group ownership of effective programs. Institutionalizing effective programs is critical to sustaining them in the event that the individuals driving them retire or otherwise leave.

ACHIEVING DESIRED OUTCOMES WITH EFFECTIVE INSTRUCTIONAL TECHNIQUES

As noted earlier, there is accumulating evidence that new knowledge is shaped by interaction with existing knowledge (NRC, 2000). Instructors need to pay attention to students’ beliefs, incomplete understandings, and the naïve versions of concepts they bring to a given subject. An important element of effective instruction involves building on these preconceptions and prior beliefs in ways that help each student achieve a more mature understanding.
If students’ initial ideas and beliefs are ignored, the understandings that they develop can fall far short of the goals of the instructor (Minstrell, 1989; Mestre, 1994; NRC, 2003). Striking evidence for this comes from a study in which undergraduates in a leading university who took a traditional geometry course that ignored their entering misconceptions represented and visualized three-dimensional forms more poorly than did a comparison group of
elementary children whose prior ideas about space were engaged (Lehrer and Chazan, 1998). This discussion of the resistance of students’ preconceptions to change was the setting for Paula Heron’s contribution.

Instruction Designed to Help Students Overcome Conceptual Difficulties

Paula Heron, University of Washington

In her presentation, Research as a Guide to Improving Student Learning in Undergraduate Physics, Heron illustrated how misinterpretations or problems that students have with specific ideas can persist and adversely affect desired learning outcomes. She described how the university’s Physics Education Group (PEG) conducts research to develop effective instructional strategies that address the difficulties students commonly have with specific physics topics. Heron stressed the importance of education research that is conducted within the discipline: “What physics education research constitutes is an approach to improving instruction that is objective [and] efficient and allows for cumulative progress to be made in the teaching of the discipline.” She illustrated this perspective with a specific example of research that focused on student understanding of the concept of center of mass (Gomez, 2001). The “baseball bat problem” (see Figure 3-1) tests whether students recognize that both the amount of mass and its distribution determine the location of the center of mass, or the balancing point. The problem was administered to students at different stages of instruction in different courses. Responses from students in the introductory calculus-based physics course at the University of Washington (UW) were compared before and after traditional instruction in the course and after additional Tutorials (McDermott, Shaffer, and the Physics Education Group, 2002). Tutorials at UW meet once a week, while lectures are held three times a week. (Students participate in a weekly lab as well.)
In the tutorials, students work with materials and complete worksheets tailored specifically to guide them through the development and application of important concepts in the course, and to address specific difficulties that have been uncovered through research. Responses to the bat problem were also gathered from students in an introductory calculus-based physics course at Purdue University, where the tutorials developed at UW have been incorporated, and from students in an introductory engineering statics course at UW. These students, who are required to take an introductory calculus-based physics course prior to the statics
course, revisit material on the center of mass in the statics course. Responses were also obtained from prospective and practicing K–5 teachers in special physics courses at UW that are designed to prepare them to teach physics and physical science as a process of inquiry.

FIGURE 3-1 The baseball bat problem. SOURCE: Gomez (2001). Reprinted with permission.

In the tutorials, students work with materials and complete worksheets tailored specifically to guide them through the development and application of important concepts in the course, and to address specific difficulties that have been uncovered through research. Responses to the bat problem were also gathered from students in an introductory calculus-based physics course at Purdue University, where the tutorials developed at UW have been incorporated, and from students in an introductory engineering statics course at UW. Table 3-1 presents the results from the bat problem: the percentages of students answering the question correctly, sorted by type of instruction received.

TABLE 3-1 Student Response to Baseball Bat Problem (Percentages by Course)

Course and Relative Time of Test                                           N       % mA < mB (correct)   % mA = mB
Introductory mechanics, UW, before instruction                             152      5                     90
Introductory mechanics, UW, after instruction                              455     15                     80
Engineering statics, UW, after all instruction                              71     15                     85
Introductory mechanics, UW, after traditional instruction plus tutorial    255     55                     40
Introductory mechanics, Purdue University, after traditional
  instruction plus tutorial                                              1,160     50                     45
Graduate TAs, UW, after traditional instruction (presumed)                  30     70                     30
Physics by Inquiry course for preparing K–5 teachers, UW,
  after instruction                                                         30    100                      0

SOURCE: Gomez (2001). Reprinted with permission.

The results from the bat problem were consistent with those obtained in other studies by PEG related to student conceptual difficulties with the wave properties of light (Ambrose, Heron, Vokos, and McDermott, 1999) and compression of ideal gases (Loverude et al., 2002). Cumulatively, the group’s research findings indicate that on certain types of qualitative questions, student performance remains essentially unchanged before and after instruction in either calculus- or algebra-based courses, with or without standard laboratory, with or without demonstrations, in large and small classes, and regardless of perceived effectiveness of the lecturer (see Appendix B, this volume). Heron summarized the group’s interpretation of the data with a question: “Is…good quality standard instruction, through a lecture, textbook, and laboratory, sufficient to develop a functional understanding of an important concept or principle? By functional understanding, what we mean is the ability to apply the concept or principle to a situation that has not previously been memorized.” Her answer, based on PEG research and a growing body of other supporting literature, was: “Teaching by telling is an ineffective mode of instruction for most students…. Students must be intellectually active to develop a functional understanding…. Sitting and listening to lectures…, reading the textbook, solving the traditional end of chapter problems does not lead to this type of intellectual engagement.”

Heron pointed out that even instruction considered to be “pedagogically correct,” such as small group work, hands-on activities, and demonstrations, may not address persistent conceptual and reasoning difficulties. When students in the engineering statics course participated in small group exercises devoted to centroids and center of mass, there was no evidence of improved performance on the bat problem. The percentage of students who claimed that the halves of the bat on each side of the balancing point must have equal mass was the same as in the prerequisite introductory physics course. In response to a question by Lawson about whether students would be affected by faculty demonstrating the different masses of bat “halves” by weighing the pieces obtained from cutting the bat at the balance point, Heron described an instructor’s experience in conducting such a demonstration. When confronted with the evidence, students assumed that the demonstration had not worked as planned. “We know what we were supposed to see,” they said to the instructor. Heron indicated that in laboratory situations students would often ask for better equipment if the results failed to match their (erroneous) expectations.
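The misconception behind the bat problem can be checked with a short numerical sketch. This is an illustration of the underlying physics, not material from the workshop: the bat is modeled as a rod on [0, 1] with an assumed linear density lam(x) = 1 + x that grows toward the barrel end. Torques about the balance point cancel by definition of the center of mass, but the masses of the two halves do not.

```python
# Hypothetical bat model: a rod on [0, 1] with assumed linear mass
# density lam(x) = 1 + x (illustrative choice -- heavier toward x = 1).
N = 200_000
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]   # midpoint grid for the integrals

def lam(x):
    return 1.0 + x                        # mass per unit length

total_mass = sum(lam(x) * dx for x in xs)                 # exactly 3/2
x_cm = sum(x * lam(x) * dx for x in xs) / total_mass      # 5/9, nearer the heavy end

# Mass on each side of the balance point:
m_handle = sum(lam(x) * dx for x in xs if x <= x_cm)      # long, light side
m_barrel = total_mass - m_handle                          # short, heavy side

# Net torque about the balance point vanishes even though the masses differ:
net_torque = sum((x - x_cm) * lam(x) * dx for x in xs)

print(f"x_cm = {x_cm:.4f}")                               # ~0.5556
print(f"m_handle = {m_handle:.4f}, m_barrel = {m_barrel:.4f}")
print(f"net torque about x_cm = {net_torque:.2e}")        # ~0
```

For any nonuniform density of this kind, the barrel-side half carries more mass (the table’s correct answer, mA < mB), because the longer handle side compensates with a larger lever arm; equal torques, not equal masses, define the balance point.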
During her presentation, Heron made the point that effective instruction could best be designed through research that identifies and details specific student difficulties and assesses the instructional strategies meant to address those difficulties. By examining student responses to the bat problem, as well as to several other written problems, and probing student ideas through in-depth interviews, Heron and her colleagues identified several specific student difficulties with the concept of static equilibrium. Of primary concern was the failure of students to consider both the mass and its distribution relative to the balance point. Heron explained how tutorials incorporated into introductory physics courses at the University of Washington are designed to address students’ conceptual and reasoning difficulties.
Though much of Reiser’s work is at the K–12 level, he identified his purpose for presenting at this workshop as exploring whether his work to help middle and high school students go beyond the formalisms (memorized equations and terms) and develop functional understandings can be replicated at the undergraduate level. Reiser provided an example of a group working with the scaffolding tool; the participants believed they had reached consensus, but when they were forced to write in the tool they discovered disagreements and the need to further clarify definitions. To employ the strategies essential for effective inquiry teaching, Reiser explained, instructors needed to change the climate of the classrooms. He pointed out examples of such climate changes in his video presentations. The teacher would often engage students in questions about the problem and would frequently profess that the answers are not known. At the start of the school year, the students did not believe her and felt she was holding back answers. Through continued discussions, they began to accept her acknowledgement and to articulate explanations on their own or through discussions with their peers. Reiser highlighted a student in a full class discussion who asked a question of his classmates instead of the teacher. Students need to interact with each other and the teacher in new ways. They have to be willing to explain their thinking, to ask questions of each other, and to offer advice to other students. Teachers have to continually engage students in conversations and questions about the problem under consideration. Both the teacher and the software tool aim to uncover students’ confusions and then guide the students into productive discussions about these confusions. 
Small Group and Pair Cooperative Learning

Extending the idea that students should be more engaged in conversations and questions with instructors and peers, several workshop participants pointed to the benefits of student-to-student interactions. For example, Michael Zeilik, University of New Mexico, asserted that students’ conceptions can be changed in properly managed cooperative teams and that small group or pair cooperative learning can be facilitated even in large lecture situations (Mazur, 1997; Schwartz and Bransford, 1998). Priscilla Laws, alluding to Heron’s message about targeting persistent difficulties, added that students must engage with the right issues if cooperative teams are to be successful. She pointed out that lecture demonstrations can be effective, referring to an article by Sokoloff and Thornton
(1997), if students are engaged in discussions about what they saw and what they conclude. Confirming that cooperative groups can be effective, Elaine Seymour, University of Colorado, added that students often complain about this approach, but if the instructor continues to form and work with such groups the students become more comfortable with collaborative learning. She also reported that men get the most out of structured groups because such work tends to be novel to them; women, in general, need little prompting to work in groups (e.g., Stabiner, 2003). Richard McCray added that in situations where students are forced to explain their answers and reasoning, they are able to identify what it is that they don’t understand.

Case Studies

In addition to instructional methods that encourage collaboration, another strategy that many participants considered effective was the use of case studies. Katayoun Chamany, Eugene Lang College, reminded the group that subject matter can be made relevant and useful to diverse populations of students by teaching through case study modules. In support of Chamany’s point about incorporating case studies into the curriculum, Clyde Herreid, State University of New York at Buffalo, explained that, like other inquiry-based approaches to teaching and learning, case studies promote interaction and provide relevance for the students (Herreid, 1999; Honan and Rule, 2002).

Problem-Based Learning

Both Herreid and Zeilik promoted problem-based learning (PBL) as the “greatest method of all.” Herreid pointed out that the PBL method, as described in The Power of Problem-Based Learning (Duch, Groh, and Allen, 2001), has thrived at many medical schools for over thirty years. However, it has also failed at a few medical schools because of a lack of administrative support.
Instructors’ Roles and Physical Environment

In his presentation later in the workshop, Jack Wilson, UMassOnline, identified teaching methods relevant to this discussion. He listed some of the strategies that made Studio Physics successful at Rensselaer Polytechnic Institute. The major instructional innovation was in the interaction between the instructors (faculty and teaching assistants) and the students. Instructors in the Studio define their role as guides, leading students to information and helping them with difficult concepts, while students are encouraged to take responsibility for their own learning.
Anton Lawson added that instructors should be aware of the reasoning skills and abilities students have when they enter their classrooms and of the need to develop those skills to a higher level throughout the course. Although assessing generalized cognitive skills is more difficult than measuring discipline-specific knowledge, there is a substantial literature on how to do it (Shavelson and Huang, 2003).

In Studio Physics, teaching assistants work collaboratively with faculty; one faculty member, one graduate TA, and one undergraduate TA interact with students together in each studio section. The team approach reduced the error rate on transmitted information: if one of the three misunderstood a problem, the others could make corrections. The physical setting for the course was completely reconstructed, from two theatre-style lecture sessions serving 500 students each to 12–15 studios or labs where 50–60 students worked in small collaborative groups. Instruction was extended through the mobile computing initiative, which required a laptop computer for each student. Laptop purchases were built into the financial aid structure such that a student receiving 100 percent financial aid was given a laptop. Most of the didactic material for the course was made available online.

Recognizing the importance of all the suggested instructional methods—including Studio Physics, problem-based learning, case studies, in-class conversations and small group work, scaffolding tools, and the tutorials at UW—Ronald Henry, Georgia State University, remarked that faculty should model in their teaching the ways in which their own students should teach if those students go on to become graduate teaching assistants, K–12 teachers, or science faculty.

ASSESSING INSTRUCTIONAL IMPACT ON LEARNING

During their deliberations, each of the workshop breakout groups developed lists of student learning outcomes appropriate for introductory science courses (see Chapter 2).
But according to Herb Levitan, National Science Foundation, many faculty members do not know how to evaluate either student learning achievements or their own instructional efforts. Moreover, added Lawson, although many current evaluation practices measure performance on tests or other tasks, they fail to indicate the degree of learning. Learning, in this context, is interpreted as conceptual understanding measured by the ability of a student to apply knowledge and skills in new situations. This discussion served as background for the two
speakers at the workshop who devoted their presentations to assessment practices and tools: one provided general background information; the other offered an example of an instrument for evaluating an instructor's performance in the college classroom.

Fundamentals of Evaluation
Gloria Rogers, Rose-Hulman Institute of Technology

In her presentation, Evaluating Student Outcomes: E=MC2, Rogers emphasized the importance of establishing a complete institutional assessment process, covering classroom assessment, program assessment, and "mapping strategies" (i.e., methods for using assessment data to chart instructional improvements across a program). Recognizing the wide range of expertise among the workshop participants, she aimed to include information designed to link classroom and program assessments.

The Purpose of Assessment

The most difficult part of assessment for faculty, Rogers noted, is to define explicitly what is meant by educational outcomes and to articulate those outcomes in terms of measurable criteria.

Definition of Terms

Since many of the terms commonly used in the field of assessment are defined only vaguely, Rogers stressed the importance of agreeing on definitions at the start of an evaluation process to avoid future disagreements. Rogers distinguished "assessment," the collection and analysis of evidence, from "evaluation," which she defined as the interpretation of evidence; she emphasized that these are separate activities. She also defined inputs, processes, outputs, and outcomes (see Table 3-2 for examples of each). Each term includes student, faculty, and campus components. Inputs are what the constituents bring into the system. Processes include the programs, services, loads, policies, and procedures established to take advantage of what is known about the inputs. Outputs are easily measured indicators and statistics. Outcomes refer to the effects, which are particularly important in terms of accreditation.
Goals of Assessment

Rogers focused next on the importance of determining what is being assessed. Is the target of the assessment individuals or groups? Will the assessment be used to evaluate achievement, placement, gatekeeping, program enhancement, or program accountability? She indicated that the assessment strategy depends on the assessment goals.
TABLE 3-2 Examples for Assessment Terms

Student. Inputs: credentials (test scores). Processes: programs and services offered; populations served. Outputs: grades, graduation rates, employment statistics. Outcomes: what students have learned, what skills they have gained, what attitudes they have developed.

Faculty. Inputs: credentials. Processes: teaching loads and class size. Outputs: publication numbers. Outcomes: publication citation data; faculty development.

Campus. Inputs: resources. Processes: policies, procedures, governance. Outputs: statistics on resource availability and participation rates. Outcomes: student learning and growth.

SOURCE: Adapted by Rogers (2002b) from work cited in Middaugh, Trusheim, and Bauer (1994, p. 4).

With input from workshop participants, Rogers identified the stakeholders in the educational process as students, their parents, other departments, employers, scholarship agencies, and graduate programs. She noted that these constituents have different expectations and that the makeup of the constituencies can vary widely from institution to institution. She pointed out that educational objectives should be tied to the institutional mission and should be reevaluated every few years.

To accomplish the educational objectives, students must achieve learning outcomes, which Rogers defined as what students should know and be able to do by the end of a course or program. Instructors, she suggested, should build educational practices and strategies around desired learning outcomes, incorporating measurable performance criteria into the curriculum to determine whether those outcomes are achieved. Each aspect (objectives, criteria, practices/strategies, assessment, outcomes, evaluation, and constituent responses) is part of an interconnected system of feedback for continuous improvement of the course or program.
Rogers stressed the importance of defining and reevaluating: “If objectives and outcomes are difficult to define, they will be difficult to measure.” Ramon Lopez, University of Texas at El Paso, asked about the effectiveness of
the Accreditation Board for Engineering and Technology (ABET) criteria (2002). These standards, which emphasize outcomes and understanding for undergraduate engineering students, were intended to drive real change in engineering programs. Rogers responded that while the ABET 2002 criteria themselves are excellent, many schools have responded with "surface" changes, and most engineering programs have not taken a hard look at defining outcomes or at teaching in relation to those outcomes. Most programs depend on exit surveys that ask graduating seniors and employers whether certain knowledge and skills have been achieved. Rogers pointed out, however, that even these surface changes can have some significance: programs are beginning to recognize the importance of defining learning outcomes and to map and identify gaps in their curricula.

Alan Kay, Viewpoints Research Institute, Inc., drew attention to the bigger picture of outcomes, or "enlightenments," such as those that enable individuals to invent entirely new technologies; these cannot be easily measured but play powerful roles in future innovations. Rogers acknowledged the significance of his point but reiterated that defining explicit learning outcomes is necessary, because instructors and institutions are accountable for assessing student learning and "we can't assess it if we don't know what to expect."

Classroom versus Program Assessment

Rogers outlined the similarities and differences between classroom and program assessment. Both can be formative and/or summative; both measure knowledge, skills, behaviors, attitudes, and values; and both focus on individual students or groups of students. The differences entail the degree of complexity, time span, cost, level of specificity of the measure, degree of accountability for the assessment process, and level of faculty commitment.
Rogers identified constraints on assessment practices that should be taken into consideration: time, facilities, subject-matter relevance, and student knowledge factors such as differences in preexisting knowledge, out-of-class experiences, and the sequence of courses selected. She continued by describing what she referred to as "mapping strategies": procedures for identifying where in the curriculum students have opportunities to learn, apply, and demonstrate the knowledge and skills proposed in the learning outcomes. The process of mapping informs the instructor or administration about existing opportunities as well as gaps in the course or program. Rogers shared her vision of a
syllabus for students that detailed their opportunities and responsibilities to gain specific knowledge and skills. Faculty members at her institution receive a curriculum map every quarter and are asked to indicate learning opportunities in their courses as they relate to the learning outcomes projected by the department or institution.

Rogers listed common assessment methods and made available a booklet she authored, Evaluating Student Learning: E=MC2 Assessment Methods (2002), which describes the available tools in detail: standardized exams, locally developed exams, oral exams, performance appraisals, simulations, written surveys and questionnaires, exit and other surveys, focus groups, external examiners, behavioral observations, archival records, and portfolios. She concluded with some words of motivation for the participants: start early with assessment plans; prioritize and pick battles appropriately; seek out resources and reference materials; recognize that no single assessment plan fits every situation or institution; and adapt strategies from different sources to meet individual needs.

Anticipating the discussion in Chapter 4, Rogers noted that if an institution wishes to encourage and reward excellence in instruction, it must have an established set of goals that any course is expected to achieve and a reliable means of distinguishing between instructional strategies that are more effective and those that are less so in reaching those goals. Yet, despite the importance of evaluating teaching, most colleges and universities continue to struggle with the question of how to do it (Seldin, 1999). A common method of evaluating faculty is through student ratings; however, more reliable and productive assessments rely on multiple sources of evidence.
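The curriculum-mapping procedure Rogers described, recording where each learning outcome is addressed and flagging outcomes that no course covers, can be sketched as a small cross-tabulation. This sketch is not part of the workshop materials; all course names, outcome labels, and the L/A/D coding (learn, apply, demonstrate) are hypothetical illustrations.

```python
# Illustrative sketch of a curriculum "mapping strategy": each course reports
# which program learning outcomes it gives students the opportunity to
# learn (L), apply (A), or demonstrate (D). Outcomes claimed by no course
# are curriculum gaps. All names and codings here are hypothetical.

PROGRAM_OUTCOMES = ["scientific reasoning", "written communication",
                    "teamwork", "quantitative analysis"]

curriculum_map = {
    "Intro Physics": {"scientific reasoning": "L", "quantitative analysis": "L"},
    "Physics Lab":   {"scientific reasoning": "A", "teamwork": "A"},
    "Capstone":      {"scientific reasoning": "D", "quantitative analysis": "D"},
}

def find_gaps(outcomes, mapping):
    """Return the outcomes that no course in the map claims to address."""
    covered = {o for course_outcomes in mapping.values() for o in course_outcomes}
    return [o for o in outcomes if o not in covered]

print("gaps:", find_gaps(PROGRAM_OUTCOMES, curriculum_map))
# In this hypothetical map, "written communication" appears in no course.
```

In practice, a department would also check that each outcome progresses from L to A to D across the course sequence, not merely that it appears somewhere.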
Faculty may also be evaluated by examination of teaching portfolios that describe course organization and teaching materials, determination of the level of student learning, evaluation by peers and administrators, and classroom observations by an instructional consultant (Seldin, 1999; Fink, 2002). Numerous teaching observation and evaluation instruments are described in the literature (reviewed in Wilkerson and Lewis, 2002; NRC, 2003).

An Instructional Assessment Tool
Anton Lawson, Arizona State University

In his presentation, Tools for Assessing Quality USTEM Instruction: Reformed Teaching Observation Protocol (RTOP), Lawson described the context for the development of the RTOP instructional assessment tool and highlighted many of the significant findings from the application of RTOP by faculty of the Arizona Collaborative for Excellence in the Preparation of Teachers (ACEPT).
(See details in Lawson's paper in Appendix A, this volume.) Following best practices in faculty development (Wright, 2002), ACEPT introduced summer training institutes in which college faculty could experience teaching methods based on the principles of effective teaching introduced by the American Association for the Advancement of Science (AAAS) in Science for All Americans (1990). These principles emphasize that teaching should be consistent with the nature of scientific inquiry, and they recommend many of the teaching approaches discussed earlier in the workshop by Heron, Reiser, and others (see Lawson's paper, Appendix A). Lawson's group then employed the RTOP instrument to evaluate whether the institutes had an effect on faculty's use of ACEPT teaching methods in their courses and whether these teaching methods in turn had an effect on student achievement. (Details are available at http://purcell.phy.nau.edu/AZTEC/RTOP/RTOP_full/index.htm and in Lawson's paper in Appendix A.)

The 25-item RTOP observation instrument is organized into the following evaluation categories: lesson design and implementation, propositional and pedagogical content knowledge, classroom culture (interstudent and teacher/student interactions), and problem-solving orientation (Box A-2, Appendix A). Using a Likert scale, an observer rates a lesson on items such as: "3) In this lesson, student exploration preceded formal presentation"; "5) The focus and direction of the lesson was often determined by ideas originating with students"; "6) The lesson involved fundamental concepts of the subject"; "8) The instructor had a solid grasp of the subject matter content inherent in the lesson"; "11) Students used a variety of means (models, drawings, graphs, concrete materials, manipulatives, etc.) to represent phenomena"; "12) Students made predictions, estimations and/or hypotheses and derived means for testing them"; and "17) The instructor's questions triggered divergent modes of thinking."

In an ongoing investigation of the ACEPT program, instructors were evaluated in five courses. To make meaningful comparisons, several instructors in each course were rated with the instrument. The instructors selected for rating exhibited considerable variation in the extent to which they had embraced the reformed methods during the summer institute; instructors who had not participated in the institute were also rated. The examined courses included introductory physics and mathematics courses (each designed especially for preservice elementary school teachers), a large introductory biology course, an introductory physics
course for physics majors, and a biology course for preservice teachers taken near the completion of their undergraduate biology majors. For each course, the data included instructors' RTOP scores and students' scores and/or normalized gains on tests appropriate to each course.

Lawson outlined the significant findings from the ACEPT investigation. The reliability of RTOP was demonstrated to be quite high: trained independent observers rated individual instructors with similar scores. The central result was that mean instructor RTOP scores correlated strongly with student achievement gains (r = 0.88–0.97 across the five courses, p < 0.05), supporting the hypothesis that ACEPT teaching methods promote higher student achievement. Additional evidence for this conclusion was the finding of improved student content knowledge and reasoning skills, as measured by an independent reasoning skills assessment. Furthermore, a significant correlation (r = 0.70, p < 0.05) was found between the RTOP scores of the TAs responsible for the introductory biology course labs and students' reasoning gains as measured in those labs. In follow-up observations, the RTOP scores of in-service teachers who had received instruction at Arizona State from instructors who had participated in ACEPT courses were significantly higher than the scores of those who had not. In a continuing investigation, Lawson reported, ACEPT is building evidence for extended positive effects of ACEPT instruction on the achievement of the high school students whom these preservice teachers go on to teach.

Lawson concluded by pointing out that RTOP scores, which indicate the degree to which ACEPT instructional methods are implemented, are strongly correlated with improvements in student achievement, not only in conceptual understanding but also in reasoning skills.
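The statistics behind these findings can be made concrete with a short sketch. This is not the ACEPT analysis itself; the data below are hypothetical, and the normalized-gain formula follows the standard convention in science education research (gain achieved divided by the maximum possible gain).

```python
# Illustrative sketch (hypothetical data): computing normalized learning
# gains and their Pearson correlation with mean instructor RTOP scores.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Standard normalized gain: fraction of the possible gain achieved."""
    return (post - pre) / (max_score - pre)

def pearson_r(xs: list, ys: list) -> float:
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-course data: mean instructor RTOP score and mean
# student pre/post test scores (out of 100).
rtop_scores = [45.0, 52.0, 61.0, 70.0, 78.0]
pre_post = [(40.0, 55.0), (42.0, 60.0), (38.0, 62.0), (41.0, 70.0), (39.0, 75.0)]

gains = [normalized_gain(pre, post) for pre, post in pre_post]
r = pearson_r(rtop_scores, gains)
print("normalized gains:", [round(g, 2) for g in gains])
print(f"r(RTOP, gain) = {r:.2f}")
```

A statistical test of significance (the reported p < 0.05) would additionally require comparing r against its sampling distribution for n = 5 courses, which is omitted here.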
The critical aspect of these instructional methods, according to Lawson, is that they include a broad array of research-based teaching strategies. "Our project was based…on the teaching principles that are found in the AAAS document called Science for All Americans, which basically states that teaching should be consistent with the nature of scientific inquiry. That means [one should] start with questions about nature. Engage students actively. Concentrate on the collection and use of evidence. Provide historical perspective. Insist on clear expression. Use a cooperative team approach and do not separate knowing from finding out and the memorization of textbook vocabulary."
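The 25-item, Likert-scaled structure of the RTOP instrument described above lends itself to a simple scoring sketch. The item-to-category grouping and the ratings below are hypothetical placeholders (the actual assignment of items to categories appears in Box A-2 of Appendix A); the sketch only illustrates how category subscores and a total observation score are aggregated.

```python
# Illustrative RTOP-style scoring sketch: 25 items rated on a 0-4 Likert
# scale are summed into category subscores and a total (maximum 100).
# The grouping of items into categories here is hypothetical.

CATEGORIES = {
    "lesson design and implementation": range(1, 6),
    "propositional content knowledge": range(6, 11),
    "pedagogical content knowledge": range(11, 16),
    "classroom culture: interstudent": range(16, 21),
    "classroom culture: teacher/student": range(21, 26),
}

def score_lesson(ratings: dict) -> dict:
    """Sum the 0-4 ratings within each category; 'total' is the lesson score."""
    if set(ratings) != set(range(1, 26)):
        raise ValueError("expected ratings for items 1-25")
    if any(not 0 <= v <= 4 for v in ratings.values()):
        raise ValueError("ratings must be on a 0-4 scale")
    subscores = {cat: sum(ratings[i] for i in items)
                 for cat, items in CATEGORIES.items()}
    subscores["total"] = sum(subscores.values())
    return subscores

# Example: an observer rating every item 3 ("descriptive of this lesson").
print(score_lesson({i: 3 for i in range(1, 26)}))
```

Because total scores are bounded sums of ordinal ratings, trained observers rating the same lesson can be compared directly, which is what underlies the inter-rater reliability result Lawson reported.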
In response to participants' concerns about the accuracy of a one-time "snapshot" evaluation of a faculty member, Lawson clarified that observations for these evaluations were conducted on three separate occasions for each instructor, at times when the instructor was introducing new topics in the classroom. He concurred that an instructor's RTOP score could increase if observed over extended periods or in different circumstances, such as laboratory sessions. Richard McCray added that instructors must plan long-term methods to encourage students to become reflective about their learning; this does not occur in just one day. Lawson responded that the summer institutes did encourage the development of such procedures and that the single classroom observations are assumed to be appropriate snapshots of the results.

SUMMARY

The major points from the presentations and discussions concerning the characteristics of effective instruction, and how such instruction may be assessed, are summarized below.

Accumulating research shows that the traditional didactic lecture format can support memorization of factual information but may be less effective than other instructional strategies in promoting understanding of complex concepts or the ability to apply such concepts in new situations. Instructional programs known to the workshop participants to be effective in eliciting such learning start by defining important, measurable learning outcomes for students; recognize that students have diverse learning styles; provide varied experiences for students to develop functional understanding of a subject; promote students' ability to work cooperatively and to communicate orally and in writing; invest in training and mentoring of instructors; and promote research on teaching and learning. Classroom observation instruments exist for evaluating an instructor's degree of success in achieving these goals.
An important element of effective instruction involves engaging students’ preconceptions and prior beliefs in ways that help them achieve a more mature understanding. Effective instructional strategies for correcting misconceptions and producing conceptual understanding for most students require situations that demand active intellectual engagement, such as tutorials, small group learning, hands-on activities, case studies, and problem-solving exercises with appropriate scaffolding. Scaffolding (i.e., support and guidance in learning
specific concepts or tasks) can be provided by an expert (instructor, teaching assistant, or peer learning coach) or by a computer program. When instructors employ effective instructional strategies of the types described, they model in their teaching the ways in which their own students should teach if those students go on to become graduate teaching assistants, K–12 teachers, or science faculty.