PART I
What Is Known: Principles, Research Findings, and Implementation Issues
1
Recent Perspectives on Undergraduate Teaching and Learning

This report addresses a crucial challenge to changing and improving undergraduate education in the United States: how to evaluate the effectiveness of undergraduate teaching in science, technology, engineering, and mathematics (STEM)1 in ways that will enable faculty to enhance student learning, continually improve teaching in these fields, and develop professionally in the practice and scholarship of teaching and learning.

Although many view higher education in the United States as among the best such systems in the world, there have been numerous calls for reform, particularly in the STEM disciplines. Top-ranking policy makers (e.g., Greenspan, 2000; Seymour, in press) have stated that globalization of the economy, significant advances in scientific discovery, and the ubiquity of information technologies make it imperative for all U.S. students (grades K–16) to understand the methods and basic principles of STEM if they are to succeed. Recent reports from the National Science Foundation ([NSF], 1996, 1998), the National Science Board (2000), the National Research Council (NRC) (1996b, 1999a), and others (e.g., Boyer Commission, 1998) have challenged the nation’s colleges and universities to ensure that all undergraduates increase their knowledge and understanding of STEM and the relevance of these disciplines to other areas of learning and human endeavors.

1 This abbreviation for science, technology, engineering, and mathematics education, taken from the official designation of the National Science Foundation for education in the disciplines, is used as shorthand throughout the report.
IMPETUS FOR AND CHALLENGES TO CHANGE

Calls for Accountability from Outside of Academe

The reforms within K–12 education that have been enacted in almost every state and many districts include systems for measuring achievement and accountability. State legislatures, departments of education, school boards, and the general public expect those responsible for educating students to be held specifically accountable for the quality of the outcomes of their work (Rice et al., 2000).

The call for accountability is also being clearly heard at the postsecondary level. State legislatures are demanding that public universities provide quantifiable evidence of the effectiveness of the academic programs being supported with tax dollars. Other bodies, including national commissions, institutional governing boards, and professional accrediting agencies, also have begun to recommend that universities and colleges be held more accountable for student learning (see, e.g., National Center for Public Policy and Higher Education, 2001; see also Chapter 3, this report).

One aspect of the call for accountability in higher education is particularly important for faculty in STEM. Corporate leaders and the public alike are focusing on the need for a scientifically and technologically literate citizenry and a skilled workforce (Capelli, 1997; Greenspan, 2000; International Technology Education Association, 2000; Murnane and Levy, 1996; National Council of Teachers of Mathematics, 2000; NRC, 1996a, 1999a, 2000d). Corporate leaders also have made it increasingly clear that their workforce needs more than basic knowledge in science, mathematics, and technology. They expect those they hire to apply that knowledge in new and unusual contexts, as well as to communicate effectively, work collaboratively, understand the perspectives of colleagues from different cultures, and continually update and expand their knowledge and skills (Capelli, 1997; Greenspan, 2000; Rust, 1998).
Calls for Change from Within Academe

While public pressure for reforming undergraduate teaching and learning and holding educators accountable for such improvements is real and growing, recent surveys also suggest that increasing numbers of faculty are advocating strongly for quality teaching and are paying close attention to how effective teaching is recognized, evaluated, and rewarded within departments
and at the institutional level (Ory, 2000). In a recent survey of doctoral students, for example, 83 percent indicated that teaching “is one of the most appealing aspects of faculty life, as well as its core undertaking” (Golde and Dore, 2001, p. 21). In recent interviews with new faculty members, Rice et al. (2000)2 reported that interviewees overwhelmingly expressed enjoyment of and commitment to teaching and working with students. However, early-career faculty expressed concerns about how their work is evaluated. They perceive that expectations for their performance are vague and sometimes conflicting. They also indicated that feedback on their performance often is insufficient, unfocused, and unclear. Many expressed concern about the lack of a “culture of collegiality” or a “teaching community” at their institutions (Rice et al., 2000).

During the past decade, there also has been increasing concern among senior faculty and administrators about improving undergraduate STEM education. These efforts have been spurred by reports from a variety of national organizations (e.g., Boyer, 1990; Boyer Commission, 1998; NRC, 1996b, 1997a; NSF, 1996; Project Kaleidoscope, 1991, 1994) calling for reform in these disciplines. Professional societies also are devoting serious attention to enhancing undergraduate teaching and learning in these disciplines (e.g., Council on Undergraduate Research <http://www.cur.org>; Doyle, 2000; McNeal and D’Avanzo, 1997; NRC, 1999b, 2000b; Howard Hughes Medical Institute <http://www.hhmi.org>; National Institute for Science Education <http://www.wcer.wisc.edu/nise>; Project Kaleidoscope <http://www.pkal.org>; Rothman and Narum, 1999; and websites and publications of increasing numbers of professional societies in the natural sciences, mathematics, and engineering).

2 This report by Rice et al. (2000) is a product of the American Association for Higher Education’s (AAHE’s) ongoing Forum on Faculty Roles and Rewards. The report provides the results of structured interviews undertaken with more than 350 new faculty members and graduate students aspiring to be faculty members from colleges and universities around the country. The aim of that study was to obtain perspectives from those who are just beginning their academic careers and to offer guidance for senior faculty, chairs, deans, and others in higher education who will be responsible for shaping the professoriate of the future. Rice et al. offer ten “Principles of Good Practice: Supporting Early-Career Faculty,” accompanied by an action inventory to prompt department chairs, senior colleagues, and other academic leaders to examine their individual and institutional practices. These principles and specific action items are also available in a separate publication by Sorcinelli (2000), available at <http://www.aahe.org/ffrr/principles_brochure.htm>.
Challenges to Change

Although there are many pressures on postsecondary institutions to examine and change their practices and assumptions about teaching and learning, it also is clear that the circumstances in which such changes must occur are exceedingly complex.

One challenge is the diversity of the U.S. higher education community. Institutions range from those that serve several hundred students to those that enroll many thousands. Institutional histories and academic missions vary widely, as do their sources of support, means of governance, and student populations. These differences inevitably result in varying expectations on the part of students, faculty, parents, and funders with respect to the relative balance among research, teaching, and service.

A second challenge is that some deeply entrenched aspects of university culture need to change if undergraduate teaching and learning are to improve (Mullin, 2001). One perception of the current culture is that more professional rewards and recognition accrue to those faculty who succeed at research than to those who devote their energies primarily to teaching (Brand, 2000). This perception persists because many postsecondary faculty and administrators believe that it is difficult to measure the effectiveness of an instructor’s teaching or a department’s curriculum objectively (Glassick et al., 1997). This challenge becomes especially difficult when one of the measures is the amount students have learned.

Finally, perhaps the most significant challenge is that many undergraduate faculty in the STEM disciplines have received little or no formal training in techniques or strategies for teaching effectively, assessing student learning, or evaluating the effectiveness of their own teaching or that of their colleagues. Such training is not a firm requirement for being hired as a college-level faculty member. Formal, ongoing programs for professional development aimed at improving teaching are still rare at many postsecondary institutions. Faculty may discover what is known about assessing learning only by perusing the research literature, by participating in workshops on teaching and learning (e.g., Bloom, 1956; see also Anderson et al., 2001; Chickering and Gamson, 1987; and Osterlind, 1989), or by discussing problems with colleagues.

The ultimate goal of undergraduate education should be for individual faculty and departments to improve the academic growth of students. A considerable body of research now exists on how students learn (summarized in How People Learn: Brain, Mind, Experience, and School, NRC, 2000c); on the assessment of teaching and learning (e.g.,
Knowing What Students Know: The Science and Design of Educational Assessment, NRC, 2001); and on other research findings that relate closely to the responsibilities of undergraduate faculty and could lead to direct improvements in undergraduate education. Overviews and summaries of research on learning and the application of that scholarship to the assessment of learning are provided at the end of this chapter in Annex Boxes 1-1 and 1-2, respectively. Many college faculty are not familiar with that literature, however, nor do they have the time, opportunity, or incentives to learn from it. Moreover, assessing whether students actually have learned what was expected requires that faculty rethink course objectives and their approaches to teaching. Extending the assessment of learning outcomes beyond individual courses to an entire departmental curriculum requires that faculty collectively reach consensus about what students should learn and in which courses that knowledge and those skills should be developed.

STATEMENT OF TASK AND GUIDING PRINCIPLES

The committee conducted its work according to the following statement of task from the NRC:

The goal of this project is to develop resources to help postsecondary science, technology, engineering, and mathematics (STEM) faculty and administrators gain deeper understanding about ways convincingly to evaluate and reward effective teaching by drawing on the results of educational research. The committee will prepare a National Research Council report on the evaluation of undergraduate STEM teaching, with a focus on pedagogical and implementation issues of particular interest to the STEM community. The report will emphasize ways in which research in human learning can guide the evaluation and improvement of instruction, and will discuss how educational research findings can contribute to this process.
In responding to this charge, the committee embraced four fundamental premises, all of which have implications for how teaching is honored and evaluated by educational institutions:

1. Effective postsecondary teaching in STEM should be available to all students, regardless of their major.

2. The design of curricula and the evaluation of teaching and learning should be collective responsibilities of faculty in individual departments or, where appropriate, performed through other interdepartmental arrangements.

3. Scholarly activities that focus on improving teaching and learning should be recognized as bona fide endeavors that are equivalent to other scholarly pursuits. Scholarship devoted to improving teaching effectiveness and learning should be accorded the same administrative and collegial support that is available for efforts to improve other research and service endeavors.

4. Faculty who are expected to work with undergraduates should be given support and mentoring in teaching throughout their careers; hiring practices should provide a first opportunity to signal institutions’ teaching values and expectations of faculty.

Thus, the central theme of this report is that teaching evaluation must be coupled with emphasis on improved student learning and on departmental and institutional support of improved teaching through ongoing professional development. Although the challenge is daunting, it is far from impossible. To the contrary, there is mounting evidence that colleges and universities of all types are embracing the challenge of improving undergraduate teaching and resolving these issues in innovative ways (Suskie, 2000). The committee was convinced by its examination of a wide range of literature that well-designed and implemented systems for evaluating teaching and learning can and do improve undergraduate education. Research on effective evaluation of teaching points to a number of principles that are increasingly well supported by evidence and embraced by a growing segment of the higher education community. Accordingly, this report is organized according to six guiding principles:

1. A powerful tool for increasing student learning is ongoing, informal assessment (formative assessment). Emerging research on learning shows that thoughtful and timely feedback informed by pedagogical content knowledge3 is critical for developing among students at all levels a more advanced understanding of key concepts and skills in a discipline. Formative assessment has benefits for both students and faculty. Faculty who use formative assessment effectively also benefit because the feedback loop that is established as they obtain information about student learning enables them to determine rapidly and accurately how to adjust their teaching strategies and curricular materials so their students will learn more effectively (see Chapter 5, this report, for additional details).

2. Appropriate use of formative evaluation facilitates the collection and analysis of information about teaching effectiveness for more formal personnel decisions (summative evaluation). If formative evaluation is employed regularly, faculty also generate information they can use to document the effectiveness of their teaching when they are involved with personnel decisions such as continuing contracts, salary increases, tenure, promotion, or professional awards. Departments and institutions also can use data compiled by individual faculty to examine student learning outcomes and to demonstrate what and how students are learning.

3. The outcomes of effective formative and summative assessments of student learning by individual faculty can be used by other faculty to improve their own teaching, as well as by departments to strengthen existing academic programs or design new ones.

4. Faculty can integrate such information with their own course materials, teaching philosophy, and so on to produce portfolios and other materials that make their work visible to wider communities in reliable and valid ways. Producing such materials demonstrates the accomplishments of faculty in fostering student learning, in developing themselves as scholars, and in contributing to their fields.

5. Embracing and institutionalizing effective evaluation practices can advance the recognition and rewarding of teaching scholarship and communities of teaching and learning. By adopting policies and practices that inform and support the effective use of formative evaluation, departments, institutions, and professional societies can develop effective criteria for evaluating summatively the teaching effectiveness and educational scholarship of faculty.

6. Effective and accepted criteria and practices for evaluating teaching enable institutions to address the concerns of those who are critical of undergraduate teaching and learning. As links between formative and summative student assessment and between summative student assessment and faculty evaluation become part of everyday practice, higher education leaders will be able to respond more effectively to criticisms about the low visibility and value of teaching in higher education.

3 Shulman (1986, p. 9) was the first to propose the concept of pedagogical content knowledge, stating that it “…embodies the aspects of content most germane to its teachability…. [P]edagogical content knowledge includes… the most powerful analogies, illustrations, examples, explanations, and demonstrations—in a word, the ways of representing and formulating the subject that makes it comprehensible to others. …[It] also includes an understanding of what makes the learning of specific concepts easy or difficult: the conceptions and preconceptions that students of different ages and backgrounds bring with them to the learning.” Thus, teachers use pedagogical content knowledge to relate what they know about what they teach (subject matter knowledge) to what they know about effective teaching (pedagogical knowledge). The synthesis and integration of these two types of knowledge characterize pedagogical content knowledge (Cochran, 1997).

In applying these principles, individual faculty, academic departments,
and institutions of higher education can benefit from an overview of existing research on effective practices for evaluating faculty and academic programs. They also need practical guidance about how to initiate the process or advance it on their campuses. Meeting these needs is the primary purpose of this report.

ORGANIZATION OF AND INTENDED AUDIENCES FOR THIS REPORT

Report Organization

In this report, the six organizing principles stated above are used to provide an overview of the current status of research on evaluating teaching and learning. The report also provides a set of guidelines, based on emerging research, for evaluating the teaching of individuals and the academic programs of departments. Faculty and administrators can adapt these ideas for evaluating teaching and programs to the needs of their departments and campuses as appropriate for their institutional mission and identity.

Part I (Chapters 1 through 4) presents principles and research findings that can support improvements in the evaluation of undergraduate teaching in STEM and reviews implementation issues. Chapter 2 reviews characteristics of effective undergraduate teaching and summarizes challenges that faculty may encounter in trying to become more effective teachers. By comparing the “cultures” of teaching and disciplinary research, Chapter 3 examines barriers associated with making undergraduate teaching and learning a more central focus through effective systems for teaching evaluation. This chapter also provides suggestions for better aligning these cultures within the university. Chapter 4 presents key research findings on how to evaluate undergraduate teaching in STEM more effectively.

Part II (Chapters 5 through 8) applies the principles, research findings, and recommendations set forth in Part I, providing an overview of specific methodologies and strategies for evaluating the effectiveness of undergraduate teaching in STEM.
Chapter 5 reviews a variety of methodologies that can be used to evaluate teaching effectiveness and the quality of student learning. Some of these methods also can be applied to evaluate teaching, course offerings, and curriculum at the departmental level. Indeed, it is the committee’s conviction that similar expectations and criteria can and should apply to academic departments and institutions as a whole. Chapters 6 and 7 provide practical strategies for
using the methodologies presented in Chapter 5 to evaluate individual teachers and departmental undergraduate programs, respectively. Finally, all of these findings serve as the basis for a set of recommendations aimed at improving evaluation practices, presented in Chapter 8.

Four appendixes also are provided. Because student evaluations of teaching occupy a place of prominence in current evaluation processes, Appendix A provides an in-depth examination of research findings on the efficacy and limitations of input from undergraduate students. Based on concerns of many faculty about the design and analysis of student evaluations of teaching, colleges and universities across the United States have begun to revise such forms. Appendix B offers specific examples, used by a variety of types of institutions, that comport with the six guiding principles of this report; these examples can serve as models for other institutions that are looking to revamp their student evaluation forms. Similarly, as peer review of teaching gains greater prominence among the instruments for both formative and summative evaluations of teaching, faculty and administrators will require assistance on ways to undertake this process fairly and equitably. Appendix C includes examples of peer evaluation forms that are consistent with the findings and recommendations of this report. Finally, Appendix D provides biographical sketches of the committee members.

This report also provides readers with links to a wealth of additional information and guides available at numerous websites. These links are found primarily in footnotes or in the list of References. All of these links were tested prior to the release of the report and were found to be operable as of July 20, 2002.

Intended Audiences

A primary audience for this report is the individual STEM faculty members who teach disciplinary and interdisciplinary courses at colleges and universities, especially at the introductory level.
This report also is directed to departmental and institutional leaders in higher education, including college and university presidents and chancellors, provosts, academic deans, and department chairs—those who can best promote a culture and community of teaching and learning and can encourage faculty collaboration in improving student learning and academic success.
Annex Box 1-1. Seven Principles of Learning

Research in the cognitive, learning, and brain sciences has provided many new insights about how humans organize knowledge, how experience shapes understanding, how individuals differ in learning strategies, and how people acquire expertise. From this emerging body of research, scientists and others have been able to synthesize the following seven principles of learning:

1. Learning with understanding is facilitated when new and existing knowledge is structured around the major concepts and principles of the discipline. Proficient performance in any discipline requires knowledge that is both accessible and usable. Experts’ content knowledge is structured around the major organizing principles, core concepts, and “big ideas” of the discipline. Their strategies for thinking and solving problems are closely linked to their understanding of such core concepts. Therefore, knowing many disconnected facts is not sufficient for developing expertise. Understanding the big ideas also allows disciplinary experts to discern the deeper structure and nature of problems and to recognize similarities between new problems and those previously encountered. Curricula that emphasize breadth of coverage and simple recall of facts may hinder students’ abilities to organize knowledge effectively because they do not learn anything in depth, and thus are not able to structure what they are learning around the major organizing principles and core concepts of the discipline.

2. Learners use what they already know to construct new understandings. College students already possess knowledge, skills, beliefs, concepts, conceptions, and misconceptions that can significantly influence how they think about the world, approach new learning, and go about solving unfamiliar problems.
They often attempt to learn a new idea or process by relating it to ideas or processes they already understand. This prior knowledge can produce mistakes as well as new insights. How these links are made may vary in different subject areas and among students with varying talents, interests, and abilities. Learners are likely to construct interpretations of newly encountered problems and phenomena in ways that agree with their own prior knowledge even when those interpretations conflict with what a teacher has attempted to teach. Therefore, effective teaching involves gauging what learners already know about a subject and finding ways to build on that knowledge. When prior knowledge contains misconceptions, effective instruction entails detecting those misconceptions and addressing them, sometimes by challenging them directly.
3. Learning is facilitated through the use of metacognitive strategies that identify, monitor, and regulate cognitive processes. Metacognition is the ability of people to predict and monitor their current level of understanding and mastery of a subject or performance on a particular task and decide when it is not adequate (NRC, 2000e). Metacognitive strategies include (1) connecting new information to former knowledge; (2) selecting thinking strategies deliberately; and (3) planning, monitoring, and evaluating thinking processes. To be effective problem solvers and learners, students need to reflect on what they already know and what else they need to know for any given situation. They must consider both factual knowledge—about the task, their goals, and their abilities—and strategic knowledge about how and when to use a specific procedure to solve the problem at hand. Research indicates that instructors can facilitate the development of metacognitive abilities by providing explicit instruction focused on such skills, by providing opportunities for students to observe teachers or other content experts as they solve problems, and by making their thinking visible to those observing.

4. Learners have different strategies, approaches, patterns of abilities, and learning styles that are a function of the interaction between their heredity and their prior experiences. Individuals are born with a potential to learn that develops through their interaction with their environment to produce their current capabilities and talents. Among learners of the same age, there are important differences in cognitive abilities (such as linguistic and spatial aptitudes or the ability to work with symbolic representations of the natural world), as well as in emotional, cultural, and motivational characteristics. Thus, some students will respond favorably to one kind of instruction, whereas others will benefit more from a different approach. Educators need to be sensitive to such differences so that instruction and curricular materials will be suitably matched to students’ developing abilities, knowledge base, preferences, and styles. Students with different learning styles also need a range of opportunities and ways to demonstrate their knowledge and skills. Using one form of assessment will work to the advantage of some students and to the disadvantage of others; multiple measures of learning and understanding will provide a better picture of how well individual students are learning what is expected of them.

5. Learners’ motivation to learn and sense of self affect what is learned, how much is learned, and how much effort will be put into the learning process. Both internal and external factors motivate people to learn and develop competence. Regardless of the source, learners’ level of motivation strongly affects their willingness to persist in the face of difficulty or challenge. Intrinsic motivation is enhanced when students perceive learning tasks as interesting and personally meaningful, and presented at an appropriate level of difficulty. Tasks that are too difficult can frustrate; those that are too easy can lead to boredom. Research also has revealed strong
connections between learners’ beliefs about their own abilities in a subject area and their success in learning that subject. For example, some students believe their ability to learn a particular subject or skill is predetermined, whereas others believe their ability to learn is substantially a function of effort. The use of instructional strategies that encourage conceptual understanding is an effective way to increase students’ interest and enhance their confidence about their abilities to learn a particular subject.

6. The practices and activities in which people engage while learning shape what is learned. Research indicates that the way people learn a particular area of knowledge and skills and the context in which they learn it become a fundamental part of what is learned. When students learn some subject matter or concept in only a limited context, they often miss seeing the applicability of that information to solving novel problems encountered in other classes, in other disciplines, or in everyday life situations. By encountering a given concept in multiple contexts, students develop a deeper understanding of the concept and how it can be used and applied to other contexts. Faculty can help students apply subject matter to other contexts by engaging them in learning experiences that draw directly upon real-world applications, or exercises that foster problem-solving skills and strategies that are used in real-world situations. Problem-based and case-based learning are two instructional approaches that create opportunities for students to engage in practices similar to those of experts. Technology also can be used to bring real-world contexts into the classroom.4

7. Learning is enhanced through socially supported interactions. Learning can be enhanced when students have opportunities to interact and collaborate with others on instructional tasks. In learning environments that encourage collaboration, such as those in which most practicing scientists and mathematicians work, individuals have opportunities to test their ideas and learn by observing others. Research demonstrates that providing students with opportunities to articulate their ideas to peers and to hear and discuss others’ ideas in the context of the classroom is particularly effective in enhancing conceptual learning. Social interaction also is important for the development of expertise, metacognitive skills (see learning principle #3), and formation of the learner’s sense of self (see learning principle #5).

4 Specific techniques for structuring problem-based learning and employing technology in college classrooms are discussed on the website of the National Institute for Science Education. Suggestions for creative uses of technology are available at <http://www.wcer.wisc.edu/nise/cl1/ilt/default.asp>. Each site also provides further references. Additional resources on problem-based learning are found in Allen and Duch (1998).

SOURCE: Excerpted and modified from NRC (2002b, Ch. 6). Original references are cited in that chapter.
Annex Box 1-2. Overview of Research on Effective Assessment of Student Learning

Although assessments used in various contexts and for differing purposes often look quite different, they share common principles. Assessment is always a process of reasoning from evidence. Moreover, assessment is imprecise to some degree: assessment results are only estimates of what a person knows and can do. It is essential to recognize that no single type of assessment is appropriate for measuring learning in all students; multiple measures provide a more robust picture of what an individual has learned.

Every assessment, regardless of its purpose, rests on three pillars: a model of how students represent knowledge and develop competence in the subject domain, tasks or situations that allow one to observe students’ performance, and an interpretation method for drawing inferences from the performance evidence thus obtained.

Educational assessment does not exist in isolation; it must be aligned with curriculum and instruction if it is to support learning.

Research on learning and cognition indicates that assessment practices should extend beyond an emphasis on skills and discrete bits of knowledge to encompass more complex aspects of student achievement. Studies of learning by novices and experts in a subject area demonstrate that experts typically organize factual and procedural knowledge into schemas that support the recognition of patterns and the rapid retrieval and application of knowledge. Experts use metacognitive strategies to monitor their understanding as they solve problems and to correct errors in their learning and understanding (see Annex Box 1-1, principle 3, for additional information about metacognition). Assessments should therefore attempt to determine whether a student has developed good metacognitive skills, and should focus on identifying the specific strategies students use for problem solving.
Learning involves a transformation from naïve understanding into more complete and accurate comprehension. Appropriate assessments can both facilitate this process for individual students and help faculty revise their approaches to teaching. To this end, assessments should focus on making students’ thinking visible to both themselves and their instructors, so that faculty can select appropriate instructional strategies to enhance future learning. One of the most important roles of assessment is the provision of timely and informative feedback to students during instruction and learning, so that their practice of a skill, and its subsequent acquisition, will be effective and efficient.

Much of human learning is acquired through discourse and interaction with others. Knowledge is often associated with particular social and cultural contexts, and it encompasses understanding about the meaning of specific practices, such as asking and answering questions. Effective assessments need to determine how well students engage in communicative practices that are appropriate to the discipline being assessed. Assessments should also examine what students understand about such practices and how they use tools appropriate to that discipline.

The design of high-quality assessments is a complex process involving numerous iterative and interdependent components. Decisions made at a later stage of the design process can affect those made at an earlier stage; thus, as faculty develop assessments of student learning, they must often revisit their choices of questions and approaches and refine their designs. Although the reporting of results occurs at the end of an assessment cycle, assessments must be designed from the outset to ensure that reporting of the desired types of information will be possible.

Providing students with information about particular qualities of their work, and about what they can do to improve, is crucial for maximizing learning. For assessment to be effective, students must understand and share the goals for the learning that is assessed. Students learn more when they understand, and in some cases participate in developing, the criteria by which their work will be evaluated, and when they engage in peer and self-assessment in which they apply those criteria. Such practices also help students develop metacognitive abilities, which in turn improve their development of expertise in a discipline or subject area.

SOURCE: Excerpted and modified from NRC (2001, pp. 2–9). References to support these statements are provided in that report.