

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




6
Teaching and Assessing for Transfer

The prior chapters have established transfer as the defining characteristic of deeper learning; discussed the importance of cognitive, intrapersonal, and interpersonal skills for adult success; and expanded our description of deeper learning, including both the process of deeper learning and its manifestation in the disciplines of English language arts, mathematics, and science. This chapter takes the argument one step further by reviewing research on teaching for transfer. The first section discusses the importance of specifying clear definitions of the intended learning goals and the need for accompanying valid outcome measures if we are to teach and assess for transfer. Accepting that there are limitations in the research, the next section describes emerging evidence indicating that it is possible to support deeper learning and development of transferable knowledge and skills in all three domains. The third section then summarizes what is known about how to support deeper learning and the development of transferable cognitive competencies, identifying features that may serve as indicators that an intervention is likely to develop these competencies in a substantial and meaningful way. The fourth section then discusses what is known about how to support deeper learning in the intrapersonal and interpersonal domains. The fifth section returns to issues of assessment and discusses the role of assessment in support of deeper learning. The final section offers conclusions and recommendations.

EDUCATION FOR LIFE AND WORK

THE NEED FOR CLEAR LEARNING GOALS AND VALID MEASURES

Educational interventions may reflect different theoretical perspectives on learning and may target different skills or domains of competence. In all cases, however, the design of instruction for transfer should start with a clear delineation of the learning goals and a well-defined model of how learning is expected to develop (National Research Council, 2001). The model—which may be hypothesized or established by research—provides a solid foundation for the coordinated design of instruction and assessment aimed at supporting students’ acquisition and transfer of targeted competencies.

Designing measures to evaluate student accomplishment of the particular learning goals can be an important starting point for the development process because outcome measures can provide a concrete representation of the ultimate student learning performances that are expected and of the key junctures along the way, which in turn can enable the close coordination of intended goals, learning environment characteristics, programmatic strategies, and performance outcomes. Such assessments also communicate to educators and learners—as well as designers—what knowledge, skills, and capabilities are valued (Resnick and Resnick, 1992; Herman, 2008).

An evidence-based approach to assessment rests on three pillars that need to be closely synchronized (National Research Council, 2001, p. 44):

• A model of how students represent knowledge and develop competence in a domain
• Tasks or situations that allow one to observe student performance relative to the model
• An interpretation framework for drawing inferences from student performance

Developing that first pillar—a model of the learning outcomes to be assessed—offers a first challenge in the assessment of cognitive, intrapersonal, and interpersonal competencies.
Within each of these three broad domains, theorists have defined and conducted research on a wealth of individual constructs. In the previous chapters, we noted that the research literature on cognitive and noncognitive competencies has used a wide variety of definitions, particularly in the intrapersonal and interpersonal domains. In Chapter 2, we suggested certain clusters of competencies within each domain as the targets of assessment and instruction and offered preliminary definitions. Questions remain, however, about the implications of these definitions. For example, the range of contexts and situations across which the learning of these competencies should transfer remains unclear.

A second challenge arises from the existing assessment models and methodologies used to observe and interpret students’ responses relative to these constructs. It is widely acknowledged that most current large-scale measures of educational achievement do not do a good job of reflecting deeper learning goals, in part because of constraints on testing formats and testing time (Webb, 1999; also see Chapter 7). While a variety of well-developed exemplars exist for constructs in the cognitive domain, those for intrapersonal and interpersonal competencies are less well developed. Below, we briefly discuss examples of measures for each domain of competence. (For a fuller discussion of this topic, see National Research Council, 2011a.)

Measures of Cognitive Competence

Promising examples of measures focused on important cognitive competencies can be found in national and international assessments, in training and licensing tests, and in initiatives currently under way in K-12. One example is the computerized problem-solving component of the Programme for International Student Assessment (PISA), which is scheduled for operational administration in 2012 (National Research Council, 2011b). In this 40-minute test, items are grouped in units around a common problem, which keeps reading and numeracy demands to a minimum. The problems are presented within realistic, everyday contexts, such as refueling a moped, playing on a handball team, mixing elements in a chemistry lab, and taking care of a pet. The difficulty of the items is manipulated by increasing the number of variables or the number of relationships that the test taker has to deal with.

Scoring of the items reflects the PISA 2012 framework, which defines four processes that are components of problem solving: (1) information retrieval, (2) model building, (3) forecasting, and (4) monitoring and reflecting.
Points are awarded for information retrieval based on whether the test taker recognizes the need to collect baseline data and uses the method of manipulating one variable at a time. Scoring for the process of model building reflects whether the test taker generates a correct model of the problem. Scoring of forecasting is based on the extent to which responses to the items indicate that the test taker has set and achieved target goals. Finally, points are awarded for monitoring and reflecting, which includes checking the goal at each stage, detecting unexpected events, and taking remedial action if necessary.

Another promising example of assessment of complex cognitive competencies, created by the National Conference of Bar Examiners, consists of three multistate examinations that jurisdictions may use as one step in the
process of licensing lawyers.1 The three examinations are the Multistate Bar Examination (MBE), the Multistate Essay Examination (MEE), and the Multistate Performance Test (MPT). All are paper-and-pencil tests that are designed to measure the knowledge and skills necessary to be licensed in the profession and to ensure that the newly licensed professional knows what he or she needs to know in order to practice. These overarching goals—as well as the goals of the individual components summarized briefly below—reflect an assumption that law students need to have developed transferable knowledge that they will be able to apply when they become lawyers.

The purpose of the MBE is to assess the extent to which an examinee can apply fundamental legal principles and legal reasoning to analyze a given pattern of facts. The questions focus on the understanding of legal principles rather than on memorization of local case or statutory law. The MBE consists of 200 multiple-choice questions and is administered over an entire day.

The purpose of the MEE is to assess the examinee’s ability to (1) identify legal issues raised by a hypothetical factual situation; (2) separate material that is relevant from that which is not; (3) present a reasoned analysis of the relevant issues in a clear, concise, and well-organized composition; and (4) demonstrate an understanding of the fundamental legal principles relevant to the probable resolution of the issues raised by the factual situation. This test lasts for 6 hours and consists of nine 30-minute questions.

The goal of the MPT is to assess the fundamental skills of lawyers in realistic situations by asking the candidate to complete a task that a beginning lawyer should be able to accomplish.
It requires applicants to sort detailed factual materials; separate relevant from irrelevant facts; analyze statutory, case, and administrative materials for relevant principles of law; apply relevant law to the facts in a manner likely to resolve a client’s problem; identify and resolve ethical dilemmas; communicate effectively in writing; and complete a task within time constraints. Examinees are given 90 minutes to complete each task.

These and other promising examples each start with a strong model of the competencies to be assessed; use simulated cases and scenarios to pose problems that require extended analysis, evaluation, and problem solving; and apply sophisticated scoring models to support inferences about student learning. The PISA example, in addition, demonstrates the dynamic and interactive potential of technology to simulate authentic problem-solving situations.

The PISA problem-solving test is one of a growing set of examples that use technology to simultaneously engage students in problem solving and assess their problem-solving skills. Another example is SimScientists, a

1 The following description of the three examinations relies heavily on Case (2001).

simulation-based curriculum unit that includes a sequence of assessments designed to measure student understanding of ecosystems (Quellmalz, Timms, and Buckley, 2010). The SimScientists summative assessment is designed to measure middle school students’ understanding of ecosystems and scientific inquiry. Students are presented with the overarching task of describing an Australian grassland ecosystem for an interpretive center and respond by drawing food webs and conducting investigations with the simulation. Finally, they are asked to present their findings about the grasslands ecosystem.

SimScientists also includes elements focusing on transfer of learning, as described in a previous NRC report (National Research Council, 2011b, p. 94):

To assess transfer of learning, the curriculum unit engages students with a companion simulation focusing on a different ecosystem (a mountain lake). Formative assessment tasks embedded in both simulations identify the types of errors individual students make, and the system follows up with graduated feedback and coaching. The levels of feedback and coaching progress from notifying the student that an error has occurred and asking him or her to try again, to showing the results of investigations that met the specifications. Students use this targeted, individual feedback to engage with the tasks in ways that improve their performance.

As noted in Chapter 4, practice is essential for deeper learning, but knowledge is acquired much more rapidly if learners receive information about the correctness of their results and the nature of their mistakes.

Combining expertise in content, measurement, learning, and technology, these assessment examples employ evidence-centered design and are developing full validity arguments.
They reflect the emerging consensus that problem solving must be assessed as well as developed within specific content domains (as discussed in the previous chapter; also see National Research Council, 2011a). In contrast to these examples, many other current technology-based projects designed to impact student learning lack a firm assessment or measurement basis (National Research Council, 2011b).

Project- and problem-based learning and performance assessments that require students to engage with novel, authentic problems and to create complex, extended responses in a variety of media would seem to be prime vehicles for measuring important cognitive competencies that may transfer. What remains to be seen, however, is whether the assessments are valid for their intended use and whether the reliability of scoring and the generalizability of results can achieve acceptable levels of rigor, thereby avoiding the validity and reliability problems of complex performance assessments developed in the past (e.g., Shavelson, Baxter, and Gao, 1993; Linn et al., 1995).

Measures of Intrapersonal and Interpersonal Competence

As is the case with interpersonal skills, many of the existing instruments for the measurement of intrapersonal skills have been designed for research and theory development purposes and thus have the same limitations for large-scale educational uses as the instruments for measuring interpersonal skills. These instruments include surveys (self-reports and informant reports), situational judgment tests, and behavioral observations. As with the assessment of interpersonal competencies, it is possible that evidence of intrapersonal competencies could be elicited from the process and products of student work on suitably designed complex tasks. For example, project- or problem-based performance assessments theoretically could be designed to include opportunities for students to demonstrate metacognitive strategies or persistence in the face of obstacles. Student products could be systematically observed or scored for evidence of the targeted competencies, and then these scores could be counted in student grades or in scores on end-of-year accountability assessments. To date, however, strong design methodologies, interpretive frameworks, and approaches to assuring score reliability, validity, and fairness have not been developed for such project- or problem-based performance assessments.

There are few well-established practical assessments for interpersonal competencies that are suitable for use in schools, with the exception of tests designed to measure those skills related to formal written and oral communication. Some large-scale measures of collaboration were developed as part of performance assessments during the 1990s, but the technical quality of such measures was never firmly established. The development of those assessments revealed an essential tension between the nature of group work and the need to assign valid scores to individual students.
Today there are examples of teacher-developed assessments of teamwork and collaboration being used in classrooms, but technical details are sketchy.

Most well-established instruments for measuring interpersonal competencies have been developed for research and theory-building or for employee selection purposes, rather than for use in schools. These instruments tend to be one of four types: surveys (self-reports and informant reports), social network analysis, situational judgment tests, or behavioral observations (Bedwell, Salas, and Fiore, 2011). Potential problems arise when applying any of these methods for large-scale educational assessment, to which stakes are often attached. Stakes are high when significant positive or negative consequences are applied to individuals or organizations based on their test performance, consequences such as high school graduation, grade-to-grade promotion, specific rewards or penalties, or placement into special programs.

TEACHING AND ASSESSING FOR TRANSFER 149 Stakes attached to large-scale assessment results heighten the need for the reliability and validity of scores, particularly in terms of being resistant to fakeability. Cost and feasibility also are dominant issues for large-scale assessments. Each of the instrument types has limitations relative to these criteria. Self-report, social network analysis, and situational judgment tests, which can provide relatively efficient, reliable, and cost-effective measures, are all subject to social desirability bias—the tendency to give socially de- sirable and socially rewarded rather than honest responses to assessment items or tasks. While careful design can help to minimize or correct for social desirability bias, if any of these three types of assessment instruments were used for high-stakes educational testing, social desirability bias would likely be heightened. Behavioral ratings, in contrast, present challenges in assuring reliabil- ity and cost feasibility. For example, if students’ interpersonal skills are assessed based on self, peer, or teacher ratings of student presentations of portfolios of their past work (including work as part of a team), a number of factors may limit the reliability and validity of the scores. These include differences in the nature of the interactions reflected in the portfolios for different students or at different times; differences in raters’ application of the scoring rubric; and differences in the groups with whom individual students have interacted. This lack of uniformity in the sample of inter- personal skills included in the portfolio poses a threat to both validity and reliability (National Research Council, 2011a). Dealing with these threats to reliability takes additional time and money beyond that required for simply presenting and scoring student presentations. 
Collaborative problem-solving tasks currently under development by PISA offer one of the few examples today of a direct, large-scale assessment targeting social and collaboration competencies; other prototypes are under development by the ATC21S project and by the military. The quality and practical feasibility of any of these measures are not yet fully documented. However, like many of the promising cognitive measures, these rely on the abilities of technology to engage students in interaction, to simulate others with whom students can interact, to track students’ ongoing responses, and to draw inferences from those responses.

Summary

In summary, there are a variety of constructs and definitions of cognitive, intrapersonal, and interpersonal competencies and a paucity of high-quality measures for assessing them. All of the examples discussed above are measures of maximum performance rather than of typical performance (see Cronbach, 1970). They measure what students can do rather than what they are likely to do in a given situation or class of situations. While
measures of maximum performance are usually the focus in the cognitive domain, typical performance may be the primary focus of measures for some intrapersonal and interpersonal competencies. For example, measures of dispositions and attitudes related to conscientiousness, multicultural sensitivity, and persistence could be designed to assess what students are likely to do (typical performance). In comparison to measures of maximum performance, measures of typical performance require more complex designs and tend to be less stable and reliable (Patry, 2011).

Both the variety of definitions of constructs across the three domains and the lack of high-quality measures pose challenges for the teaching, assessment, and learning of 21st century competencies. They also pose challenges to research on interventions designed to impact student learning and performance, as we discuss below.

EMERGING EVIDENCE OF INSTRUCTION THAT PROMOTES DEEPER LEARNING

Despite the challenges posed by a lack of uniform definitions and high-quality measures of the intended performance outcomes, there is emerging evidence that cognitive, intrapersonal, and interpersonal competencies can be developed in ways that promote transfer. The most extensive and strongest evidence comes from studies of interventions targeting cognitive competencies, but there is also evidence of development of intrapersonal and interpersonal competencies. The research includes studies encompassing how people learn in formal, informal, and workplace learning environments, as discussed further below.

Evidence from Interventions in Formal Learning Environments

As illustrated by the examples in the previous chapter, some classroom-based interventions targeting specific cognitive competencies have also, through changes in teaching practices, fostered development of intrapersonal and interpersonal competencies.
The students learn through discourse, reflection, and shared experience in a learning community. For example, Boaler and Staples (2008) note the following:

The discussions at Railside were often abstract mathematical discussions and the students did not learn mathematics through special materials that were sensitive to issues of gender, culture, or class. But through their mathematical work, the Railside students learned to appreciate the different ways that students saw mathematics problems and learned to value the contribution of different methods, perspectives, representations, partial ideas and even incorrect ideas as they worked to solve problems. (p. 640)

Both the mathematics knowledge and skills and the positive dispositions toward mathematics and feelings of self-efficacy in mathematics developed by these students appear to be durable and transferable, as nearly half of the students later enrolled in calculus classes and all indicated plans to continue studying mathematics.

In the domain of English language arts, Guthrie, Wigfield, and their colleagues developed an instructional system designed to improve young students’ reading by improving their motivation and self-regulation as well as their use of cognitive and metacognitive strategies (Guthrie et al., 1996, 2004; Guthrie, McRae, and Klauda, 2007; Wigfield et al., 2008; Taboada et al., 2009). Several empirical studies found this intervention to be successful in improving the performance of young readers, reflecting gains in the cognitive knowledge and skills that were the primary targets of the intervention (Guthrie et al., 2004). The young students involved in the intervention showed greater engagement in reading both in school and outside of school (Wigfield et al., 2008). These findings suggest that the students not only developed the intrapersonal competencies of motivation and self-regulation but also transferred these competencies to their reading in the contexts of both school and home.

There is also some evidence that intrapersonal and interpersonal competencies can be effectively taught and learned in the classroom. In the past, interventions often focused on reducing or preventing undesirable behaviors, such as antisocial behavior, drug use, and criminal activities. Increasingly, however, intervention programs are designed instead to build positive capacities, including resilience, interpersonal skills, and intrapersonal skills, in both children and families.
In a recent review of the research on these new skill-building approaches—including meta-analyses and numerous randomized trials—a National Research Council committee (2009b) concluded that effectiveness has been demonstrated for interventions that focus on strengthening families, strengthening individuals, and promoting mental health in schools and in healthcare and community programs.

Durlak et al. (2011) recently conducted a meta-analysis of school-based instructional programs designed to foster social and emotional learning. They located 213 studies that targeted students aged 5 to 18 without any identified adjustment or learning problems, that included a control group, and that reported sufficient data to allow calculation of effect sizes. Almost half of the studies employed randomized designs. More than half (56 percent) were implemented in elementary school, 31 percent in middle school, and the remainder in high school. The majority were classroom based, delivered either by teachers (53 percent) or by personnel from outside the school (21 percent). Most of the programs (77 percent) lasted less than a year, 11 percent lasted 1 to 2 years, and 12 percent lasted more than 2 years.

The authors analyzed the effectiveness of these school-based programs in terms of six student outcomes in the cognitive, intrapersonal, and interpersonal domains: social and emotional skills, attitudes toward self and others, positive social behaviors, conduct problems, emotional distress, and academic performance. Measures of these outcomes included student self-reports; reports and ratings from a teacher, parent, or independent rater; and school records (including suspensions, grades, and achievement test scores).

Overall, the meta-analysis showed statistically significant, positive effect sizes for each of the six outcomes, with the strongest effects (d = 0.57) in social and emotional skills.2 These positive effects across the different outcomes suggest that students transferred what they learned about positive social and emotional skills in the instructional programs, displaying improved behavior throughout the school day. Among the smaller group of 33 interventions that included follow-up data (with an average follow-up period of 92 weeks), the effects at the time of follow-up remained statistically significant, although the effect sizes were smaller. These findings suggest that the learning of social and emotional skills was at least somewhat durable. An even smaller subset of the reviewed studies included measures of academic performance. Among these studies the mean effect size was 0.27, reinforcing the interconnectedness of learning across the cognitive, intrapersonal, and interpersonal domains.

One promising example showing that interventions can develop transferable intrapersonal competencies is Tools of the Mind, a curriculum used in preschool and early primary school to develop self-regulation, improve working memory, and increase adaptability (Diamond et al., 2007).
It includes activities such as telling oneself aloud what one should do, dramatic play, and aids to facilitate memory and attention (such as an activity in which a preschooler is asked to hold a picture of an ear as a reminder to listen when another preschooler is speaking). A randomized controlled trial in 18 classrooms in a low-income urban school district indicated that the curriculum was effective in improving self-regulation, classroom behavior, and attention. The documented improvement in classroom behavior suggests that the young children transferred the self-regulation competencies they learned through the activities to their daily routines. The intervention also improved working memory and cognitive flexibility, further illustrating

2 In research on educational interventions, the standardized effect size, symbolized by d, is calculated as the difference in means between treatment and control groups, divided by the pooled standard deviation of the two groups. Following rules of thumb suggested by Cohen (1988), an effect size of approximately 0.20 is considered “small,” approximately 0.50 is considered “medium,” and approximately 0.80 is considered “large.” Thus, the effect size of 0.57 on social and emotional skills falls between “medium” and “large.”
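The effect-size calculation described in footnote 2 can be sketched in a few lines of code. This is a minimal illustration, not material from the report; the function name and the group summaries passed to it are hypothetical.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized effect size d: the difference in means between treatment
    and control groups, divided by their pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical group summaries (not data from the Durlak et al. meta-analysis):
# treatment mean 105.7, control mean 100.0, both SDs 10.0, 50 students per group.
d = cohens_d(105.7, 10.0, 50, 100.0, 10.0, 50)
print(round(d, 2))  # 0.57
```

By the Cohen (1988) rules of thumb cited in the footnote, a d of roughly 0.20 is small, 0.50 is medium, and 0.80 is large, so this illustrative value of 0.57 sits between medium and large.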

the links across the cognitive, intrapersonal, and interpersonal domains (Barnett et al., 2008).

Because of the closely intertwined nature of cognitive, intrapersonal, and interpersonal competencies, an intervention targeting learning and skill development in one domain can influence other domains, as illustrated by a study included in the Durlak et al. (2011) meta-analysis. Flay et al. (2006) conducted a randomized controlled trial of the Positive Action Program—a drug education and conflict resolution curriculum with parent and community outreach—in 20 elementary schools in Hawaii. Although the intervention was focused on social and emotional competencies, it had large, statistically significant effects on mathematics (an effect size of 0.34) and reading achievement (0.74).

Evidence from Interventions in Informal Learning Environments

Studies of informal learning environments provide more limited evidence that cognitive, intrapersonal, and interpersonal competencies can be taught in ways that promote deeper learning and transfer. Informal learning takes place in a variety of settings, including after-school clubs, museums, science centers, and homes, and it includes a variety of experiences, from completely unstructured activities to highly structured workshops and educational programs. Informal learning activities may target a range of different learning goals, including goals determined by the interests of individual learners (National Research Council, 2011b). These characteristics of informal learning pose challenges both to clearly identifying the goals of a particular informal learning activity and to a careful assessment of learners’ progress toward those goals—essential components of any rigorous evaluation (National Research Council, 2009a).
Despite these challenges, research and evaluation studies have shown, for example, that visitors to museums and science centers can develop a deeper understanding of a targeted scientific concept through the direct sensory or immersive experience provided by the exhibits (National Research Council, 2009a).

Somewhat stronger evidence that informal learning environments can develop important competencies emerges from evaluations of structured after-school programs with clearly defined learning goals. Durlak, Weissberg, and Pachan (2010) conducted a meta-analysis of after-school programs designed to promote social and emotional learning among children and youth. They located 68 studies of social and emotional learning programs that included both a control group and measures of postintervention competencies, and they analyzed data on three categories of outcomes:

[…]

Based on prior research, the authors identified four practices thought to work together in combination to enhance the effectiveness of such programs:

• A sequenced, step-by-step training approach
• Emphasizing active forms of learning, so that youth can practice new skills
• Focusing specific time and attention on skill training
• Clearly defining goals, so that youth know what they are expected to learn

Among the programs evaluated in the studies, 41 followed all four of the research-based practices listed above, while 27 did not follow all four. The group of programs that followed the four practices showed statistically significant mean effects for all outcomes (including drug use and school attendance), while the group of programs that did not follow all four practices did not yield significant mean effects for any of the outcomes. These findings support the authors’ hypothesis that the four research-based practices work best in combination to support the development of intrapersonal and interpersonal skills.

In a more recent meta-analysis of school-based social and emotional learning programs, Durlak et al. (2011) reviewed 213 studies, examining findings of effectiveness in terms of six outcomes:

• Social and emotional skills
• Attitudes toward self and others
• Positive social behaviors
• Conduct problems
• Emotional distress
• Academic performance

When the authors considered the findings in terms of the four research-based practices identified in their earlier study (Durlak, Weissberg, and Pachan, 2010), they found that the group of programs that followed all four of these recommended practices showed significant effects for all six outcomes, whereas programs that did not follow all four practices showed significant effects for only three outcomes (attitudes, conduct problems, and academic performance). The authors also found that the quality of implementation mattered.
When programs were well conducted and proceeded according to plan, gains across the six outcomes were more likely. These four practices are similar to some of the research-based methods and design principles described above for supporting deeper learning in the cognitive domain. For example, the earlier discussion identified the

method of encouraging elaboration, questioning, and self-explanation as an effective way to support deeper learning of cognitive skills and knowledge. Similarly, the research on teaching social and emotional skills suggests that active forms of learning that include elaboration and questioning—such as role playing and behavioral rehearsal strategies—support deeper learning of intrapersonal and interpersonal skills and knowledge. These active forms of social and emotional learning provide opportunities for learners to practice new strategies and receive feedback.

The research on social and emotional skills indicates that it is important for teachers and school leaders to give sufficient attention to skill development, with a sequential and integrated curriculum providing opportunities for extensive practice. This echoes two findings about teaching cognitive skills: (1) teaching should be conducted within the specific context in which problems will be solved—in this case, social and emotional problems; and (2) the development of expert problem-solving skill requires years of deliberate practice. Providing adequate time and attention for skill development in the school curriculum appears to enhance the learning of intrapersonal and interpersonal skills. Finally, the research on social and emotional learning—like the research on cognitive learning—indicates that establishing explicit learning goals enhances effectiveness (Durlak et al., 2011). Just as the research on instruction for cognitive outcomes has demonstrated that learners need support and guidance to progress toward clearly defined goals (and that pure “discovery” does not lead to deep learning), so, too, has the research on instruction for social and emotional outcomes.

Research on team training also provides suggestive evidence that certain instructional design principles are important for the deeper learning of intrapersonal and interpersonal skills.
In their meta-analysis, Salas et al. (2008) analyzed the potential moderating influence that the content of the team-training interventions had on outcomes. They identified three types of content: primarily task work; primarily teamwork (i.e., communication and other interpersonal skills); and both task work and teamwork. Their results suggest that when the goal is performance improvement, the content makes little difference. However, for process outcomes (i.e., the development of intrapersonal and interpersonal skills that facilitate effective teamwork) and affective outcomes, teamwork and mixed-content training are associated with larger effect sizes than training focused on task work. The finding that focusing training content on teamwork skills improves effectiveness when the goal is to improve team processes provides further support for the design principle that instruction should focus on clearly defined learning goals. The authors caution, however, that this conclusion is based on only a small number of studies.

ASSESSMENT OF AND FOR DEEPER LEARNING

Earlier in this chapter we discussed the need for clear learning goals and valid measures of important student outcomes, be they cognitive, intrapersonal, or interpersonal. Thus any discussion of issues related to the use of assessment to promote deeper learning presupposes that concerns about what to assess, how to assess, and how to draw valid inferences from the evidence have been addressed. These concerns must be addressed if assessment is to be useful in supporting the processes of teaching and learning. In this section we focus on issues related to how assessment can function in educational settings to accomplish the goal of supporting and promoting deeper learning.

Since its beginning, educational testing has been viewed as a tool for improving teaching and learning (see, for example, Thorndike, 1918), but perspectives on the ways that it can best support such improvement have expanded in recent years. Historically the focus has been on assessments of learning—the so-called summative assessments—and on the data they can provide to support instructional planning and decision making. More recently, assessment for learning—the so-called formative assessment—has been the subject of an explosion of interest, spurred largely by Black and Wiliam’s 1998 landmark review showing impressive effects of formative assessment on student learning, particularly for low-ability students. A more recent meta-analysis of studies of formative assessment showed more modest, but still significant, effects on learning (Kingston and Nash, 2011).

The formative assessment concept emphasizes the dynamic process of using assessment evidence to continually improve student learning, while summative assessment focuses on development and implementation of an assessment instrument to measure what a student has learned up to a particular point in time (National Research Council, 2001; Shepard, 2005; Heritage, 2010).
Both types of assessment have a role in classroom instruction and in the assessment of deeper learning and 21st century skills, as described below. (The role of accountability testing in the development of these skills is treated in Chapter 7.)

Assessments of Learning

Assessments of learning look back over a period of time (a unit, a semester, a year, multiple years) in order to measure and make judgments about what students have learned and about how well programs and strategies are working—as well as how they can be improved. Assessments of learning often serve as the starting point for the design of instruction and teaching because they make explicit for both teachers and students what is

expected and they provide benchmarks against which success or progress can be judged. For the purpose of instruction aimed at deeper learning and development of 21st century skills, it is essential that such measures (1) fully represent the targeted skills and knowledge and a model of their development; (2) be fair in enabling students to show what they know; and (3) provide reliable, unbiased, and generalizable inferences about student competence (Linn, Baker, and Dunbar, 1991; American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1999). In other words, the intended learning goals, along with their development, the assessment observations, and the interpretative framework (National Research Council, 2001) must be justified and fully synchronized. When this is the case, the results for individual students can be useful for grading and placing students, for initial diagnoses of learning needs, and, in the case of students who are academically oriented, for motivating performance. Aggregated at the class, school, or higher levels, results may help in the identification of new curriculum and promising practices as well as in the assessment of teaching strategies and the evaluation of personnel and institutions.

Assessment for Learning: Formative Assessment

In contrast to assessments of learning that look backward over what has been learned, assessments for learning—formative assessments—chart the road forward by diagnosing where students are relative to learning goals and by making it possible to take immediate action to close any gaps (see Sadler, 1989). As defined by Black and Wiliam (1998), formative assessment involves both understanding and immediately responding to students’ learning status. In other words, it involves both diagnosis and actions to accelerate student progress toward identified goals.
Such actions may be teacher directed and coordinated with a hypothesized model of learning. Actions could include: teachers asking questions to probe, diagnose, and respond to student understanding; teachers asking students to explain and elaborate their thinking; teachers providing feedback to help students transform their misconceptions and transition to more sophisticated understanding; and teachers analyzing student work and using results to plan and deliver appropriate next steps, for example, an alternate learning activity for students who evidence particular difficulties or misconceptions. But the actions are also student centered and student directed. A hallmark of formative assessment is its emphasis on student efficacy, as students are encouraged to be responsible for their learning, and the classroom is turned into a learning community (Gardner, 2006; Harlen, 2006). To assume that responsibility, students must clearly understand what

learning is expected of them, including its nature and quality. Students receive feedback that helps them to understand and master performance gaps, and they are involved in assessing and responding to their own work and that of their peers (see also Heritage, 2010).

The importance of the teacher’s role in formative assessment was demonstrated by the recent meta-analysis by Kingston and Nash (2011). The authors estimated a weighted mean effect size of 0.20 across the selected studies. However, in those studies investigating the use of formative assessment based on professional development that supported teachers in implementing the strategy, the weighted mean effect size was 0.30.

Formative assessment occurs hand in hand with the classroom teaching and learning process and is an integral component of teaching and learning for transfer. It embodies many of the principles of designing instruction for transfer that were discussed in the previous section of this chapter. For example, formative assessment includes questioning, elaboration, and self-explanation, all of which have been shown to improve transfer. Formative assessment can provide the feedback and guidance that learners need when engaged in challenging tasks. Furthermore, by making learning goals explicit, by engaging students in self- and peer assessment, by involving students in a learning community, and by demonstrating student efficacy, formative assessment can promote students as agents in their own learning, which can increase student motivation, autonomy, and metacognition as well as collaboration and academic learning (Gardner, 2006; Shepard, 2006). Thus, formative assessment is conducive to—and may provide direct support for—the development of transferable cognitive, intrapersonal, and interpersonal skills.
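The weighted mean effect sizes cited in the meta-analyses above are typically computed by inverse-variance weighting, in which more precise studies count for more. The sketch below illustrates that standard fixed-effect computation with a 95 percent confidence interval; the per-study effect sizes and variances are hypothetical, invented for illustration, and are not the data analyzed by Kingston and Nash (2011) or Durlak and colleagues.

```python
import math

def weighted_mean_effect(effects, variances):
    """Fixed-effect inverse-variance weighted mean effect size and 95% CI."""
    weights = [1.0 / v for v in variances]           # more precise studies weigh more
    total_w = sum(weights)
    mean = sum(w * g for w, g in zip(weights, effects)) / total_w
    se = math.sqrt(1.0 / total_w)                    # standard error of the pooled mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical per-study standardized mean differences (e.g., Hedges' g)
# and their sampling variances -- illustrative values only.
effects = [0.15, 0.25, 0.30, 0.10, 0.22]
variances = [0.010, 0.020, 0.015, 0.008, 0.012]

mean, (lo, hi) = weighted_mean_effect(effects, variances)
print(f"weighted mean g = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# A pooled effect is judged statistically significant when the
# confidence interval excludes zero, as in the subgroup comparisons
# of programs that did and did not follow the recommended practices.
```

The same machinery underlies subgroup contrasts such as 0.20 overall versus 0.30 with professional development: each subgroup's studies are pooled separately and the resulting intervals compared.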
A few examples suggest that teachers and students can enhance deeper learning by drawing on the evidence of their learning progress and needs provided by the formative assessment embedded within simulations and games. One such example, SimScientists, was described above. Another example, called Packet Tracer, was developed for use in the Cisco Networking Academy, which helps prepare networking professionals by providing online curricula and assessments to public and private education and training institutions throughout the world. In the early years of the networking academy, assessments were conducted by instructors and consisted of either hands-on exams with real networking equipment or else multiple-choice exams. Now Packet Tracer has been integrated into the online curricula, allowing instructors and students to construct their own activities and students to explore problems on their own. Student-initiated assessments are embedded in the curriculum and include quizzes, interactive activities, and “challenge labs”—structured activities focusing on specific curriculum goals, such as integration of routers within a computer network. Students use the results of these assessments to guide their online learning activities

and to improve their performance. A student may, with instructor authorization, access and re-access an assessment repeatedly.

Formative and Summative Assessment: Classroom Systems of Assessment

Assessments of learning and for learning (summative and formative assessments) can work together in a coherent system to support the development of cognitive, intrapersonal, and interpersonal skills. If they are to do so, however, the assessments must be in sync with each other and with the model of how learning develops. Figure 6-2 shows the interrelationships among components of such a model. The model features explicit learning goals for targeted cognitive, intrapersonal, and interpersonal competencies and poses a sequential and integrated approach to their development, as supported by the literature (see, for example, Durlak and Weissberg, 2011). In Figure 6-2, the benchmarks represent critical juncture points in progress toward the ultimate goals, while the formative assessment represents the interactive process between the teachers and students and continuous data that facilitate student progress toward the junctures and ultimate goals.

FIGURE 6-2 A coherent assessment system. SOURCE: Adapted from Herman (2010a).
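One way to make the relationships in Figure 6-2 concrete is to model the system's components directly: explicit learning goals, benchmark (summative) checkpoints at critical junctures, and a continuous formative diagnose-and-respond loop between them. The sketch below is purely illustrative; every class, field, and decision rule is a hypothetical simplification, not a structure specified in the report or in Herman (2010a).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningGoal:
    domain: str          # "cognitive", "intrapersonal", or "interpersonal"
    description: str

@dataclass
class Benchmark:
    """Summative checkpoint at a critical juncture toward a goal."""
    goal: LearningGoal
    criterion: str

@dataclass
class FormativeCycle:
    """Continuous diagnose-feedback-adjust loop between benchmarks."""
    goal: LearningGoal
    evidence: List[str] = field(default_factory=list)

    def diagnose(self, observation: str) -> None:
        # Gather evidence of how student learning is progressing.
        self.evidence.append(observation)

    def next_step(self) -> str:
        # A real system would consult the model of learning to choose
        # an activity; this placeholder rule just branches on evidence.
        return "reteach" if self.evidence else "proceed"

goal = LearningGoal("interpersonal", "collaborate effectively in teams")
benchmark = Benchmark(goal, "leads and follows appropriately in group tasks")
cycle = FormativeCycle(goal)
cycle.diagnose("student dominates group discussion")
print(cycle.next_step())
```

The point of the structure is the one the chapter makes: benchmarks and formative cycles are both anchored to the same explicit goals, so summative and formative evidence stay in sync with the model of how learning develops.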

Formative Assessment: Teacher Roles and Practices

The coherent assessment system depicted in Figure 6-2 depends on formative assessment to facilitate student progress. Herman has described formative assessment as follows (2010b, p. 74):

Rather than imparting knowledge in a transmission-oriented process, in formative assessment teachers guide students toward significant learning goals and actively engage students as assessors of themselves and their peers. Formative assessment occurs when teachers make their learning goals and success criteria explicit for students, gather evidence of how student learning is progressing, partner with students in a process of reciprocal feedback, and engage the classroom as a community to improve students’ learning. The social context of learning is fundamental to the process as is the need for classroom culture and norms that support active learning communities—for example, shared language and understanding of expected performance; relationships of trust and respect; shared responsibility for and power in the learning process. Theorists (Munns and Woodward, 2006) observe that enacting a meaningful process of formative assessment influences what students perceive as valued knowledge, who can learn, and who controls and is valued in the learning process.

Yet formative assessment itself involves a change in instructional practice: It is not a regular part of most teachers’ practice, and teachers’ pedagogical content knowledge may be an impediment to its realization (Heritage et al., 2009; Herman, Osmundson, and Silver, 2010). These and other challenges related to teaching and assessing 21st century competencies are discussed in Chapter 7. In that chapter, we reach conclusions about the challenges and offer recommendations to overcome them.
CONCLUSIONS AND RECOMMENDATIONS

The research literature on teaching and assessment of 21st century competencies has examined a plethora of variously defined cognitive, intrapersonal, and interpersonal competencies. Although the lack of uniform definitions makes it difficult to identify and delineate the desired learning outcomes of an educational intervention—an essential first step toward measuring effectiveness—emerging evidence demonstrates that it is possible to develop transferable competencies.

• Conclusion: Although the absence of common definitions and quality measures poses a challenge to research, emerging evidence indicates that cognitive, intrapersonal, and interpersonal competencies can be taught and learned in ways that promote transfer.

The emerging evidence on teaching and learning of cognitive, intrapersonal, and interpersonal competencies builds on a larger body of evidence related to teaching for transfer. Researchers have examined the question of how to design instruction for transfer for more than a century. In recent decades, advances in the research have begun to provide evidence-based answers to this question. Although this research has focused on acquisition of cognitive competencies, it indicates that the process of learning for transfer involves the interplay of cognitive, intrapersonal, and interpersonal competencies, as reflected in our recommendations for design of instruction and teaching methods:

• Recommendation 3: Designers and developers of instruction targeted at deeper learning and development of transferable 21st century competencies should begin with clearly delineated learning goals and a model of how learning is expected to develop, along with assessments to measure student progress toward and attainment of the goals. Such instruction can and should begin with the earliest grades and be sustained throughout students’ K-12 careers.

• Recommendation 4: Funding agencies should support the development of curriculum and instructional programs that include research-based teaching methods, such as:

o Using multiple and varied representations of concepts and tasks, such as diagrams, numerical and mathematical representations, and simulations, combined with activities and guidance that support mapping across the varied representations.

o Encouraging elaboration, questioning, and explanation—for example, prompting students who are reading a history text to think about the author’s intent and/or to explain specific information and arguments as they read—either silently to themselves or to others.
o Engaging learners in challenging tasks, while also supporting them with guidance, feedback, and encouragement to reflect on their own learning processes and the status of their understanding.

o Teaching with examples and cases, such as modeling step-by-step how students can carry out a procedure to solve a problem and using sets of worked examples.

o Priming student motivation by connecting topics to students’ personal lives and interests, engaging students in collaborative problem solving, and drawing attention to the knowledge and skills students are developing, rather than grades or scores.

o Using formative assessment to: (a) make learning goals clear to students; (b) continuously monitor, provide feedback, and respond to students’ learning progress; and (c) involve students in self- and peer assessment.

The ability to solve complex problems and metacognition are important cognitive and intrapersonal competencies that are often included in lists of 21st century skills. For instruction aimed at development of problem-solving and metacognitive competencies, we recommend:

• Recommendation 5: Designers and developers of curriculum, instruction, and assessment in problem solving and metacognition should use modeling and feedback techniques that highlight the processes of thinking rather than focusing exclusively on the products of thinking. Problem-solving and metacognitive competencies should be taught and assessed within a specific discipline or topic area rather than as a stand-alone course. Teaching and learning of problem-solving and metacognitive competencies need not wait until all of the related component competencies have achieved fluency. Finally, sustained instruction and effort are necessary to develop expertise in problem solving and metacognition; there is simply no way to achieve competence without time, effort, motivation, and informative feedback.

Most of the available research on design and implementation of instruction for transfer has focused on the cognitive domain. We compared the instructional design principles and research-based teaching methods emerging from this research with the instructional design principles and research-based teaching methods that are beginning to emerge from the smaller body of research focusing on development of intrapersonal and interpersonal skills, identifying some areas of overlap and similarities.
• Conclusion: The instructional features listed above, shown by research to support the acquisition of cognitive competencies that transfer, could plausibly be applied to the design and implementation of instruction that would support the acquisition of transferable intrapersonal and interpersonal competencies.

The many gaps and weaknesses in the research reviewed here, particularly the lack of common definitions and measures, and the limited research in the intrapersonal and interpersonal domains limit our understanding of how to teach for transfer across the three domains.

• Recommendation 6: Foundations and federal agencies should support research programs designed to fill gaps in the evidence base on teaching and assessment for deeper learning and transfer. One important target for future research is how to design instruction and assessment for transfer in the intrapersonal and interpersonal domains. Investigators should examine whether, and to what extent, instructional design principles and methods shown to increase transfer in the cognitive domain are applicable to instruction targeted to the development of intrapersonal and interpersonal competencies. Such programs of research would benefit from efforts to specify more uniform, clearly defined constructs and to produce associated measures of cognitive, intrapersonal, and interpersonal competencies.
