simulation-based curriculum unit that includes a sequence of assessments designed to measure student understanding of ecosystems (Quellmalz, Timms, and Buckley, 2010). The SimScientists summative assessment is designed to measure middle school students’ understanding of ecosystems and scientific inquiry. Students are presented with the overarching task of describing an Australian grassland ecosystem for an interpretive center and respond by drawing food webs and conducting investigations with the simulation. Finally, they are asked to present their findings about the grasslands ecosystem.

SimScientists also includes elements focusing on transfer of learning, as described in a previous NRC report (National Research Council, 2011b, p. 94):

To assess transfer of learning, the curriculum unit engages students with a companion simulation focusing on a different ecosystem (a mountain lake). Formative assessment tasks embedded in both simulations identify the types of errors individual students make, and the system follows up with graduated feedback and coaching. The levels of feedback and coaching progress from notifying the student that an error has occurred and asking him or her to try again, to showing the results of investigations that met the specifications.
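The graduated progression described in the quoted passage can be pictured as a simple escalation policy: each repeated error on a task raises the level of support, from a bare retry prompt up to a worked demonstration. The sketch below is purely illustrative; the class and function names, and the intermediate "hint" level, are assumptions, not details of the actual SimScientists system.

```python
from enum import IntEnum

class FeedbackLevel(IntEnum):
    """Illustrative support levels, ordered from least to most assistance."""
    NOTIFY_AND_RETRY = 1    # flag the error and ask the student to try again
    HINT = 2                # assumed intermediate level pointing toward the concept
    SHOW_WORKED_RESULT = 3  # show results of an investigation that met the specifications

def feedback_for_attempt(error_count: int) -> FeedbackLevel:
    """Map the number of consecutive errors (>= 1) on a task to a support level,
    capping at the most supportive level."""
    return FeedbackLevel(min(error_count, len(FeedbackLevel)))
```

Under this sketch, a first error yields only a retry prompt, while a third or later error yields the worked result, mirroring the "notify ... to showing the results" progression in the passage.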

Students use this targeted, individual feedback to engage with the tasks in ways that improve their performance. As noted in Chapter 4, practice is essential for deeper learning, but knowledge is acquired much more rapidly if learners receive information about the correctness of their results and the nature of their mistakes.

Combining expertise in content, measurement, learning, and technology, the teams behind these assessment examples employ evidence-centered design and are developing full validity arguments. The examples reflect the emerging consensus that problem solving must be assessed, as well as developed, within specific content domains (as discussed in the previous chapter; also see National Research Council, 2011a). In contrast, many other current technology-based projects intended to improve student learning lack a firm assessment or measurement basis (National Research Council, 2011b).

Project- and problem-based learning and performance assessments that require students to engage with novel, authentic problems and to create complex, extended responses in a variety of media would seem to be prime vehicles for measuring important cognitive competencies that may transfer. What remains to be seen, however, is whether these assessments are valid for their intended uses and whether the reliability of scoring and the generalizability of results can reach acceptable levels of rigor, thereby avoiding the validity and reliability problems that beset complex performance assessments developed in the past (e.g., Shavelson, Baxter, and Gao, 1993; Linn et al., 1995).

Copyright © National Academy of Sciences. All rights reserved.