integration of these advances for assessment in the form of an “assessment triangle.” This symbol represents the critical idea that assessment is a highly principled process of reasoning from evidence in which one attempts to infer what students know from their responses to carefully constructed and selected sets of tasks or performances. One corner of the triangle represents cognition (theory and data about how students learn), the second corner represents observations (the tasks students might perform to demonstrate their learning), and the third corner represents interpretation (the methods used to draw inferences from the observations). The study committee emphasized that the three elements of the triangle must be closely interrelated for assessment to be valid and informative.
Mislevy et al. (2003) extended this model in a framework known as evidence-centered design (ECD). This framework relates (1) the learning goals, as specified in a model of student cognition; (2) an evidence model specifying the student responses or performances that would represent the desired learning outcomes; and (3) a task model with specific types of questions or tasks designed to elicit the behaviors or performances identified in the evidence model (Messick, 1994). The assessment triangle and ECD frameworks can be used in a variety of ways, including evaluating the quality and validity of existing assessments used to appraise student learning for research or instructional purposes, and guiding the design of new assessments. Examples of both applications are described below.
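Although ECD is a conceptual framework rather than a piece of software, the coordination it demands among the three models can be sketched as a simple data structure and consistency check. The sketch below is purely illustrative: the class and field names are assumptions for exposition, not part of any real ECD toolkit, and the check flags learning goals for which no evidence-linked task exists.

```python
from dataclasses import dataclass

# Illustrative sketch of the three ECD components as plain data structures.
# All names here are hypothetical, not drawn from any published ECD implementation.

@dataclass
class StudentModel:
    """Learning goals: the knowledge and skills the assessment targets."""
    learning_goals: list

@dataclass
class EvidenceModel:
    """Maps each learning goal to an observable behavior that would evidence it."""
    observables: dict  # goal -> behavior demonstrating that goal

@dataclass
class TaskModel:
    """Maps each observable behavior to a task designed to elicit it."""
    tasks: dict  # behavior -> task eliciting that behavior

def check_alignment(student, evidence, task):
    """Return the learning goals with no chain goal -> behavior -> task,
    i.e., the points where the three models fall out of alignment."""
    gaps = []
    for goal in student.learning_goals:
        behavior = evidence.observables.get(goal)
        if behavior is None or behavior not in task.tasks:
            gaps.append(goal)
    return gaps

# Toy usage: one goal is fully linked, one has no eliciting task.
student = StudentModel(["control of variables", "data interpretation"])
evidence = EvidenceModel({"control of variables": "isolates one variable per trial"})
task = TaskModel({"isolates one variable per trial": "plant-growth simulation run"})
print(check_alignment(student, evidence, task))  # ['data interpretation']
```

The point of the sketch is only that each component constrains the next: a goal without a specified observable, or an observable without an eliciting task, surfaces immediately as a gap, which is exactly the kind of misalignment the evaluation described below found in practice.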
Quellmalz, Timms, and Schneider (2009) used ECD (see Figure 5-1) as a framework to evaluate assessment practices used in recent research on science simulations. The authors reviewed 79 articles that investigated the use of simulations in grades 6-12 and included reports of measured learning outcomes, drawing on a study by Scalise et al. (2009).
The authors found that the assessments included in the research on student learning outcomes rarely reflected the integrated elements of this framework. The studies tended not to describe in detail the learning outcomes targeted by the simulation (the student model), how tasks were designed to provide evidence related to this model (the task and evidence models), or the approach used to interpret the evidence and reach conclusions about student performance (the interpretation component of the assessment triangle). This inattention to the desired learning outcomes led to poor alignment between the assessment tasks used and the capabilities of simulations. Simulations often engage students in science processes in virtual environments, presenting them with interactive tasks that yield rich streams of data. Although these data could provide evidence of science process skills