A critical aspect of education is assessment, which provides feedback on student learning to all parts of the education ecosystem. However, in this time of transition and blended learning environments, assessment used for the purpose of accountability is likely to be less useful and less equitable. For example, disparities in the resources available for student learning are likely to have increased.1 In addition, for many purposes and uses, such as monitoring students’ achievement of state learning standards, assessments need to be administered under standardized conditions and cover the same content and skills, which is not currently feasible. In this situation, it is more helpful to students to focus on continuous improvement through ongoing formative assessment and feedback. When students’ learning experiences center on explaining phenomena and designing solutions, embedded formative assessment becomes a natural way to support learning progress.2
Focusing on continuous improvement is also important for the entire education system at this time: educators are not expected to immediately implement every adjustment to instruction and assessment needed to respond to the shifting learning environments brought on by the COVID-19 pandemic. Implementation during the pandemic will be iterative and will require careful monitoring so that adjustments can be made along the way, in a manner very similar to that used to support ongoing student learning.
The guiding questions in this chapter are intended to help education practitioners consider how this volume’s four foundational principles—in particular, Principles 1 and 4—can be applied to planning for equitable and supportive formative assessment for students and continual improvements to education systems.
Although students may not have had opportunities to learn all of the material planned for spring 2020 due to disruptions to the school schedule, education experts throughout the country are recommending that the focus not be on diagnostic assessments in the beginning of the 2020–2021 school year.3 Instead, instruction can focus on grade-level-appropriate content along with ongoing monitoring of each student’s real-time needs for accessing the current content. As students engage in learning activities, teachers can look for evidence that students have the background knowledge and skills they need to engage with the grade-level material. Using effective formative assessment, they can determine students’ individual, immediate needs in each lesson and instructional unit to help them continue to build along learning progressions4 toward the targeted learning in each of the three dimensions of learning: disciplinary core ideas (DCIs), science and engineering practices (SEPs), and crosscutting concepts (CCCs).
These kinds of determinations are not new; they are standard practice for effective teaching and learning. Students come to the classroom every year with different levels of understanding from the previous year’s instruction for a variety of reasons. This year, unfinished learning is likely the focus of more school conversations as educators try to ensure students do not have “gaps” in their knowledge from any missing instructional time in spring 2020. However, it may not be necessary to try to address all unfinished learning right away when missing concepts and practices are not immediately needed for grade-level content this year. As detailed in Chapter 5, educators can plan together over the next one or more years to ensure that students have opportunities to build all of the necessary foundational knowledge and skills to support their future learning. In addition, a focus on gaps—on what students lack—is less supportive of student learning than a focus on what understandings students bring to class. All students bring to the learning environment a unique set of skills and understandings. As teachers get to know their students, they can more clearly see how to build on each student’s foundational knowledge and skills to support their continued learning.
3 See Council of Chief State School Officers, Restart & Recovery: Assessment Considerations for Fall 2020. Available: https://ccsso.org/sites/default/files/2020-07/Assessment%20Considerations%20for%20Fall%202020.pdf; also see Lake, R., and Olson, L., Learning as We Go: Principles for Effective Assessment During the COVID-19 Pandemic. Available: https://www.crpe.org/sites/default/files/final_diagnostics_brief_2020.pdf.
4 See Shepard, L.A., Diaz-Bilello, E., Penuel, W.R., and Marion, S.F. (2020). Classroom Assessment Principles to Support Teaching and Learning. Boulder, CO: Center for Assessment, Design, Research and Evaluation, University of Colorado.
Assessment is often thought of as a process that educators use to obtain a snapshot of student proficiencies at a given point in time. Currently, however, the focus can be on a high-quality classroom assessment system (whether the class is remote or in person) that prioritizes formative assessment to support continuous improvement. As defined by the Council of Chief State School Officers:5
[Formative assessment is] “a planned, ongoing process used by all students and teachers during learning and teaching to elicit and use evidence of student learning to improve student understanding of intended disciplinary learning outcomes and support students to become self-directed learners.”
Although formative assessment processes are foundational components of research-based teaching and learning, using them explicitly is still new to many teachers and requires deep pedagogical and assessment skills. In addition, when formative assessment is discussed, it is generally with a focus on information for teacher use rather than on ways both teachers and students can use that information to adjust teaching and learning. Ideally, student artifacts can help both teachers and students identify where each student currently is along a continuum of understanding and proficiency for each of the three dimensions—SEPs, CCCs, and DCIs—and how well the student is able to integrate them, thereby clarifying the next steps each student needs in order to progress along those continua.6 Used in this way, formative assessment can be a significant driver for student learning.7
To monitor and support student learning, especially in remote learning environments, it is important to collect evidence of student thinking, not just whether students know the right answer or have memorized the correct words. To focus only on the answer or the words is to focus primarily on outcomes related to DCIs rather than on all three dimensions.8 One beneficial outcome of remote instruction is the potential for increased recognition of the value to student learning of using the three dimensions for assessment purposes. For example, in one district before the pandemic, teachers had been using assessments based on memorization of factual content. With remote instruction, they became concerned about academic integrity, realizing that it was too easy for students to find the answers to their usual assessments online. The teachers became motivated to try new ways to monitor students’ learning, and they realized the benefits of shifting to new, three-dimensional instructional materials that would support students to learn in deep ways that could be assessed authentically.
6 For more information, see Guide to Implementing the Next Generation Science Standards. Available: https://www.nap.edu/read/18802/chapter/5#32; also see Shepard, L.A., Diaz-Bilello, E., Penuel, W.R., and Marion, S.F. (2020). Classroom Assessment Principles to Support Teaching and Learning. Boulder, CO: Center for Assessment, Design, Research and Evaluation, University of Colorado.
7 For more information, see Design, Selection, and Implementation of Instructional Materials for the Next Generation Science Standards: Proceedings of a Workshop. Available: https://www.nap.edu/read/25001/chapter/4#30.
Performance tasks, research projects, multimedia portfolio curation, and other student-generated artifacts can (1) offer students a range of ways to demonstrate their thinking; (2) provide information about student thinking that teachers can use to inform instructional decisions, including the potential need for individualized supports;9 and (3) give students concrete ways to reflect on and track their own learning over time.10 When student artifacts are collected remotely rather than through classroom performances, it may even be easier to document student progress and for students, families, and teachers to all monitor the progress together.
Some tools, such as the OpenSciEd exit tickets,11 collect information on students’ affective responses, allowing teachers to monitor how students are feeling and to identify students who might need extra emotional support. Student writing and discourse can also provide evidence about student thinking, and real-time discourse gives teachers opportunities to probe more deeply for information or to gently add guiding questions that help students challenge their own thinking.12 As discussed in Chapter 4, to ensure this process is equitable and culturally responsive, classes will need to create explicit norms and guidelines for maintaining respect and understanding different students’ perspectives and patterns of participation.13
To support formative assessment processes, teachers need to first ensure that their students can share their thinking in equitable ways. Box 6-1 describes some techniques a teacher used with her students to help them share their learning artifacts after their class transitioned to remote instruction in spring 2020.
This story illustrates that teachers still have many options to monitor student progress in remote environments. Some of the ways the class adapted to remote learning, such as sharing some thinking over recorded videos, produced artifacts of student learning that would allow more detailed information to be tracked over time compared to teacher notes from students’ class discussions.
Whichever technological tool is used to gather student artifacts, it is important that the modalities used for student responses (e.g., writing, speaking, drawing) be flexible and adapted to student needs.14 For example, when the goal is to monitor student understanding of ideas and not the actual form of expression, students could be given the choice of different modalities to describe their thinking, including orally, through gestures in a video, or by taking pictures of their drawings. In particular, tasks can be designed with scaffolds to support students from bi- and multilingual backgrounds, including reducing linguistic complexity, making evaluation criteria explicit, and providing alternative ways for students to express their ideas.15 In addition, teachers can make sure all students are supported to feel included in class discussions.
Teachers and others designing assessments need to be clear about the three-dimensional learning targets they want to assess, the types of evidence that would provide insight into students’ progress toward these targets, and the types of student work or observations that would yield that evidence. As evidence is collected, teachers can appraise progress and identify areas of difficulty for students, such as difficulty applying a particular crosscutting concept to make sense of a phenomenon or arguing effectively from evidence. Teachers can use this information to plan how to support students in building those proficiencies.16
As discussed in Chapter 4 in the context of instructional routines, it is especially important that students understand what is expected of them and what success looks like when they are working more independently in remote, blended, or hybrid environments. Rubrics are important tools to help students assess and monitor their own learning along a progression of performance and therefore help build their agency in learning. For this reason, it can be beneficial to use student-friendly rubric language to describe the levels of proficiency expected in any performance for DCIs, SEPs, and CCCs, as well as for the three dimensions integrated. It is also important that the rubric be tailored to the specific lesson context. For example, generic rubrics about modeling are less helpful than grade-level-specific indicators of student modeling of one particular phenomenon.
15 See Fine, C., and Furtak, E. (2020). The SAEBL checklist: Science classroom assessments that work for emergent bilingual learners. Science Teacher (Normal, Ill.), 87, 38–48.
If students become partners in helping to create the rubrics, they can get a clearer picture of what success looks like and how it is attainable.17 Checklists can also help students understand what to include in their responses. For example, checklists could show students that they are expected to include both visible and invisible elements in their models (where appropriate to the grade level), thereby supporting students’ developing ability to create these models on their own and to think about parts of systems that are not visible. It is to be expected that elementary school students will need more support for monitoring their own learning than middle school or high school students. More details about formative assessment that supports the goals of three-dimensional science and engineering learning are described in Developing Assessments for the Next Generation Science Standards and Seeing Students Learn Science.18
How can students be supported to give and receive constructive feedback from both their peers and their teachers?
Receiving meaningful feedback is a powerful way for students to progress in their learning, and it is an essential part of integrating effective formative assessment practices into teaching and learning.19 Incorporating peer feedback into lessons has the added benefit of reducing the sense of isolation that students may feel in remote learning situations. Giving and receiving ongoing constructive feedback can help encourage students to persist in their learning in all settings. In addition, reflecting on their performance and the feedback they receive and then deciding how to incorporate it helps promote student agency.
Giving feedback is not the same as determining grades. While students and teachers are adjusting to new learning environments, building relationships, and developing a sense of comfort with the growth opportunities that formative assessment affords, it is important to help students feel safe sharing their evolving thinking and their questions along the way. They need to know that they will not be penalized for not getting the “correct” answers right away or for sharing ideas they later revise. In addition, when students are learning remotely with disparate access to resources, grading may be less equitable.20 At least early in the 2020–2021 school year, it may be helpful to focus on the following types of actionable feedback suggested by the Council of State Science Supervisors in collaboration with the National Science Teaching Association and the National Science Education Leadership Association:21
- one-on-one conversations or written feedback that highlight positive aspects of student performance, with a mechanism for students to reflect and respond;
- goal-oriented reflections on possible next steps;
- opportunities to discuss challenges students are facing and ways to move forward; and
- constructive identification of areas for growth and suggestions for addressing them, perhaps focusing on one actionable area at a time.
18 For more information, see Developing Assessments for the Next Generation Science Standards. Available: https://www.nap.edu/catalog/18409/developing-assessments-for-the-next-generation-science-standards; also see Seeing Students Learn Science: Integrating Assessment and Instruction in the Classroom. Available: https://www.nap.edu/catalog/23548/seeing-students-learn-science-integrating-assessment-and-instruction-in-the.
Box 6-2 describes how a teacher made her feedback to students more personal and approachable through the use of videos.
The students in the story appreciated and benefited from video as a medium, which can feel more personal than written communication. This story also highlights the importance of communicating feedback in a caring, humanizing way so that students feel personally valued and supported.
Teachers need continued professional learning experiences and ongoing support to increase their facility with using and scaffolding different types of feedback across different instructional environments, and to identify tools that can help in this process. For example, teachers can support peer feedback processes that promote critical thinking, colearning, and student growth; these might take the form of small breakout room discussions or the use of tools such as Jamboard (see Figure 4.2) that allow students to give one another feedback and that promote student reflection and changes in thinking. Peer feedback can be a strong source of motivation for students and may help build a sense of collaborative learning among everyone in a class.22
Box 6-3 describes how a teacher made use of peer conversations to help push students’ thinking, allowing students to clarify their ideas without needing direct intervention from the teacher. Although this activity took place before the pandemic, the tools and ideas can be applied to the 2020–2021 school year.
As shown in the story, peer feedback and discussion can quickly prompt student learning. Small group conversations, in this case through video conference breakout rooms, allowed each student’s ideas to be considered by peers in more detail than would be possible in a full class discussion and therefore allowed each student time to talk through and reconsider their own ideas.
How can feedback from families and other stakeholders be gathered and used to inform ongoing improvements?
As understanding evolves about how to keep students and educators safe through the COVID-19 pandemic, decisions about instructional models and schedules will likely change. Ongoing modifications will need to be made to instructional plans. These modifications directly affect teachers’ practice and will affect families, so school district decisions about changes need to be made in partnership with teachers, students, families, and community partners. Feedback from these different stakeholder groups, who have a vested interest in the outcomes of student education, will improve decision-making processes and help ensure that they address the specific needs of the community, as well as of students.23
In addition, all education stakeholders, including teachers, students, and families, benefit from staying informed about how new curriculum, instruction, and assessment decisions are being implemented and their anticipated effects.24 Feedback from students, teachers, parents or guardians, and education leaders can be collected and used to make ongoing adjustments to the curricular program and education services as needed.25 This feedback can include both needs and assets related to academics, physical and mental health, and socioeconomic conditions. One example comes from the Oklahoma State Department of Education, which held virtual information-gathering meetings with educators from across the state in spring 2020 to find out how things were going soon after school closures. Another example comes from the Colorado Department of Education, which conducted a needs assessment to determine the best ways to support educators and students.26 In addition, short periodic surveys could be used to gather ongoing information
from a wide variety of stakeholders, including teachers, students, families, caregivers, and community partners.27
As with feedback for student learning purposes, feedback gathered about the education system is more helpful when it is ongoing, actionable, and timely—not lagging by a year or more.28 This means that regularly collecting data on a small number of priority metrics, with plans for how to make changes based on those data, is much more useful than collecting data on hundreds of metrics with no plans for how to use the results. To be useful for identifying system needs and addressing longstanding educational inequities, enough data need to be collected, whenever possible, so that they can be disaggregated by race, ethnicity, socioeconomic status, gender identity, sexual orientation, English learner status, immigration status, and ability status.29 In addition, because the needs of elementary, middle, and high school students, families, and teachers differ, feedback should be gathered from each grade band.
When collecting the data, it is important to consider what metrics are being used, because the issues measured will likely be those that receive attention. For example, monitoring the number of families engaged in planning processes could lead to an emphasis on engaging families. Monitoring the number of students with special needs who do not have access to high-quality science and engineering instruction could lead to an emphasis on reducing this number.30 Monitoring the time spent on science and engineering in elementary school classes could lead to an increase in this time. For some of these metrics, such as the quality of science and engineering instruction accessible to all students, it may be necessary to observe virtual classes and review teachers’ lesson plans and student work.31 Additional ideas for setting up systems of ongoing monitoring and feedback can be found in Developing Assessments for the Next Generation Science Standards.32
As data are collected, they need to be used similarly to formative assessment—in support of the continuous growth and improvement of students,
educators, and systems.33 Implementation throughout the pandemic period is expected to be slow, and improvements are expected to be iterative with each round of implementation and feedback.34 Maintaining communications with families and educators about a realistic timeline of expectations and ongoing progress toward goals will be important.35
- Continuously monitor student needs for engaging in grade-level learning rather than conducting diagnostic assessments at the beginning of the year.
- Use formative assessment to look for evidence that students have the background knowledge and skills they need to engage with the current grade-level learning.
- Use “classroom”-level assessment to support continuous improvement in student learning and provide supports for students to give and receive feedback from both peers and teachers.
- Support teachers in collecting ongoing evidence of student progression in all three dimensions—SEPs, CCCs, and DCIs—and in their use together.
- Encourage teachers to focus on formative assessment practices, including providing actionable feedback, that highlight continuous improvement rather than summative grading.
- Develop mechanisms for monitoring and communicating to families about whole-class, grade-level, and school-level progress.
- Carefully choose and communicate the priority metrics that will be used for monitoring systemwide progress of science and engineering education.
- Collect ongoing feedback from students, families, teachers, and community partners on their needs and how implementation is going during the pandemic.