ICT Fluency and High Schools: A Workshop Summary

6  Assessments to Measure Students’ Competencies

How can one measure high school students’ skills, capabilities, and grasp of concepts with respect to information and communications technology (ICT)? What assessment tools exist or are under development? What are the challenges of developing large-scale assessments of ICT fluency?

Following the two sessions devoted to exploring the kinds of outcomes needed and specific strategies and approaches for achieving them, this session addressed the measurement of outcomes. Its aim was to acquaint workshop participants with creative practices and tools that have been developed to assess students’ ICT competencies.

Speakers described a variety of assessment vehicles aimed at diverse ages, ranging from relatively narrow applications up to “high-stakes” tests administered on a national scale. Presenters suggested, however, that the underlying principles were generalizable, with the principal differences among tests being degree of difficulty. In other words, innovative ICT tests for college students or professional license applicants could, with relatively modest intellectual adjustment, be useful in designing assessments for high school students as well. One speaker also described an ambitious national program of assessments designed directly for K–12 students.

Presenters were Martin Ripley, head of e-strategy at the Qualifications and Curriculum Authority (QCA) of the United Kingdom; Irvin Katz, a senior
research scientist at the Educational Testing Service’s Center for Assessment, Innovation, and Technology Transfer; and John Behrens, senior manager of assessment development and innovation at Cisco Systems.

INNOVATION AND EXCITEMENT IN THE UNITED KINGDOM

Noting that QCA is the government body responsible for the U.K.’s curricula, standards, examinations, and assessments for all students ages 5–16, Martin Ripley spoke in particular about the national curriculum’s “Key Stage 3,” which covers students in grades 7–9 (ages 11–14). He said that while the testing of these students in the subjects of English, mathematics, and science has been compulsory since 1994, the agency plans to add four new statutory tests—in ICT—in 2008. These tests are high stakes, Ripley said. “The results are published on a school-by-school basis by the national government, and because they are made available to every parent and every school governor in the country, these results are used for school accountability purposes.”

The ICT curriculum for Key Stage 3, he said, has four basic components:

• Finding things out—a student’s ability to select an appropriate source and assess the value of the information thus obtained.
• Developing ideas and making things happen—for example, using ICT to measure, record, respond to, and control events.
• Exchanging and sharing information—using ICT for such purposes as Web publishing or video conferencing.
• Reviewing, modifying, and evaluating work as it progresses.

QCA has set increasingly stringent standards, ranging from level 1 to level 8, on what students are expected to achieve as they progress through their schooling. Ripley said that a 13-year-old should be achieving level 5, which includes such abilities as creating sequences of instructions to control events and exploring the effects of changing the variables in ICT models, among numerous other skills.
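The level-5 expectation mentioned here—creating a sequence of instructions to control events and exploring the effect of changing a variable in an ICT model—might look something like the following toy sketch. The heater scenario, function names, and thresholds are invented for illustration and are not drawn from the QCA tests.

```python
# Illustrative sketch only: a toy "ICT model" of the kind a Key Stage 3
# level-5 student might build. A sequence of instructions measures a value,
# responds to it, and lets the student change a variable to see its effect.

def control_heater(readings, threshold):
    """Return an on/off heater decision for each temperature reading."""
    decisions = []
    for temperature in readings:
        # Instruction sequence: measure, compare, respond to the event.
        decisions.append("on" if temperature < threshold else "off")
    return decisions

readings = [14, 16, 19, 21, 18]

# Exploring the effect of changing the variable `threshold`:
print(control_heater(readings, threshold=18))  # ['on', 'on', 'off', 'off', 'off']
print(control_heater(readings, threshold=20))  # ['on', 'on', 'on', 'off', 'on']
```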
Ripley described the elements of testing that ascertain whether or not the curriculum is yielding student performance at the desired standard levels. Tests are designed, he said, to articulate nine ICT capabilities:

• Searching and selecting—“an aspect of finding things out.”
• Organizing and structuring—“using systemic approaches to finding things out.”
• Developing ideas—“students’ ability to measure and record.”
• Exchanging information—“primarily communication.”
• Reviewing—“for the purposes of improvement.”
• Defining tasks—“students’ ability to characterize the tasks that they are being asked to complete.”
• Control—“using technology to make things happen.”
• Modeling—“using ICT as a tool.”
• Presenting information—“using forms of technology for the purposes of presentation.”

Ripley briefly summarized key components of his current project. Regarding the first component—getting the schools’ infrastructure ready—he noted that there had been an investment to ensure access to computers and broadband.

In describing the actual test program and the kinds of questions posed, Ripley showed several screen grabs of Key Stage 3 ICT tasks that are presented to children. These tests “are a virtual world we have created that mimics very closely a Windows-based desktop environment,” he said. Entirely within its confines—i.e., not through the Internet—students log on to a test section and have access to a variety of applications built for the purposes of that test. Behind an intranet Web browser, for example, “sits a whole plethora of different Websites, on different resources and kinds of information, that the student can gain access to” for use in addressing a given task.

The designed tasks are typically presented to students in an e-mail message to their screens. For example, one task may ask them to go into the virtual world in order to update a hotel leaflet aimed at attracting more guests. This particular task, Ripley noted, is “reasonably scaffolded.
It provides instructions and directions, making clear to students that the leaflet needs to be updated, that it needs a photo of the swimming pool, that the prices should be inserted, and even that they should save their work.” Scoring this task, he said, “is a matter of electronically eavesdropping on how children set about solving the task—whether students use keyboard shortcuts in order to navigate around the virtual world we have created, how they select the photograph, whether they check the validity of the information on prices.”

Another example, less scaffolded, is a partly finished presentation for display in a shopping center. Students are provided with a number of comments on the presentation from different sources, and they are asked to
update it in light of those comments. “In this case we are looking for higher-order thinking from the students,” said Ripley. “We are asking them to make judgments about the comments and to engage in quite a sustained activity—of 15 or maybe 20 minutes—to complete the presentation.” As in the preceding example, and for all other tasks, students are scored against the nine ICT capabilities.

“What we have created is truly innovative, exciting, and robust,” Ripley said. But he acknowledged that “at the moment it is ‘wrong footing’ many teachers and many students.” For example, in a pilot version of this type of test involving 45,000 students, which QCA ran during the summer of 2005, it was evident that “students are really very unfamiliar with this mode of taking a test and that lack of familiarity clearly impacted on student performance.” Many students ran out of time, encountered technical difficulties, or showed underdeveloped technique. The bottom line, he said, is that they had weaknesses in two main areas: modeling and data handling.

Meanwhile, Ripley observed, “there is some depth of concern that ICT performance in our schools has not been as close to the mark as we would like it to be—students’ achievement is good or better in only 54 percent of lessons, and with huge variation from school to school. Though ICT performance continues to improve, it’s still the subject where there is the most underachievement in schools.” The country’s goals are ambitious, however. “A team of about 400 people nationally has responsibility to get 85 percent of our students to reach the level 5 target by 2007,” he said. In pursuit of that objective, the team is focusing especially on the preparation of teachers.

A VIEW FROM THE EDUCATIONAL TESTING SERVICE

Irvin Katz pointed out that his extensive involvement in ICT skills assessment pertained to ICT literacy, rather than ICT fluency, which was the focus of the workshop.
But he suggested that ICT literacy—which he and his colleagues at the Educational Testing Service (ETS) have formally defined as the “ability to use digital technologies, communication tools, and/or networks to access, manage, integrate, evaluate, create, and communicate information ethically and legally in order to function in a knowledge society”—is just a particular subset of ICT fluency. It is basically “information literacy as it is viewed through the use of technology,” Katz said. He also noted that while his work has been geared to higher education, the kinds of assessments that he and his colleagues have developed are
readily transferable, and in both directions: to precollege (K–12) systems, and beyond college to graduate schools and workplaces. The differences between these assessment levels, he said, would largely be a matter of difficulty.

ETS’s overall model of ICT literacy has seven components, which are aligned with the standards of the Association of College and Research Libraries:

• Define an information need.
• Access resources and information.
• Manage information.
• Integrate information through interpretation and synthesis.
• Evaluate resources and information.
• Create new information or adapt existing information.
• Communicate information to particular audiences.

Katz stressed that these components emphasize cognitive skills—intellectual capabilities—rather than the technical skills involved in using particular technologies. For example, students may be presented with a half-completed spreadsheet, given a little time to accommodate themselves to that type of spreadsheet, and then asked to complete it using the resources they have been given. The components also address ethical issues, he said, such as knowledge about citations or the ability to deal effectively with confidential information.

ETS’s testing of these skills has been framed around modest scenarios aimed at “simulating real-world types of activities,” Katz said. “We have taken this big, sustained type of reasoning and broken it up into little pieces. We provide all the information that students would need at that point, and they take it the next step.” He noted as well that this approach “allows us to collect a lot of data on each individual in a relatively short amount of time.”

The current version of the test, Katz reported, is delivered over the Internet and is 75 minutes long. It consists of 14 short tasks, each of which targets one or more components of the ICT literacy model.
There is also a longer, 15-minute task that targets two of the skills and starts to look at integration across skills. He offered several examples, speaking at length on a task “designed to target integration: taking information from a bunch of places, summarizing it, and then drawing some type of conclusion from that summary.” The problem asks students to imagine that they work at an architecture firm that happens to employ a lot of left-handed people and that the boss
wants to find some vendors of left-handed products. Information (in varying degrees of explicitness) on three vendors is provided in three different electronic formats, and students must decide how to extract the specific information needed and then how to compare the products from those different vendors. Finally, students have to rank the vendors and provide a recommendation. In keeping with the purpose of assessing students’ intellectual capabilities, they are scored on how well they figure out what it is they need to compare, how well they pull that information from the available resources, and how well they draw conclusions.

Scoring other tasks might involve, for example, how well students search the Internet or a database, critically evaluate information, decide on what resources are more authoritative, or develop presentations that meet some main objective. In the latter case, Katz said, “key aspects include: Are you meeting the information needs of your audience? And are you supporting whatever main point it is that you want to make?”

Feedback about test performance “is not so much detailed scores,” he said, because those wouldn’t be very reliable. Rather, feedback largely consists of a discussion of the types of strengths and weaknesses that the student has shown, together with some recommendations on the types of tasks he or she might do, working with an instructor, to improve.

Katz concluded by citing five benefits of such assessments of ICT literacy:

• Supporting institutional ICT-literacy initiatives.
• Guiding curricular innovations and evaluating curricular changes.
• Guiding individual learning.
• Providing a “stake in the ground” for what ICT skills look like.
• Providing a model for teachers of possible assignments.
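The reasoning that the left-handed-products task asks for—extracting a comparable figure from differently structured vendor records, ranking the vendors, and recommending one—could be sketched as follows. The data, field names, and extraction logic here are invented for illustration; this is not ETS’s actual task content or scoring code.

```python
# Hypothetical sketch of the integration task's reasoning: three vendor
# records arrive in different formats, and the comparable figure (number
# of left-handed items stocked) must be pulled out of each before the
# vendors can be ranked and one recommended.

vendors = [
    {"name": "VendorA", "catalog": {"left_handed_items": 42}},
    {"name": "VendorB", "summary": "Stocks 17 left-handed items"},
    {"name": "VendorC", "left_handed_items": 65},
]

def extract_count(record):
    """Extract the number of left-handed items, whatever the record format."""
    if "catalog" in record:
        return record["catalog"]["left_handed_items"]
    if "summary" in record:
        # Pull the first integer out of a free-text summary.
        return int(next(tok for tok in record["summary"].split() if tok.isdigit()))
    return record["left_handed_items"]

# Compare across formats, rank, and recommend the top vendor.
ranked = sorted(vendors, key=extract_count, reverse=True)
recommendation = ranked[0]["name"]
print([v["name"] for v in ranked])  # ['VendorC', 'VendorA', 'VendorB']
print("Recommend:", recommendation)  # Recommend: VendorC
```

The point of the sketch is the structure of the task, not the code: the hard part being assessed is deciding what is comparable across heterogeneous sources, which here is compressed into `extract_count`.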
BROAD AND NARROW ASSESSMENT John Behrens noted that because the word “assessment” has different meanings for different people, it is important to make clear what one is referring to under any given set of circumstances. For example, he asked, “Are we talking instructional, formative, summative, or diagnostic assessment?” Behrens said that in his work at the Cisco Networking Academy, the
Cisco Professional Certification Program, and Cisco University, a construct called the Seven Cs—claims, curriculum, collaboration, complexity, computation, communication, and coordination (plus an eighth: contextualization)—defines assessment of outcomes from training programs involving the company’s products and services.

Behrens cited as well a useful delivery model, called evidence-centered design, that has four basic parts: task selection, presentation, evidence observation, and evidence accumulation. In other words, he said, the assessment cycle is “interact, look at what you’ve got back, characterize it, and decide what to do next.”

Out of Cisco’s vast curriculum- and assessment-design work, both internal and external—it has partnered with over 10,000 schools in 150 countries, Behrens said—he offered a variety of examples ranging from pilot projects for testing students to simulation tasks used in professional certification exams. Discussing simulations at some length, he described their basic language at Cisco (Internetworking Operating System), their applications, and the ways in which their results can be presented.

Behrens stressed the utility of a digital format for providing diverse types of feedback both to instructors and students. It can place item-level information into a grade book, for instance, and provide verbal feedback together with scoring rules, he said. Instructors are also given the work products and user logs so that they can score the test themselves, if they wish, or look for other patterns.

“A great thing going on in the world right now, which we are all excited about, is the integration of instruction and assessment,” Behrens said. He described a tool, made available to instructors without charge on the Internet, called Packet Tracer. “It allows students of digital networking systems—by themselves or in groups—to practice planning, design, or troubleshooting,” he said.
“And it can be used for assessment, both formally and informally, in class and out of class.” Such an approach, Behrens maintained, is clearly the wave of the future. “Because the world is becoming more digital, the aids for describing the world are becoming more digital too,” he said. “Assessment people need to use these tools rather than reinvent the wheel every time.”

Eric Klopfer, director of the Teacher Education Program at the Massachusetts Institute of Technology, raised the issue of potential bias in the presentation of such digitally based assessments to students. In assigning tasks by e-mail, for example, some students, depending on the e-mail applications that they customarily use, if any, might be disadvantaged, he said.
Ripley admitted that he and his colleagues in the United Kingdom often feel torn between offering a “reductionist” test (presenting a task so that virtually all students will be familiar with it) and elevating the test (trying to raise the minimum expectation for students). Because his agency’s mission is to design “high-stakes” assessments that offer “a very similar test experience for all students” around the country, it is important to try to minimize any bias in such environments.

Similarly, Katz pointed out that ETS—in using e-mail, for example, in its testing—“tries to come up with something generic” that will likely resemble whatever a student is used to. Moreover, echoing a major point from his talk, he noted that “we are focusing not so much on the technology but on what people are doing with the information that is presented.” Still, he acknowledged, “it is hard to avoid some aspect of bias.”

Ripley added that administration of tests in a digital environment might actually reduce bias. QCA wanted to know “which students, in which categories of need, we would exclude if we went down a digital front—a screen route—for formulating tasks.” So it did a study, completed in 2004. “Our top-line conclusion was that we were enabling more students to access the tasks on screen than if they were on paper,” said Ripley. “So we are certainly not doing more harm than in paper-based tests. And I would argue that we are facilitating engagement, not preventing engagement, with the test.”

Heidi Schweingruber of the National Academies’ Board on Science Education raised the issue of ICT embeddedness in content areas—an often-mentioned idea during the workshop—and noted that it did not seem to be reflected in the discussion of assessments. Ripley acknowledged that so far “this has been a challenge for us.
Our tests look rather like standardized ICT lessons, or business applications of ICT, and not even school-based applications of ICT.” But the omission has been noted, he said, and about two years ago his agency began development work in this area. Colleagues are making progress, he suggested, though “the material is not yet ready to show publicly or to use in any of our test administrations.”