6
Assessments to Measure Students’ Competencies

  • How can one measure high school students’ skills, capabilities, and grasp of concepts with respect to information and communications technology (ICT)?

  • What assessment tools exist or are under development?

  • What are the challenges of developing large-scale assessments of ICT fluency?

Following the two sessions devoted to exploring the kinds of outcomes needed and specific strategies and approaches for achieving them, this session essentially addressed the measurement of outcomes. Its aim was to acquaint workshop participants with creative practices and tools that have been developed to assess students’ ICT competencies.

Speakers described a variety of assessment vehicles aimed at diverse ages, ranging from relatively narrow applications up to “high-stakes” tests administered on a national scale. Presenters suggested, however, that the underlying principles were generalizable, with the principal difference among tests being their degree of difficulty. In other words, innovative ICT tests for college students or professional license applicants could, with relatively modest intellectual adjustment, be useful in designing assessments for high school students as well. One speaker also described an ambitious national program of assessments designed directly for K–12 students.

Presenters were Martin Ripley, head of e-strategy at the Qualifications and Curriculum Authority (QCA) of the United Kingdom; Irvin Katz, a senior research scientist at the Educational Testing Service’s Center for Assessment, Innovation, and Technology Transfer; and John Behrens, senior manager of assessment development and innovation at Cisco Systems.

INNOVATION AND EXCITEMENT IN THE UNITED KINGDOM

Noting that QCA is the government body responsible for the U.K.’s curricula, standards, examinations, and assessments for all students ages 5–16, Martin Ripley spoke in particular about the national curriculum’s “Key Stage 3,” which covers students in grades 7–9 (ages 11–14). He said that while the testing of these students in the subjects of English, mathematics, and science has been compulsory since 1994, the agency plans to add a fourth statutory test—in ICT—in 2008. These tests are high stakes, Ripley said. “The results are published on a school-by-school basis by the national government, and because they are made available to every parent and every school governor in the country, these results are used for school accountability purposes.”

The ICT curriculum for Key Stage 3, he said, has four basic components:

  1. Finding things out—a student’s ability to select an appropriate source and assess the value of the information thus obtained.

  2. Developing ideas and making things happen—for example, using ICT to measure, record, respond to, and control events.

  3. Exchanging and sharing information—using ICT for such purposes as Web publishing or video conferencing.

  4. Reviewing, modifying, and evaluating work as it progresses.

QCA has set increasingly stringent standards, ranging from level 1 to level 8, on what students are expected to achieve as they progress through their schooling. Ripley said that a 13-year-old should be achieving level 5, which includes such abilities as creating sequences of instructions to control events and exploring the effects of changing the variables in ICT models, among numerous other skills.

Ripley described the elements of testing that ascertain whether or not the curriculum is yielding student performance at the desired standard levels. Tests are designed, he said, to articulate nine ICT capabilities:

  1. Searching and selecting—“an aspect of finding things out.”

  2. Organizing and structuring—“using systemic approaches to finding things out.”

  3. Developing ideas—“students’ ability to measure and record.”

  4. Exchanging information—“primarily communication.”

  5. Reviewing—“for the purposes of improvement.”

  6. Defining tasks—“students’ ability to characterize the tasks that they are being asked to complete.”

  7. Control—“using technology to make things happen.”

  8. Modeling—“using ICT as a tool.”

  9. Presenting information—“using forms of technology for the purposes of presentation.”

Ripley briefly summarized key components of his current project. Regarding the first component—getting the schools’ infrastructure ready—he noted that there had been an investment to ensure access to computers and broadband.

In describing the actual test program and the kinds of questions posed, Ripley showed several screen grabs of Key Stage 3 ICT tasks that are presented to children. These tests “are a virtual world we have created that mimics very closely a Windows-based desktop environment,” he said. Entirely within its confines—i.e., not through the Internet—students log on to a test section and have access to a variety of applications built for the purposes of that test. Behind an intranet Web browser, for example, “sits a whole plethora of different Websites, on different resources and kinds of information, that the student can gain access to” for use in addressing a given task. The designed tasks are typically presented to students in an e-mail message to their screens.

For example, one task may ask them to go into the virtual world in order to update a hotel leaflet aimed at attracting more guests. This particular task, Ripley noted, is “reasonably scaffolded. It provides instructions and directions, making clear to students that the leaflet needs to be updated, that it needs a photo of the swimming pool, that the prices should be inserted, and even that they should save their work.” Scoring this task, he said, “is a matter of electronically eavesdropping on how children set about solving the task—whether students use keyboard shortcuts in order to navigate around the virtual world we have created, how they select the photograph, whether they check the validity of the information on prices.”
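Ripley did not describe the scoring engine in detail, but the idea of scoring from a log of student actions can be made concrete with a small sketch. The Python below tallies evidence for capabilities from a hypothetical session log; the event names, the action-to-capability mapping, and the session itself are invented for illustration, not drawn from QCA's actual engine or rubrics.

```python
from dataclasses import dataclass

@dataclass
class Event:
    action: str   # e.g., "open_website", "edit_price", "insert_image"
    detail: str = ""

# Hypothetical mapping from observable actions to the capability they
# provide evidence for (loosely echoing QCA's nine capabilities).
EVIDENCE_RULES = {
    "open_website": "searching and selecting",
    "keyboard_shortcut": "organizing and structuring",
    "edit_price": "reviewing",
    "insert_image": "presenting information",
    "save": "defining tasks",
}

def score_session(log):
    """Tally how much evidence the log contains for each capability."""
    scores = {}
    for event in log:
        capability = EVIDENCE_RULES.get(event.action)
        if capability is not None:
            scores[capability] = scores.get(capability, 0) + 1
    return scores

# A fragment of an imagined session on the hotel-leaflet task.
session = [
    Event("open_website", "hotel price list"),
    Event("edit_price", "updated the room rates"),
    Event("insert_image", "photo of the swimming pool"),
    Event("keyboard_shortcut", "Ctrl+S"),
    Event("save", "leaflet_v2"),
]

print(score_session(session))
```

A production system would of course apply richer rules (ordering, timing, correctness of the work product), but the shape is the same: events in, capability-level evidence out.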

Another example, less scaffolded, is a partly finished presentation for display in a shopping center. Students are provided with a number of comments on the presentation from different sources, and they are asked to update it in light of those comments. “In this case we are looking for higher-order thinking from the students,” said Ripley. “We are asking them to make judgments about the comments and to engage in quite a sustained activity—of 15 or maybe 20 minutes—to complete the presentation.” As in the preceding example, and for all other tasks, students are scored against the nine ICT capabilities.

“What we have created is truly innovative, exciting, and robust,” Ripley said. But he acknowledged that “at the moment it is ‘wrong footing’ many teachers and many students.” For example, in a pilot version of this type of test involving 45,000 students, which QCA ran during the summer of 2005, it was evident that “students are really very unfamiliar with this mode of taking a test and that lack of familiarity clearly impacted on student performance.” Many students ran out of time, encountered technical difficulties, or showed underdeveloped technique. The bottom line, he said, is that they had weaknesses in two main areas: modeling and data handling.

Meanwhile, Ripley observed, “there is some depth of concern that ICT performance in our schools has not been as close to the mark as we would like it to be—students’ achievement is good or better in only 54 percent of lessons, and with huge variation from school to school. Though ICT performance continues to improve, it’s still the subject where there is the most underachievement in schools.”

The country’s goals are ambitious, however. “A team of about 400 people nationally has responsibility to get 85 percent of our students to reach the level 5 target by 2007,” he said. In pursuit of that objective, the team is focusing especially on the preparation of teachers.

A VIEW FROM THE EDUCATIONAL TESTING SERVICE

Irvin Katz pointed out that his extensive involvement in ICT skills assessment pertained to ICT literacy, rather than ICT fluency, which was the focus of the workshop. But he suggested that ICT literacy—which he and his colleagues at the Educational Testing Service (ETS) have formally defined as the “ability to use digital technologies, communication tools, and/or networks to access, manage, integrate, evaluate, create, and communicate information ethically and legally in order to function in a knowledge society”—is just a particular subset of ICT fluency. It is basically “information literacy as it is viewed through the use of technology,” Katz said.

He also noted that while his work has been geared to higher education, the kinds of assessments that he and his colleagues have developed are readily transferable, and in both directions: to precollege (K–12) systems; and beyond college, to graduate schools and workplaces. The differences between these assessment levels, he said, would largely be a matter of difficulty.

ETS’s overall model of ICT literacy has seven components, which are aligned with the standards of the Association of College and Research Libraries:

  1. Define an information need.

  2. Access resources and information.

  3. Manage information.

  4. Integrate information through interpretation and synthesis.

  5. Evaluate resources and information.

  6. Create new information or adapt existing information.

  7. Communicate information to particular audiences.

Katz stressed that these components emphasize cognitive skills—intellectual capabilities—rather than the technical skills involved in using particular technologies. For example, students may be presented with a half-completed spreadsheet, given a little time to accommodate themselves to that type of spreadsheet, and then be asked to complete it using the resources they have been given. The components also address ethical issues, he said, such as knowledge about citations or the ability to deal effectively with confidential information.

ETS’s testing of these skills has been framed around modest scenarios aimed at “simulating real-world types of activities,” Katz said. “We have taken this big, sustained type of reasoning and broken it up into little pieces. We provide all the information that students would need at that point, and they take it the next step.” He noted as well that this approach “allows us to collect a lot of data on each individual in a relatively short amount of time.”

The current version of the test, Katz reported, is delivered over the Internet and is 75 minutes long. It consists of 14 short tasks, each of which targets one or more components of the ICT literacy model. There is also a longer, 15-minute task that targets two of the skills and starts to look at integration across skills.
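Katz did not present the test's internal design documents, but the relationship between tasks and components can be pictured as a blueprint. The sketch below, with invented task names and assignments, maps each task to the components it targets and checks that all seven are covered.

```python
# A toy test blueprint in the spirit of the ETS design: each short task
# targets one or more of the seven components, and a coverage check
# confirms that every component is assessed somewhere. The task names
# and assignments are hypothetical, not ETS's actual blueprint.

COMPONENTS = {"define", "access", "manage", "integrate",
              "evaluate", "create", "communicate"}

blueprint = {
    "task_01": {"access", "evaluate"},
    "task_02": {"manage"},
    "task_03": {"define", "communicate"},
    "task_04": {"integrate", "create"},
    # ...the remaining short tasks would follow the same pattern...
    "long_task": {"integrate", "communicate"},  # the longer 15-minute task
}

covered = set().union(*blueprint.values())
missing = COMPONENTS - covered
print("all components covered" if not missing else f"missing: {sorted(missing)}")
```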

He offered several examples, speaking at length on a task “designed to target integration: taking information from a bunch of places, summarizing it, and then drawing some type of conclusion from that summary.” The problem asks students to imagine that they work at an architecture firm that happens to employ a lot of left-handed people and that the boss wants to find some vendors of left-handed products. Information (in varying degrees of explicitness) on three vendors is provided in three different electronic formats, and students must decide how to extract the specific information needed and then how to compare the products from those different vendors. Finally, students have to rank the vendors and provide a recommendation.

In keeping with the test’s purpose of assessing intellectual capabilities, students are scored on how well they figure out what it is they need to compare, how well they pull that information from the available resources, and how well they draw conclusions. Scoring other tasks might involve, for example, how well students search the Internet or a database, critically evaluate information, decide on what resources are more authoritative, or develop presentations that meet some main objective. In the latter case, Katz said, “key aspects include: Are you meeting the information needs of your audience? And are you supporting whatever main point it is that you want to make?”
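The final step of such a task can be pictured with a small sketch. The Python below compares three invented vendors on invented attributes and prints a ranking and a recommendation; it illustrates the kind of integration the task demands, not the actual test content.

```python
# A toy version of the integration task's final step: comparing vendor
# information (already extracted from three differently formatted sources)
# and producing a ranking plus a recommendation. The vendors, attributes,
# and ranking criteria are all invented for illustration.

vendors = [
    {"name": "Vendor A", "price": 24.99, "selection": 3, "fast_shipping": True},
    {"name": "Vendor B", "price": 19.50, "selection": 5, "fast_shipping": False},
    {"name": "Vendor C", "price": 22.00, "selection": 4, "fast_shipping": True},
]

def rank_key(vendor):
    # Wider selection first, then lower price, then fast shipping.
    return (-vendor["selection"], vendor["price"], not vendor["fast_shipping"])

ranked = sorted(vendors, key=rank_key)
for place, vendor in enumerate(ranked, start=1):
    print(f"{place}. {vendor['name']}")
print(f"Recommendation: {ranked[0]['name']}")
```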

Feedback about test performance “is not so much detailed scores,” he said, because those wouldn’t be very reliable. Rather, feedback largely consists of a discussion of the types of strengths and weaknesses that the student has shown, together with some recommendations on the types of tasks he or she might do, working with an instructor, to improve.

Katz concluded by citing five benefits of such assessments of ICT literacy:

  1. Supporting institutional ICT-literacy initiatives.

  2. Guiding curricular innovations and evaluating curricular changes.

  3. Guiding individual learning.

  4. Providing a “stake in the ground” for what ICT skills look like.

  5. Providing a model for teachers of possible assignments.

BROAD AND NARROW ASSESSMENT

John Behrens noted that because the word “assessment” has different meanings for different people, it is important to make clear what one is referring to under any given set of circumstances. For example, he asked, “Are we talking instructional, formative, summative, or diagnostic assessment?”

Behrens said that in his work at the Cisco Networking Academy, the Cisco Professional Certification Program, and Cisco University, a construct called the Seven Cs—claims, curriculum, collaboration, complexity, computation, communication, and coordination (plus an eighth: contextualization)—defines the assessment of outcomes from training programs involving the company’s products and services.

Behrens cited as well a useful delivery model, called evidence-centered design, that has four basic parts: task selection, presentation, evidence observation, and evidence accumulation. In other words, he said, the assessment cycle is “interact, look at what you’ve got back, characterize it, and decide what to do next.”
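That cycle lends itself to a compact illustration. The sketch below simulates a few passes through the loop: it selects a task suited to a running ability estimate, presents it, observes a (here simulated) outcome, and accumulates the evidence to choose the next task. The task bank, the success probabilities, and the crude ability-update rule are all placeholder assumptions, not Cisco's delivery system.

```python
# A minimal simulation of the four-part evidence-centered-design cycle:
# task selection, presentation, evidence observation, and evidence
# accumulation. Everything concrete here is a placeholder assumption.

import random

TASKS = {
    "easy": "Configure an IP address on one router interface.",
    "medium": "Connect two routers and verify the link.",
    "hard": "Troubleshoot a misconfigured three-router network.",
}
SIMULATED_SUCCESS_RATE = {"easy": 0.8, "medium": 0.6, "hard": 0.4}

def select_task(ability):
    # Task selection: match difficulty to the current estimate.
    if ability < 0.4:
        return "easy"
    if ability < 0.7:
        return "medium"
    return "hard"

def run_assessment(rounds=5):
    ability = 0.5  # prior estimate of the learner's ability
    for _ in range(rounds):
        level = select_task(ability)
        print(f"Presenting {level} task: {TASKS[level]}")  # presentation
        # Evidence observation: here simulated rather than scored.
        correct = random.random() < SIMULATED_SUCCESS_RATE[level]
        # Evidence accumulation: nudge the running estimate and clamp it.
        ability += 0.1 if correct else -0.1
        ability = min(max(ability, 0.0), 1.0)
    return ability

print(f"Final ability estimate: {run_assessment():.2f}")
```

A real implementation would replace the simulated outcome with scored work products and a proper psychometric model for accumulating evidence, but the interact-observe-characterize-decide loop is the same.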

Out of Cisco’s vast curriculum- and assessment-design work, both internal and external—it has partnered with over 10,000 schools in 150 countries, Behrens said—he offered a variety of examples ranging from pilot projects for testing students to simulation tasks used in professional certification exams. Discussing simulations at some length, he described their basic language at Cisco (the Internetwork Operating System), their applications, and the ways in which their results can be presented.

Behrens stressed the utility of a digital format for providing diverse types of feedback both to instructors and students. It can place item-level information into a grade book, for instance, and provide verbal feedback together with scoring rules, he said. Instructors are also given the work products and user logs so that they can score the test themselves, if they wish, or look for other patterns.

“A great thing going on in the world right now, which we are all excited about, is the integration of instruction and assessment,” Behrens said. He described a tool, made available to instructors without charge on the Internet, called Packet Tracer. “It allows students of digital networking systems—by themselves or in groups—to practice planning, design, or troubleshooting,” he said. “And it can be used for assessment, both formally and informally, in class and out of class.” Such an approach, Behrens maintained, is clearly the wave of the future. “Because the world is becoming more digital, the aids for describing the world are becoming more digital too,” he said. “Assessment people need to use these tools rather than reinvent the wheel every time.”

Eric Klopfer, director of the Teacher Education Program at the Massachusetts Institute of Technology, raised the issue of potential bias in the presentation of such digitally based assessments to students. In assigning tasks by e-mail, for example, some students might be disadvantaged, he said, depending on the e-mail applications, if any, that they customarily use.

Ripley admitted that he and his colleagues in the United Kingdom often feel torn between offering a “reductionist” test (presenting a task so that virtually all students will be familiar with it) and elevating the test (trying to raise the minimum expectation for students). Because his agency’s mission is to design “high-stakes” assessments that offer “a very similar test experience for all students” around the country, it is important to try to minimize any bias in such environments.

Similarly, Katz pointed out that ETS—in using e-mail, for example, in its testing—“tries to come up with something generic” that will likely resemble whatever a student is used to. Moreover, in echoing a major point from his talk, he noted that “we are focusing not so much on the technology but on what people are doing with the information that is presented.” Still, he acknowledged, “it is hard to avoid some aspect of bias.”

Ripley added that administration of tests in a digital environment might actually reduce bias. QCA wanted to know “which students, in which categories of need, we would exclude if we went down a digital front—a screen route—for formulating tasks.” So it did a study, completed in 2004. “Our top-line conclusion was that we were enabling more students to access the tasks on screen than if they were on paper,” said Ripley. “So we are certainly not doing more harm than in paper-based tests. And I would argue that we are facilitating engagement, not preventing engagement, with the test.”

Heidi Schweingruber of the National Academies’ Board on Science Education raised the issue of ICT embeddedness in content areas—an often-mentioned idea during the workshop—and noted that it did not seem to be reflected in the discussion of assessments. Ripley acknowledged that so far “this has been a challenge for us. Our tests look rather like standardized ICT lessons, or business applications of ICT, and not even school-based applications of ICT.” But the omission has been noted, he said, and about two years ago his agency began development work in this area. Colleagues are making progress, he suggested, though “the material is not yet ready to show publicly or to use in any of our test administrations.”
