Technology in the broadest sense is the modification of the natural world to fulfill human needs and wants. Although people often focus only on the most recent technological inventions, such as cell phones, the Internet, and MRI machines, technology also includes automobiles and airplanes, frozen food and irrigation systems, manufacturing robots and chemical processes. Virtually everyone in a modern society is profoundly influenced by technology.
At the behavioral level, Americans have traditionally been early adopters and enthusiastic users of a wide array of technologies, from automobiles and televisions to air travel and wireless telecommunications, suggesting that they not only recognize the advantages of new technologies, but also that they incorporate them into their lives and benefit from them. But, as this report shows, technological literacy is much more than simply being able and willing to use a technology.
Because technology is pervasive in our world, it is vitally important that people understand what technology is, how it works, how it is created, how it shapes society, and what factors influence technological development. The technological choices we make are important in determining our health and economic well-being, the types of jobs and recreation available to us, even our means of self-expression. How well we are prepared to make those choices depends in large part on how technologically literate we are.
Twenty years ago, Erich Bloch, the director of the National Science Foundation (NSF), noted the importance to his agency of public awareness and understanding of technology (Bloch, 1986). More
recently, other organizations concerned with the nation’s science and technology enterprise, such as the American Association for the Advancement of Science and the International Technology Education Association, have called for Americans to become more technologically savvy (AAAS, 1990; ITEA, 1996). ITEA subsequently proposed standards related to technological understanding and capabilities for K–12 students (ITEA, 2000). And just a few years ago, the case for technological literacy was outlined in Technically Speaking: Why All Americans Need to Know More About Technology, a report from the National Academies (NAE and NRC, 2002).
How Technologically Literate Are We?
Against this background, a question naturally arises: how technologically literate is the American public? Most experts who have thought about the issue in depth agree that people in this country are not as technologically literate as they should be, but this is a general impression with little hard data to back it up. Unfortunately, no good measures of technological literacy are in use in the United States today. A small number of organizations and individuals—including some outside this country—have developed a variety of tests and surveys to try to get a handle on what people know or believe about technology, but most of these efforts have either been short-lived or have failed to provide the kind of data necessary for drawing useful conclusions about technological literacy.
The lack of information about technological literacy contrasts sharply with the amount of information about literacy in other subject areas. For example, adults’ understanding of science has been assessed for almost three decades in surveys published biennially in Science and Engineering Indicators (e.g., NSB, 2004). Scientific knowledge and understanding among K–12 students are evaluated in a variety of standardized tests and by the federal National Assessment of Educational Progress (NAEP). In addition, student achievement is regularly tested in other school subjects, such as mathematics, English, and history. So why not test for technological literacy?
Part of the answer is historical. Until recently, educators and policy makers did not consider technology as separate from science.
Therefore, not only has there been no testing specifically of technological literacy, there has not even been a consensus on what constitutes technological literacy. This is particularly evident in elementary and secondary schools, where technology has not been taught as a separate subject—except in limited cases, such as industrial arts classes and, more recently, computer classes. Logically, then, schools have not tried to measure the technological literacy of their students. Even though science and technology—and scientific and technological literacy—are closely related, it is important that they be treated independently for the purposes of assessment.
Another part of the answer to our question is that technological literacy is difficult to assess. Technological literacy has three basic components or dimensions, each of which presents challenges for assessments. First, a technologically literate person must have a certain amount of basic knowledge about technology (e.g., an understanding of the concepts of systems, feedback, trade-offs). Second, a technologically literate person should have some basic technical capabilities, such as being able to work with a computer and to identify and fix simple problems in the technological devices used at home and in the office. More generally, he or she should be able to employ an approach to solving problems that relies on aspects of a design process. This second dimension is particularly difficult to assess because it cannot be easily measured in a typical paper-and-pencil test, especially if the test is in a multiple-choice format. And third, a technologically literate person should be able to think critically about technological issues and act accordingly (e.g., should construction of a new coal-fired power plant be supported or opposed?).
Many different types of assessment tools will have to be developed, depending on how the assessment data will be used and the characteristics of the population being tested. Third-grade students require a different method of assessment than eighth-grade students. An assessment developed for students will not be appropriate for assessing their teachers. And an entirely different approach will be necessary for assessing technological literacy among out-of-school adults.
In light of the importance of assessing technological literacy, several groups have called for the development of measurements of technological literacy. One of the recommendations in Technically Speaking, for instance, was that “NSF … support the development of assessment tools that can be used to monitor the state of technological literacy among
students and the public in the United States.” And the Standards for Technological Literacy called for the development of ways to gauge learning among K–12 students, measured against the standards.
Benefits of Assessing Technological Literacy
To appreciate the benefits of assessing technological literacy, one must first appreciate the value of technological literacy itself. There are a number of such benefits, and according to Technically Speaking some of the most important relate to improving how people—from consumers to policy makers—think and make decisions about technology; increasing citizen participation in discussions of technological developments; supporting a modern workforce, which requires workers with significant technological savvy; and ensuring equal opportunity in such areas as education and employment for people with differing social, cultural, educational, and work backgrounds. The benefits of technological literacy also address growing concerns about the state of the nation’s science and engineering enterprise in the context of the global economy (NAE, 2002; NRC, 2005).
Assessments of technological literacy will have a number of benefits, too. First, they will raise the profile of technological literacy and strengthen the case for the importance of increasing the level of technological literacy. As long as technological literacy is not assessed in a rigorous or systematic way, it is unlikely to be considered a priority by policy makers or the general public. Almost by definition, making a case for boosting technological literacy will require showing that the current level of technological literacy is too low. But this cannot be done now, because good quantitative measures do not exist. We live in a numbers-oriented world, and many people will only heed a call for higher technological literacy if the argument can be backed by hard data. Without numbers, the case can be dismissed altogether.
A number of groups will benefit from good assessments of technological literacy. Perhaps the most obvious beneficiary will be the formal education community. As the K–12 system moves toward adopting the ITEA standards or in other ways exposes students to more technology- or engineering-based courses, schools will need to measure how well they and their students are doing. Just as schools today assess students’ knowledge and understanding of science, mathematics, and English, schools tomorrow will assess students’
knowledge and understanding of technology, and for the same reason—to determine the effectiveness of teaching and learning and decide where improvements should be made.
Assessments of technological literacy are also important for students training to be K–12 teachers. For schools of education to ensure that future teachers can speak knowledgeably about technology, they must be able to measure the technological literacy of their graduates.
Adults who have completed their formal education continue to learn about technology in many ways (e.g., museums and science centers; radio, television, and print media; community and social organizations). Each of these venues would benefit from knowing how much people know about technology. Museums and science centers, for instance, could use information about the technological literacy of their patrons to design exhibits that would be useful and appealing. Journalists could use assessments of technological literacy to gauge the information about technology they can expect their audience to be familiar with and to determine what they must explain in their reporting. Political scientists studying public participation in technological decision making are more likely to be interested in public attitudes and ways of thinking and acting about specific technologies, such as genetically modified foods, nuclear power, and biometrics-based security.
Many organizations, both for-profit and nonprofit, that present information to the public about technology would also benefit from assessments. For example, an agricultural business introducing a new type of genetically engineered crop or an environmental organization presenting the results of a study about air pollution in the national parks could both make more effective presentations if they had a good sense of what the public knows and believes about technology. Product developers, who must decide which features a new product will have, would benefit from knowing what sorts of technology their customers are comfortable or familiar with and which sorts they tend to dislike or avoid. For similar reasons, marketing and advertising executives in many industries would benefit from having a better sense of what the public knows and feels about different technologies.
To the extent that differences in technological literacy disadvantage a person or group, assessment can help identify these differences, thus opening the door to efforts to improve the situation. Finally, for government policy makers, assessments of technological literacy would provide a
window into the hopes and fears of people regarding technology that could help guide policy decisions. Policy makers might even decide they should promote efforts to improve technological literacy in this country.
Obstacles to Assessing Technological Literacy
The developers of tools for assessing technological literacy face significant design challenges, an issue much of the rest of this report considers. With enough time and financial support, most of these difficulties can be overcome. Overcoming the obstacles to implementation of assessments, however, will require more than just time and money.
Consider, for example, assessments of the technological literacy of students in grades K–12. Children in elementary and secondary school are already subjected to a battery of standardized tests each year, and there is tremendous and understandable resistance among teachers, school administrators, and parents to giving more tests. The problem is not merely taking one more day out of the schedule to administer a technological literacy test. Once a test is added to the mix, teachers will be expected to “teach to the test” (i.e., to ensure that students have the information they need to do well on the test). Thus, teachers would have to find time in an already packed day to teach about technology.
Resistance to tests for teachers could be even greater. K–12 teachers are generally reluctant to subject themselves to any test that could be perceived as a test of professional competence, and this resistance has been supported by their professional organizations, the National Education Association and the American Federation of Teachers. Some of this resistance has been overcome by provisions in the No Child Left Behind Act of 2001 (P.L. 107-110), which requires that all teachers be “highly qualified” in the subjects they teach. One way for teachers to meet this requirement is by passing a state-developed assessment (DOEd, 2005). At the post-secondary level, faculty competence is considered the purview of academic departments, which do not usually use standardized tests.
Despite these problems, testing students and teachers, who can be found in one location—their schools—and can be directed by the school administration to take a test, would be less problematic and complicated than assessing out-of-school adults or the general public. People have historically been resistant to surveys of almost any kind, and response rates are often so low that it is very difficult to get a good
measure of the public as a whole. It would be even more difficult to convince people to submit to the kinds of performance exercises that would be necessary to assess, say, their ability to troubleshoot a technology-related problem at home, such as an appliance that stops working.
Charge to the Committee
Given the increasing importance of technology in our society, it is vital that American citizens be technologically literate. Because we do not have good ways to measure technological literacy, however, our policy makers and educators are essentially “flying blind.” There are obstacles to the development and implementation of tools to measure technological literacy, but they can be overcome, and good assessments of technological literacy would have great benefits.
In response to this need, the National Academy of Engineering and National Research Council (NRC) of the National Academies, with funding from NSF, established the Committee on Assessing Technological Literacy. (Biographies of committee members appear in Appendix A.) The committee was asked to determine “the most viable approach or approaches for assessing technological literacy in three distinct populations in the United States: K–16 students, K–16 teachers, and out-of-school adults (the ‘general public’).”
During the course of deliberations, the committee modified one aspect of the original charge by narrowing the grade range for teacher and student populations from K–16 to K–12, or kindergarten through the end of high school. The change was made because the committee was unable to identify opportunities for assessing college students and faculty (with the exception of pre-service teachers). For K–12 students and their teachers, however, the committee found a number of opportunities for improving existing measurement tools or introducing new ones.
The charge to the committee also included the following elements:
Assess the opportunities and obstacles to developing one or more scientifically valid and broadly useful assessment instruments for technological literacy in the three target populations.
Recommend possible approaches to carrying out such assessments, including specification of subtest areas and actual sample test items representing a variety of formats.
The report that follows is the committee’s response to that charge. In Chapter 2, the committee defines “technology” and “technological literacy” as they are used in the report. Chapter 3 describes an approach to assessments that relies heavily on the concept of technological design. In Chapter 4, the committee outlines the basics of assessment practices, relevant findings in the cognitive sciences, and research on learning in technology that are important to the design of assessments in this domain. Chapter 5 provides brief descriptions and discussions of 28 assessment instruments collected by the committee in the course of the project. Chapter 6 presents five examples illustrating how assessments of technological literacy might play out in different populations and for varying purposes. In Chapter 7, the committee discusses the potential role of computer-based assessment methods. And in Chapter 8, it presents its findings and recommendations for expanding and improving assessments of technological literacy in the United States. The appendixes include copies of K–12 learning goals related to the study of technology from three different sets of content standards, summaries of the 28 instruments discussed in Chapter 5, and bibliographies of some of the research on how people learn technology- and engineering-related concepts.
This report builds on and refers extensively to two earlier documents, Technically Speaking (NAE and NRC, 2002) and Standards for Technological Literacy (ITEA, 2000). In Technically Speaking, technological literacy is defined, the benefits of technological literacy are described, and the characteristics of a technologically literate person are outlined. Standards for Technological Literacy specifies the basic knowledge and capabilities students in grades K–12 should have to be technologically literate. The committee used these concepts and standards as guidelines in determining which assessments would be most appropriate for testing U.S. students. Both documents are discussed extensively in Chapter 2.
The committee also reviewed general information about assessments. Knowing What Students Know: The Science and Design of Educational Assessment (NRC, 2001) was especially helpful in this regard. For background on the science of learning, the committee relied heavily on How People Learn: Brain, Mind, Experience, and School (NRC, 1999). The committee also consulted many other publications and held seven face-to-face meetings, informal discussions with a number of experts in relevant fields, and a major data-gathering workshop. As noted, the committee also identified and discussed assessment instruments that measure different aspects of technological literacy.
AAAS (American Association for the Advancement of Science). 1990. Science for All Americans. New York: Oxford University Press.
Bloch, E. 1986. Scientific and technological literacy: the need and the challenge. Bulletin of Science, Technology and Society 138–145.
DOEd (U.S. Department of Education). 2005. New No Child Left Behind Flexibility: Highly Qualified Teachers—Fact Sheet. Available online at: http://www.ed.gov/nclb/methods/teachers/hqtflexibility.html (August 16, 2005).
ITEA (International Technology Education Association). 1996. Technology for All Americans: A Rationale and Structure for the Study of Technology. Reston, Va.: ITEA.
ITEA. 2000. Standards for Technological Literacy: Content for the Study of Technology. Reston, Va.: ITEA.
NAE (National Academy of Engineering). 2002. Raising Public Awareness of Engineering. Washington, D.C.: National Academy Press.
NAE and NRC (National Research Council). 2002. Technically Speaking: Why All Americans Need to Know More About Technology. Washington, D.C.: National Academy Press.
NRC (National Research Council). 1999. How People Learn: Brain, Mind, Experience, and School. Edited by J.D. Bransford, A.L. Brown, and R.R. Cocking. Washington, D.C.: National Academy Press.
NRC. 2001. Knowing What Students Know: The Science and Design of Educational Assessment. Washington, D.C.: National Academy Press.
NRC. 2005. Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. Washington, D.C.: The National Academies Press.
NSB (National Science Board). 2004. Science and Technology: Public Attitudes and Understanding. Science and Engineering Indicators, 2004. Available online at: http://nsf.gov/statistics/seind04/ (August 16, 2005).