The population of interest for most of the instruments was K–12 students. Teachers were the target population for two, the Praxis Technology Education Test (ETS, 2005) and the Engineering K–12 Center Teacher Survey (ASEE, 2005). The remaining instruments were designed to assess out-of-school adults. Although the focus of this project is on assessment in the United States, the committee also studied instruments developed in Canada, England, and Taiwan. The approaches to assessment in these non-U.S. settings provided useful data for the committee’s analysis.
The purposes of the assessment tools varied as much as the instruments themselves. They included diagnosis and certification of students, input for curriculum development, certification of teachers, resource allocation, program evaluation, guidance for public policy, suitability for employment, and research. The developers of these assessments could be divided into four categories: state or federal agencies, private educational organizations, academic researchers, and test-development or survey companies.
Table 5-1 provides basic information about the instruments, according to target population. More detailed information on each instrument, including sample items and committee observations, is provided in Appendix E.
The committee reviewed each instrument through critiques written by committee members, telephone conferences, and face-to-face discussions. In general, the reviews focused on two aspects of the assessments: (1) the type and quality of individual test items; and (2) the format or design of the assessment. The reviews provided an overview of current approaches to assessing technological understanding and capability and stimulated a discussion about the best way to conduct assessments in this area.
Although a number of the instruments reviewed were thoughtfully designed, no single instrument struck the committee as completely adequate to the task of assessing technological literacy. This is not surprising, considering the general challenge of developing high-quality assessments; the multifaceted nature of technological literacy; the differing characteristics of the three target populations; the relatively small number of individuals and organizations involved in designing assessments for technological literacy; and the absence of a research literature in this area. As noted earlier, only a few of the instruments under review were designed explicitly to assess technological literacy in the first place.