6
From Theory to Practice: Five Sample Cases

Much of the discussion about assessing technological literacy in this report is by necessity general and applicable to many different settings. But in the real world, assessments must be done on a case-by-case basis, and each assessment will be tailored to fulfill a specific purpose. Thus, it is useful to see how the general principles might apply in particular situations. In this chapter, examples are given for five different settings, ranging from classrooms throughout a state to a museum or other informal-learning institution. Two of the examples deal with assessing students, one with assessing teachers, and two with assessing segments of the general population. The choice of cases was influenced considerably by the committee’s charge, which was focused on these same three populations.

Many of the sample cases inform one or more of the recommendations in Chapter 8. For example, Case 2, a national sample-based assessment, addresses some of the same issues designers of the National Assessment of Educational Progress, the Trends in International Mathematics and Science Study, and the Programme for International Student Assessment may face in adapting those instruments to measuring technological literacy (Recommendations 1 and 2). Case 3, an assessment of teachers, addresses concerns that will undoubtedly arise as researchers develop and pilot test instruments for assessing pre-service and in-service teachers (Recommendation 5). Cases 4 and 5, assessments of broad populations and informal-learning institutions, address the committee’s suggestion that efforts to assess the technological literacy of out-of-school adults be expanded (Recommendation 6). Although none of the recommendations specifically addresses Case 1, a statewide census assessment of students, the committee believes state leaders in education and other readers will benefit from seeing how this type of testing might play out. Beyond the call for modified or new assessments, the discussion of determining content for an assessment of teachers (Case 3) illustrates the need for careful development of assessment frameworks (Recommendation 11). And the cases related to broad populations (Case 4) and visitors to a museum or other informal-education institution (Case 5) suggest the importance of new measurement methods (Recommendation 10).

Even though the sample cases touch on many of the issues facing designers of assessments, they are meant to be descriptive rather than prescriptive. Each case includes a rationale and purpose for the assessment, suggests a source for deriving the assessment content, proposes a way of thinking about performance levels, and addresses some administrative, logistical, and implementation issues. The committee intends this chapter to be a springboard for discussion about designing and carrying out assessments of particular groups and for particular purposes.

When reviewing the examples in this chapter, readers should keep in mind the discussion of the design process in Chapter 3. Design is a process in which experience helps. When experienced designers are faced with a problem, they immediately ask themselves if they have encountered similar problems before and, if so, what the important factors were in those cases. The committee adopted the same approach, beginning with a review and analysis of existing studies and instruments, the identification and incorporation of useful aspects of those designs into the sample design, the identification of needs that had not been met by existing designs, and attempts to devise original ways to meet those needs. Anyone who intends to design an assessment of technological literacy will have to go through a similar process.

During the committee’s deliberations, considerable time was spent discussing the value of including a sample assessment for an occupational setting. Ultimately, the committee decided not to include an occupational assessment for two reasons. First, the goal of most technical training and education for specific occupations is to provide a high level of skill in a limited set of technologies (see Box 2-2), rather than to encourage proficiency in the three dimensions of technological literacy spelled out in Technically Speaking. Second, two industry participants in a data-gathering workshop (one from the food industry and one from the automotive industry) expressed the view that a measure of overall technological literacy would be of little value to employers, who are more concerned with workers’ job-related skills.[1]

[1] Some proponents of technological literacy, including the authoring committee of Technically Speaking, have suggested that there may be at least an indirect link between general technological literacy and performance in the workplace (NAE and NRC, 2002, pp. 40–42).

Case 1: Statewide Grade-Level Assessment

Description and Rationale

In Case 1, the target population is students in a particular state and in particular grades. The exact grades are not important, but for the sake of illustration we assume that they include one elementary grade (3rd, 4th, or 5th grade), one middle school grade (6th, 7th, or 8th grade), and one high school grade (9th, 10th, 11th, or 12th). So, for example, the test population might consist of all 4th-, 8th-, and 11th-graders in Kentucky public schools.

A statewide assessment has many similarities to large-scale national assessments and small, school-based assessments. But there are also important differences. For instance, a statewide assessment generally falls somewhere between a national assessment and a school-based assessment in terms of the timeliness of results and the breadth and depth of knowledge covered. But the most important difference is that a statewide assessment provides an opportunity for assessors to calculate individual, subgroup, and group-level scores. In addition, aggregate scores can be determined at the state, district, school, and classroom levels. Disaggregated scores can be determined for student subgroups, according to variables such as gender, race/ethnicity, and socioeconomic status.

The assessment in this sample case is in some ways—such as the targeted test group and how the data are analyzed—similar to assessments currently used by states to meet the requirements of the No Child Left Behind Act of 2001 (NCLB). To comply with this legislation, states are required to test students’ proficiency in reading/language arts and mathematics annually in grades 3 through 8 and at least once in grades 10 through 12. States must also include assessments of science proficiency in three designated grade spans by 2007. Results are generally reported within about four months of administration of the assessment.
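The kind of score aggregation and disaggregation described above is straightforward to carry out once student-level results are in hand. The short sketch below, written in Python with pandas, is purely illustrative; the records and column names (district, school, grade, gender, scale_score) are hypothetical stand-ins for whatever layout a state data system actually uses.

```python
# Illustrative aggregation and disaggregation of statewide results with pandas.
# The records and column names are made up; a real analysis would read the
# state's own student-level data file.
import pandas as pd

scores = pd.DataFrame({
    "district":    ["A", "A", "A", "B", "B", "B"],
    "school":      ["A1", "A1", "A2", "B1", "B1", "B2"],
    "grade":       [4, 8, 11, 4, 8, 11],
    "gender":      ["F", "M", "F", "M", "F", "M"],
    "scale_score": [212, 188, 204, 195, 221, 179],
})

state_mean = scores["scale_score"].mean()                       # statewide aggregate
by_district = scores.groupby("district")["scale_score"].mean()  # district aggregates
by_subgroup = (scores.groupby(["grade", "gender"])["scale_score"]
               .agg(["mean", "count"]))                         # disaggregated view

print(f"State mean scale score: {state_mean:.1f}")
print(by_district)
print(by_subgroup)
```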

The rationale for a statewide assessment of technological literacy is to encourage changes in standards, curriculum, and teacher education to support the goal of increasing technological literacy for all students. With the possible exception of Massachusetts, states do not currently have the curricular building blocks in place to justify a statewide assessment of technological literacy. However, an assessment on this scale conducted in even a single state could demonstrate the feasibility and value of determining what students know and can do with respect to technology and could provide momentum for changes in standards, curriculum, and teacher education across the country.

Purpose

In this example, the primary purpose of the statewide assessment of technological literacy is to improve teaching and learning related to technology. Typically, statewide assessments serve a powerful accountability function, providing data that can be used to track student achievement trends by school, school district, and the state as a whole. In an area of content as new as technological literacy, however, the goal of improving teaching and learning looms large. As technological literacy becomes more established as a school subject, assessment data may increasingly be used for accountability purposes.

In this sample case, assessment results can be used to inform policy makers at the state and district levels and to provide data for instructional leaders at the district, school, and classroom levels. The assessment could be designed either to provide a snapshot of technological literacy in the target populations or to provide data on specific content standards for technological literacy, which in turn may be aligned with national standards, such as those developed by ITEA (2000). A statewide assessment of technological literacy could not only tell educators what students at these age levels know and can do with respect to technology, but could also provide information related to specific standards. For example, educators could determine whether there was a difference in performance between boys and girls on ITEA Standard 19, which relates to understanding and being able to select and use manufacturing technologies. In short, data from such an assessment would enable educators to answer a large variety of questions useful for improving teaching and learning.

Content

The ITEA Standards for Technological Literacy (ITEA, 2000), the AAAS Benchmarks for Science Literacy (AAAS, 1993), the NRC National Science Education Standards (NRC, 1996), and especially state-specific content standards, would be logical starting points for determining the content of the assessment. All of these documents suggest “benchmark” knowledge and skills that a technologically literate individual should have. To be useful for an assessment, however, the benchmarks must be “operationalized,” that is, the most important technology concepts and capabilities must first be identified and then made specific enough to clarify the range of material to be covered in the assessment. This is a step in the process of developing an assessment framework for technological literacy, as discussed in Chapter 3. In addition, existing assessments may be reviewed to determine if any items are aligned with, and measure, the operationalized benchmarks. If not, technology-related content may have to be added. A review of the general guidelines for student assessments developed by ITEA may also be helpful (ITEA, 2004a).

The assessment framework must specify the emphasis, or weight, given to items in each dimension of technological literacy. The weighting process must be based on many factors, including the purpose of the assessment, the time allotted for testing, the developers’ views of the importance of each dimension of technological literacy, and expert judgments about reasonable expectations for students in these grades. Table 6-1 shows how the weighting process might work.

TABLE 6-1  Sample Weighting for Grades 6–8 Items Assessing Knowledge, Capability, and Critical Thinking and Decision Making Related to Manufacturing Technologies, by Percentage of Items Devoted to Topic

                                         Benchmark Topics
                                         Manufacturing  Manufacturing  Chemical       Materials
Dimension                                Systems        Goods          Technologies   Use
Knowledge                                20             10             10             10
Capability                               10             10             –              –
Critical Thinking and Decision Making    –              10             10             10

SOURCE: Adapted from ITEA, 2000.
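One way to make a blueprint like Table 6-1 concrete is to treat each cell as a percentage weight and convert the weights into item counts for a form of a given length. The sketch below mirrors the sample percentages in the table; the rounding rule and the 40-item form length are arbitrary choices for illustration, not recommendations.

```python
# Illustrative translation of the Table 6-1 weights into an item blueprint.
# The percentages mirror the sample table; the rounding rule and form length
# are simple choices made only for this example.
BLUEPRINT = {
    # (dimension, benchmark topic): percent of items
    ("Knowledge", "Manufacturing systems"): 20,
    ("Knowledge", "Manufacturing goods"): 10,
    ("Knowledge", "Chemical technologies"): 10,
    ("Knowledge", "Materials use"): 10,
    ("Capability", "Manufacturing systems"): 10,
    ("Capability", "Manufacturing goods"): 10,
    ("Critical thinking and decision making", "Manufacturing goods"): 10,
    ("Critical thinking and decision making", "Chemical technologies"): 10,
    ("Critical thinking and decision making", "Materials use"): 10,
}

assert sum(BLUEPRINT.values()) == 100, "weights must account for all items"

def items_per_cell(total_items: int) -> dict:
    """Convert percentage weights into whole-number item counts for one form."""
    return {cell: round(total_items * pct / 100) for cell, pct in BLUEPRINT.items()}

for cell, n in items_per_cell(total_items=40).items():
    print(f"{cell[0]:<40} {cell[1]:<25} {n} items")
```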

Performance Levels

In this sample case, the state would derive a scale score for each student. If similar technology-related concepts were tested at more than one grade level (e.g., manufacturing processes for grades 3–5 and 6–8), the state might use cross-grade vertical scaling, which would enable scorers to compare the mastery of material by students at different grade levels. Using within-grade scaling, which is more common, the performance levels in each grade would be examined independently.

To provide scores in a useful form for policy makers and instructional leaders, the state board of education might establish performance levels to group students according to subjective achievement standards for increasingly sophisticated levels of performance (e.g., novice, competent, proficient, and expert). Performance-level descriptors must realistically capture what a child of a given age might know and be able to do. Reporting could be done either on the overall assessment or on separate subscales or dimensions of the assessment. If separate subscales or dimensions were used, separate performance levels could be defined for each, and the assessment would have to be designed so that the items in each subscale or dimension support reliable scoring.

Once state and local educators received descriptive and diagnostic data, they could interpret the results in context and identify achievement gaps. Diagnostic information would show which standards had been mastered by most students and which subjects required more or better instruction, and educators could then focus their instruction and professional development practices to improve student learning. If the assessment were given regularly, perhaps biennially, the resulting data would provide a measure of whether the level of technological literacy had increased, stayed the same, or declined, and results over time could reveal trends among subgroups of students. If the assessment included items that measure student attitudes and opinions about technology or technology careers, that information could be correlated with performance data. In this way, the data could be used by K–12 educators to assist with course planning and career counseling.
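Reporting by performance level ultimately comes down to comparing each scale score against a set of cut scores. The fragment below illustrates the mechanics only; the cut scores shown are placeholders, since real cuts would be set through a formal standard-setting process for each grade and, if used, each subscale.

```python
# A minimal sketch of mapping scale scores to performance levels.
# The cut scores are placeholders, not values from any actual standard setting.
from bisect import bisect_right

CUT_SCORES = [150, 180, 210]                  # hypothetical cut scores for one grade
LEVELS = ["novice", "competent", "proficient", "expert"]

def performance_level(scale_score: float) -> str:
    """Return the performance band a scale score falls in."""
    return LEVELS[bisect_right(CUT_SCORES, scale_score)]

for score in (142, 151, 195, 230):
    print(score, performance_level(score))
```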

Administration and Logistics

A statewide assessment would be administered to all students in three grade levels, one elementary (grades 3–5), one middle school (grades 6–8), and one high school (grades 9–12), in every school in the state. The assessment should take no more than two sessions, lasting no more than 90 minutes each, and should use both census and matrix-sampling techniques.[2] Combining census and matrix-sampling approaches would have several advantages. It would reduce the time required to administer the assessment, because not every student would see every question. By making sure all students were presented with a core set of items (the census portion of the instrument), a general measure of technological literacy could be obtained. The matrix portion of the assessment would enable the collection of additional diagnostic measures of performance related to specific areas of content, such as student knowledge of the influence of technology on history.

The assessment should include a mix of multiple-choice, constructed-response, and design-based performance items, possibly including simulations. Teachers would require rudimentary training to administer the test, particularly the hands-on design or online computer-based components. Administrators and policy makers would also have to be educated about the dimensions of technological literacy, the purpose of the assessment, and potential uses of the data obtained from the assessment.

[2] Matrix sampling and census testing are explained in Chapter 4 in the section on Measurement Issues.
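The census-plus-matrix design described above can be operationalized by giving every student a common core block and spiraling the remaining item blocks across the population. The sketch below shows one simple spiraling scheme; the block names and the one-matrix-block-per-student assumption are invented for illustration.

```python
# Sketch of assembling test forms that combine a census core (seen by every
# student) with spiraled matrix blocks (each seen by only part of the sample).
# The block labels and the assignment rule are invented for illustration.
from itertools import cycle

CORE_BLOCK = "core"                        # common items: general literacy measure
MATRIX_BLOCKS = ["history_of_technology",  # rotated diagnostic item blocks
                 "design_process",
                 "manufacturing",
                 "technology_and_society"]

def assign_forms(student_ids):
    """Give every student the core block plus one matrix block, spiraled in order."""
    rotation = cycle(MATRIX_BLOCKS)
    return {sid: (CORE_BLOCK, next(rotation)) for sid in student_ids}

for sid, blocks in assign_forms(range(8)).items():
    print(sid, blocks)
```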

Obstacles to Implementation

With the notable exception of state testing conducted to fulfill the requirements of NCLB, assessments like the one described here usually have no direct consequences for students, teachers, or schools if student scores are low. Without the threat of punitive consequences for poor outcomes, teachers may be less inclined to spend time preparing students for the assessment, and students may be less inclined to take the test seriously.

A statewide assessment of technological literacy would also face resource constraints, especially today, when states are already spending considerable sums to meet the assessment and reporting requirements of NCLB. For example, the Maryland State Department of Education recently spent more than $5 million to develop and implement, within two years, new reading/language arts and mathematics assessments for 9th graders (M. Yakimowski-Srebnick, director of assessments, Council of Chief State School Officers, personal communication, June 16, 2005). Although some of the costs of an assessment, particularly those related to test administration, might be reduced by using computer-based testing methods (see Chapter 7), it would still be difficult to convince states that are already “feeling the pinch” of NCLB to add a statewide assessment of technological literacy. Furthermore, traditional paper-and-pencil tests alone generally do not provide an adequate measure of capabilities related to technological design. Thus, some states are beginning to explore nontraditional testing methodologies, such as computer simulations, to assess hands-on tasks and higher order thinking. Developing and testing these methods, however, requires considerable resources and time.

Turf issues within the academic community might introduce additional challenges for a statewide assessment. For instance, the mathematics- and science-education communities might argue that an assessment of technological literacy would divert attention and resources from their efforts to improve student learning in their content areas. Many educators might also be concerned about the amount of time taken away from instruction, above and beyond the time required to prepare for mandated assessments.

Another potential challenge for states would be providing opportunities for students with special needs to participate in the assessment. Adjustments would have to be made for students with physical or cognitive disabilities, limited proficiency in English, or a combination of these to ensure full and fair access to the test, and adjustments must be made on a case-by-case basis. For instance, a student with a visual impairment would not require the same test accommodation as someone with dyslexia, even though both have trouble reading small, crowded text. Common accommodations include extending time, having test items read aloud, and allowing a student to dictate rather than write answers. It is also important that accommodations be used only to offset the impact of disabilities unrelated to the knowledge and skills being measured (NRC, 1997). Some students with special needs might require alternative assessment approaches, such as evaluation of a collection of work (a portfolio), a one-on-one measure of skills and knowledge, or checklists filled out by persons familiar with a student’s ability to demonstrate specific knowledge or skills (Lehr and Thurlow, 2003); typically, a very small percentage of students, on the order of 1 percent, require alternative assessments.

Because a test score may not be a valid representation of the skills and achievement of students with disabilities, high-stakes decisions about these students should take into account other sources of evidence, such as grades, teacher recommendations, and other examples of a student’s work (NRC, 1999a).

Finally, because it is often difficult or impractical for states to collect meaningful data related to socioeconomic status, assessment results might inadvertently be reported in ways that reinforce negative racial, ethnic, or class stereotypes. Concerns about stereotyping might even arouse resistance to the implementation of a new assessment.

Sample Assessment Items[3]

1. Manufacturing changes the form of materials through a variety of processes, including separation (S), forming (F), and combining (C). Please indicate which process is associated most closely with each of the following:
   a. bending
   b. sawing
   c. gluing
   d. cutting

2. One common way of distinguishing types of manufactured goods is whether they are “durable” or “nondurable.” In your own words, explain two ways durable goods differ from nondurable goods. Then sort the following products into two groups, according to whether they are durable or nondurable: toothbrush, clothes dryer, automobile tire, candy bar, bicycle, pencil.

3. Manufacturing, like all aspects of technology, has had significant impacts on society, and not all of these have been anticipated or welcome. Innovations in manufacturing in the past quarter-century have included the use of robotics, automation, and computers. Using examples from only one manufacturing sector, describe some of the positive and negative impacts these manufacturing innovations have had on life in the United States.

[3] For a statewide assessment, items would be based on a framework derived from rigorously developed content standards. In this example, items were derived from content specified for grades 6 through 8 in the ITEA Standards for Technological Literacy.

Case 2: Matrix-Sample Assessment of 7th Graders

Description and Rationale

Case 2 involves a matrix-sample-based assessment of the technological literacy of 7th graders throughout the United States. Sample-based assessments differ from other types of assessments in that individual scores are rarely, if ever, reported. Instead, the focus is on discovering and tracking trends. In this case, one might want to follow changes over time in the average level of technological literacy of 7th graders. Sampling can also reveal geographic variations, such as state-by-state differences in scores, and variations among subgroups defined by gender, race/ethnicity, type of school, population density, poverty level, and other demographic variables, depending on the design of the sample.

In matrix sampling,[4] individual students are not tested on all test items. This is done mainly to accommodate the time constraints of test administration. Even though no single student sees every item, every question is administered to a large enough subset of the sample to ensure that the results are statistically valid. Another important feature of a matrix sample is that the large number of questions ensures that all three dimensions of technological literacy are assessed.

[4] Matrix sampling is described in more detail in Chapter 4 in the section on Measurement Issues.
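Whether a “large enough subset” of the sample sees each item is easy to check with back-of-the-envelope arithmetic. The numbers below are assumptions chosen only to show the calculation, not a proposed design.

```python
# Rough check that a matrix design gives each item enough respondents.
# Every number here is an assumption made for illustration.
sampled_students = 12_000        # hypothetical national 7th-grade sample
item_pool = 180                  # total items across all three dimensions
items_per_booklet = 45           # what one student can answer in the time allowed

booklets = item_pool // items_per_booklet           # 4 booklet forms partition the pool
students_per_item = sampled_students / booklets     # each item appears in one booklet
print(f"{booklets} booklets, about {students_per_item:.0f} students per item")
```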

The assessment described here is similar in structure to assessments conducted through the National Assessment of Educational Progress (NAEP). The rationale for conducting a national, sample-based assessment of students would be to draw public attention to the state of technological literacy in the country’s middle-school population. In the same way the release of NAEP results in science and mathematics encourages examination of how learning and teaching occur in those subjects, data on technological literacy would provide an impetus for a similar analysis related to the learning and teaching of technology. If the results indicated significant areas of weakness, they might spur education reform. Periodic administration of the assessment would provide valuable time-series data that could be used to monitor trends.

Purpose

A national sample assessment of technological literacy among U.S. 7th graders could provide a “snapshot” of technological literacy in this population that would be useful for policy makers. As with the statewide assessment described in Case 1, educators could use these data to get a sense of what students at this age know and what they can do with respect to technology. With a national assessment, however, administrators at the school, district, and state levels could determine how their students’ scores compared with student scores in other areas of the country, and national education officials could get a sense of the overall technological literacy of 7th graders. Unlike the assessment in Case 1, of course, the sample assessment would not provide information about individual students. This assessment would be a policy tool rather than a classroom tool.

If a national sample assessment were repeated periodically, it would show whether technological literacy was increasing, staying the same, or declining around the country. If similar assessments were conducted in other countries, it would be possible to make some cautionary comparisons across national boundaries. If the assessment revealed student attitudes about technology or technology careers, that information could be correlated with performance data to determine how attitudes influence the level of technological literacy.

Content Specifications

The ITEA Standards for Technological Literacy, the AAAS Benchmarks for Science Literacy, and the NRC National Science Education Standards would be useful starting points for determining the content of a national sample assessment, just as they would be for the statewide assessment described in Case 1. Each of these documents suggests “benchmark” knowledge and skills that a technologically literate individual should have.

… information about gender, ethnic, and geographic differences in technology-related knowledge, capabilities, and critical thinking and decision making. And surveys of broad populations could also provide data on public attitudes toward technology. A number of measurement methods, strategies, and practices have been developed for studying broad populations. Deciding which of these to use will depend on the population of interest and the goals of the study.

Obstacles to Implementation

In recent decades, most measurements of all segments of the adult population have been conducted through large-scale sample surveys, mostly telephone samples and interviews. In addition, a solid body of research has accumulated on the best methods of constructing questionnaires and analyzing their results. In the last decade, however, resistance to telephone-based surveys has been growing, and response rates are often unacceptably low. As a result, researchers of broad populations now rely increasingly on online panels, which raise questions about probability-based recruitment versus online participants’ self-selection. Some researchers have turned to surveys of broad populations that are co-located, such as patrons of science museums, but these samples may be biased toward people familiar with both science and technology. Another difficulty for survey designers is that some types of knowledge questions quickly lose currency because of rapid advancements in technology, which can make changes over time difficult to track.

Sample Assessment Items

For Technology Consumers

1. You have bought a new home entertainment system. The system has several large components, not including the speakers, as well as a number of connecting wires and plugs, batteries, and a remote-control device. When you unpack the system at home, you discover that the instruction manual for assembling it is missing. Which of the following best reflects your approach to this problem?

   a. I have a good idea how systems like this work, so I would be able to assemble it without instructions or outside help.
   b. I do not have experience with this exact type of system, but I would be comfortable trying to figure out how everything fits together through a process of trial and error.
   c. I do not have experience with this type of system and would search the World Wide Web for a copy of the instruction manual or to get other online help.
   d. I do not have experience with this type of system and would not feel comfortable searching the Web for help.

2. All technologies have consequences not intended by their designers, and some of these consequences are undesirable. Below is a list of consequences some people associate with cell phones. For each, please indicate the level of concern you have (no concern at all; a little concern; a moderate amount of concern; a lot of concern).
   a. Possible negative health effects, including cancer.
   b. Loss of enjoyment of quiet in public places, such as restaurants.
   c. Car accidents caused by drivers using cell phones while on the road.
   d. Possible theft of personal data by cell-phone hackers.

For Policy-Attentive Citizens

1. To what extent do you agree or disagree that the following applications of technology pose a risk to society? (Answer choices: completely agree; agree; neither agree nor disagree; disagree; completely disagree; not sure.)[6]
   a. The use of biotechnology in the production of foods (for example, to increase their protein content, make them last longer, or enhance their flavor).

[6] This question and answers a and b are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).

   b. Cloning human cells to replace the damaged cells that are not fulfilling their function well.
   c. The computerized collection and sorting of personal data by private companies or the government in order to catch terrorists.
   d. The placement under the skin of small computer chips that enable medical personnel to retrieve your personal health information.

2. Please indicate for each of the following sentences the extent to which you believe it is absolutely true, probably true, probably false, or absolutely false. If you do not know about or are not sure about a specific question, check the “Not Sure” box.[7]
   a. Antibiotics kill viruses as well as bacteria.
   b. Ordinary tomatoes, the ones we normally eat, do not have genes, whereas genetically modified tomatoes do.
   c. The greenhouse effect is caused by the use of carbon-based fuels, like gasoline.
   d. All pesticides and chemical products used in agriculture cause cancer in humans.

For the General Public

1. Please indicate the extent to which you believe the following statements to be absolutely true, probably true, probably false, or absolutely false.[8]
   a. Nuclear power plants destroy the ozone layer.
   b. All radioactivity is produced by humans.
   c. The U.S. government regulates the World Wide Web to ensure that the information people retrieve is factually correct.
   d. Using a cordless phone while in the bathtub creates the possibility of being electrocuted.[9]

[7] This question and answers a, b, c, and d are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).
[8] This question and answers a and b are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).
[9] This answer is adapted from ITEA, 2004b.

Case 5: Assessments for Visitors to Museums and Other Informal-Learning Institutions

Description and Rationale

Case 5 describes an assessment of technological literacy for visitors to a museum, science center, or other informal-learning institution, where participants set their own learning agendas and determine the duration and selection of content; this is called “free-choice learning.” Some 60 million people are served by public science-technology centers in the United States every year (ASTC, 2004). This number is consistent with NSB survey data indicating that 61 percent of adult Americans visit an informal science institution (e.g., a zoo, aquarium, science center, natural history museum, or arboretum) at least once a year (NSB, 2000). Typically, visitors are children attending as part of a family or school group (which often includes teachers) or adults attending alone or in groups without children. Because of the transient nature of the population of interest (visitors usually spend no more than a few hours in these institutions), the assessment would rely on sampling techniques, although focus-group-style assessments might also be used.

The principal rationale for conducting assessments in informal-education settings is to gain insights into the type and level of technological literacy among a unique (though not random) cross-section of the general public. In addition, because visitors to these facilities are often surrounded by and interact with three-dimensional objects representing aspects of the designed world, informal-learning locations present opportunities for performance-related assessments. The sheer volume of visitors, particularly at mid-sized and large institutions, provides an additional incentive.

Purpose

Organizations that provide informal-learning opportunities, including museums, book and magazine publishers, television stations, websites, and continuing-education programs offered by colleges and universities, all provide information about technology but generally have limited knowledge of the level of understanding or interest of their intended audiences. For this diverse group of institutions and companies, assessments of technological knowledge and attitudes would provide a context for making programming and marketing decisions.

For example, a science center might want to involve members of the non-expert public in discussions of how using technology to enhance national security might affect privacy. For these discussions to be effective, the center would have to know the nature and extent of participants’ understanding (and misunderstanding) of various technologies, such as the Internet and voice- and face-recognition software, as well as their grasp of the nature of technology (e.g., the concepts of trade-offs and unintended consequences). The center might also benefit from an assessment of attitudes about the topic. Knowing that two-thirds of potential participants feel powerless to influence government decisions about deploying such technology, for instance, might influence the type of background information the center provides prior to a discussion.

In addition to serving as planning tools, assessments could be used to determine what members of the public take away from their experiences—new knowledge and understanding (as well as, possibly, misunderstanding), new skills and confidence in design-related processes, and new or different concerns and questions about specific technologies or technology in general. These findings, in turn, could be used to adjust and improve existing or future programs, exhibits, or marketing.

Apart from the direct impact of assessments of technological literacy on individual institutions that want to attract more visitors and improve the quality of their outreach to the public, the assessments might be of wider interest. The formal education system in the United States evolved at a time when the body of knowledge—the set of facts, reasoning abilities, and hands-on skills that defined an “educated” person—was small. A decade or so of formal education was enough to prepare most people to use and understand the technologies they would encounter throughout their lives. Today, the pace of technological change has increased, and individuals are being called upon to make important technological decisions, including career changes required by new technologies, many times in their lives. For this reason, “lifelong learning,” which can take place formally in settings like community colleges and the workplace, or informally through independent reading, visits to museums and science centers, or exposure to radio, television, and the Internet, has become critical to technological literacy. But little is known about how well informal, or free-choice, learning promotes technological understanding. This information would be of interest not only to the institutions themselves but also to the publics they serve, funders, policy makers, and the education research community.

Content

The three dimensions of technological literacy, as described in Technically Speaking, could provide a reasonable starting point for determining content relevant to an assessment of this population. The ITEA standards should also be consulted, particularly the standards related to the nature of technology. To a great extent, however, the content of the assessment would be determined by the specific technology or technology-related concerns at issue. That is, just as a student assessment should be aligned with relevant standards and curriculum, an assessment of visitors to an informal-education institution should be aligned with the subject matter and goals of the program or exhibit. In situations where the assessment involves a hands-on or design component, assessment developers could use a rubric for judging design-process skills; the model developed by Custer et al. (2001) might be useful here.

Performance Levels

Assessments of visitors to informal-learning institutions would be most useful for identifying a spectrum of technological literacy rather than specific levels of literacy. Changes in the spectrum (for example, movement of the entire curve up or down, or changes in its shape) would provide valuable information. Correlations among the three dimensions and with attitudes would be of special interest. Does a high level of knowledge correlate with critical thinking and decision making? With attitudes? How are capabilities related to knowledge and attitudes? Does literacy in one aspect of technology translate to literacy in other areas? These are just a few of the questions that could be answered.
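Questions like these reduce, at the simplest level, to correlations among subscores. The sketch below shows how such correlations might be computed once visitor-level subscores exist; the data and the four column names are fabricated for illustration.

```python
# Illustrative correlation of visitor subscores with pandas. The subscores are
# fabricated; an actual study would use scored responses from the assessment.
import pandas as pd

visitors = pd.DataFrame({
    "knowledge":         [14, 9, 18, 11, 16, 7, 13, 15],
    "capability":        [10, 6, 15, 9, 12, 5, 11, 13],
    "critical_thinking": [12, 8, 17, 10, 14, 6, 12, 14],
    "attitude":          [3.5, 2.0, 4.5, 3.0, 4.0, 1.5, 3.0, 4.0],
})

# Pairwise Pearson correlations among the three dimensions and attitude.
print(visitors.corr().round(2))

# One of the specific questions raised above: knowledge vs. critical thinking.
print("knowledge vs. critical thinking:",
      round(visitors["knowledge"].corr(visitors["critical_thinking"]), 2))
```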

Administration and Logistics

Many informal-learning institutions are open 300 days a year or more, including weekends; thus, there would be fewer constraints on content selection and assessment methodologies than in formal-education settings, such as classrooms, where time, space, and trained staff are all at a premium. Practically all testing methods would work for this population: interviews, multiple-choice questions, constructed-response items, performance items, and focus groups. Assessments could also measure changes in visitors’ understanding of technology or technology-related exhibits over time. Short-term understanding could be measured by pre- and post-visit surveys; long-term understanding might be measured by e-mail or telephone follow-up. A variety of methods could be used to enable museums and other institutions to compare the effects of different exhibit formats and content on specific measures of technological literacy (Miller, 2004b).

Many informal-learning institutions routinely conduct visitor surveys for demographic and marketing purposes, and many also conduct extensive cognitive and affective visitor assessments for front-end, formative, and summative evaluations of exhibitions (Taylor and Serrell, 1991). Some larger institutions even have staff members or consultants capable of performing assessments of the type that could gauge technological literacy, although they rarely have the funds to carry out such assessments.

Obstacles to Implementation

Obtaining a sample of visitors that represents the diversity—in income, education, and other factors—of the U.S. population as a whole would be difficult in the typical informal-learning setting. The population represented by visitors to these institutions is undoubtedly biased in favor of the science-attentive, as opposed to the science-“inattentive” (Miller, 1983b). In addition, compared to the population at large, patrons of science centers, zoos, and related institutions tend to have higher socioeconomic status, although institutions in urban areas attract more diverse patrons. For example, at the New York Hall of Science, in Queens, 38 percent to 68 percent of family visitors are non-Caucasian (depending on the season), probably because of the location of the institution and the diversity of the staff (Morley and Associates, unpublished). In any case, assessments should be conducted in ways that take into account potential sample bias, and pre-surveys might be used to identify those biases.

Another potential obstacle to assessment in informal-learning institutions is the reluctance of visitors to take part in structured interviews, surveys, or focus groups. Given the relatively short duration of a typical visit, the desire of many patrons to move freely among exhibits of their choosing, and the fact that admission is usually paid, this reluctance is understandable.

Offering incentives for participation, such as token gifts or free admission, may help to lower this barrier. Exhibit designs that build in opportunities for assessment might also be helpful. For example, assessment designers might consider using technologies that are portable (e.g., PDAs, electronic tablets) and can be programmed to select assessment items based on the visitor’s characteristics and physical location in an exhibit space; a minimal sketch of this kind of item selection follows the sample items below.

Sample Test Items[10]

1. Give an example of a type of technology you like.
2. Give an example of a type of technology you don’t like.
3. On a scale of 1 to 100, how much do you think technology affects people’s lives?
4. On a scale of 1 to 100, how much of a role do you think people play in shaping the technologies we have and use?
5. Give an example of how people like you and me shape technologies.
6. Imagine that you work for Coca-Cola or Pepsi and you are part of the team that came up with a new 20-ounce bottle. What steps did you go through?
7. Imagine that you are an inventor, and a friend of yours asks you to think about an idea. What steps would you go through to work on this idea?
8. Do you ever do things that involve creating or designing something, testing it, modifying how you do it, evaluating how someone uses it, and considering the consequences? Give an example.

[10] Test items are adapted from a formative evaluation conducted for the Oregon Museum of Science and Industry by People, Places & Design Research, Northampton, Mass. Used with permission.
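As a rough illustration of the hand-held device idea mentioned before the sample items, the sketch below selects items tagged by exhibit area and audience type. The item bank, tags, and prompts are invented; a real system would draw on the institution’s own exhibit map and item pool.

```python
# Sketch of location- and audience-aware item selection for a portable device.
# The item bank, tags, and prompts are invented for illustration.
import random

ITEM_BANK = [
    {"id": 1, "exhibit": "design_lab",     "audience": "family", "prompt": "What steps did you try first?"},
    {"id": 2, "exhibit": "design_lab",     "audience": "adult",  "prompt": "What trade-offs did the designers face?"},
    {"id": 3, "exhibit": "communications", "audience": "family", "prompt": "Name a technology you used to get here today."},
    {"id": 4, "exhibit": "communications", "audience": "adult",  "prompt": "Who decides how this technology is used?"},
]

def select_items(exhibit: str, audience: str, n: int = 1) -> list:
    """Return up to n items tagged for the visitor's location and group type."""
    matches = [item for item in ITEM_BANK
               if item["exhibit"] == exhibit and item["audience"] == audience]
    return random.sample(matches, min(n, len(matches)))

for item in select_items("design_lab", "family"):
    print(item["prompt"])
```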

References

AAAS (American Association for the Advancement of Science). 1993. Benchmarks for Science Literacy. Project 2061. New York: Oxford University Press.
AAAS. 1998. Blueprints for Reform: Science, Mathematics, and Technology Education. New York: Oxford University Press.
ASTC (Association of Science-Technology Centers). 2004. ASTC Sourcebook of Science Center Statistics 2004. Washington, D.C.: ASTC.
Broughman, S.P., and K.W. Pugh. 2004. Characteristics of Private Schools in the United States: Results from the 2001–2002 Private School Universe Study, Table 1. Available online at: http://nces.ed.gov/pubs2005/2005305.pdf (April 11, 2006).
Bybee, R.W. 1997. Achieving Scientific Literacy: From Purposes to Practices. Portsmouth, N.H.: Heinemann.
Custer, R.L., G. Valesey, and B.N. Burke. 2001. An assessment model for a design approach to technological problem solving. Journal of Technology Education 12(2): 5–20.
DoEd (U.S. Department of Education). 2003. The Nation’s Report Card: Science 2000. NCES 2003-453. Institute of Education Sciences, National Center for Education Statistics. Washington, D.C.: DoEd.
Friedman, S., S. Dunwoody, and C. Rogers, eds. 1986. Scientists and Journalists: Reporting Science as News. New York: Free Press.
Friedman, S., S. Dunwoody, and C. Rogers. 1999. Communicating Uncertainty. Mahwah, N.J.: Lawrence Erlbaum Associates.
ITEA (International Technology Education Association). 2000. Standards for Technological Literacy: Content for the Study of Technology. Reston, Va.: ITEA.
ITEA. 2004a. Measuring Progress: A Guide to Assessing Students for Technological Literacy. Reston, Va.: ITEA.
ITEA. 2004b. The Second Installment of the ITEA/Gallup Poll and What It Reveals as to How Americans Think About Technology. A Report of the Second Survey Conducted by the Gallup Organization for the International Technology Education Association. Available online at: http://www.iteaconnect.org/TAA/PDFs/GallupPoll2004.pdf (October 5, 2005).
Lehr, C., and M. Thurlow. 2003. Putting It All Together: Including Students with Disabilities in Assessment and Accountability Systems. NCEO Policy Directions, Number 16/October 2003. Available online at: http://education.umn.edu/nceo/OnlinePubs/Policy16.htm (February 23, 2006).
MCREL (Mid-Continent Research for Education and Learning). 2004. Content Knowledge, 4th ed. Available online at: http://mcrel.org/standards-benchmarks/ (January 13, 2006).
Miller, J.D. 1983a. The American People and Science Policy: The Role of Public Attitudes in the Policy Process. New York: Pergamon Press.
Miller, J.D. 1983b. Scientific literacy: a conceptual and empirical review. Daedalus 112(2): 29–48.
Miller, J.D. 1986. Reaching the Attentive and Interested Publics for Science. Pp. 55–69 in Scientists and Journalists: Reporting Science as News, edited by S. Friedman, S. Dunwoody, and C. Rogers. New York: Free Press.
Miller, J.D. 1987. Scientific Literacy in the United States. Pp. 19–40 in Communicating Science to the Public, edited by D. Evered and M. O’Connor. London: John Wiley and Sons.
Miller, J.D. 1992. From Town Meeting to Nuclear Power: The Changing Nature of Citizenship and Democracy in the United States. Pp. 327–328 in The United States Constitution: Roots, Rights, and Responsibilities, edited by A.E.D. Howard. Washington, D.C.: Smithsonian Institution Press.

Miller, J.D. 1995. Scientific Literacy for Effective Citizenship. Pp. 185–204 in Science/Technology/Society as Reform in Science Education, edited by R.E. Yager. New York: State University of New York Press.
Miller, J.D. 1998. The measurement of civic scientific literacy. Public Understanding of Science 7: 1–21.
Miller, J.D. 2000. The Development of Civic Scientific Literacy in the United States. Pp. 21–47 in Science, Technology, and Society: A Sourcebook on Research and Practice, edited by D.D. Kumar and D. Chubin. New York: Plenum Press.
Miller, J.D. 2004a. Public understanding of and attitudes toward scientific research: what we know and what we need to know. Public Understanding of Science 13: 273–294.
Miller, J.D. 2004b. The Evaluation of Adult Science Learning. ASP Conference Series, vol. 319. Washington, D.C.: National Aeronautics and Space Administration.
Miller, J.D., and L. Kimmel. 2001. Biomedical Communications: Purposes, Audiences, and Strategies. New York: Academic Press.
Miller, J.D., and R. Pardo. 2000. Civic Scientific Literacy and Attitude to Science and Technology: A Comparative Analysis of the European Union, the United States, Japan, and Canada. Pp. 81–129 in Between Understanding and Trust: The Public, Science, and Technology, edited by M. Dierkes and C. von Grote. Amsterdam: Harwood Academic Publishers.
Miller, J.D., R. Pardo, and F. Niwa. 1997. Public Perceptions of Science and Technology: A Comparative Study of the European Union, the United States, Japan, and Canada. Madrid: BBV Foundation.
Morley and Associates. Unpublished. Visitor survey for the New York Hall of Science, 2005.
NAE (National Academy of Engineering) and NRC (National Research Council). 2002. Technically Speaking: Why All Americans Need to Know More About Technology. Washington, D.C.: National Academy Press.
NRC (National Research Council). 1996. National Science Education Standards. Washington, D.C.: National Academy Press.
NRC. 1997. Educating One and All: Students with Disabilities and Standards-Based Reform, edited by L.M. McDonnell, M.J. McLaughlin, and P. Morison. Washington, D.C.: National Academy Press.
NRC. 1999a. Recommendations from High Stakes Testing for Tracking, Promotion, and Graduation, edited by J.P. Heubert and R.M. Hauser. Washington, D.C.: National Academy Press.
NRC. 1999b. Being Fluent with Information Technology. Washington, D.C.: National Academy Press.
NSB (National Science Board). 2000. Science and Engineering Indicators 2000, vol. 2. Arlington, Va.: NSB.
Pardo, R., and J.D. Miller. 2003. U.S. Environmental and Biotechnology Study, 2003. Unpublished questionnaire.
Shen, B.S.P. 1975. Science Literacy and the Public Understanding of Science. Pp. 44–52 in Communication of Scientific Information, edited by S.B. Day. New York: Karger.
Taylor, S., and B. Serrell. 1991. Try It!: Improving Exhibits Through Formative Evaluation. Washington, D.C.: Association of Science-Technology Centers, and New York: New York Hall of Science.
U.S. Census Bureau. 2005. Annual Estimates of the Population for the United States and States, and for Puerto Rico: April 1, 2000 to July 1, 2005 (NST-EST2005-01): Table 1. Available online at: http://www.census.gov/popest/states/tables/NST-EST2005-01.xls (April 11, 2006).

Young, B.A. 2003a. Public school student, staff, and graduate counts by state: School year 2001–02. Education Statistics Quarterly 5(1): Table 2. Available online at: http://nces.ed.gov/programs/quarterly/vol_5/5_2/q3_4_t1.asp#Table-2 (April 11, 2006).
Young, B.A. 2003b. Public school student, staff, and graduate counts by state: School year 2001–02. Education Statistics Quarterly 5(1): Table 1. Available online at: http://nces.ed.gov/programs/quarterly/vol_5/5_2/q3_4_t1.asp#Table-1 (April 11, 2006).