6
From Theory to Practice: Five Sample Cases

Much of the discussion about assessing technological literacy in this report is by necessity general and applicable to many different settings. But in the real world, assessments must be done on a case-by-case basis, and each assessment will be tailored to fulfill a specific purpose. Thus, it is useful to see how the general principles might apply in particular situations. In this chapter, examples are given for five different settings, ranging from classrooms throughout a state to a museum or other informal-learning institution. Two of the examples deal with assessing students, one with assessing teachers, and two with assessing segments of the general population. The choice of cases was influenced considerably by the committee’s charge, which was focused on these same three populations.

Many of the sample cases inform one or more of the recommendations in Chapter 8. For example, Case 2, a national sample-based assessment, addresses some of the same issues designers of the National Assessment of Educational Progress, Trends in International Mathematics and Science Study, and Programme for International Student Assessment may face in adapting those instruments to measure technological literacy (Recommendations 1 and 2). Case 3, an assessment of teachers, addresses concerns that will undoubtedly arise as researchers develop and pilot test instruments for assessing pre-service and in-service teachers (Recommendation 5). Cases 4 and 5, assessments of broad populations and informal-learning institutions, address the committee’s suggestion that efforts to assess the technological literacy of out-of-school adults be expanded (Recommendation 6). Although none of the recommendations specifically addresses Case 1, a statewide census assessment of students, the committee believes state leaders in education and other readers will benefit from seeing how this type of testing might play out.

Beyond the call for modified or new assessments, the discussion of determining content for an assessment of teachers (Case 3) illustrates the need for careful development of assessment frameworks (Recommendation 11). And the cases related to broad populations (Case 4) and visitors to a museum or other informal-education institution (Case 5) suggest the importance of new measurement methods (Recommendation 10).

Even though the sample cases touch on many of the issues facing designers of assessments, they are meant to be descriptive rather than prescriptive. Each case includes a rationale and purpose for the assessment, suggests a source for deriving the assessment content, proposes a way of thinking about performance levels, and addresses some administrative, logistical, and implementation issues. The committee intends this chapter to be a springboard for discussion about designing and carrying out assessments of particular groups and for particular purposes.

When reviewing the examples in this chapter, readers should keep in mind the discussion of the design process in Chapter 3. Design is a process in which experience helps. When experienced designers are faced with a problem, they immediately ask themselves if they have encountered similar problems before and, if so, what the important factors were in those cases. The committee adopted the same approach, beginning with a review and analysis of existing studies and instruments, the identification and incorporation of useful aspects of those designs into the sample design, the identification of needs that had not been met by existing designs, and attempts to devise original ways to meet those needs. Anyone who intends to design an assessment of technological literacy will have to go through a similar process.

During the committee’s deliberations, considerable time was spent discussing the value of including a sample assessment for an occupational setting. Ultimately, the committee decided not to include an occupational assessment for two reasons. First, the goal of most technical training and education for specific occupations is to provide a high level of skill in a limited set of technologies (see Box 2-2), rather than to encourage proficiency in the three dimensions of technological literacy spelled out in Technically Speaking. Second, two industry participants in a data-gathering workshop (one from the food industry and one from the automotive industry) expressed the view that a measure of overall technological literacy would be of little value to employers, who are more concerned with workers’ job-related skills.1

Case 1:
Statewide Grade-Level Assessment

Description and Rationale

In Case 1, the target population is students in a particular state and in particular grades. The exact grades are not important, but for the sake of illustration we assume that they include one elementary grade (3rd, 4th, or 5th grade), one middle school grade (6th, 7th, or 8th grade), and one high school grade (9th, 10th, 11th, or 12th). So, for example, the test population might consist of all 4th-, 8th-, and 11th-graders in Kentucky public schools.

A statewide assessment has many similarities to large-scale national assessments and small, school-based assessments. But there are also important differences. For instance, a statewide assessment generally falls somewhere between a national assessment and a school-based assessment in terms of the timeliness of results and the breadth and depth of knowledge covered. But the most important difference is that a statewide assessment provides an opportunity for assessors to calculate individual, subgroup, and group-level scores. In addition, aggregate scores can be determined at the state, district, school, and classroom levels. Disaggregated scores can be determined for student subgroups, according to variables such as gender, race/ethnicity, and socioeconomic status.
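
To make the aggregation idea concrete, the sketch below shows how individual scale scores might be rolled up and broken out. The table layout and column names are invented for illustration, not drawn from any actual state data system.

    import pandas as pd

    # Hypothetical individual-level results from a statewide administration.
    scores = pd.DataFrame({
        "district":    ["D1", "D1", "D2", "D2"],
        "school":      ["S1", "S2", "S3", "S3"],
        "grade":       [4, 4, 8, 8],
        "gender":      ["F", "M", "F", "M"],
        "scale_score": [152, 147, 161, 139],
    })

    # Aggregate scores at successively finer levels.
    state_mean    = scores["scale_score"].mean()
    district_mean = scores.groupby("district")["scale_score"].mean()
    school_mean   = scores.groupby(["district", "school"])["scale_score"].mean()

    # Disaggregate by subgroup (here, gender within grade).
    subgroup_mean = scores.groupby(["grade", "gender"])["scale_score"].mean()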

The assessment in this sample case is in some ways—such as the targeted test group and how the data are analyzed—similar to assessments currently used by states to meet the requirements of the No Child Left Behind Act of 2001 (NCLB). To comply with this legislation, states are required to test students’ proficiency in reading/language arts and mathematics annually in grades 3 through 8 and at least once in grades 10 through 12. States must also include assessments of science proficiency in three designated grade spans by 2007. Results are generally reported within about four months of administration of the assessment.

1 Some proponents of technological literacy, including the authoring committee of Technically Speaking, have suggested that there may be at least an indirect link between general technological literacy and performance in the workplace (NAE and NRC, 2002, pp. 40–42).


The rationale for a statewide assessment of technological literacy is to encourage changes in standards, curriculum, and teacher education to support the goal of increasing technological literacy for all students. With the possible exception of Massachusetts, states do not currently have the curricular building blocks in place to justify a statewide assessment of technological literacy. However, an assessment on such a large scale conducted in even a single state could demonstrate the feasibility and value of determining what students know and can do with respect to technology and could provide momentum for changes in standards, curriculum, and teacher education across the country.

Purpose

In this example, the primary purpose of the statewide assessment of technological literacy is to improve teaching and learning related to technology. Typically, statewide assessments serve a powerful accountability function, providing data that can be used to track student achievement trends by school, school district, and the state as a whole. In an area of content as new as technological literacy, however, the goal of improving teaching and learning looms large. As technological literacy becomes more established as a school subject, assessment data may increasingly be used for accountability purposes.

In this sample case, assessment results can be used to inform policy makers at the state and district levels and provide data for instructional leaders at the district, school, and classroom levels. The assessment could be designed either to provide a snapshot of technological literacy in the target populations or to provide data on specific content standards for technological literacy, which in turn may be aligned with national standards, such as those developed by ITEA (2000).

A statewide assessment of technological literacy could not only tell educators what students at these age levels know and can do with respect to technology, but could also provide information related to specific standards. For example, educators could determine whether there was a difference in performance between boys and girls on ITEA Standard 19, which relates to understanding and being able to select and use manufacturing technologies. In short, data from such an assessment would enable educators to answer a wide variety of questions useful for improving teaching and learning.


Content

The ITEA Standards for Technological Literacy (ITEA, 2000), the AAAS Benchmarks for Science Literacy (AAAS, 1993), the NRC National Science Education Standards (NRC, 1996), and especially state-specific content standards would be logical starting points for determining the content of the assessment. All of these documents suggest “benchmark” knowledge and skills that a technologically literate individual should have. To be useful for an assessment, however, the benchmarks must be “operationalized”; that is, the most important technology concepts and capabilities must first be identified and then made specific enough to clarify the range of material to be covered in the assessment. This is a step in the process of developing an assessment framework for technological literacy, as discussed in Chapter 3.

In addition, existing assessments may be reviewed to determine if any items are aligned with, and measure, the operationalized benchmarks. If not, technology-related content may have to be added. A review of the general guidelines for student assessments developed by ITEA may also be helpful (ITEA, 2004a).

The assessment framework must specify the emphasis, or weight, given to items in each dimension of technological literacy. The weighting process must be based on many factors, including the purpose of the assessment, the time allotted for testing, the developers’ views of the importance of each dimension of technological literacy, and expert judgments about reasonable expectations for students in these grades. Table 6-1 shows how the weighting process might work.

Performance Levels

In this sample case, the state would derive a scale score for each student. If similar technology-related concepts were tested at more than one grade level (e.g., manufacturing processes for grades 3–5 and 6–8), the state might use cross-grade vertical scaling, which would enable scorers to compare the mastery of material by students at different grade levels. With within-grade scaling, which is more common, performance in each grade would be examined independently.
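
The difference between the two scaling approaches can be illustrated with a toy calculation; a simple linear standardization stands in here for the item-response-theory machinery an operational program would actually use, and all numbers are invented.

    import numpy as np

    def within_grade_scale(raw, mean=150.0, sd=10.0):
        # Standardize raw scores within one grade to a reporting scale.
        z = (raw - np.mean(raw)) / np.std(raw)
        return mean + sd * z

    grade4_scaled = within_grade_scale(np.array([12, 18, 25, 31, 22]))
    grade8_scaled = within_grade_scale(np.array([15, 21, 28, 34, 26]))

    # Because each grade is scaled independently, a 155 in grade 4 and a
    # 155 in grade 8 are not directly comparable. Vertical scaling would
    # instead link the grades through a set of common items so that one
    # score scale spans both grades.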

To provide scores in a useful form for policy makers and instructional leaders, the state board of education might establish performance levels to group students according to subjective achievement standards for increasingly sophisticated levels of performance (e.g., novice, competent, proficient, and expert). Performance-level descriptors must realistically capture what a child of a given age might know and be able to do.
TABLE 6-1 Sample Weighting for Grades 6–8 Items Assessing Knowledge, Capability, and Critical Thinking and Decision Making Related to Manufacturing Technologies, by Percentage of Items Devoted to Topic

                                              Benchmark Topics
                                 Manufacturing  Manufacturing  Chemical       Materials
                                 Systems        Goods          Technologies   Use
Knowledge                        20 percent     10 percent     10 percent     10 percent
Capability                       10 percent     10 percent     —              —
Critical Thinking and
  Decision Making                —              10 percent     10 percent     10 percent

SOURCE: Adapted from ITEA, 2000.
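
One way to put such a weighting table to work is to translate the percentages into item counts for a test form. The sketch below assumes a hypothetical 60-item form; the weights are those shown in Table 6-1.

    # (dimension, topic): percent of items, from Table 6-1
    weights = {
        ("Knowledge", "Manufacturing Systems"): 20,
        ("Knowledge", "Manufacturing Goods"): 10,
        ("Knowledge", "Chemical Technologies"): 10,
        ("Knowledge", "Materials Use"): 10,
        ("Capability", "Manufacturing Systems"): 10,
        ("Capability", "Manufacturing Goods"): 10,
        ("Critical Thinking", "Manufacturing Goods"): 10,
        ("Critical Thinking", "Chemical Technologies"): 10,
        ("Critical Thinking", "Materials Use"): 10,
    }
    assert sum(weights.values()) == 100

    TOTAL_ITEMS = 60  # hypothetical form length
    item_counts = {cell: round(TOTAL_ITEMS * pct / 100)
                   for cell, pct in weights.items()}
    # e.g., the 20 percent cell yields 12 knowledge items on
    # manufacturing systems in a 60-item form.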


Reporting could be done either on the overall assessment or on separate subscales or dimensions of the assessment. If separate subscales or dimensions were used, separate performance levels could be defined for each. If the idea is to report subscale- or dimension-specific scores, the assessment must be designed so that the items in each subscale or dimension support reliable scoring.
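
A minimal sketch of how reported scale scores might be mapped to such performance levels, assuming illustrative cut scores (in practice, cuts would come from a formal standard-setting process):

    import bisect

    CUTS   = [140, 155, 170]   # hypothetical boundaries on the scale
    LEVELS = ["novice", "competent", "proficient", "expert"]

    def performance_level(scale_score):
        # A score equal to a cut falls into the higher level.
        return LEVELS[bisect.bisect_right(CUTS, scale_score)]

    performance_level(148)  # -> "competent"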

Once state and local educators received descriptive and diagnostic data, they could interpret the results in context and identify achievement gaps. Based on diagnostic information, educators could determine which standards had been mastered by most students and which subjects required more or better instruction. Based on assessment results, educators could then focus their instruction and professional development practices to improve student learning.

If the assessment were given regularly, perhaps biennially, the resulting data would provide a measure of whether the level of technological literacy had increased, stayed the same, or declined. Results over time could reveal trends among subgroups of students. If the assessment included items measuring student attitudes and opinions about technology or technology careers, that information could be correlated with performance data. In this way, the data could be used by K–12 educators to assist with course planning and career counseling.


Administration and Logistics

A statewide assessment would be administered to all students in three grade levels, one elementary (grades 3–5), one middle school (grades 6–8), and one high school (grades 9–12), in every school in the state. The assessment should take no more than two sessions, lasting no more than 90 minutes each, and should use both census and matrix-sampling techniques.2 Combining census and matrix-sampling approaches would have several advantages. It would reduce the time required to administer the assessment, because not every student would see every question. By making sure all students were presented with a core set of items (the census portion of the instrument), a general measure of technological literacy could be obtained.

The matrix portion of the assessment would enable the collection of additional diagnostic measures of performance related to specific areas of content, such as student knowledge of the influence of technology on history. The assessment should include a mix of multiple-choice, constructed-response, and design-based performance items, possibly including simulations.
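
The sketch below shows one simple way the census-plus-matrix design might be realized: every booklet carries the common core, the matrix pool is split into blocks, and booklets are "spiraled" through a classroom so each form reaches a comparable number of students. Item counts and block sizes are invented.

    CORE_ITEMS  = [f"core_{i}" for i in range(20)]    # census portion: seen by all
    MATRIX_POOL = [f"matrix_{i}" for i in range(60)]  # rotated diagnostic items
    BLOCK_SIZE  = 15
    N_BOOKLETS  = len(MATRIX_POOL) // BLOCK_SIZE      # 4 booklet forms

    booklets = [CORE_ITEMS + MATRIX_POOL[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
                for b in range(N_BOOKLETS)]

    def assign_booklet(student_index):
        # Spiraling: hand out forms in rotation down the roster.
        return booklets[student_index % N_BOOKLETS]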

Teachers would require rudimentary training to administer the test, particularly the hands-on design or online computer-based components. Administrators and policy makers would also have to be educated about the dimensions of technological literacy, the purpose of the assessment, and potential uses of the data obtained from the assessment.

Obstacles to Implementation

With the notable exception of state testing conducted to fulfill the requirements of NCLB, assessments like the one described here usually have no direct consequences for students, teachers, or schools if student scores are low. Without the threat of punitive consequences for poor outcomes, teachers may be less inclined to spend time preparing students for the assessment, and students may be less inclined to take the test seriously.

A statewide assessment of technological literacy would also face resource constraints, especially today, when states are already spending considerable sums to meet the assessment and reporting requirements of NCLB. For example, the Maryland State Department of Education recently spent more than $5 million to develop and implement, within two years, new reading/language arts and mathematics assessments for 9th graders (M. Yakimowski-Srebnick, director of assessments, Council of Chief State School Officers, personal communication, June 16, 2005). Although some of the costs of an assessment, particularly those related to test administration, might be reduced by using computer-based testing methods (see Chapter 7), it would still be difficult to convince states that are already “feeling the pinch” of NCLB to add a statewide assessment of technological literacy.

2 Matrix sampling and census testing are explained in Chapter 4 in the section on Measurement Issues.

Furthermore, traditional paper-and-pencil tests alone generally do not provide an adequate measure of capabilities related to technological design. Thus, some states are beginning to explore nontraditional testing methodologies, such as computer simulations, to assess hands-on tasks and higher order thinking. Developing and testing these methods, however, requires considerable resources and time.

Turf issues within the academic community might introduce additional challenges for a statewide assessment. For instance, the mathematics and science-education communities might argue that an assessment of technological literacy would divert attention and resources from their efforts to improve student learning in their content areas. Many educators might be concerned about the amount of time taken away from instruction, above and beyond the time required to prepare for mandated assessments.

Another potential challenge for states might be providing opportunities for students with special needs to participate in the assessment. Adjustments would have to be made for students with physical or cognitive disabilities, limited proficiency in English, or a combination of these to ensure full and fair access to the test. Adjustments must be made on a case-by-case basis. For instance, a student with a visual impairment would not require the same test accommodation as someone with dyslexia, even though both have trouble reading small, crowded text. Common accommodations include extending time, having test items read aloud, and allowing a student to dictate rather than write answers. It is also important that accommodations be used only to offset the impact of disabilities unrelated to the knowledge and skills being measured (NRC, 1997).

Some students with special needs might require alternative assessment approaches, such as evaluation of a collection of work (a portfolio), a one-on-one measure of skills and knowledge, or checklists filled out by persons familiar with a student’s ability to demonstrate specific knowledge or skills (Lehr and Thurlow, 2003); typically, a very small percentage of students, on the order of 1 percent, require alternative assessments. Because a test score may not be a valid representation of the skills and achievement of students with disabilities, high-stakes decisions about these students should take into account other sources of evidence, such as grades, teacher recommendations, and other examples of a student’s work (NRC, 1999a).

Finally, because it is often difficult or impractical for states to collect meaningful data related to socioeconomic status, assessment results might inadvertently be reported in ways that reinforce negative racial, ethnic, or class stereotypes. Concerns about stereotyping might even arouse resistance to the implementation of a new assessment.

Sample Assessment Items3

1. Manufacturing changes the form of materials through a variety of processes, including separation (S), forming (F), and combining (C). Please indicate which process is associated most closely with each of the following:

  1. bending

  2. sawing

  3. gluing

  4. cutting

2. One common way of distinguishing types of manufactured goods is whether they are “durable” or “nondurable.” In your own words, explain two ways durable goods differ from nondurable goods. Then sort the following products into two groups, according to whether they are durable or nondurable: toothbrush, clothes dryer, automobile tire, candy bar, bicycle, pencil.

3 For a statewide assessment, items would be based on a framework derived from rigorously developed content standards. In this example, items were derived from content specified for grades 6 through 8 in the ITEA Standards for Technological Literacy.


3. Manufacturing, like all aspects of technology, has had significant impacts on society, and not all of these have been anticipated or welcome. Innovations in manufacturing in the past quarter-century have included the use of robotics, automation, and computers. Using examples from only one manufacturing sector, describe some of the positive and negative impacts these manufacturing innovations have had on life in the United States.

Case 2:
Matrix-Sample Assessment of 7th Graders

Description and Rationale

Case 2 involves a matrix-sample-based assessment of the technological literacy of 7th graders throughout the United States. Sample-based assessments differ from other types of assessments in that individual scores are rarely, if ever, reported. Instead, the focus is on discovering and tracking trends. In this case, one might want to follow changes over time in the average level of technological literacy of 7th graders. Depending on the design of the sample, sampling can also reveal geographic variations, such as state-by-state differences in scores, as well as variations among subgroups defined by gender, race/ethnicity, type of school, population density, poverty level, and other demographic variables.

In matrix sampling,4 individual students are not tested on all test items. This is done mainly to accommodate the time constraints of test administration. Even though no single student sees every item, every question is administered to a large enough subset of the sample to ensure that the results are statistically valid. Another important feature of a matrix sample is that the large number of questions ensures that all three dimensions of technological literacy are assessed. The assessment described here is similar in structure to assessments conducted through the National Assessment of Educational Progress (NAEP).

The rationale for conducting a national, sample-based assessment of students would be to draw public attention to the state of technological literacy in the country’s middle-school population. In the same way that the release of NAEP results in science and mathematics encourages examination of how learning and teaching occur in those subjects, data on technological literacy would provide an impetus for a similar analysis related to the learning and teaching of technology. If the results indicated significant areas of weakness, they might spur education reform. Periodic administration of the assessment would provide valuable time-series data that could be used to monitor trends.

4 Matrix sampling is described in more detail in Chapter 4 in the section on Measurement Issues.

Purpose

A national sample assessment of technological literacy among U.S. 7th graders could provide a “snapshot” of technological literacy in this population that would be useful for policy makers. As with the statewide assessment described in Case 1, educators could use these data to get a sense of what students at this age know and what they can do with respect to technology. With a national assessment, however, administrators at the school, district, and state levels could determine how their students’ scores compared with student scores in other areas of the country, and national education officials could get a sense of the overall technological literacy of 7th graders. Unlike the assessment in Case 1, of course, the sample assessment would not provide information about individual students. This assessment would be a policy tool, rather than a classroom tool.

If a national sample assessment were repeated periodically, it would show whether technological literacy was increasing, staying the same, or declining around the country. If similar assessments were conducted in other countries, it would be possible to make some cautious comparisons across national boundaries. If the assessment also measured student attitudes about technology or technology careers, that information could be correlated with performance data to explore how attitudes and the level of technological literacy are related.
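
As a sketch of that correlation step, with entirely invented data:

    import numpy as np

    attitude = np.array([2.1, 3.4, 4.0, 2.8, 4.5, 3.1])  # attitude index
    score    = np.array([138, 151, 162, 144, 168, 149])  # scale score

    r = np.corrcoef(attitude, score)[0, 1]  # Pearson correlation
    # A positive r would suggest that more favorable attitudes accompany
    # higher performance; it would not, by itself, establish causation.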

Content Specifications

The ITEA Standards for Technological Literacy, the AAAS Benchmarks for Science Literacy, and the NRC National Science Education Standards would be useful starting points for determining the content of a national sample assessment, just as they would be for the statewide assessment described in Case 1. Each of these documents suggests “benchmarks” of knowledge and skills a technologically literate individual should have. An assessment framework for a national sample assessment should specify the most important technology concepts and capabilities for 7th-grade students, and the specifications should be detailed enough to clarify the range of material to be covered. Test developers would also have to create a detailed test- and item-specifications document.

Performance Levels

The development of performance levels would be as important for a national sample-based assessment as it would be for the statewide assessment described in Case 1. The processes for developing performance standards, for disaggregating scores according to subscales or dimensions of technological literacy, and for reporting results would be the same for both.

Administration and Logistics

A national sample-based assessment should be administered to a representative sample of 7th graders attending public and nonpublic schools in the United States. (The 2000 NAEP science assessment national sample included about 47,000 children in grades 4, 8, and 12 [DoEd, 2003].) The assessment should take about 50 minutes and should include a mix of multiple-choice, constructed-response, and design-based performance items. An additional 10 minutes could be allocated for completion of accompanying surveys to gauge attitudes and collect demographic information. Teachers would require some training in administering the test, particularly the hands-on design component.

Obstacles to Implementation

Resources and time for designing, administering, and reporting results would be the most significant constraints on a national sample assessment of technological literacy. For example, it costs the federal government about $1.2 million to develop the content framework and item and test specifications for the science portion of the NAEP, which is administered every four years (S. Shakrani, deputy executive director, National Assessment Governing Board, personal communication, August 23, 2004). The development and validation of test items, data collection, analysis, and reporting consume another $2.8 million every test year. For an assessment of technological literacy, in addition to the usual expenses involved in creating and administering a large-scale assessment, additional resources might be necessary to develop specialized assessment tools—such as computer simulations—for measuring the capability dimension. (Cost issues related to simulation are discussed in detail in Chapter 7.)

There may also be other obstacles to overcome. Many decision makers, whose support would be necessary for the development of a national sample-based assessment, might have a limited understanding of technological literacy themselves. In addition, as might be true for a statewide assessment, the mathematics and science education communities might object to a separate assessment of technological literacy on the grounds that it would divert attention and resources from their efforts to improve student learning in their content areas. Finally, because the matrix-sampling approach does not allow for the reporting of individual scores, students and their teachers might not take the test as seriously as other assessments.

Sample Assessment Item

The objective of this sample item, which would constitute most, perhaps all, of the assessment for measuring the capability dimension of technological literacy, would be to gauge a student’s knowledge of the design process and his or her ability to carry out a design task. Other items, in multiple-choice, extended-response, and open-response formats, would address the other dimensions of technological literacy. See Table 6-2 for the performance rubric.

Explanation of the Problem

Design and Test a Straw Bridge.

An outdoor jogging and biking path is being built for people to use for exercise. The best site for the path requires that at one location it cross a stream 6 meters wide. The bridge for this crossing must be strong enough to hold several people at once and must prevent them from falling off the edge.


Directions

Select a bridge design that meets the problem constraints, and build a model of it with plastic straws and tape.

Constraints

The model bridge should span a 25-cm space, hold the weight of at least five spice containers, and prevent them from falling off. Only the materials provided can be used to build the model.

Documentation

Use your log sheets to show the drawings you make of potential bridge designs, to describe the process you use to design and select your bridge, and to record your test results.

Materials Provided

15 plastic drinking straws, each one 10 inches long

10 inches of masking tape

5 spice containers (~4 cm in diameter, 10 cm tall) filled with sand or water

2 large cardboard bricks

log sheets

Time Limit

25 minutes

Case 3:
National-Sample Assessment of Teachers

Description and Rationale

Case 3 involves an assessment of technological literacy for a national sample of pre-service and in-service K–12 teachers. The sample would be designed to include generalists (e.g., elementary school teachers) as well as teachers in specific academic disciplines—science, mathematics, social studies/history, fine arts, and language arts. The sample would also include teachers of technology, who are routinely assessed during their pre-service education, to provide a basis for comparison.

TABLE 6-2 Performance Rubric for Sample Task

Performance Standard: Generates and visualizes possible solutions.

  Basic: Student identifies a single solution that meets some of the constraints and would adequately solve the problem. However, the solution may or may not be feasible.

  Proficient: Student generates solutions that are feasible, meet the constraints, and make efficient use of resources. The design expresses an element of creativity. More than one solution may be presented, but many of them are similar. Student tends to think “inside the box.”

  Advanced: Student generates creative and efficient solutions. All solutions meet the constraints and address the original problem. A number of the solutions are feasible. Student is innovative and thinks “outside the box.”

Performance Standard: Selects a design solution.

  Basic: Student selects a solution based on limited attention to criteria. The solution may or may not be feasible. The selection process tends to be tentative and uncertain.

  Proficient: Student selects solutions on the basis of efficiency and effectiveness. The solutions are checked against the constraints. Student provides a basic rationale for the design but tends not to have an alternative solution in case the initial choice does not work.

  Advanced: Student provides detailed reasons for selecting a particular solution. Student may provide a backup or alternate solution in case the first solution fails. Student tries to be innovative and to find the best possible solution.

SOURCE: Adapted from Custer et al., 2001.
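
One way a scoring application might encode a rubric like Table 6-2 is as a lookup from (performance standard, level) to descriptor; the abbreviated descriptors and function below are illustrative only.

    RUBRIC = {
        ("generates solutions", "basic"):      "single solution; meets some constraints",
        ("generates solutions", "proficient"): "feasible solutions; thinks inside the box",
        ("generates solutions", "advanced"):   "creative, efficient; thinks outside the box",
        ("selects solution", "basic"):         "limited attention to criteria",
        ("selects solution", "proficient"):    "checks constraints; basic rationale",
        ("selects solution", "advanced"):      "detailed reasons; backup solution",
    }

    def describe(ratings):
        # ratings: {standard: level} judgments from a trained scorer
        return {std: RUBRIC[(std, lvl)] for std, lvl in ratings.items()}

    describe({"generates solutions": "proficient", "selects solution": "advanced"})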


The rationale for developing an assessment of this sort would be to make efforts to improve technological literacy in the United States more effective. Contemporary models of education reform emphasize that multiple elements of the educational system must be addressed to achieve meaningful, lasting change (e.g., AAAS, 1998; Bybee, 1997). In this view, simply developing content standards is not sufficient. Curricula and instructional materials must also be reworked to align with the standards, goals and methods of teacher education must be reassessed, and assessments must be created that link to what is being taught in the classroom. Knowing what a representative sample of U.S. teachers know and can do with respect to technology would be essential to reforms intended to improve the technological literacy of both teachers and their students.


Purpose

A sample-based assessment of teachers could have several purposes. Education researchers could use the data, along with information from other sources, to build a model of adult learning related to technology. Anyone involved in in-service teacher education could draw on the assessment results to enrich existing activities with technology content (e.g., through summer workshops); results could also be used to design new materials and programs. For schools of education, the assessment could provide a rough indication of how well new teachers are being prepared to think technologically (beyond using computers).

Content

The ITEA Standards for Technological Literacy, the AAAS Benchmarks for Science Literacy, and the NRC National Science Education Standards would be useful starting points for determining the content of a teacher assessment. However, these documents suggest the knowledge and skills for technologically literate students; because of differences in age, maturity, and expectations for content knowledge of teachers, the standards would not be directly transferable.

Thus, a careful framework-development process would be necessary to support assessment in this population. Assessment designers might consider creating a set of items to measure “general” technological literacy to be administered to all teachers in the sample; items targeting more discrete knowledge and skills would be given to a subset of subject-matter specialists. The balance between general and subject-specific items would vary, depending on the purpose of the assessment.

Content specialists in technology as well as in the subjects taught by the teachers in the sample population should be involved in the framework-development process. Standards and benchmarks in non-technology subjects that state or imply a requirement for technological knowledge, capabilities, or critical thinking should be examined. One helpful resource in this regard is the compendium of K–12 education standards created by Mid-Continent Research for Education and Learning (MCREL, 2004). The framework-development process should be informed by the realities of current teacher education. For this reason, those involved in developing and administering teacher pre- and in-service education programs should also be involved.


To the extent that computers and other educational technologies are used to support the development of technological literacy in students, assessments for teachers should include items measuring their knowledge and capability in this domain. The skills, foundational concepts, and intellectual capabilities considered essential to information-technology fluency (NRC, 1999b) would be a reasonable basis upon which to develop such items.

Performance Levels

Establishing performance levels for an assessment of teachers would be challenging. First, the only current basis for deriving descriptions of what might constitute sub-par, adequate, or exemplary teacher technological literacy is the Praxis test, which is given to a limited target population (technology teachers) for the purpose of licensure. Thus, assessment designers would be charting new territory in many ways. Second, sensitivities to the provisions for highly qualified teachers in NCLB might increase the concerns of potential test takers. If assessment results suggested that teachers were not knowledgeable or capable “enough” in technology, the very individuals and institutions (i.e., schools of education) the assessments are designed to help might resist participating. Third, setting discipline-specific benchmarks would require the involvement of experts in various dimensions of technological literacy and experts familiar with K–12 curricula in the subjects of interest. For all of these reasons, setting performance levels and reporting results for this assessment must be approached with considerable care and sensitivity.

Administration and Logistics

The assessment should last no more than two hours and should include at least one performance task. If possible, testing should be done in a way that encourages teacher participation and reassures them that the results will not be seen by school system administrators involved in personnel oversight and evaluation. One possibility would be to have the assessment administered online by a third-party testing firm. Virtually all teachers have access to computers and the Internet, or can easily obtain access, and there are numerous examples of successful online surveys and tests of professionals, ranging from physicians to journalists and policy makers.


Obstacles to Implementation

A large-scale assessment of teachers’ technological literacy would be a major undertaking with significant resource constraints. Because this would be a sample-based assessment, the most significant constraint would be the time and expertise required to design and carry out appropriate sampling procedures. As with assessments for students and out-of-school adults, the other two target populations, designing this assessment would pose technical, logistical, and financial challenges associated with measuring the capability dimension. As a rule, performance assessments, including assessments of technological capability, are time consuming and expensive to design, administer, and score.

Another constraint might be the difficulty of persuading teachers and school administrators that an assessment would be worthwhile. Teachers have limited time for activities not directly related to their classroom duties, and many teachers and their unions might be wary of an assessment process with uncertain outcomes and consequences. This resistance might be overcome by a combination of compensation for participation and assurances that individual scores would not be provided to school administrators. However, if scores were completely disconnected from accountability measures, this would become a low-stakes assessment, making it less likely that teachers would take the test seriously.

Sample Assessment Items

Test Items for Generalists

1. An electric generator is used to convert what into what? (knowledge dimension)

  1. Solar energy into electric energy

  2. Electric energy into solar energy

  3. Mechanical energy into electric energy

  4. Electric energy into mechanical energy


2. Which device receives, analyzes, and processes incoming information like motion, heat, and light? (knowledge dimension)

  1. A sensor

  2. A monitor

  3. A radio

  4. An air coil

3. Develop a basic sketch of the heating system in a typical home. The sketch should include the major components as well as a feedback system that enables the system to function automatically. Describe in words how the system works to deliver heat throughout the home. (knowledge and capabilities dimensions)

4. Hydrogen-powered engines for cars may provide some advantages over existing fuel sources. Choose one type of impact of this significant technological change (ethical, social, political, or environmental), and identify its negative and positive consequences. (critical thinking and decision making dimension)

5. Identify a key selling feature (e.g., high gas mileage) of hybrid vehicles and describe two of the associated trade-offs (e.g., less engine power) involved in optimizing that feature. (knowledge and critical thinking and decision making dimensions)

Subtest Item for Social Studies Teachers

How have technological inventions changed the nature of conflict between nations?

  1. Describe the changes in the technology used in wars of the 18th century and wars of the 20th century. (knowledge dimension)

  2. How have these changes affected the decision to go to war? (critical thinking and decision making dimension)


Case 4:
Assessments for Broad Populations

Description and Rationale

In addition to information about the technological literacy of students and teachers, information about the technological literacy of segments of the general population in the United States—people who are affected by, or likely to join in a debate about, a particular new technology—can be extremely helpful. Public opinion researchers call this assessing a “broad population,” by which they mean any group sufficiently numerous and widely distributed so that a measurement involves sampling rather than surveying every member of the group. Segments of any of the three population groups—students, teachers, and out-of-school adults—could be part of a broad population. For example, a family of one parent and two young children attending a baseball game could be part of the broad population of “family visitors to sporting events.”

The rationale for assessing broad populations is simple. If broad populations are not assessed, a large segment of the general population for which we might want data about technological literacy will be missed. K–12 students and teachers together make up only about 19 percent of the U.S. population.5 In addition, assessment in these groups is almost always linked to a structured curriculum. In contrast, assessments of broad populations reveal the understanding, skills, and attitudes acquired by people through life experiences.

Broad population assessments might also provide opportunities to gauge how the dimensions of technological literacy play out in the situations and environments of everyday life, rather than in the somewhat artificial environment of the classroom. Researchers, policy makers, and the education and business communities might all benefit from information about the nature of technological literacy outside the formal education environment.

5 This estimate is based on data from the 2001–2002 school year, the most recent period for which accurate data on teachers are available. There were approximately 2.7 million public school K–12 teachers, according to the National Center for Education Statistics (Young, 2003a), and approximately 425,000 private elementary and secondary school teachers (Broughman and Pugh, 2004). The K–12 public school student population was approximately 47 million in the 2001–2002 school year (Young, 2003b), and there were about 5.3 million private school students that year (Broughman and Pugh, 2004). According to the Population Division of the U.S. Census Bureau (2005), the U.S. population in 2001 was about 285 million.
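
Worked out, these figures give (2.7 + 0.425) million teachers plus (47 + 5.3) million students, or about 55.4 million people; 55.4/285 ≈ 0.19, the roughly 19 percent cited above.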


Purpose

Assessments of broad populations might be conducted for many different purposes. One way of thinking about a broad population assessment was introduced in 1975 by Benjamin Shen in a discussion of scientific literacy (rather than technological literacy). Shen distinguished three types of literacy: (1) consumer scientific literacy; (2) civic scientific literacy; and (3) cultural scientific literacy. If this framework is applied to technology, three broad populations can be identified: (1) technology consumers, (2) policy-attentive citizens, and (3) the general public.

Technology Consumers

Technology consumers include most adolescents and adults in the United States. The people in this group tend to seek out information about specific technologies—for example, technologies related to health and medical issues—but they are generally more interested in the value and risks of specific technologies than in general issues of public policy. As a group, technology consumers have been studied intensely by technology developers and manufacturers, who routinely conduct studies of users—or potential users—of specific technologies. But much of this information is proprietary and not available for analysis by outsiders. Additional studies of this group would be enlightening, especially assessments comparing attitudes toward, and knowledge of, different technologies.

Policy-Attentive Citizens

Policy-attentive citizens, mostly adults but also some well informed teenagers, have a high level of interest in public policy as it relates to one or more specific technologies. Researchers have identified “attentive publics” for science and technology policy, energy policy, space policy, and biomedical policy (Miller, 1983a, 1986, 1992, 1995, 2004a; Miller and Kimmel, 2001; Miller and Pardo, 2000; Miller et al., 1997). In addition, some people are interested in the widespread effects of technology in general on economic and social life.

Policy-oriented audiences tend to want more sophisticated information about technology and tend to have a deeper understanding of technology than technology consumers. Assessments of this population would be particularly useful for characterizing the role of the public in the making of technology-related policies.

The General Public

Everyone in a society is affected by and, in turn, helps shape technology. Thus, the level of “cultural” technological literacy—roughly speaking, the awareness and attitudes of the members of a society toward technology in general and toward specific technologies in particular—can be an important factor in the health of a society. An assessment of cultural technological literacy would provide information about the acceptance of technology by society and about people’s awareness of how technology shapes their lives.

An assessment of cultural technological literacy would necessarily be less structured than assessments of technology consumers or policy-attentive citizens. Assessments of knowledge of and attitudes toward technology could provide useful information for educators and media that produce informal science educational products intended for the general adult population; social scientists hoping to improve their understanding of public attitudes; and policy makers attempting to get a perspective on the workforce in relation to national competitiveness in technology-related areas. Assessment data might also be valuable to people who communicate information about technological issues to the general public, such as journalists, designers of museum exhibits, and designers of public-health campaigns (Friedman et al., 1986, 1999).

Content

Some questions for surveys and assessments of broad populations might be derived from the National Science Board’s (NSB) longstanding survey series on scientific literacy (Miller, 1983b, 1987, 1995, 1998, 2000, 2004a). In that way, data from new surveys could be compared with data from this nearly three-decades-long time series. Unfortunately, NSB has discontinued its surveys, and it is not clear whether they will be restarted. The 2001 and 2004 ITEA/Gallup polls on technological literacy might also provide content for a broad population survey. In some cases, rather than relying on earlier assessment instruments or surveys, assessment developers might consult with subject-matter experts in technology, the history of technology, and science, technology, and society studies, as well as representatives of populations participating in the assessment and groups that are expected to make use of the results.

The dimensions of technological literacy must be approached from a different angle in the context of broad populations. For technology consumers, all three dimensions of technological literacy—knowledge, capability, and critical thinking and decision making—should be assessed. Measuring attitudes related to consumer issues is partly a marketing concern, and well-developed tools are available for assessing attitudes toward specific technologies and products. But consumers also have personal concerns that may not be tapped in a marketing survey. For example, manufacturers are interested in the factors that influence consumers to purchase a particular model of cell phone, but they probably do not ask about consumers’ concerns regarding the effects of cell phone use on the health of the user, traffic safety, and civility in public places.

For policy-attentive audiences, assessing the knowledge and attitudinal components of technological literacy is straightforward, but assessing capability can be problematic. Individuals concerned about environmental damage from waste disposal, for example, need to know the causes and sources of waste-related pollution and the technical feasibility of controlling or reducing it, but they do not need the technical competence to engage directly in pollution control.

At the cultural level, all individuals need general knowledge about technology and the social significance of technology to follow public policy discussions on energy, the environment, and biotechnology, for example. Citizens also need to have enough background information to be able to absorb new information and form reasoned opinions about technological issues. Levels of understanding for various broad populations may range from a general appreciation of the importance of technology to a deeper understanding based on historical examples.

Performance Levels

The concept of performance levels, which derives largely from educational contexts, is difficult to apply to broad population surveys. Performance levels relevant to particular occupations or professions could be specified for particular segments of the workforce, but these are likely to be occupation-specific rather than general categories.

Administration and Logistics

Surveys of broad populations and supplementary studies should be administered so that the results are as representative as possible of the general population. Truly representative samples would provide information about gender, ethnic, and geographic differences in technology-related knowledge, capabilities, and critical thinking and decision making. Surveys of broad populations could also provide data on public attitudes toward technology.

A number of measurement methods, strategies, and practices have been developed for studying broad populations. Deciding which of these to use will depend on the population of interest and the goals of the study.

Obstacles to Implementation

In recent decades, most measurements of all segments of the adult population have been conducted through large-scale sample surveys, most of them based on telephone samples and interviews. In addition, a solid body of research has accumulated on the best methods of constructing questionnaires and analyzing their results. In the last decade, however, resistance to telephone-based surveys has been growing, and response rates are often unacceptably low. As a result, researchers studying broad populations now rely increasingly on online panels, which raise questions about probability-based recruitment versus self-selection of online participants. Some researchers have turned to surveys of broad populations that are co-located, such as patrons of science museums, but these samples may be biased toward people already familiar with science and technology.
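Survey researchers commonly compensate for nonresponse and self-selection by weighting respondents so that the weighted sample matches known population benchmarks. The sketch below illustrates simple post-stratification weighting in Python; the age-by-education strata, population shares, and respondent counts are invented for illustration and are not drawn from this report.

```python
# Illustrative post-stratification weighting for a broad-population survey.
# The strata, population shares, and respondent counts are hypothetical.

# Known population shares for each stratum (e.g., from census data).
population_share = {
    ("18-34", "no degree"): 0.18,
    ("18-34", "degree"): 0.12,
    ("35-64", "no degree"): 0.25,
    ("35-64", "degree"): 0.20,
    ("65+", "no degree"): 0.15,
    ("65+", "degree"): 0.10,
}

# Respondents actually obtained in each stratum (e.g., from an online panel).
sample_count = {
    ("18-34", "no degree"): 80,
    ("18-34", "degree"): 140,
    ("35-64", "no degree"): 150,
    ("35-64", "degree"): 260,
    ("65+", "no degree"): 90,
    ("65+", "degree"): 180,
}

n_total = sum(sample_count.values())  # 900 respondents in this example

# A respondent's weight is the population share of his or her stratum
# divided by that stratum's share of the achieved sample.
weights = {
    stratum: population_share[stratum] / (count / n_total)
    for stratum, count in sample_count.items()
}

for stratum, weight in sorted(weights.items()):
    print(stratum, round(weight, 2))
```

Weighting of this kind can reduce, but not eliminate, the bias introduced by self-selected panels, which is why the recruitment method itself remains important.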

Another difficulty for survey designers is that some types of knowledge questions quickly lose currency because of rapid advancements in technology. This can make changes over time difficult to track.

Sample Assessment Items

For Technology Consumers

1. You have bought a new home entertainment system. The system has several large components, not including the speakers, as well as a number of connecting wires and plugs, batteries, and a remote-control device. When you unpack the system at home, you discover that the instruction manual for assembling it is missing. Which of the following best reflects your approach to this problem?

  a. I have a good idea how systems like this work, so I would be able to assemble it without instructions or outside help.

  b. I do not have experience with this exact type of system, but I would be comfortable trying to figure out how everything fits together through a process of trial and error.

  c. I do not have experience with this type of system and would search the World Wide Web for a copy of the instruction manual or to get other online help.

  d. I do not have experience with this type of system and would not feel comfortable searching the Web for help.

2. All technologies have consequences not intended by their designers, and some of these consequences are undesirable. Below is a list of consequences some people associate with cell phones. For each, please indicate the level of concern you have (no concern at all; a little concern; a moderate amount of concern; a lot of concern).

  a. Possible negative health effects, including cancer.

  b. Loss of enjoyment of quiet in public places, such as restaurants.

  c. Car accidents caused by drivers using cell phones while on the road.

  d. Possible theft of personal data by cell-phone hackers.

For Policy-Attentive Citizens

1. To what extent do you agree or disagree that the following applications of technology pose a risk to society? (Answer choices: completely agree; agree; neither agree nor disagree; disagree; completely disagree; not sure.)6

  a. The use of biotechnology in the production of foods—for example, to increase their protein content, make them last longer, or enhance their flavor.

  b. Cloning human cells to replace the damaged cells that are not fulfilling their function well.

  c. The computerized collection and sorting of personal data by private companies or the government in order to catch terrorists.

  d. The placement under the skin of small computer chips that enable medical personnel to retrieve your personal health information.

6 This question and answers a and b are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).

2. Please indicate for each of the following statements the extent to which you believe it is absolutely true, probably true, probably false, or absolutely false. If you do not know or are not sure about a specific statement, check the “Not Sure” box.7

  a. Antibiotics kill viruses as well as bacteria.

  b. Ordinary tomatoes, the ones we normally eat, do not have genes, whereas genetically modified tomatoes do.

  c. The greenhouse effect is caused by the use of carbon-based fuels, like gasoline.

  d. All pesticides and chemical products used in agriculture cause cancer in humans.

For the General Public

1. Please indicate the extent to which you believe the following statements to be absolutely true, probably true, probably false, or absolutely false.8

  a. Nuclear power plants destroy the ozone layer.

  b. All radioactivity is produced by humans.

  c. The U.S. government regulates the World Wide Web to ensure that the information people retrieve is factually correct.

  d. Using a cordless phone while in the bathtub creates the possibility of being electrocuted.9

7 This question and answers a, b, c, and d are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).

8 This question and answers a and b are adapted from the U.S. Environmental and Biotechnology Study (Pardo and Miller, 2003).

9 This answer is adapted from ITEA, 2004b.


Case 5:
Assessments for Visitors to Museums and Other Informal-Learning Institutions

Description and Rationale

Case 5 describes an assessment of technological literacy for visitors to a museum, science center, or other informal-learning institution, where participants set their own learning agendas and determine the duration and selection of content; this is called “free-choice learning.” Some 60 million people are served by public science-technology centers in the United States every year (ASTC, 2004). This number is consistent with NSB survey data indicating that 61 percent of adult Americans visit an informal science institution (e.g., a zoo, aquarium, science center, natural history museum, or arboretum) at least once a year (NSB, 2000).

Typically, visitors are children attending as part of a family or school group (which often includes teachers) or adults attending alone or in groups without children. Because of the transient nature of the population of interest (visitors usually spend no more than a few hours in these institutions), the assessment would rely on sampling techniques, although focus-group-style assessments might also be used.
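Because intercepting visitors in a predictable, unbiased way matters for such a transient population, one widely used approach is systematic sampling, in which every k-th visitor crossing a fixed point is invited to participate, removing interviewer discretion from respondent selection. A minimal sketch, with an invented visitor stream and sampling interval:

```python
# Systematic intercept sampling at a museum entrance: invite every k-th
# visitor who crosses a fixed point. The visitor stream and the interval
# below are invented for illustration.

import random

def intercept_sample(visitor_ids, k):
    """Return every k-th visitor, starting at a random offset in [0, k)."""
    start = random.randrange(k)  # a random start avoids fixed-pattern bias
    return visitor_ids[start::k]

day_of_visitors = list(range(1, 1201))  # e.g., 1,200 entries in one day
invited = intercept_sample(day_of_visitors, k=40)
print(f"Invited {len(invited)} of {len(day_of_visitors)} visitors")
```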

The principal rationale for conducting assessments in informal-education settings is to gain insights into the type and level of technological literacy among a unique (though not random) cross-section of the general public. In addition, because visitors to these facilities are often surrounded by and interact with three-dimensional objects representing aspects of the designed world, informal-learning locations present opportunities for performance-related assessments. The sheer volume of visitors, particularly at mid-sized and large institutions, provides an additional incentive.

Purpose

Organizations that provide informal-learning opportunities, including museums, book and magazine publishers, television stations, websites, and continuing-education programs offered by colleges and universities, all convey information about technology but generally have limited knowledge of the level of understanding or interest of their intended audiences. For this diverse group of institutions and companies, assessments of technological knowledge and attitudes would provide a context for making programming and marketing decisions.

For example, a science center might want to involve members of the non-expert public in discussions of how using technology to enhance national security might affect privacy. For these discussions to be effective, the center would have to know the nature and extent of participants’ understanding (and misunderstanding) of various technologies, such as the Internet and voice- and face-recognition software, as well as their grasp of the nature of technology (e.g., the concepts of trade-offs and unintended consequences). The center might also benefit from an assessment of attitudes about the topic. Knowing that two-thirds of potential participants feel powerless to influence government decisions about deploying such technology, for instance, might influence the type of background information the center provides prior to a discussion.

In addition to serving as planning tools, assessments could be used to determine what members of the public take away from their experiences—new knowledge and understanding (as well as, possibly, misunderstanding), new skills and confidence in design-related processes, and new or different concerns and questions about specific technologies or technology in general. These findings, in turn, could be used to adjust and improve existing or future programs, exhibits, or marketing.

Apart from the direct impact of assessments of technological literacy on individual institutions that want to attract more visitors and improve the quality of their outreach to the public, the assessments might be of wider interest. The formal education system in the United States evolved at a time when the body of knowledge—the set of facts, reasoning abilities, and hands-on skills that defined an “educated” person—was small. A decade or so of formal education was enough to prepare most people to use and understand the technologies they would encounter throughout their lives. Today, the pace of technological change has increased, and individuals are being called upon to make important technological decisions, including career changes required by new technologies, many times in their lives. For this reason, “lifelong learning,” which can take place formally in settings like community colleges and the workplace, or informally through independent reading, visits to museums and science centers, or exposure to radio, television, and the Internet, has become critical to technological literacy.


But little is known about how well informal, or free-choice, learning promotes technological understanding. This information would be of interest not only to the institutions themselves but also to the publics they serve, funders, policy makers, and the education research community.

Content

The three dimensions of technological literacy, as described in Technically Speaking, could provide a reasonable starting point for determining content relevant to an assessment of this population. The ITEA standards also should be consulted, particularly standards related to the nature of technology. To a great extent, however, the content of the assessment would be determined by the specific technology or technology-related concerns at issue. That is, just as a student assessment should be aligned with relevant standards and curriculum, an assessment of visitors to an informal-education institution should be aligned with the subject matter and goals of the program or exhibit.

In situations where the assessment involves a hands-on or design component, assessment developers could use a rubric for judging design-process skills. The model developed by Custer et al. (2001) might be useful here.

Performance Levels

Assessments of visitors to informal-learning institutions would be most useful for identifying a spectrum of technological literacy rather than specific levels of literacy. Changes in the spectrum, for example, movement—up or down—of the entire curve or changes in the shape of the curve, would provide valuable information. Correlations among the three dimensions and with attitudes would be of special interest. Does a high level of knowledge correlate with critical thinking and decision making? with attitudes? How are capabilities related to knowledge and attitudes? Does literacy in one aspect of technology translate to literacy in other areas? These are just a few of the questions that could be answered.
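Once each visitor has a score on every dimension, correlational questions like these reduce to routine computation. A minimal sketch, using fabricated scores purely to show the mechanics:

```python
# Correlations among the three literacy dimensions and an attitude scale.
# All scores below are fabricated solely to illustrate the computation.

import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of assessed visitors

knowledge = rng.normal(50, 10, n)
capability = 0.6 * knowledge + rng.normal(20, 8, n)         # assumed linkage
critical_thinking = 0.4 * knowledge + rng.normal(30, 9, n)  # assumed linkage
attitude = rng.normal(3.5, 0.7, n)                          # e.g., a 1-5 scale

labels = ["knowledge", "capability", "critical thinking", "attitude"]
corr = np.corrcoef([knowledge, capability, critical_thinking, attitude])

# Print each pairwise Pearson correlation once.
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"r({labels[i]}, {labels[j]}) = {corr[i, j]:+.2f}")
```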

Administration and Logistics

Many informal-learning institutions are open 300 days a year or more, including weekends; thus, there would be fewer constraints on content selection and assessment methodologies than for formal-education settings, such as classrooms, where time, space, and trained staff are all at a premium. Practically all testing methods would work for this population: interviews, multiple-choice questions, constructed-response items, performance items, and focus groups.

Assessments could also measure changes in visitors’ understanding of technology or technology-related exhibits over time. Short-term understanding could be measured by pre- and post-visit surveys; long-term understanding might be measured by e-mail or telephone follow-up. A variety of methods could be used to enable museums and other institutions to compare the effects of different exhibit formats and content on specific measures of technological literacy (Miller, 2004b).
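The paired pre-/post-visit comparison is the simplest of these designs. The sketch below, with invented quiz scores, computes the mean gain and a paired t statistic; a real study would also have to address test-retest effects and attrition.

```python
# Paired pre-/post-visit comparison on a short knowledge quiz.
# The scores are invented; they only illustrate the computation.

from math import sqrt
from statistics import mean

pre = [4, 6, 5, 7, 3, 5, 6, 4, 5, 6]   # quiz scores before the exhibit
post = [6, 7, 5, 8, 5, 6, 7, 6, 6, 7]  # same visitors, after the exhibit

gains = [b - a for a, b in zip(pre, post)]
n = len(gains)
mean_gain = mean(gains)
sd_gain = sqrt(sum((g - mean_gain) ** 2 for g in gains) / (n - 1))
t_stat = mean_gain / (sd_gain / sqrt(n))  # paired t statistic, df = n - 1

print(f"mean gain = {mean_gain:.2f}, paired t({n - 1}) = {t_stat:.2f}")
```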

Many informal-learning institutions routinely conduct visitor surveys for demographic and marketing purposes, and many also conduct extensive cognitive and affective visitor assessments for front-end, formative, and summative evaluations of exhibitions (Taylor and Serrell, 1991). Some larger institutions even have staff members or consultants capable of performing assessments of the type that could gauge technological literacy, although they rarely have the funds to carry out such assessments.

Obstacles to Implementation


Obtaining a sample of visitors that represents the diversity—in income, education, and other factors—of the U.S. population as a whole would be difficult in the typical informal-learning setting. The population represented by visitors to these institutions is undoubtedly biased in favor of the science-attentive, as opposed to the science-“inattentive” (Miller, 1983b). In addition, compared to the population at large, patrons of science centers, zoos, and related institutions tend to have higher socioeconomic status, although institutions in urban areas attract more diverse patrons. For example, at the New York Hall of Science, in Queens, 38 percent to 68 percent of family visitors are non-Caucasian (depending on the season), probably because of the location of the institution and the diversity of the staff (Morley and Associates, unpublished). In any case, assessments should be conducted in ways that take into account potential sample bias. Pre-surveys might be used to identify those biases.

Another potential obstacle to assessment in informal-learning institutions is the reluctance of visitors to take part in structured interviews, surveys, or focus groups. Given the relatively short duration of a typical visit, the desire of many patrons to move freely among exhibits of their choosing, and the fact that admission is usually paid, this reluctance is understandable. Offering incentives for participation, such as token gifts or free admission, may help to lower this barrier. Exhibit designs that build in opportunities for assessment might also be helpful. For example, assessment designers might consider using technologies that are portable (e.g., PDAs, electronic tablets) and can be programmed to select assessment items based on the visitor’s characteristics and physical location in an exhibit space.
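At its simplest, such device-based delivery is a lookup from the visitor's characteristics and current location to an appropriate item. A minimal sketch, with a hypothetical item bank, exhibit zones, and age bands:

```python
# Location- and age-aware item selection on a handheld device.
# The item bank, exhibit zones, and age bands are all hypothetical.

ITEM_BANK = {
    ("energy hall", "child"): "What does a wind turbine turn motion into?",
    ("energy hall", "adult"): "What trade-offs come with wind power?",
    ("design lab", "child"): "What would you change about your bridge design?",
    ("design lab", "adult"): "How did testing change your design?",
}

def next_item(zone, age_band):
    """Return the item matching the visitor's current zone and age band."""
    return ITEM_BANK.get((zone, age_band), "Tell us about this exhibit.")

# Example: the device detects an adult visitor entering the energy hall.
print(next_item("energy hall", "adult"))
```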

Sample Test Items10

Give an example of a type of technology you like.

Give an example of a type of technology you don’t like.

On a scale of 1 to 100, how much do you think technology affects people’s lives?

On a scale of 1 to 100, how much of a role do you think people play in shaping the technologies we have and use?

Give an example of how people like you and me shape technologies.

Imagine that you work for Coca Cola or Pepsi and you are part of the team that came up with a new 20-ounce bottle. What steps did you go through?

Imagine that you are an inventor, and a friend of yours asks you to think about an idea. What steps would you go through to work on this idea?

Do you ever do things that involve creating or designing something, testing it, modifying how you do it, evaluating how someone uses it, and considering the consequences? Give an example.

10 Test items are adapted from a formative evaluation conducted for the Oregon Museum of Science and Industry by People, Places & Design Research, Northampton, Mass. Used with permission.


References

AAAS (American Association for the Advancement of Science). 1993. Benchmarks for Science Literacy. Project 2061. New York: Oxford University Press.

AAAS. 1998. Blueprints for Reform: Science, Mathematics, and Technology Education. New York: Oxford University Press.

ASTC (Association of Science-Technology Centers). 2004. ASTC Sourcebook of Science Center Statistics 2004. Washington, D.C.: ASTC.

Broughman, S.P., and K.W. Pugh. 2004. Characteristics of Private Schools in the United States: Results from the 2001–2002 Private School Universe Study, Table 1. Available online at: http://nces.ed.gov/pubs2005/2005305.pdf (April 11, 2006).

Bybee, R.W. 1997. Achieving Scientific Literacy: From Purposes to Practices. Portsmouth, N.H.: Heinemann.

Custer, R.L., G. Valesey, and B.N. Burke. 2001. An assessment model for a design approach to technological problem solving. Journal of Technology Education 12(2): 5–20.

DoEd (U.S. Department of Education). 2003. The Nation’s Report Card: Science 2000. NCES 2003-453. Institute of Education Sciences, National Center for Education Statistics. Washington, D.C.: DoEd.

Friedman, S., S. Dunwoody, and C. Rogers, eds. 1986. Scientists and Journalists: Reporting Science as News. New York: Free Press.

Friedman, S., S. Dunwoody, and C. Rogers. 1999. Communicating Uncertainty. Mahwah, N.J.: Lawrence Erlbaum Associates.

ITEA (International Technology Education Association). 2000. Standards for Technological Literacy: Content for the Study of Technology. Reston, Va.: ITEA.

ITEA. 2004a. Measuring Progress: A Guide to Assessing Students for Technological Literacy. Reston, Va.: ITEA.

ITEA. 2004b. The Second Installment of the ITEA/Gallup Poll and What It Reveals as to How Americans Think About Technology. A Report of the Second Survey Conducted by the Gallup Organization for the International Technology Education Association. Available online at: http://www.iteaconnect.org/TAA/PDFs/GallupPoll2004.pdf (October 5, 2005).

Lehr, C., and M. Thurlow. 2003. Putting It All Together: Including Students with Disabilities in Assessment and Accountability Systems. NCEO Policy Directions, Number 16/October 2003. Available online at: http://education.umn.edu/nceo/OnlinePubs/Policy16.htm (February 23, 2006).

MCREL (Mid-Continent Research for Education and Learning). 2004. Content Knowledge, 4th ed. Available online at: http://mcrel.org/standards-benchmarks/ (January 13, 2006).

Miller, J.D. 1983a. The American People and Science Policy: The Role of Public Attitudes in the Policy Process. New York: Pergamon Press.

Miller, J.D. 1983b. Scientific literacy: a conceptual and empirical review. Daedalus 112(2): 29–48.

Miller, J.D. 1986. Reaching the Attentive and Interested Publics for Science. Pp. 55–69 in Scientists and Journalists: Reporting Science as News, edited by S. Friedman, S. Dunwoody, and C. Rogers. New York: Free Press.

Miller, J.D. 1987. Scientific Literacy in the United States. Pp. 19–40 in Communicating Science to the Public, edited by D. Evered and M. O’Connor. London: John Wiley and Sons.

Miller, J.D. 1992. From Town Meeting to Nuclear Power: The Changing Nature of Citizenship and Democracy in the United States. Pp. 327–328 in The United States Constitution: Roots, Rights, and Responsibilities, edited by A.E.D. Howard. Washington, D.C.: Smithsonian Institution Press.


Miller, J.D. 1995. Scientific Literacy for Effective Citizenship. Pp. 185–204 in Science/Technology/Society as Reform in Science Education, edited by R.E. Yager. New York: State University of New York Press.

Miller, J.D. 1998. The measurement of civic scientific literacy. Public Understanding of Science 7: 1–21.

Miller, J.D. 2000. The Development of Civic Scientific Literacy in the United States. Pp. 21–47 in Science, Technology, and Society: A Sourcebook on Research and Practice, edited by D.D. Kumar and D. Chubin. New York: Plenum Press.

Miller, J.D. 2004a. Public understanding of and attitudes toward scientific research: what we know and what we need to know. Public Understanding of Science 13: 273–294.

Miller, J.D. 2004b. The Evaluation of Adult Science Learning. ASP Conference Series, vol. 319. Washington, D.C.: National Aeronautics and Space Administration.

Miller, J.D., and L. Kimmel. 2001. Biomedical Communications: Purposes, Audiences, and Strategies. New York: Academic Press.

Miller, J.D., and R. Pardo. 2000. Civic Scientific Literacy and Attitude to Science and Technology: A Comparative Analysis of the European Union, the United States, Japan, and Canada. Pp. 81–129 in Between Understanding and Trust: The Public, Science, and Technology, edited by M. Dierkes and C. von Grote. Amsterdam: Harwood Academic Publishers.

Miller, J.D., R. Pardo, and F. Niwa. 1997. Public Perceptions of Science and Technology: A Comparative Study of the European Union, the United States, Japan, and Canada. Madrid: BBV Foundation.

Morley and Associates. Unpublished. Visitor survey for the New York Hall of Science, 2005.

NAE (National Academy of Engineering) and NRC (National Research Council). 2002. Technically Speaking: Why All Americans Need to Know More About Technology. Washington, D.C.: National Academy Press.

NRC (National Research Council). 1996. National Science Education Standards. Washington, D.C.: National Academy Press.

NRC. 1997. Educating One and All: Students with Disabilities and Standards-Based Reform, edited by L.M. McDonnell, M.J. McLaughlin, and P. Morison. Washington, D.C.: National Academy Press.

NRC. 1999a. Recommendations from High Stakes Testing for Tracking, Promotion, and Graduation, edited by J.P. Heubert and R.M. Hauser. Washington, D.C.: National Academy Press.

NRC. 1999b. Being Fluent with Information Technology. Washington, D.C.: National Academy Press.

NSB (National Science Board). 2000. Science and Engineering Indicators 2000, vol. 2. Arlington, Va.: NSB.

Pardo, R., and J.D. Miller. 2003. U.S. Environmental and Biotechnology Study, 2003. Unpublished questionnaire.

Shen, B.S.P. 1975. Science Literacy and the Public Understanding of Science. Pp. 44–52 in Communication of Scientific Information, edited by S.B. Day. New York: Karger.

Taylor, S., and B. Serrell. 1991. Try It!: Improving Exhibits Through Formative Evaluation. Washington, D.C.: Association of Science-Technology Centers; New York: New York Hall of Science.

U.S. Census Bureau. 2005. Annual Estimates of the Population for the United States and States, and for Puerto Rico: April 1, 2000 to July 1, 2005 (NST-EST2005-01): Table 1. Available online at: http://www.census.gov/popest/states/tables/NST-EST2005-01.xls (April 11, 2006).

Suggested Citation:"6 From Theory to Practice: Five Sample Cases." National Academy of Engineering and National Research Council. 2006. Tech Tally: Approaches to Assessing Technological Literacy. Washington, DC: The National Academies Press. doi: 10.17226/11691.
×

Young, B.A. 2003a. Public school student, staff, and graduate counts by state: School year 2001–02. Education Statistics Quarterly 5 (1): Table 2. Available online at: http://nces.ed.gov/programs/quarterly/vol_5/5_2/q3_4_t1.asp#Table-2 (April 11, 2006).

Young, B.A. 2003b. Public school student, staff, and graduate counts by state: School year 2001–02. Education Statistics Quarterly 5 (1): Table 1. Available online at: http://nces.ed.gov/programs/quarterly/vol_5/5_2/q3_4_t1.asp#Table-1 (April 11, 2006).
