
7
Computer-Based Assessment Methods

The committee believes that assessments of technological literacy would benefit from—may even require—innovative approaches, especially for the capability dimension, for which test takers must demonstrate iterative problem-solving techniques typical of a design process. Even with thoughtfully developed paper-and-pencil assessments, it would be extremely difficult to assess this dimension. An alternative approach would be to present test takers with hands-on laboratory exercises, but the costs and complexities of developing, administering, and “grading” a truly hands-on design or problem-solving activity for a large sample of individuals would be prohibitive.

Social scientists, public opinion polling organizations, and others interested in assessing how out-of-school experiences contribute to technological literacy have few tools at their disposal. In national-scale surveys, for example, it is customary to contact participants by telephone using various forms of random-digit dialing. However, response rates have dropped significantly in recent years because of the growing number of research surveys, the exponential increase in cell phone use, and other factors, raising concerns about the reliability and validity of survey data. Free-choice learning environments, such as museums and science centers, are also struggling to find ways to measure the attitudinal changes and learning that result from exposure to exhibits and other programs.

Computer-based methods make possible presentation strategies and analyses that would be impractical at best, and often out of the question, with traditional assessment methods. They could also have several practical advantages over traditional methods. They could provide faster, more accurate scoring (Bahr and Bahr, 1997), reduce test-administration times (Shermis et al., 1996), and make possible relatively low-cost scaling to large numbers of test takers. They could also be designed to meet the needs of special populations, including people with physical disabilities and people from diverse cultural or linguistic backgrounds (Naglieri et al., 2004).

However, there are legitimate concerns about using computers in educational testing. One potential limitation is the computer literacy of the test population. Test takers, whether children or adults, who lack basic familiarity with computers and keyboarding may not perform as well as those who have such skills (Russell, 1999). In addition, requirements for computer memory and processing speed, graphics quality, and bandwidth (for applications using the Internet) may pose significant cost and resource barriers.


Computer-based tests would be just as susceptible to cheating as traditional paper-and-pencil assessments, although the types of cheating and strategies for countering them may differ. For example, someone other than the registered examinee could take the test or help answer questions on an assessment administered remotely (online). To preclude this kind of cheating, authentication could be attempted using a biometric measure (e.g., a fingerprint or retina scan), or the test taker could be required to take a short, proctored confirmatory test (Segall, 2001).

It is important to keep in mind that although computer technology could potentially increase testing flexibility, authenticity, efficiency, and accuracy, computer-based assessments must still be subject to the same defensible standards as paper-and-pencil assessments, particularly if the results are used to make important decisions. The reference of choice is Standards for Educational and Psychological Testing (AERA et al., 1999).

The following discussion focuses on aspects of computer-based testing that offer significant potential benefits for the assessment of technological literacy.

Computer-Based Adaptive Assessments

Computer-based, flexi-level, branching, and stratified adaptive testing have been investigated for more than 30 years (Baker, 1989; Bunderson et al., 1989; Lord, 1971a,b,c; van der Linden, 1995; Weiss, 1983). Research has focused mostly on using interactive (computer) technology to select, in real time, specific items to present to individual examinees based on their responses to previous items. Incorrect responses evoke less difficult items in that dimension, whereas correct responses evoke increasingly difficult items, until the standard error of estimate for that dimension oscillates regularly, within preset confidence levels, around a particular value.
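The item-selection loop described above can be illustrated with a short sketch. The example below assumes a simple one-parameter (Rasch) item response model, a numeric item bank, and an illustrative stopping threshold on the standard error; it is a minimal sketch of the select-respond-reestimate cycle, not a description of any operational testing system.

```python
import math
import random

def p_correct(theta, difficulty):
    """Probability of a correct response under a one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def adaptive_test(item_bank, respond, se_target=0.35, max_items=30):
    """Administer items adaptively until the standard error of the ability
    estimate falls below se_target or the item limit is reached."""
    theta, se = 0.0, float("inf")     # provisional ability estimate and its standard error
    administered = []                 # (difficulty, was_correct) pairs
    remaining = list(item_bank)
    while remaining and len(administered) < max_items and se > se_target:
        # The most informative unused item under this model is the one whose
        # difficulty is closest to the current ability estimate.
        item = min(remaining, key=lambda b: abs(b - theta))
        remaining.remove(item)
        administered.append((item, respond(item)))
        # Re-estimate ability with a few Newton-Raphson steps on the likelihood,
        # keeping the estimate in a bounded range to avoid runaway values.
        for _ in range(10):
            info = sum(p_correct(theta, b) * (1 - p_correct(theta, b))
                       for b, _ in administered)
            residual = sum(x - p_correct(theta, b) for b, x in administered)
            theta = max(-4.0, min(4.0, theta + residual / info))
        se = 1.0 / math.sqrt(info)
    return theta, se, len(administered)

# Illustrative run: a simulated examinee whose true ability is 1.0
bank = [d / 10.0 for d in range(-30, 31)]   # item difficulties from -3.0 to 3.0
simulated_examinee = lambda difficulty: random.random() < p_correct(1.0, difficulty)
print(adaptive_test(bank, simulated_examinee))
```

The stopping rule in the sketch corresponds to the behavior described above: once the standard error of the estimate settles below the preset threshold, no further items are presented.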

Adaptive testing has been used by the U.S. Department of Defense in some high-profile areas. For example, a computerized version of the Armed Services Vocational Aptitude Battery (ASVAB) has been administered to thousands of recruits since 1998. ASVAB now uses computers for item writing, item banking, test construction, test administration, test scoring, item and test analyses, and score reporting (Baker, 1989). Overall, research findings and experience suggest that tests using adaptive techniques are shorter, more precise, and more reliable than tests using other techniques (Weiss, 2004). Therefore, it is reasonable to expect that adaptive testing would be effective for assessments of technological literacy.


However, computer-based adaptive testing has some shortcomings. Because of the nature of the algorithms used to select successive test questions, computer-adaptive items are usually presented only once. Thus, test takers do not have an opportunity to review and modify responses, which could be a disadvantage to some test takers who might improve their scores by changing responses on a traditional paper-and-pencil test.

In theory, each person who takes a computer-adaptive test is presented with a unique subset of the total pool of test items, which would seem to make it very difficult for cheaters to beat the system by memorizing individual items. However, this assumption was challenged in the mid-1990s when significant cheating was uncovered on the Educational Testing Service (ETS) computer-adaptive Graduate Record Exam (Fair Test Examiner, 1997), causing the company to withdraw this version of the exam. ETS has since made a number of changes, including enlarging the item pool, and the online test is now back on the market.

The two main costs of computer-adaptive testing are (1) the software coding necessary to create an adaptive test environment and (2) the creation of items. Although the cost varies depending on the nature of the assessment, it is not unusual for an assessment developer to spend $250,000 for software coding (D. Fletcher, Institute for Defense Analyses, personal communication, February 27, 2006). Per-item development costs are about the same for paper-and-pencil and computer-adaptive tests, but two to four times as many items may be required to support a computerized assessment. Nevertheless, computerized adaptive tests, such as the Renaissance Learning Star Reading Test (http://www.renlearn.com/starreading/), are being used in some K–12 settings. Some firms (e.g., Microsoft) are also using adaptive testing to certify an individual's product knowledge.
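The tradeoff described above, a large fixed software cost against a larger item pool, can be made concrete with a rough back-of-the-envelope calculation. In the sketch below, the $250,000 software figure is the one cited in the text; the per-item cost and fixed-form pool size are hypothetical placeholders chosen only for illustration, not figures from the report.

```python
# Illustrative cost comparison: fixed-form test vs. computer-adaptive test.
PER_ITEM_COST = 1_500        # hypothetical development cost per item (assumed equal for both formats)
FIXED_FORM_ITEMS = 100       # hypothetical size of a paper-and-pencil item pool
ADAPTIVE_SOFTWARE = 250_000  # software coding estimate cited in the text

for multiplier in (2, 3, 4):                 # "two to four times as many items"
    adaptive_items = FIXED_FORM_ITEMS * multiplier
    fixed_cost = FIXED_FORM_ITEMS * PER_ITEM_COST
    adaptive_cost = ADAPTIVE_SOFTWARE + adaptive_items * PER_ITEM_COST
    print(f"x{multiplier}: fixed form ${fixed_cost:,}, adaptive ${adaptive_cost:,}")
```

Under these assumed numbers the fixed costs dominate at small scale, which is why low-cost scaling to large numbers of test takers matters for the economics of adaptive testing.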

Simulations

Rather than presenting a series of test items, even items adapted to an individual's responses, assessments might be improved by immersing the test taker in simulations of real-life situations. This idea is particularly appealing for assessments of technological literacy, which necessarily emphasize capability, critical thinking, and decision making, in addition to basic knowledge.

With simulated environments, performance and competence can be assessed in situations that cannot be attempted in the real world. Aircraft can be crashed, bridges can be tested with heavy loads, expensive equipment can be ruined, and lives can be risked in simulated environments in ways that would be impractical, or unthinkable, in the real world. Simulated environments can also make the invisible visible, compress or expand time, and repeatedly reproduce events, situations, and decision points.

The military has long used simulations to assess the readiness of individuals and groups for military operations (Andrews and Bell, 2000; Fletcher, 1999; Fletcher and Chatelier, 2000; Pohlman and Fletcher, 1999). Industry also uses simulation-based assessments for everything from device maintenance and social role-playing to planning marketing campaigns (Aldrich, 2004). In formal education, simulations and computer-based modeling are being investigated as tools for improving learning in biology, chemistry, and physics (e.g., Concord Consortium, 2005; TELS, 2005; Thinkertools, 2005).

Simulation can be used in a variety of ways: (1) in design, to describe the behavior of a system that does not yet exist; (2) in analysis, to describe the behavior of an existing system under various operating conditions; (3) in training, to shape the behavior of individuals and groups and prepare them for situations they may encounter on the job; and (4) in entertainment, to provide computer games (Smith, 2000). The quality of a simulation depends on its purpose—the question(s) it is expected to answer—and the accuracy with which it represents system components that are relevant to this purpose.


A simulation can be used to situate individuals in the system it represents and then compare their judgments about the operation of the system with those of the simulation. Simulations might represent a system with sufficient accuracy to allow individuals and groups to try to understand and apply technology, without delving into the scientific basis of the system’s operation.

Because simulation-based assessments have highly reactive and interactive capabilities, they can be more sophisticated and elaborate than paper-based tests and can provide more comprehensive, more substantive measures of technological literacy. Simulations can not only give individuals or teams opportunities to demonstrate technological literacy by designing, building, and applying systems; they can also review the results, assess the ability to correct errors, apply probability techniques to infer understanding from actions, and "coach" or "supply hints" to improve partial solutions. One can imagine a number of simulated design-related tasks (Box 7-1) in which individuals could build and test their own systems and system components within a larger, simulated context that could assess their actions.
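As a concrete illustration of how a simulation might treat each action as evidence, the sketch below models the first Box 7-1 task, assembling a working system from components: it logs every connection a test taker attempts, scores completeness and wasted moves, and offers a hint toward a missing connection. The component names, required connections, and scoring rule are illustrative assumptions, not elements of any existing assessment.

```python
from dataclasses import dataclass, field

@dataclass
class AssemblyTask:
    """Toy model of the Box 7-1 task 'assemble a working system from components'.
    The required connections and hint text are invented for illustration."""
    required: set = field(default_factory=lambda: {("pump", "motor"), ("motor", "controller")})
    log: list = field(default_factory=list)

    def connect(self, a, b):
        """Record an attempted connection; every action becomes a piece of evidence."""
        action = tuple(sorted((a, b)))
        correct = action in {tuple(sorted(r)) for r in self.required}
        self.log.append((action, correct))
        return correct

    def score(self):
        """Return (fraction of required connections made, number of wasted actions)."""
        made = {a for a, ok in self.log if ok}
        needed = {tuple(sorted(r)) for r in self.required}
        return len(made & needed) / len(needed), sum(1 for _, ok in self.log if not ok)

    def hint(self):
        """Coach the test taker toward a missing connection, if any remain."""
        made = {a for a, ok in self.log if ok}
        for r in self.required:
            if tuple(sorted(r)) not in made:
                return f"Consider how the {r[0]} and the {r[1]} should be linked."
        return "System complete."

task = AssemblyTask()
task.connect("pump", "controller")    # incorrect attempt, logged as evidence
task.connect("pump", "motor")         # correct connection
print(task.hint())                    # coaches toward the missing motor-controller link
print(task.score())                   # (0.5, 1)
```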

One concern about computer-based simulations is the cost of developing them. In some instances, the costs could even outweigh the value of using simulation in an assessment. But determining when simulation would be too expensive requires that one know the costs and benefits of assessment with and without simulation, and the committee was unable to find studies that address this issue.

Cost-benefit decisions would have to take into account the time-saving potential of so-called authoring tools (software designed to simplify the creation of simulations). A number of off-the-shelf products have been developed for this purpose, such as Macromedia Captivate (http://www.macromedia.com/software/captivate) and Vcommunicator Studio (http://www.vcom3d.com/vstuidio.htm).

BOX 7-1

Sample Simulation Tasks for Assessing Technological Literacy

  • Assemble a working system from components.

  • Disassemble a working system and identify the purpose of each component.

  • Redesign a working system to make it more ergonomic, more environmentally friendly, or more cost effective.

  • Repair a nonworking or faulty system by replacing one or more components.

  • Operate a system (or system of systems) to achieve a specified outcome.

  • Observe a debate (portrayed by actors or animated figures) about a controversial new technology, choose a point of view, and defend it using information gathered from the Web.


Other authoring tools have been developed with government funding by academic researchers (e.g., Munro and Pizzini, 1996; Pizzini and Munro, 1998).

One study describes the use of DIAG, a set of authoring tools developed by the Behavioral Technology Laboratories at the University of Southern California, to create a simulation-based instructional module for diagnosing faults in an aircraft power-distribution system (Towne, 1997). The module consists of 28 screen displays (including a fully operational front panel simulation), 13 operational circuit breakers, 11 connectors, 94 wires, and 21 other components that could be faulty. The system was capable of generating and diagnosing 19,110 fault conditions.

Using the authoring tool, Towne found that it required 22 person-days to develop the module with all of the control and logic necessary for its operation as an instructional system. Without DIAG, he estimated that the time required would have been 168 days. Whether 22 days of a technician's time is a reasonable cost for the development of a computer-based simulation for assessing technological literacy depends on the uses of the simulation and the decisions it is intended to inform. In any case, this study suggests that authoring tools can substantially reduce the cost of developing simulations.
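To make the idea of a fault-diagnosis simulation more tangible, the sketch below injects a hidden fault into a small set of components, lets a test taker run continuity checks, and scores the diagnosis by the number of checks used. It is a toy illustration under assumed component names and an assumed scoring rule; it does not reproduce DIAG or Towne's module.

```python
import random

# Toy fault-diagnosis simulation in the spirit of the power-distribution module above.
COMPONENTS = ["breaker_1", "connector_4", "wire_17", "relay_2"]

class FaultSim:
    def __init__(self):
        self.fault = random.choice(COMPONENTS)   # hidden fault the test taker must find
        self.checks = 0

    def measure(self, component):
        """Simulate a continuity check: True means the component tests good."""
        self.checks += 1
        return component != self.fault

    def diagnose(self, guess):
        """Simple efficiency score: a correct diagnosis in fewer checks scores higher."""
        if guess != self.fault:
            return 0.0
        return max(0.0, 1.0 - 0.1 * (self.checks - 1))

sim = FaultSim()
# Check components in order until one fails the continuity test.
suspect = next(c for c in COMPONENTS if not sim.measure(c))
print(suspect == sim.fault, sim.diagnose(suspect))
```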

Despite the increasing use of simulations by industry, the military, and educators, the design, development, and use of simulations specifically for assessment are rarely discussed in the technical literature. In addition, the prospect of assessment via simulation has raised measurement questions that are only beginning to be articulated and addressed by assessment specialists. For instance, O'Neil and colleagues have conducted empirical studies of psychometric properties, such as reliability, validity, and precision (e.g., O'Neil et al., 1997a,b).


After reviewing the potential of using simulation for assessment, the committee identified several questions for researchers (Box 7-2). With simulations, individuals (or groups) may be immersed in a system (or situation) that reacts to their decisions and allows them to achieve their goals, or not, providing feedback on their success or failure. However, test takers may sometimes take correct actions for the wrong reasons; in other words, they may be lucky rather than competent. This could also happen, of course, in any design problem or laboratory-based exercise. Sometimes, if an incorrect decision is made early in the running of a simulation, all subsequent actions, even if correct, may lead to failure at the end.


BOX 7-2

Sample Research Questions for Computer-Based Simulation and Games for Assessment of Technological Literacy

Can each action taken by an individual in a simulation or game be treated as a test item, with its correctness judged by an on-demand, real-time assessment of the circumstances in which the action is taken, or must the prior actions that led to that context also be taken into account?

What model of technological expertise should be used, and how might that model be superimposed on the simulation (or game) to identify participants’ areas and levels of capability?

How can individuals’ misconceptions about technology be incorporated into simulation-based assessments?

How can simulations and games be constructed to avoid gender, cultural, and other kinds of bias?

What aspects of technological literacy can best be measured via simulations and games?

Should simulation-based assessment be used to assess the technological literacy of groups or teams, as opposed to the technological literacy of individuals?

Could automated means, such as those used by intelligent tutoring systems, be used to develop simulation- and game-based assessments of technological literacy?

How can the costs of developing computer-based assessments be minimized?

Sometimes, an incorrect decision toward the end of a simulation may be inconsequential. In addition, simulations begin with a set of circumstances (a scenario), and a change in any one of those circumstances could change the entire nature of the assessment.

Nevertheless, researchers are making progress in using simulations for assessing complex problem solving comparable to the skills required for technological literacy. For instance, one promising approach is based on evidence-centered design (ECD) (Mislevy et al., 2003). In this approach, capabilities are identified for a subject area and organized into a graphical framework. ECD then shows how to connect the responses of test takers working in a complex simulated environment to the framework. Bennett and colleagues (2003) have provided an example of how ECD might be used to assess scientific-inquiry skills in a simulated environment.
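A highly simplified sketch of the ECD idea follows: proficiency variables form a student model, evidence rules map observables extracted from a simulation session onto those variables, and the evidence is accumulated. The variable names, observables, and weights are illustrative assumptions; operational ECD systems, such as those described by Mislevy et al. (2003), typically use probability models such as Bayesian networks rather than simple weighted sums.

```python
# Student model: proficiency variables the assessment is intended to measure.
STUDENT_MODEL = {"troubleshooting": 0.0, "systems_thinking": 0.0}

# Evidence rules: observable extracted from the simulation -> (proficiency variable, weight).
EVIDENCE_RULES = {
    "isolated_fault_quickly": ("troubleshooting", +1.0),
    "replaced_working_part":  ("troubleshooting", -0.5),
    "predicted_side_effect":  ("systems_thinking", +1.0),
}

def accumulate(observables, model):
    """Fold a stream of scored observables from a simulation session into the student model."""
    updated = dict(model)
    for obs in observables:
        variable, weight = EVIDENCE_RULES[obs]
        updated[variable] += weight
    return updated

# Observables from a hypothetical session in a troubleshooting simulation.
session = ["replaced_working_part", "isolated_fault_quickly", "predicted_side_effect"]
print(accumulate(session, STUDENT_MODEL))
# {'troubleshooting': 0.5, 'systems_thinking': 1.0}
```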

Simulations can also be used in networked configurations to assess individuals or groups at any time and anywhere from remote locations. Both the military and the computer-games industry have made major investments in networked simulation. In the military, the focus is on team performance, rather than individual performance. The members of crews, teams, and units are assumed to be proficient in their individual specialties (they are expected to know how to drive tanks, read maps, fly airplanes, fire weapons) before they begin networked simulation exercises (Alluisi, 1991). Because some aspects of technological literacy also involve group coordination and communication, networked simulation may be useful for assessing these competencies. However, as noted, development costs may be higher than for more traditional test methods.

Computer-Based and Web-Based Games

Games, especially games available over the World Wide Web, may also be useful for assessing technological literacy. Most technology-based games incorporate simulations of real and/or imagined systems. Although games generally emphasize entertainment over realism, well-designed games can provide both.

Some games are designed to be played by thousands of players. According to one estimate, there are some 5 million players of massively multiplayer online games (MMOGs) with at least 10,000 subscribers each (Woodcock, 2005). One might imagine an ongoing (continuous and unobtrusive) assessment of technological literacy based on an MMOG that aggregates data from the activities of hundreds of thousands of players, who could contribute minimal personal data without compromising their privacy. Provisions would have to be put in place to ensure that participation was voluntary.

One example of a game that might be adapted to assess technological literacy is "Monkey Wrench Conspiracy" (available from http://www.Games2train.com). In this game, which is actually a set of training modules for new users of another company's computer-aided design/computer-aided manufacturing (CAD/CAM) software, the player (i.e., trainee) becomes an intergalactic secret agent who has to save a space station from attack by using CAD software to build tools, repair weapons, and defeat booby traps. The 30 tasks to be performed are presented in order of difficulty and keyed to increasing levels of technological capability. Because the game is modular, modified or new tasks can be added easily; thus, the concept of technological literacy could evolve with the technology.


Another useful feature of computer games is their capacity to motivate. Large numbers of people are motivated to play games, perhaps even games intended to assess technological literacy, for extended periods of time, which would increase the reliability and accuracy of the assessments they could provide. A computer game that assesses technological literacy could serve as a national assessment instrument for identifiable segments of the population. If players allowed their responses to be anonymously collected and pooled, a well-designed game that taps technological knowledge and capability could become an unobtrusive, continuous, self-motivating, and inexpensive source of diagnostic information on the levels of technological literacy of different segments of the population.

Considerable research has been done to identify and describe gender differences in game-seeking and game-playing behavior, whether on a personal computer, a video arcade console, or online. In absolute numbers, at least as many women as men play games, including online games, but women prefer different types of games and different types of interactions (Crusoe, 2005; Robar and Steele, 2004). Women prefer quizzes, trivia games, and board and contest games, and they tend to enjoy the social and relationship-building aspects of online gaming. Men, in contrast, prefer action, strategy, and military games and games that involve fighting or shooting. Both men and women seem to be interested in simulations (e.g., The Sims), racing games (e.g., Need for Speed Underground), and role-playing games (e.g., Everquest).


Male-female differences in online game-playing behavior suggest that assessments that rely on computer technology may also be skewed by gender (i.e., sample bias). Other potential sources of sample bias include socioeconomic status and age. Lower-income individuals, for example, may have relatively infrequent access to computers and computer-game software and therefore may have little experience with, or interest in, operating computers and engaging in computer-based simulation. Similarly, older adults who have not grown up in the digital age—a demographic Prensky dubs "digital immigrants"—may have varying degrees of difficulty adapting to and using digital technology (Prensky, 2001). They may also simply have less interest in interacting with computers. Whether or not one accepts Prensky's characterization, assessment developers will have to ensure that the mode of assessment does not bias results based on test takers' computer literacy skills (Haertel and Wiley, 2003).


Electronic Portfolios

Artists, dancers, musicians, actors, and photographers have used portfolios to demonstrate their competency and show examples of their work. In formal education, portfolios have been used in K–12 and undergraduate classrooms, as well as in schools of education (Carroll et al., 1996). Portfolios typically document student projects, often detailing the iterative steps in the production of a finished product. Portfolios can provide information for both formative and summative assessments, as well as opportunities for accurate measurement of performance and for self-reflection.

Traditional paper-based portfolios, which may include writing, drawing, photos, and other visual information and which have been used for decades by U.S. educators, have several limitations. Most important, they require large amounts of physical storage space, and their contents can be difficult to maintain and share. With the introduction of computers and online communication into educational settings in the early 1990s, digital, or electronic, portfolios could be created (Georgi and Crowe, 1998). Electronic portfolios can be used for many purposes, including marketing or employment (to highlight competencies), accountability (to show attainment of standards), and self-reflection (to foster learning); these purposes may sometimes be at odds with one another (Barrett and Carney, 2005).

To the committee's knowledge, electronic portfolios have not been used in the United States to assess technological literacy as defined in this report. However, electronic portfolios appear to be excellent tools for documenting and exploring the process of technological design. A number of companies produce off-the-shelf portfolio software (e.g., HyperStudio, FolioLive [McGraw-Hill]), and customized software is being developed by universities and researchers in other settings (e.g., Open Source Portfolio Initiative, http://www.osportfolio.org). The question of whether existing software could be adapted for assessments of technological literacy is a subject for further inquiry.


Traditional, paper-based portfolios have been an essential component of the design and technology curriculum in the United Kingdom for documenting and assessing student projects. The portfolios of some 500,000 16-year-olds are reviewed and graded every year. Assembling a portfolio is as much a learning tool as an assessment tool, and students typically report that they learn more from their major project—which may occupy them for as long as eight months of their final year—than from anything else in their design and technology program (R. Kimbell, professor, Technology Education Research Unit, Goldsmiths College, London, personal communication, May 5, 2005).

Recently, the British government funded a research group at Goldsmiths College to develop an electronic-portfolio examination system that enables students to develop design projects digitally, submit them digitally (via a secure website), and have them assessed digitally. In addition to computers and CAD software, other technologies that might enrich electronic portfolios are being considered, such as digital pens that can store what has been written and drawn with them; personal digital assistants that can store task-related data; and speech-to-text software that can enable sharing and analysis of design discussions. If the prototype system is successful, the research team will expand the electronic-portfolio system to four other areas of the curriculum: English, science, and two cross-curricular subjects.

Electronic Questionnaires

Adaptive testing, simulations, games, and portfolios could also be used in informal-education settings, such as museums and science centers. For example, portable devices, such as PC tablets and palm computers, might be used in museums, where people move from place to place. A questionnaire presented via these technologies could include logic branching and dynamic graphics, allowing a respondent to use visual as well as verbal resources in thinking about the question (Miller, 2004).
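The logic branching mentioned above is easy to picture with a short sketch of a museum-exit questionnaire in which each answer determines the next question. The questions, answer options, and branch rules below are invented for illustration and do not come from the report.

```python
# Branching structure: each question names the follow-up question for each answer.
QUESTIONS = {
    "q1": {"text": "Did you try the design-a-bridge exhibit?",
           "branch": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "What did you change after your first bridge failed?",
           "branch": {}},          # free response; this branch ends here
    "q3": {"text": "Which exhibit did you spend the most time at?",
           "branch": {}},
}

def run_questionnaire(answer_fn, start="q1"):
    """Walk the branching structure, collecting (question id, answer) pairs."""
    responses, current = [], start
    while current:
        question = QUESTIONS[current]
        answer = answer_fn(question["text"])
        responses.append((current, answer))
        current = question["branch"].get(answer)   # None ends the questionnaire
    return responses

# Canned answers stand in for a touch-screen respondent.
canned = iter(["yes", "made the deck thicker"])
print(run_questionnaire(lambda text: next(canned)))
# [('q1', 'yes'), ('q2', 'made the deck thicker')]
```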

Very short questionnaires, consisting of only one or two questions, could be delivered as text messages on cell phones, a technique that some marketing companies now use to test consumer reactions to potential new products or product-related advertising. At least one polling organization used a similar technique to gauge young voters' political leanings during the 2004 U.S. presidential election (Zogby International, 2004). Finally, considering that more than 70 percent of U.S. homes have Internet access (Duffy and Kirkley, 2004), informal-learning centers, survey researchers, and others interested in tapping into public knowledge and attitudes about technology could send follow-up questionnaires by e-mail or direct respondents to online surveys. Several relatively inexpensive software packages are available for designing and conducting online surveys, and the resulting data usually cost less and are of higher quality than data from traditional printed questionnaires or telephone interviews.

References

AERA (American Educational Research Association), APA (American Psychological Association), and NCME (National Council on Measurement in Education). 1999. Standards for Educational and Psychological Testing. Washington, D.C.: AERA.

Aldrich, C. 2004. Simulations and the Future of Learning. San Francisco: Pfeiffer.

Alluisi, E.A. 1991. The development of technology for collective training: SIMNET, a case history. Human Factors 33(3): 343–362.

Andrews, D.H., and H.H. Bell. 2000. Simulation-Based Training. Pp. 357–384 in Training and Retraining: A Handbook for Business, Industry, Government, and the Military, edited by S. Tobias and J.D. Fletcher. New York: Macmillan Reference USA.

Bahr, M.W., and C.M. Bahr. 1997. Education assessment in the next millennium: contributions of technology. Preventing School Failure 4(Winter): 90–94.

Baker, F.B. 1989. Computer technology in test construction and processing. Pp. 409–428 in Educational Measurement, 3rd ed., edited by R.L. Linn. New York: Macmillan.

Barrett, H., and J. Carney. 2005. Conflicting paradigms and competing purposes in electronic portfolio development. Educational Assessment. Submitted for publication.

Bennett, R.E., F. Jenkins, H. Persky, and A. Weiss. 2003. Assessing complex problem-solving performances. Assessment in Education 10(3): 347–359.

Bunderson, C.V., D.K. Inouye, and J.B. Olson. 1989. The four generations of computerized educational measurement. Pp. 367–408 in Educational Measurement, 3rd ed., edited by R.L. Linn. New York: Macmillan.

Carroll, J., D. Potthoff, and T. Huber. 1996. Learning from three years of portfolio use in teacher education. Journal of Teacher Education 47(4): 253–262.

Concord Consortium. 2005. Molecular Logic Project. Available online at: http://molo.concord.org/ (October 19, 2005).

Crusoe, D. 2005. A discussion of gender diversity in computer-based assessment. Available online at: http://www.bitculture.org/storage/DHC_Gender_Div_EdDRvw0705.pdf (December 23, 2005).

Duffy, T.M., and J.R. Kirkley. 2004. Learning Theory and Pedagogy Applied in Distanced Learning: The Case of Cardean University. Pp. 107–141 in Learner Centered Theory and Practice in Distance Education: Cases from Higher Education, edited by T.M. Duffy and J.R. Kirkley. Mahwah, N.J.: Lawrence Erlbaum Associates.

Fair Test Examiner. 1997. ETS and test cheating. Available online at: http://www.fairtest.org/examarts/winter97/etscheat.htm (January 4, 2006).

Fletcher, J.D. 1999. Using networked simulation to assess problem solving by tactical teams. Computers in Human Behavior 15(May/July): 375–402.

Fletcher, J.D., and P.R. Chatelier. 2000. Military Training. Pp. 267–288 in Training and Retraining: A Handbook for Business, Industry, Government, and the Military, edited by S. Tobias and J.D. Fletcher. New York: Macmillan.

Georgi, D., and J. Crowe. 1998. Digital portfolios: a confluence of portfolio assessment and technology. Teacher Education Quarterly 25(1): 73–84.


Haertel, E., and D. Wiley. 2003. Comparability issues when scores are produced under varying test conditions. Paper presented at the Validity and Accommodations: Psychometric and Policy Perspectives Conference, August 4–5, College Park, Maryland.

Lord, F.M. 1971a. Robbins-Monro procedures for tailored testing. Educational and Psychological Measurement 31: 3–31.

Lord, F.M. 1971b. A theoretical study of the measurement effectiveness of flexilevel tests. Educational and Psychological Measurement 31: 805–813.

Lord, F.M. 1971c. The self-scoring flexilevel test. Educational and Psychological Measurement 31: 147–151.

Miller, J. 2004. The Evaluation of Adult Science Learning. Pp. 26–34 in Proceedings of NASA Office of Space Science Education and Public Outreach Conference 2002. ASP Conference Series 319. Washington, D.C.: National Aeronautics and Space Administration.

Mislevy, R.J., R.G. Almond, and J.F. Lukas. 2003. A Brief Introduction to Evidence-Centered Design. RR-03-16. Princeton, N.J.: Educational Testing Service.

Munro, A., and Q.A. Pizzini. 1996. RIDES Reference Manual. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

Naglieri, J.A., F. Drasgow, M. Schmidt, L. Handler, A. Prifitera, A. Margolis, and R. Velasquez. 2004. Psychological testing on the Internet: new problems, old issues. American Psychologist 59(3): 150–162.

O’Neil, H.F., K. Allred, and R.A. Dennis. 1997a. Validation of a Computer Simulation for Assessment of Interpersonal Skill. Pp. 229–254 in Workplace Readiness: Competencies and Assessment, edited by H.F. O’Neil. Mahwah, N.J.: Lawrence Erlbaum Associates.

O’Neil, H.F., G.K.W.K. Chung, and R.S. Brown. 1997b. Use of Networked Simulations as a Context to Measure Team Competencies. Pp. 411–452 in Workplace Readiness: Competencies and Assessment, edited by H.F. O’Neil. Mahwah, N.J.: Lawrence Erlbaum Associates.

Pizzini, Q.A., and A. Munro. 1998. VIVIDS Authoring for Virtual Environments. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

Pohlman, D.L., and J.D. Fletcher. 1999. Aviation Personnel Selection and Training. Pp. 277–308 in Handbook of Aviation Human Factors, edited by D.J. Garland, J.A. Wise, and V.D. Hopkin. Mahwah, N.J.: Lawrence Erlbaum Associates.

Prensky, M. 2001. Digital natives, digital immigrants. On the Horizon 9(5). Available online at: http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf (January 4, 2006).

Robar, J., and A. Steele. 2004. Females and Games. Computer and Video Game Industry Research Study. March 2004. Issaquah, Washington: AisA Group.

Russell, M. 1999. Testing on Computers: A Follow-up Study Comparing Performance on Computer and on Paper. Available online at: http://epaa.asu.edu/epaa/v7n20/ (January 4, 2006).

Segall, D.O. 2001. ASVAB Testing via the Internet. Unpublished paper.

Shermis, M.D., P.M. Stemmer, and P.M. Webb. 1996. Computerized adaptive skill assessment in a statewide testing program. Journal of Research on Computing in Education 29(1): 49–67.

Smith, R.D. 2000. Simulation. Pp. 1578–1587 in Encyclopedia of Computer Science, 4th ed., edited by A. Ralston, E.D. Reilley, and D. Hemmendinger. New York: Grove’s Dictionaries.

TELS (Technology Enhanced Learning in Science). 2005. Web-based inquiry science environment. Available online at: http://wise.berkeley.edu/ (October 19, 2005).


Thinkertools. 2005. Force and motion. Available online at: http://thinkertools.soe.berkeley.edu/Pages/force.html (October 19, 2005).

Towne, D.M. 1997. An Intelligent Tutor for Diagnosing Faults in an Aircraft Power Distribution System. Technical Report 118. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

van der Linden, W.J. 1995. Advances in Computer Applications. Pp. 105–123 in International Perspectives on Academic Assessment, edited by T. Oakland and R.K. Hambleton. Boston: Kluwer Academic Publishers.

Weiss, D.J. 1983. Computer-Based Measurement of Intellectual Capabilities: Final Report. Minneapolis, Minn.: Computerized Adaptive Testing Laboratory, University of Minnesota.

Weiss, D.J. 2004. Computerized adaptive testing for effective and efficient measurement in counseling and education. Measurement and Evaluation in Counseling and Development 37(2): 70–84.

Woodcock, B.S. 2005. Total MMOG Active Subscriptions (Excluding Lineage, Lineage II, and Ragnorak Online). Available online at: http://mmogchart.com/ (August 22, 2005).

Zogby International. 2004. Young Mobile Voters Pick Kerry over Bush 55% to 40%, Rock the Vote/Zogby Poll Reveals: National Text-Message Poll Breaks New Ground. Press release dated October 31, 2004. Available online at: http://www.zogby.com/news/ReadNews.dbm?ID=919 (August 22, 2005).
