examinees based on responses to previous items. Incorrect responses evoke less difficult items in that dimension, whereas correct responses evoke increasingly difficult items until the standard error of estimate for that dimension oscillates regularly, within preset confidence levels, around a particular value.
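
To make the selection-and-stopping logic concrete, the sketch below implements a simplified adaptive loop under a one-parameter (Rasch) model. The item bank, the step-size update rule, and the standard-error threshold are illustrative assumptions for this sketch only, not features of any particular operational test.

```python
import math
import random

def prob_correct(theta, b):
    """Probability of a correct response given ability theta and item difficulty b (Rasch model)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standard_error(theta, administered):
    """Standard error of the ability estimate = 1 / sqrt(test information)."""
    info = sum(prob_correct(theta, b) * (1.0 - prob_correct(theta, b)) for b in administered)
    return float("inf") if info == 0 else 1.0 / math.sqrt(info)

def adaptive_test(item_bank, true_theta, se_target=0.3, max_items=30):
    theta = 0.0                  # provisional ability estimate
    remaining = list(item_bank)  # unused item difficulties
    administered, responses = [], []
    while remaining and len(administered) < max_items:
        # Select the unused item whose difficulty is closest to the current
        # estimate (the most informative item under the Rasch model).
        b = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(b)
        correct = random.random() < prob_correct(true_theta, b)
        administered.append(b)
        responses.append(correct)
        # Simple step update: move toward harder items after a correct
        # response, easier items after an incorrect one; steps shrink over time.
        step = 1.0 / len(administered)
        theta += step if correct else -step
        # Stop once the standard error of estimate falls below the target.
        if standard_error(theta, administered) < se_target:
            break
    return theta, administered, responses

if __name__ == "__main__":
    bank = [i / 4.0 for i in range(-12, 13)]  # difficulties from -3.0 to +3.0
    est, items, resp = adaptive_test(bank, true_theta=1.2)
    print(f"Estimated ability: {est:.2f} after {len(items)} items")
```

Operational programs use more sophisticated estimation (e.g., maximum-likelihood or Bayesian updating) and exposure controls, but the essential pattern shown here is the same: item difficulty tracks the examinee's responses, and testing stops when the estimate is precise enough.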

Adaptive testing has been used by the U.S. Department of Defense in some high-profile areas. For example, a computerized version of the Armed Services Vocational Aptitude Battery (ASVAB) has been administered to thousands of recruits since 1998. ASVAB now uses computers for item writing, item banking, test construction, test administration, test scoring, item and test analyses, and score reporting (Baker, 1989). Overall, research findings and experience suggest that tests using adaptive techniques are shorter, more precise, and more reliable than tests using other techniques (Weiss, 2004). Therefore, it is reasonable to expect that adaptive testing would be effective for assessments of technological literacy.

However, computer-based adaptive testing has some shortcomings. Because of the nature of the algorithms used to select successive test questions, computer-adaptive items are usually presented only once. Thus, test takers do not have an opportunity to review and revise earlier answers, which could be a disadvantage for those who might improve their scores by changing responses on a traditional paper-and-pencil test.

In theory, each person who takes a computer-adaptive test is presented with a unique subset of the total pool of test items, which would seem to make it very difficult for cheaters to beat the system by memorizing individual items. However, this assumption was challenged in the mid-1990s when significant cheating was uncovered on the Educational Testing Service (ETS) computer-adaptive Graduate Record Exam (Fair Test Examiner, 1997), causing the company to withdraw this version of the exam. ETS has since made a number of changes, including enlarging the item pool, and the online test is now back on the market.

The two main costs of computer-adaptive testing are (1) the software coding necessary to create an adaptive test environment and (2) the creation of items. Although the cost varies depending on the nature of the assessment, it is not unusual for an assessment developer to spend $250,000 for software coding (D. Fletcher, Institute for Defense Analyses, personal communication, February 27, 2006). Per-item development costs are about the same for paper-and-pencil and computer-adaptive tests, but two to four times as many items may be required to support a computerized assessment. Nevertheless, computerized adaptive


