

7 Design of Automated Authoring Systems for Tests
Pages 79-89

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 79...
... There is almost no validity evidence supporting these multiple purposes in widely used achievement tests. Such evidence is needed (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).
From page 80...
... What is on our wish list? The goals of one or more configurations of a system are identified below: improved achievement information for educational decision making; assessment tasks that measure challenging domains, present complex stimuli, and employ automated scoring and reporting options; assessment tasks that are useful for multiple assessment purposes; reduced development time and costs of high-quality tests; support for users with a range of assessment and content expertise, including teachers; and reduced timelines for assembling validity evidence.
From page 81...
... We need the assessments to focus primarily on open-ended responses, constructed at one sitting or over time, developed individually or by more than one examinee partner. We want the assessments to reflect explicit cognitive domains, described as families of cognitive demands, with clearly described attributes and requirements.
From page 82...
... One of the questions is whether the search and organizational rules used to organize collections of documents can also be applied within documents to select candidate content for tests. Clearly, it is time for a merger of browser technology, digital library knowledge structures, and test design requirements.
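The idea of applying search rules within a document to surface candidate test content can be sketched very simply. The following is a minimal, hypothetical illustration (all names and the scoring scheme are assumptions, not anything proposed in the chapter): it ranks a document's paragraphs by term overlap with a description of the target cognitive domain, a toy stand-in for the kind of within-document selection discussed above.

```python
# Hypothetical sketch: rank passages within a document by overlap with
# a target domain description, to surface candidate content for tests.
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens, ignoring punctuation and digits."""
    return re.findall(r"[a-z]+", text.lower())

def rank_passages(document, domain_description, top_n=2):
    """Score each blank-line-separated paragraph by how often the
    domain's terms occur in it; return the top-scoring passages."""
    domain_terms = set(tokenize(domain_description))
    passages = [p.strip() for p in document.split("\n\n") if p.strip()]
    scored = []
    for passage in passages:
        counts = Counter(tokenize(passage))
        score = sum(counts[t] for t in domain_terms)
        scored.append((score, passage))
    scored.sort(key=lambda sp: sp[0], reverse=True)
    # Keep only passages that match at least one domain term.
    return [p for score, p in scored[:top_n] if score > 0]

doc = (
    "Photosynthesis converts light energy into chemical energy.\n\n"
    "The school cafeteria serves lunch at noon.\n\n"
    "Chlorophyll absorbs light, driving photosynthesis in plant cells."
)
candidates = rank_passages(doc, "photosynthesis light energy chlorophyll")
```

A production system of the kind envisioned here would of course use richer knowledge structures (digital library metadata, semantic indexing) rather than raw term counts; the point of the sketch is only that ordinary retrieval machinery can be turned inward on a single document.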
From page 83...
... The created assessments would provide an adequate degree of accuracy, with validity arguments drawn from their subsequent empirical data to document their quality. Assessments intended to meet multiple purposes would require additional technical attributes and relevant evidence supporting their applicability for various uses: making individual, group, or program decisions or supporting prescriptions offered to ameliorate unsatisfactory results.
From page 84...
... These groups include teachers (who need to create assessments that map legitimately to standards and external tests), local school district and state assessment developers, the business community, and commercial developers.
From page 85...
... · Fund competing total object-oriented systems, requiring common interoperability standards, addressing different ages of learners and different task complexity. Specifically fund approaches that import candidate content for use in assessment design and development.
From page 86...
... (1997). Accommodation strategies for English language learners on large-scale assessments: Student characteristics and other considerations (CSE Tech.
From page 87...
... (1997). Development of automated scoring algorithms for complex performance assessments: A comparison of two approaches.
From page 88...
... Vol. II: Solving instructional design problems (pp.
From page 89...
... (1998). Learning from text: Matching readers and texts by latent semantic analysis.

