Measuring Literacy: Performance Levels for Adults
blank questionnaire is included in Background Materials at the end of this appendix). A majority (85 percent, n = 28) had managerial responsibilities for adult education in their states or regional areas, although many panelists were instructors as well as program coordinators or directors. Most panelists worked in adult basic education (66 percent, n = 22), general educational development or GED (54 percent, n = 18), or English language instruction (51 percent, n = 17) settings. Almost half (45 percent, n = 15) reported they were very familiar with NALS prior to participating in the standard-setting activities; 42 percent (n = 14) reported that they were somewhat familiar with NALS. Only four participants (12 percent) who completed the questionnaire said they were unfamiliar with NALS prior to the standard setting.
Panelists were assigned to tables using a quasi-stratified-random procedure intended to produce groups with comparable mixtures of perspectives and experience. To accomplish this, panelists were assigned to one of nine tables after being sorted on the following criteria: (1) their primary professional responsibilities (instructor, coordinator or director, researcher), (2) the primary population of adults they worked with as indicated on their resumes, and (3) the areas in which they worked as indicated on their resumes. The sorting revealed that panelists brought the following perspectives to the standard-setting exercise: adult basic education (ABE) instructor, English for speakers of other languages (ESOL) instructor, GED instructor, program coordinator or director, or researcher. Panelists in each classification were then randomly assigned to one of the nine tables so that each group included at least one person from each of the classifications. Each table consisted of four or five panelists and had a mixture of perspectives: instructor, director, researcher, ESOL, GED, and ABE.
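The quasi-stratified random assignment described above can be sketched as follows. This is an illustrative reconstruction, not code from the study: the panelist records, role labels, and the `assign_tables` helper are all hypothetical, and the sketch simplifies by giving each panelist a single classification.

```python
import random
from collections import defaultdict

def assign_tables(panelists, n_tables=9, seed=0):
    """Quasi-stratified random assignment (illustrative sketch).

    Panelists are first grouped by professional classification
    (the stratification step), then each stratum's members are
    shuffled and dealt across the tables so that every table
    receives a mixture of perspectives.

    `panelists` is a list of (name, classification) pairs; these
    field names are assumptions for the sketch.
    """
    rng = random.Random(seed)

    # Stratify: group panelists by classification.
    strata = defaultdict(list)
    for name, role in panelists:
        strata[role].append(name)

    # Deal each stratum randomly across the tables.
    tables = defaultdict(list)
    table_ids = list(range(n_tables))
    for role in sorted(strata):       # deterministic stratum order
        members = strata[role][:]
        rng.shuffle(members)          # randomize within the stratum
        rng.shuffle(table_ids)        # vary which tables fill first
        for i, name in enumerate(members):
            tables[table_ids[i % n_tables]].append((name, role))
    return dict(tables)
```

With nine (or more) panelists in a classification, the round-robin deal guarantees that every table receives at least one panelist from that classification, which is the property the report attributes to the actual assignment.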
Once panelists were assigned to tables, each table was then randomly assigned to two of the three literacy areas (prose, document, or quantitative). The sequence in which they worked on the different literacy scales was alternated in an attempt to balance any potential order effects (see Table C-1). Three tables worked with the prose items first (referred to as Occasion 1 bookmark placements) and the document items second (referred to as Occasion 2 bookmark placements); three tables worked with the document items first (Occasion 1) and the quantitative items second (Occasion 2); and three tables worked with the quantitative items first (Occasion 1) and the prose items second (Occasion 2).
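The counterbalancing scheme can be made concrete with a short sketch. The report states that tables were randomly assigned to sequences; the deterministic mapping below (assigning tables to sequences in rotation) is an assumption made to keep the example simple, and the `table_schedule` helper is hypothetical.

```python
# The three ordered pairs of literacy scales used in the design:
# three tables per sequence, so each scale is worked on by three
# tables in Occasion 1 and three tables in Occasion 2.
SEQUENCES = [
    ("prose", "document"),
    ("document", "quantitative"),
    ("quantitative", "prose"),
]

def table_schedule(n_tables=9):
    """Map table number (1..n_tables) to its (Occasion 1, Occasion 2)
    pair of literacy scales, cycling through the three sequences.
    """
    return {table: SEQUENCES[(table - 1) % len(SEQUENCES)]
            for table in range(1, n_tables + 1)}
```

The design property worth checking is the balance itself: across the nine tables, each of the three scales appears exactly three times in each occasion, so any order effect is spread evenly across scales.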
Ordered Item Booklets
For each literacy area, an ordered item booklet was prepared that rank-ordered the test questions from least to most difficult according to the responses of NALS examinees. The ordered item booklets consisted of all