sionalism (16 percent of the multiple-choice items). Each broad topic includes several subcategories, none of which speak directly to cultural diversity (Tests at a Glance, pp. 10–12).
Comment: The four broad topics and their more detailed descriptions seem reasonable (though a pedagogy specialist could better judge the quality and completeness of the content coverage).
How were the KSAs derived and by whom? The content domain1 was determined by using a job analysis procedure that began in 1990. The job analysis consisted of two sets of activities.
The first set of activities was intended to define the domain for teaching. These activities entailed developing a Draft Job Analysis Inventory, having the draft inventory reviewed by an External Review Panel, and then having it reviewed by an Advisory/Test Development Committee. The draft inventory was produced by ETS test development staff, who reviewed the literature and current state requirements and brought to bear their own experience in human development, educational psychology, and pedagogy to produce an initial set of knowledge and skill statements across five domains (pedagogy, human growth and development, curriculum, context, and professional issues). The draft inventory was then reviewed by an External Review Panel of nine practicing professionals (four working teachers, one school administrator, three teacher educators, and one state education agency administrator), who were selected through a process of peer nominations. All members were prominent in professional organizations and had experience either teaching or supervising teachers. After the External Review Panel reviewed and modified the draft inventory (all reviews were conducted by telephone interview), the revised inventory was reviewed by a nine-member Advisory/Test Development Committee (five practicing teachers and four teacher educators with the same qualifications as the External Review Panel members). At a meeting held in Princeton in June 1990, this committee made additional adjustments and developed draft test specifications. By the close of the meeting, the draft inventory included 64 knowledge statements organized under the five domain headings named above.
The second set of activities in the job analysis consisted of pilot testing the inventory, conducting a final survey, and analyzing the survey results. The pilot test was undertaken to obtain information about the clarity of the instructions and the content of the survey instrument. It was administered to four teachers in the New Jersey area, one school administrator, and one teacher educator.
The final survey was mailed to 1,830 individuals, including practicing teachers, school administrators, teacher educators, and state department of education officials. A total of 820 surveys were returned (a response rate of about 45 percent); of these, only 724 were analyzed (a functional response rate of 40