percent). The response rates varied by type of respondent: the highest was from teachers (48 percent) and the lowest from state agency officials (3 percent). All those surveyed were asked to judge the importance of the 64 knowledge statements on a five-point scale ranging from Not Important (a value of 0) to Very Important (a value of 4). Based on analyses of all respondents and of respondents by subgroup (e.g., teachers, administrators, teacher educators), 75 percent (48) of the 64 knowledge statements were considered eligible for inclusion in the PLT test because they had a mean importance rating of 2.5 or higher on this scale. The final decision regarding inclusion of items related to these knowledge statements rests with ETS; compelling written rationales from the Advisory/Test Development Committee are needed for such items to be included (ETS, Beginning Teacher Knowledge of General Principles of Teaching and Learning: A National Survey, September 1992).
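
As a rough illustration of the screening criterion described above (not ETS's actual analysis; the statement identifiers and ratings below are made up), the following Python sketch flags a knowledge statement as eligible when its mean importance rating reaches 2.5 on the 0-to-4 scale:

    # Sketch of the eligibility rule (hypothetical data, not ETS's analysis):
    # a knowledge statement is treated as eligible when its mean importance
    # rating is 2.5 or higher on the scale running from 0 (Not Important)
    # to 4 (Very Important).

    ELIGIBILITY_CUTOFF = 2.5

    def mean(values):
        return sum(values) / len(values)

    def eligible_statements(ratings_by_statement):
        """ratings_by_statement maps each statement ID to the list of
        importance ratings (0-4) given by respondents; return the IDs
        whose mean rating meets the cutoff."""
        return [statement
                for statement, ratings in ratings_by_statement.items()
                if mean(ratings) >= ELIGIBILITY_CUTOFF]

    # Made-up ratings for three statements:
    sample = {
        "K01": [4, 3, 3, 4, 2],  # mean 3.2 -> eligible
        "K02": [2, 2, 3, 1, 2],  # mean 2.0 -> not eligible
        "K03": [3, 2, 3, 2, 3],  # mean 2.6 -> eligible
    }
    print(eligible_statements(sample))  # ['K01', 'K03']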

To check for across-respondent consistency, the means for each item were calculated for each of the relevant subgroups. Correlations between the means of selected pairs of subgroups were calculated to check the extent to which the relative ordering of the knowledge statements was the same across different mutually exclusive comparison groups (e.g., teachers, administrators, teacher educators; elementary school teachers, middle school teachers, secondary school teachers).
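
A minimal Python sketch of this kind of consistency check appears below, under stated assumptions: the subgroup names, item identifiers, and ratings are hypothetical, and a Pearson correlation of item means is used here even though the ETS report does not specify which correlation coefficient was computed.

    import math

    def item_means(subgroup_ratings):
        """subgroup_ratings maps each item ID to the list of ratings
        given by members of one subgroup; return the mean per item."""
        return {item: sum(vals) / len(vals)
                for item, vals in subgroup_ratings.items()}

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def subgroup_agreement(ratings_a, ratings_b):
        """Correlate the item-mean profiles of two mutually exclusive
        subgroups (e.g., teachers vs. administrators) over shared items."""
        means_a, means_b = item_means(ratings_a), item_means(ratings_b)
        shared = sorted(set(means_a) & set(means_b))
        return pearson([means_a[i] for i in shared],
                       [means_b[i] for i in shared])

    # Hypothetical ratings for three items from two subgroups:
    teachers = {"K01": [4, 3, 4], "K02": [2, 3, 2], "K03": [3, 3, 2]}
    admins = {"K01": [3, 4, 4], "K02": [2, 2, 3], "K03": [3, 2, 3]}
    print(round(subgroup_agreement(teachers, admins), 2))  # 1.0 here: identical mean profiles

A high correlation between two subgroups' item-mean profiles indicates that the groups ordered the statements similarly by importance.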

ETS’s report, Beginning Teacher Knowledge of General Principles of Teaching and Learning: A National Survey, describes the job analysis in detail. Also included are the names of the non-ETS participants on the various committees and the individuals who participated in the pilot test of the inventory. Copies of the various instruments and cover letters also are included.

Comment: The process described is consistent with the literature for conducting a job analysis. This is not the only method, but it is an acceptable one. The initial activities were well done. The use of peer nominations to identify a qualified group of external reviewers was appropriate. Although there was diverse representation geographically, by sex, and by job classification, a larger and more ethnically diverse membership on the External Review Panel would have been preferable. The subsequent review by the Advisory/Test Development Committee helped ensure an adequate list of skills.

The final set of activities also was well done. Although normally one would expect a larger sample in the pilot survey, the use of only six individuals seems justified. It is not clear, however, whether these six individuals included minority representation to check for potential bias and sensitivity. The final survey sample was moderate in size. The response rate from teachers was consistent with (or superior to) response rates from job analyses for other licensure programs. The response rates from the other classifications were somewhat low (and the rate from state education agency administrators appallingly low). An inspection of the characteristics of the 724 usable respondents in the teacher sample showed a profile consistent with that of the sampling frame except that it was somewhat heavy on


