Job Evaluation Research and Research Needs

Donald Schwab

Job evaluation has been available since the late 1800s and fairly widely implemented by private-sector organizations since the 1930s and especially in the 1940s. Extant research was published largely in the 1940s and 1950s, but it was virtually ignored by investigators in the 1960s and 1970s. Recently, however, spurred largely by interest in comparable worth, attention once again has been focused on job evaluation.

This paper reviews research on job evaluation and suggests appropriate questions and methodologies for the conduct of empirical inquiries needed to assess job evaluation procedures in the context of present challenges. It begins with a brief discussion of perspectives on job evaluation and the implications of these perspectives for the processes central to the installation and maintenance of evaluation systems over time. Existing research investigations bearing on these perspectives and processes are then reviewed as a springboard for suggesting research needed on job evaluation. The discussion of needed research also focuses on issues explicitly evolving from the comparable worth controversy.

PERSPECTIVES

Job evaluation is generally characterized as an administrative procedure designed to help employers develop and maintain job hierarchies for purposes of making pay differentials. Moreover, there is general agreement that the objective of job evaluation is to produce an acceptable pay structure (Munson, 1963:60):
The true function of job evaluation, and it's an important one, is to rationalize and gain acceptability for some way of distributing a batch of money in wages.

Although there is widespread agreement on the general objective of job evaluation, there is considerable disagreement on how acceptability is to be achieved. The dominant perspective views job evaluation from the point of view of applied measurement, with accompanying emphasis on characteristics such as objectivity, generalizability, and parsimony (see, e.g., Viteles, 1941). Industrial psychologists and engineers and most textbook authors have viewed critical job evaluation issues from this point of view. Acceptability of the results of job evaluation from this perspective is seen as being heavily dependent on the quality of the scores that emerge from the measuring instruments developed to describe and evaluate jobs. To what extent are such scores free from random and systematic errors, for example? How can measuring instruments be changed or improved to reduce such errors?

A very different perspective on job evaluation has emerged from the research and thinking of institutional economists (Kerr and Fisher, 1950; Livernash, 1957). They view job evaluation as a procedure for working out conflicts that inevitably arise about pay differentials over time. These conflicts are largely a function of the fact that internal acceptability (based largely on job content) varies from external acceptability (based largely on market factors). While institutionalists sometimes argue that both internal and external equity are achieved when a job evaluation system is initially installed (e.g., Livernash, 1957), exogenous forces immediately begin to pull them apart. The major task for job evaluation, then, is to accommodate these conflicting forces.
The objectivity of the instrumentation emphasized by the applied measurement perspective contrasts with a view by the institutionalists that sees job evaluation as a flexible set of rules of the game (Kerr and Fisher, 1950:87): "The technical core of a plan (instrumentation), on which so much attention is lavished, has generally less bearing on the ultimate results than either the environment into which it is injected or the policies by which it is administered." Institutionalists, then, emphasize the historical milieu of the system's implementation and the administrative procedures used initially and especially to maintain the system over time. Research and research needs consistent with this orientation thus emphasize the importance of accounting for the context and longitudinal elements of job evaluation processes within organizations.

AVAILABLE RESEARCH

Job evaluation research to date has been most strongly influenced by the applied measurement perspective. Five issues of varying significance have
dominated the empirical literature: (1) the reliability of evaluations, (2) the predictability of wage distributions, (3) the convergence between evaluation systems, (4) redundancy in compensable factors, and (5) the impact of rater characteristics on evaluation scores.1 Each is reviewed briefly below.

1. Reliability of evaluations. Of obvious importance to an understanding of job evaluation from a measurement perspective is the question of reliability. More specifically, research has focused on the degree to which evaluations of jobs using point systems are free of random error attributable to the individuals (see, e.g., Ash, 1948; Doverspike et al., 1983; Lawshe and Farbo, 1949; Lawshe and Wilson, 1947) or groups (Schwab and Heneman, 1984) performing the evaluations. In general, studies have found that unreliability is a serious problem in the evaluation of specific compensable factors. Total scores are also unreliably evaluated by single evaluators, although total evaluations from pooled assessments of five or more independently derived judgments tend to be reliable.

As one fairly typical example of this sort of research, Lawshe and Wilson (1947) had 10 raters independently evaluate 20 production jobs on an 11-factor point system. Average correlations between ratings of pairs of evaluators ranged from .34 to .82 for the individual compensable factors, and the average for the total score was r = .77. When the average ratings of 5 of these raters (randomly chosen) were correlated with the pooled ratings of the remaining 5 raters, the resulting correlations among compensable factors ranged from .72 to .96, and the reliability coefficient for the total score was .94. Schwab and Heneman (1984) recently assessed the reliabilities obtained from groups of 3 evaluators using consensus procedures to derive ratings of 53 jobs. The intergroup reliability coefficients resulting from this procedure ranged from .39 to .99 on 10 compensable factors, and the reliability coefficient for the total score was .99.

2. Predictability of wage distributions. A large number of demonstrations have been made to show that compensable factor scores (e.g., Chester, 1948; Davis and Tiffin, 1950; Dertien, 1981; Fitzpatrick, 1947; Fox, 1962; Schwab and Heneman, 1984) and quantitatively derived job analysis scores (e.g., McCormick et al., 1972; Robinson et al., 1974; Tornow and Pinto, 1976) can be weighted (typically using multiple regression) to predict wage distributions with moderate success. By way of illustration, Tornow and Pinto developed a wage prediction model by regressing current wages of 433 managers on 13 evaluation dimensions derived from the Management Position Description Questionnaire. The resulting mathematical model was then used to predict current wages of 56 managers not included in the original sample. The model accounted for 81 percent of the wage variance in the latter group (R = .90).

3. Convergence between evaluation systems. Several studies have investigated the extent to which different job evaluation systems yield similar results (Atchison and French, 1967; Chester, 1948; Dunham, 1978; Robinson et al., 1974; Snelgar, 1983). Table 1 summarizes the results of these studies by showing the lowest and highest degree of convergence (in correlational terms) between different job evaluation systems. As can be seen, results vary widely, with correlation coefficients between systems ranging from a low of .54 (Atchison and French, 1967) to a high of .99 (Snelgar, 1983). Unfortunately, the studies generally do not describe the systems investigated very thoroughly, so it is difficult to understand why there is so much variability in results.

4. Redundancy in compensable factors. Point systems often have 10 or more compensable factors on which jobs are evaluated. Using factor analysis or stepwise multiple regression procedures or both, investigators have repeatedly shown that much of the total variance generated by such systems can be explained or accounted for by just a few factors or dimensions (e.g., Davis and Tiffin, 1950; Grant, 1951; Howard and Schutz, 1952; Lawshe, 1945; Lawshe and Maleski, 1946; Lawshe et al., 1948). As one example, Lawshe et al. (1948) found that just 3 compensable factors were necessary to account for from 86 to 96 percent of the variance in the total scores generated from a system of 11 compensable factors used in three firms. This sort of finding is obtained because compensable factors in point systems tend to be highly intercorrelated (collinear).

5. Impact of rater characteristics on evaluation scores. Finally, there have been investigations of whether traits or situational characteristics of evaluators influence mean ratings or reliability.

1 See also Schwab (1980b) and Treiman (1979) for reviews of portions of this research.
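The pooled-rating pattern reported in the reliability studies above can be illustrated with a short simulation. This is a sketch with synthetic data, not the studies' actual ratings; the rater and job counts echo the Lawshe and Wilson design, but the noise level is an illustrative assumption:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)

# Synthetic analogue of a Lawshe-and-Wilson-style design:
# 10 raters independently score 20 jobs; each rating is a latent
# "true worth" plus independent rater error of equal variance.
n_jobs, n_raters = 20, 10
true_worth = rng.normal(size=n_jobs)
ratings = true_worth[:, None] + rng.normal(scale=1.0, size=(n_jobs, n_raters))

# Average correlation between the ratings of all pairs of single evaluators
pair_rs = [np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
           for i, j in combinations(range(n_raters), 2)]
single_rater_r = float(np.mean(pair_rs))

# Correlation between pooled (averaged) ratings of two independent
# 5-rater panels; averaging cancels rater error, so this runs higher.
pooled_r = float(np.corrcoef(ratings[:, :5].mean(axis=1),
                             ratings[:, 5:].mean(axis=1))[0, 1])

print(f"mean single-rater r: {single_rater_r:.2f}")
print(f"pooled 5-vs-5 r:     {pooled_r:.2f}")
```

With error variance equal to true-score variance, single-rater correlations hover near .5 while the 5-versus-5 pooled correlation approaches the Spearman-Brown value of about .83, mirroring the gap between individual and pooled coefficients in the studies reviewed above.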
TABLE 1 Convergence of Results Across Alternative Job Evaluation Systems

                                         Range of Correlation Coefficients
Study                        No. of Plans        Low        High
Atchison and French (1967)        3              .54        .82
Chester (1948)                    3              .85        .97
Dunham (1978)                     6              .89        .97
Robinson et al. (1974)            5a             .82        .95
Snelgar (1983)b                  16              .77        .99

a Includes Market Wage Survey as a "system."
b Includes a sample of "heterogeneous" and a sample of "homogeneous" jobs.

Although there is some
evidence that evaluators' familiarity with jobs influences mean ratings (Madden, 1962, 1963), research does not suggest that evaluations by managers differ appreciably from those by incumbents (Chambliss, 1950) or union representatives (Lawshe and Farbo, 1949). Of more direct relevance to the comparable worth controversy, a few studies have found that evaluations (Doverspike et al., 1983; Schwab and Grams, in press) or quantitative job analysis scores (Arvey et al., 1977) do not differ as a function of the sex of the evaluator.

RESEARCH AGENDA

The paucity of research on job evaluation reviewed here may come as a surprise, given the significance of job evaluation to compensation administration. Clearly, there are a large number of relevant research issues even without the challenges of comparable worth. Questions raised by advocates and critics of comparable worth simply compound that number. What follows is an attempt to suggest important questions that require research answers, especially if job evaluation is to be considered as a mechanism for achieving comparable worth (however defined). It begins with issues that have been raised specifically in a comparable worth context, issues that follow closely from the applied measurement perspective. These are followed by a set of broader, more descriptive questions, which reflect to a greater degree the institutionalist perspective.

Issues Stimulated by Comparable Worth

While advocates of comparable worth have raised many specific criticisms of current job evaluation, their major concerns fall within two broad categories. The first and most important concern has to do with the criterion used to weight compensable factor scores. The second has to do with biases that may occur in the evaluation process itself. Each of these as well as potentially useful research to be performed is discussed below.
The Criterion

As currently used in the private sector, job evaluation is typically validated against a wage criterion (e.g., Schwab, 1980b; Treiman and Hartmann, 1981). That is, the acceptability of job evaluation results is initially determined by the correspondence between the job hierarchy produced by the evaluation system and some existing distribution of wages for those jobs. Sometimes this is done via "policy capturing," wherein the compensable factors are formally weighted to maximize the relationship between evaluation results and wages (Treiman, 1979). In all probability the process is more often done less formally, but, in any event, correspondence between evaluation results and current wages is an important element in determining the acceptability of the system to the employer.

Advocates of comparable worth have, of course, raised objections to the use of wages as the criterion, because they conclude that existing wage distributions are biased against jobs held mainly by women (e.g., Blumrosen, 1979; Treiman and Hartmann, 1981). If wages are biased, and if wages serve as the criterion for job evaluation, that bias will be reflected in the job evaluation results (Schwab and Wichern, 1983).2 As a consequence, some analysts have suggested that wages not be used as the criterion or, if used, "corrected" for sex bias (e.g., Blumrosen, 1979; Treiman and Hartmann, 1981). Although these recommendations are far from universally accepted (see, e.g., McCormick, 1981; Milkovich, 1980; Milkovich and Broderick, 1982; Nelson et al., 1980), empirical research and theory addressing the criterion question are clearly appropriate. What are alternatives to using wages for weighting compensable factors, and what are the implications of such alternatives?

A not inconsequential advantage of an observable criterion such as a wage distribution stems from the fact that it simplifies and makes "objective" the weighting of compensable factors. Once the criterion is agreed on, the procedure for weighting and for determining how well the resulting model "fits" the criterion distribution can proceed statistically. Job evaluation implemented in this manner represents a form of criterion-related (empirical) validation. Alternatively, specification of an unobservable conceptual criterion such as worth casts the problem into the domain of construct validation (Schwab, 1980b).
Not only is establishing construct validity more difficult, but also the standards for deciding when adequate validity has been achieved are less amenable to unambiguous interpretation and hence agreement (see, e.g., Schwab, 1980a). It would appear highly desirable, therefore, if observable criteria could be generated against which job evaluation instruments could be developed and validated.

Unfortunately, comparable worth advocates generally have not made suggestions regarding alternative observable criteria. What follows is thus highly tentative; it is only a suggestion for how research investigating observable criteria for job evaluation might proceed. A starting point might be to focus on the objective of job evaluation, a conceptual criterion that commentators agree on, namely, acceptability of the outcomes to the participants (see also Munson, 1963, on this issue). Could participants agree, for example, on compensable factors and weights that produced perceptually equitable job hierarchies?

The existing job evaluation literature will probably be of little help in identifying likely compensable factors for such a criterion. There seems to be general agreement that existing job evaluation plans have emerged without much thought or research (e.g., Belcher, 1974; Nash and Carroll, 1975). In the historical evolution of the procedure, compensable factors were found that "predicted" market wages; little further development took place, since the major organizational objective was easily achieved.

Equity theory (e.g., Adams, 1963) may be of greater value for identifying compensable factors that will produce perceptually equitable job distributions. Although most research in an employment context has focused on behavioral consequences of inequity (for a review see, e.g., Campbell and Pritchard, 1976), a growing body of literature has examined equity determinants, particularly in a compensation context (e.g., Belcher and Atchison, 1970; Birnbaum, 1983; Dyer et al., 1976; Goodman, 1975; Lawler, 1966). Although these studies have emphasized individual rather than job characteristics, they nevertheless may serve as methodological models for job-evaluation-oriented investigations. One model appears especially well suited to the problem at hand. Specifically, once compensable factors have been tentatively identified, policy capturing (see, e.g., Slovic and Lichtenstein, 1971) could be employed to specify weights and to ascertain the degree of agreement within and between representative samples of management and employee groups.

2 An obvious question pertains to the veracity of the comparable worth conclusion. Are market wages biased against jobs held mainly by women? Issues pertaining to this question are not considered in the present paper.
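The policy-capturing step just described amounts to regressing participants' pay judgments on compensable factor scores to recover the weights the judges implicitly apply. A minimal sketch with synthetic data (the factor names, sample size, and weights are illustrative assumptions, not values drawn from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel data: 40 jobs scored 1-5 on three compensable
# factors (say, skill, effort, responsibility), plus the pay level a
# panel judges equitable for each job.
n_jobs = 40
factor_scores = rng.uniform(1, 5, size=(n_jobs, 3))
implicit_weights = np.array([3.0, 1.0, 2.0])   # the policy to be recovered
judged_pay = (20 + factor_scores @ implicit_weights
              + rng.normal(scale=0.5, size=n_jobs))  # judgment noise

# Ordinary least squares recovers the weights the panel implicitly applied.
X = np.column_stack([np.ones(n_jobs), factor_scores])
coef, *_ = np.linalg.lstsq(X, judged_pay, rcond=None)
intercept, weights = coef[0], coef[1:]

# Fit of the captured policy to the panel's judgments
resid = judged_pay - X @ coef
r_squared = 1 - resid.var() / judged_pay.var()

print("recovered weights:", np.round(weights, 2))
print("R^2:", round(float(r_squared), 3))
```

Running the same regression separately for management and employee panels and comparing the recovered weights would provide one index of the within- and between-group agreement the text calls for.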
If this line of inquiry appears fruitful (i.e., if compensable factors can be found and weighted to predict perceptually equitable job hierarchies), subsequent research can compare results with those obtained from more traditional job evaluation methods and with existing wage distributions. Judgments could then be made about whether the results would be acceptable in terms of political and economic considerations.

Evaluation Bias

The second general concern expressed is the possibility that job evaluation is developed or implemented so that the resulting job hierarchies lead to wage underpayment for predominantly female jobs. "It is likely that most, if not all, job-evaluation systems contain sex bias" (Collette, 1982:154). Such bias could be manifested in several ways.

One potential difficulty discussed has to do with the compensable factors included in the job evaluation system. It has been frequently hypothesized that compensable factors in conventional job evaluation plans favor work
done in predominantly male jobs (e.g., Blumrosen, 1979; Grune, 1982; Remick, 1981; Thomsen, 1977; Treiman, 1979). Evidence supportive of this hypothesis, however, is scarce. Indeed, it is difficult to envision a methodology that would test this hypothesis unambiguously.

For example, Doverspike and Barrett (1984) compared sets of predominantly female with predominantly male jobs on 15 compensable factors. They then took as evidence of bias any difference between the male and female job sets on a number of psychometric characteristics (e.g., reliability, mean differences, and so forth). However, to assume that bias in this instance means anything other than difference, one must additionally assume that (a) ratings were not a function of raters (i.e., no rater bias), and (b) the two job sets were psychometrically identical in some "true" score sense. Given the degree of occupational sex-related segregation known to exist, the latter assumption is particularly hard to accept. Consequently, it is difficult to conclude from such an exercise that anything very meaningful has been learned about potential bias in compensable factors.

A second hypothesis about an error source that could be to the wage disadvantage of predominantly female jobs has to do with bias in either the description or the evaluation of jobs. Analysts or evaluators may deliberately or inadvertently denigrate jobs performed predominantly by women (Grune, 1982; Remick, 1981; Schwab, 1980b; Thomsen, 1981; Treiman, 1979; Treiman and Hartmann, 1981). This hypothesis has been addressed from two perspectives. As already noted in this review, several studies found little evidence that judgments of jobs vary as a function of the sex of the evaluator. If sex stereotypes about jobs exist, they apparently transcend the sex of the individual making the judgment.

Several studies have also been performed that have tried to directly identify sex stereotypes. Arvey et al.
(1977) investigated bias in job analysis by having subjects evaluate two jobs identical in all respects except that one was characterized as female, the other as male (manipulated with photographs). They found no differences as a function of the sex characterization of the job.

Three studies have addressed sex bias in the evaluation of jobs. In a correlational study Mahoney and Blake (1979) found that the perceived femininity of an occupation explained a small but statistically significant amount of recommended salary variance after controlling for effects due to job characteristics. Experimental studies alternatively have found little evidence that student evaluators were influenced by the sex composition of a set of jobs (Grams and Schwab, 1985) and no evidence that experienced compensation specialists or administrators were so influenced (Schwab and Grams, in press).

Thus, the evidence to date does not provide much support for the hypothesis that the sex of the job per se influences job descriptions or evaluations. This evidence, however, is limited, and there is an obvious need for additional research. It is especially important that these sorts of investigations study alternative types of job analysis and evaluation instruments and that they manipulate job stereotypes in alternative ways.

A potential source of bias that has not been previously considered in the comparable worth debate did obtain support in both the Grams and Schwab (1985) and Schwab and Grams (in press) experiments. Specifically, both studies found evidence that evaluations were substantially influenced by the current salary reported for the jobs studied. While more research is again called for, the implications of this finding for wage fairness are potentially profound. For if there is sex bias in current wage structures, replicated evidence that wage rates influence evaluations suggests that bias could enter evaluation results even though salaries are not used as an explicit external criterion in validating the system.

Contextual Issues

The research questions posed above are understandably narrow in the sense that they focus on differential job evaluation results as a function of the predominant sex of the job incumbents. They are also narrow, it seems to me, in the assumptions they make about the job evaluation process as it is conducted in organizations. The issues identified below are illustrative of contextual and more basic information needed to thoroughly understand how job evaluation is used by organizations and how it affects resulting wage structures.

Despite the presumption that job evaluation is widely used by organizations and that its use is increasing (e.g., Nash and Carroll, 1975), few reliable data exist on the number of firms using it, the types of systems used, the jobs included, or the employees covered. Descriptive information on such questions tends to come from ad hoc surveys that have been conducted only periodically. Inferences about general usage have often been drawn from samples of unknown populations.
As a result, a reasonable foundation for a systematic investigation of job evaluation might well begin with comprehensive descriptive data on its use, especially in the private sector. Such a survey should go beyond merely documenting types of plans (point system versus classification, and so forth) and obtain information on the specific procedures used to evaluate and price jobs.

Evaluation Processes

An illustration may be helpful in suggesting the type of information that I believe is needed about how job evaluation is conducted and the importance of obtaining such information for subsequent research. Investigations to
date have tended to use similar methodologies to study job evaluation. The studies described above have nearly always had evaluators independently examine one or more written job descriptions and then evaluate those descriptions using some variant of a common point system. To what extent does this procedure describe organizational practice?

My experience with organizations implementing job evaluation is that the evaluators frequently begin by generating the job descriptions and that, in the process, they obtain information from incumbents, supervisors, and other sources. Schwab and Grams (1983) provide a somewhat more representative confirmation that evaluators typically have substantially more information than is provided by written descriptions when making judgments about jobs. Whether these data increase evaluation accuracy or merely add error variance is largely unknown at this point, but certainly their presence raises questions about the external validity of the research that has been performed.

Pricing Jobs

Descriptive information is also needed about the mechanisms that organizations use to price job hierarchies following the initial evaluation process. Treiman (1979) reported that it is customary for organizations to weight compensable factors so that the relationship with some wage distribution is maximized. To what extent does this actually apply? What procedures are used to maximize the relationship (reevaluation of jobs, changes in the evaluation instrumentation, elimination of outlier jobs, and so forth)? What criteria, if any, are used when wages are not employed in this fashion?

Inferences about what job evaluation accomplishes will not be very informed until we have more data on how organizations use it to make compensation decisions. It is especially important to obtain information about pricing decisions following initial implementation.
Regardless of how the job hierarchy is priced when the system is first installed, what happens to wage structures over time and what adjustments in other personnel or human resources systems are made to accommodate the job evaluation system? Livernash (1957) noted that pressures emerge on the job evaluation system as changes occur either in the external wage contours of the organization (e.g., changes in differential labor demand patterns, modifications in union relationships) or in internal job content (e.g., as a function of product or technological change).

On the basis of field observations, Slichter et al. (1960) and Kerr and Fisher (1950) reported that job evaluation systems can easily fail unless administrative procedures and practices account for these changes. Kerr and
JOB EVALUATION RESEARCH AND RESEARCH NEEDS 47 Fisher in particular suggest not only that alterations are made in the wage structures emerging from job evaluation, but also that other personnel or human resource systems are modified to accommodate wage structure pres- sures. In the air manufacturing industry, for example, they observed not only job reevaluation, inflation of job descriptions, and demoralization of the merit pay systems but also changes in training programs, recruiting prac- tices, and job redesign. Personal conversations with compensation adminis- trators suggest that these responses are far from unique. If organizations routinely modify wage structures following implementa- tion,3 problems are necessarily increased for those who wish to achieve any sort of specific objective through job evaluation (e.g., a homogeneous stan- dard of internal equity). The research of the institutionalists suggests that in order to respond to the multiple and potentially conflicting claims of equity, job evaluation systems must remain flexible (i.e., to some extent they must tolerate distortion). Clearly, current and more representative evidence is needed regarding this important possibility. SUMMARY AND CONCLUSIONS Job evaluation has been thought about and studied from two perspectives. One of these, emphasizing the importance of applied measurement, views job evaluation largely as a scaling system for generating "true" job scores on compensable factors. The other perspective, the institutional view, empha- sizes the role of the job evaluation system in dealing with the conflicts that occur between internal (organizational) and external (market) forces and values. Research on job evaluation has been dominated by the applied measure- ment perspective. Thus a good bit of research has been conducted on the quality of the scores emerging from the evaluation of jobs on compensable factors. 
In particular, research has been performed on the reliability of scores, the validity of those scores for predicting current wage differentials, convergence between alternative evaluation systems and between compensable factors, and the degree to which scores are a function of the individuals performing the evaluations.

Research questions suggested by the comparable worth controversy so far have fallen largely within the purview of the applied measurement perspective. Questions have been raised especially about the quality of the criterion used to weight compensable factor scores and about the possibility that such scores are biased in various ways to the disadvantage of jobs held largely by women. These are certainly appropriate questions, and a number of suggestions for how such research might proceed are offered in this paper.

Nevertheless, it seems to me that the more fundamental challenge to an understanding of job evaluation rests in the institutional perspective. Job evaluation is a complex system, complexly related to a number of other personnel systems (e.g., wage surveys) in the organization. A great number of judgments are necessary to set up such systems and to maintain them over time. Until we learn much more about how these judgments are made, about their consequences, and about the exogenous factors that influence them, we run the risk of establishing policies that will not accomplish the objectives sought.

A number of the research questions suggested in this paper fall within the institutional domain. These, it seems to me, are more basic than the applied measurement issues that have so far dominated the comparable worth debate. However, no claim is made that the institutional issues raised here are exhaustive, or that they are even the most critical to our understanding of job evaluation processes. So bereft are we of views of job evaluation from an institutional perspective that further work needs to be performed just to specify a reasonable research agenda.

3 Indeed, Kerr and Fisher (1950) argued that such modifications are necessary if the job evaluation system is to remain viable. "The more fixed, definite, and self-executing the formula [the formal job evaluation plan], the less will it allow for the other and perhaps more important pressures to which wage rates respond" (p. 94).

ACKNOWLEDGMENT

Financial assistance from the Graduate School of Business and the Graduate School, University of Wisconsin-Madison, is gratefully acknowledged. Helpful comments on an earlier draft were made by Chris Berger, Bob Grams, Heidi Hartmann, and Tom Mahoney.

REFERENCES

Adams, J.S.
1963 Toward an understanding of inequity. Journal of Abnormal and Social Psychology 67:422-436.
Arvey, R.D., E.M. Passino, and J.W.
Lounsbury
1977 Job analysis results as influenced by sex of incumbent and sex of analyst. Journal of Applied Psychology 62:411-416.
Ash, P.
1948 The reliability of job evaluation rankings. Journal of Applied Psychology 32:313-320.
Atchison, T., and W. French
1967 Pay systems for scientists and engineers. Industrial Relations 7:44-56.
Belcher, D.W.
1974 Compensation Administration. Englewood Cliffs, N.J.: Prentice-Hall.
Belcher, D.W., and T.J. Atchison
1970 Equity theory and compensation policy. Personnel Administration 33(3):22-33.
Birnbaum, M.H.
1983 Perceived equity of salary policies. Journal of Applied Psychology 68(1):49-59.
Blumrosen, R.G.
1979 Wage discrimination, job segregation, and Title VII of the Civil Rights Act of 1964. University of Michigan Journal of Law Reform 12:397-502.
Campbell, J.P., and R.D. Pritchard
1976 Motivation theory in industrial and organizational psychology. Pp. 63-130 in M.D. Dunnette, ed., Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.
Chambliss, L.A.
1950 Our employees evaluate their own jobs. Personnel Journal 29(4):141-142.
Chester, D.J.
1948 Reliability and comparability of different job evaluation systems. Journal of Applied Psychology 32:465-475.
Collette, C.O.
1982 Ending sex discrimination in wage setting. Pp. 150-155 in Proceedings of the 35th Annual Meeting of the Industrial Relations Research Association. Madison, Wis.: Industrial Relations Research Association.
Davis, M.K., and J. Tiffin
1950 Cross validation of an abbreviated point job evaluation system. Journal of Applied Psychology 34:225-228.
Dertien, M.G.
1981 The accuracy of job evaluation plans. Personnel Journal 60:566-570.
Doverspike, D., and G.V. Barrett
1984 An internal bias analysis of a job evaluation instrument. Journal of Applied Psychology 69:648-662.
Doverspike, D., A.M. Carlisi, G.V. Barrett, and R.A. Alexander
1983 Generalizability analysis of a point-method job evaluation instrument. Journal of Applied Psychology 68:476-483.
Dunham, R.B.
1978 Job Evaluation: Two Instruments on Sources of Pay Satisfaction. Paper presented at the American Psychological Association, Toronto.
Dyer, L., D.P. Schwab, and R.D. Theriault
1976 Managerial perceptions regarding salary increase criteria. Personnel Psychology 29:233-242.
Fitzpatrick, B.H.
1947 An objective test of job evaluation validity.
Personnel Journal 28(9): 128- 132. Fox, W.M. 1962 Purpose and validity in job evaluation. PersonnelJournal 41:332-337. Goodman, P.S. 1975 Effect of perceived inequity on salary allocation decisions. Journal of Applied Psychol- ogy 60(3):372-375. Grams, R., and D.P. Schwab 1985 Systematic sex-related error in job evaluation. Academy of Management Journal. (Forthcoming) Grant, D.L. 1951 An analysis of a point rating lob evaluation plan. Journal of Applied Psychology 35 :236- 240.
Grune, J.A. 1982 Comparable worth: Issues and perspectives. Discussion. Pp. 169-172 in Proceedings of the 35th Annual Meeting of the Industrial Relations Research Association. Madison, Wis.: Industrial Relations Research Association.
Howard, A.H., and H.G. Schutz 1952 A factor analysis of a salary job evaluation plan. Journal of Applied Psychology 36:243-246.
Kerr, C., and L.H. Fisher 1950 Effect of environment and administration on job evaluation. Harvard Business Review 28(3):77-96.
Lawler, E.E., III 1966 Managers' attitudes toward how their pay is and should be determined. Journal of Applied Psychology 50:273-279.
Lawshe, C.H., Jr. 1945 Studies in job evaluation: II. The adequacy of abbreviated point ratings for hourly-paid jobs in three industrial plants. Journal of Applied Psychology 29:177-184.
Lawshe, C.H., Jr., and P.C. Farbo 1949 Studies in job evaluation: 8. The reliability of an abbreviated job evaluation system. Journal of Applied Psychology 33:158-166.
Lawshe, C.H., Jr., and A.A. Maleski 1946 Studies in job evaluation: 3. An analysis of point ratings for salary paid jobs in an industrial plant. Journal of Applied Psychology 30:117-128.
Lawshe, C.H., Jr., and R.F. Wilson 1947 Studies in job evaluation: 6. The reliability of two point rating systems. Journal of Applied Psychology 31:355-365.
Lawshe, C.H., Jr., E.E. Dudek, and R.F. Wilson 1948 Studies in job evaluation: 7. A factor analysis of two point rating methods of job evaluation. Journal of Applied Psychology 32:118-129.
Livernash, E.R. 1957 The internal wage structure. Pp. 140-172 in G.W. Taylor and F.C. Pierson, eds., New Concepts in Wage Determination. New York: McGraw-Hill.
Madden, J.M. 1962 The effect of varying the degree of rater familiarity in job evaluation. Personnel Administration 25:42-46.
1963 A further note on the familiarity effect in job evaluation. Personnel Administration 26:52-53.
Mahoney, T.A., and R.H. Blake 1979 Occupational Pay as a Function of Sex Stereotypes and Job Content. Paper presented at the meeting of the National Academy of Management, Atlanta.
McCormick, E.J. 1981 Minority report. Pp. 115-130 in D.J. Treiman and H.I. Hartmann, eds., Women, Work, and Wages: Equal Pay for Jobs of Equal Value. Committee on Occupational Classification and Analysis. Washington, D.C.: National Academy Press.
McCormick, E.J., P.R. Jeanneret, and R.C. Mecham 1972 A study of job characteristics and job dimensions as based on the Position Analysis Questionnaire (PAQ). Journal of Applied Psychology 56:347-368.
Milkovich, G.T. 1980 The emerging debate. Pp. 23-47 in E.R. Livernash, ed., Comparable Worth: Issues and Alternatives. Washington, D.C.: Equal Employment Advisory Council.
Milkovich, G.T., and R. Broderick 1982 Pay discrimination: Legal issues and implications for research. Industrial Relations 21:309-317.
Munson, F. 1963 Four fallacies for wage and salary administrators. Personnel 40(4):57-64.
Nash, A.N., and S.J. Carroll, Jr. 1975 The Management of Compensation. Monterey, Calif.: Brooks/Cole.
Nelson, B.A., E.M. Opton, Jr., and T.E. Wilson 1980 Wage discrimination and the "comparable worth" theory in perspective. University of Michigan Journal of Law Reform 13:231-301.
Remick, H. 1981 The comparable worth controversy. Public Personnel Management 10:371-383.
Robinson, D.D., O.W. Wahlstrom, and R.C. Mecham 1974 Comparison of job evaluation methods: A "policy-capturing" approach using the Position Analysis Questionnaire (PAQ). Journal of Applied Psychology 59:633-637.
Schwab, D.P. 1980a Construct validity in organizational behavior. Pp. 3-43 in B. Staw and L.L. Cummings, eds., Research in Organizational Behavior. Vol. 2. Greenwich, Conn.: JAI Press.
1980b Job evaluation and pay setting: Concepts and practices. Pp. 49-77 in E.R. Livernash, ed., Comparable Worth: Issues and Alternatives. Washington, D.C.: Equal Employment Advisory Council.
Schwab, D.P., and R. Grams 1983 A Survey of Job Evaluation Practice Among Compensation Specialists: A Summary of Findings. Technical Report, American Compensation Association, Scottsdale, Ariz.
In press Sex-related errors in job evaluation: A "real-world" test. Journal of Applied Psychology.
Schwab, D.P., and H.G. Heneman, III 1984 Assessment of a Consensus-Based Multiple Information Source Job Evaluation System. Paper presented at the National Academy of Management Meetings, Boston.
Schwab, D.P., and D.W. Wichern 1983 Systematic bias in job evaluation and market wages: Implications for the comparable worth debate. Journal of Applied Psychology 31:353-364.
Slichter, S.H., J.J. Healy, and E.R. Livernash 1960 The Impact of Collective Bargaining on Management. Washington, D.C.: Brookings Institution.
Slovic, P., and S. Lichtenstein 1971 Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance 6:649-744.
Snelgar, R.J. 1983 The comparability of job evaluation methods in supplying approximately similar classifications in rating one job series. Personnel Psychology 36:371-380.
Thomsen, D.J. 1977 Unmentioned problems of salary administration. American Compensation Review (4):11-21.
1981 Compensation and benefits: More on comparable worth. Personnel Journal 60:348-354.
Tornow, W.W., and P.R. Pinto 1976 The development of a managerial job taxonomy: A system for describing, classifying, and evaluating executive positions. Journal of Applied Psychology 61:410-418.
Treiman, D.J. 1979 Job Evaluation: An Analytic Review. Interim Report to the Equal Employment Opportunity Commission. Committee on Occupational Classification and Analysis. Washington, D.C.: National Academy of Sciences.
Treiman, D.J., and H.I. Hartmann, eds. 1981 Women, Work, and Wages: Equal Pay for Jobs of Equal Value. Committee on Occupational Classification and Analysis. Washington, D.C.: National Academy Press.
Viteles, M.S. 1941 A psychologist looks at job evaluation. Personnel 17(3):165-176.