believed), and one result was the Civil Rights Act of 1991, which explicitly prohibits any kind of score adjustments designed to favor one group over another with regard to employment decisions.

This, then, is a simple version of the quandary faced by organizational decision makers. How can employers use the valid assessment procedures they already have in a way that produces a work force that is both optimally capable and representative of the diverse groups in our society? Or how can equally valid instruments be developed that do not produce adverse impact?

This paper attempts to describe some of the ways in which organizations and assessment specialists have tried to adjust to this quandary, the success of these attempts, and what new legal issues might be raised when these procedures are challenged, as they either have been or almost certainly will be. Specifically, the following five approaches to reducing or eliminating the adverse impact of psychological measures will be discussed: (1) inclusion of additional job-related constructs with low or no adverse impact in a battery that includes cognitive or academically based measures with high adverse impact; (2) changing the format of the questions asked or the type of response requested; (3) using computer or video technology to present test stimuli and collect responses; (4) using portfolios, accomplishment records, or other formalized methods of documenting job-related accomplishments or achievements; and (5) changing the manner in which test scores are used: specifically, by the use of banding.

Use of Additional Constructs To Assess Competence

One criticism of traditional personnel selection procedures is that they often focus on a single set of abilities, usually cognitive. These cognitive abilities are relatively easy and inexpensive to measure in a group context with paper-and-pencil instruments. Moreover, they tend to exhibit some validity for most jobs in the economy. They also, of course, exhibit large subgroup differences. It should be noted that with unequal subgroup variances, a possibility not often examined, the differences between lower- and higher-scoring subgroups might vary as a function of the part of the test score distribution examined.
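The point about unequal variances can be made concrete with a small numerical sketch (the group means and standard deviations below are hypothetical, chosen only for illustration): when the higher-scoring group also has the larger variance, the gap between same-percentile scores is smaller in the lower tail of the distribution and larger in the upper tail.

```python
from statistics import NormalDist

# Hypothetical subgroup distributions (illustrative values only);
# the higher-scoring group A also has the larger standard deviation.
group_a = NormalDist(mu=0.0, sigma=1.0)
group_b = NormalDist(mu=-1.0, sigma=0.8)

# Gap between same-percentile scores at three points in the distribution.
for p in (0.10, 0.50, 0.90):
    gap = group_a.inv_cdf(p) - group_b.inv_cdf(p)
    print(f"{p:.0%} percentile: gap = {gap:.2f}")
# The gap grows from about 0.74 SD at the 10th percentile to 1.00 at the
# median and about 1.26 at the 90th percentile.
```

With equal subgroup variances the gap would be a constant 1.00 standard deviation at every percentile; with unequal variances, no single number fully summarizes "the" subgroup difference.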

If the job requires other capabilities, such as interpersonal or teamwork skills, for example, why are these capabilities not measured? If we did measure these alternative constructs, what would happen to the organization's ability to identify talent and to the size of the subgroup difference when information from multiple sources on multiple constructs is combined to make hiring decisions? Recently, Sackett and Wilk (1994) examined a simple instance of this case, in which one predictor with a large subgroup difference (i.e., one standard deviation) was combined with a second, equally valid, uncorrelated predictor on which subgroup scores were equivalent. The subgroup difference on a simple equally weighted composite of the two is 0.71 standard deviations. In the



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.