Over time, researchers and assessment experts have suggested a large number of ways to improve assessments. To help organize and make sense of the multitude of suggestions, Paul Sackett, a distinguished professor of psychology and liberal arts at the University of Minnesota in Minneapolis, offered a taxonomy for thinking about ways to improve the quality of selection systems. In his presentation, he also offered a number of specific examples of approaches to improving assessments.
In particular, Sackett drew from a review article he published with a colleague in 2008, which proposed four categories of methods to improve selection systems: (1) identify new predictor constructs, (2) measure existing predictor constructs better, (3) develop a better understanding of the criterion domain, and (4) improve the specification and estimation of predictor-criterion relationships (Sackett and Lievens, 2008). A number of the presentations in the workshop, he noted, had already focused on new predictor constructs—for example, working memory capacity, inhibitory control, and personal agency—so Sackett focused his presentation on the other three categories.
Improving Measurements of Existing Predictor Constructs
Beyond identifying new predictor constructs, a second way to improve assessments is to measure existing predictor constructs better. “We need to think systematically about that,” Sackett said. To encourage the workshop participants to think about how one might begin to improve the measurement of existing predictor constructs, he offered three specific examples: contextualized personality items, narrower dimensions of personality measures, and the use of real-time faking warnings.
Contextualized Personality Items
To begin, Sackett addressed personality assessments. Many personality inventories simply do not provide context, he said. The questions are overly generalized, such as, “Agree or disagree: I like keeping busy.”
But a great deal of work in the area of industrial and organizational psychology, Sackett noted, indicates that adding context can greatly improve the predictive ability of assessments (Shaffer and Postlethwaite, 2012). “Just add two words,” he said. “At the end of that item, add ‘at work’: ‘I like keeping busy at work.’ We’re not talking fancy contextualization to a specific job, but very, very generic contextualizations.”
Sackett described the results of a meta-analysis that examined the effect of adding context to assessment items. In particular, the analysis