
7

Adaptability and Inventiveness

Committee Conclusion: The military has a strong interest in adaptive behavior, expressed in terms of assessing novel problems and solving them or acting upon them effectively. Research indicates two promising lines of inquiry. The first would use measures of frequency and quality of ideas generated in open-ended tasks, which have demonstrated incremental validity over and above measures of general cognitive ability for predicting important outcomes related to work performance. The second line of inquiry would use narrow personality constructs to predict adaptive behavior and inventive/creative problem solving. Thus, the committee concludes that idea generation measures and narrow personality measures specific to adaptability and inventiveness merit inclusion in a program of basic research with the long-term goal of improving the Army’s enlisted accession system.

BACKGROUND, DEFINITIONS, AND ISSUES

It is essential for organizations seeking to thrive and prosper in a variety of environments to have members who respond effectively to challenging and changing situations whose context may be broad (e.g., interactions with other organizations in the turbulence of international politics) or within the confines of the organization itself (e.g., dealing with coworkers on team projects under constant stress and turnover; see also the discussion in Chapter 6 of performance under stress). Ideas and alternative plans are often needed for solving difficult and challenging problems or for removing obstacles that thwart taskwork, teamwork, and mission accomplishment. Much is known, and yet much more needs to be known, about meeting

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

these needs when attempting to hire employees who will not only adapt successfully as newcomers to an organization but who will, over time, adapt to change and create change as well.

One can think of examples of exceptional problem solving in real or fictional life-threatening situations. Recall the NASA specialists who adapted materials available on the crippled, moon-bound Apollo 13 spacecraft to bring the astronauts safely back to Earth. Or remember the weekly episodes of the ABC television series MacGyver, whose ingenious title character solved problems with the everyday materials he found at hand.

Clearly, civilian organizations and the military alike seek to hire talent who can work effectively individually and in teams to solve problems that are critical to their missions. The scientific community has demonstrated without question the importance of cognitive ability, cognitive flexibility, motivation, and team coherence and coordination in solving such problems (Bell and Kozlowski, 2002; Chen et al., 2002; Ilgen et al., 2005; Salas et al., 2005). Research indicates that in many problem-solving situations where prior training or available materials are inadequate, it is often not the smartest person on the team who comes up with a solution to a problem (Mason and Watts, 2012; Woolley et al., 2010). Who are these employees and soldiers who can adapt and innovate in changing, even stressful, circumstances? What characteristics differentiate them from others?

What Is Adaptability? What Is Inventiveness?

Adaptability refers to the ability to adjust and accommodate to changing and often unpredictable physical, interpersonal, cultural, and task environments. People who are adaptable are often described as cognitively and temperamentally flexible, resilient, and hardy, actively accommodating and adjusting to uncertainty and ambiguity even under duress.

Inventiveness involves innovative thinking and the ability to produce novel ideas that are of high quality and task-appropriate,1 especially in work settings that require practical and concrete solutions often of a mechanical nature. Inventive people are often described as ingenious, creative, original, and clever. Innovative thinking can lead to outcomes that range from everyday problem solving to transformational, paradigm-changing outcomes—all of which require novel, high-quality, task-appropriate think-

_________________

1 Creativity is often defined as the ability to produce high-quality task-appropriate, novel ideas (see Sternberg, 2001). Creativity in comparison to inventiveness embraces artistic creativity, whereas inventiveness is more descriptive and useful in realistic occupations such as those found in the military.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

ing. But inventiveness is more than generating ideas; it also incorporates an action orientation focused on problem solving.

Researchers in many disciplines have examined adaptability and inventiveness. Studies focusing on learning agility (the ability to learn from experience—from successes but especially from mistakes), fluid intelligence, thinking biases, intellectual engagement, domain-specific knowledge, idea generation, personality, motivation, and interests have all contributed to our understanding of inventive, adaptive behavior. The present chapter focuses specifically on idea generation and temperament (personality) variables as indicators of adaptability and inventiveness. Both fluid intelligence (especially spatial ability; see Chapter 4) and cognitive biases also play a role in inventive, adaptive behavior, but these are dealt with separately in Chapters 2 and 3, respectively. Likewise, contextual and environmental factors also play critical roles in fostering and inhibiting adaptive, inventive behavior, but these roles are outside the scope of this report, which focuses on measuring critical individual differences.

Other Relevant Constructs

Other constructs that are related to adaptability and inventiveness have unique aspects of their own worthy of discussion. Two of them, learning agility and intellectual engagement, are compound variables, each capturing content from multiple constructs across different individual-differences domains. These compound constructs have been found to be successful in predicting performance in challenging situations that require new solutions, as described below.

Learning Agility

Learning agility can be defined as the willingness and ability to learn from the experience of both success and failure and to apply that learning later, often under stress in new or first-time conditions (De Meuse et al., 2010). Lombardo and Eichinger (2000) described learning agility as consisting of four components: people agility, results agility, mental agility, and change agility, with each component incorporating both cognitive and noncognitive elements.2 Similarly, and likely with more complexity, Koutstaal (2012) examined the agile mind from a multidisciplinary perspective (developmental psychology, social psychology, and neuropsychology), incorporating cognition, action, perception, and emotion. Others, such as DeRue and colleagues (2012a), argued for a narrower conceptualization of

_________________

2 More recently, they have added a fifth factor—self-awareness—to the definition of learning agility (De Meuse et al., 2012).

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

learning agility, defining it as “speed and flexibility of learning” (DeRue et al., 2012b, p. 318). Although they state that they “ . . . do not believe that learning agility is a purely cognitive process” (DeRue et al., 2012b, p. 319), several academics and practitioners have criticized their narrow definition, in that reducing the concept of learning agility to speed and flexibility of learning eliminates the most interesting and practical aspects of the concept as it applies in work and temporal contexts, such as motivation to learn, emotional regulation, and learning from prior failures and successes (see, for example, Carette and Anseel, 2012; De Meuse et al., 2012; Hezlett and Kuncel, 2012). The committee tends to agree with this latter point of view.

Typical Intellectual Engagement

Intelligence is often, perhaps typically, conceptualized and measured under “maximal” performance conditions, when motivation is highest. Certainly that is often true when some form of an intelligence or achievement test is administered to employees or students. Yet intelligence is surely just as important to understand (if not more important) in day-to-day work experiences when motivation to perform is not always maximal. Goff and Ackerman (1992) introduced the concept of typical intellectual engagement (TIE) as an individual-differences variable that might account for differences in the expression of intelligence in everyday life. They defined TIE as a desire to engage and understand the world, an interest in a wide variety of things, a preference for thorough understanding of a topic or problem, and a need to know.3 Some have argued that TIE is little more than what is found in other personality variables such as openness to experience (e.g., Rocklin, 1994) and need for cognition (Woo et al., 2007). Facets of the Big Five factor Openness to Experience (such as intellectual efficiency, ingenuity, and curiosity) are also likely relevant to TIE. The point here is that TIE, Openness (the Big Five factor), and need for cognition are useful constructs that (1) encompass both cognitive and temperament (personality) variables, (2) are different from traditional intelligence measures that focus on maximal motivation of the test taker, (3) are clearly relevant to adaptability and inventiveness, and (4) warrant closer examination of their relationship to each other.

Structure of the Chapter

The remainder of this chapter consists of five sections. Section 2 is devoted to issues related to understanding and measuring outcomes that the individual-differences variable adaptability/inventiveness and its facets

_________________

3 Von Stumm and colleagues (2011) refer to TIE as intellectual curiosity and “hungry mind.”

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

should predict. The usual outcome variables, such as education, training, overall job performance, and turnover, are important but inadequate. New criterion measures that are theoretically related to adaptability and inventiveness are needed. Section 3 discusses a specific cognitive ability: idea generation measured in the context of maximal motivation to perform.4 Section 4 reviews temperament (personality) measures of behavior under typical motivational circumstances. In both Sections 3 and 4, the committee reviews the evidence indicating the relevance of measures of these constructs for predicting adaptable behavior in changing, challenging situations that require flexibility and innovative problem solving. Section 5 presents the committee’s conclusion based on the research evidence, and Section 6 lists our recommendations for future research on adaptive behavior.

In short, this chapter informs readers about the importance of adaptability and inventiveness constructs in personnel selection and classification contexts. Interestingly, personnel selection and classification in the 21st century is itself a problem that requires creative and adaptable researchers and practitioners.

ADAPTABILITY/INVENTIVENESS AS AN OUTCOME VARIABLE

It is important to distinguish between adaptability and inventiveness as stable traits on which people differ and to distinguish these traits from behavioral outcomes. Pulakos and colleagues (2000), for example, examined adaptability as an outcome variable, concluding that it consists of eight components. Keeping predictors and behavioral outcomes (criteria) conceptually distinct allows one to draw empirical distinctions within testable models that ask questions such as (a) How do personality characteristics affect training success? (b) How do personality and training together predict relevant work outcomes? (c) How strongly and for how long do training and past behavior predict future behavior?

The difficulty of defining and measuring adaptive/creative outcomes makes validity studies challenging as well. For example, experts may disagree on whether an outcome is in fact creative; just because a person can generate a large number of solutions to a problem does not mean the solutions are any good, or conversely, just because a person generates only one solution to a problem does not mean the person (or that solution) is not adaptive or creative. Furthermore, creative outputs might be numerous yet also constrained and specific to a particular domain of knowledge or expertise: an elegant piece of computer code, a technological invention and its patent, a tricky multicultural military negotiation that is

_________________

4 Spatial ability is another cognitive ability that is important in understanding and predicting adaptability and inventiveness (Kell et al., 2013; also see Chapter 4 of this report).

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

handled effectively, or a soul-stirring musical performance. Furthermore, sometimes the adaptive or creative nature of human behavior is not discovered within the solution but instead is reflected in reframing the problem creatively such that it, in turn, can make resulting strategies and solutions obvious and even mundane. Despite this heterogeneity, a useful conceptual framework for characterizing individual outcomes of creativity might include the frequency/fluency and quality/usefulness of adaptive and creative behaviors and outputs, as well as the inventiveness/novelty and radicalness/surprise of the solutions produced (Simonton, 2012; West and Anderson, 1996).

Social context and social networks, including teammates or coworkers, supervisors and their leadership style, and organizational climate, influence most forms of employee or soldier adaptability and creativity (Hon et al., 2014; Zhang and Zhou, 2014). Other situational factors matter as well; more complex tasks, highly stressful or emergency situations, and socially ambiguous contexts all have strong unpredictable elements to them, and unpredictability is a theme that appears to encourage and accentuate individual differences in adaptive and creative problem solving in the workplace (Pulakos et al., 2000).

To summarize, the committee emphasizes the distinction between adaptive/creative performance and outcomes, on the one hand, and adaptability and inventiveness as the individual differences that predict such performance, on the other. We are also sensitive to how numerous contextual factors moderate empirical validity findings in this domain (Zhou and Hoever, 2014). This chapter and the entire report focus on predictors of behavioral and performance outcomes at the individual level as opposed to the team or unit level.

ADAPTABILITY/INVENTIVENESS AS AN INDIVIDUAL-DIFFERENCES COGNITIVE VARIABLE: IDEA PRODUCTION MEASURES INCREMENT VALIDITY OVER GENERAL COGNITIVE ABILITY

Carroll’s (1993) reanalysis of 467 datasets identified Idea Production (he also called it Retrieval Ability) as one of eight second-stratum factors underlying general cognitive ability.5 Idea Production usefully summarized correlations among nine first-stratum factors, including Originality/Creativity (the other eight factors were Ideational Fluency, Associational

_________________

5 In this chapter, idea generation is used interchangeably with idea production. However, idea generation is often used as a descriptor of a kind of task (e.g., ideational fluency) that measures the idea production factor.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

Fluency, Expressional Fluency, Word Fluency, Figural Fluency, Naming Facility, Sensitivity to Problems, and Figural Flexibility).

All the Idea Production factors and tests involve the production of ideas as opposed to recognition or comparison of them; the implication of this distinction is that idea production tasks require open-ended/recall responses, rather than multiple-choice/recognition responses. For example, listing things that are red, writing antonyms to a specified word, or listing ways that a brick can be used are all Idea Production tasks (see Box 7-1). Within this set of Idea Production factors, the Originality/Creativity factor tests are differentiated from other Idea Production tests in that “they require examinees fairly quickly to think of . . . a series of responses fitting

BOX 7-1
Idea Production Factor (Test Type) with Sample Items

Idea Production Factor: Sample Item (example test and typical item)

Originality/Creativity: Consequences (“what would happen if people did not have to eat?”)
Ideational Fluency: Related things (“name all the red things you can think of”)
Naming Facility: Picture name (“list names for a picture”)
Associational Fluency: Synonyms (“list synonyms of the word good”)
Expressional Fluency: Similes (“her eyes twinkled like ___.”)
Word Fluency: First and last letter (“name words that begin with g and end with t”)
Sensitivity to Problems: Improvements (“identify ways to improve the telephone”)
Figural Fluency: Sketches (“add details to simple objects to make new ones”)
Figural Flexibility: Match (“rearrange matchsticks to make new figures”)

_________________

NOTE: Various scores can be computed from the examinee’s responses.
SOURCE: Box created from research presented in Carroll (1993).

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

the requirements of the task . . . furthermore, . . . it is difficult and challenging to think of responses beyond the more obvious commonsense ones” (Carroll, 1993, p. 428). That is, Carroll suggested that creativity tests can be thought of as difficult fluency tests, ones requiring the rapid generation of appropriate responses but where the appropriate responses beyond the first few are nonobvious.

Evidence of Predictive Validity

This section describes several independent studies that show incremental validity of idea generation test scores over other cognitive ability measures in predicting significant real-world outcomes.

Studies on Creativity by the U.S. Army Research Institute for the Behavioral and Social Sciences

In one important study from the U.S. Army Research Institute for the Behavioral and Social Sciences, scores from the Consequences test were shown to be strong predictors of leadership abilities and Army officer career outcomes,6 independent of other cognitive ability predictors (Mumford et al., 1998). The authors administered a five-item version of the test to 1,819 U.S. Army officers, along with measures of verbal reasoning and leadership expertise. Officers were asked to work on five Consequences problems by first reading through the description of each situation and then listing as many significant outcomes of the situations as possible in the allotted time. They were given 12 minutes to complete all five problems.

This Consequences test demonstrated predictive validity with respect to several important outcomes, including career continuance, career progression, and performance at both the junior and senior levels. The main finding was that the Consequences test, when scored in various ways and with all the scores entered into a multiple regression equation, predicted all of the outcomes, with multiple correlations ranging from R = .22 (for critical incidents) to R = .58 (for rank). This predictive validity held up even after controlling for cognitive ability and expertise (incremental R² ranging from .06 to .22, with a median of .20). This finding of incremental validity stands out in the scheme of what we know about cognitive abilities measurement. Other than with personality measures, it is rare to find cognitively oriented measures providing much incremental validity over general cognitive ability in predicting broad, real-world outcomes (e.g., Humphreys, 1986; Ree and Earles, 1991).
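
To make the incremental-validity logic concrete, the sketch below (in Python, using simulated data) compares the R² of a baseline model containing cognitive ability and expertise with the R² of a model that also includes an idea-generation score. The variable names and effect sizes are purely illustrative assumptions and are not taken from Mumford et al. (1998).

```python
# A minimal sketch (not the study's actual analysis) of hierarchical regression
# for incremental validity: does adding an idea-generation score raise R^2
# beyond cognitive ability and expertise? All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cognitive = rng.standard_normal(n)   # e.g., verbal reasoning score
expertise = rng.standard_normal(n)   # e.g., leadership expertise measure
ideas = rng.standard_normal(n)       # e.g., Consequences (idea generation) score
outcome = 0.40 * cognitive + 0.30 * expertise + 0.35 * ideas + rng.standard_normal(n)

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit of y on the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + predictors)   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

r2_base = r_squared([cognitive, expertise], outcome)
r2_full = r_squared([cognitive, expertise, ideas], outcome)
print(f"baseline R^2 = {r2_base:.3f}; full R^2 = {r2_full:.3f}; "
      f"incremental R^2 = {r2_full - r2_base:.3f}")
```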

_________________

6 An important distinction relevant to military performance is that leadership is a behavior exhibited by officers and enlisted soldiers alike.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

Studies on Creativity by Educational Testing Service

In a series of studies, Educational Testing Service researchers (Bennett and Rock, 1995; Frederiksen and Ward, 1978) found that idea generation (also sometimes referred to as idea production) measures, specifically ones obtained from tests of formulating hypotheses, measuring constructs, evaluating proposals, and solving methodological problems, predicted graduate school outcomes beyond what could be predicted by verbal and mathematics scores on the Graduate Record Examinations (GRE). The tests were originally developed by Frederiksen and Ward (1978) based on critical incident studies (Flanagan, 1954).

The tests were given to 3,586 examinees as part of an experimental section of the GRE. Several scores were generated from the tests, including a number score, a number of unusual responses score, a number of quality responses score, and variations on these. After they had completed a year of graduate school (a year and a half after the initial test administration), students were tested again. Significant relationships were found for a number of outcomes, including various measures of professional activities such as the number of professional activities engaged in (r = .24), whether they engaged in collaborative research (r = .18), and number of publications (r = .18). N’s ranged from 525 to 650. Interestingly, there were no significant relationships found between these outcomes and GRE scores, suggesting that these idea production scores were related to and predictive of important school outcomes that the other standardized measures neither related to nor predicted. Bennett and Rock (1995) replicated these findings with a computer administration, albeit with a much smaller sample size. They also administered several additional tests, including a Topics test (suggest ideas about a train journey), a Pose-a-Question-to-a-Cardboard-Box test (from Torrance, 1974), and two pattern-meaning items (Wallach and Kogan, 1965) that required examinees to imagine what an unfinished drawing would look like if finished. The score was simply the sum of the number of responses given by the examinee across all four items. This score was highly correlated with the formulating hypotheses test score (r = .65) and weakly correlated with undergraduate grade point average (r = .27). These findings suggest that idea generation is an ability not well reflected in standardized tests of verbal and mathematical reasoning but at the same time predictive of important outcomes. (The committee notes that some of the incremental validity of the new measures over GRE scores might be attributed to range restriction on GRE scores.)
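
The committee’s range-restriction caveat can be made concrete with the standard correction for direct restriction on the predictor (Thorndike’s Case II); this is the textbook formula, offered for orientation rather than a computation reported in these studies. If r is the GRE–criterion correlation observed in the enrolled (restricted) group, s the restricted-group standard deviation of GRE scores, and S the applicant-pool standard deviation, then

\[
r_c \;=\; \frac{r\,(S/s)}{\sqrt{\,1 - r^{2} + r^{2}(S/s)^{2}\,}} .
\]

Because S/s > 1 in a selected sample, r_c exceeds r; restricted GRE validities therefore understate applicant-pool validities, which can make the increment contributed by new measures look larger than it would be in an unselected group.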

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

Idea Generation Scales: Measurement Issues

Types of Scales

As noted above, Carroll (1993) summarized the variety of measures used to assess creativity in individual-differences studies. As can be seen in Box 7-1 (above), tests involve open-ended prompts, with instructions for the examinee to produce as many responses as possible within a set period of time (typically, a minute to a few minutes).

Scoring Methods

Traditionally, there are three alternative scoring approaches for idea production tests (Mumford and Gustafson, 1988):

  1. Fluency scores (number of responses/number of alternative solutions);
  2. Flexibility scores (number of shifts or variability in response categories); or
  3. Originality scores (novelty of proposed alternatives).

Existing measures have been individually hand-scored against one or all of these criteria (or even more specific criteria, e.g., Mumford et al., 1998), sometimes with the aid of rubrics but nevertheless in a labor-intensive fashion. Developments in natural language processing technology might now enable more efficient computerized scoring, making operational use feasible. For example, machine scoring of essays routinely outperforms human scoring and is now commonplace in the testing industry (Shermis and Burstein, 2013).
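
As a rough illustration of how these three scoring approaches could be automated, the Python sketch below scores one examinee’s responses for fluency, flexibility, and originality. The responses, category codes, norm counts, and the 5 percent rarity cutoff are invented for the example; operational scoring would require validated category schemes and norms.

```python
# A sketch of fluency, flexibility, and originality scoring for an idea
# production task ("list ways a brick can be used"). All data are hypothetical.
from collections import Counter

def score_responses(responses, category_of, norm_counts, n_norm, rare_cutoff=0.05):
    """Return (fluency, flexibility, originality) for one examinee.

    responses    -- the examinee's response strings
    category_of  -- rater-assigned content category for each response
    norm_counts  -- how many examinees in a norming sample gave each response
    n_norm       -- size of the norming sample
    rare_cutoff  -- a response counts as original if fewer than this
                    proportion of the norming sample produced it
    """
    fluency = len(responses)                                # number of responses
    flexibility = len({category_of[r] for r in responses})  # number of distinct categories
    originality = sum(norm_counts.get(r, 0) / n_norm < rare_cutoff for r in responses)
    return fluency, flexibility, originality

responses = ["build a wall", "doorstop", "paperweight", "grind into red pigment"]
category_of = {"build a wall": "construction", "doorstop": "weight",
               "paperweight": "weight", "grind into red pigment": "material"}
norm_counts = Counter({"build a wall": 180, "doorstop": 95,
                       "paperweight": 60, "grind into red pigment": 3})

print(score_responses(responses, category_of, norm_counts, n_norm=200))
# -> (4, 3, 1): four responses, three categories, one rare (original) response
```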

Scoring methods: Sample studies Carroll (1993) identified fluency and originality as among the most common scoring methods in use with creativity tests. Frederiksen and Ward (1978) scored formulating hypotheses responses six ways: (a) number of hypotheses (a fluency score), (b) number of unusual (identified by fewer than 5 percent of examinees) hypotheses (an originality score), (c) number of unusual-and-high-quality hypotheses, (d) mean quality of hypotheses, (e) highest quality of any hypothesis, and (f) quality of the hypothesis marked “best.” Agreement among raters for all six of these scores was fairly high (alphas ranged from .69 to .90): Coefficient alphas (Cronbach, 1951) were computed for single items, based on categorizations by two independent scorers. However, due to the brief length of the test (four items), test reliability ranged from fairly low for some of the tests on some of the scores, such as

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

the unusual-and-high-quality score on the measuring constructs task (rxx’ = .34), to fairly high, such as the mean quality score on that same task (rxx’ = .88). Interestingly, although this was a high-ability sample (graduate school applicants), examinees averaged only about 2.5 responses for formulating hypotheses and solving methodological problems and slightly more for measuring constructs. The number of responses scored as unusual and as unusual-and-high-quality was only about one-quarter to one-third of these. Despite the low mean scores, the measures nevertheless correlated with outcomes, even after controlling for standardized test scores. It is possible to develop easier items, that is, items that yield higher mean scores. This could be done either by developing easier prompts (ones that allow more responses to be given) or by allowing longer response times. Presumably, with research, it would be fairly easy to develop prompts and response time windows appropriate for the enlisted applicant population.
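
For reference, the agreement index used here, coefficient alpha, for a score formed from k components (here, the k = 2 independent scorers’ codings of an item) with component variances \(\sigma_i^{2}\) and composite variance \(\sigma_X^{2}\) is

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right),
\]

which, with two scorers of equal variance and inter-rater correlation r, reduces to the Spearman–Brown stepped-up value 2r/(1 + r).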

The original scoring system for the Consequences test (Christensen et al., 1953; 1958) classified responses as (a) remote, (b) obvious, and (c) irrelevant, with separate scores given for remote and obvious responses. Typically, examinees generate about 4 obvious responses per item in the 2-minute time window (in Mumford et al., 1998, they were given slightly more time), and 1 or 2 remote responses (similar to the Frederiksen-Ward results for a different test). The number of remote responses was the score most likely to show high correlations with outcomes, according to the initial reports (Christensen et al., 1958). Mumford and colleagues’ more recent scoring method (1998) classified responses on several dimensions (see Table 7-1). They found that scores from the first six of these eight dimensions were highly correlated, ranging from r = .64 to .92. Note that positive and negative consequences were independent of the other scores and also did not correlate with the outcomes.

The Consequences test has consistently shown low reliability (Gleser, 1965). Dela Rosa and colleagues (1997) suggested reverting to something closer to the Christensen scoring approach, in which responses were scored as obvious, remote, duplicate, or irrelevant/unratable. Obvious responses were those that directly resulted from the situation, remote responses were those that referred to indirect results and differed from the material presented, duplicate responses were those restating an idea (or restating one of the given responses), and irrelevant/unratable responses were all others. An ideational fluency score was computed as the sum of obvious responses, and an originality score was computed as the sum of remote responses. With this scheme, Dela Rosa and colleagues (1997) were able to attain reasonable reliabilities: for three raters, rxx’ = .82 to .92 for ideational fluency, and rxx’ = .86 to .94 for originality. Milan

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

TABLE 7-1 Scoring Schemes Applied by Mumford and Colleagues (1998)

Criterion Scale Description
1. Quality 5-point-scale How coherent, meaningful, and logical are the consequences with respect to the question being asked?
 
2. Originality 5-point-scale To what degree are the consequences novel and imaginative? To what extent do they differ from the material presented or state more than what is obviously apparent from the problem? This also refers to the degree to which obvious consequences are presented with unique or unusual implications.
 
3. Time Frame 5-point-scale To what extent do the consequences focus on long-term implications as opposed to short-term or immediate concerns?
 
4. Realism 5-point-scale How realistic and pragmatic are the consequences, and would they occur in the real world?
 
5. Complexity 5-point-scale The degree to which the consequences contain multiple elements and describe the interrelations among those elements.
 
6. Use of General Principles 5-point-scale To what degree are there principles, laws, procedures, etc. underlying the consequences.
 
7. Positive Consequences (yes/no) Refers to the presence or addition of something.
 
8. Negative Consequences (yes/no) Refers to the absence or diminishment of something.

SOURCE: Mumford, M.D., M.A. Marks, M.S. Connelly, S.J. Zaccaro, and J.F. Johnson. (1998). Domain-based scoring in divergent-thinking tests: Validation evidence in an occupational sample. Creativity Research Journal, 11(2):155. Reproduced by permission of Taylor & Francis, Ltd., http://www.tandfonline.com. Scale column added by committee. Criterion numbers assigned by committee and description slightly edited.

and colleagues (2002) found that the ideational fluency and originality scores were fairly independent (r = .10).

Scoring approaches: Summary findings Box 7-2 lists five idea generation studies, identifying the test and the criteria.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

BOX 7-2
Five Important Idea Generation Studies

Bennett and Rock (1995). Test: formulating hypotheses; Criteria: GRE scores and GPA

Bennett and Rock (1998). Test: generating explanations; Criteria: GRE scores and GPA

Frederiksen and Ward (1978). Test: formulating hypotheses; Criteria: scores on several cognitive tests (e.g., verbal, quantitative)

Hoover and Feldhusen (1990). Test: formulating hypotheses; Criteria: scores on several cognitive tests (e.g., abstract reasoning, verbal, quantitative, speed)

Mumford et al. (1998). Test: consequences; Criteria: several different indicators of organizational leadership

_________________

NOTE: Based on studies found during the committee’s literature review. GPA = grade point average, GRE = Graduate Record Examinations.

ADAPTABILITY/INVENTIVENESS AS AN INDIVIDUAL-DIFFERENCES NONCOGNITIVE VARIABLE

The previous section described in depth the evidence that combining a measure of general cognitive ability with a measure of idea generation increases the accuracy of predicting important outcomes. The same is true for adding personality variables to the equation: the accuracy of predicting important outcomes increases. When specific cognitive abilities (e.g., divergent thinking and spatial ability) and personality variables (e.g., achievement motivation, dominance, and creative personality) are combined in a predictor battery, correlations for predicting innovative contributions are high (observed r = .53; corrected ρ = .58) (Hough and Dilchert, 2007).
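
Throughout this section and Table 7-3, ρ denotes a validity coefficient corrected for statistical artifacts, most commonly attenuation due to measurement unreliability; the exact corrections vary by meta-analysis, but the basic attenuation formulas are

\[
\rho \;=\; \frac{r_{xy}}{\sqrt{r_{yy}}}
\quad\text{(criterion unreliability only)}
\qquad\text{or}\qquad
\rho \;=\; \frac{r_{xy}}{\sqrt{r_{xx}\,r_{yy}}}
\quad\text{(predictor and criterion)},
\]

where \(r_{xx}\) and \(r_{yy}\) are reliabilities. For example, an observed r = .53 with a criterion reliability near .84 corresponds to ρ ≈ .58 under the criterion-only correction; this illustrates the arithmetic only and is not a statement of the corrections Hough and Dilchert (2007) actually applied.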

To draw on the literature on personality as it pertains to adaptability and inventiveness, the committee first briefly reviews the nature and structure of personality constructs. After that, the discussion turns to an examination of empirically supported relationships between personality variables and innovative contributions/outcomes. The personality variables that are

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

implicated in the sections that follow are suggestive of the complement of individual differences required to understand adaptability and inventiveness.

Structure of Personality

The Five-Factor Model7 of personality is often used to organize the myriad of variables that psychologists use to study personality. It is a hierarchical model with five broad factors: Emotional Stability, Extraversion, Conscientiousness, Agreeableness, and Openness to Experience.

There are also correlated subdimensions, often referred to as facets, associated with each of the five factors. Although there is agreement within the personality community on the existence of facets, there is less agreement on their specific identity. Different investigators tend to identify different facets. There have been several attempts to characterize commonalities across investigators. One by John and colleagues (2008), which is represented in Table 7-2, examines the overlap between facets measured by the Revised NEO Personality Inventory (NEO-PI-R; Costa and McCrae, 1992),8 by a lexical-based facets instrument (Saucier and Ostendorf, 1999), and by the California Psychological Inventory (Soto and John, 2009). It appears from this analysis that approximately two or three facets per factor overlap across the three schemes, but others are unique. The solution developed by Drasgow and colleagues (2012) represents an amalgamation or summary of several solutions. They began with the findings of Saucier and Ostendorf (1999) as their first input. They then considered International Personality Item Pool data from the Eugene-Springfield (Oregon) community sample (N = 727 adult volunteers) on seven personality inventories: the NEO-PI-R, the 16PF (Sixteen Personality Factor Questionnaire), the California Psychological Inventory, the Manchester Personality Questionnaire, the Jackson Personality Inventory, the Hogan Personality Inventory, and the Abridged Big Five-Dimensional Circumplex. They analyzed data from the various inventories, one Big Five scale at a time. Then, using exploratory factor analysis, they identified 22 stable facets that fall under the broader Big Five factors. The Big Five factors along with the 22 facets appear in Table 7-2. Their solution represents a comprehensive mapping of the facets proposed by various inventories and researchers that embrace the Five-Factor Model of personality.

Other approaches to examining the structure of personality include the

_________________

7 “Five-Factor Model” is an alternative name for the Big Five personality factor model. Both names are used interchangeably in this chapter.

8 NEO-PI-R measures neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness, as well as subordinate dimensions.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

TABLE 7-2 Defining Facets for the Big Five Factors: Four Solutions

Facet solutions compared: Lexical Facets (18; Saucier and Ostendorf, 1999); NEO-PI-R Facets (30; Costa and McCrae, 1992); CPI-Big Five Facets (16; Soto and John, 2009); TAPAS (22; Drasgow et al., 2012)

Extraversion Facets
  Lexical: Sociability, Assertiveness, Activity/Adventurousness, Unrestraint
  NEO-PI-R: Gregariousness, Assertiveness, Activity, Excitement-Seeking, Positive Emotions, Warmth
  CPI-Big Five: Gregariousness, Assertiveness/Leadership, Social Confidence vs. Anxiety
  TAPAS: Dominance, Sociability, Attention Seeking

Agreeableness Facets
  Lexical: Warmth/Affection, Modesty/Humility, Generosity, Gentleness
  NEO-PI-R: Trust, Straightforwardness, Altruism, Compliance, Modesty, Tender-Mindedness
  CPI-Big Five: Modesty vs. Narcissism, Trust vs. Suspicion, Empathy/Sympathy
  TAPAS: Cooperation, Consideration, Selflessness

Conscientiousness Facets
  Lexical: Orderliness, Industriousness, Reliability, Decisiveness
  NEO-PI-R: Order, Achievement Striving, Dutifulness, Self-Discipline, Competence, Deliberation
  CPI-Big Five: Orderliness, Industriousness, Self-Discipline
  TAPAS: Order, Achievement, Self-Control, Responsibility, Non-Delinquency, Virtue

Neuroticism Facets
  Lexical: Insecurity, Emotionality, Irritability
  NEO-PI-R: Anxiety, Anger/Hostility, Depression, Self-Consciousness, Vulnerability, Impulsiveness
  CPI-Big Five: Anxiety, Irritability, Depression, Rumination-Compulsiveness
  TAPAS: Optimism, Adjustment, Even-Tempered

Openness Facets
  Lexical: Intellect, Imagination/Creativity, Perceptiveness
  NEO-PI-R: Ideas, Aesthetics, Fantasy, Actions, Feelings, Values
  CPI-Big Five: Intellectualism, Idealism, Adventurousness
  TAPAS: Aesthetics, Intellectual Efficiency, Tolerance, Ingenuity, Depth, Curiosity

SOURCE: John, O.P., L.P. Naumann, and C.J. Soto. (2008). Paradigm shift to the integrative Big Five trait taxonomy: History, measurement, and conceptual issues. In O.P. John, R.W. Robins, and L.A. Pervin, Eds., Handbook of Personality: Theory and Research (p. 126). New York: Guilford Press. Reproduced by permission conveyed through Copyright Clearance Center.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

HEXACO model9 (Ashton and Lee, 2001; Ashton et al., 2004a, 2004b) and Hough’s nomological web clustering approach (Hough and Ones, 2001; Hough et al., 2015; Oswald et al., 2013). The HEXACO model organizes facet-level variables using a six-factor circumplex model in which relationships between personality characteristics are envisioned as a circle with two factors, one on each axis. Hough’s nomological web clustering approach organizes personality variables into clusters that demonstrate very high construct validity (including convergent and discriminant validity) based on correlational evidence between personality variables, factor and component analysis, expert judgments, criterion-related validities between personality variables and outcome variables, and indices of subgroups of people (e.g., ethnic groups, men and women). Both approaches are nonhierarchical; that is, they acknowledge the reality of complex relationships between personality variables wherein facets in one factor correlate with facets in other factors more highly than they do with facets in the factor to which they supposedly belong, a phenomenon that should not occur if the model is envisioned as hierarchical.

The point is that personality variables defined and measured more narrowly than at the broad level of the Five-Factor Model are likely to yield stronger correlations with outcome variables measuring adaptability and inventiveness. In short, facet-level personality variables, which may or may not be later combined, warrant further research.

Evidence of Validity of Personality Variables Predicting Adaptive/Innovative Outcomes

Numerous meta-analyses have used the Five-Factor Model to summarize criterion-related validities of personality variables for predicting work-related outcomes, including adaptive/innovative/creative contributions. Table 7-3 organizes meta-analytic and single-study correlational evidence of the relationships between personality variables and adaptive/innovative outcomes (criteria) using the Five-Factor Model.10

_________________

9 The HEXACO personality inventory assesses honesty-humility, emotionality, extraversion, agreeableness, conscientiousness, and openness to experience.

10 Validity studies of personality variables are sometimes criticized because researchers involved in some of the studies have financial interests in one or more of the personality measures. It is the experience of the committee members who have developed personality measures (the majority of whom do not have a financial interest in any personality measure) that the validities in Table 7-3 are representative of findings that they have observed. The committee also points out that this criticism is not typically directed at criterion-related validity studies involving cognitive abilities that were undertaken by developers of cognitive ability tests who had financial interests in those cognitive ability tests.

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

These data indicate the following:

  • Some personality variables are related to adaptive/inventive outcomes, whereas other personality variables are not. For example, for the Five-Factor Model:
  • Emotional Stability predicts some types of adaptive/inventive outcomes.
  • Some facets of Conscientiousness predict adaptive/inventive outcomes.
  • Some facets of Extraversion predict adaptive/inventive outcomes.
  • Some facets of Openness to Experience predict inventive outcomes but do not appear to predict either proactive or reactive forms of adaptive behavior.
  • Composites (compound variables) that comprise relevant personality variables predict adaptive/inventive outcomes better than any personality variable used individually.
  • In several cases, a facet-level variable is a stronger predictor of adaptive/inventive outcomes than its umbrella Big Five construct. For example:
  • The Big Five construct Extraversion does not appear to predict adaptive/inventive outcomes, but two of its facets, Dominance and Activity/Energy, do predict adaptive/inventive outcomes, whereas Sociability, another Extraversion facet, does not.
  • The Big Five construct Conscientiousness does not appear to predict adaptive/inventive outcomes. But, again, one of its facets, Achievement, does predict adaptive/inventive outcomes, whereas Deliberation/Cautiousness, another Conscientiousness facet, does not.
  • The type of job and the type of adaptive outcome moderate the relationship between personality and adaptive outcomes. In particular, adaptive performance outcomes may be proactive or reactive in nature, where proactive forms deal with people identifying a need to change the environment when it is relatively constant and reactive forms deal with people needing to adapt whenever the environment changes. Some findings are as follows:
  • Achievement (a facet of Conscientiousness) predicts proactive forms of adaptive performance for managers (ρ = .28) better than reactive forms of adaptive performance (ρ = .20).
  • Similarly, for nonmanagerial employees, Achievement (a facet of Conscientiousness) predicts proactive forms of adaptive performance (ρ = .14) better than reactive forms of adaptive performance (ρ = .11), although clearly Achievement is a better predictor of proactive forms of adaptive performance for managers than it is for nonmanagerial employees.
  • On the other hand, Emotional Stability predicts reactive forms of adaptive performance for managers (ρ = .25) better than proactive forms of adaptive performance (ρ = .15).
  • Similarly, for nonmanagerial employees, Emotional Stability predicts reactive forms of adaptive performance (ρ = .18) better than proactive forms of adaptive performance (ρ = .11).
  • In both cases, personality variables predict adaptive behavior more strongly for managers than for nonmanagers.
  • The determinants of adaptive/inventive outcomes are complex. Advances in scientific understanding of the role of personality in determining these outcomes will likely come from understanding facets and other personality constructs that are narrower than broad factors.

The majority of the studies in the meta-analyses and single studies listed in Table 7-3 are concurrent (rather than predictive) validity studies. They are indicative of the relationships between personality variables and adaptive/innovative outcomes and are thus instructive for identifying measures (especially facet-level measures) of personality characteristics that are most likely to be predictive of adaptive/innovative outcomes. The values of the better personality predictors in Table 7-3 are in the .20s and even the .30s. Given that personality variables and cognitive ability variables are only minimally correlated or uncorrelated (Ackerman and Heggestad, 1997; Judge et al., 1999; McHenry et al., 1990), the incremental validity they offer for predicting adaptive/innovative outcomes is likely to be substantial. The Tailored Adaptive Personality Assessment System (TAPAS), the personality inventory the military is using and continues to evaluate, includes facet-level measures such as Ingenuity, Curiosity, and Intellectual Efficiency (Stark et al., 2014). Table 7-3 indicates that these scales likely measure important variance relevant to adaptability and inventiveness. Their merit for predicting adaptive/inventive outcomes needs to be researched.
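
The arithmetic behind this incremental-validity argument is straightforward for (approximately) uncorrelated predictors; the numbers below are illustrative rather than estimates from any particular study. For two predictors with validities \(r_1\) and \(r_2\) and intercorrelation \(r_{12}\),

\[
R^{2} \;=\; \frac{r_1^{2} + r_2^{2} - 2\,r_1 r_2\, r_{12}}{1 - r_{12}^{2}},
\qquad\text{which reduces to } r_1^{2} + r_2^{2} \text{ when } r_{12} = 0,
\]

so a cognitive composite at r = .50 combined with an uncorrelated facet-level personality composite at r = .25 yields a multiple correlation of about \(\sqrt{.25 + .0625} \approx .56\), a meaningful gain over .50.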

Nonetheless, given the extent to which coaching and intentional distortion occur in high-stakes employment settings, validities may be lower. This issue, and advances in personality test development that address it, are discussed below.

Measurement Issues: Personality

Use of self-report personality measures to select among applicants for a desirable job or school is frequently criticized because respondents can lie about themselves on positive traits, thus improving their scores on such

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×

TABLE 7-3 Criterion-Related Validities of Big Five, Facet-Level, Compound, and Other Personality Variables

Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*
Big Five and Facets
Emotional Stability

r = –.07; k = 128; artists vs. non-artists (Feist, 1998)

r = –.05; k = 8; N = 442 (Hough, 1992)

r = .02; k = 66; creative scientists (Feist, 1998)

r = .09; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .08; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .16; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .13; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

r = .18; N ~330 (Pulakos et al., 2002)

ρ = -.03; k = 4; N = 1,332; lab (Harrison et al., 2006)

ρ = .04; k = 3; N = 448; field (Harrison et al., 2006)

ρ = .15; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .11; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .25; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .18; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Extraversion

r = .14; k = 135; creative scientists (Feist, 1998)

r = .08; k = 148; artists vs. non-artists (Feist, 1998)

ρ = .04; k = 3; N = 448; field (Harrison et al., 2006)

ρ = .03; k = 4; N = 1,332; lab (Harrison et al., 2006)

Facet: Dominance

r = .21; k = 11; N = 550 (Hough, 1992)

r = .19; k = 42; creative scientists (Feist, 1998)

r = .08; k = 42; artists vs. non-artists (Feist, 1998)

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×
Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*

Facet: Sociability

r = –.25; k = 2; N = 116 (Hough, 1992)

r = .07; k = 23; creative scientists (Feist, 1998)

r = .01; k = 35; artists vs. non-artists (Feist, 1998)

r = .04; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .01; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .01; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .00; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Negative; N = 225 (Weiss, 1981)

ρ = .05; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .01; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .02; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = -.01; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Facet: Activity/Energy

Positive (Weiss, 1981)
     
Conscientiousness

r = .07; k = 48; creative scientists (Feist, 1998)

r = –.29; k = 52; artists vs. non-artists (Feist, 1998)

ρ = .13; k = 3; N = 707; lab (Harrison et al., 2006)

ρ = .00; k = 3; N = ? (Eder and Sawyer, 2007)

ρ = –.06; k = 5; N = 946; field (Harrison et al., 2006)

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×
Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*

Facet: Dependability

r = -.07; k = 5; N = 268 (Hough, 1992)

r = .07; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .05; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .05; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .08; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Negative (Welsh, 1975)

ρ = .11; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .07; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .08; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .10; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×
Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*

Facet: Achievement

r = .14; k = 2; N = 116 (Hough, 1992)

r = .18; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .09; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .12; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .08; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

r = .31; N ~ 330 (Pulakos et al., 2002)

Positive (Amabile et al., 1994)

ρ = .28; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .14; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .20; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .11; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Facet: Deliberation/Cautiousness

Negative (Welsh, 1975)
Negative (King, 1990)
Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×
Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*

Agreeableness

r = -.29; k = 3; N = 174 (Hough, 1992)

r = -.10; k = 63; artists vs. non-artists (Feist, 1998)

r = -.03; k = 64; creative scientists (Feist, 1998)

r = .09; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .04; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .10; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .07; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

ρ = -.04; k = 3; N = 448; field (Harrison et al., 2006)

ρ = .08; k = 3; N = 707; lab (Harrison et al., 2006)

ρ = .11; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .04; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .12; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .07; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Openness to Experience

r = .21; k = 93; artists vs. non-artists (Feist, 1998)

r = .18; k = 52; creative scientists (Feist, 1998)

r = .06; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

r = .03; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

r = .04; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

r = .01; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .33; k = 3; N = 707; lab (Harrison et al., 2006)

ρ = .29; k = 4; N = 597; field (Harrison et al., 2006)

ρ = .17; k = 7; N = ? (Eder and Sawyer, 2007)

ρ = .09; k = 17; N = 1,823; managers; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .03; k = 48; N = 5,270; employees; proactive forms of adaptive performance (Huang et al., 2014)

ρ = .07; k = 18; N = 1,864; managers; reactive forms of adaptive performance (Huang et al., 2014)

ρ = .01; k = 51; N = 5,450; employees; reactive forms of adaptive performance (Huang et al., 2014)

Suggested Citation:"7 Adaptability and Inventiveness." National Research Council. 2015. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: The National Academies Press. doi: 10.17226/19017.
×
Adaptive/Innovative Outcomes
Personality Variable Observed Validity* Corrected Validity*
Compound Variables
Creative Personality

ρ = .17; k = 6; N = ? (Eder and Sawyer, 2007)

ρ = .17; k = 5; N = 1,031 (Harrison et al., 2006)

ρ = .17 business creation; k = 15; N = 4,620 (Rauch and Frese, 2007)

ρ = .27 entrepreneurial success; k = 7; N = 800 (Rauch and Frese, 2007)

Specific Compound Scales:

Creative Personality Scales1

r = .26; k = 15; N = 1,086 (Hough and Dilchert, 2007)

ρ = .37; k = 15; N = 1,086 (Hough and Dilchert, 2007)

Achievement via Independence2

r = .26; N = 1,028 (Gough, 1992)

Independence3

r = .27; N = 1,028 (Gough, 1992)

Intellectual Efficiency2

r = .25; N = 1,028 (Gough, 1992)

Flexibility2

r = .18; N = 1,028 (Gough, 1992)

Tolerance2

r = .22; N = 1,028 (Gough, 1992)

Cognitive Flexibility4

r = .17; N = 1,028 (Gough, 1992)

Complexity/Simplicity5

r = .25; N = 1,028 (Gough, 1992)

Inquiringness4

r = .18; N = 1,028 (Gough, 1992)

*The statistic r (in comparison with r2) is a direct measure of predictive efficiency (Brogden, 1946; Campbell, 1976).

1 Affective Check List (Gough, 1979; Gough and Heilbrun, 1983).

2 California Psychological Inventory (Gough, 1996).

3 Barron Independence Scale (Barron, 1953b).

4 Differential Reaction Schedule (Gough, 1962).

5 Barron Complexity/Simplicity Scale (Barron, 1953a).
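The asterisked table note appeals to Brogden's (1946) classic result that the practical value of a predictor scales with r rather than r². As a reminder of that result in standard selection-utility notation (the symbols below are the conventional ones, not drawn from this report, and the result assumes the usual linearity of the predictor-criterion regression), the expected criterion gain from top-down selection is linear in the validity coefficient:

```latex
% Brogden (1946), standard selection-utility form (notation is conventional, not the report's):
% \bar{z}_x^{(sel)} is the mean standardized predictor score of those selected,
% and \sigma_Y is the criterion standard deviation.
\[
  E\!\left[\,Y \mid \text{selected}\,\right] - E\!\left[\,Y\,\right]
  \;=\; r_{xy}\,\sigma_{Y}\,\bar{z}_{x}^{\,(\text{sel})}
\]
% Doubling r doubles the expected gain, which is why r, not r^2, indexes predictive efficiency.
```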


tests, and studies show that test takers are certainly able to improve their scores on self-report personality tests.11 The concern, then, is that if all applicants describe themselves in ways that increase their chances of getting the job or school offer, the resulting scores will have little to no variance and thus will no longer predict outcomes of interest. The issues involved in faking, intentional distortion, and coaching are complex; simplistic claims such as “faking doesn’t matter” or “faking renders personality tests useless” are unwarranted (Hough and Connelly, 2012; Hough and Johnson, 2013; Oswald and Hough, 2011).
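To make the attenuation mechanism concrete, the following is a minimal, illustrative simulation (the response model and all numerical values are assumptions chosen for the example, not results from the studies cited here): when applicants' responses pile up near the desirable end of the scale, the trait-related variance in the score shrinks relative to measurement noise, and the score's correlation with the criterion drops accordingly.

```python
# Illustrative simulation only; the coefficients below are invented for the example.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
trait = rng.normal(size=n)                        # true standing on the trait
criterion = 0.5 * trait + rng.normal(size=n)      # outcome partly driven by the trait
noise = rng.normal(size=n)                        # item-level measurement noise

honest_score = trait + 0.5 * noise                # full trait signal in the score
faked_score = 0.1 * trait + 4.5 + 0.5 * noise     # everyone claims to be near the top:
                                                  # trait signal compressed, noise unchanged

print(round(np.corrcoef(honest_score, criterion)[0, 1], 2))  # roughly .40
print(round(np.corrcoef(faked_score, criterion)[0, 1], 2))   # much lower (about .09 here)
```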

Personality test items are easy to fake when presented as statements tied to a rating-scale response format (e.g., “indicate your level of agreement with the statement ‘I work hard’ on a scale ranging from ‘strongly agree’ to ‘strongly disagree’”). An examinee simply “strongly agrees” with a socially desirable statement (and “strongly disagrees” with an undesirable one). Thus, in high-stakes uses of personality testing, forced choice is a popular alternative response format because it is more difficult to fake. The forced-choice method presents two or more statements and asks examinees to select the one that “best describes” them and/or the one that least describes them. A limitation of forced-choice methods is that they typically yield ipsative data, which means that traits are constrained to be negatively correlated on average (by approximately –1/[d – 1], where d is the number of dimensions being compared). One way to circumvent this problem would be to test a large number of dimensions, with the expectation that many of them would not be used in employment screening.
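The ipsative constraint can be seen in a short simulation (a sketch under simple assumptions, not a model of any operational instrument): when each respondent's d scale scores are forced to sum to the same constant, the average intercorrelation among the scales is pushed toward –1/(d – 1) even though the underlying traits are independent.

```python
# Illustrative: fully ipsative scoring forces the average scale intercorrelation
# toward -1/(d - 1). Sample size and number of traits are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_people, d = 5000, 6

latent = rng.normal(size=(n_people, d))             # independent latent traits
ipsative = latent.argsort(axis=1).argsort(axis=1)   # within-person ranks: each row sums to the same constant

corr = np.corrcoef(ipsative, rowvar=False)
off_diag = corr[~np.eye(d, dtype=bool)]
print(round(off_diag.mean(), 2))   # approximately -1/(d - 1)
print(round(-1 / (d - 1), 2))      # -0.2 for d = 6
```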

More recently, Item Response Theory (IRT) methods have been developed to analyze forced-choice responses in ways that yield normative rather than ipsative data, so that there are no built-in constraints on the correlations between dimensions. One of these is Stark’s multidimensional unfolding pairwise preference model (Stark, 2002; Stark et al., 2005), which assumes that (a) personality item responses can be modeled with an unfolding model (e.g., one can fail to endorse a statement either because one is too low or too high on the trait being measured) and (b) when evaluating a pair of statements presented in forced-choice format, the examinee chooses the statement that is “closer” to him or her, which is possible because each statement can be located on the same trait continuum as the examinee. This model is implemented in the Department of Defense’s Tailored Adaptive Personality Assessment System (TAPAS; Drasgow et al., 2012).
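A minimal sketch of the two ideas in this paragraph follows. The endorsement function below is a simplified ideal-point stand-in chosen only for illustration; it is not the generalized graded unfolding model used in the Stark et al. (2005) formulation or in the operational TAPAS, and the function names and numerical values are hypothetical.

```python
# Sketch: (a) ideal-point ("unfolding") endorsement, where endorsement falls off as the
# person is either below OR above the statement's location, and (b) pairwise preference
# based on which statement is "closer." Simplified stand-in, not the operational model.
import math

def endorse_prob(theta: float, delta: float, scale: float = 1.0) -> float:
    """Endorsement probability peaks when the person's trait level (theta)
    equals the statement's location (delta) and decays with distance."""
    distance = abs(theta - delta)
    return math.exp(-(distance / scale) ** 2)

def prefer_s_over_t(theta_s, delta_s, theta_t, delta_t) -> float:
    """Probability of picking statement s over statement t, conditioning on
    exactly one of the two being endorsed (the usual pairwise-preference form)."""
    p_s = endorse_prob(theta_s, delta_s)
    p_t = endorse_prob(theta_t, delta_t)
    return p_s * (1 - p_t) / (p_s * (1 - p_t) + (1 - p_s) * p_t)

# Example: a person high on trait A (theta = 1.5) and average on trait B (theta = 0.0)
# compares an A statement located at 1.0 with a B statement located at 1.0.
print(round(prefer_s_over_t(1.5, 1.0, 0.0, 1.0), 2))  # > .5: the A statement is "closer"
```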

Another IRT model for scoring forced-choice responses was developed by Brown and Maydeu-Olivares (2011, 2013). A strength of this model is that it can be applied to existing preference data, and it has been implemented in a commercial personality assessment.

_________________

11See Hough and Connelly (2012) for an in-depth review of intentional distortion on self-report personality inventories.



A recent meta-analysis by Salgado and Tauriz (2014) compared forced-choice measures scored in ways that minimize the ipsative data constraints (which they called “quasi-ipsative”) with forced-choice measures in which pairs of single statements from different traits are compared and scored normatively (which they called “normative forced choice”). After collapsing across academic and occupational criteria, and after correcting for psychometric artifacts (measure unreliability and range restriction), the quasi-ipsative forced-choice scores for conscientiousness showed a much higher average criterion-related validity across studies (ρ = .40; k = 44) than normative forced-choice scoring of the same trait (ρ = .16; k = 88). It is worth noting that the correlations for quasi-ipsative forced-choice measures are higher than those found in other personality meta-analyses, which have been driven primarily by results for single-statement, normatively scored measures. Studies involving the newer IRT models for scoring forced-choice responses were not included in the meta-analysis, but results for these approaches are likely to be closer to those for quasi-ipsative forced-choice measures than to those for normative measures. It is also important to note that these meta-analytic validity estimates are averages associated with substantial heterogeneity, which remains to be explored in additional large-sample studies comparing formats in organizational samples.
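For readers unfamiliar with these corrections, the sketch below shows one common way an observed validity coefficient is converted to a corrected (operational) validity. The particular artifacts corrected, the order of corrections, and the numerical inputs vary across meta-analyses; the values here are invented for illustration and are not figures from Salgado and Tauriz (2014).

```python
# Illustrative artifact corrections: criterion unreliability plus direct range restriction.
# Meta-analytic procedures differ in which artifacts they correct and in what order.
import math

def corrected_validity(r_obs: float, criterion_reliability: float, u_ratio: float) -> float:
    """r_obs: observed validity; criterion_reliability: reliability of the criterion;
    u_ratio: restricted SD / unrestricted SD of the predictor (direct range restriction)."""
    # Thorndike Case II correction for direct range restriction
    r_rr = (r_obs / u_ratio) / math.sqrt(1 + r_obs**2 * (1 / u_ratio**2 - 1))
    # Disattenuate for criterion unreliability
    return r_rr / math.sqrt(criterion_reliability)

# Hypothetical inputs: observed r = .20, criterion reliability = .60, u = .80
print(round(corrected_validity(0.20, 0.60, 0.80), 2))  # about .32
```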

Turning to the amount of faking on personality measures, the evidence consistently indicates that forced-choice measures reduce score inflation (e.g., Nguyen and McDaniel, 2000). Recent research with the TAPAS (a forced-choice format) comparing scores obtained in applicant (high-stakes) and incumbent (low-stakes, low motivation to inflate) settings in the military indicates that score inflation is minimal, about 0.15 standard deviations (Stark et al., 2014), a value much lower than that found with non-forced-choice formats (Viswesvaran and Ones, 1999). Future research could determine which aspects of forced choice are most useful for improving measurement quality and assess the importance of various item and scoring features, such as the ideal-point response process or the addition of personality items worded at a moderate level (Oswald and Schell, 2010).
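The 0.15 figure cited above is a standardized mean difference between applicant and incumbent score distributions. The sketch below shows the computation on simulated data; the sample sizes and the built-in inflation of 0.15 are assumptions for the example, not the Stark et al. (2014) data.

```python
# Illustrative computation of score inflation as a standardized mean difference
# (applicant mean minus incumbent mean, divided by the pooled standard deviation).
import numpy as np

def standardized_mean_difference(applicant: np.ndarray, incumbent: np.ndarray) -> float:
    pooled_var = (
        (applicant.size - 1) * applicant.var(ddof=1)
        + (incumbent.size - 1) * incumbent.var(ddof=1)
    ) / (applicant.size + incumbent.size - 2)
    return (applicant.mean() - incumbent.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
incumbent = rng.normal(loc=0.00, scale=1.0, size=2000)   # low-stakes scores
applicant = rng.normal(loc=0.15, scale=1.0, size=2000)   # slightly inflated high-stakes scores
print(round(standardized_mean_difference(applicant, incumbent), 2))  # approximately 0.15
```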

RESEARCH RECOMMENDATION

The U.S. Army Research Institute for the Behavioral and Social Sciences should support research to understand constructs and assessment methods in the domains of adaptability/inventiveness and adaptive performance, including but not limited to the following topics:

  1. Compare alternative approaches to the measurement and scoring of idea generation as a cognitive measure of adaptability/inventiveness.
  2. Use existing literature, theory, and empirical research to identify and develop narrow personality measures as candidates for predicting adaptive performance.
  3. Develop a range of measures of relevant work criteria that reflect adaptive performance in research studies.
  4. Examine the use of these personality and idea generation measures in predicting the above adaptive performance criteria.

REFERENCES

Ackerman, P.L., and E.D. Heggestad. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121(2):219–245.

Amabile, T.M., K.G. Hill, B.A. Hennessey, and E.M. Tighe. (1994). The Work Preference Inventory: Assessing intrinsic and extrinsic motivational orientations. Journal of Personality and Social Psychology, 66(5):950–967.

Ashton, M.C., and K. Lee. (2001). A theoretical basis for the major dimensions of personality. European Journal of Personality, 15(5):327–353.

Ashton, M.C., K. Lee, and L.R. Goldberg. (2004a). A hierarchical analysis of 1,710 English personality-descriptive adjectives. Journal of Personality and Social Psychology, 87(5):707–721.

Ashton, M.C., K. Lee, M. Perugini, P. Szarota, R.E. De Vries, L. Di Blas, and B. De Raad. (2004b). A six-factor structure of personality-descriptive adjectives: Solutions from psycholexical studies in seven languages. Journal of Personality and Social Psychology, 86(2):356–366.

Barron, F. (1953a). Complexity-simplicity as a personality dimension. Journal of Abnormal and Social Psychology, 48(2):163–172.

Barron, F. (1953b). Some personality correlates of independence of judgment. Journal of Personality, 21(3):287–297.

Bell, B.S., and S.W.J. Kozlowski. (2002). Goal orientation and ability: Interactive effects on self-efficacy, performance, and knowledge. Journal of Applied Psychology, 87(3):497–505.

Bennett, R.E., and D.A. Rock. (1995). Generalizability, validity, and examinee perceptions of a computer-delivered formulating hypotheses test. Journal of Educational Measurement, 32(1):19–36.

Bennett, R.E., and D.A. Rock. (1998). Examining the Validity of a Computer-Based Generating-Explanations Test in an Operational Setting (Research Report No. RR-97–18). Princeton, NJ: Educational Testing Service.

Brogden, H.E. (1946). On the interpretation of the correlation coefficient as a measure of predictive efficiency. Journal of Educational Psychology, 37(2):65–76.

Brown, A., and A. Maydeu-Olivares. (2011). Item response modeling of forced-choice questionnaires. Educational and Psychological Measurement, 71(3):460–502.

Brown, A., and A. Maydeu-Olivares. (2013). How IRT can solve problems of ipsative data in forced-choice questionnaires. Psychological Methods, 18(1):36–52.


Campbell, J.P. (1976). Psychometric theory. In M.D. Dunnette, Ed., Handbook of Industrial and Organizational Psychology (pp. 185–222). Chicago, IL: Rand McNally.

Carette, B., and F. Anseel. (2012). Epistemic motivation is what gets the learner started. Industrial and Organizational Psychology, 5(3):306–308.

Carroll, J.B. (1993). Human Cognitive Abilities: A Survey of Factor-Analytic Studies. New York: Cambridge University Press.

Chen, G., S.S. Webber, P.D. Bliese, J.E. Mathieu, S.C. Payne, D.H. Born, and S.J. Zaccaro. (2002). Simultaneous examination of the antecedents and consequences of efficacy beliefs at multiple levels of analysis. Human Performance, 15(4):381–409.

Christensen, P.R., P.R. Merrifield, and J.P. Guilford. (1953). Consequences Form A-1. Beverly Hills, CA: Sheridan Supply.

Christensen, P.R., P.R. Merrifield, and J.P. Guilford. (1958). Consequences: Manual for Administration, Scoring, and Interpretation. Beverly Hills, CA: Sheridan Supply.

Costa, P.T., Jr., and R.R. McCrae. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) Professional Manual. Odessa, FL: Psychological Assessment Resources.

Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3):297–334.

Dela Rosa, M.R., D.J. Knapp, B.D. Katz, and S.C. Payne. (1997). Scoring System Improvements to Three Leadership Predictors (Technical Report #1070). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

De Meuse, K.P., G. Dai, and G.S. Hallenbeck. (2010). Learning agility: A construct whose time has come. Consulting Psychology Journal: Practice and Research, 62(2):119–130.

De Meuse, K.P., G. Dai, V.V. Swisher, R.W. Eichinger, and M.M. Lombardo. (2012). Leadership development: Exploring, clarifying, and expanding our understanding of learning agility. Industrial and Organizational Psychology, 5(3):280–315.

DeRue, D.S., S.J. Ashford, and C.G. Myers. (2012a). Learning agility: In search of conceptual clarity and theoretical grounding (focal article). Industrial and Organizational Psychology, 5(3):258–279.

DeRue, D.S., S.J. Ashford, and C.G. Myers. (2012b). Learning agility: In search of conceptual clarity and theoretical grounding (response to commentaries). Industrial and Organizational Psychology, 5(3):316–322.

Drasgow, F., S. Stark, O.S. Chernyshenko, C.D. Nye, C.L. Hulin, and L.A. White. (2012). Development of the Tailored Adaptive Personality Assessment System (TAPAS) to Support Army Personnel Selection and Classification Decisions. Urbana, IL: Drasgow Consulting Group.

Eder, P., and J.E. Sawyer. (2007). A Meta-Analytic Examination of Employee Creativity. Poster presented at the 22nd Annual Conference for the Society for Industrial and Organizational Psychology, New York, NY.

Feist, G.J. (1998). A meta-analysis of personality in scientific and artistic creativity. Personality and Social Psychology Review, 2(4):290–309.

Flanagan, J.C. (1954). The critical incident technique. Psychological Bulletin, 51(4):327–358.

Frederiksen, N., and W.C. Ward. (1978). Measures for the study of creativity in scientific problem-solving. Applied Psychological Measurement, 2(1):1–24.

Gleser, G. (1965). Review of consequences test. In O.K. Buros, Ed., The Sixth Mental Measurements Yearbook. Lincoln, NE: Buros Institute of Mental Measurements.

Goff, M., and P.L. Ackerman. (1992). Personality-intelligence relations: Assessment of typical intellectual engagement. Journal of Educational Psychology, 84(4):537–552.

Gough, H.G. (1962). Imagination—undeveloped resource. In S.J. Parnes and H.F. Harding, Eds., A Source Book for Creative Thinking (pp. 217–226). New York: Scribner.


Gough, H.G. (1979). A creative personality scale for the Adjective Check List. Journal of Personality and Social Psychology, 37:1,398–1,405.

Gough, H.G. (1992). Assessment of creative potential in psychology and development of a creative temperament scale for the CPI. In J.C. Rosen and P. McReynolds, Eds., Advances in Psychological Assessment (pp. 225–257). New York: Springer.

Gough, H.G. (1996). Manual: The California Psychological Inventory (3rd ed.). Palo Alto, CA: Consulting Psychologists Press.

Gough, H.G., and A.B. Heilbrun, Jr. (1983). The Adjective Check List Manual. Palo Alto, CA: Consulting Psychologists Press.

Harrison, M.M., N.L. Neff, A.R. Schwall, and X. Zhao. (2006). A Meta-analytic Investigation of Individual Creativity and Innovation. Presented at the Annual Conference for the Society for Industrial and Organizational Psychology, Dallas, TX.

Hezlett, S.A., and N.R. Kuncel. (2012). Prioritizing the learning agility research agenda. Industrial and Organizational Psychology, 5(3):296–301.

Hon, A.H.Y., M. Bloom, and J.M. Crant. (2014). Overcoming resistance to change and enhancing creative performance. Journal of Management, 40(3):919–941.

Hoover, S.M., and J.F. Feldhusen. (1990). The scientific hypothesis formulation ability of gifted ninth-grade students. Journal of Educational Psychology, 82(4):838–848.

Hough, L.M. (1992). The “Big Five” personality variables-construct confusion: Description versus prediction. Human Performance, 5(1-2):139–155.

Hough, L.M., and B.S. Connelly. (2012). Personality measurement and use in industrial and organizational psychology. In K.F. Geisinger (Ed.-in-Chief) and N. Kuncel (Vol. Ed.), American Psychological Association Handbook of Testing and Assessment in Psychology: Vol. 1. Test Theory and Testing and Assessment in Industrial and Organizational Psychology (pp. 501–531). Washington, DC: American Psychological Association.

Hough, L.M., and S. Dilchert. (2007). Inventors, Innovators, and Their Leaders: Selecting for Conscientiousness Will Keep You “Inside the Box.” Paper presented at the Society for Industrial and Organizational Psychology’s 3rd Leading Edge Consortium: Enabling Innovation in Organizations, Kansas City, MO.

Hough, L.M., and J.W. Johnson. (2013). Use and importance of personality variables in work settings. In I.B. Weiner (Ed.-in-Chief) and N. Schmitt and S. Highhouse (Vol. Eds.), Handbook of Psychology: Vol. 12. Industrial and Organizational Psychology (pp. 211–243). New York: Wiley & Sons.

Hough, L.M., and D. Ones. (2001). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In N.R. Anderson, D.S. Ones, H.K. Sinangil, and C. Viswesvaran, Eds., Handbook of Industrial, Work, and Organizational Psychology (pp. 233–277). New York: Sage.

Hough, L.M., F.L. Oswald, and J. Ock. (2015). Beyond the Big Five—A paradigm shift in researching the structure and role of personality. Annual Review of Organizational Psychology and Organizational Behavior, 2.

Huang, J.L., A.M. Ryan, K.L. Zabel, and A. Palmer. (2014). Personality and adaptive performance at work: A meta-analytic investigation. Journal of Applied Psychology, 99(1):162–179.

Humphreys, L.G. (1986). Commentary. Journal of Vocational Behavior, 29(3):421–437.

Ilgen, D.R., J.R. Hollenbeck, M. Johnson, and D. Jundt. (2005). Teams in organizations: From Input-Process-Output models to IMOI models. Annual Review of Psychology, 56:517–543.

John, O.P., L.P. Naumann, and C.J. Soto. (2008). Paradigm shift to the integrative Big Five trait taxonomy: History, measurement, and conceptual issues. In O.P. John, R.W. Robins, and L.A. Pervin, Eds., Handbook of Personality: Theory and Research (pp. 114–158). New York: Guilford Press.


Judge, T.A., C.A. Higgins, C.J. Thoresen, and M.R. Barrick. (1999). The Big Five personality traits, general mental ability, and career success across the life span. Personnel Psychology, 52(3):621–652.

Kell, H.J., D. Lubinski, C.P. Benbow, and J.H. Steiger. (2013). Creativity and technical innovation: Spatial ability’s unique role. Psychological Science, 24(9):1,831–1,836.

King, N. (1990). Innovation at work: The research literature. In M.A. West and J.L. Farr, Eds., Innovation and Creativity at Work: Psychology and Organizational Strategies (pp. 15–59). Oxford, UK: Wiley & Sons.

Koutstaal, W. (2012). The Agile Mind. New York: Oxford University Press.

Lombardo, M.M., and R.W. Eichinger. (2000). High potentials as high learners. Human Resource Management, 39(4):321–330.

Mason, W., and D.J. Watts. (2012). Collaborative learning in networks. Proceedings of the National Academy of Sciences of the United States of America, 109:764–769.

McHenry, J.J., L.M. Hough, J.L. Toquam, M.A. Hanson, and S. Ashworth. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43(2):335–354.

Milan, L.M., D.M. Bourne, M.M. Zazanis, and P.T. Bartone. (2002). Measures Collected on the USMA Class of 1998 as Part of the Baseline Officer Longitudinal Data Set (BOLDS) (ARI Tech. Rep. No. 1127). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Mumford, M.D., and S.B. Gustafson. (1988). Creativity syndrome: Integration, application, and innovation. Psychological Bulletin, 103(1):27–43.

Mumford, M.D., M.A. Marks, M.S. Connelly, S.J. Zaccaro, and J.F. Johnson. (1998). Domain-based scoring of divergent-thinking tests: Validation evidence in an occupational sample. Creativity Research Journal, 11(2):151–163.

Nguyen, N.T., and M.A. McDaniel. (2000). Faking and forced-choice scales in applicant screening: A meta-analysis. Presented at the 15th Annual Meeting of the Society for Industrial and Organizational Psychology, New Orleans, LA.

Oswald, F.L., and L.M. Hough. (2011). Personality and its assessment in organizations: Theoretical and empirical developments. In S. Zedeck, Ed., APA Handbook of Industrial and Organizational Psychology: Vol. 2. Selecting and Developing Members for the Organization (pp. 153–184). Washington, DC: American Psychological Association.

Oswald, F.L., and K.L. Schell. (2010). Developing and scaling personality measures: Thurstone was right—but so far, Likert was not wrong. Industrial and Organizational Psychology, 3(4):481–484.

Oswald, F.L., L.M. Hough, and J. Ock. (2013). Theoretical and empirical structures of personality: Implications for measurement, modeling and prediction. In N. Christiansen and R. Tett, Eds., Handbook of Personality at Work (pp. 11–29). New York: Informa UK Limited/Taylor and Francis Psychology Press.

Pulakos, E.D., S. Arad, M.A. Donovan, and K.E. Plamondon. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85(4):612–624.

Pulakos, E.D., N. Schmitt, D.W. Dorsey, S. Arad, J.W. Hedge, and W.C. Borman. (2002). Predicting adaptive performance: Further tests of a model of adaptability. Human Performance, 15(4):299–323.

Rauch, A., and M. Frese. (2007). Let’s put the person back into entrepreneurship research: A meta-analysis on the relationship between business owners’ personality traits, business creation, and success. European Journal of Work and Organizational Psychology, 16(4):353–385.

Ree, M.J., and J.A. Earles. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44(2):321–332.


Rocklin, T. (1994). Relation between typical intellectual engagement and openness: Comment on Goff and Ackerman (1992). Journal of Educational Psychology, 86(1):145–149.

Salas, E., D.E. Sims, and C.S. Burke. (2005). Is there a “Big Five” in teamwork? Small Group Research, 36(5):555–599.

Salgado, J.F., and G. Tauriz. (2014). The Five-Factor Model, forced-choice personality inventories and performance: A comprehensive meta-analysis of academic and occupational studies. European Journal of Work and Organizational Psychology, 23(1):3–30.

Saucier, G., and F. Ostendorf. (1999). Hierarchical subcomponents of the Big Five personality factors: A cross-language replication. Journal of Personality and Social Psychology, 76:613–627.

Shermis, M.D., and J. Burstein, Eds. (2013). Handbook of Automated Essay Evaluation: Current Applications and New Directions. New York: Routledge.

Simonton, D.K. (2012). Taking the U.S. Patent Office criteria seriously: A quantitative three-criterion creativity definition and its implications. Creativity Research Journal, 24(2-3):97–106.

Soto, C.J., and O.P. John. (2009). Using the California Psychological Inventory to assess the Big Five personality domains: A hierarchical approach. Journal of Research in Personality, 43(1):25–38.

Stark, S. (2002). A New IRT Approach to Test Construction and Scoring Designed to Reduce the Effects of Faking in Personality Assessment. (Doctoral dissertation). University of Illinois at Urbana-Champaign. Available: http://psychology.usf.edu/faculty/sestark/ [December 2014].

Stark, S., O.S. Chernyshenko, and F. Drasgow. (2005). An IRT approach to constructing and scoring pairwise preference items involving stimuli on different dimensions: The multi-unidimensional pairwise-preference model. Applied Psychological Measurement, 29(3):184–203.

Stark, S., O.S. Chernyshenko, F. Drasgow, C.D. Nye, L.A. White, T. Heffner, and W.L. Farmer. (2014). From ABLE to TAPAS: A new generation of personality tests to support military selection and classification decisions. Military Psychology, 26(3):153–164.

Sternberg, R.J. (2001). What is the common thread of creativity? Its dialectical relation to intelligence and wisdom. American Psychologist, 56(4):360–362.

Torrance, E.P. (1974). Norms Technical Manual: Torrance Tests of Creative Thinking. Lexington, MA: Ginn.

Viswesvaran, C., and D.S. Ones. (1999). Meta-analyses of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59(2):197–210.

von Stumm, S., B. Hell, and T. Chamorro-Premuzic. (2011). The hungry mind: Intellectual curiosity is the third pillar of academic performance. Perspectives on Psychological Science, 6(6):574–588.

Wallach, M.A., and N. Kogan. (1965). Modes of Thinking in Young Children: A Study of the Creativity-Intelligence Distinction. New York: Holt, Rinehart and Winston.

Weiss, D.S. (1981). A multigroup study of personality patterns in creativity. Perceptual and Motor Skills, 52(3):735–746.

Welsh, G. (1975). Creativity and Intelligence: A Personality Approach. Chapel Hill, NC: Institute for Research in Social Science, University of North Carolina.

West, M.A., and N.R. Anderson. (1996). Innovation in top management teams. Journal of Applied Psychology, 81(6):680–693.

Woo, S.E., P.D. Harms, and N.R. Kuncel. (2007). Integrating personality and intelligence: Typical intellectual engagement and need for cognition. Personality and Individual Differences, 43(6):1,635–1,639.


Woolley, A.W., C.F. Chabris, A. Pentland, N. Hashmi, and T.W. Malone. (2010). Evidence for a collective intelligence factor in performance of human groups. Science, 330(6,004): 686–688.

Zhang, X., and J. Zhou. (2014). Empowering leadership, uncertainty avoidance, trust, and employee creativity: Interaction effects and a mediating mechanism. Organizational Behavior and Human Decision Processes, 124(2):150–164.

Zhou, J., and I.J. Hoever. (2014). Research on workplace creativity: A review and redirection. Annual Review of Organizational Psychology and Organizational Behavior, 1:333–359.
