4

Emerging Understandings of Group-Related Characteristics

In the military, as in most other modern organizations, little work is done by individuals working alone. Thus it is important to be able to assess individuals not only on the basis of their individual performance potential but also on the basis of how their characteristics might operate in a group setting. During the second day of the workshop, the third panel, Group Composition Processes and Performance, was devoted specifically to issues related to assessments that can provide insights into group performance and the effective assembly of groups. Invited presentations from Anita Williams Woolley, Scott Tannenbaum, and Leslie DeChurch included discussion of the “collective intelligence” of a team, how to assess and predict team performance, and how best to assemble teams.

COLLECTIVE INTELLIGENCE

To begin her presentation, Anita Williams Woolley, an assistant professor of organizational behavior and theory from Carnegie Mellon University, described various types of animals that exhibit “collective intelligence”—memory or problem-solving behaviors that are the product of an interaction among members of the group rather than simply a reflection of the capabilities of the individual members. Ant colonies, Woolley said, provide one of the best examples in that individual ants are simple creatures with little memory or problem-solving ability, but collectively they exhibit impressive behavior. They create complex structures, for








example, and they locate sources of food and assign collection priority according to distance from the nest.

After these introductory observations, Woolley described her research on collective intelligence in groups of people. She began her research program with two specific questions: (1) Is there evidence that groups of people have some form of collective intelligence—a product of collaboration within the group that goes beyond what the group’s members can accomplish individually? (2) If collective intelligence exists, is it something that transcends domains—that is, if a group excels in one area or on one type of task, is it likely to excel in other areas or on other tasks? Her research shows that the answer to both questions is a clear “Yes” (Woolley et al., 2010).

Collective intelligence, Woolley explained, can be thought of as a group version of the general intelligence factor, g, for individuals. The existence of g, which was originally hypothesized by Charles Spearman (1904) in the early part of the 20th century, can be inferred from the fact that people who do well on one type of task also tend to do well on other types of tasks (Deary, 2000). The idea behind g, Woolley said, is that it is a capability that is not specific to a particular domain but rather one that transcends domains. Similarly, if it could be shown that groups that do well on one type of task also tend to do well on other types of tasks, then one could infer the existence of a collective intelligence, c, associated with groups. Woolley and her colleagues initiated a research program to investigate this hypothesis. In particular, Woolley said, there were several specific questions that she sought to answer with her research:

• Is there evidence of a general collective intelligence (c) in groups?
• Can we isolate a small set of tasks that is predictive of group performance on a broader range of more complex tasks?
• Does c have predictive validity beyond the individual intelligence of group members?
• How can we use this information to build a better science of groups?

Woolley hoped it might be possible to develop tests for collective intelligence in groups that played a role similar to intelligence quotient (IQ) tests for individuals—that is, tests that sample from a relatively small number of domains but that generalize to a broader set of domains and thus could provide a relatively convenient way of predicting the likely performance of groups on a large variety of tasks.

Woolley’s first study to investigate the possible existence of a collective intelligence involved 40 groups that each spent five or more hours in her lab (Woolley et al., 2010). After a group’s members were assessed on individual intelligence and a number of other characteristics, the group worked together on a range of tasks and also, after the tasks, on a video game simulation to provide a collective measurement of performance.

The tasks were divided into four types: (1) generating tasks, which consisted of such things as brainstorming sessions and which benefited from a variety of inputs so as to devise more creative solutions; (2) choosing tasks, typical decision-making tasks in which the group needed to identify the individual who had the right answer to a question; (3) negotiating tasks, which involved trading off against competing interests to come up with a solution that best suited the group as a whole; and (4) executing tasks, which required careful coordination of inputs to accomplish goals quickly and accurately.

Each type of task required a fundamentally different approach for the group to perform at a high level, Woolley explained, so there was no obvious reason to assume that a group that was good at one task, such as brainstorming, would also be good at another task, such as figuring out which group member knew the right answer for a decision-making task. Nonetheless, analysis of the data from the 40 groups found a clear correlation between performances on different tasks, Woolley reported, such that a group that did well on one task type was more likely to do well on another.
“We also had a first principal component that accounted for 43 percent of the variance,” Woolley said, “which compares favorably to IQ tests, where the first component generally accounts for between 30 and 50 percent of the variance.” When they carried out a confirmatory factor analysis on the data, they found clear evidence that a single factor explained the relationships among the tasks better than a multifactor solution and, furthermore, that a single general factor of collective intelligence was also a strong predictor of how the groups performed later on the criterion task (Woolley et al., 2010). Most importantly, Woolley emphasized, the collective intelligence factor did a far better job of predicting performance on the video game simulation than did the IQ of the individual group members. “We modeled this in terms of the maximum IQ score, the average IQ score, et cetera,” Woolley said, “and it didn’t really add any explanatory value to our model.” Later she repeated the study with a larger sample, a broader range of group sizes, and a different criterion task, and she found very similar results: The collective group intelligence, as determined from the range of tasks, was much more predictive of performance on the criterion task than the IQ scores of the individual group members (Woolley et al., 2010).

In later studies she investigated how well collective intelligence predicted group learning (Aggarwal et al., unpublished). It is well known that individual intelligence predicts learning, she noted. “So we were interested to see whether this would be true at the group level as well.”
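The variance figure Woolley cites comes from a principal component analysis of the group-by-task score matrix: the share of variance explained by the first component is the largest eigenvalue of the task correlation matrix divided by the number of tasks. A minimal sketch of that computation, using invented scores rather than Woolley’s data:

```python
import numpy as np

# Hypothetical standardized scores for 6 groups on the 4 task types
# (generating, choosing, negotiating, executing). Invented for illustration.
scores = np.array([
    [ 1.2,  0.9,  1.1,  0.8],
    [-0.5, -0.2, -0.4, -0.6],
    [ 0.3,  0.5,  0.1,  0.4],
    [-1.0, -1.2, -0.8, -0.9],
    [ 0.7,  0.4,  0.6,  0.9],
    [-0.7, -0.4, -0.6, -0.6],
])

# Correlation matrix across the four task types (columns)
corr = np.corrcoef(scores, rowvar=False)

# Eigenvalues in descending order; for a correlation matrix they sum to the
# number of tasks, so the largest one divided by that count is the share of
# variance the first principal component explains
eigvals = np.linalg.eigvalsh(corr)[::-1]
pct_first = eigvals[0] / corr.shape[0]
print(f"First component explains {pct_first:.0%} of the variance")
```

With task scores as strongly intercorrelated as these, the first component dominates, which is the pattern that motivates positing a single c factor.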

To test this idea, Woolley assembled about 100 teams to study. After administering the collective intelligence tests to each of the teams, the teams played a number of rounds of a behavioral economics game in which the goal is to earn as much money as possible. After each round, the team received feedback about how much money it had earned. As shown in Figure 4-1, on average, the groups with low collective intelligence improved very little through 10 rounds of the game, while the groups with high collective intelligence improved greatly in terms of the amount of money they earned.

FIGURE 4-1  Collective intelligence and learning as measured in a behavioral economics game (earnings per round across 10 rounds). SOURCE: Adapted from Aggarwal et al. (unpublished).

Additionally, Woolley and her colleagues looked at teams of students in a Master’s program in business administration, who work together on projects over the course of a term (Aggarwal et al., unpublished). The students took four different exams as a team, with the individual members first taking each exam individually and then repeating the same exam with their group, without first receiving feedback on their individual performances (the dashed lines in Figure 4-2 illustrate the highest individual score obtained in each team). The teams with low collective intelligence and those with high collective intelligence scored almost equivalently on the first group exam. However, as the solid lines in Figure 4-2 illustrate, while teams with either high or low collective intelligence improved,

FIGURE 4-2  Collective intelligence (CI) in classroom teams (test score on each group exam). SOURCE: Adapted from Aggarwal et al. (unpublished).

by the fourth group exam, the teams with high collective intelligence improved significantly more and outscored the teams with low collective intelligence. Woolley noted that, when the group scores (solid lines in Figure 4-2) were compared with the highest individual score on each team (dashed lines in Figure 4-2), the teams with low collective intelligence scored no better than their best team member 50 percent of the time, while the teams with high collective intelligence consistently scored significantly better than their best team member.

Since the collective intelligence of these teams depends on more than the intelligence of their individual members, Woolley said, the question is what factors influence collective intelligence. Or, in other words, what are the best predictors of collective intelligence? “We’ve administered a variety of measures of group climate, things like group satisfaction, cohesion, or motivation, and have not found any significant relationships [with collective intelligence],” Woolley said. “We’ve administered a variety of personality measures, largely based on the Big Five [personality traits], . . . and we haven’t found consistent relationships with personality.”

However, she said, one predictor of collective intelligence that has emerged repeatedly in her studies is the proportion of females in the group. Judging from data collected from other studies, the relationship is a curvilinear one (see Figure 4-3). Groups with a low percentage of women tend to show lower collective intelligence than groups consisting solely of men, whereas groups ranging from about 20 percent female up to about 75 to 80 percent female display increasing collective intelligence. The trend reverses above 80 percent female, and collective intelligence drops slightly as the percentage approaches 100.
This trend is consistent with research done by Myaskovsky and colleagues (2005), Woolley said. What they found, Woolley explained, is that in groups with just one female, you often don’t hear much from her, whereas in groups with only one male, you actually hear a lot from the women, so the amount of communication overall in groups that are predominantly female is much greater than in groups that are predominantly male (Myaskovsky et al., 2005).

A second factor that is predictive of collective intelligence, Woolley noted, is the social perceptiveness of the group’s members. This can be measured with a simple test that asks the test taker to select one of four options that best describes the mental state of a person shown in a photograph that is cropped to show only the person’s eyes. People who are more socially perceptive are more accurate in inferring mental states from these narrow-view photographs (Baron-Cohen et al., 2001). Woolley found that groups whose members have a higher average score on social perceptiveness also tend to have higher collective intelligence. Because women

FIGURE 4-3  Relationship between collective intelligence and the percentage of females in the group. SOURCE: Adapted from Engel et al. (unpublished).

tend to score higher on social perceptiveness, having more women in a group will generally raise its average social perceptiveness score, Woolley said, which explains most, but not all, of the effects associated with the proportion of women in the group.

By studying communication in these groups, Woolley has also found that uneven distribution in speaking turns is negatively correlated with collective intelligence, so groups in which one person dominates the conversation tend to have lower collective intelligence (Woolley et al., 2010). “We found it was true even for online groups that were communicating by chat only,” she said (Engel et al., unpublished).

She has also examined the effects of cognitive diversity on collective intelligence. There are various cognitive styles, she noted. Some people are verbalizers, while others are visualizers. Among visualizers, some are better with objects, while others are better with spatial patterns. When she examined the relationship between diversity of cognitive styles and collective intelligence, Woolley found a curvilinear pattern: Collective intelligence tends to increase as the cognitive diversity of a group increases,

but only up to a point; once a group gets too cognitively diverse, its collective intelligence tends to drop. Groups that are highly diverse in cognitive styles tend to experience difficulty communicating and arriving at agreement on strategies to deal with different problems because they are fundamentally different in the way they think about and approach a task (Aggarwal and Woolley, 2013).

Looking ahead, Woolley said, she is interested in discovering what else predicts collective intelligence. She has also been working to refine her battery of tests so that it can be used in other environments to predict team performance and also so that it can be used to experiment with tools that enhance various processes that improve collective intelligence.

Discussion: Collective Intelligence in Online Groups

In the discussion period following her presentation, Woolley noted that she had recently finished a study comparing face-to-face teams with online teams and found that the pattern of relationships still held. “Surprisingly,” she said, “the proportion of women is even more influential in the online teams than in the face-to-face teams, even when they are anonymous and they don’t necessarily know that the other people are male or female.”

Gender information is not provided explicitly to online team members, but, Woolley noted, the team members assign themselves chat names, which may or may not indicate their sex. If a team member assigned herself a name that is clearly feminine, she said, then other team members could guess that she was a woman.

Committee member Randall Engle commented that Woolley’s results indicate that the collective intelligence of the group is dependent on the number of females in the group, even when the group members themselves do not know how many females are in the group.
This raises the possibility, he observed, of doing some interesting experiments in which the team members’ perceptions of their teammates could be manipulated to study, for instance, whether the belief that the team was mostly male or mostly female might make a difference to performance. Woolley responded that she has some collaborators who have been manipulating perceptions of cultural background rather than of gender, so that while everyone in the group is American, the group members are led to believe that some of their teammates are Arabs. They have found that such perceptions do indeed affect the group’s collective intelligence.

Furthermore, Woolley said, social perceptiveness is even more influential in the online teams than in the face-to-face teams. Rodney Lowman, who led the discussion on ethical implications at the end of the workshop’s first day, pointed out that while Woolley described measuring

social perceptiveness in the lab with visual tests that ask a subject to determine a person’s emotional state from looking at a picture of the person’s eyes, people have no such visual data to work with online. “That’s correct,” Woolley responded, adding that conceptually this measure of social perceptiveness is tapping into “theory of mind”: the ability to represent to oneself another person’s mental states based on subtle cues. “What it suggests,” Woolley noted, “is that the measure generalizes to other modes than simply the visual identification of emotional expression.”

PREDICTING TEAM PERFORMANCE

To a certain degree, well-qualified individuals are more likely to perform well as a team on various tasks, relative to a team of less-qualified individuals. However, as Woolley’s work on collective intelligence found, individual characteristics tell only part of the story (Woolley et al., 2010). Other factors also play a role in team effectiveness. The question, then, is what those factors are and how to predict the effectiveness of a team from the characteristics of its individual members. Scott Tannenbaum, president and cofounder of The Group for Organizational Effectiveness, Inc., discussed this issue in his presentation on team composition.

Modeling Team Composition

Tannenbaum began with a discussion of theoretical considerations. Noting that there are many different approaches to putting together an effective team, he described a simple classification system as a way of imposing some order on the variety of approaches. In his system, there are four types of team composition models (see Table 4-1).

The first and most traditional model, represented by the upper left quadrant in Table 4-1, is the individual selection model, in which individuals are assessed in various ways and then matched to a job.
“One way you could think about this,” Tannenbaum said, “is that this is about picking individuals who are most qualified to do [a specific] job.”

It is also possible to consider individual characteristics that are related to the functioning of a team—that is, characteristics that make a person a good team player. A personnel model with teamwork considerations (upper right quadrant in Table 4-1), Tannenbaum said, seeks to select people based not only on their individual competencies but also on how well they are likely to collaborate and coordinate when working as part of a team. There are many different types of team competencies, Tannenbaum explained, and some of them are generic, meaning that, regardless of what team a person is on and what sorts of tasks the person is asked to do, these competencies will help the person be a better team player. For

TABLE 4-1  Four Models of Team Composition Effectiveness

                     Individual Focus                 Team Focus
  Individual Models  Traditional Personnel—           Personnel Model with
                     Position Fit Model               Teamwork Considerations
                     Position-Specific KSAOs:         Team Generic KSAOs:
                       Cognitive Ability                Organizing Skills
                       Psychomotor Ability              Cooperativeness
                       Conscientiousness                Team Orientation
  Team Models        Relative Contribution Model     Team Profile Model
                     Relative KSAOs:                  KSAOs Distributions:
                       Weakest Member                   Average Experience
                       Highest Leader Propensity        Functional Diversity
                       Cooperativeness of Most          Team Requisite KSAOs
                         Central Person

NOTE: KSAOs = knowledge, skills, abilities, and other characteristics.
SOURCE: Reprinted with permission from Mathieu et al. (2013).

example, this might include communication skills, organizational skills, cooperativeness, and team orientation.

These first two types of models are relatively well studied and well accepted, Tannenbaum said; the remaining two types of models are where he expects many future developments. These models approach team composition from the team perspective, rather than from the individual perspective. “This is about thinking about a team member’s talent,” Tannenbaum said, “but you can’t look at it in isolation. It is relative to other people on the team.”

For example, consider a team of people working an assembly line. The person with the poorest skills in a particular area will limit the overall team performance. In other cases, the key factor in team performance might be the most positive person or the strongest person, or it might be the level of cooperativeness displayed by the person who is most central to the team’s workings. This relative contribution model, represented by the lower left quadrant in Table 4-1, assesses individual characteristics in a team framework.
The most complex model, represented by the lower right quadrant of the table, is the team profile model, which seeks to optimize the blend, synergy, and profiles of the team members. “You take a look at all these pieces simultaneously,” Tannenbaum said, and consider the team’s collection of skills. So, for example, in some instances it may not matter exactly who performs specific tasks, only that at least one person on the team has the requisite skill and that collectively the team possesses the necessary skills to fulfill the team’s mission. It is the overall team profile that matters.
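The difference between these model families can be illustrated by scoring a hypothetical team two ways: a relative contribution model keyed to the weakest member, and a team profile check that asks only whether the team collectively covers a set of requisite skills. The scoring rules, skill names, and numbers below are invented for illustration; they are not Tannenbaum’s published models.

```python
# Hypothetical skill ratings (0-100) for four team members.
member_skills = [82, 74, 91, 58]

# A relative contribution model scores the team on one member's standing
# relative to the rest; on an assembly line, the weakest member sets the pace.
weakest_member_score = min(member_skills)

# A team profile model instead looks at the collection of skills: it does not
# matter who holds each skill, only that someone on the team does.
requisite_skills = {"planning", "welding", "inspection"}
member_skill_sets = [
    {"planning", "welding"},
    {"inspection"},
    {"welding"},
    {"planning"},
]
covered = set().union(*member_skill_sets)
team_profile_ok = requisite_skills <= covered  # all requisite skills covered?

print(weakest_member_score)  # 58
print(team_profile_ok)       # True
```

The same four people thus look weak under one model and adequate under the other, which is why the choice of composition model matters.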

As an aside, Tannenbaum noted that he and his colleagues have been working to create mathematical models that describe the effectiveness of teams under each of these models. The idea, he said, is to show that it is possible to describe these team characteristics with algorithms and not simply in words.

The Team Role Experiences and Orientation Assessment

With this theoretical grounding in place, Tannenbaum described the Team Role Experiences and Orientation (TREO) assessment. “TREO is a measure of team role propensities,” he explained. “In other words, in team settings, what is someone likely to do? What do they gravitate toward?” TREO is based on the premise that by understanding people’s team-related preferences and interests—what they like to do and what they are interested in when on a team—and by learning about their past behaviors when they were on previous teams, it is possible to predict how they are likely to behave on teams in the future (Mathieu et al., unpublished). With that information in hand, it should be possible to examine the propensities of the different team members to predict the performance of that team more accurately than would be possible simply from knowing about the knowledge, skills, and abilities of the individual team members.

TREO is a 48-item self-assessment that scores people against six roles: organizer, challenger, team builder, doer, innovator, and connector. “These are just what they sound like,” Tannenbaum said. “Organizers are people who tend to structure and provide guidance and control over things. Challengers are people who are likely to speak up and question things. Innovators are folks who have new ideas to bring to the table.” The doers are “head-down people that will get the work done,” while team builders “focus on the morale and engagement of the team,” and connectors build bridges with people outside the team.
Using seven different samples, Tannenbaum and his team conducted a confirmatory factor analysis to test the validity and reliability of the TREO constructs, and the results are promising, he reported. They also compared the TREO characteristics with the usual Big Five personality traits and found that, while there are some logical correlations, the TREO characteristics generally do not overlap with the standard personality traits. Tannenbaum said that this makes sense because the Big Five are intended to measure individual personality, while TREO is looking at how people act when they are in team settings. He added, “this is back to some of that contextualization that people were talking about yesterday”—that is, one way to improve assessments is to take context into account instead of simply working with context-free assessments such as the Big Five personality traits.

findings on attitude and behavior change to aid in the commercialization of a scientific breakthrough. The individual students chose how they would assemble into teams, DeChurch explained. “They had three choices. They could allow the instructor to randomly put them on a team, they could self-organize ad hoc, . . . or they could use a tool that we built called the My Dream Team Builder,” which was essentially a recommendation system that helped students decide who they might want to be on a team with—that is, data-driven self-organization. Sixty-one of the students chose to use the tool to self-organize, 11 chose ad hoc self-organization (for example, they chose to be on a team with friends), and 23 chose to be randomly assigned. “This resulted in 6 teams in which the dominant modality was assignment, 9 that we’ll call blended assembly—they had some level of assignment and some level of self-organization—and 15 that were matched purely using this builder.”

Within each team, the investigators measured a variety of Level 1 assembly factors—gender, age, personality, intercultural sensitivity, and so on (for background on factors assessed, see Chen and Starosta, 2000, and Donnellan et al., 2006)—and one Level 3 factor: the individuals’ prior social networks. Four weeks after the teams were formed, relationships among teams were measured using sociometric surveys that captured the patterns of communication in a team, the efficacy of communication, people’s confidence in their ability to work with each specific member of the team, their trust in the others on the team, and their reliance on one another for leadership of that team.
As a comparison, DeChurch likened the My Dream Team Builder tool to Amazon’s recommender system for products or Netflix’s system for movies, she said, “only ours recommends people you might want to form a team with.” People who used it provided information about their attributes and their social networks. They also answered questions about the sorts of people they would like on their team. They could specify, for instance, which skills were more important for team members to have and which were less important. They could specify a preference for teammates with significant prior leadership experience or little experience, or they could choose to ignore prior leadership experience entirely. They could specify a preference for people they have enjoyed working with in the past, people who are friends of friends, people who are popular, people who are social brokers—that is, people who are connected with numerous groups that are not directly connected to each other—and so on. Considering all the information submitted, the tool compiles a list of recommended teammates, with profiles of each. Users could then choose to click an “Invite” option on the tool to send a pre-scripted e‑mail to the potential teammate, inviting him or her to join the team. That person in turn can respond in

a variety of ways, including “I’m already on a team, do you want to join mine?” or “Sorry, I have to decline.”

Once the teams were formed, DeChurch and her colleagues analyzed how the modalities of assembly affected the assembly mechanisms. They used exponential random graph models, which showed, for example, which people had enjoyed working together in the past and which were now on teams together. After carrying out the analysis, the researchers were able to see how the teams that formed using the builder differed from those that formed using other methods.

The analysis found that teams that had used the builder were more homogeneous in age and also more homogeneous in cultural sensitivity. That was an interesting result, DeChurch noted, “because that’s a deep-level characteristic that, if you formed organically, you wouldn’t be aware of. But it was attended to by the teams using the builder.” On the other hand, the teams using the builder were more heterogeneous by sex. And, not surprisingly, both the teams using the builder and the teams that self-organized without the builder were more likely to contain members who had previously worked together than the teams that were assigned randomly. Thus, although the work is still preliminary, DeChurch sees evidence that the modality of assembly (the four options for degree of control and information) affects assembly mechanisms (the four levels of compositional, person-task fit, relational, or ecosystem considerations) in various ways.

The Effect of Team Assembly Mechanisms on Formation of Team Relationships

In testing whether team assembly mechanisms explained the relationships that formed in the teams, DeChurch and her colleagues modeled those relationships in two different ways.
First, they modeled the dependent variable as a typical team-level variable, using sociometric surveys to capture the relationships within the team, asking every person about their relations with every other person, and computing a density score for trust, communication, efficacy, and leadership. In the second approach, they modeled the dependent variable as a dyadic relationship.

In the first approach, a regression analysis showed relationships similar to what had already been seen in the literature (for a review, see Bell, 2007). For Level 1 assembly mechanisms, the main effect was seen for mean extraversion in the group, which was associated with greater trust, communication, and efficacy in the group. There were no significant effects for percent female, mean conscientiousness, mean intercultural sensitivity, or age difference (coefficient of variation). For the Level 3 assembly mechanism (prior relationships), there was a significant effect for leadership but not for trust, communication, or efficacy.
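A density score of the kind described here is simply the proportion of possible ties that members actually report. For a directed sociometric question (“do you trust this person?”), that is the number of reported ties divided by the n(n−1) ordered pairs of distinct members. A minimal sketch, with invented data:

```python
def tie_density(ties, n_members):
    """Density of a directed sociometric network: reported ties divided by
    the n*(n-1) possible ordered pairs of distinct members."""
    possible = n_members * (n_members - 1)
    return len(ties) / possible if possible else 0.0

# Hypothetical trust ties in a 4-person team, as (rater, rated) pairs meaning
# "rater reports trusting rated." Invented for illustration.
trust_ties = {(0, 1), (0, 2), (1, 0), (2, 3), (3, 2)}
print(tie_density(trust_ties, 4))  # 5 of 12 possible ties ≈ 0.42
```

The same computation, run separately on the trust, communication, efficacy, and leadership surveys, yields one team-level score per relationship type.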

70 NEW DIRECTIONS IN ASSESSING PERFORMANCE POTENTIAL

However, when DeChurch analyzed the same variables at the dyadic level using exponential random graph models, the data revealed much more detail about the characteristics that predicted relationship formation in teams. The analysis allowed her to take into account the traits of the sender in the dyadic relationship, the traits of the receiver, aspects of their relationship, and endogenous structural controls. It also allowed her to jointly consider the unique contribution of each factor (i.e., sender characteristics, receiver characteristics, relational variables, and endogenous controls) in predicting the likelihood that a particular type of tie will form (e.g., a trust tie) while accounting for the influence of all the other factors included in the model.

This analysis, DeChurch reported, showed a number of significant effects. For example, trust ties were more likely to form when the sender—the person doing the trusting—was female. “They are also more likely to form when the receiver—the person you’re talking about, do you trust them—is either extroverted or high in conscientiousness. And they are more likely to form if there’s a prior relationship.” Similarly, leadership ties were more likely to form in teams when there was a prior relationship or when the person being rated as a leader was high in conscientiousness. People were more likely to communicate with those who were high in intercultural sensitivity, and people were more likely to feel they could have an efficacious working relationship with someone who was an extrovert.

The last question DeChurch discussed was whether the way that a team formed changed the relationships that developed within it. “It’s essentially the question of, does the dating affect the marriage in teams,” she explained.
“And it does.” She analyzed how four variables—relational efficacy density, communication density, relational efficacy centralization, and communication centralization—varied according to whether the assembly modality was ad hoc appointment (i.e., random assignment), a blend of assigned and ad hoc self-organized, completely ad hoc self-organized, or data-driven self-organized (see Figure 4-6).

DeChurch’s analysis also found that teams whose members all played a role in their organization, either by using the team builder tool or simply by choosing their friends, communicated more and were more confident in their ability to work together effectively than teams with any members who were appointed, even if three of the four team members were self-organized. Furthermore, the teams whose members all played a role in the organization talked more evenly. “So communication is not going through one person, it is going through multiple people, and they are also more balanced in their efficacy,” DeChurch said. “So they are not just confident that they can rely on one person, but everyone is confident in the ability to work with everybody else.”
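The dyadic analysis described earlier treats each ordered pair of people as one observation, with sender traits, receiver traits, and relational covariates predicting whether a tie is present. ERGMs themselves are typically fit with specialized software (e.g., the statnet packages in R); the sketch below only illustrates that dyad-level data layout, with invented people and attribute values.

```python
from itertools import permutations

# Hypothetical individual attributes (sender/receiver covariates).
people = {
    "a": {"female": 1, "extraversion": 4.0, "conscientiousness": 3.0},
    "b": {"female": 0, "extraversion": 2.5, "conscientiousness": 4.5},
    "c": {"female": 1, "extraversion": 3.5, "conscientiousness": 3.5},
}
prior = {("a", "b"), ("b", "a")}   # relational covariate: prior relationship
trust = {("a", "b"), ("c", "a")}   # observed trust ties (the outcome)

def dyad_rows(people, prior, trust):
    """One row per ordered pair: sender traits, receiver traits,
    a relational covariate, and whether the tie is present."""
    rows = []
    for sender, receiver in permutations(people, 2):
        rows.append({
            "sender_female": people[sender]["female"],
            "receiver_extraversion": people[receiver]["extraversion"],
            "receiver_conscientiousness": people[receiver]["conscientiousness"],
            "prior_relationship": int((sender, receiver) in prior),
            "tie": int((sender, receiver) in trust),
        })
    return rows

rows = dyad_rows(people, prior, trust)   # 6 ordered dyads for 3 people
```

An actual ERGM additionally models endogenous structural effects (e.g., reciprocity), which is why dyads cannot simply be treated as independent observations in a plain logistic regression.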

[FIGURE 4-6 The effect of team assembly modality on the density and centralization of team communication and efficacy networks. The figure compares average density/centralization scores across four modes of team assembly (ad hoc appointment; blended, assigned and self-organized; ad hoc self-organization only; data-driven self-organization), with reported tests of F(3,25) = 5.82, p = .045, η² = .44; F(3,25) = 4.27, p = .015, η² = .34; F(3,25) = 4.63, p = .01, η² = .36; and F(3,25) = 4.50, p = .012, η² = .35. SOURCE: DeChurch presentation.]
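The comparisons in Figure 4-6 are one-way F tests of team-level scores across the four assembly modalities. A minimal sketch of that kind of analysis, using invented communication-density scores rather than the workshop data:

```python
from scipy.stats import f_oneway

# Invented communication-density scores for teams formed under each modality.
ad_hoc_appointment = [0.42, 0.38, 0.45, 0.40]
blended = [0.55, 0.50, 0.48, 0.53]
self_organized = [0.66, 0.61, 0.70, 0.64]
data_driven = [0.68, 0.72, 0.65, 0.70]

# One-way ANOVA: does mean density differ across the four modalities?
f_stat, p_value = f_oneway(ad_hoc_appointment, blended, self_organized, data_driven)
print(f"F(3, 12) = {f_stat:.2f}, p = {p_value:.4f}")
```

With 16 teams in four groups the test has 3 and 12 degrees of freedom; an effect-size measure such as eta squared would be computed from the same sums of squares.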

Thus, DeChurch summarized, the study offers several intriguing indications of how the team assembly process can affect the team’s ultimate performance. “We’re seeing evidence that the modalities of how teams assemble are fundamentally changing the information that they are attending to,” she said. For example, data-driven self-assembly, as done with the team builder program, allows people to consider deep-level characteristics when choosing team members. “Intercultural sensitivity is something you couldn’t know about. So this opens up a lot of interesting possibilities about information that can be considered in advance to make a team more effective that previously would have been unavailable without some sort of infrastructure.”

These new possibilities, in turn, suggest three new directions in team staffing and assembly. First, she said, programmatic research on team assembly mechanisms is needed that considers all four of the levels rather than just one. Second, programmatic research examining the consequences of team assembly modalities is needed. “We need to think systematically about how tinkering with the formation of teams comes to impact the nature of the relationships that develop within the teams.” Finally, it will be important to take relational analyses seriously. “In organizational behavior we all love to say, ‘We have individuals who are nested within teams who are nested within organizations,’ but that is not really what teams look like,” DeChurch said. “We have individuals, and we have dyads that form patterns of relationships which constitute team-level phenomena.” In other words, according to DeChurch, the team-level phenomena emerge from the dyadic relationships, and studies that are more relational in nature may be better suited to detecting the effects of team assembly on team performance than studies that aggregate everything.
DISCUSSION

The roundtable discussion after the presentations from the panelists on Group Composition Processes and Performance touched on a number of issues related to the understanding and measurement of group-related characteristics.

Military Applications

Invited presenter Paul Sackett (presentation summarized in Chapter 5) started one line of discussion by asking about group-relevant information that might be collected pre-accession. At that point in the military recruitment process, he noted, there is no information about what exactly an individual might be doing or what team he or she might be joining. Later,

when jobs are assigned and teams are formed, Sackett said, it will clearly be valuable to have access to information that offers insights into how an individual will perform on a team. So, what information should be collected at the pre-entry stage, he asked, before anything is known about which teams the individual might join?

Gerald Goodwin, of the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI), modified and expanded Sackett’s question. “There essentially are three types of decisions that we are considering this information to make,” he said. “One is the selection decision: Do you get into the military? One is the classification decision: What occupation or job can you have? And the third is the assignment decision: What unit do you get assigned to, where do you go?” Thus, different types of information will be needed at different times. “What information would you want to get pre-accessioning—which is where we collect most of this information,” Goodwin asked, “and what information would you want to get somewhere else, and when and where would you want to get it?”

Woolley replied that social perceptiveness and communication skills are two things that could be measured in the pre-accession phase to increase the likelihood that recruits will perform better on teams. Tannenbaum added that most group-related characteristics appear to be more relevant to assignment than selection. That comment led to a discussion on when these assessments should be made, and Tannenbaum noted that it might make sense to do the assessments in the pre-accession phase, since they do not take too long to complete and would then be available for future assignment decisions. That would only work, however, if the traits were stable over time. Because recruits are often about 18 years old and entering a new phase of their lives, the stability of some of these traits may be an issue.
For example, Tannenbaum continued, he does not have the necessary data to say with certainty that TREO scores would be stable. If the evidence indicates that they are not always stable, then it would be necessary to carry out the tests later than pre-accession and closer to the time that soldiers are assigned to their teams.

Following up on that point, Sackett said, “Let’s assume [the attribute being measured is] stable, or let’s assume we’re making an assignment decision shortly enough after accession that you’re not worrying about instability. Is there any reason per se to have the information pre-accession, or is it simply needed pre-assignment?” Tannenbaum replied that the main reason to do it pre-accession would be because there is already a testing mechanism in place, and the group-related measures could be jointly administered. On the other hand, he said, given the large numbers of applicants who never enlist or who do not make it through basic training, it might make sense to hold off, if there is some

later point at which it is convenient to assess soldiers’ group-related characteristics.

DeChurch then offered a reason to administer the tests during pre-accession testing. Because the research indicates that some combinations of team members may be more effective than others, it may make sense to try to have a good distribution of recruits in terms of the different traits they bring to teams. Otherwise, once the soldiers are assigned to teams, it may turn out that there are not enough of some types of team members and too many of others—not enough innovators, say, or too many doers—and some teams will end up with a less than optimum collection of members. So, she said, it might be useful to have a better idea of team factors that should be considered during the selection process.

The assignment phase may benefit the most from assessments of group-related characteristics, Tannenbaum suggested. For example, TREO could prove to be quite useful in assisting assignment decisions, as it allows for multiple variables to be maintained and manipulated simultaneously when considering potential team combinations. DeChurch added that, when assembling groups, she considers it important to know about previous relationships among the potential team members. So she sees value in taking into account the Level 3 (relational) and Level 4 (ecosystem) compositional factors from her model. This would probably not be useful for soldiers during their first assignments, she added, since they will be unlikely to have worked with any of their team members before joining the Army, but it will be an important consideration in later assignments.
Woolley reinforced Tannenbaum’s comments by saying that, while the individuals’ skills, abilities, and interests are important to take into account when assigning people to a team, what the team members do in combination will be as important, if not more important, than their individual capabilities. “All of the research presented [at the workshop] strongly suggests that there is a combination of capabilities that comes together and influences how the unit performs and that needs to be taken into account in making these assignments,” Woolley said. Testing to inform team selection does not have to be administered during the initial screening of recruits, she added, but it will likely be beneficial to conduct in the early stages of a soldier’s career.

Adaptability

Paul Gade, a research professor at George Washington University and previously on the ARI staff, introduced the issue of adaptability by describing a study of surgical teams in a shock trauma center (Klein et al., 2006). The study found that the individuals on the team changed their

roles, who was in charge, and how the team functioned, depending on what the situation was and who was the best qualified to make decisions in that particular situation. Could such adaptability be included in assessments, he asked, and would it be an individual measure of adaptability or would it be a team measure that was either in place of or in addition to individual adaptability?

DeChurch answered that it is important to think about where team adaptability resides, and she suggested that it can be found more in the processes and the states of teams, rather than in their performance. For example, do the team members understand where the different types of knowledge and skills on the team lie? Knowing that could help predict how well the team members could adapt if they were given a different task or if something in the environment changed. “I think understanding adaptability is really understanding the nature of the interactions in teams and the collective properties,” she said.

Building on DeChurch’s comment, Woolley noted that her own data linking collective intelligence with learning suggest that collective intelligence probably plays a large role in adaptability as well. “There are various definitions of learning in the literature,” she said, “but almost all of them include some level of adaptation when you’re talking about it at the group level. So I would say that the same principles that enhance collective intelligence in groups probably also enhance adaptability.”

Tannenbaum added that there are probably some individual correlates to adaptability as well. Openness to learning and other personality traits in team members are likely to have at least a small relationship with the adaptability of the team. Still, he said, team adaptability is probably related more to the mix of who happens to be on the team, as well as the culture of the team and the surrounding organization.
For example, is it acceptable for someone on the team to step up and say, “I’m not the designated team leader, but I’m going to speak up here”? This is an important factor in the medical world, he said. “The extent to which the attending surgeon sets the stage before an operation has way more to do with whether people are likely to speak up and say, ‘Sorry this is the wrong leg you’re about to operate on’ than [with] individual personality variables. So I think the intervention point there is probably more at the team level.”

Tannenbaum added that a related concept is team resilience, which he characterized as referring to how teams respond when they find themselves in difficult situations—and whether they do it in a way that maintains resources and team functionality in addition to just getting through it. “What’s interesting about it,” he said, “is that at the team level it’s different than at the individual level.” For instance, a team could be composed of members who are all very individually resilient—they are not likely to succumb to post-traumatic stress disorder—but they may not be

great team members, and the team itself does not end up being resilient. So this is another team variable that may be beneficial to consider at the collective level, he concluded.

The Issue of Sufficient Data

Committee member Patrick Kyllonen observed that a fundamental challenge to doing research on teams is collecting sufficient data. It is already difficult to obtain sufficient data to facilitate research on individuals, he said, and the history of intelligence research has had a number of “false alarms” that resulted from sample sizes that were too small. Research on teams requires even more data, Kyllonen continued, in part due to the fact that each team has multiple individuals, but there are other reasons as well. As an example, he noted that DeChurch’s claim that research on teams should involve Level 3 and Level 4 observations means that one will need data on individuals’ prior relationships with each other and on individuals’ prior team memberships.

Kyllonen asked what strategies are available for gathering the large amounts of data that will be necessary for this sort of research. “To get to these higher levels, it’s going to require not tens or hundreds, but it’s going to require [data on] thousands of people to get anything at all generalizable,” he said.

Tannenbaum acknowledged that this is a serious problem. “As the unit of analysis goes up, it’s more difficult to gather data.” It is harder to gather data at the team level than at the individual level and harder to gather data at the organizational level than at the team level, he said. If the payoffs seem great enough, then it might make sense to carry out “unobtrusive measurements” on Army teams that have already been assembled and are operational, specifically to gather data to use in analyses.
But the field is still in its infancy, he suggested, and “at some point there may be some breakthroughs that occur that allow data to be gathered more readily from existing teams that we could use in future research.”

DeChurch offered two additional suggestions to facilitate data gathering on teams. One was to use longitudinal research. “I think we have to get beyond the variances between teams and look more meaningfully at modeling the variance within a team over time.” Gathering data over time may make it possible to access much more explanatory power than is possible with static measurements. Her second suggestion was to build a community infrastructure and use it to link and share databases. As has been done in other areas of science, rules could be instituted that a paper would not be accepted for publication unless the researchers submitted their data to the central repository so that other researchers could have access to it.

REFERENCES

Aggarwal, I., and A.W. Woolley. (2013). Do you see what I see? The effect of members’ cognitive styles on team processes and errors in task execution. Organizational Behavior and Human Decision Processes, 122(1):92-99.

Aggarwal, I., A.W. Woolley, C.F. Chabris, and T.W. Malone. (unpublished). Learning How to Coordinate: The Moderating Role of Cognitive Diversity on the Relationship Between Collective Intelligence and Team Learning. Carnegie Mellon University Working Paper. Abstract available: http://works.bepress.com/anita_woolley/1 [July 2013].

Baron-Cohen, S., S. Wheelwright, J. Hill, Y. Raste, and I. Plumb. (2001). The “Reading the Mind in the Eyes Test” revised version: A study with normal adults, and adults with Asperger syndrome or high-functioning autism. Journal of Child Psychology and Psychiatry, 42(2):241-251.

Bell, S.T. (2007). Deep-level composition variables as predictors of team performance: A meta-analysis. Journal of Applied Psychology, 92(3):595-615.

Bell, S.T., A.J. Villado, M.A. Lukasik, L. Belau, and A.L. Briggs. (2011). Getting specific about demographic diversity variable and team performance relationships: A meta-analysis. Journal of Management, 37(3):709-743.

Chen, G.M., and W.J. Starosta. (2000). Intercultural sensitivity. In L.A. Samovar and R.E. Porter (Eds.), Intercultural Communication: A Reader (pp. 406-413). Belmont, CA: Wadsworth.

Deary, I.J. (2000). Looking Down on Human Intelligence: From Psychometrics to the Brain. New York: Oxford University Press.

Donnellan, M.B., F.L. Oswald, B.M. Baird, and R.E. Lucas. (2006). The mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18(2):192-203.

Engel, D., A.W. Woolley, L.X. Jing, C.F. Chabris, and T.W. Malone. (unpublished). Reading the Mind in the Eyes Predicts Collective Intelligence, Even Without Seeing Eyes.
Available: https://mitsloan.mit.edu/about/detail.php?in_spseqno=51955 [July 2013].

Guimera, R., B. Uzzi, J. Spiro, and L.A.N. Amaral. (2005). Team assembly mechanisms determine collaboration network structure and team performance. Science, 308(5722):697-702.

Keegan, B., D. Gergle, and N. Contractor. (2012). Do editors or articles drive collaboration? Multilevel statistical network analysis of Wikipedia coauthorship. In S. Poltrack and C. Simone (Eds.), Proceedings of the 2012 ACM Conference on Computer-Supported Cooperative Work (CSCW) (pp. 427-436). New York: Association for Computing Machinery. Available: http://dl.acm.org/citation.cfm?doid=2145204.2145271 [July 2013].

Klein, K.J., C.J. Ziegert, A.P. Knight, and Y. Xiao. (2006). Dynamic delegation: Shared, hierarchical and deindividualized leadership in extreme action teams. Administrative Science Quarterly, 50(4):590-621.

Lungeanu, A., N.S. Contractor, D. Carter, and L.A. DeChurch. (2013). A hypergraph approach to understanding the assembly of scientific research teams. In D. Carter (Chair), Teams on the Hyper-Edge: Using Hypergraph Network Methodology to Understand Teams. Symposium conducted at the Interdisciplinary Network for Group Research Conference, July, Atlanta, GA. Presentation information available: https://www.conftool.net/ingroup2013/index.php?page=browseSessions&form_session=3&CTSID_INGROUP2013=RHaHuEd1CuQISf6V3MS6t85TnJ1 [July 2013].

Mathieu, J.E., S.I. Tannenbaum, J.S. Donsbach, and G.M. Alliger. (2013). Achieving optimal team composition for success. In E. Salas, S.I. Tannenbaum, D. Cohen, and G. Latham (Eds.), Developing and Enhancing Teamwork in Organizations: Evidence-based Best Practices and Guidelines (pp. 520-551). San Francisco, CA: Jossey-Bass.

Mathieu, J.E., S.I. Tannenbaum, M.R. Kukenburger, J.S. Donsbach, and G.M. Alliger. (unpublished). Team Role Experiences and Orientations: The Development of a Measure of Tests of Nomological Network Relations. Available: http://admin.business.uconn.edu/PortalVBVS/DesktopModules/Staff/staff.aspx?&uid=jmathieu [July 2013].

Myaskovsky, L., E. Unikel, and M.A. Dew. (2005). Effects of gender diversity on performance and interpersonal behavior in small work groups. Sex Roles, 52(9):645-657.

Spearman, C. (1904). “General intelligence” objectively determined and measured. American Journal of Psychology, 15(2):201-292.

Stewart, G.L. (2006). A meta-analytic review of relationships between team design features and team performance. Journal of Management, 32(1):29-55.

Tannenbaum, S.I., J.S. Donsbach, G.M. Alliger, J.E. Mathieu, K.A. Metcalf, and G.F. Goodwin. (2010). Forming Effective Teams: Testing the Team Composition System (TCS) Algorithms and Decision Aid. Paper presented at the 27th Annual Army Science Conference, Orlando, FL. Available: http://www.groupoe.com/groupoe-company/team/97-scott-tannenbaum.html [July 2013].

Woolley, A.W., C.F. Chabris, A. Pentland, N. Hashmi, and T.W. Malone. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004):686-688.