The quality of analyses depends on the intellect, talents, and skills of the analysts and the efficacy of their training and career development.
The intelligence community (IC) operates in an increasingly complex, turbulent, and fast-changing threat environment (see Fingar, 2011). This environment has important implications for recruiting, training, organizing, retaining, and managing the IC workforce. In stable environments, organizations can rely on stable work practices overseen by a relatively rigid administration that directs a generally hierarchical and compliant workforce. In turbulent environments, organizations require innovative work practices, flexible administration, and a creative, inventive workforce, given rein to find new approaches and novel solutions.
This chapter considers the findings on workforce issues from the behavioral and social sciences. Specifically, it looks at the discipline of strategic human resource management, whose focus is determining the best ways to create and manage a workforce that meets an organization’s needs. Building an IC workforce that is well suited for the challenges of the 21st century will require two broad efforts: (1) recruiting and selecting analysts—and other specialists—with the abilities, skills, and temperaments needed for success in this new environment; and (2) building the capabilities of that workforce by enhancing continuous learning, motivation, and collaboration (see Crook et al., 2011, for an analysis of the relationship between human capital and organization performance).
RECRUITMENT AND SELECTION
The IC employs approximately 20,000 analysts with a wide range of talents and expertise, and it has begun to define the array of competencies that analysts will need throughout their careers (e.g., Director of National Intelligence, 2008). However, these definitions rely on "face validity," or intuitive appeal, rather than on evidence-based evaluation.
Strategic human resource management offers an objective, scientific approach to developing the best possible workforce. It is grounded in the findings that individuals differ on a wide range of psychological characteristics—such as cognitive ability, personality, and values—that predict corresponding differences in educational achievement, job performance, and career success. Some of these characteristics are relatively stable, such as cognitive ability, personality, and values, while others are more malleable, such as job knowledge, job-specific skills, attitudes, and motivational characteristics.1 Stable characteristics can influence the malleable ones. For example, it is well established that individuals with relatively higher cognitive ability gain more from experience and training than those with relatively lower cognitive ability (e.g., Judge et al., 2010; Schmidt and Hunter, 2004).
To assemble and develop individuals with the optimal collection of characteristics, the IC needs to pay attention to recruiting and selecting the right people, as well as to their training, motivation, and support. Both efforts will be important, but recruitment and selection comes first: the quality of the human resources pool assembled in this initial step facilitates or constrains an organization's subsequent ability to build and develop its workforce. A failure to maximize the talent pool at this step cannot be rectified by subsequent efforts.
Psychological research has identified a wide range of characteristics that differ from individual to individual and can help to identify people with the greatest potential to become successful analysts. It is important to note that the optimal qualities for IC analysts may turn out to be quite different from the current criteria. For example, current practices may undervalue raw cognitive ability (a stable characteristic) and overvalue historical or political area knowledge (a malleable characteristic). Furthermore, the IC may need to shift from proxy measures, such as having a college degree, to direct measures of cognitive ability, as there is generally substantial variation in the cognitive abilities of college degree holders, even those from the same institution. Direct measures with strong psychometric validation are readily available. Ignoring them would forfeit the opportunity to ensure the highest quality pool of human resources for the IC's needs.
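The "psychometric validation" mentioned above has a concrete quantitative core. A minimal sketch, using entirely synthetic, hypothetical data: criterion-related validity is typically estimated as the correlation between applicants' scores on a selection measure and their later job performance.

```python
# Illustrative sketch with synthetic data: criterion-related validity of a
# selection measure, estimated as the Pearson correlation between scores
# at hiring and later job-performance ratings.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical cohort: cognitive-ability test scores at hiring, and
# supervisor performance ratings one year later (invented numbers).
test_scores = [115, 98, 122, 104, 130, 92, 110, 126]
performance = [3.8, 2.9, 4.1, 3.2, 4.5, 2.7, 3.5, 4.0]

validity = pearson_r(test_scores, performance)
print(f"estimated criterion validity: r = {validity:.2f}")
```

A validated measure is one for which correlations of this kind have been estimated on large samples, with corrections for range restriction and rating unreliability; this sketch shows only the basic quantity involved.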
The IC should also design its recruitment strategies to reach the best possible pool of potential recruits, from which the best candidates can then be selected. Finding the best candidates will require overcoming the common tendency of organizations to rely on “traditional” sources or pathways by which potential recruits become part of the applicant pool. These practices can create insufficiently diverse and talented applicant pools. There are well-developed recruitment methods (e.g., Rynes and Cable, 2003) to help ensure that applicants to the IC offer a wide range of the capabilities needed for intelligence analysis.
Given the difficulty and importance of its mission, the IC needs to use methods that have been evaluated and proven to be effective. For example, it is very common for an unstructured interview to be one component or even the only component of a selection system. But unstructured interviews are known to suffer from significant problems (see Huffcutt and Arthur, 1994; Kozlowski, 2011; McDaniel et al., 1994). One such problem is that interviewers select candidates they like personally, who tend to be people who are similar to themselves. In the current fast-changing intelligence environment, that tendency could be costly in terms of the diversity of people and skills needed.
In addition to improving its recruitment and selection practices, the IC needs to provide its workforce with the training, management, and organization needed to maximize its potential. Recruitment and selection establish the quality of the human resources pool, but the knowledge, skills, abilities, and other characteristics of potential new analysts apply to a wide range of jobs and organizational settings, which makes those analysts attractive to other potential employers as well. It is the specialized training that analysts receive from the IC that develops the unique skills necessary to be successful analysts. The IC needs to make optimal use of its information advantages, its knowledge of what national security decision makers need from the IC, and its ability to tap into expertise both inside and outside the federal government. To do so successfully, it needs to create knowledge and skills in its workforce that are specific to the organizational mission and that provide an advantage in innovation and agility (i.e., to ensure U.S. intelligence is superior to that of its adversaries) (Barney and Wright, 1998; Crook et al., 2011; Ployhart, 2006).
In contrast to recruitment and selection, the process of workforce development will unfold over a long period of time and should evolve as scientific knowledge and IC experience dictate. A new approach to workforce development requires not a one-time fix but rather a basic shift in managerial practices to enhance the value and effectiveness of the IC workforce. The rest of this section discusses three key elements of development: continuous learning, motivation and assessment, and collaboration.
Continuous Learning
As demands on analysts shift, their performance will be largely determined by the extent to which the IC embraces and values continuous learning and training in the face of the normal pressures to give higher priority to the demands of the moment. Training should not be viewed as an impediment to "getting work done," nor should it be provided only to entry-level personnel. Instead, it must be seen as a career-long commitment, as much a part of the job as preparing analyses or providing guidance to intelligence collectors.
A good starting place would be the creation of a common curriculum of essential analytic tradecraft skills taught in joint settings, such as Analysis 101, offered by the Office of the Director of National Intelligence (ODNI), and its newer companion, Analysis 101 for managers, which trains managers to support the use of new analytic methods. Neither of these should be viewed as taking time away from analysts' "real work." These two starting elements would be important steps toward enhancing skills and overcoming organizational and cultural barriers to collaboration. But more needs to be done to develop a culture of continuous learning. All such efforts at improving performance should receive publication-quality evaluation, even if the results of those studies will never be shared outside the IC.
One key element of training is a proper range of procedures and contents. Chapter 3 identifies basic behavioral and social science knowledge that would be helpful to analysts. In addition to this basic skills training, there is a science of training effectiveness that has well-developed methods for identifying training needs, designing instructional methods, and evaluating their validity (for a review, see Kozlowski, 2011).
Analysts would also benefit from hands-on experience on a wide range of problems to improve their analytic tradecraft skills. Analysts today face several obstacles to improving their judgments (for a review, see Fingar, 2011). First, feedback regarding the accuracy and value of assessments is often limited, which makes learning from experience difficult and ambiguous. It has been extensively documented that effective feedback is necessary for learning (e.g., Kluger and DeNisi, 1996; for a review, see McClelland, 2011). Second, many critical analytic problems have a relatively low base rate (i.e., they are encountered only rarely), which creates great pressure to analyze them right the first time. Analysts should not be forced to “feel their way” through major challenges.
One way to address both issues is to use simulations to create synthetic experiences, providing exposure to infrequently encountered events along with timely, precise feedback. Simulations also make it possible to role play in situations in which “failure” can be used for reflection, learning, and innovation, rather than being a source of blame. As noted in Chapter 3, research shows that more learning generally accrues from failure than from success. The science of simulation-based training is well established and in widespread use by the aviation industry, the military, the medical community, the National Aeronautics and Space Administration, and other organizations that face analogous challenges (e.g., Bell et al., 2008; Cannon-Bowers and Bowers, 2010; Salas et al., 2011).
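The "timely, precise feedback" that simulations can provide has a standard quantitative form for probabilistic judgments. As a minimal, hedged sketch with invented numbers: the Brier score, the mean squared difference between a stated probability and the actual 0/1 outcome, is one common way to score forecasts (0 is perfect, 1 is worst for a single binary event).

```python
# Illustrative sketch: scoring probabilistic judgments with the Brier
# score -- the mean squared error between stated probabilities and
# actual binary outcomes. Lower is better.

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical exercise: an analyst's probability estimates for five
# simulated scenarios, and whether each event occurred (1) or not (0).
estimates = [0.9, 0.7, 0.2, 0.6, 0.1]
outcomes  = [1,   1,   0,   0,   0]

print(f"Brier score: {brier_score(estimates, outcomes):.3f}")  # prints 0.102
```

In a simulation setting, scores like this can be reported immediately after each scenario, giving the kind of unambiguous feedback that real-world analytic work rarely supplies.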
The culture of training should also include informal practices that provide opportunities for learning on the job (e.g., socialization, mentoring, cross training, job rotation, and career progression paths), stronger institutional supports (e.g., time, money, and encouragement) for continuing education both inside and outside the organization, and the removal of institutional barriers that inhibit continuous learning.
Several features of the IC environment require substantive and systematic training. Analysts must communicate with others both up and down the information chain, and they must collaborate with others who have different information and different types of expertise. At the same time, the IC environment can change swiftly. This situation suggests the need for various types of cross training, from acting in a different role during a training simulation to serving in different organizations and even on different types of assignments. Experiencing other people’s jobs and situations from their perspectives can help analysts better know how to communicate with people in those roles, and the expertise that one develops from experiencing a variety of situations leads to greater flexibility and insight when dealing with a new situation.
Everyone should be involved in training. Those who “know the most” should teach what they know. Teaching is itself a learning experience: in trying to explain things to other people, a teacher must first see those things from the student’s point of view, which can lead to a questioning of one’s own knowledge and assumptions.2
The IC does not now embody strong self-reflective norms in its teaching and mentoring programs. Rather, the programs seem to emphasize the transfer and preservation of institutional knowledge and practices. Although that is certainly an important task (especially given the expected
high turnover because of the IC’s younger, more fluid analytic workforce), it is also important for those doing the teaching to receive feedback from the new generation of analysts and for senior managers to create conditions that foster and capitalize on the skills and backgrounds of both current and future analysts.
Promoting continuous learning requires serious investments in training and development. It also requires the systematic elimination of practices that inhibit continuous learning, such as insufficient time, resources, and incentives. If the IC is to develop a scientific approach to continuous learning, its senior and middle managers need to be committed to the concept and communicate that commitment in their goals and programs.
Motivation and Assessment
Once valuable, unique, and difficult-to-replicate capabilities have been developed in a workforce, management practices can help shape employee attention, provide motivation, reward effectiveness, and encourage continuous improvement. Some widely practiced strategies, such as annual evaluations, often produce inaccurate feedback (Murphy and Cleveland, 1995), and it is difficult to ensure the factors being rated are aligned with organizational goals (for further discussion see Kerr, 1995; Lawler and Rhode, 1976). In contrast, research shows the value of ensuring that supervisors have appropriate training in continuous performance management, including skill at setting goals, providing frequent feedback, coaching, and development (e.g., Aguinis, 2007; Smither, 2011).
One important factor in motivation is that the most effective reward system differs for different types of employees. With extrinsically motivated employees—that is, employees who are motivated mostly by external factors, such as pay, recognition, and advancement—it is important to link compensation to desired performance (see Bartol and Locke, 2000). With employees who are intrinsically motivated by their work—that is, those for whom internal satisfaction is more important than external recognition—the key is to signal the inherent value of their work.
In the IC, intrinsic motivation is quite common, with analysts and other staff motivated by such things as the opportunity to save lives, serve their country, or solve important and interesting puzzles. For such workers, the most important approach is often to let them use and expand their skills with as few obstacles as possible. Fortunately, although extrinsic rewards are always constrained and limited in supply, intrinsic rewards are not.
Several structured rating methods for assessing performance effectiveness could be adapted for use by the IC (for further discussion of performance appraisals, see Murphy and Cleveland, 1991, 1995). It has long been known that, if assessments fail to address organizational needs, employees
will focus on only those things that they see as being rewarded, to the detriment of other goals (Lawler and Rhode, 1976), or they will seek workplaces that better fit their needs. For the IC, a particular challenge will be to design performance evaluations that take into account both individual actions and teamwork.
Collaboration
When organizations face complex problems in changing and uncertain environments, teamwork and collaboration are essential. Team-based work systems locate decision making closer to the source of problems (in the IC, this refers to analytic decisions), capitalize on diverse perspectives and expertise, and encourage innovation and agility3 (Ilgen et al., 2005; Kozlowski et al., 1999). Team-based intelligence analysis can bring more information to bear on the analytic task and allow teams to dampen errors (for a review, see Hastie, 2011). If done correctly, expanding the information search and generating more alternatives have been shown to improve the effectiveness of forecasting (e.g., Dailey and Mumford, 2006). Creating communities of practice can allow like-minded analysts and others to focus on common interests, pool their knowledge, generate more alternatives for problem solving, and disseminate innovations.
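One statistical reason teams can dampen errors is worth making concrete. In a minimal simulation sketch (with invented noise parameters, not IC data), averaging several independent, unbiased estimates yields a smaller expected error than relying on any single estimator:

```python
# Illustrative simulation: averaging independent, unbiased but noisy
# estimates reduces expected error relative to a single estimate.
import random

random.seed(42)
TRUE_VALUE = 100.0   # the unknown quantity being estimated (hypothetical)

def individual_estimate():
    # Each hypothetical analyst is unbiased but noisy (sd = 15, assumed).
    return random.gauss(TRUE_VALUE, 15.0)

trials = 2000
solo_err = 0.0
team_err = 0.0
for _ in range(trials):
    estimates = [individual_estimate() for _ in range(5)]
    solo_err += abs(estimates[0] - TRUE_VALUE)        # one analyst alone
    team_err += abs(sum(estimates) / 5 - TRUE_VALUE)  # five-analyst average

print(f"mean error, single analyst:    {solo_err / trials:.1f}")
print(f"mean error, 5-analyst average: {team_err / trials:.1f}")
```

The benefit in this sketch depends on the errors being independent; when team members share the same information and assumptions, their errors correlate and the dampening effect shrinks, which is one reason diverse perspectives matter.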
The issue of collaboration is explored more thoroughly in the next chapter, but it is worth noting here that increasing teamwork and collaboration—in appropriate ways—can improve analysts’ performance. For example, differentiation of expertise—that is, distributing knowledge and abilities among numerous individuals, each with specific areas of expertise—promotes greater exploration and innovation (Argote, 2011; Jansen et al., 2006). In order to harness this potential, the IC has to integrate diverse expertise (Miller et al., 2007; Fang et al., 2010).
A positive example of the IC’s growing focus on teamwork and collaboration is the Analytic Resources Catalog, which allows intelligence officers to locate other IC personnel with particular knowledge and skills. Another recent development, A-Space, is aimed at helping the development of collaborative, self-organizing networks of analytic activity. In particular, A-Space allows analysts to query the community, share information and perspectives, and collaborate on solving problems.
Electronic communities of practice initially grow and improve more slowly than other knowledge management tools, such as knowledge portals and team rooms, but over the long term they demonstrate more continual
improvement (Kane and Alavi, 2007). A-Space allows the formation of flexible networks for linking disparate expertise, allowing a self-organizing, exploratory, and agile integration of skills and knowledge that makes it possible to take advantage of the IC’s differentiation. A-Space is conceptualized as an ongoing “experiment,” so it is particularly important that A-Space be studied and improved over time and that the results be disseminated as a way of helping the IC embrace high-quality teamwork in a continuous learning environment.
As discussed in Chapter 2, objective evaluation is key to organizational learning. Given the importance of the IC’s missions and the complexity of the issues it deals with, the IC’s adoption and application of outcome-based evaluation has the potential to significantly improve its various workforce-related practices, from recruitment and hiring to training and incentives.
Applying scientific methods to understanding its own work environment will allow the IC to make appropriate changes to recruitment, selection, and training. One key topic is identifying the specific factors that affect analysts’ ability to learn and be successful in their job, such as cognitive ability, personality, education, and training. These can then be sought when recruiting and selecting employees. It is important to note, however, that the IC should carry out its evaluations in a way that focuses on systemic learning rather than on the assessments of individual analysts. Evaluation should be seen as a positive factor for the community—a process that will enable the entire IC to become more effective—rather than as a punitive process to be dreaded and, if possible, avoided. Effective evaluation, reflection, and continuous improvement are the underpinnings of organizational learning, innovation, and agility.
The IC, like any organization or group, must decide how to balance process evaluation (how well correct procedures are followed) and outcome evaluation (the quality of the analyses). This balancing act is especially difficult in environments, like that of the IC, in which there appears to be little scientific foundation for the best practices embodied in many current procedures, however sincerely and conscientiously they are applied (for a review, see Tetlock and Mellers, 2011).
It is notoriously difficult to devise either process or outcome evaluation procedures that do not create perverse incentives (Kerr, 1975; Wilson, 1989). In the case of intelligence analysis, a natural risk is emphasizing the number of work products, which is easily assessed, over their quality, which is very difficult to assess. Indeed, poorly designed evaluation processes can undermine morale and productivity by encouraging extrinsically motivated employees to game the system and discouraging intrinsically motivated employees, who just want to do interesting work and be treated fairly. Conversely, well-designed evaluations look separately at the performance of the organization, given its methods and structures, and at the performance of individuals, given the opportunities and constraints that the organization presents them.
REFERENCES
Aguinis, H. (2007). Performance Management. Upper Saddle River, NJ: Pearson Prentice Hall.
Argote, L. (2011). Organizational learning and knowledge management. In S.W.J. Kozlowski, ed., The Oxford Handbook of Industrial and Organizational Psychology. New York: Oxford University Press.
Barney, J.B., and P.W. Wright. (1998). On becoming a strategic partner: The role of human resources in gaining competitive advantage. Human Resource Management, 37(1), 31-46.
Bartol, K.M., and E.A. Locke. (2000). Incentives and motivation. In S.L. Rynes and B. Gerhart, eds., Compensation in Organizations: Current Research and Practice (pp. 104-147). San Francisco, CA: Jossey-Bass.
Bell, B.S., A.M. Kanar, and S.W.J. Kozlowski. (2008). Current issues and future directions in simulation-based training in North America. International Journal of Human Resource Management, 19(8), 1,416-1,434.
Cannon-Bowers, J., and C. Bowers. (2010). Synthetic learning environments: On developing a science of simulation, games, and virtual worlds for training. In S.W.J. Kozlowski and E. Salas, eds., Learning, Training, and Development in Organizations (pp. 229-260). New York: Routledge Academic.
Crook, T.R., S.Y. Todd, J.G. Combs, D.J. Woehr, and D.J. Ketchen. (2011). Does human capital matter? A meta-analysis of the relationship between human capital and firm performance. Journal of Applied Psychology.
Dailey, L., and M. Mumford. (2006). Evaluative aspects of creative thought: Errors in appraising the implications of new ideas. Creativity Research Journal, 18(3), 367-384.
Director of National Intelligence. (2008). Intelligence Community Directive (ICD) 610: Competency Directories for the Intelligence Community Workforce. July 16, 2008. Available: http://www.dni.gov/electronic_reading_room/ICD_610.pdf [June 2010].
Fang, C., J. Lee, and M.A. Schilling. (2010). Balancing exploration and exploitation through structural design: The isolation of subgroups and organizational learning. Organization Science, 21(3), 625-642.
Fingar, T. (2011). Analysis in the U.S. intelligence community: Missions, masters, and methods. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Hastie, R. (2011). Group processes in intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Huffcutt, A.I., and W. Arthur, Jr. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79(2), 184-190.
Ilgen, D.R., J.R. Hollenbeck, M. Johnson, and D. Jundt. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517-543.
Jansen, J.L.P., F.A.J. Van Den Bosch, and H.W. Volberda. (2006). Exploratory innovation, exploitative innovation, and performance: Effects of organizational antecedents and environmental moderators. Management Science, 52(11), 1,661-1,674.
Judge, T.A., R. Klinger, and L. Simon. (2010). Time is on my side: Time, general mental ability, human capital, and extrinsic career success. Journal of Applied Psychology, 95(1), 97-102.
Kane, G.C., and M. Alavi. (2007). Information technology and organizational learning: an investigation of exploration and exploitation processes. Organization Science, 18(5), 796-812.
Kerr, S. (1975). On the folly of rewarding A, while hoping for B. Academy of Management Journal, 18(4), 769-783. Republished in 1995 in Academy of Management Executive, 9(1), 7-14.
Kluger, A.N., and A. DeNisi. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.
Kozlowski, S.W.J., S.M. Gully, E.R. Nason, and E.M. Smith. (1999). Developing adaptive teams: A theory of compilation and performance across levels and time. In D.R. Ilgen and E.D. Pulakos, eds., The Changing Nature of Work Performance: Implications for Staffing, Personnel Actions, and Development (pp. 240-292). San Francisco, CA: Jossey-Bass.
Kozlowski, S. (2011). Human resources and human capital: Acquiring, building, and sustaining an effective workforce. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Lawler, E.E., and J.G. Rhode. (1976). Information and Control in Organizations. Pacific Palisades, CA: Goodyear Publishing.
Lewis, M.M. (2003). Moneyball: The Art of Winning an Unfair Game. New York: W.W. Norton.
Marsh, P. (1995). Training trainers. Technical and Skills Training, 6, 10-13.
McClelland, G. (2011). Use of signal detection theory as a tool for enhancing performance and evaluating tradecraft in intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
McDaniel, M.A., D.L. Whetzel, F.L. Schmidt, and S.D. Maurer. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79(4), 599-616.
Miller, D.J., M.J. Fern, and L.B. Cardinal. (2007). The use of knowledge for technological innovation within diversified firms. Academy of Management Journal, 50(2), 308-326.
Murphy, K.R., and J.N. Cleveland. (1991). Performance Appraisal: An Organizational Perspective. Boston, MA: Allyn and Bacon.
Murphy, K.R., and J.N. Cleveland. (1995). Understanding Performance Appraisal: Social, Organizational, and Goal-Based Perspectives. Thousand Oaks, CA: Sage.
Office of the Director of National Intelligence. (2009). National Intelligence Strategy of the United States of America. Available: http://www.dni.gov/reports/2009_NIS.pdf [August 2010].
Ployhart, R.E. (2006). Staffing in the 21st century: New challenges and strategic opportunities. Journal of Management, 32(6), 868-897.
Rynes, S.L., and D.M. Cable. (2003). Recruitment research in the twenty-first century. In W.C. Borman and D.R. Ilgen, eds., Handbook of Psychology: Industrial and Organizational Psychology, 12, 55-76. New York: John Wiley and Sons.
Salas, E., S.J. Weaver, and M.S. Porter. (2011). Learning, training, and development in organizations. In S.W.J. Kozlowski, ed., Oxford Handbook of Industrial and Organizational Psychology. New York: Oxford University Press.
Schmidt, F.L., and J. Hunter. (2004). General mental ability in the world of work: Occupational attainment and job performance: General intelligence, objectively defined and measured. Journal of Personality and Social Psychology, 86(1), 162-173.
Smither, J.W. (2011). Performance management. In S.W.J. Kozlowski, ed., The Oxford Handbook of Industrial and Organizational Psychology. New York: Oxford University Press.
Tetlock, P.E., and B. Mellers. (2011). Structuring accountability systems in organizations: Key trade-offs and critical unknowns. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Wilson, J.Q. (1989). Bureaucracy: What Government Agencies Do and Why. New York: Basic Books.