Learning and Evaluation
Continuous learning and sustainable improvement require systematic procedures to assess performance and provide feedback.
The success of the intelligence community (IC) depends on achieving sound judgments, drawing on the best available analytical methods, workforce, internal collaboration, and external communication. Chapters 3-6 describe the science relevant to each task. Chapter 2 sets the stage by briefly describing the challenges, viewed from the perspective of the continuous learning that the IC needs to fulfill its missions.
Learning from experience is central to the IC, which must stay ahead of adversaries committed to exploiting weaknesses in U.S. national security. The IC’s commitment is seen in its after-action lessons-learned procedures and in the voluminous literature produced by former senior intelligence officers, former directors of the Central Intelligence Agency (CIA), government commissions, and others (Berkowitz and Goodman, 1991, 2002; Betts, 2007; Diamond, 2008; Firth and Noren, 1998; Gates, 1996; George and Bruce, 2008; Godson et al., 1995; Helms and Hood, 2004; Jervis, 2010; Keegan, 2003; Kirkpatrick, 1968; Lowenthal, 2009; MacEachin, 1994, 1996, 2002; Matthias, 2007; May and Zelikow, 2007; Paseman, 2005; Prados, 1982; Tenet, 2007; S. Turner, 1991, 2005; M. Turner, 2006; Wright, 1987; Zegart, 1999, among many others).
A continuing thread in these studies has been the natural barriers to learning faced by the IC. One of those barriers is the IC’s secrecy needs, which can limit its ability to examine its performance when doing so risks revealing its sources and uncertainties. A second barrier is the reactive nature of the IC’s work, whose predictions can change the world, becoming self-fulfilling or self-defeating prophecies. A third barrier is being judged in the court of public opinion, where the details of its work may be unknown or even deliberately distorted.
Two examples illustrate the difficulty of extracting a clear signal regarding the IC’s performance. One is that the apparent U.S. surprise at the end of the Cold War obscured the fact that the IC had consistently given strategic warnings about the Soviet Union’s instability during the final year of U.S.-Soviet competition and had correctly predicted the coup attempt against Mikhail Gorbachev that took place in August 1991 (MacEachin, 1996). A second example is the U-2 spy plane shot down by the Soviet Union on May 1, 1960: the IC is often blamed for the risks it took in having the U-2 fly over Soviet territory, but it is not given credit for the information gained by using the U-2’s unique technical capabilities.
The IC has taken notable steps toward improving its ability to learn from its experiences. The National Defense Intelligence College began granting a master’s degree in strategic intelligence in 1980. The CIA’s Sherman Kent School for Intelligence Analysis opened in 2000, followed 2 years later by CIA University. The CIA’s Center for the Study of Intelligence produces reflective reports based on declassified data, which are published in Studies in Intelligence. The Defense Intelligence Journal is another source of new analytical and scientific techniques. The IC has also supported numerous publications reviewing the contribution to learning of structured analytical techniques, such as analysis of competing hypotheses, team A/B work, and alternative futures (Heuer and Pherson, 2010; U.S. Government, 2009). The success of these efforts depends on how well they accommodate the strengths and weaknesses of the human judgment needed to accomplish this learning.
The conditions for learning are well understood. Central to them is prompt, unambiguous feedback, accompanied by proper incentives. The threats to learning are also well understood: they arise at the levels of individuals, teams, and organizations. A brief reprise of these threats will set the stage for the solutions proposed in the succeeding chapters of this report. A fuller exposition of the research can be found in the companion volume.
Analysis is an exercise in judgment under conditions of uncertainty. The failings of those judgments are well documented (Ariely, 2008; Gilovich et al., 2002; Kahneman et al., 1982). The research also identifies theoretically founded ways to improve judgment (Phillips et al., 2004; Weiss and Shanteau, 2003). For example, Slovic and Fischhoff (1977) and Arkes et al. (1988) found that hindsight bias could be reduced by requiring individuals with outcome knowledge to explain how they would have predicted other outcomes. Arkes et al. (1988) implemented this simple “debiasing” procedure with physicians, who were required to list reasons that each of several possible diagnoses might have been correct before assessing the probabilities that they believed they would have given in foresight. This procedure is similar to “what if?” analysis (U.S. Government, 2009), in which analysts reason backward to identify events critical to the assumed outcome. However, as a learning tool or a reevaluation of current analysis, Arkes’ debiasing procedure would require analysts to consider alternatives to a known (or assumed) outcome and identify events or data that would support alternative assessments.
Effective debiasing procedures build on the strengths of individuals’ normal ways of thinking, while avoiding known weaknesses. Thus, people are naturally good at recruiting reasons that support favored explanations, but they can produce contrary reasons if required to do so. Arkes’ procedure does just that, allowing physicians to take better advantage of what they know. Milkman and colleagues (2009) offer a recent summary of debiasing research; Fischhoff (1982) offers an earlier one, reaching similar conclusions. Creating conditions that counter hindsight bias is one way to improve intelligence analysis, by helping analysts to make better use of the evidence at hand and to recognize its limits.
Similar patterns are found in another judgment task central to intelligence analysis: assessing the confidence to place in analyses. Appropriate confidence, or calibration, was a key topic in Heuer’s (1999) The Psychology of Intelligence Analysis, with much additional research having emerged since then (see Chapter 3). That research shows the central role of feedback in learning. Without orderly feedback, a common outcome is overconfidence, with experts and laypeople expressing greater confidence than their knowledge warrants (e.g., Dawson et al., 1993; Tetlock, 2006). However, some experts are well calibrated, including meteorologists, expert bridge players, and professional horserace handicappers (Arkes, 2001). What these experts have in common is receiving large quantities of high-quality feedback on their judgments.
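The calibration described above can be checked directly whenever probability judgments are scored against outcomes. The sketch below is purely illustrative: the data, the bin width, and the function names are hypothetical, not a procedure drawn from the cited studies.

```python
from collections import defaultdict

def calibration_table(judgments):
    """Group (confidence, outcome) pairs into confidence bins and compare
    stated confidence with the observed proportion correct in each bin."""
    bins = defaultdict(list)
    for confidence, correct in judgments:
        bins[round(confidence, 1)].append(correct)
    # Each entry maps a confidence level to (hit rate, number of judgments).
    return {level: (sum(v) / len(v), len(v)) for level, v in sorted(bins.items())}

def overconfidence(judgments):
    """Mean stated confidence minus overall proportion correct;
    a positive value indicates overconfidence."""
    mean_conf = sum(c for c, _ in judgments) / len(judgments)
    hit_rate = sum(o for _, o in judgments) / len(judgments)
    return mean_conf - hit_rate

# Hypothetical record: (stated probability, 1 if the judgment proved correct).
judgments = [(0.9, 1), (0.9, 0), (0.9, 1), (0.7, 1), (0.7, 0), (0.5, 1)]
print(calibration_table(judgments)[0.9])    # hit rate and count at 90% confidence
print(round(overconfidence(judgments), 3))  # prints 0.1: mildly overconfident
```

Well-calibrated experts, such as the meteorologists and handicappers cited above, would show hit rates close to each stated confidence level; large, regular batches of such feedback are what make that calibration learnable.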
Building on this research, elements of the Canadian intelligence community have implemented a strategy to provide their analysts with feedback about the quality of their judgments. In a presentation before the committee, David Mandel of Defence Research and Development Canada described how simple training resulted in significant improvement in analysts’ performance, with minimal disruption to normal work flows (Mandel, 2009; see Murphy and Daan, 1984, for another successful example).
Kahneman and Lovallo (1993) offer another approach to improving judgment by restructuring tasks. They contrast the “inside view” with the “outside view” for analyzing the probability that missions will succeed. The former considers all aspects of the mission from the perspective of the people performing it, including plausible obstacles and future scenarios. The latter view ignores all the specifics, while considering just the success rate of similar missions in the past. The outside view generally produces superior predictions (Buehler et al., 1994). Kahneman and Tversky (1979) offer a related procedure for integrating inside and outside views. Spellman (2011) reviews the literature more generally. Chapter 3 treats these issues in greater detail.
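One simple way to operationalize the outside view is to shrink a case-specific estimate toward the base rate of comparable past cases. The toy sketch below is our own illustration, not a method from the cited papers; the blending weight is a judgment call, and the numbers are hypothetical.

```python
def blended_forecast(inside_estimate, outside_base_rate, weight_outside=0.7):
    """Blend a case-specific (inside) estimate with the historical base rate
    (outside view). Research on the planning fallacy suggests leaning toward
    the outside view, hence the default weight above 0.5."""
    return weight_outside * outside_base_rate + (1 - weight_outside) * inside_estimate

# Hypothetical numbers: planners judge an 80% chance of on-time completion,
# but only 30% of comparable past projects finished on time.
print(round(blended_forecast(0.80, 0.30), 2))  # prints 0.45
```

Setting `weight_outside=1.0` recovers the pure outside view, which Buehler et al. (1994) found generally outperforms the inside view on its own.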
One recurrent element in successful debiasing procedures is helping individuals to organize their thought processes without losing the intuition and judgment that their tasks require. For example, the contrast between inside and outside views helps users to combine case-specific information (inside) with base-rate information (outside). This fundamental perspective is often nonintuitive or unknown, like other rules of thought that are not part of most curricula. Analysts need familiarity with such fundamental analytical perspectives if they are to understand the rationale of debiasing procedures. Such familiarity is valuable, even without mastery. Basic familiarity will not provide analysts with enough skill to fully apply the methods in complex situations. However, an important element of analytical judgment is recognizing situations in which additional analyses or methods are needed, going beyond the limits of intuitive judgment. Therefore, analysts who know about a variety of analytical methods can appropriately ask for the services of experts in the most relevant ones. The benefits of understanding when and how to seek out expert assistance far outweigh any minimal risk that a familiar, although nonexpert, analyst might attempt to apply the method beyond his or her understanding.
Chapter 3 describes the set of analytical methods that, in the committee’s judgment, all analysts should understand. The committee’s companion volume (National Research Council, 2011), as a resource for individual analysts and training courses, provides discussion of these methods at the level necessary to benefit analytic work. For each method, mere familiarity will protect analysts from errors in judgment, while opening the door to fuller applications. Two examples will suggest the learning that is possible only with knowledge of analytical methods.
The first example is from game theory, which predicts choices in strategic situations, where each actor’s best choice depends on what other actors are expected to do. Individuals familiar with its basic rationale can avoid naïve projections from one side’s plans (adopting, in effect, a purely inside view). In the IC, fuller applications have revealed nonintuitive aspects of economic sanctions (Smith, 1995) and terrorist threats to peace processes (Bueno de Mesquita, 2005; Kydd and Walter, 2002).
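The logic of mutual best responses can be made concrete with a short sketch. The payoff numbers below are hypothetical and are not drawn from the cited studies; the function simply enumerates the pure-strategy Nash equilibria of a two-player game given as a payoff table.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoffs[(row, col)] = (row player's payoff, column player's payoff)."""
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        # r is a best response if no other row does better against c ...
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        # ... and c is a best response if no other column does better against r.
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Hypothetical sanctions game; the payoffs are illustrative only.
game = {
    ("sanction", "comply"): (3, 2),
    ("sanction", "defy"):   (1, 1),
    ("relent",   "comply"): (2, 3),
    ("relent",   "defy"):   (0, 4),
}
print(pure_nash_equilibria(game))  # [('sanction', 'comply')]
```

The point of such an analysis is exactly the one made above: the predicted outcome depends on both sides’ incentives, so projecting from one side’s plans alone can mislead.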
The second example is from signal detection theory. Signal detection theory makes the fundamental, and often nonintuitive, distinction between what people know and what they reveal in their behavior. The former depends on their discrimination ability, the latter on their incentives for avoiding particular kinds of error (e.g., appearing to cry wolf). Just knowing about these distinctions can limit naïve interpretations, such as giving too much credit to investment advisers who predicted a market crash, without knowing how many times they had erroneously predicted crashes. Fuller analyses have been essential to assessing the validity of polygraph testing (National Research Council, 2003) and improving the usefulness of mammography (Swets et al., 2000).
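The separation of discrimination ability from response incentives has a standard quantitative form: sensitivity (d′) and criterion (c) in the equal-variance Gaussian model. The computation below is the classical one, but the hit and false-alarm rates are hypothetical, chosen to mirror the market-crash example.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Separate what an observer can discriminate (d') from their response
    bias (criterion c), per classical signal detection theory."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)            # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Two hypothetical forecasters with the same hit rate on "crash" calls:
# the second also raises many false alarms, so their discrimination is lower.
cautious = sdt_measures(hit_rate=0.80, false_alarm_rate=0.10)
alarmist = sdt_measures(hit_rate=0.80, false_alarm_rate=0.60)
print(cautious, alarmist)
```

Judging forecasters by hits alone, without the false-alarm column, conflates the two quantities, which is precisely the naïve interpretation the text warns against.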
Much IC analysis is done by teams or groups. Here, too, behavioral and social science research has identified ways to improve learning, by taking advantage of the shared knowledge that group members can contribute if their work is organized effectively. For example, teams’ productivity depends on their composition and decision-making procedures. Diversity in members’ knowledge can be very helpful (Page, 2007) if their specialization is recognized (Austin, 2003; see also Hastie, 2011). Decision making can be more effective if “unblocking” techniques allow members to develop novel solutions, achieving a balance between creative thought and undisciplined speculation.
Chapter 4 discusses research into the barriers to effective teamwork and ways to overcome them. For example, one such barrier arises from intergroup rivalries, which can form very quickly and become remarkably resilient. One way to make the boundaries between disparate groups more permeable is by temporary assignments to other groups. Indeed, even asking people to imagine being in the position of a person from the other group can reduce denigration of that group’s inputs (Dearborn and Simon, 1958; Galinsky and Moskowitz, 2000).
As an example of an intervention designed to overcome such barriers, the National Aeronautics and Space Administration’s (NASA’s) Goddard Space Flight Center instituted a “pause and learn” process in which teams or groups discuss what they have learned, prompted by reaching a project milestone, not by a sign that something has “gone wrong.” These sessions are reinforced by workshops in which teams share their lessons with other project teams, so as to diffuse learning in a noncrisis, nonblame environment. A familiar counterexample is the Challenger disaster, in which decision makers were physically separated and constrained by a division of labor that fostered needed specialization without facilitating the synthesis needed to take full advantage of it (see Zegart, 2011).
Whether organizations learn depends on their commitment to evidence-based evaluation. Even medicine has examples of procedures being practiced for many years before being tested and found wanting. One example is right-heart catheterization: when it was rigorously evaluated, after it had become standard practice, it was found to increase mortality rates (Connors et al., 1996). As another example, physicians used patient behaviors to diagnose the damage caused by closed-head injuries before research showed that those behaviors revealed little (Gouvier et al., 1988). The advent of “evidence-based medicine” has revolutionized medical practice by speeding this process of finding out what works, rather than relying exclusively on personal intuition and experience.
Other fields have gradually begun to adopt such approaches. For example, a summary of studies investigating police techniques found that the Drug Abuse Resistance Education (DARE) program, designed to persuade youngsters not to use illegal drugs, was ineffective, whereas police-probation collaborations have been shown to reduce criminal recidivism (Lum, 2009). Other systematic evaluations have found little evidence to support lie detection techniques (e.g., National Research Council, 2003; Szucko and Kleinmuntz, 1981) or voice-stress evaluation (Rubin, 2009). Ineffective techniques not only waste resources, but also incur the opportunity costs of not looking for effective ones.
The well-known 1993 Supreme Court case (Daubert v. Merrell Dow Pharmaceuticals [509 U.S. 579]) established empirical testing, peer review, and the use of the scientific method as the acceptable bases for admitting evidence into court. Conversely, the Court’s decision lowered the value of conventional practice and intuition unless they are supported by evidence. These “Daubert criteria” establish a default standard for any organization committed to evaluating the methods that it is currently using or considering adopting. Throughout this report, we adopt these evidentiary standards in proposing ways to strengthen the scientific foundations of intelligence analysis.
Such evaluation encounters natural opposition. As discussed above, intuition is a misleading guide to the actual effectiveness of analytical methods. In addition, many people are threatened by any change and so resist evaluation. To counter this opposition, research on organizational behavior has identified conditions conducive to ensuring proper evaluation. One of those conditions is strong leadership. A second is creating incentives that allow people to admit mistakes and change approaches without blame or punishment. A third is hiring individuals suited to the job: for many analyst positions, that will mean individuals with strong intellectual skills, perhaps in preference to individuals with strong domain-specific knowledge.
Organizations’ ability to adopt such practices depends, in part, on their customers’ willingness to allow them. The public nature of some of the IC’s work can limit its ability to admit to the need to learn. Constraints on communicating with diverse, harried clients can further limit its capacity to do its job. As a result, effective communication is a strategic necessity for performing the most relevant analyses and making their results most useful. Chapter 6 discusses the science of communication, along with its application to defining analyses and conveying the content, rationale, and authoritativeness of their results.
A REALISTIC AGENDA FOR CHANGE
Although the agenda proposed in the following chapters is ambitious, the committee concludes that, in many cases, the IC can make the needed changes with modest modifications in its procedures. Indeed, an evidence-based approach to analysis should be within its grasp, based on the changes already under way in the IC and the knowledge that the behavioral and social sciences can immediately bring to bear on them. In medicine, the adoption of evidence-based medicine has accelerated with the increased support of the community’s leaders and the increased accumulation of evidence demonstrating its value (Dopson et al., 2001). The IC is in a position to achieve similar success.
REFERENCES

Ariely, D. (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. New York: HarperCollins Publishers.
Arkes, H.R. (2001). Overconfidence in judgmental forecasting. In J.S. Armstrong, ed., Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 495-516). Norwell, MA: Kluwer Academic Publishers.
Arkes, H.R., D. Faust, T.J. Guilmette, and K. Hart. (1988). Eliminating the hindsight bias. Journal of Applied Psychology, 73(2), 305-307.
Austin, J.R. (2003). Transactive memory in organizational groups: The effects of content, consensus, specialization, and accuracy on group performance. Journal of Applied Psychology, 88(5), 866-878.
Berkowitz, B.D., and A.E. Goodman. (1991). Strategic Intelligence for American National Security (with new afterword). Princeton, NJ: Princeton University Press.
Berkowitz, B.D., and A.E. Goodman. (2002). Best Truth: Intelligence in the Information Age. New Haven, CT: Yale University Press.
Betts, R.K. (2007). Enemies of Intelligence: Knowledge and Power in American National Security. New York: Columbia University Press.
Buehler, R., D. Griffin, and M. Ross. (1994). Exploring the “Planning Fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366-381.
Bueno de Mesquita, E. (2005). Conciliation, counterterrorism, and patterns of terrorist violence. International Organization, 59, 145-176.
Connors, A.F., Jr., T. Speroff, N.V. Dawson, C. Thomas, F.E. Harrell, D. Wagner, N.A. Desbiens, L. Goldman, A.W. Wu, R.M. Califf, W.J. Fulkerson, H. Vidaillet, S. Broste, P. Bellamy, J. Lynn, and W.A. Knaus. (1996). The effectiveness of right heart catheterization in the initial care of critically ill patients. Journal of the American Medical Association, 276(11), 889-897.
Dawson, N.V., A.F. Connors, Jr., T. Speroff, A. Kemka, P. Shaw, and H.R. Arkes. (1993). Hemodynamic assessment in managing the critically ill: Is physician confidence warranted? Medical Decision Making, 13(3), 258-266.
Dearborn, D.C., and H.A. Simon. (1958). Selective perception: A note on the departmental identifications of executives. Sociometry, 21(2), 140-144.
Diamond, J. (2008). The CIA and the Culture of Failure: U.S. Intelligence from the End of the Cold War to the Invasion of Iraq. Stanford, CA: Stanford Security Studies.
Dopson, S., L. Locock, D. Chambers, and J. Gabbay. (2001). Implementation of evidence-based medicine: Evaluation of the Promoting Action on Clinical Effectiveness Programme. Journal of Health Services Research and Policy, 6(1), 23-31.
Firth, N.E., and J.H. Noren. (1998). Soviet Defense Spending: A History of CIA Estimates, 1950-1990. College Station, TX: Texas A&M University Press.
Fischhoff, B. (1982). Debiasing. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases (pp. 422-444). New York: Cambridge University Press.
Galinsky, A., and G.B. Moskowitz. (2000). Perspective taking: Decreasing stereotype expression, stereotype accessibility and ingroup favoritism. Journal of Personality and Social Psychology, 78(4), 708-724.
Gates, R.M. (1996). From the Shadows: The Ultimate Insider’s Story of Five Presidents and How They Won the Cold War. New York: Simon and Schuster.
George, R.Z., and J.B. Bruce, eds. (2008). Analyzing Intelligence: Origins, Obstacles, and Innovations. Washington, DC: Georgetown University Press.
Gilovich, T., D. Griffin, and D. Kahneman, eds. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. New York: Cambridge University Press.
Godson, R., E.R. May, and G.J. Schmitt, eds. (1995). U.S. Intelligence at the Crossroads: An Agenda for Reform. New York: New Discovery Books.
Gouvier, W.D., M. Uddo-Crane, and L.M. Brown. (1988). Base rates of post-concussional symptoms. Archives of Clinical Neuropsychology, 3(3), 273-278.
Hastie, R. (2011). Group processes in intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Helms, R., and W. Hood. (2004). A Look Over My Shoulder: A Life in the Central Intelligence Agency. New York: Random House.
Heuer, R.J., Jr. (1999). Psychology of Intelligence Analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.
Heuer, R.J., Jr., and R.H. Pherson. (2010). Structured Analytical Techniques for Intelligence Analysis. Washington, DC: CQ Press.
Jervis, R. (2010). Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War. Ithaca, NY: Cornell University Press.
Kahneman, D., and D. Lovallo. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39(1), 17-31.
Kahneman, D., and A. Tversky. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313-327.
Kahneman, D., P. Slovic, and A. Tversky, eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Keegan, J. (2003). Intelligence in War: Knowledge of the Enemy from Napoleon to Al-Qaeda. London, UK: Hutchinson.
Kirkpatrick, L.B. (1968). The Real CIA: An Insider’s View of the Strengths and Weaknesses of Our Government’s Most Important Agency. New York: Macmillan.
Kydd, A., and B.F. Walter. (2002). Sabotaging the peace: The politics of extremist violence. International Organization, 56(2), 263-296.
Lowenthal, M.M. (2009). Intelligence: From Secrets to Policy. 4th ed. Washington, DC: CQ Press.
Lum, C. (2009). Evidence-Based Practices in Criminal Justice. Paper presented at the Workshop on Field Evaluation and Cognitive Science-Based Methods and Tools for Intelligence and Counter-Intelligence. Washington, DC. September 22. Department of Criminology, Law and Society, George Mason University.
MacEachin, D.J. (1994). The Tradecraft of Analysis: Challenge and Change in the CIA. Working Group on Intelligence Reform Series, No. 13. Washington, DC: Consortium for the Study of Intelligence.
MacEachin, D.J. (1996). CIA Assessments of the Soviet Union: The Record Versus the Charge—An Intelligence Monograph. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.
MacEachin, D.J. (2002). U.S. Intelligence and the Confrontation in Poland, 1980-1981. University Park: Pennsylvania State University Press.
Mandel, D. (2009). Canadian Perspectives: Applied Behavioral Science in Support of Intelligence Analysis. Paper presented at the meeting of the Committee on Behavioral and Social Science Research to Improve Intelligence Analyses for National Security, Washington, DC. May 14. Available: http://individual.utoronto.ca/mandel/nas2009.pdf [October 2010].
Matthias, W.C. (2007). America’s Strategic Blunders: Intelligence Analysis and National Security Policy, 1936-1991. University Park: Pennsylvania State University Press.
May, E.R., and P.D. Zelikow. (2007). Dealing with Dictators: Dilemmas of U.S. Diplomacy and Intelligence Analysis, 1945-1990. Cambridge, MA: The MIT Press.
Milkman, K.L., D. Chugh, and M.H. Bazerman. (2009). How can decision making be improved? Perspectives on Psychological Science, 4(4), 379-383.
Murphy, A.H., and H. Daan. (1984). Impacts of feedback and experience on the quality of subjective probability forecasts: Comparison of results from the first and second years of the Zierikzee experiment. Monthly Weather Review, 112(3), 413-423.
National Research Council. (2003). The Polygraph and Lie Detection. Committee to Review the Scientific Evidence on the Polygraph. Board on Behavioral, Cognitive, and Sensory Sciences and Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
National Research Council. (2011). Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Page, S.E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton, NJ: Princeton University Press.
Paseman, F. (2005). A Spy’s Journey: A CIA Memoir. St. Paul, MN: Zenith Press.
Phillips, J.K., G. Klein, and W.R. Sieck. (2004). Expertise in judgment and decision making: A case for training intuitive decision skills. In D.J. Koehler and N. Harvey, eds., Blackwell Handbook of Judgment and Decision Making (pp. 297-315). Malden, MA: Blackwell.
Prados, J. (1982). The Soviet Estimate: U.S. Intelligence Analysis and Russian Military Strength. New York: The Dial Press.
Rubin, P. (2009). Voice Stress Technologies. Paper presented at the Workshop on Field Evaluation of Behavioral and Cognitive Sciences-Based Methods and Tools for Intelligence and Counterintelligence, National Research Council, Washington, DC. September 22. Haskins Laboratories.
Slovic, P., and B. Fischhoff. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 544-551.
Smith, A. (1995). The success and use of economic sanctions. International Interactions, 21(3), 229-245.
Spellman, B. (2011). Individual reasoning. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Swets, J.A., R.M. Dawes, and J. Monahan. (2000). Psychological science can improve diagnostic decisions. Psychological Science in the Public Interest, 1(1), 1-26.
Szucko, J.J., and B. Kleinmuntz. (1981). Statistical versus clinical lie detection. American Psychologist, 36(5), 488-496.
Tenet, G. (2007). At the Center of the Storm: The CIA During America’s Time of Crisis. New York: Harper Perennial.
Tetlock, P.E. (2006). Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
Turner, M.A. (2006). Why Secret Intelligence Fails. Chicago, IL: Potomac Books Inc.
Turner, S. (1991). Terrorism and Democracy. Boston, MA: Houghton Mifflin.
Turner, S. (2005). Burn Before Reading: Presidents, CIA Directors, and Secret Intelligence. New York: Hyperion.
U.S. Government. (2009). A Tradecraft Primer: Structured Analytical Techniques for Improving Intelligence Analysis. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf [April 2010].
Weiss, D.J., and J. Shanteau. (2003). Empirical assessment of expertise. Human Factors, 45(1), 104-116.
Wright, P. (1987). Spy Catcher: The Candid Autobiography of a Senior Intelligence Officer. New York: Viking Adult.
Zegart, A. (1999). Flawed by Design: The Evolution of the CIA, JCS, and NSC. Stanford, CA: Stanford University Press.
Zegart, A. (2011). Implementing change: Organizational challenges. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.