The complexity of the world precludes one-size-fits-all analytic approaches. Knowing which techniques to use for different problems is essential to sound analysis.
Analysts in the intelligence community (IC) have to perform many different tasks, including—but not limited to—answering the questions posed by their customers, providing warnings, and monitoring and assessing current events and new information (Fingar, 2011). In performing these tasks, they must consider the quality of information; the meaning of observed, reported, or assessed developments; and sources for additional information. The quality of each judgment is a function of the evidence, assumptions, analytic methods, and other aspects of the “tradecraft” at each stage of the process. As a result, IC analytic judgments are no better than the weakest link in any of the chains of analysis.
The IC has a long track record of successfully applying a wide variety of approaches to its tasks. It has, however, made limited use of behavioral and social sciences approaches used by other professions that face analogous problems. Those neglected approaches include probability theory, decision analysis, statistics and data analysis, signal detection theory, game theory, operations research, and qualitative analysis. This chapter begins by characterizing the cognitive challenges that analysts face, then provides brief descriptions of approaches designed to meet these challenges.
The committee concludes that basic familiarity with the approaches discussed in this chapter is essential to effective analysis. That familiarity should be deep enough to allow analysts to recognize the kinds of problems that they face and to secure the support that they need from experts for detailed applications of particular approaches. Analysts need not be game theorists, for example, in order to see a game theoretic situation and seek input from someone with the relevant expertise. However, without basic familiarity with a range of analytic approaches, they are unlikely to identify the basic kinds of interdependency between actors’ decisions inherent in game theoretic situations.
Often, analysts are left to reach conclusions by applying their own expert judgment to situations about which they have deep knowledge. Indeed, many analysts spend years, even decades, developing substantive expertise on specific countries or geographic regions, cultures, languages, religions, terrorist organizations, political movements, weapons systems, or industrial processes. This expertise will always be the primary resource in intelligence analysis.
Taking full advantage of domain-specific knowledge requires being able to apply it to new situations, to combine it with other forms of expertise, and to assess the definitiveness of the result. As discussed in Chapter 2, evidence from other areas finds that even knowledgeable individuals may make poor inferences and have unwarranted confidence in them (for reviews, see Arkes and Kajdasz, 2011; Spellman, 2011). For example, experienced stock analysts often do little better than chance in selecting profitable stock portfolios (Malkiel, 1999). The same has been found for doctors’ predictions of how faithfully individual HIV-infected drug users would adhere to antiretroviral therapy (Tchetgen et al., 2001). Foreign policy subject-matter experts do little better than well-informed lay people (or simple extrapolation from recent events) when predicting future political scenarios (Tetlock, 2006).
One condition that contributes to such overconfidence is the lack of task structure. Experts outperform novices (and chance) when tasks have well-structured cues, but when tasks are ill structured—as occurs with the ambiguous cues that often confront intelligence analysts—experts perform no better than novices (Devine and Kozlowski, 1995). A second condition that contributes to overconfidence is hindsight bias, which leads even experts to exaggerate how much they know or would have known if they had had to make others’ predictions (Fischhoff, 1975; Wohlstetter, 1962). A third condition is the ambiguity of many forecasts, allowing people to give themselves the benefit of the doubt when interpreting their predictions (Erev and Cohen, 1990).
A cornerstone of the behavioral and social sciences is a suite of analytical methods designed to address these conditions by structuring tasks, reducing their ambiguity, and providing evaluative criteria. The committee believes that all analysts should have basic familiarity with these analytical methods, taking advantage of the rigorous evaluation that they have undergone. Analysts’ familiarity should be, at a minimum, sufficient to identify the fundamental structure of different classes of problems and to communicate with experts capable of fully applying the methods.
STRUCTURED ANALYTIC TECHNIQUES
It is not news to the IC that relying on expert judgment and intuition has drawbacks, and, indeed, the IC has long recognized characteristic biases in analysts’ judgments and decisions (see Arkes and Kajdasz, 2011; Spellman, 2011). These biases include “mindset” or “groupthink,” in which a team prematurely converges on one hypothesis (or a small set of hypotheses) and then confirms that hypothesis by seeking out supportive data or interpreting existing data in ways favorable to it, rather than seeking data that might disprove it.
A number of methods, known collectively as structured analytic techniques, have been developed specifically to overcome or at least limit such biases. These methods, devised largely by former intelligence officers, date back to the pioneering writings of Richards Heuer, Jr. (1999; recently expanded and updated in Heuer and Pherson, 2010; Heuer, 2009). They have been included in introductory classes in intelligence analysis offered in the IC,1 in recently created intelligence studies programs,2 and in IC intelligence analysis tradecraft primers (Defense Intelligence Agency, 2008; U.S. Government, 2009). Besides avoiding some of the biases of judgment and intuition, these structured methods seek to improve teamwork and document the reasoning that underlies intelligence judgments (Heuer, 2009).
Perhaps the best known structured analytic technique, the analysis of competing hypotheses, has analysts create a matrix, with rows for individual data and columns for alternative hypotheses (Heuer, 1999). The method directly addresses the problems just described, by directing an analyst’s attention at the full sets of data and hypotheses and requiring an explicit tally of data consistent with each hypothesis. However, it is open to several possible objections (see National Research Council, 2010, pp. 18-21). One is that it gives no weight to the hypotheses’ a priori plausibility. Approaches grounded in probability theory (see the next section) require an assessment of the prior probability of each hypothesis’s being correct (e.g., relations between two countries staying constant, improving, or deteriorating).
Second, the usefulness (or diagnosticity) of data depends on how consistent they are with different hypotheses. For example, recalling diplomats or putting forces on alert could be consistent with both intending hostilities and hoping to prevent them. That ambiguity might be missed without more explicit assessment of conditional probabilities. Alternatively, the presence of many unlikely hypotheses may give a misleading tally to a favored hypothesis.
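These two concerns about the competing-hypotheses matrix can be made concrete in a few lines of code. Everything below (the hypotheses, the prior probabilities, and the conditional probabilities) is invented purely for illustration, not drawn from any real assessment.

```python
# A sketch of a competing-hypotheses tally with Bayesian weighting, using
# invented numbers. Priors and likelihoods are illustrative assumptions.

hypotheses = {
    # hypothesis: assumed prior probability
    "relations improve": 0.2,
    "relations constant": 0.6,
    "relations deteriorate": 0.2,
}

# P(observation | hypothesis) for two pieces of evidence (assumed values).
# Note that "forces on alert" is nearly as likely under several hypotheses,
# i.e., it has low diagnosticity.
likelihoods = {
    "diplomats recalled": {"relations improve": 0.05, "relations constant": 0.10,
                           "relations deteriorate": 0.60},
    "forces on alert":    {"relations improve": 0.20, "relations constant": 0.30,
                           "relations deteriorate": 0.40},
}

# Combine the evidence (assumed conditionally independent) and normalize.
posterior = dict(hypotheses)
for evidence in likelihoods.values():
    posterior = {h: p * evidence[h] for h, p in posterior.items()}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.2f}")
```

Unlike a simple consistency tally, the result reflects both the priors and the varying diagnosticity of each datum.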
The committee heard presentations advocating wider use of various forms of structured analytic techniques. In our view, all potential methods should be evaluated in light of their plausibility, given basic science, and their performance, in conditions as close as possible to those faced by analysts. The remainder of this chapter briefly reviews that science; the companion volume provides further details on the research.
PROBABILITY THEORY

Although analysts routinely entertain hypotheses that might explain particular observations and are trained to seek alternative explanations (see previous section), they rarely formalize those beliefs in the probabilities needed to communicate their degrees of belief and evaluate them in the light of future events.
Even though probability computations can become complicated, the basic ideas are quite simple. First, probability is a measure of an analyst’s belief that an event will occur (probability can also measure an analyst’s belief that something is true; e.g., that an observed event has a particular significance). Second, the probability that some event in the full set of possible outcomes will occur equals 1. Third, if two events are mutually exclusive (the occurrence of one precludes the occurrence of the other), then the chance that one or the other will occur equals the sum of their individual probabilities. The rules of Bayesian inference build on these simple principles, leading to orderly judgments about uncertain events. As analysts come to understand the basic logic, their judgments are likely to improve. (For more information on the logic and value of probability theory, see, among others, Drake, 1999.)
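A minimal sketch of these principles in action, using Bayes’ rule to update a belief; all of the numbers below are hypothetical assumptions chosen only to show the mechanics.

```python
# A minimal sketch of Bayes' rule, with invented numbers: updating the assessed
# probability that hostilities are intended after observing a troop mobilization.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and two conditional probabilities."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Assumptions: prior P(hostilities) = 0.10; mobilization is seen 80% of the time
# when hostilities are intended and 20% of the time when they are not.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(posterior, 3))  # prints 0.308: the evidence roughly triples the assessed probability
```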
Contrast this orderly use of probability with the estimative language (or verbal quantifiers) used to ascribe degrees of likelihood in National Intelligence Estimates; see Figure 3-1. Although the likelihood of an event clearly increases as one moves from the left to the right on the scale in Figure 3-1, it is very difficult to say anything more than that. (For an early discussion on estimative language, see Kent, 1964.) Suppose one needed to know the chance that something other than two mutually exclusive events would occur, when one was “unlikely” and the other “even chance.” How much should one worry about the remaining possibilities?
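The question just posed has a direct arithmetic answer once the verbal quantifiers are mapped to numbers; the mappings below (“unlikely” to 0.20, “even chance” to 0.50) are assumptions for illustration, which is precisely the ambiguity the text identifies.

```python
# Illustrative numeric readings of the two verbal quantifiers (assumed values):
p_event_a = 0.20  # "unlikely"
p_event_b = 0.50  # "even chance"

# For mutually exclusive, exhaustive possibilities the probabilities sum to 1,
# so everything other than A or B carries the complement:
p_other = 1 - (p_event_a + p_event_b)
print(round(p_other, 2))  # prints 0.3
```

With verbal quantifiers alone, no such residual calculation is possible.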
Some skeptics argue that using probabilities in intelligence analysis is inappropriate. For example, the National Intelligence Council (2007b, p. 4)
explicitly states, “Assigning precise numerical ratings to such judgments would imply more rigor than we intend.” Similarly, a National Intelligence Estimate on Iran (National Intelligence Council, 2007a, p. 4) said, “Because analytic judgments are not certain, we use probabilistic language to reflect the community’s estimates.”
The committee disagrees with these blanket dismissals of considering probabilities. If analytic rigor and certainty are to be improved, probability has to be included in the analytic process. In his classic essay “Words of estimative probability,” Sherman Kent (1964) captured a truth whose details have been studied extensively by behavioral scientists: numeric probabilities convey expectations clearly, while verbal quantifiers (e.g., likely, rarely, a good chance) do not (see Budescu and Wallsten, 1995). Verbal quantifiers can mean different things to different people—and even to the same person in different situations (e.g., unlikely rain, unlikely surgical complication).
Consider, for example, possible interpretations of the statement, “When military exercises are performed, the president rarely attends.” The word “rarely” might be interpreted to cover a wide range of numeric values. Although this particular ambiguity is easily resolved by reporting the percentage of known historical events, the implications for national security become clear when an analyst leaves the historical record vague and then stacks further verbal quantifiers onto a forecast on which a decision maker may act. For example, “Because the president announced he will attend next week’s exercise, it is likely an offensive provocation rather than a routine exercise.” By assigning numerical values to historically known events, an analyst can more readily attach numeric probabilities to future events, improving the clarity and value of assessments provided to decision makers (see the discussion of communication in Chapter 6).
Concerns that teaching probability theory and application to analysts is too difficult are not well founded, as evidenced in the numerous academic
programs that include it as a core subject. Probability (and the other formal methods discussed in this chapter) is regularly taught to master’s degree students in applied programs in business administration, public health, and other fields, as well as being required for undergraduates in economics, political science, psychology, and other behavioral and social sciences. That is, students with intellectual talents like those of intelligence analysts routinely develop skills at the level the committee recommends. The committee sees no reason that analysts cannot be made familiar with these approaches through a combination of in-house training for working analysts and hiring new analysts whose education includes the relevant training. The committee also notes that many, if not most, people can make orderly probability judgments (see O’Hagan et al., 2006), including representative samples of U.S. 15- and 16-year-olds (Bruine de Bruin et al., 2006; Fischhoff et al., 2000) and adults judging disease risks (Woloshin et al., 2008). The committee is confident that intelligence analysts, if provided with basic familiarity, are capable of both understanding and applying probability theory in their work.
Indeed, basic probability principles such as Bayes’ rule have occasionally been used in intelligence analysis. Zlotnick (1972) reports an application to the Cuban missile crisis; Schweitzer (1978) discusses how Bayesian reasoning was applied to continual assessment of the probability that there would be an outbreak of hostilities in the Middle East; Mandel (2009) reports well-calibrated probabilities from Canadian analysts.
A suitable protocol for probability judgments should address the potential concern that precise numbers convey unwarranted precision in analytic assessments. One approach is embodied in the “confidence in assessments” characterizations that currently accompany the verbal likelihood statements in National Intelligence Estimates. A second approach is to systematically summarize the quality of the underlying evidence, considering such issues as the kind of data used, the rigor of the review processes, and the maturity of the underlying theory (e.g., the numerical unit spread assessment pedigree [NUSAP] system; Funtowicz and Ravetz, 1990). A third approach is to conduct sensitivity analyses, showing how summary probability judgments would change with plausible variations in underlying assumptions (e.g., if economic growth were 7 percent instead of 3 percent). Properly formulated, such analyses might support meaningful ranges of summary probabilities (e.g., a 60-70 percent chance of elections before the end of the year).
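The third approach can be sketched in a few lines. The linear relationship below between assumed economic growth and the probability of early elections is invented purely to illustrate the mechanics of a sensitivity analysis; it is not a real model.

```python
# A sketch of a simple sensitivity analysis: vary an underlying assumption and
# watch the summary probability judgment move. The model is a hypothetical stand-in.

def p_early_elections(growth_rate_pct):
    """Hypothetical judgment model: slower growth makes early elections more likely."""
    p = 0.85 - 0.05 * growth_rate_pct
    return min(max(p, 0.0), 1.0)  # keep the result a valid probability

for growth in (3, 5, 7):
    print(f"assumed growth {growth}%: P(early elections) = {p_early_elections(growth):.2f}")
```

The spread of outputs across plausible assumptions (here 0.50 to 0.70) is the kind of meaningful probability range the text describes.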
In addition to having individual analysts express their personal beliefs in probability terms, there are methods for eliciting such judgments from groups. One promising method is the prediction market. In it, participants trade positions on “securities” on the basis of well-defined events, such that the value of the security depends on whether the event occurs. The security might be worth $1 if it occurs and nothing if it does not occur.
The security’s trading price represents the market’s probability that the event will occur. Prediction markets have been found to “… increase the accuracy of poll-based forecasts of election outcomes, official corporate experts’ forecasts of printer sales, and statistical weather forecasts used by the National Weather Service” (Arrow et al., 2008, p. 877; see also Chen et al., 2008; Wolfers and Zitzewitz, 2004, 2006). Many companies, including Google (Cowgill, 2005) and Intel (Intel, 2007), now use prediction markets internally to forecast the likelihood of future events of corporate interest. In popular culture, on January 20, 2010, Intrade’s market price was 70 cents for the security “Tiger Woods will play in a PGA Tour Event before April 30, 2010.” It was 7-9 cents for “Osama Bin Laden will be captured/neutralized before midnight ET on 30 Jun 2010” (see http://www.intrade.com [November 2010]).
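The price-to-probability reading can be sketched directly; the prices and beliefs below are illustrative assumptions.

```python
# A sketch of how a prediction-market price maps to an event probability. The
# contract pays $1.00 if the event occurs and $0 otherwise, so a risk-neutral
# trader who believes P(event) = p values it at p dollars.

def expected_profit_cents(price_cents, believed_probability):
    """Expected profit, in cents, from buying one contract at the given price."""
    payoff_cents = 100  # pays $1.00 if the event occurs, else nothing
    return believed_probability * payoff_cents - price_cents

market_price_cents = 70
implied_probability = market_price_cents / 100
print(implied_probability)  # prints 0.7: the market's probability for the event

# A trader who believes the true probability is 0.80 expects to profit by buying;
# such trades are what push the price toward the crowd's aggregate belief:
print(expected_profit_cents(market_price_cents, 0.80))  # prints 10.0
```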
Despite the misadventure by the Defense Advanced Research Projects Agency in first proposing, then canceling, a policy analysis futures market (Hanson, 2003), the committee concludes that the use of prediction markets in the IC bears systematic empirical evaluation.
DECISION ANALYSIS

Decision analysis provides another family of methods potentially suited to intelligence problems (Howard and Matheson, 1983; Raiffa, 1968). Decision analysis offers systematic procedures for formulating and solving problems that involve choices under uncertainty. Decision analysis could provide a vehicle for structuring and analyzing intelligence problems that require analysts to infer or interpret the choices of adversaries and others, both of interest in their own right and as inputs to game theory analyses (see below).
A central concept in decision analysis is the “value of information” (Fischhoff, 2011; Howard and Matheson, 1983; Raiffa, 1968): how much better decisions can be made with a given piece of information than without it. Decision analysis provides a way to formalize this assessment for various kinds of decisions and information. Even just thinking in these terms can help customers determine what they really need to know, so that they can make more precise requests for information, while at the same time helping analysts to assess their customers’ needs. Decision analysis can also provide a check against collecting and reporting information simply because “we’ve always done it” or because it seems like it would be good to know.
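The value-of-information calculation can be sketched with invented payoffs: compare the best a decision maker can do without a report to what a perfectly accurate report would allow. All the numbers below are assumptions chosen to show the mechanics.

```python
# A sketch of "value of (perfect) information" on a toy warning problem.

p_attack = 0.3  # assumed prior probability of an attack

# Payoffs (arbitrary units) for each (action, state) pair, all assumptions:
payoffs = {
    ("mobilize", "attack"): -10, ("mobilize", "no attack"): -10,  # mobilizing costs the same either way
    ("wait", "attack"): -100,    ("wait", "no attack"): 0,
}
actions = ("mobilize", "wait")

def expected_payoff(action, p):
    return p * payoffs[(action, "attack")] + (1 - p) * payoffs[(action, "no attack")]

# Without the report: commit to the single action with the best expected payoff.
best_without = max(expected_payoff(a, p_attack) for a in actions)

# With a perfectly accurate report: pick the best action in each state, weighted by the prior.
best_with = (p_attack * max(payoffs[(a, "attack")] for a in actions)
             + (1 - p_attack) * max(payoffs[(a, "no attack")] for a in actions))

print(best_without)              # prints -10.0 (always mobilize)
print(best_with)                 # prints -3.0
print(best_with - best_without)  # prints 7.0, the value of perfect information
```

The difference bounds what the decision maker should be willing to pay, in effort or collection resources, for the report.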
As with probability theory, decision analysis is regularly taught as a core subject in professional programs to students with no prior exposure and even modest analytical aptitude. Readily available computer software (e.g., Palisade’s PrecisionTree or TreeAge’s product of the same name) can guide training and applications. These programs are often compatible with common spreadsheet programs, such as Excel, which makes it possible for students with minimal mathematical training to use standard decision analysis tools, such as decision trees and influence diagrams. As with the other methods in this chapter, the committee concludes that familiarity with these basic concepts of decision analysis is essential to intelligence analysis.
STATISTICS AND DATA ANALYSIS
Of all social science research methods, statistics and data analysis probably represent the most recognized family of tools. The committee concludes that basic (not expert) data analytic and statistical familiarity should be a requirement for any intelligence analyst. This familiarity would include such knowledge as how to organize and display data, how to calculate descriptive measures of central tendency (e.g., means, medians, and modes) and variability (e.g., range, variance, standard deviation, and mean absolute deviation), how to construct simple point and interval estimates (e.g., confidence intervals), how to perform simple statistical hypothesis tests, and how to search for relationships among variables (e.g., correlation and regression).
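A sketch of these descriptive and inferential basics, applied to a small invented sample (say, observed days between an adversary’s test launches); the data are hypothetical.

```python
# Descriptive statistics and a simple interval estimate on invented data.
import math
import statistics

data = [12, 15, 9, 21, 15, 18, 11, 14, 16, 13]  # hypothetical observations

mean = statistics.mean(data)      # central tendency
median = statistics.median(data)
stdev = statistics.stdev(data)    # sample standard deviation (variability)
print(f"mean={mean}, median={median}, sd={stdev:.2f}")

# A rough 95% interval estimate for the mean, using the normal approximation;
# with only 10 observations a t-based interval would be more defensible.
half_width = 1.96 * stdev / math.sqrt(len(data))
print(f"approximate 95% CI for the mean: ({mean - half_width:.1f}, {mean + half_width:.1f})")
```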
The committee recognizes that intelligence work has constraints that can complicate statistical analysis. For example, analysts may have less opportunity to ensure the representativeness of the data that they have to analyze. But even in such cases, they can benefit from statistical approaches for characterizing imperfect samples (e.g., length-biased sampling, truncation, censoring, or multiple systems analysis). Intelligence analysts often must work with data that have been deliberately manipulated to deceive them. Here, too, they may benefit from statistical procedures for identifying outliers and inconsistencies. In any event, many intelligence issues involve routine, challenging problems of data quality that statistics can clarify; studies on climate change, economic development, or election forecasts face many of the same problems.
SIGNAL DETECTION THEORY
Although perhaps less well known than the other methods discussed in this chapter, signal detection theory deals with a fundamental problem when making judgments under uncertainty: how to differentiate between an analyst’s knowledge and response biases (for a review, see McClelland,
2011). Two people looking at the same evidence regarding an uncertain event (e.g., a political crisis, a change in military readiness, or an impending hurricane) may make different predictions either because one has a better understanding of the situation or because one is more willing to predict the event (e.g., warn decision makers about a political crisis, military readiness problem, or a hurricane).
Signal detection theory can be used externally to sort out why people say different things or why sensors have different response patterns. Indeed, signal detection theory is a standard technique in signals intelligence (SIGINT). However, it can also be used to provide clear reporting incentives for all-source analysts so that they know what level of surety is needed before they issue a signal. Signal detection theory embodies the principles of Bayesian reasoning in that it establishes the importance of expectations in predictions. If an event is very unlikely, it should not be predicted unless there is a very strong signal or there is very strong need not to miss it.
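The core computation can be sketched as follows: from hit and false-alarm rates, signal detection theory separates sensitivity (d', how well the signal is distinguished from noise) from response bias (the criterion c, willingness to issue a warning). The rates below are invented for illustration.

```python
# A sketch of the standard equal-variance Gaussian signal detection computation.
from statistics import NormalDist

def sensitivity_and_bias(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf  # z-transform (inverse standard normal CDF)
    d_prime = z(hit_rate) - z(false_alarm_rate)             # knowledge: signal/noise separation
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # bias: willingness to warn
    return d_prime, criterion

# Two hypothetical analysts with the same underlying knowledge but different
# thresholds for issuing a warning:
cautious = sensitivity_and_bias(hit_rate=0.69, false_alarm_rate=0.07)
eager = sensitivity_and_bias(hit_rate=0.93, false_alarm_rate=0.31)
print(f"cautious: d'={cautious[0]:.2f}, c={cautious[1]:.2f}")
print(f"eager:    d'={eager[0]:.2f}, c={eager[1]:.2f}")
```

The two analysts show essentially identical d' but opposite criteria: they differ in willingness to warn, not in knowledge, which is exactly the distinction the text describes.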
GAME THEORY

Of all social science methods, game theory best captures many of the arenas in which intelligence analysis takes place. Game theory provides a formal structure for anticipating decisions, taking into account each decision maker’s expectations about how others will respond to alternative choices, with each actor picking the action expected to yield the greatest net return. It assumes that whenever individuals interact, they do so on the basis of rational calculations that maximize their own self-interest (Bueno de Mesquita, 2009a, 2011; Dixit and Nalebuff, 2008; Myerson, 1991). Game theory assumes, further, that people (agents) conduct decision analyses of their circumstances, with one important extension—each player imagines how the other agents make the same calculations on their own behalf. Game theory models then determine what happens in equilibrium—that is, when no agent can improve his or her position by choosing another action. Rather than extrapolating forward from the past, as with common statistical time-series analysis (Box and Jenkins, 1976), game theory models look forward and reason backwards.
Imagine a country that has developed a new weapon or strategy to counter terror attacks, such as Israel’s development of the “Iron Dome” system to counter Qassam rockets and other missiles (Frenkel, 2010). A naïve forecast of the future use of that system might extrapolate past trends in rocket attacks and presume preventive fire in proportion to the rate of incoming projectiles. A game theory model, however, might conclude that if the new system is truly effective, then it would rarely, if ever, be used. The reasoning is that those responsible for firing rockets would realize the futility of their efforts in the face of an effective air defense system. Hence,
they would switch tactics from rocket fire to something different, meaning that the new system would never be used in response to terrorist threats.
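The look-forward, reason-backward logic of this example can be sketched as a two-move game. The structure and all payoffs below are invented for illustration, not an analysis of any real system.

```python
# Backward induction on an invented two-move game: an attacker decides whether
# to fire rockets or switch tactics; if rockets are fired, a defender with an
# effective new system decides whether to intercept. Payoffs are assumptions.

# (attacker_payoff, defender_payoff) for each complete path of play:
outcomes = {
    ("fire", "intercept"): (-5, 2),     # rockets fired but stopped
    ("fire", "no intercept"): (3, -5),  # rockets get through
    ("switch", None): (0, 0),           # attacker abandons rocket fire
}

def defender_best_response(attacker_move):
    """The defender's payoff-maximizing reply to a given attacker move."""
    if attacker_move == "switch":
        return None  # no decision left to make
    return max(("intercept", "no intercept"),
               key=lambda reply: outcomes[(attacker_move, reply)][1])

def attacker_choice():
    """The attacker anticipates the defender's reply before moving (backward induction)."""
    def anticipated_payoff(move):
        return outcomes[(move, defender_best_response(move))][0]
    return max(("fire", "switch"), key=anticipated_payoff)

print(attacker_choice())  # prints: switch
```

In equilibrium the attacker switches tactics: the effective system deters rocket fire without ever being used, which is precisely the non-extrapolative conclusion described above.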
Similar logic was at the heart of the “mutually assured destruction” strategy that characterized the nuclear standoff between the United States and the former Soviet Union during the Cold War. The game theoretic analysis associated with U.S. policy at the time was classified, but it has since been made public (see Aumann et al., 1995). One of the key developers was awarded the 2005 Nobel prize in economics. For specific aspects of game theory with special relevance for intelligence and foreign policy analysis, see Bueno de Mesquita (2011).
Game theory models can quickly become quite complicated, but, as with decision analysis, software tools facilitate the formulation and solution of elementary game models (e.g., Gambit, 2007; Bueno de Mesquita, 2009b). As with the other methods, the committee does not advocate that all analysts become expert game theorists; rather, it concludes that a basic familiarity with key concepts and constructs from game theory can help analysts better formulate and think through the problem sets they confront and help them recognize when more advanced technical knowledge is needed.
OPERATIONS RESEARCH

Operations research refers both to the scientific study of operations for the purpose of making better decisions (see Kaplan, 2011) and to the collection of quantitative methods tailored for such study. The “operations” can involve the repetitive procedures and tasks that individuals and organizations undertake in order to achieve their goals. Familiar examples include the activities involved in the manufacture of cars or other physical products, the processing of patients in hospitals or other health care centers (including the details of needed medical procedures), the distribution and routing of people or materiel across transportation networks, and the procedures that bank tellers, phone operators, or Internet help desk advisers use in serving customers. The main methods include optimization models used to determine how to minimize costs, maximize profits, maximize lives saved, or minimize the time required to complete a project; stochastic processes, which build on basic probability theory to address situations where randomness and uncertainty dominate; and decision analysis (e.g., Hillier and Lieberman, 2010).
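The optimization flavor of operations research can be sketched with exhaustive search on a tiny invented allocation problem: assign 6 inspection teams across three sites to maximize expected detections. The sites, team counts, and detection values are all illustrative assumptions; real problems of any size would use proper optimization solvers rather than enumeration.

```python
# Brute-force optimization of a toy resource-allocation problem.
from itertools import product

# Expected detections at each site as a function of teams assigned
# (assumed values, with diminishing returns):
value = {
    "port":    [0, 4, 7, 9, 10, 10, 10],
    "airport": [0, 3, 5, 6, 7, 7, 7],
    "border":  [0, 5, 8, 9, 9, 9, 9],
}
sites = ("port", "airport", "border")
teams = 6

best = max(
    (alloc for alloc in product(range(teams + 1), repeat=3) if sum(alloc) == teams),
    key=lambda alloc: sum(value[s][n] for s, n in zip(sites, alloc)),
)
print(dict(zip(sites, best)))  # one optimal allocation (ties broken by search order)
```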
For intelligence analysts, these methods could answer questions concerning the operations, capabilities, or procedures underlying adversaries’ (or allies’) systems of interest. Although the mathematical methods that underlie operations research are deep, the basic concepts can be grasped without advanced mathematics. Moreover, easy-to-use computer software allows formulating and solving simple models with modest training. Examples of such software include Frontline Systems’ Solver, the standard version of which ships as part of Microsoft Excel; the operations research modeling suite contained in SAS (see http://www.sas.com/technologies/analytics/optimization/or/ [June 2010]); and Microsoft Project (see http://www.microsoft.com/project/en/us/default.aspx [June 2010]).
QUALITATIVE ANALYSIS

Qualitative analysis is a major part of what the IC produces. Most intelligence analysts spend a substantial portion of their careers doing qualitative investigations of countries, regions, issue areas, nonstate actors, and transnational threats. When performed correctly, qualitative research can be as objective and rigorous as quantitative research (King et al., 1994). Because qualitative analysis is more easily read than quantitative analysis, it can seem less demanding. As a result, sophisticated qualitative research has been the exception, not the rule, especially for studies with a small number of cases. However, accurate description and reliable explanation are fundamental to science—and are the hallmark of analytic, structured qualitative research.
The same basic rules of research design hold for qualitative research that seeks to describe and explain past events as for any research that strives to make informed forecasts. Central to such studies is the “plot” (Cronon, 1992), the integrative perspective that can bias stories. One safeguard is to ask theoretical questions about the variables and relationships in the narrative, regarding whether the claimed process is generally true. Analysts can provide that essential service because of their unique position, between information collectors and customers (policy makers), allowing them to help customers reframe their questions into testable hypotheses.
Structured qualitative analysis goes beyond a focus on individual hypotheses to generate observable implications, clarifying their meaning and suggesting additional data and hypotheses. That structure reduces the natural tendency to “condition on consequences,” treating the outcome as the natural result of a linear chain of events (Dawes, 1993, 2005; Fischhoff, 1975, 1978), while also guarding against hindsight bias. It is part of the game theory method of looking off the equilibrium path (Bueno de Mesquita, 2011), which requires analysts to consider what might have happened had different events and decisions occurred, providing a more complete understanding of the challenges and constraints that decision makers face. Thus, structured qualitative research incorporates elements of the quantitative intellectual tool kit (e.g., game theory, decision theory) (see Skinner, 2011). Even when these strategies do not eliminate biases (e.g., mindset, ideology, creeping determinism), they help analysts be more mindful of their assumptions and cautious about their conclusions (Fischhoff, 1980).
For example, applied to open source information (Chapter 1), analysts would likely benefit from the application of basic scientific research methods to the identification and use of public domain data, including:

- following open sources routinely, developing the mastery needed to compare their practices and detect changes in their reporting;
- searching for observable implications of hypotheses derived from secret sources that can be tested in open sources, and vice versa;
- deriving hypotheses from open sources, then cross-checking them with “trusted” secret sources, and vice versa; and
- explicitly reporting open sources in assessments provided to policy makers, so as to reveal their provenance.
Following these methods would subject qualitative intelligence analyses to the discipline imposed on scholarly research, but without the irrelevant encumbrances of academic research (see Skinner, 2011).
CONCLUSION

The behavioral and social sciences have a large number of analytic methods that have been developed through the interplay of theory and applications, conducted in the harsh light of open scientific peer review. The best of these methods belong in the IC’s tool kit. The IC’s analysts need to know enough about these fundamental ways of looking at the world to enrich their own thinking and to secure the services of experts when needed. In order to serve its customers, the IC needs to be a critical consumer of analytical methods, both identifying those best suited to its needs and avoiding the temptation to rely on unproven methods that promise to do the impossible.
REFERENCES

Arkes, H., and J. Kajdasz. (2011). Intuitive theories of behavior as inputs to analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Arrow, K.J., R. Forsythe, M. Gorham, R. Hahn, R. Hanson, J.O. Ledyard, S. Levmore, R. Litan, P. Milgrom, F.D. Nelson, G.R. Neumann, M. Ottaviani, T.C. Schelling, R.J. Shiller, V.L. Smith, E. Snowberg, C.R. Sunstein, P.C. Tetlock, P.E. Tetlock, H.R. Varian, J. Wolfers, and E. Zitzewitz. (2008). Economics—The promise of prediction markets. Science, 320(5878), 877-878.
Aumann, R.J., and M. Maschler (with the collaboration of R.E. Stearns). (1995). Repeated Games with Incomplete Information. Cambridge, MA: MIT Press.
Box, G.E.P., and G.M. Jenkins. (1976). Time Series Analysis: Forecasting and Control. San Francisco, CA: Holden-Day.
Bruine de Bruin, W., B. Fischhoff, L. Brilliant, and D. Caruso. (2006). Expert judgments of pandemic influenza. Global Public Health, 1(2), 178-193.
Budescu, D.V., and T.S. Wallsten. (1995). Processing linguistic probabilities: General principles and empirical evidence. In J. Busemeyer, D.L. Medin, and R. Hastie, eds., Decision Making from a Cognitive Perspective (pp. 275-318). New York: Academic Press.
Bueno de Mesquita, B. (2009a). The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See and Shape the Future. New York: Random House.
Bueno de Mesquita, B. (2009b). The Predictioneer’s Game. Available: http://www.predictioneersgame.com/game [March 2010].
Bueno de Mesquita, B. (2011). Applications of game theory in support of intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Chen, M.K., J.E. Ingersoll, Jr., and E.H. Kaplan. (2008). Modeling a presidential prediction market. Management Science, 54(8), 1381-1394.
Cowgill, B. (2005). Putting Crowd Wisdom to Work. Available: http://googleblog.blogspot.com/2005/09/putting-crowd-wisdom-to-work.html [March 2010].
Cronon, W. (1992). A place for stories: Nature, history, and narrative. The Journal of American History, 78(4), 1347-1376.
Dawes, R. (1993). Prediction of the future versus an understanding of the past: A basic asymmetry. American Journal of Psychology, 106(1), 1-24.
Dawes, R. (2005). The ethical implications of Paul Meehl’s work on comparing clinical versus actuarial prediction methods. Journal of Clinical Psychology, 61(10), 1245-1255.
Defense Intelligence Agency, Directorate for Analysis. (2008). Analytic Methodologies: A Tradecraft Primer: Basic Structured Analytic Techniques. Washington, DC: Defense Intelligence Agency.
Devine, D.J., and S.W.J. Kozlowski. (1995). Expertise and task characteristics in decision making. Organizational Behavior and Human Decision Processes, 64, 294-306.
Dixit, A.K., and B.J. Nalebuff. (2008). The Art of Strategy: A Game Theorist’s Guide to Success in Business and Life. New York: W.W. Norton.
Drake, A.W. (1999). Fundamentals of Applied Probability Theory. (Re-issue of 1967 text.) New York: McGraw-Hill.
Erev, I., and B.L. Cohen. (1990). Verbal versus numerical probabilities: Efficiency, biases and the preference paradox. Organizational Behavior and Human Decision Processes, 45, 1-18.
Fingar, T. (2011). Analysis in the U.S. intelligence community: Missions, masters, and methods. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.
Fischhoff, B. (1978). Intuitive use of formal models. A comment on Morrison’s “qualitative models in history.” History and Theory, 17(2), 207-210.
Fischhoff, B. (1980). For those condemned to study the past: Reflections on historical judgment. New Directions for Methodology of Social and Behavioral Science, 4, 79-93.
Fischhoff, B. (2011). Communicating about analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Fischhoff, B., A. Parker, W. Bruine de Bruin, J. Downs, C. Palmgren, R.M. Dawes, and C. Manski. (2000). Teen expectations for significant life events. Public Opinion Quarterly, 64(2), 189-205.
Frenkel, S. (2010). Israel says tests on Iron Dome missile system have been a success. The Times. January 8. Available: http://www.timesonline.co.uk/tol/news/world/middle_east/article6980098.ece [March 2010].
Funtowicz, S.O., and J.R. Ravetz. (1990). Uncertainty and Quality in Science for Policy. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Gambit. (2007). About Gambit. Available: http://gambit.sourceforge.net/index.html [March 2010].
Hanson, R. (2003). The Policy Analysis Market (and FutureMAP) Archive. Available: http://hanson.gmu.edu/policyanalysismarket.html [March 2010].
Heuer, R.J., Jr. (1999). Psychology of Intelligence Analysis. Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency.
Heuer, R.J., Jr. (2009). The Evolution of Structured Analytic Techniques. Notes on a paper presented to the Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, National Research Council, December 8, Washington, DC. Available: http://www7.nationalacademies.org/bbcss/DNI_Heuer_Text.pdf [September 2010].
Heuer, R.J., Jr., and R.H. Pherson. (2010). Structured Analytic Techniques for Intelligence Analysis. Washington, DC: CQ Press.
Hillier, F.S., and G.J. Lieberman. (2010). Introduction to Operations Research. 9th ed. New York: McGraw-Hill.
Howard, R.A., and J. Matheson, eds. (1983). The Principles and Applications of Decision Analysis (2 volumes). Palo Alto, CA: Strategic Decisions Group.
Intel. (2007). Using forecasting markets to manage demand risk. Intel Technology Journal, 11(2). DOI: 10.1535/itj.1102.04. Available: http://www.intel.com/technology/itj/2007/v11i2/4-forecasting/4-mechanisms.htm [March 2010].
Kaplan, E. (2011). Operations research and intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Kent, S. (1964). Words of estimative probability. Studies in Intelligence, Fall. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/sherman-kent-and-the-board-of-national-estimates-collected-essays/6words.html [April 2010].
King, G., R.O. Keohane, and S. Verba. (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press.
Malkiel, B.G. (1999). A Random Walk Down Wall Street. New York: W.W. Norton.
Mandel, D. (2009). Applied Behavioral Sciences in Support of Intelligence Analysis. Presentation at the public workshop of the Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, National Research Council, Washington, DC. May 15. Audio available: http://www7.nationalacademies.org/bbcss/DNI_Audio.html [November 2010].
McClelland, G. (2011). Use of signal detection theory as a tool for enhancing performance and evaluating tradecraft in intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Myerson, R.B. (1991). Game Theory: Analysis of Conflict. Cambridge, MA: Harvard University Press.
National Intelligence Council. (2007a). Iran: Nuclear Intentions and Capabilities. Available: http://www.dni.gov/press_releases/20071203_release.pdf [June 2010].
National Intelligence Council. (2007b). Terrorist Threat to the US Homeland. Available: http://www.dni.gov/press_releases/20070717_release.pdf [June 2010].
National Research Council. (2010). Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary. Robert Pool, Rapporteur. Planning Committee on the Field Evaluation of Behavioral and Cognitive Sciences-Based Methods and Tools for Intelligence and Counterintelligence. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
O’Hagan, A., C.E. Buck, A. Daneshkhah, J.R. Eiser, P.H. Garthwaite, D.J. Jenkinson, J.E. Oakley, and T. Rakow. (2006). Uncertain Judgments: Eliciting Expert Probabilities. Chichester, UK: John Wiley and Sons.
Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Reading, MA: Addison-Wesley.
Schweitzer, N. (1978). Bayesian analysis: Estimating the probability of Middle East conflict. In R.J. Heuer, Jr., ed., Quantitative Approaches to Political Intelligence: The CIA Experience (pp. 11-30). Boulder, CO: Westview Press.
Skinner, K. (2011). Qualitative analysis for the intelligence community. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Spellman, B. (2011). Individual reasoning. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Tchetgen, E., E.H. Kaplan, and G.H. Friedland. (2001). The public health consequences of screening patients for adherence to highly active antiretroviral therapy. Journal of Acquired Immune Deficiency Syndromes, 26(2), 118-129.
Tetlock, P.E. (2006). Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
U.S. Government. (2009). A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis. Washington, DC. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf [March 2010].
Wohlstetter, R. (1962). Pearl Harbor: Warning and Decision. Stanford, CA: Stanford University Press.
Wolfers, J., and E. Zitzewitz. (2004). Prediction markets. Journal of Economic Perspectives, 18(2), 107-126.
Wolfers, J., and E. Zitzewitz. (2006). Interpreting Prediction Market Prices as Probabilities. National Bureau of Economic Research Working Paper 12200. Cambridge, MA: National Bureau of Economic Research.
Woloshin, S., L.M. Schwartz, and H.G. Welch. (2008). Know Your Chances: Understanding Health Statistics. Berkeley: University of California Press.
Zlotnick, J. (1972). Bayes’ theorem for intelligence analysis. Studies in Intelligence, 16(2), 43-52.