Clear, open communication between analysts and customers is essential for analyses that are timely, targeted, and useful.
Good communication can be challenging under the best of conditions. Even with extended direct interaction and incentives for candor, customers and analysts may not know what to ask one another or how to detect residual misunderstandings. Communication challenges increase when time is short and the interactions are constrained (e.g., by status relations, politically charged topics, or time pressures). They are tougher still when there is no direct communication between analysts and customers. In such cases, analysts and their customers need organizational procedures that effectively guide requesting, formulating, editing, and transmitting analyses.
Additional pressures arise when analysts know that people other than their direct customers may read, judge, and act on their assessments (e.g., tactical military commanders may access national-level strategic analyses by Central Intelligence Agency analysts). The opportunities for miscommunication grow if these secondary readers lack shared understanding and opportunities to ask clarifying questions. Even when analysts have no obligation to serve these other readers, they have an interest in protecting the integrity of their work from others’ inadvertent or deliberate misinterpretations.
This chapter first looks at common obstacles to communication and then at two directions of communication in the intelligence community (IC): from analysts to customers and from customers to analysts. The last section considers issues in organizing for effective communication.
OBSTACLES TO EFFECTIVE COMMUNICATION
Misunderstandings between analysts and customers can arise from the same sources that complicate any communication. For example, people tend to exaggerate both how well they have understood others and how well others have understood them (for a review, see Arkes and Kajdasz, 2011). People unwittingly use jargon and everyday terms (e.g., risk, accountable, secret) in special ways, not realizing that others use them differently. People use verbal quantifiers (“unlikely,” “most,” “widespread”) for audiences that want numeric ones (Erev and Cohen, 1990). People guess wrong about what “goes without saying” for their communication partners, sometimes repeating the obvious, sometimes omitting vital facts and assumptions (e.g., Schwarz, 1999). People speak vaguely when they are not sure what to say, hoping that their partners or audience will add clarity. People resolve ambiguities in self-serving ways, hearing what they want to hear (for a review, see Spellman, 2011).
A well-known philosophical account (Grice, 1975) holds that good communications say things that are (a) relevant, (b) concise, (c) clear, and (d) truthful. Fulfilling these conditions can, however, be difficult unless the parties interact directly, allowing the trial-and-error interaction needed to identify and eliminate ambiguities. Without feedback, for example, individuals can unintentionally violate truthfulness (condition d) when their messages are not interpreted as intended. Achieving relevance and conciseness requires understanding what problems the customers are trying to solve and what facts they already know. Achieving that understanding requires assessing customers’ information needs in a disciplined way, then determining how well those needs have been met (see Fischhoff, 2011, for a review of research on communication).
COMMUNICATING ANALYTICAL RESULTS
Current and forward-looking intelligence analyses contain assessments about events and expectations about possible future events. Those assessments and expectations inevitably involve uncertainty. Analyses are conditional on assumptions about the world, which must be recognized in order to know when analyses need to be reviewed. In this section we briefly describe the research on each of these features as it applies to the IC’s communication needs. Some of that research, such as studies on how to communicate probabilities, is directly usable by the IC (e.g., Beyth-Marom, 1982). Other research is embedded in findings on research methods, which depend on successfully communicating with the individuals being studied: posing questions and interpreting answers (e.g., Ericsson and Simon, 1993; Murphy and Winkler, 1987; Poulton, 1994).
Expectations As discussed in Chapter 3, numeric probabilities convey expectations clearly, while verbal quantifiers (e.g., likely, rarely, a good chance) do not. Well-established probability elicitation methods can avoid known problems, such as overstating hard-to-express low probabilities, expressing probabilities inconsistently with formally equivalent questions, or saying “50” in the sense of 50-50 rather than as a numerical value. These procedures lead to probabilities that capture experts’ beliefs in clearly understood terms (Morgan and Henrion, 1990; O’Hagan et al., 2006; Woloshin et al., 1998).
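One practical consequence of preferring numeric probabilities is that interpretations become checkable. The sketch below illustrates such a check in Python; the phrase-to-range mapping is invented for this example and is not Beyth-Marom’s published data, and the function name is ours.

```python
# Illustrative check of whether a reader's numeric interpretation of a verbal
# probability phrase falls inside the range the analyst intended.
# The ranges below are invented for illustration only.

INTENDED_RANGES = {
    "remote": (0.0, 0.10),
    "unlikely": (0.10, 0.35),
    "even chance": (0.45, 0.55),
    "likely": (0.60, 0.85),
    "almost certain": (0.90, 1.0),
}

def interpretation_matches(phrase: str, reader_probability: float) -> bool:
    """Return True if the reader's numeric reading falls in the intended range."""
    low, high = INTENDED_RANGES[phrase]
    return low <= reader_probability <= high

print(interpretation_matches("likely", 0.7))   # → True (inside the range)
print(interpretation_matches("likely", 0.4))   # → False (a potential misreading)
```

A divergence like the second case is exactly the kind of misunderstanding that verbal quantifiers leave invisible and that numeric elicitation makes explicit.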
Events Intelligence analyses cannot be evaluated unless the assessments are clear enough that one could eventually know whether they were true (e.g., Iraq disbanded its nuclear program in 1991) or whether expected events have occurred (e.g., North Korea will test a long-range missile within 6 months). Even seemingly common terms (e.g., risk, safe sex) have been found to have multiple meanings that individuals often fail to realize or clarify (Fillenbaum and Rapoport, 1971; Fischhoff, 2009; Schwarz, 1999). Well-established research methods provide approaches that can be used in communicating analytic results (see Fischhoff, 2011, for descriptions and references). One such method for minimizing misunderstanding is the manipulation check, asking customers to interpret a given analysis in order to assess its consistency with the analysts’ intent (Mitchell and Jolley, 2009). A second such method is back translation, in which an independent analyst translates a customer’s interpretation, hoping to reproduce the meaning of the original analysis (Brislin, 1970). A third is the think-aloud protocol, in which customers say whatever comes to mind when reading an analysis in order to reveal unexpected misinterpretations (Ericsson and Simon, 1993).
Uncertainty Because no analysis is guaranteed, decision makers must understand the underlying uncertainties. How solid is the evidence? How reliable are the supporting theories? Different kinds of evidence have different expressions of uncertainty (Politi et al., 2007). For example, ranges can be used to express uncertainties in quantitative estimates (O’Hagan et al., 2006). Uncertainty about theories can be expressed in terms of the extent of controversies in the field and the maturity of its underlying science (Funtowicz and Ravetz, 1990). In medical research, the study design (e.g., randomized controlled trials, clinical observations) conveys important information about uncertainties (Schwartz et al., 2008). The probabilistic language used in National Intelligence Estimates (e.g., National Intelligence Council, 2007) invites empirical evaluation of the uncertainty understood by decision makers (see Figure 3-1).
Rationale Customers often need to know not only what analysts have concluded, but also why they have reached those conclusions. This knowledge affords customers deeper mastery of the analysis and the ability to explain their decisions to others. A scientific formulation of this challenge is ensuring that customers have accurate mental models of the key drivers of the events. Psychology has a long history of studying mental models in different domains (Bartlett, 1932; Ericsson and Simon, 1993; Furnham, 1988). Typically, such studies begin with think-aloud protocols asking people to explain their implicit theories, allowing communications to build on what they already know and fill critical gaps (Morgan et al., 2001).
Assumptions Analyses always depend on assumptions about underlying conditions. The communication process is not complete unless customers know what changes in the world, or beliefs about the world, should trigger redoing an analysis. These boundary conditions should make sense given the rationale of the analysis (explaining why the assumptions matter) and its uncertainty (providing the probability of their being violated). Stating these assumptions explicitly protects customers from having to deduce them and alerts customers to changes that warrant attention. Doctors’ warnings about the potential side effects of a prescribed drug are meant to play the same role.
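Stating assumptions explicitly also makes it possible to track them mechanically. The minimal sketch below illustrates the idea; the assumption names and observations are invented for this example.

```python
# Illustrative sketch: recording an analysis's explicit assumptions so that
# a change in the world can flag the analysis for re-examination.
# The assumption names and observed values are invented for illustration.

ASSUMPTIONS = {
    "sanctions_in_force": True,
    "regime_leadership_stable": True,
}

def violated_assumptions(observed: dict) -> list:
    """Return the assumptions whose observed value differs from the stated one."""
    return [name for name, assumed in ASSUMPTIONS.items()
            if observed.get(name, assumed) != assumed]

# A reported change violates one boundary condition of the analysis.
print(violated_assumptions({"sanctions_in_force": False}))  # → ['sanctions_in_force']
```

The point of the sketch is not the code itself but the discipline it represents: an analysis whose boundary conditions are written down can be monitored; one whose assumptions are implicit cannot.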
COMMUNICATING ANALYTICAL NEEDS
Communication from customers to experts (including analysts) has been studied far less than communication from experts to customers. Yet, failure in this direction can lead to analysts’ addressing the wrong problems as a result of not understanding customers’ needs.
The same basic behavioral and social processes complicate communication in this direction. One such factor is status differences, which make it difficult for analysts to ask clarifying questions. A second is the assumption of common knowledge, which leads experts to overestimate how similarly their customers see the world, as occurs in ineffective doctor-patient communication (Epstein et al., 2008).
From the perspective of decision theory (see Kaplan, 2011; McClelland, 2011), the most valuable information is that which will have the greatest effect on a decision maker’s choices or predictions. The field of decision analysis has methods for identifying those needs (e.g., Clemen and Reilly, 2002; von Winterfeldt and Edwards, 1986). Formal applications of these methods can be quite technical (e.g., optimal sampling of information for assessing the quality of products or the size of an oil reservoir). However, their logic applies to any situation in which there are limits to analysts’ ability to create information and customers’ ability to absorb it. The first step is sketching the customers’ decision tree and asking what might be missing (e.g., options that have escaped their notice, precise probability assessments, and challenges to unrecognized assumptions); for treatment of graphical analyses, see Clemen and Reilly (2002).
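The decision-analytic notion of information value can be made concrete with the standard expected value of perfect information (EVPI) calculation, which measures how much resolving an uncertainty could improve a decision. The scenario, payoffs, and probabilities below are invented for illustration; only the EVPI logic is standard (see, e.g., Clemen and Reilly, 2002).

```python
# Minimal sketch of expected value of perfect information (EVPI), a standard
# decision-analytic measure of how much an uncertainty matters to a choice.
# The scenario, payoffs, and probabilities are invented for illustration.

states = {"adversary_tests_missile": 0.3, "no_test": 0.7}

# Payoffs (in arbitrary utility units) for each action under each state.
payoffs = {
    "preposition_defenses": {"adversary_tests_missile": 80, "no_test": 40},
    "wait":                 {"adversary_tests_missile": 10, "no_test": 90},
}

def expected_value(action):
    """Expected payoff of an action under the current state probabilities."""
    return sum(p * payoffs[action][s] for s, p in states.items())

# Best the decision maker can do without further information.
ev_without = max(expected_value(a) for a in payoffs)

# With perfect information, the best action can be chosen in each state.
ev_with = sum(p * max(payoffs[a][s] for a in payoffs) for s, p in states.items())

evpi = ev_with - ev_without  # upper bound on the value of resolving the uncertainty
```

If an analysis cannot shift the expected outcome by more than this bound, its marginal value to that decision is limited, which is one disciplined way to prioritize among customers’ possible requests.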
ORGANIZATION AND EVALUATION
Most of the scientifically validated methods for improving communication can be implemented with modest expense and effort. They could be incorporated into routine training so that analysts have a better understanding of the challenges and pitfalls in communicating about analyses. The methods might even be taught to customers, perhaps during introductory briefings for new office holders. Some of the issues are already relatively well known from popularizations of the research (e.g., Ariely, 2008; Gawande, 2002; Thaler and Sunstein, 2008).
As with other types of organizations, there is no substitute for empirical evaluation of specific communications with actual customers. If a formal evaluation under these conditions is impossible, an informal one is likely to be better than nothing: for example, having someone uninvolved with an analysis write a summary and answer some manipulation checks, as a way of showing analysts how well their message has been understood. The intensive internal review, coordination, and approval processes used by the IC are designed to improve clarity and accuracy. However, the committee found no evidence on how these processes affect how well analyses are understood—and did hear concerns about the problems that can arise when too many people edit an analytic product.
Communication about technical issues has been addressed by several reports from the National Research Council (e.g., 1989, 1996) and other bodies (e.g., Canadian Standards Association, 1997; Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997). In addition to calling for the use of methods such as those cited here, these reports recommend organizational processes that ensure continuing communication between experts and customers in order to ensure the relevance and comprehensibility of analytical products. For example, the Food and Drug Administration (2009) recently issued a strategic communication plan that may provide a partial road map for other agencies that deal with sensitive information and have multiple audiences that scrutinize their actions.
Ariely, D. (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. New York: Harper.
Arkes, H., and J. Kajdasz. (2011). Intuitive theories of behavior as inputs to analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Bartlett, F.C. (1932). Remembering: A Study in Experimental and Social Psychology. 2nd edition. Cambridge, UK: Cambridge University Press.
Beyth-Marom, R. (1982). How probable is probable? A numerical translation of verbal probability expressions. Journal of Forecasting, 1(3), 257-269.
Brislin, R.W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185-216.
Canadian Standards Association. (1997). Risk Management: Guideline for Decision Makers (reaffirmed 2002). Etobicoke, Ontario, Canada: Canadian Standards Association.
Clemen, R.T., and R. Reilly. (2002). Making Hard Decisions: An Introduction to Decision Analysis. Belmont, CA: Duxbury.
Epstein, R.M., P. Franks, K. Fiscella, C.G. Shields, S.C. Meldrum, R.L. Kravitz, and P.R. Duberstein. (2008). Measuring patient-centered communication in patient-physician consultations: Theoretical and practical issues. Social Science and Medicine, 61(7), 1,516-1,528.
Erev, I., and B.L. Cohen. (1990). Verbal versus numerical probabilities: Efficiency, biases, and the preference paradox. Organizational Behavior and Human Decision Processes, 45(1), 1-18.
Ericsson, K.A., and H.A. Simon. (1993). Protocol Analysis: Verbal Reports as Data. Revised edition. Cambridge, MA: MIT Press.
Food and Drug Administration. (2009). FDA’s Strategic Plan for Risk Communication. Available: http://www.fda.gov/downloads/AboutFDA/…/Reports/UCM183683.pdf [June 2010].
Fillenbaum, S., and A. Rapoport. (1971). Studies in the Subjective Lexicon. New York: Academic Press.
Fischhoff, B. (2009). Risk perception and communication. In R. Detels, R. Beaglehole, M.A. Lansang, and M. Gulliford, eds., Oxford Textbook of Public Health. 5th ed. (pp. 940-952). Oxford, UK: Oxford University Press.
Fischhoff, B. (2011). Communicating about analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Furnham, A. (1988). Lay Theories: Everyday Understanding of Problems in the Social Sciences. London, UK: Whurr Publishers.
Funtowicz, S., and J. Ravetz. (1990). Uncertainty and Quality in Science for Policy. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Gawande, A. (2002). Complications: A Surgeon’s Notes on an Imperfect Science. New York: Picador.
Grice, H.P. (1975). Logic and conversation. In P. Cole and J.L. Morgan, eds., Syntax and Semantics, Vol. 3: Speech Acts (pp. 133-168). New York: Academic Press.
Kaplan, E. (2011). Operations research and intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
McClelland, G. (2011). Use of signal detection theory as a tool for enhancing performance and evaluating tradecraft in intelligence analysis. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Mitchell, M.L., and J. Jolley. (2009). Research Design Explained. Belmont, CA: Wadsworth Publishing.
Morgan, M.G., B. Fischhoff, A. Bostrom, and C. Atman. (2001). Risk Communication: A Mental Models Approach. New York: Cambridge University Press.
Morgan, M.G., and M. Henrion. (1990). Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. New York: Cambridge University Press.
Murphy, A.H., and R.L. Winkler. (1987). A general framework for forecast verification. Monthly Weather Review, 115(7), 1,330-1,338.
National Intelligence Council. (2007). Iran: Nuclear Intentions and Capabilities. Available: http://www.dni.gov/press_releases/20071203_release.pdf [June 2010].
National Research Council. (1989). Improving Risk Communication. Committee on Risk Perception and Communication. Commission on Physical Sciences, Mathematics, and Resources and Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
National Research Council. (1996). Understanding Risk: Informing Decisions in a Democratic Society. P.C. Stern and H.V. Fineberg, eds. Committee on Risk Characterization. Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
O’Hagan, A., C.E. Buck, A. Daneshkhah, J.R. Eiser, P.H. Garthwaite, D.J. Jenkinson, J.E. Oakley, and T. Rakow. (2006). Uncertain Judgements: Eliciting Expert Probabilities. Chichester, UK: John Wiley and Sons.
Politi, M.C., P.K.J. Han, and N. Col. (2007). Communicating the uncertainty of harms and benefits of medical interventions. Medical Decision Making, 27, 681-695.
Poulton, E.C. (1994). Behavioral Decision Theory: A New Approach. New York: Cambridge University Press.
Presidential/Congressional Commission on Risk Assessment and Risk Management. (1997). Available: http://www.riskworld.com/riskcommission/Default.html [June 2010].
Schwartz, L.M., S. Woloshin, and H.G. Welch. (2008). Know Your Chances: Understanding Health Statistics. Berkeley: University of California Press.
Schwarz, N. (1999). Self reports: How the questions shape the answers. American Psychologist, 54(2), 93-105.
Spellman, B. (2011). Individual reasoning. In National Research Council, Intelligence Analysis: Behavioral and Social Scientific Foundations. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, B. Fischhoff and C. Chauvin, eds. Board on Behavioral, Cognitive, and Sensory Sciences, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Thaler, R.H., and C.R. Sunstein. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Von Winterfeldt, D., and W. Edwards. (1986). Decision Analysis and Behavioral Research. New York: Cambridge University Press.
Woloshin, S., L.M. Schwartz, S. Byram, B. Fischhoff, and H.G. Welch. (1998). Scales for assessing perceptions of event probability: A validation study. Medical Decision Making, 14, 490-503.