Committee Conclusion: Cognitive biases, such as confirmation bias, anchoring, overconfidence, sunk cost, and availability, appear broadly relevant to the military: findings from analyses of large-scale disasters and from the broader literature on cognitive biases show how irrational decision making can result from a failure to reflect on choices. Research on the tendency to engage in cognitive biases as a stable individual-differences measure is limited, and measurement challenges must be resolved before operational cognitive bias assessment could be implemented. The conceptual relevance of this topic, paired with the limited individual-differences research to date, leads the committee to conclude that cognitive biases merit inclusion in a program of basic research with the long-term goal of improving the Army's enlisted accession system.
Decision biases or cognitive biases refer to ways of thinking, or thought processes, that produce errors in judgment or decision making, or at least departures from normative rules or standards (Gilovich and Griffin, 2002). A prevailing model is that cognitive biases result from the use of thinking shortcuts, or heuristics, when such shortcuts lead to wrong decisions (Tversky and Kahneman, 1974). Not all heuristics lead to wrong or poor decisions; in fact, they can lead to good decisions in many contexts, and in some contexts they can lead to better decisions than more deliberate approaches do (e.g., Gigerenzer et al., 2011; Vickrey et al., 2010). Nevertheless, in many circumstances cognitive biases can lead to poor decisions. In such cases, the thinking associated
with cognitive biases is often assumed to be fast, nonconscious, automatic, undemanding of working memory resources, and independent of cognitive ability. It is sometimes referred to as System 1 thinking, in contrast to the more careful, controlled, memory-dependent, rule-based, and deliberate System 2 thinking, which is correlated with cognitive ability (Evans and Stanovich, 2013; Kahneman, 2011).
An example of the kind of irrationality in thinking and judgment produced by cognitive biases was described by Ariely (2008). He conducted an experiment based on a magazine advertising campaign offering a choice of subscriptions: $59 for Internet-only, $125 for print-only, and $125 for Internet-plus-print. The last option seems like the best deal because it appears to offer the Internet access for free, and most people in the experiment took it. But when the print-only option was removed, people were twice as likely to choose the Internet-only option. Ariely's experiment demonstrates how decision making is influenced by the relative advantages of one option over another. The print-only option was a decoy, presented only to make the $125 combination offer more attractive. No one chose the print-only option, but it affected people's choices between the other two options. In many military decision-making contexts (e.g., how to approach a target, who is judged to be friend or foe), cognitive biases may influence the quality of decisions and their outcomes.
Disasters and Tragedies
Many reports of major disasters invoke cognitive biases as at least partly responsible for errors in judgment that may have led to the disaster. For example, in 1996, eight mountain climbers died on Mt. Everest when a snowstorm caught them near the summit. Roberto (2002) reviewed accounts from surviving climbers and suggested that three cognitive biases may be partly responsible for the tragedy. The sunk cost effect may have occurred when climbers insisted on continuing to the summit after expending much time and energy on the ascent. The escalated commitment to get to the top meant that insufficient resources were left for a safe descent during the storm. Second, two expedition leaders may have been overconfident in their skills, biasing their judgments and risk assessments to bring their clients to the summit. Third, past expeditions were conducted under good weather conditions. A recency bias may have contributed to the leaders’ overconfidence and underestimation of the dangers from a storm. Unfortunately, the poor decisions by these two leaders led to their deaths and the deaths of three of their team members.
Consider the Iran Air Flight 655 incident, in which a civilian passenger flight was shot down by surface-to-air missiles fired from the USS Vincennes over the Persian Gulf, killing all 290 people on board. The commanding officer acted on the incorrect belief that the Iranian Airbus was an F-14 fighter from the Iranian Air Force, a belief developed in a high-pressure situation with complicated, confusing, and contradictory information to be interpreted and reconciled within minutes (U.S. Department of Defense, 1988). As is often the case with such tragedies, many factors contributed to the mistake (U.S. Department of Defense, 1988). However, one possible contributing cause is confirmation bias: the context of high tensions and prior incidents in the area (including the 1987 incident in which an Iraqi jet, determined to be nonhostile, fired upon the USS Stark, killing 37 sailors and injuring 21 more) encouraged confirmatory thinking, such that evidence of a military aircraft was overweighted relative to the disconfirmatory evidence of a civilian aircraft.
Confirmation biases negatively affect decisions when individuals interpret information, including conflicting evidence, as confirmation of previously held beliefs. This is a tendency of special concern in situations where information is incomplete or unclear and critical decisions must be made under high levels of uncertainty, such as decisions that must be made in combat or by intelligence analysts (see Spellman, 2011, for further discussion of individual reasoning applied to the tasks of intelligence analysts). The detrimental effects of confirmation bias are well known to the Intelligence Community, and many tools and techniques have been developed to assist intelligence analysts in avoiding them (Heuer, 1999; Heuer and Pherson, 2011).
Cognitive Biases in Everyday Reasoning and Decision Making
Besides the role cognitive biases might play in well-known tragedies and disasters, they may routinely enter into everyday decision making and may be particularly important in military contexts. For example, soldiers are often put in the position of having to judge others' motives, such as those of host-nation citizens or international coalition military members. Cognitive biases such as projection (assuming others share one's own feelings, attitudes, and values) can distort such judgments, and judging whether another person is friend or foe can be influenced by various cognitive biases. People are often poor judges of other people and situations; for example, they often assume that someone or something can be adequately categorized on the basis of a single feature, such as an article of clothing or a head covering. An example is the murder of Balbir Singh Sodhi, a Sikh gas station owner in Mesa, Arizona,
shortly after the September 11, 2001, terrorist attacks in the United States (Lewin, 2001). People may also jump to a conclusion about someone else's motives and assume that the other person is acting because of "the way they are" (their culture or personality) without taking into account a more local and specific reason for the action. This is an example of the fundamental attribution error: the tendency to attribute others' mistakes to something about them and our own mistakes to something external to ourselves (Ross, 1977). Cognitive biases can also creep into ratings: first impressions of something or someone might lead to a hard-to-alter belief about that thing or person through confirmation bias, the fundamental attribution error, anchoring (the tendency to place undue weight on the first pieces of information received), or representativeness (judging by perceived similarity to a prototype; see Tversky and Kahneman, 1974). Cognitive biases are often invoked in explanations of failures "to connect the dots" and of failures of sensitivity to cultural differences.
These examples suggest that cognitive biases operate in everyday reasoning and decision making, in addition to playing a role in life-and-death disasters. It is thus useful to explore the nature of cognitive biases and individual differences in susceptibility or resistance to them. Salient issues include whether susceptibility to cognitive biases can be mitigated, such as through training or through procedures like the structured analytic techniques employed by intelligence analysts (Heuer and Pherson, 2011), and the degree to which susceptibility is related to other human performance factors such as cognitive ability, working memory, executive functioning, and personality (see Chapter 2 for discussion of some of these factors).
There have been several systematic attempts to catalog cognitive biases. To get a sense of the diversity of cognitive biases that have appeared in the literature, it is useful to note that Wikipedia1 lists 92 "decision-making, belief, and behavioral biases," 27 "social biases," and 48 "memory errors and biases." Not all of these are distinct, and some may not be cognitive biases at all (e.g., the evidence for some of the listed biases, such as the bizarreness effect, is inconclusive), but the list is a useful starting point.
1See “List of Cognitive Biases.” Available: http://en.wikipedia.org/wiki/List_of_cognitive_biases [January 2015].
Cognitive Biases and Cognitive Ability
A question that arises is whether tasks that measure cognitive biases are simply measuring general cognitive ability. Stanovich and West (1998) investigated performance on tasks representing 28 cognitive biases (some based on prior literature and some first identified for the study) and found that roughly half the tasks showed no correlation with general cognitive ability. The denominator neglect task is an example of one that did correlate: participants are told they will win money by drawing a black marble from one of two trays of mixed white and black marbles, one with 1 black marble among 10 (a 10 percent chance of winning) and one with 8 black marbles among 100 (an 8 percent chance). Participants tend to choose the 100-marble tray despite its worse odds, apparently interpreting 8 black marbles as 8 chances of winning rather than attending to the proportions. Participants with higher general cognitive ability, however, tended to choose the tray with the better odds, demonstrating resistance to this bias. Another ability-related task is a probabilistic reasoning task in which respondents predict the number on the down side of each of 10 dealt cards after being told that 7 cards have the number 1 and 3 cards have the number 2 on the down side (Stanovich and West, 1998). Most participants try to predict which 7 cards are 1s and which 3 are 2s, even though the winning strategy is to predict a 1 for all 10 cards. Participants with higher cognitive ability are less likely to make this error.
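The arithmetic behind these two tasks can be made explicit. The sketch below is illustrative only; the probabilities come from the task descriptions above, and the code itself is not part of the original study:

```python
# Worked arithmetic for the two Stanovich and West (1998) tasks described
# above. The numbers come from the task descriptions; the code is an
# illustrative sketch, not material from the study.

# Denominator neglect: which tray gives the better chance of a black marble?
p_small_tray = 1 / 10    # 1 black marble among 10 -> 10 percent chance
p_large_tray = 8 / 100   # 8 black marbles among 100 -> 8 percent chance
assert p_small_tray > p_large_tray   # the 10-marble tray is the rational pick

# Probabilistic reasoning: 7 of 10 face-down cards show a 1, and 3 show a 2.
# Normative strategy: predict "1" for every card -> exactly 7 correct.
expected_all_ones = 7.0
# Typical strategy: guess which 7 positions are 1s and which 3 are 2s.
# With no information about positions, each "1" guess is correct with
# probability 0.7 and each "2" guess with probability 0.3.
expected_matching = 7 * 0.7 + 3 * 0.3   # about 5.8 correct on average
assert expected_all_ones > expected_matching
print(expected_all_ones, round(expected_matching, 1))
```

Predicting a 1 for every card guarantees 7 correct, while trying to match the distribution yields only about 5.8 correct on average, which is why the all-1s prediction is the winning strategy.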
An example of a task in which the cognitive bias is not correlated with cognitive ability is the anchoring task (Stanovich and West, 1998). In this task, participants are asked two questions, such as "Do you think there are more or fewer than 65 African countries in the United Nations?" followed by "How many African countries do you think are in the United Nations?" For half the participants, the number "65" in the first question was replaced by "12." Among those given "65," the mean response to the second question was 45.2; among those given "12," it was 14.4. This discrepancy illustrates the anchoring effect: the information presented first heavily influenced the later estimate. In this test, there was no correlation between SAT score and the size of the estimate in response to the second question.
Another example of a task where the cognitive bias does not correlate with cognitive ability is the sunk cost task (Stanovich and West, 1998). Participants say that they would be more willing to drive an extra 10 minutes to save $10 on a $30 calculator than they would to save $10 on a $250 jacket, despite the fact that in either case the $10 savings is exactly the same.
The difference in willingness to drive an extra 10 minutes for the two items did not differ between groups with high versus low SAT scores. Another task for which there is no correlation with SAT scores is the "myside bias" task, related to confirmation bias: participants, regardless of SAT scores, more strongly favor the United States banning an unsafe German car than Germany banning an equally unsafe American car (Stanovich et al., 2013). Although biases are not necessarily detrimental (myside bias, for example, may protect self-interests), the tendency toward them and their correlation with cognitive ability are important to understand in relation to performance.
An Individual-Differences Framework for Cognitive Biases
Oreg and Bayazit (2009) suggested that an individual-differences perspective could provide a theoretical framework for categorizing biases and could help account for the variance in predicting judgment and decision-making outcomes. Their framework suggests three categories of biases:
- Simplification biases are motivated by the desire to comprehend reality, reflect information processes, and are related to cognitive ability and cognitive styles. Examples are denominator neglect, or paying more attention to the number of times something has happened than to the number of opportunities for it to happen, as when 1,286 cancer cases out of 10,000 is judged to indicate a higher likelihood of cancer than 24.14 cases out of 100, even though 12.86 percent is less than 24.14 percent (Yamagishi, 1997); and probability matching, as when a subject told that a card deck contains 60 percent red and 40 percent black cards predicts "red" on 60 percent of random draws rather than on every draw.
- Verification biases are motivated by the desire to achieve consistency, reflect self-perception processes, and are related to core self-evaluation (which is a combination of self-efficacy [belief in one’s ability to perform a task successfully] and locus of control [tendency to attribute successes and failures to one’s own efforts and abilities rather than to external factors]). Examples are false consensus (believing others think like oneself) and learned helplessness (not acting due to prior experiences in which actions have not helped, even when actions would help in the current situation).
- Regulation biases are motivated by the desire to approach pleasure and avoid pain, reflect decision-making processes, and are related to a person's approach/avoidance temperament. Examples are framing bias (being differentially sensitive to loss-and-gain framing) and endowment effects (once something is owned, its perceived value increases). Also see Chapter 6 for a discussion of individual differences associated with the ability to function under circumstances of high emotion or "hot cognition," including defensive reactivity, emotion regulation, and performance under stress.
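The cost of probability matching in the card-deck example above is easy to quantify. A small illustrative sketch follows; the 60/40 split is from the example, and independence of prediction and draw is an assumption of the calculation:

```python
# Expected accuracy in the probability-matching example above: a deck is
# 60 percent red and 40 percent black, and each draw is random.
p_red = 0.6

# Always predicting "red" is correct whenever a red card is drawn.
accuracy_maximizing = p_red                                    # 0.60

# Matching the base rate (predicting "red" on a random 60 percent of
# trials, independently of the actual draw) is correct only when the
# prediction happens to agree with the card drawn.
accuracy_matching = p_red * p_red + (1 - p_red) * (1 - p_red)  # 0.52

assert accuracy_maximizing > accuracy_matching
print(accuracy_maximizing, round(accuracy_matching, 2))
```

Matching the base rate sacrifices roughly 8 percentage points of expected accuracy relative to always predicting the majority color, which is why probability matching counts as a simplification bias.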
Although this summary and framework are primarily conceptual rather than empirical, further research along these lines could validate or improve the scheme and promote advances in understanding how cognitive biases can be integrated with research on cognitive abilities and personality factors.
In addition to such trait factors as correlates of cognitive bias susceptibility, state factors may also affect decision making and susceptibility to cognitive biases. These include physical fatigue, sleeplessness, and emotional fatigue (or self-control depletion) (Muraven and Baumeister, 2000). For example, coping with stress, regulating negative affect, and resisting temptations have been found to impair subsequent self-control. The proposed explanation is that self-control is a limited resource, analogous to a muscle, that fatigues with continued exertion. If this is true, then after exercising self-control one might be more susceptible to inappropriate System 1 thinking, and hence to cognitive biases.
With respect to both trait and state cognitive bias factors, it seems reasonable that within an individual-differences framework their relationships could be fruitfully explored using potentially more powerful explanatory variables from an information-processing perspective, such as working memory, executive attention, and inhibitory control. These information-processing variables have long been known to correlate with general cognitive ability (e.g., Engle, 2002; Kyllonen and Christal, 1990), which is discussed in Chapter 2.
In 2011, the Intelligence Advanced Research Projects Activity (IARPA; a research entity within the Office of the Director of National Intelligence) announced the Sirius Program, whose goal was “to create experimental Serious Games to train participants and measure their proficiency in recognizing and mitigating the cognitive biases that commonly affect all types of intelligence analysis.”2 The program identified the following six cognitive biases for examination:
2Information is from the Sirius Program website, IARPA, Office of the Director of National Intelligence. Available: http://www.iarpa.gov/index.php/researchprograms/sirius/baa [January 2015].
- Confirmation bias (interpreting events to support prior conclusions);
- Fundamental attribution error (attributing events to others’ personality rather than to circumstances);
- Bias blind spot (not being aware of one’s own biases);
- Anchoring bias (overreliance on a single piece of information);
- Representativeness bias (ignoring the base rate when categorizing or judging a likelihood of an event); and
- Projection bias (attributing to others one’s own beliefs, feelings, or values).
The significance of the IARPA project for this report is twofold. First, the fact that bias mitigation strategies are being investigated in the Intelligence Community indicates the importance that community assigns to cognitive biases in judgment and decision making, and in intelligence analysis in particular. Second, the identification of six specific cognitive biases suggests that these biases may be particularly important for intelligence analysts, a career field for civilian employees of the Intelligence Community as well as a military occupational specialty, and they may therefore warrant special attention in a program of research.
A number of important issues could be addressed in a broad program of research on cognitive biases. A key issue concerns individual differences and individual-level measurement: how can an individual's inherent susceptibility or resistance to cognitive biases best be measured? Much of the literature documents cognitive bias phenomena but does not develop individual measures of susceptibility to them. A notable exception is the work on the Cognitive Reflection Test (Toplak et al., 2014).
This distinction is important because many experimental designs in the cognitive bias literature operate differently depending on whether the bias manipulation is administered between or within groups. Consider, for example, the conjunction fallacy: judging it more likely that someone is a member of both group A and group B than a member of group A alone, after hearing a description that highlights group B traits. In the Linda Problem (Tversky and Kahneman, 1982, 1983), participants were told the following:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
(Tversky and Kahneman, 1983, p. 297)
They then were asked “to check which of two alternatives was more probable”:
Linda is a bank teller.
Linda is a bank teller and is active in the feminist movement.
(Tversky and Kahneman, 1983, p. 299)
The finding was that 85 percent of the undergraduate respondents said that alternative 2 was more likely. According to Kahneman (2011), some studies did not show respondents both alternatives, as was done in the example above, but instead showed only one, either "bank teller" or "bank teller and is active in the feminist movement"; that is, a between-persons rather than a within-persons design was used. In the between-persons design, the preference for the conjunction was even more pronounced.
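What makes alternative 2 a fallacy is the conjunction rule: the probability of a conjunction can never exceed the probability of either conjunct. A minimal numerical check of the rule follows; the grid of probabilities is hypothetical, and independence is assumed only to make the joint probability computable (the inequality itself holds for any joint distribution):

```python
# Check the conjunction rule P(A and B) <= P(A) over a grid of
# hypothetical probabilities, assuming independent events so that the
# joint probability is simply the product of the marginals.
for i in range(11):
    for j in range(11):
        p_a, p_b = i / 10, j / 10
        p_both = p_a * p_b          # P(A and B) under independence
        assert p_both <= p_a        # conjunction rule holds for A
        assert p_both <= p_b        # ... and for B
print("conjunction rule holds at every grid point")
```

"Bank teller and active in the feminist movement" corresponds to the joint event, so no description of Linda, however representative of a feminist, can rationally make it more probable than "bank teller" alone.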
Tests of anchoring effects operate similarly: priming someone to guess high by presenting them with a high number is typically compared with priming someone else to guess low by presenting this different person with a low number. Low and high priming on the same person can be operationally difficult to test, due to carryover effects.
Much of cognitive bias research is based on a difference between conditions: one in which the bias is invoked, and one in which it is not. The well-known challenges of using difference scores in within-person designs are therefore a central part of cognitive bias measurement. In general, very little systematic research has been done on how best to measure the full range of cognitive biases.
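A classical-test-theory result illustrates why such difference scores are troublesome as individual-difference measures: when the two condition scores are highly correlated, as biased and unbiased versions of the same task tend to be, the reliability of their difference collapses. The sketch below uses the standard equal-variance formula; the example reliabilities (0.80) and correlation (0.70) are hypothetical.

```python
# Reliability of a difference score D = X - Y under classical test theory,
# assuming the two condition scores have equal variances:
#   rho_D = (rho_xx + rho_yy - 2 * rho_xy) / (2 - 2 * rho_xy)
# where rho_xx and rho_yy are the reliabilities of X and Y, and rho_xy is
# the correlation between X and Y.

def difference_score_reliability(rho_xx: float, rho_yy: float, rho_xy: float) -> float:
    """Reliability of X - Y, equal-variance case (illustrative inputs)."""
    return (rho_xx + rho_yy - 2 * rho_xy) / (2 - 2 * rho_xy)

# Two reasonably reliable condition scores (0.80 each) that correlate 0.70:
rel = difference_score_reliability(0.80, 0.80, 0.70)
print(round(rel, 2))   # the difference score's reliability is only about 0.33
```

Even with two respectably reliable condition scores, the bias score computed as their difference can be far too unreliable to support individual-level selection decisions, which is part of the measurement challenge the text describes.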
Another issue concerns the theoretical structure of cognitive biases. It would be useful to have a general, empirically based taxonomy of cognitive biases. The effort by Oreg and Bayazit (2009) is a start, but considerable additional empirical work is required to develop such a taxonomy. A more systematic taxonomy might lead, for instance, to a broader theoretical framework that enables predictions of an individual's susceptibility to biases based on both individual and situational characteristics. It might also help answer questions about how cognitive biases are related. Is bias A simply a particular instance of bias B, or the result of a very different set of mental processes? And, even more important for military applications, can mitigation training on bias A reduce susceptibility to both biases A and B?
Another key issue for theoretical explorations of cognitive biases has to
do with their usefulness. Cognitive biases represent the application of thinking heuristics to problem solving, and such heuristics are often useful shortcuts that enable faster decision making with less working memory burden. A good example is in medicine, where physicians routinely use heuristics in their practice to sift through extensive information and formulate diagnoses (Vickrey et al., 2010). Heuristics are therefore not always ill-advised and do not always lead to improper decision making (Gigerenzer et al., 2011). Key research issues are when heuristics are useful, when their use should be curtailed, and what the training implications are.
These research questions lead to a third key area for research, which concerns the effectiveness of training. If cognitive bias susceptibility is a relatively stable and enduring characteristic of individuals, a habitual way of thinking, then it might make sense at least for certain occupations to select out individuals with high susceptibility to cognitive biases. But if cognitive biases can relatively easily be effectively mitigated through training, then there may be less need to select for resistance to them. Currently far too little is known about the degree to which cognitive bias training is effective, how much is needed, and the degree to which training transfers to mitigation of related and of unrelated cognitive biases. Even if training is not effective, it could still be the case that system or job aids, such as the structured analytic techniques advocated within the Intelligence Community (Heuer and Pherson, 2011), can mitigate susceptibility to cognitive biases. For example, software might serve as a workaround to cognitive biases, but even less is known about this topic than about the preceding issues.
The U.S. Army Research Institute for the Behavioral and Social Sciences should support research to understand cognitive biases and heuristics, including but not limited to the following topics:
- Research should be conducted to ascertain whether various cognitive biases and heuristics are accounted for by common bias susceptibility factors or whether various biases reflect distinct constructs (e.g., confirmation bias, fundamental attribution error).
- A battery of cognitive bias and heuristics assessments should be developed for purposes of validity investigations.
- Research should be conducted to examine the cognitive, personality, and experiential correlates of susceptibility to cognitive biases. This should include both traditional measures of personality and cognitive abilities (e.g., the Armed Services Vocational Aptitude Battery), and information-processing measures of factors such as working memory, executive attention, and inhibitory control.
- Research should be conducted to identify contextual factors, that is, situations in which cognitive biases and heuristics may affect thought and action, and then to develop measures of performance in such situations, for use as criteria in studies aimed at understanding how cognitive biases affect performance. The research should consider the differentiating characteristics of contexts that determine when the use of heuristics for “fast and frugal” decision making might be beneficial, and when such thinking is better thought of as biased and resulting in poor decision making.
Ariely, D. (2008). Predictably Irrational: The Hidden Forces that Shape Our Decisions. New York: Harper Collins.
Engle, R. (2002). Working memory capacity as executive attention. Current Directions in Psychological Science, 11(1):19–23.
Evans, J.St.B., and K.E. Stanovich. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3):223–241.
Gigerenzer, G., R. Hertwig, and T. Pachur, Eds. (2011). Heuristics: The Foundations of Adaptive Behavior. New York: Oxford University Press.
Gilovich, T., and D.W. Griffin. (2002). Heuristics and biases: Then and now. In T. Gilovich, D. Griffin, and D. Kahneman, Eds., Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 1–18). Cambridge, UK: Cambridge University Press.
Heuer, R.J., Jr. (1999). The Psychology of Intelligence Analysis. Center for the Study of Intelligence, Central Intelligence Agency. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/ [January 2015].
Heuer R.J., Jr., and R.H. Pherson. (2011). Structured Analytic Techniques for Intelligence Analysis. Washington, DC: CQ Press.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus, and Giroux.
Kyllonen, P.C., and R.E. Christal. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14(4):389–433.
Lewin, T. (2001). Sikh owner of gas station is fatally shot in rampage. New York Times, September 17, 2001. Available: http://www.nytimes.com/2001/09/17/us/sikh-owner-of-gas-station-is-fatally-shot-in-rampage.html [January 2015].
Muraven, M., and R.F. Baumeister. (2000). Self-regulation and depletion of limited resources: Does self-control resemble a muscle? Psychological Bulletin, 126(2):247–259.
Oreg, S., and M. Bayazit. (2009). Prone to bias: Development of a bias taxonomy from an individual differences perspective. Review of General Psychology, 13(3):175–193.
Roberto, M.A. (2002). Lessons from Everest: The interaction of cognitive bias, psychological safety, and system complexity. California Management Review, 45(1):136–158.
Ross, L. (1977). The intuitive psychologist and his shortcomings. In L. Berkowitz, Ed., Advances in Experimental Social Psychology, vol. 10. (pp.173–220). New York: Academic Press.
Spellman, B.A. (2011). Individual reasoning. In B. Fischhoff and C. Chauvin, Eds., Intelligence Analysis: Behavioral and Social Scientific Foundations (pp. 117–142). Washington, DC: The National Academies Press.
Stanovich, K.E., and R.F. West. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127(2):161–188.
Stanovich, K.E., R.F. West, and M.E. Toplak. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4):259–264.
Toplak, M.E., R.F. West, and K.E. Stanovich. (2014). Assessing miserly processing: An expansion of the Cognitive Reflection Test. Thinking and Reasoning, 20(2):147–168.
Tversky, A., and D. Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131.
Tversky, A., and D. Kahneman. (1982). Judgments of and by representativeness. In D. Kahneman, P. Slovic, and A. Tversky, Eds., Judgment Under Uncertainty: Heuristics and Biases (pp. 84–98). Cambridge, UK: Cambridge University Press.
Tversky, A., and D. Kahneman. (1983). Extension versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4):293–315.
U.S. Department of Defense. (1988). Investigation Report: Formal Investigation into the Circumstances Surrounding the Downing of Iran Air Flight 655 on 3 July 1988. Washington, DC: U.S. Department of Defense. Available: http://www.dod.mil/pubs/foi/International_security_affairs/other/172.pdf [September 2014].
Vickrey, B.G., M.A. Samuels, and A.H. Ropper. (2010). How neurologists think: A cognitive psychology perspective on missed diagnoses. Annals of Neurology, 67(4):425–433.
Yamagishi, K. (1997). When a 12.86% mortality is more dangerous than 24.14%: Implications for risk communication. Applied Cognitive Psychology, 11(6):495–506.