mation fairly and unduly difficult to conclude that the initial hypotheses were wrong. This is often manifested by what is known as “anchoring,” the well-known tendency to rely too heavily on one piece of information when making decisions. Often, the piece of information that is weighted disproportionately is one of the very first ones encountered. One tends to seek closure and to view the initial part of an investigation as a “sunk cost” that would be wasted if overturned.

Another common cognitive bias is the tendency to see patterns that do not actually exist. This bias is related to our tendency to underestimate the amount of complexity that can really exist in nature. Both tendencies can lead one to formulate overly simple models of reality and thus to read too much significance into coincidences and surprises. More generally, human intuition is not a good substitute for careful reasoning when probabilities are concerned. As an example, consider a problem commonly posed in beginning statistics classes: How many people must be in a room before there is a 50 percent probability that at least two will share a common birthday? Intuition might suggest a large number, perhaps over 100, but the actual answer is 23. This is not difficult to prove through careful logic, but intuition is likely to be misleading.
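The birthday calculation can be checked directly rather than by intuition. The following minimal sketch (function name ours; it assumes 365 equally likely birthdays and ignores leap years) computes the complement, the probability that all n birthdays are distinct, and subtracts it from 1:

```python
def shared_birthday_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays and no leap years."""
    p_all_distinct = 1.0
    for k in range(n):
        # kth person must avoid the k birthdays already taken
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct

if __name__ == "__main__":
    print(f"n=22: {shared_birthday_probability(22):.4f}")  # just under 0.5
    print(f"n=23: {shared_birthday_probability(23):.4f}")  # just over 0.5
```

Running it shows the probability first exceeds 50 percent at n = 23, confirming the counterintuitive answer given above.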

All of these sources of bias are well known in science, and a great deal of effort has been devoted to understanding and mitigating them. The goal is to make scientific investigations as objective as possible, so that the results do not depend on the investigator. Certain fields of science (most notably, biopharmaceutical clinical trials of treatment protocols and drugs) have developed practices such as double-blind tests and independent (blind) verification to minimize the impact of biases. Additionally, science seeks to publish its discoveries, findings, and conclusions so that they are subjected to independent peer review; this enables others to study biases that may exist in the investigative method or to attempt to replicate unexpected results. Avoiding, or compensating for, bias is an important task. Even fields with well-established protocols for minimizing the effects of bias can still bear improvement. For example, a recent working paper15 has raised questions about the way cognitive dissonance has been studied since 1956. Although these results must be considered preliminary because the paper has yet to be published, they do demonstrate that continual vigilance is needed. Research has been sparse on the important topic of cognitive biases in forensic science—both their effects and methods for minimizing them.16

15 M.K. Chen. 2008. Rationalization and Cognitive Dissonance: Do Choices Affect or Reflect Preferences? Available at www.som.yale.edu/Faculty/keith.chen/papers/CogDisPaper.pdf.

16 See, e.g., I.E. Dror, D. Charlton, and A.E. Peron. 2006. Contextual information renders experts vulnerable to making erroneous identifications. Forensic Science International 156:74-78; I.E. Dror, A. Peron, S. Hind, and D. Charlton. 2005. When emotions get the better of us:



Copyright © National Academy of Sciences. All rights reserved.