To the human mind, certain things seem intuitively correct. The world seems flat and motionless; objects seem solid rather than composed of empty space, fields, and wave functions; space seems Euclidean and 3-dimensional rather than curved and 11-dimensional. Because scientists are equipped with human minds, they often take intuitive propositions for granted and import them—unexamined—into their scientific theories. Because they seem so self-evidently true, it can take centuries before these intuitive assumptions are questioned and, under the cumulative weight of evidence, discarded in favor of counterintuitive alternatives—a spinning Earth orbiting the sun, quantum mechanics, relativity.
For psychology and the cognitive sciences, the intuitive view of human intelligence and rationality—the blank-slate theory of the mind—may be just such a case of an intuition-fueled failure to grapple with evidence (Gallistel, 1990; Tooby and Cosmides, 1992; Cosmides and Tooby, 2001; Pinker, 2002). According to intuition, intelligence—almost by definition—seems to be the ability to reason successfully about almost any topic. If we can reason about any content, from cabbages to kings, it seems self-evident that intelligence must consist of inference procedures that operate uniformly regardless of the content domains they are applied to (such procedures are general-purpose, domain-general, and content-independent). Consulting such intuitions, logicians and mathematicians developed content-independent formal systems over the last two centuries that operate in exactly this way. Such explicit formalization then allowed computer scientists to show how reasoning could be automatically carried out by purely “mechanical” operations (whether electronically in a computer or by cellular interactions in the brain). Accordingly, cognitive scientists began searching for cognitive programs implementing logic (Wason and Johnson-Laird, 1972; Rips, 1994), Bayes’ rule (Gigerenzer and Murray, 1987), multiple regression (Rumelhart et al., 1986), and other normative methods—the same content-general inferential tools that scientists themselves use for discovering what is true (Gigerenzer and Murray, 1987). Others proposed simpler heuristics that are more fallible than canonical rules of inference [e.g., Gigerenzer et al. (1999), Kahneman (2003)], but most of these were domain-general as well.
Our inferential toolbox does appear to contain a few domain-general devices (Rode et al., 1999; Gallistel and Gibbon, 2000; Cosmides and Tooby, 2001), but there are strong reasons to suspect that these must be supplemented with domain-specific elements as well. Why? To begin with, much—perhaps most—human reasoning diverges wildly from what would be observed if reasoning were based on canonical formal methods. Worse, if adherence to content-independent inferential methods constituted intelligence, then equipping computers with programs implement-