On September 22-23, 2009, the Board on Behavioral, Cognitive, and Sensory Sciences of the National Research Council held a workshop on the field evaluation of behavioral and cognitive sciences–based methods and tools for use in the areas of intelligence and counterintelligence.1 Broadly speaking, the purpose of the workshop was to discuss the best ways to take methods and tools from behavioral science and apply them to work in intelligence operations. More specifically, the workshop focused on the issue of field evaluation—the testing of these methods and tools in the context in which they will be used in order to determine whether they are effective in real-world settings.
This report is a summary and synthesis of the two days of presentations and discussions that took place during the workshop.2 The workshop participants included the members of the committee that planned the workshop, along with invited speakers and other participants, including experts from areas related to the behavioral sciences and from the intelligence community. The goal of the workshop was not to provide specific recommendations but to offer some insight—in large part through specific examples taken from other fields—into the sorts of issues that surround the area of field evaluation. The discussions covered such ground as the obstacles to field evaluation of behavioral science tools and methods, the importance of field evaluation, and various lessons learned from experience with field evaluation in other areas.

1 For ease of reading, the phrase “intelligence and counterintelligence” is not repeated throughout the summary. Such terms as “intelligence community” and “intelligence operations” are intended to include both intelligence and counterintelligence.

2 Presentations from the workshop are available at: http://nationalacademies.org/bbcss/Field_Evaluation_Workshop_Presentations.html.
It is important to be specific about the nature of this report, which documents the information presented in the workshop presentations and discussions. Its purpose is to lay out the key ideas that emerged from the workshop, and it should be viewed as an initial step in examining the research and applying it in specific policy circumstances. The report is confined to the material presented by the workshop speakers and participants. Neither the workshop nor this summary is intended as a comprehensive review of what is known about the topic, although together they are a general reflection of the literature. The presentations and discussions were limited by the time available for the workshop. A more comprehensive review and synthesis of relevant research knowledge will have to await further work.
This report was prepared by a rapporteur and does not represent findings or recommendations that can be attributed to the planning committee. Indeed, the report summarizes views expressed by workshop participants, and the committee is responsible only for its overall quality and accuracy as a record of what transpired at the workshop. Also, the workshop was not designed to generate consensus conclusions or recommendations but focused instead on the identification of ideas, themes, and considerations that contribute to understanding the current state of field evaluation of behavioral and cognitive sciences–based methods and tools for use in the areas of intelligence and counterintelligence.
To fully appreciate the workshop, the reader needs two important bits of context. The first is the relationship between the behavioral sciences and the intelligence community and, in particular, what the intelligence community has to gain from establishing a close relationship with the community of behavioral scientists. The second is the current urgency to improve the performance and capabilities of the intelligence community.
THE BEHAVIORAL SCIENCES AND THE INTELLIGENCE COMMUNITY
In one of the workshop presentations, David Mandel, a senior defense scientist at Defence Research and Development Canada (DRDC), discussed the ways in which the behavioral sciences can benefit intelligence analysis and why it is important for the intelligence community to build a partnership with the behavioral sciences community.
First, however, Mandel offered a working definition of behavioral science: it is science aimed at understanding human behavior in a broad
sense, including both the causes and the consequences of that behavior. As such, it includes a variety of scientific fields, such as psychology, sociology, anthropology, political science, economics, and, on the biological side, the neurosciences. Although traditionally these fields have been seen as separate areas of science, increasingly they have come to overlap and intersect, to the point that behavioral science is more of a continuum than a collection of independent fields.
The intelligence community has long relied on science and technology for insights and techniques, Mandel noted, so one might wonder why it is necessary to talk about the importance of strengthening the relationship between the intelligence community and the broad community of behavioral scientists. One important reason, he said, is that there are a number of factors that tend to weaken the relationship between the two communities and make analysts less likely to take advantage of what the behavioral sciences can offer.
First, Mandel said, there is a natural inclination among most people—including those in the intelligence community—to react poorly to “scholarly verdicts that deal with issues such as the quality of their judgment and decision making, their susceptibility to irrational biases, their use of suboptimal heuristics, and overreliance on nondiagnostic information.” Like most people, experts have the sense that they are competent. Psychological research shows that most people believe themselves to be better than average at what they do. Thus, Mandel said, experts are prone to challenge conclusions offered by behavioral scientists with their own knowledge gained from personal experience and, furthermore, to believe that such a challenge is completely legitimate. This is a fundamental problem that behavioral scientists face in making contributions to any practitioner community, Mandel said, “Their research is very easily disregarded on the basis of intuition and common sense.”
A second reason that analysts tend to disregard lessons from behavioral science is that it is seen as being “soft” science. Thus its knowledge is considered to be less objective or trustworthy than knowledge generated by the “hard” sciences and technology, such as satellite imaging or electronic eavesdropping. Although that attitude is common in the intelligence community, Mandel cautioned, it is misguided and underestimates both the value and the analytical power of behavioral science. “When someone uses the term ‘soft science,’ I correct them. I say ‘probabilistic science’ and [note that] we deal with some very difficult problems.”
Third, Mandel said, the relationship between the intelligence community and the behavioral science community is still relatively new, so analysts do not necessarily understand what behavioral science has to offer. Thus, he noted, forums like this workshop are important for exploring ways in which the partnership between the two communities can be developed.
Developing such a partnership is important for a number of reasons, Mandel said. From 1978 to 1986, Richards J. Heuer, Jr., an analyst with the Central Intelligence Agency, wrote a number of articles surveying the cognitive psychology literature, translating it into terms that other analysts could easily understand, and suggesting ways that those research findings could be applied to improve performance in various tasks undertaken by the intelligence community. The articles were later collected in a book, Psychology of Intelligence Analysis (Heuer, 1999).
It was a remarkable feat, Mandel said, for one person from outside the field of cognitive psychology not only to interpret a significant portion of the literature in that field effectively but also to come up with recommendations for various procedures based on that literature, recommendations that would, in many cases, become part of the training and practice of intelligence analysts. But the very singularity of Heuer’s accomplishment makes clear that there should be some mechanism or systematic arrangement for applying insights and knowledge from behavioral science to the field of intelligence analysis. The intelligence community cannot afford to rely on the occasional emergence of an inspired maverick like Heuer to make those connections.
Mandel offered several supporting arguments for this conclusion. The first is an opportunity cost argument: Heuer’s work has such a valuable payoff for the intelligence community that maintaining the status quo—with no established mechanisms for applying behavioral science to intelligence analysis—means missing out on many valuable applications that could be expected from a more systematic effort to exploit knowledge from behavioral science.
Second, relying on the occasional maverick is not a good way for the intelligence community to remain current on what is being discovered in the diverse areas of behavioral science. Intelligence analysts have their own full-time jobs; they cannot be expected to also keep up with all the relevant advances in research in behavioral science and determine how to integrate that knowledge into intelligence work.
It is telling, Mandel noted, that no one else has come along since Heuer to continue his work of translating cognitive psychology and other areas of behavioral science into tools for analysis. “In cognitive psychology alone there is at least a quarter century of new research since Heuer published Psychology of Intelligence Analysis that is waiting to be exploited by the intelligence community.”
Another way in which establishing a connection with the research community can help the intelligence community is with validation, Mandel said. Once knowledge and insights from behavioral science are
used to develop new tools for the intelligence community, it is still necessary to validate them. Simply basing recommendations on scientific research is not the same thing as showing scientifically that those recommendations are effective or testing to see if they could be substantially improved. Even Heuer was unable to do much to validate his recommendations, Mandel noted, and, more generally, this is not something that the intelligence community is particularly well equipped to do.
It is, however, exactly what research scientists are trained to do. Science offers a method for testing which ideas lead to good results and which do not. Thus partnering with the behavioral science community can help the intelligence community zero in on the techniques that work best and avoid those that work poorly or not at all.
In theory, Mandel said, it would be possible for the intelligence community to build its own applied behavioral research capability, but that would draw significant resources away from other operational areas and add an entirely new focus and purpose to the intelligence community’s existing tasks. Furthermore, if the intelligence community were to hire behavioral scientists, it would find itself in competition with both academia, with its unparalleled freedoms, and industry, with its lucrative salaries. It makes more sense, Mandel suggested, for the intelligence community to develop partnerships with universities and other institutions that already have the expertise and capability to perform behavioral science research.
A final advantage of partnering with the existing behavioral science community, Mandel said, is the “multiplier effect.” By working with scientists in academia, for example, the intelligence community draws not only on the knowledge of those subject-matter experts but also on all of their contacts. “As a researcher in an R&D [research and development] organization and government,” Mandel said, “I am very keen on partnering with academics because I understand that they have the ability to reach back into other areas of academia and connect me with other experts who could be of use.” A tremendous amount of such leverage can be achieved by building relationships rather than trying to do everything in-house.
THE PRESSURE TO PERFORM

There is a good deal more pressure on the U.S. intelligence community to perform now than at any time in the recent past. There is a major threat that requires accurate intelligence to combat: for evidence one need look no further than the terrorist attacks of September 11, 2001. Although the intelligence community has successfully identified other terror plots before they could come to fruition, this one was missed, and more than
3,000 people died. At the same time, U.S. troops in Iraq and Afghanistan are faced with regular threats, and one important line of defense is the work of intelligence officers there.
As Steven Kleinman, a consultant on intelligence and national security policy, pointed out, one of the key tools in dealing with such threats is HUMINT, or human intelligence. Although a great deal can be learned with SIGINT (signals intelligence) or IMINT (imaging intelligence), much of the work of the intelligence community inevitably relies on such human-centered activities as asking questions, figuring out whether someone is lying, and predicting what someone will do in a particular situation, and HUMINT is generally acknowledged as the more important element in defeating terrorism and winning wars. It is in HUMINT that insights and techniques from the behavioral sciences offer the potential of providing new and improved capabilities. Just as dramatically improved satellite imaging technology has greatly increased the capabilities of IMINT, Kleinman said, the hope is that behavioral science can improve the country’s HUMINT capacities. “We are in a multifront war,” he said. “There are some young men and women, and not so young men and women, who are out there putting their lives on the line.” They deserve the best and most accurate intelligence that can be provided, he said.
In a similar vein, Anthony Veney, chief of counterintelligence investigation and functional services at U.S. Central Command, offered a dramatic description of the stakes and expressed his hopes that the scientific community could soon provide some tools and techniques that could make a difference in the field.
First, he said, it is important to understand that the fight in Iraq and Afghanistan is, in essence, an information war. It is less about bullets than about who controls the airwaves, he said, and about who is getting their message across the fastest to the most people with the most credibility.
Second, one of the most important tasks of intelligence officers in Iraq and Afghanistan is to determine who from among all the people with whom they come in contact can be trusted. Here, for example, is the sort of person they must make a decision about: “He is a tribal elder today. He is an intelligence source tomorrow. He is a drug trafficker on Wednesday. He doesn’t care how he makes his money; he is just trying to make money.” It is in this person’s best interest to provide everybody with information about what is going on, be it the Iranians, the Russians, the Taliban, the Chinese, or even some American who wants some information. So how much can he be trusted? How can it be detected if he starts to lie? These are potentially questions of life and death.
But these are questions that must be answered without a large intelligence infrastructure. Such a structure does not provide enough flexibility or time to react. So, Veney said, “We have infantry officers relying on translators for information. Do I raid this house? Do I raid that house?”
At the same time, many of the support services that historically were provided by army members are now provided by locals. So now it is important to be able to tell—with most of the people involved speaking a foreign language—which people can be trusted enough to be let onto the base on a regular basis. “People are provided access that 40 years ago, 50 years ago we would have never given to our facilities. Those are the people we are trying to discern what is it that they are doing…. Are they being honest?”
So, Veney said, there is an urgent need for devices that perform such functions as accurately detecting when someone is lying. “What I am asking for is for people to hurry up because we don’t have time for years and years of research,” he said. “Give me something I can use on the battlefield. It can’t be as big as an MRI machine—I can’t move that.” It also needs to be simple enough that a soldier can be trained to use it in 24 hours. “If a dog smells fear, why can’t I have a pen that can tell me if somebody is lying? That is what I am asking for. I am losing friends out there.”
This pressure to save lives is a major driving force behind the current interest in applying behavioral science to intelligence, noted Robert Fein, a forensic psychologist at Harvard Medical School and a planning committee member, but it is a pressure that must be resisted to a certain degree if the science is to be done correctly. As discussed at various points in the workshop, a sense of urgency can lead to techniques and devices being adopted before they have been carefully evaluated, and this in turn can lead to reliance on methods that are ineffective or are less effective than available alternatives. Indeed, the purpose of field evaluation is to avoid such situations by determining what works and what does not. But effective field evaluations take time and thus can come into conflict with an urgent sense that something needs to be done now, if not sooner.
And, indeed, the issue of finding the right balance between the urgency and the need for field evaluation was one of the themes underlying much of the discussion throughout the workshop. It is important to provide the men and women of the intelligence community with new and improved tools to help them do their jobs, but it is equally important to take the time to make sure that those tools actually work.