Field Evaluation in the Intelligence and Counterintelligence Context: Workshop Summary
ligence community, there is so little attempt to evaluate it in the field to see if it really works?
It is particularly puzzling, he said, in light of a comment by Steven Kleinman, who had suggested that one of the weaknesses of the American intelligence community is that it has too much money. Because so much money is thrown at intelligence work, he said, “there is a built-in assumption that if we don’t get it right, somebody else will.” If the HUMINT (human intelligence) groups don’t figure something out, then the SIGINT (signals intelligence) people will, and if SIGINT doesn’t get it, then IMINT (imagery intelligence) will. But why, Kleinman asked, hasn’t more of this money been used for field evaluation studies?
A number of the workshop presenters and participants spoke about various obstacles to field evaluation inside the intelligence community—obstacles they believe must be overcome if field evaluation of techniques and devices derived from the behavioral sciences is to become more common and accepted.
Lack of Appreciation of the Value of Field Evaluations
Perhaps the most basic obstacle is simply a lack of appreciation among many in the intelligence community for the value of objective field evaluations—and for how inaccurate informal “lessons learned” approaches to field evaluation can be. Paul Lehner of the MITRE Corporation made this point, for instance, when he noted that after the 9/11 attacks on the World Trade Center there was a great sense of urgency to develop new and better ways to gather and analyze intelligence information—but there was no corresponding urgency to evaluate the various approaches to determine what really works and what doesn’t.
David Mandel commented that this is simply not a way of thinking with which the intelligence community is familiar. People in the intelligence and defense communities are accustomed to investing in devices, such as a voice stress analyzer, or techniques, such as ACH (analysis of competing hypotheses), but the idea of field evaluation as a deliverable is foreign to most of them. Mandel described conversations he had with a military research board in which he explained the idea of doing research on methods in order to determine their effectiveness. “The ideas had never been presented to the board,” he said. “They use ACH, but they had never heard of such a thing as research on the effectiveness of ACH.” The money was there, however, and once the leaders of the organization understood the value of the sort of research that Mandel does, he was given ample funding to pursue his studies.
One of the audience members, Hal Arkes of Ohio State University, made a similar point when he said that the lack of a scientific background among many of the staff of executive agencies is a serious problem. “If we