
6 Looking to the Future
Pages 71-88

From page 71...
... OBSTACLES TO FIELD EVALUATION In one of the discussion periods, Neil Thomason commented that he had been struck by the difference in testing and evaluation between law enforcement and the intelligence community. Christian Meissner had identified many hundreds of research papers from the past several decades that applied to eyewitness identification, Thomason noted, while Thomason himself had been able to identify only six papers on the Analysis of Competing Hypotheses (ACH)
From page 72...
... Lack of Appreciation of the Value of Field Evaluations Perhaps the most basic obstacle is simply a lack of appreciation among many of those in the intelligence community for the value of objective field evaluations and how inaccurate informal "lessons learned" approaches to field evaluation can be. Paul Lehner of the MITRE Corporation made this point, for instance, when he noted that after the 9/11 attacks on the World Trade Center there was a great sense of urgency to develop new and better ways to gather and analyze intelligence information -- but there was no corresponding urgency to evaluate the various approaches to determine what really works and what doesn't.
From page 73...
... "Finally, one gentleman raised his hand in some degree of agitation, got up and said, 'Listen, the research suggests that psychological tests don't work, the research suggests that background investigations don't work, the research suggests interviews don't work. If you take the polygraph away, we've got nothing.'" A year and a half later, Fein said, he attended a meeting of persons and organizations concerned with credibility assessment, at which one security agency after another described how they were still using polygraph testing for personnel security evaluations as often as ever.
From page 74...
... So those doing field evaluations must think carefully about what options they can offer the user community to replace a tool that is found ineffective. Philip Rubin offered a similar thought.
From page 75...
... The first factor is the requirement in the intelligence community to get permission for anything you want to do. This makes sense, given that the release of the wrong information could result in people getting killed, but it creates a situation in which it is easy for information to be suppressed.
From page 76...
... So just going with the base rates, I would guess that one of these methods works and two do not." LESSONS FOR THE PATH FORWARD Although there are many obstacles to reaching a point at which field evaluations are a regular and accepted part of the process of adapting techniques from the behavioral sciences for use in intelligence and counterintelligence, workshop speakers identified a number of things that can make that path easier.
From page 77...
... If field evaluation of techniques in intelligence and counterintelligence is to advance, it will require a steady, reliable funding stream that is structured to attract academic researchers to work with those in the field to develop a body of evidence. A Research Base If field evaluations are to be convincing and useful to practitioners, Meissner said, they need to be part of a larger, multimethodological
From page 78...
... Indeed, there are formal mathematical models of eyewitness identification that not only replicate previous work but also predict future findings. Ongoing work on interrogation, Meissner said, is also engaging in a systematic program of research.
From page 79...
... George Brander of the UK Ministry of Defence agreed that telling stories is vitally important to practitioners. The model that has evolved in the United Kingdom, he said, is that people join the research community with skills in anthropology, psychology, sociology, or some other area of behavioral science; they start doing their research, they get closer to the practitioners and learn how best to interact with them, and eventually they figure out how to effectively provide them with advice and guidance -- which often includes telling stories.
From page 80...
... "It is particularly powerful when it comes from outside basic researchers versus inside researchers." Mandel offered a different perspective, suggesting that a more important skill than storytelling for scientists is being able to listen and being open to looking at scientific issues from the point of view of the practitioners. Research scientists are generally more interested in testing theories than in examining practical problems that are of importance to the practitioner community, and the scientists who will be able to engage best with the practitioners are those who can become interested in the challenge of trying to solve their problems, rather than just working to test theories.
From page 81...
... This will make it very difficult for researchers to perform useful field evaluations, McClelland said, and it will make it very difficult to convince practitioners to switch to more effective methods. "When we talk about when things will change," he said, "I think it has to come from the intelligence community deciding they will keep score." Brandon echoed McClelland's comments.
From page 82...
... For example, the DNA exonerations created a very clear metric -- people wrongfully convicted on the basis of eyewitness identification -- and led to the push to study eyewitness identification with the goal of improving it. Test and Field Versus Field and Test Because of the pressure to put new methods out in the field as quickly as possible, one school of thought holds that the best approach is to skip detailed laboratory testing and experimentation and do the testing out in the field once the method has been put to work -- the "field-and-test" approach.
From page 83...
... By the same token, he said, the scientific community needs to get over the idea that one has to complete all of the scientific research before something is put into the field. What scientists can do is help figure out ways to improve evaluations, and to study what constitutes a good process for evaluation based on case experience and personal field experience.
From page 84...
... Getting Practitioners to Use New Techniques Steven Rieber from the Office of the Director of National Intelligence observed that, depending on the particular area in the intelligence community, it can be easy or difficult to get practitioners to try new techniques. In the area of deception detection, people tend to want tools immediately.
From page 85...
... The first is that there really isn't an internal research tradition within the intelligence community, and an intelligence institute could go a long way toward establishing such an internal tradition. The second is that there are many well-trained people outside the intelligence community who would be very interested in working on intelligence-related issues if the opportunity arose, and an intelligence institute could, if it was well financed, accelerate the collaboration process.
From page 86...
... "We have a critical mass of mostly Ph.D.-level social scientists and psychologists who provide a stable source of knowledge and hands-on experience for understanding personnel security needs, working with the key players in the field and leadership positions, and conducting both long-term and short-term research. And we can make a case for the practical value that both kinds of research provide." Lang argued that the intelligence community needs something similar -- an organic, ongoing research infrastructure and capability, rather than just commissioning an isolated project here and a collaboration there.
From page 87...
... "This is a very tough problem," Fein commented. The workshop discussions, particularly those that presented experiences from other fields, made it clear there are many obstacles to effective field evaluations of behavioral science techniques.

