Ethical, Regulatory, and Cultural Considerations
The evaluation of devices and techniques for use in intelligence and counterintelligence inevitably involves the use of human subjects, and the use of human subjects requires that researchers follow various ethical and regulatory guidelines that depend on the details of the research. A session on the morning of the second day of the workshop was devoted to exploring the particular ethical, regulatory, and cultural issues that come into play when carrying out field evaluations.
ETHICAL CHALLENGES OF TRANSLATING RESEARCH INTO EFFECTIVE TECHNOLOGIES
Adil Shamoo, chair of the medical ethics subcommittee of the Defense Health Board1 and editor-in-chief of Accountability in Research, opened the session with a discussion of the ethical challenges of translating research in psychophysiology and neuroscience into technologies that can be used in intelligence and counterintelligence. Technologies based on psychophysiology include such things as the polygraph, voice stress analysis, the electrogastrogram, thermal imaging, and truth serums and narcoanalysis. The technologies based on neuroscience include functional magnetic resonance imaging, electroencephalography, positron emission tomography, and transcranial magnetic stimulation.
The ethical challenges to developing and evaluating such technologies can be divided roughly into two categories, Shamoo said: those that arise during research and those related to the use of the technologies. Each area has its own particular issues and considerations that must be taken into account.
Ethical Challenges Associated with Human Research Subjects
The field of ethics relating to human research subjects has developed over the past 60 years, and much of that development was prompted by concerns over ethical lapses. For example, the Nuremberg Code was developed in 1947 to set out principles for human medical experimentation in response to what had been uncovered during the Nuremberg trials about experiments performed by Nazi doctors on Jews in concentration camps and other prisoners. The Helsinki Declaration of 1964 was produced by the World Medical Association to be a universally accepted set of ethics principles governing the behavior of doctors and other researchers doing studies with human subjects, and it included many of the same principles set out in the Nuremberg Code. In the United States the Tuskegee Syphilis Study, a controversial 40-year study of nearly 400 poor black farmers with syphilis, led to the establishment in the early 1980s of regulations to protect human subjects and later to the creation of the Office for Protection from Research Risks and to the requirement that federally funded human subjects research be overseen by institutional review boards.
The spirit of all these ethical guidelines was captured in the words of an 85-year-old survivor of the Nazi concentration camps, Shamoo said. Eva Mozes Kor and her identical twin sister were subjects of the experiments that Josef Mengele performed on Jewish concentration camp prisoners during World War II. “They both survived, but they went through several months of hell, and they have come to this country, and she lives now in Terre Haute, Indiana.” Once, Shamoo said, Kor had been invited to talk to a meeting of about 3,000 doctors and medical researchers, and her words remained with him. “She said, ‘You, the scientists of the world, must remember that research is done for the sake of mankind and not for the sake of science.’”
Although there are a variety of regulations covering various areas of research, they are all attempting to formalize the behavior that Kor was advocating: that researchers always remember that their work is done to benefit mankind, not simply to advance science. And as such, Shamoo said, the responsible conduct of research can be encapsulated in a few basic principles.
The first is honesty. Easy to say, easy to understand, but not always easy to adhere to.
The second is objectivity. Shamoo said that he gives his students a list of 20 steps in conducting research, from forming a hypothesis and doing a literature review to the collection and analysis of the data and publication, and he asks them in which steps they could bias the outcome. “They all pick one or two,” Shamoo said, “and usually there is one very smart student who says ‘in every one of these steps,’ which is true.” Objectivity requires doing nothing in any of these steps to bias the outcome.
As an example of how such bias can creep into a study, Shamoo mentioned the case of Viagra. It is marketed on television to men in their sixties and seventies because those are the people most likely to use the drug. But the original clinical trial was conducted on a population whose average age was 56, raising questions about the applicability of the trial to the market audience.
The third principle is respect for research subjects, and such respect demands that a research project meet a number of criteria. It should be scientifically valid, for if it is not then the research subjects are risking potential harm for no potential gain. It should have social value and be beneficial to individuals or to the larger society in some way. The researcher should obtain informed consent from all of the subjects. That is, each subject must understand the purpose of the research, how it will be conducted, the possible risks to the subject, and the potential benefits to the subject and, based on that understanding, must voluntarily agree to take part. The subjects should be selected equitably, and no potential subjects should be taken advantage of simply because they are easily available. The more vulnerable subjects are, the more they should be protected. And there should be independent review of the research, such as by institutional review boards.
A few years ago, the National Research Council released a report, The Polygraph and Lie Detection (National Research Council, 2003). One of its recommendations was that any research in this area should follow “accepted standards for scientific research” and should “use rules and procedures designed to eliminate biases that might influence the findings.” These are two key principles, Shamoo said—following accepted standards and eliminating biases. “These are the heart and soul of responsible conduct of research.”
A variety of regulations govern ethical issues dealing with human research, Shamoo noted, and which regulations apply to any particular research study depends on who is funding the study. If a study is funded by the federal government and it involves human subjects, the researchers must follow the Common Rule (45 CFR 46). This includes research funded by the U.S. Department of Defense, the Department of Energy, the National Institutes of Health, and many other departments and agencies. The agency responsible for oversight of research involving human subjects is the Office for Human Research Protections in the U.S. Department of Health and Human Services, although individual agencies have their own oversight offices.
Much of the research performed by private industry with human subjects in support of data for marketing a drug or device is regulated by the Food and Drug Administration (FDA), which also regulates the marketing of various drugs and devices for human consumption or use. Privately funded research, however, is not regulated in the United States, Shamoo noted.
To finish up the part of his presentation devoted to human subjects research, Shamoo provided some details on the FDA’s regulation of drugs and devices. First, he noted, not only does the FDA follow federal regulations on human subject research, such as 21 CFR 50, which closely resembles the Common Rule, but it also has a phased approach to the development, testing, and marketing of drugs and other products, with each phase having a different set of requirements for approval. Phase I studies are initial human studies done in healthy volunteers to determine the minimal toxic dose. In Phase II studies, the drugs are used for the first time in people with the illness the drug is designed to treat; they usually involve about 20 to 80 people, continue the examination of the drug’s safety, and begin to get initial data on its efficacy. Phase III studies are randomized, controlled, multicenter trials on a few hundred to a few thousand people. Once a drug is approved by the FDA and marketed, it enters Phase IV, which is postmarketing surveillance looking for unexpected side effects that might not have shown up in the earlier clinical trials.
Ethical Challenges Associated with Using Technologies
Besides ethical issues related to human subject research, a variety of ethical issues are raised when technologies move beyond the research stage and are put to general use. In particular, a number of technologies now under development could be very useful to the intelligence community, but they also raise serious ethical concerns. Jonathan Moreno, a professor of medical ethics and of the history and sociology of science at the University of Pennsylvania, described a number of these developing technologies and the sorts of issues that will need to be addressed if these technologies are to be put to work.
Two reports have appeared recently from the National Academies that discuss potential applications of neuroscience research, Moreno noted. The first one, Emerging Cognitive Neuroscience and Related Technologies (National Research Council, 2008),2 has a bland title but contains
some very interesting ideas about how neuroscience techniques might be applied to the fields of intelligence and counterintelligence. The technique that has attracted the most interest to date is functional magnetic resonance imaging, or fMRI. It makes it possible to watch which parts of the brain are most active during different activities, which has potential for allowing researchers to determine, to at least some degree, what types of things a person is thinking about. It may be, for example, that more of the brain is activated when there is intentional deception than when one believes one is telling the truth, so some believe that fMRI or a similar imaging technology might someday serve as an accurate lie detector.
A number of other neuroimaging techniques could be used in similar ways, including positron emission tomography and near infrared spectroscopy. “I think within 10 years we will have much more granular pictures of what is going on in the brain while people are doing things or looking at things,” Moreno said. “Is that mind reading? Is that brain reading? I don’t know. I have my doubts.”
One problem with fMRI, he noted, is that the machines are not particularly practical for use in the field because they are very heavy and also quite noisy. Some researchers have been working on the development of portable fMRI units, he said, although he did not know how close to success they are—if at all. “My guess is that it hasn’t advanced very far.” Another problem is that the very notion of a lie is conceptually far more complex than people ordinarily realize, which creates a fundamental obstacle to “objective” deception detection.
A second neuroscience-related area with potential applications to the fields of intelligence and counterintelligence is psychopharmacology, or the study of drugs that affect thinking, mood, and behavior. One example is the use of oxytocin, a neurotransmitter that is associated with a number of behaviors, including trust and love. It can be administered in spray form through the nose, Moreno said, and about a half-dozen studies have found some evidence that oxytocin can cause people to act in a more trusting way under experimental conditions. However, some neuroscientists do not believe that oxytocin can get past the blood-brain barrier into the brain, Moreno noted, so there is some controversy as to whether the studies are valid.
These sorts of technologies raise a variety of ethical questions that society has not yet begun to address, Moreno said. Suppose, for example, that the oxytocin research shows that it is indeed possible to get people to answer an interrogator’s questions because a quick squirt of it in the nose leads them to feel as though they can trust the person asking them questions. “Would that be more acceptable than pressuring him or her through physical means,” Moreno asked, “or is this going to the heart of what it is to be a human being? Does this violate cognitive privacy? I don’t
know how to answer that question, but I think it is one that we will face. If not with oxytocin, then with something like it.”
In his presentation, Shamoo voiced similar concerns about the use of fMRI and other neuroimaging techniques. He quoted the bioethicist George Annas as saying that these new devices are particularly threatening to individual privacy because of the potential that they could be used to peer into a person’s brain with or without that person’s permission. How can the privacy concerns be addressed? Do the potential benefits to society outweigh the risks to the individual? These are the sorts of questions, Shamoo said, that people must ask themselves before moving ahead with these devices.
And these are not just theoretical issues, Moreno said. Just a few months before the workshop, the National Research Council published Opportunities in Neuroscience for Future Army Applications (National Research Council, 2009), which discusses a number of potential technologies of these types. “These are serious scientists who think there are going to be advances that are plausible for the army to invest in during the next five or ten years.”
Moreno mentioned transcranial magnetic stimulation in particular. This technique, which uses magnetic fields to induce changes in brain activity, influences such brain functions as visual perception, memory, speech, and mood, and it may have the potential to alter a person’s social behavior or attitudes. One of the report’s recommendations was that the army should examine transcranial magnetic stimulation for enhanced learning in soldiers.
If the army chooses to pursue such applications, Moreno pointed out, it will require extensive research and, eventually, field testing, both of which will raise ethical issues that have not yet been worked through. Perhaps even more challenging will be the ethical issues associated with the widespread use of such technologies. “We have already had some preliminary experience with this with the anthrax vaccine controversy,” Moreno said, referring to the controversial policy that ordered more than 200,000 soldiers during the first Gulf War to get an anthrax vaccination in case of a bioweapons attack. With more and more technologies being developed to improve the performance of soldiers, the question arises of how extensively soldiers of the future will have to be modified. How much will society require them to accept? How much will the individual soldier accept? In developing these technologies and putting them to use, Moreno said, the researchers and others involved should be careful that it is all done with respect for the people involved and with respect for the proper ethics at each step.
FIELD TESTING VERSUS RESEARCH
When discussing ethical and regulatory issues in field evaluations, it is important to keep in mind the differences between field testing and research. As Moreno explained, the two are not identical, and they require somewhat different approaches and considerations.
“Obviously not all research is field testing,” Moreno noted, “and that is illustrated by the fact that there are people in labs who do research but who are not necessarily going out in the field.” This is not particularly surprising to most people. What is surprising, however, is that not all field testing is considered to be research. As an example, Moreno pointed to the more than 200,000 men who were deployed at above-ground atomic bomb tests from 1948 to 1963. Many of these men were given radiation badges that indicated levels of exposure to radiation. Some of the pilots who flew through the mushroom clouds were dusted for radioactive particles. Their urine and other bodily fluids were checked for radioactivity. Still, Moreno said, they were not considered to be human research subjects at the time, and even within the current understanding of research rules they might not be considered to be research subjects.
The reason that they were not research subjects even though scientists were able to gain a great deal of information from these activities is that they were there for other purposes. Specifically, they were there for training and for desensitization to the atomic battlefield.
The key point here, Moreno said, is that field testing that is not considered research is not subject to the various ethical and regulatory requirements that govern research. For instance, the usual research rules about informed consent and prior peer review do not apply to this kind of field testing. This doesn’t mean that no ethical or regulatory standards apply, but it does mean that many of the usual requirements governing human subjects research may not apply, such as the Common Rule, FDA regulations, and certain Department of Defense regulations.
There is inevitably a certain amount of gray area between research and field testing, and this opens up the possibility of gaming the system. Moreno described a study done in the early 1990s at a hospital in New York looking at two different ways of doing sutures for face-lifts. Each of the approximately 20 face-lift patients had one type of suture done on one side of the face and the second type of suture done on the other side. The surgeons did not consider the study to be human subjects research and so did not fulfill the usual regulatory requirements, such as getting approval from an institutional review board. And it was close enough to the gray area, Moreno said, that they probably could have characterized this practice as innovative surgery, except for one thing: they published their results as a research study.
Of course, Moreno commented, the surgeons should have considered their work to be a research study from the beginning because they were going about it in a very controlled and systematic way, but the problem became very clear and public for them only when they published the results in the research section of a surgery journal.
While some field testing is not research, a great deal of it is, and the overlap between field testing and research is referred to as research field testing. It can be defined as systematic investigations that are carried out under actual field conditions.
It turns out that some of the soldiers and marines deployed near the above-ground nuclear tests actually were considered to be research subjects taking part in research field testing between 1953 and 1962. In particular, they were taking part in psychological studies known as “panic studies.” The Department of Defense was concerned about how soldiers would react if they were close to an atomic explosion and what the psychological effects might be, so a group of psychologists and psychiatrists was hired to perform tests on a group of subjects before and after an atomic blast. In one test, for example, soldiers were told to disassemble and reassemble their rifles within minutes of the explosion, while the researchers observed them for signs of panic. The soldiers who took part in these studies were treated as test subjects and gave their consent to participation in the studies. Thus they were treated differently from the tens of thousands of other soldiers and marines who were near the blast sites when the bombs went off but were considered as being deployed for training exercises.
Today, by contrast, there are many field trials undertaken in hospital emergency departments around the country, such as the testing of a new method to treat heart attacks, and they are considered to be clinical trials and therefore require informed consent from the patients and prior approval by an institutional review board. An FDA rule covers these emergency medicine trials and specifies the procedures that must be followed—a situation that creates bureaucratic hurdles that frustrate many who do research in emergency medicine, Moreno said.
A key issue here—and one that is often not mentioned in the ethics literature—is who decides whether an activity is a field test or a research study. “If sending you to function within a mile or two of ground zero is not considered to be a human experiment, then informed consent does not apply,” Moreno noted. “This is a key point, because it is possible to game the system.” Moreno said he sees examples of this in medical fields, such as surgery. Some physicians may carry out experiments but do not characterize them as clinical trials. They keep track of the results as a series of cases and are careful not to publish the series in a journal,
which means the work may not be covered by the requirements governing clinical research.
Despite the opportunities for taking advantage of the gray area, researchers doing field testing should adhere to normal ethical and regulatory procedures, Moreno said. “Field testing that includes development, testing, and evaluation designed to develop or contribute to generalizable knowledge is subject to prevailing ethical and legal conventions governing research.”
The discussion session following the two presentations expanded on the speakers’ comments and introduced some new topics as well.
As Robert Fein noted, the ethical issues involved with field evaluation sometimes come face to face with various political and economic issues. For example, the Department of Defense is interested in getting devices and other technologies to detect deception into the field as quickly as possible, and private companies have economic incentives to do the same thing. How, he asked, do these pressures interact with privacy and individual rights concerns in field evaluation?
“The answer,” Moreno said, “is that in a pinch there is a tendency for the bar to be lowered because of political pressure and legitimate public concern about taking care of our men and women.” As an example, he mentioned the drug pyridostigmine bromide (PB), which was given to troops in the first Gulf War in case of exposure to the nerve agent soman; the drug was a pretreatment that would improve the effectiveness of the treatment for soman exposure, but at the time it had not received FDA approval for medical use. Later, many alleged that exposure to PB and other drugs was associated with the development of various health problems in Gulf War veterans. When members of Congress asked the FDA why it had given the Department of Defense a waiver for the informed consent that would normally have been required to use the drug, Moreno continued, “the FDA said, we’re not the war fighters. If the Defense Department comes to us and says we need to do this to protect our people, are we going to say no? We’re in the business of approving drugs and devices for medicine, not for fighting a war.”
The situation was similar with the anthrax vaccine given to Gulf War soldiers. Some of the soldiers later blamed the vaccine for various health problems that were part of the Gulf War syndrome, and they complained that they felt like human guinea pigs, given something without consent. From the defense department’s point of view, the move was necessary to save lives—potentially thousands of lives—and to maintain force readiness. At the same time, Moreno said, if there is not transparency, if there is not public confidence in a decision, then people can end up feeling as if they were exploited.
The bottom line, he said, is that the bar is naturally set lower in those situations in which the use of a drug or technology still being tested could save the lives of many soldiers or other people defending their country. In such cases, the tendency is to loosen the ethical restrictions somewhat.
Shamoo noted, however, that in response to the experience with PB, the FDA is now required to get approval from the president to bypass its usual regulations, as was done in that case. In the future, only the president will have the power to loosen the guidelines, even in the most pressing cases.
On the issue of what constitutes research versus simple surveillance, planning committee member Robert Boruch of the University of Pennsylvania noted that the level of record-keeping seems to play a large role. He mentioned a recent case involving researchers from a top-tier university who conducted a randomized trial in hospitals in which doctors and other health care providers were encouraged to wash their hands and engage in a series of other check-listed activities to enhance hygiene and reduce infection. The researchers did not seek permission from an institutional review board, which led to the university being sanctioned by the Office for Human Research Protections. That office judged that the trial was actually an experiment because it was an effort to systematically understand the extent to which hospitals could get health care providers to be more conscientious about hygiene for the sake of their patients and to estimate the effect of the effort on such things as infection rates.
Moreno responded that deciding what research is can be a difficult problem. Suppose, for example, that a researcher approached a nursing home with a project to convince staff members to wash their hands between patient encounters in order to avoid bacterial infection. The researcher might even help them develop a program to increase hand washing based on some information that the researcher had gathered about the employees’ baseline hand-washing practices. If the researcher then published the results, that might suggest that it had been a research study for which approval from an institutional review board was required and to which the patients would have had to give their consent. But not necessarily—it might still be considered a hygiene program that got reported as a case study. But if the researcher then went on to compare the program with educational hand-washing programs in other nursing homes, then the program moves closer to being a research study. “It’s not an easy line to draw,” he said, “but I think you can intuit those lines.”
Christian Meissner commented that, from his experience as chair of an institutional review board, he knows that there is a significant gray
area between program evaluation and research. Indeed, he said, it is quite possible to field test things under the guise of program evaluation. But once one begins manipulating factors and having control groups, the studies clearly amount to research.
David Mandel touched on a similar issue. In his studies of analysts and analyst trainees, he often deals with research that has begun years earlier. In one particular study of the calibration of intelligence estimates, he was dealing with estimates generated years before his research group was even created. The agency that produced the estimates was not interested in research ethics issues, he said—from the agency’s perspective, it was a quality control exercise rather than an example of research, and they had been going about it long before they came to think of it as research as a result of their partnership with Mandel and his team. From Mandel’s perspective, however, he was engaged in research and was bound to go through the institutional review board process, even though the research had already been done. “Once it moves from an internal quality control exercise to a collaboration that has a research side to it,” he said, “we have to put it through our IRB even if we can’t go back and get informed consent because some of those analysts have moved on.” And, as the research moves forward, Mandel’s group has to get consent forms from the analysts in order to use their assessments for research purposes. Sometimes the same exercise is both research and not research, and in those cases Mandel’s group—as researchers—must treat the work as research and follow the standard research procedures.