Celia Fisher, of Fordham University, introduced the session on risks and harms. The advance notice of proposed rulemaking (ANPRM) has stimulated a dialogue on the appropriateness of the current review and evaluation of social and behavioral research. Questions under discussion, Fisher explained, include whether the current federal regulations are biased toward biomedical research and whether “some IRBs may be overestimating the magnitude and probability of reasonably foreseeable risks in [social and behavioral] research.” Fisher added that because there is little evidence that certain types of social and behavioral research, such as surveys and interviews, carry significant risks, there are concerns that these disciplines may be over-regulated and that this over-regulation may mean that actual harms in other areas are being overlooked.
Another issue raised by the ANPRM is which types of research should be eligible for expedited review. In general, the only studies that can be given expedited review by an institutional review board (IRB) are those that include only research activities that pose minimal risk. When an IRB is considering whether to grant expedited review, it consults a limited list provided on the Office for Human Research Protections (OHRP) website.1 The ANPRM proposes expanding the list of studies that can receive expedited review. It is important, Fisher said, that an expanded list include examples of the types of studies that might be eligible for expedited
review. She also stressed the importance of ensuring that an expanded list include age-graded examples, because the Common Rule minimal risk definition and expedited category list govern IRB interpretation of the conditions under which child and adolescent research can be expedited (Fisher et al., 2013). Fisher noted that no list can adequately cover all the variations in research procedures that meet minimal risk criteria, so the expanded category list could explicitly state that IRBs would consider as posing minimal risk any procedures not specifically listed in the expedited categories but whose risk can be determined to be equivalent to or less than that of the listed examples (Fisher et al., 2007).
A related issue is that the ANPRM calls for separating the evaluation of informational risk from minimal risk evaluations. Fisher noted that it is important to evaluate the pros and cons of this approach for social and behavioral research. For example, the ANPRM recommends that the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, designed to protect patient health information, be used as a standard for the collection and storage of research data. In Fisher’s view, the HIPAA criteria for de-identification and the processes permitting patients and their guardians access to data are an inappropriate and potentially prohibitive standard for social and behavioral studies. In addition, data security criteria could be empirically supported by relevant research, both to ensure adequate participant protections and to guard against overly burdensome security protections for low-probability, low-magnitude informational risks that could discourage research.
Finally, the issue of exempt research is of major importance to social and behavioral researchers. There are now six categories of studies that are exempt from IRB review. The categories are not clearly defined, Fisher said, and one of the difficulties that social and behavioral scientists, in particular, face is that it can be very difficult to understand exactly what is meant by such terms as “educational tests,” “survey procedures,” and “observation of public behavior.” An expanded list of exempt categories needs to include examples that facilitate IRB and investigator evaluation of research that meets the requirements of the exempt category.
There is an interplay between exempt research and informational risk that will change if the proposals in the ANPRM are adopted, Fisher noted. Under the current rules, studies in which the participants can be identified are not included in the exempt category if disclosure could create some sort of informational risk. The proposed changes would separate the issue of informational risk from IRB review. Although every study would still have to follow the guidelines for data security protection, studies in which informational risks were the only risks could be exempted from IRB review. The pros and cons of removing exempt and informational risk decisions from the purview of IRBs require additional deliberation.
The session’s four speakers were asked to explore aspects of risks and harms and, particularly, minimal risk. Richard Campbell provided an overview of the issue of minimal risk from the perspective of a researcher who studies racial and ethnic disparities in diagnosis and treatment of cancer using patient data and data on the distribution of health care providers. Brian Mustanski spoke about risks and harms in the context of studies of a special population—lesbian, gay, bisexual, and transgender (LGBT) youth. Steven Breckler discussed issues related to risk in psychology research. Charles Plott discussed whether some entire areas of social and behavioral research might be exempted from IRB review based on the topics they explore and the methods they employ. All four stressed the importance of using empirical data to support IRB risk assessments and to avoid over- or underestimations of research risk in the social and behavioral fields.
Richard T. Campbell, of the University of Illinois at Chicago, began his overview of minimal risk with the general observation that although human subjects research is governed by a very specific set of federal regulations, the regulations are carried out “in the context of universities and other research organizations that are free to up the ante, as they wish, and IRBs which are free to interpret them within very broad limits.” For example, he said, he recently served on an IRB with a colleague “who felt that he was free to interpret minimal risk as he wished.” This situation is highly unusual, if not unique, for the operation of regulatory processes.
Minimal risk is important, Campbell said, because it provides the threshold for determining the level of review. It determines, in part, the types of research that are eligible for expedited review, and it also determines, at least implicitly, which research is exempted, or “excused,” to use the terminology in the ANPRM.
Cognitive Complexity of the Term “Minimal Risk”
The natural place to begin in understanding minimal risk is with the definition provided in the Common Rule (45 CFR 46.102(i); 1991):
Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.
A problem with this definition of minimal risk, in Campbell’s view, is that the term “minimal risk” is cognitively complex—that is, people must make some mental effort to comprehend it and use it appropriately. However, because the term is used repeatedly in the regulations, people tend to start thinking of it in simpler terms and to ignore its complexities. For example, he said, it is easy to fall back into the assumption that both the magnitude of harm and the probability of its occurring must be minimal, even though this is not what the definition says.
The term is complex for several reasons, Campbell said. First, he explained, probability is a “notoriously difficult concept that many people—even sophisticated academics—often do not fully understand or use consistently.” Risk is an even more difficult concept, he added. In common usage it has at least three meanings. The formal definition that is used in epidemiology is the probability of a particular event occurring within some unit of time or period of exposure. Risk may also refer to a negative outcome, with or without reference to the probability involved, as in the phrase “risky behaviors.” Risk may also refer to general uncertainty, as in the idea of a “risky investment.” In that case, the outcome might be good or bad, but the implication is that it will be bad. In discussions of minimal risk, Campbell said, any of these meanings may be implied by the speaker. Similarly, the members of IRBs often use the terms “risk” and “harm” interchangeably, leading to potential confusion.
Campbell suggested several ways that investigators and reviewers can respond to these difficulties. First, he said, it is important to keep clear the distinction between risk in the sense of the probability of harm, and the magnitude of potential harms. The goal for IRBs is to determine the worst harms that could result from participation in a study, he added, so it is important for them to ask whether there is some reasonable estimate of the probability of various possible harms. He also cited the importance of research to develop realistic probability estimates, and to study the perceptions that people have concerning the magnitude of various types of harms. It is important to know which aspects of a study the participants themselves are likely to see as most harmful or stressful, he noted.
Areas of Improvement for IRBs
Campbell discussed several issues he believes need further attention. One is how to better deal with surveys that include sensitive questions. In his experience, Campbell said, there are two main reasons that IRBs tend to be cautious about questions involving intimate behaviors, criminal activities, and the like. In such cases, reviewers worry about accidental disclosure and also about participants’ psychological reactions to sensitive questions. Respondents should be told during the informed consent process that they have the right to not answer any question and that they can terminate the interview at will, he said.
Campbell also noted the distinction between absolute and relative
risk. The “daily life” standard specifies that research participants should not be exposed to greater harm than they might expect to experience in their daily lives. The question, Campbell noted, is “whose lives are the standard?” Many populations frequently face much higher risks in their daily lives than most investigators are likely to face, he noted. “Should risk be evaluated relative to the ‘average person’ or to the population being studied?” he asked.
Another important distinction is between voluntary and involuntary risks, Campbell said. People accept every day certain risks over which they have little or no control, such as exposure to an illness. Other risks are more voluntary—those assumed when one gets into a car and drives down the street, for instance. Similarly, study participants are asked to voluntarily accept some risks, usually with no benefit to themselves. “But,” Campbell pointed out, “the daily life standard refers to risks that we accept with the expectation of some benefit.” That makes the risks that study participants are being asked to accept of a different nature. “This is an ethical issue which I don’t think has been thoroughly explored and could bear some discussion,” Campbell observed.
Another distinction is between permanent and transitory harm. “An unstated aspect of the daily life standard is that we assume the presumed harms are of low magnitude and short lasting,” he said. But it is possible to imagine permanent harm coming from participation in a study—a simple blood draw could lead to an infection with long-term consequences, for example. The probability of that may be extremely low, he noted, “but it is not zero. How does this fit with the daily life standard?”
There is also a difference between the probability of harm to a given person and the probability that at least one person in a study will be harmed. The larger a study is, the greater the chance of harm to at least one person, even if the probability of harm to any one person is small. “It is important to keep this distinction in mind if you want to think clearly about risk and harm,” Campbell observed.
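Campbell’s distinction between individual-level and study-level probability can be made concrete with a short calculation. The sketch below is illustrative and not from the workshop; the per-participant probability of harm is an assumed value chosen only to show how quickly study-level risk grows with sample size.

```python
# Illustrative sketch (not from the workshop): even when the probability of
# harm to any one participant is small, the probability that at least one
# participant in a study is harmed grows with sample size.

def prob_at_least_one_harm(p: float, n: int) -> float:
    """P(at least one harm) = 1 - (1 - p)^n, assuming independent participants."""
    return 1.0 - (1.0 - p) ** n

# Assumed per-participant probability of harm: 1 in 1,000.
p = 0.001
for n in (10, 100, 1000):
    print(f"n = {n:5d}: P(at least one harm) = {prob_at_least_one_harm(p, n):.3f}")
```

With these assumed numbers, a 1,000-person study carries roughly a 63 percent chance that at least one participant experiences the harm, even though each individual’s risk is only 0.1 percent.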
Campbell concluded with his thoughts about how the committee could help improve the current situation regarding minimal risk. “I suspect that it’s unlikely that a new definition of minimal risk will appear,” he said. “The concept is too deeply embedded in the fabric of human subjects regulations.” However, the committee might elaborate on the concept in its report, he suggested, and with that prodding, perhaps the OHRP will issue official guidance that elaborates on the definition and suggests how it can be applied more consistently.
Brian Mustanski, of Northwestern University Feinberg School of Medicine, discussed the types of risks and harms that may arise in research with LGBT youth.
The Benefits of Research on Risky Behaviors
Mustanski began by highlighting the benefits as well as the risks of research, noting some of the reasons there is a need for research on risky or sensitive behaviors among youth. Adolescent risk behaviors, such as substance use, conduct problems, and sexual risk-taking, are primary contributors to both direct and indirect causes of morbidity and mortality among young people in this country, he said, so studying them is an important part of addressing the health issues of adolescents (Blum, 2009; Feigelman and Gorman, 2010; Eaton et al., 2011).
Consider, he suggested, men who have sex with men. According to the Centers for Disease Control and Prevention (CDC), from 2007 to 2010 nearly 58 percent of all HIV infections in the United States occurred among men who had sex with men. Furthermore, Mustanski said, 13- to 24-year-old men who have sex with men are the only group in the country that is showing an increase in HIV infections, and they are close to being the highest-risk group in the United States. Thus, in his view, research on risky behaviors among males in this age group is critical for dealing with the ongoing HIV/AIDS epidemic.
Unfortunately, Mustanski said, there has been very little research into such behaviors. For example, the CDC has endorsed a collection of 74 evidence-based HIV-prevention programs. Of those, 17 are aimed at youth, but there is not a single such prevention program aimed at adolescent men who have sex with men, despite the fact that this is a high-risk group and the only group in the United States in which the rate of HIV infections is increasing (Centers for Disease Control and Prevention, 2012). In general, Mustanski explained, the funders of prevention programs require some evidence that a program will be effective before providing funds, and because there has been little research into the effectiveness of prevention programs among adolescent men who have sex with men, there are no prevention programs for this group.
Mustanski suggested that IRBs are partly responsible for the lack of research on this group. “I can say from my conversations with many researchers in the areas of adolescent health and HIV prevention,” he observed, “that researchers shy away from doing research on adolescent [men who have sex with men] because of the belief or experience that they could not receive IRB approval to do that work.” The IRBs’
hesitance to approve such studies, he said, is often motivated by “value-laden concepts” and general concerns about the psychological and other risks posed to the participants by such studies, rather than by any solid evidence about the effects that these studies—which generally ask the participants to answer questions—actually have on the participants.
The Risks of Research on Risky Behaviors
Despite these issues, quite a bit is known about the risks of social and behavioral research with these youths, Mustanski said. When the Society for Adolescent Medicine (SAM) reviewed literature on the topic, it concluded that there are three possible ways in which asking adolescents about risky behavior could itself pose risks (Santelli et al., 2003). First, asking adolescents about risky behavior could promote that same risky behavior—for example, asking questions about sexual behavior could lead adolescents to go and have sex. But, Mustanski said, the SAM review found no such relationship in the large body of research it reviewed. Another possible risk is that adolescents who answered questions about various sorts of risky behavior could see those answers made public in some way. The third possibility is that participants could have a negative psychological reaction to their participation. For instance, people who are asked questions about their drug use or about having sex with someone of the same sex could be stressed by being asked the question. These behaviors are often kept private and may, in some cases, be illegal, so being asked about them could be psychologically distressing.
Because of such concerns, Mustanski said, many IRBs have considered that these surveys pose greater than minimal risk, and they often encourage or require researchers to provide a statement to potential participants that includes such warnings as “Some of these issues could make you feel uneasy or embarrassed” or “You may be very upset by answering these questions” or even “You may need psychological services after answering these questions.”
However, Mustanski said, there is evidence that the risks of such psychological stress are actually quite low. He cited research on the participants in his own Project Q2 study, a long-running longitudinal study of LGBT youth that asked questions about mental health problems, substance use, HIV, and sexual behavior. He asked the study participants how they felt about being in the study and what the psychological effects of being in the study were. In particular, he asked them how they felt answering questions about sexual behavior, drug and alcohol use, mental health, and suicide. The results, which were published in the Archives of Sexual Behavior in 2011, showed little stress to participants from answering such questions (Mustanski, 2011). “About 90 percent of the participants
said that they were comfortable or very comfortable answering questions about sexual behavior, drug use, and mental health,” he noted.
Although there is relatively little literature on the subject, the few other studies that have asked participants how they felt about answering questions on sensitive topics have had similar findings, Mustanski said. One such study involved adult men who have sex with men who answered questions about their sexual behavior and substance use (Fendrich et al., 2007). A second examined adults participating in a mental health survey (Jacomb et al., 1999), and a third looked at adults in South Africa questioned about HIV and gender-based violence (Jewkes et al., 2012). Two others surveyed adolescents about drug use, suicidal behavior, and physical and sexual abuse (Langhinrichsen-Rohling et al., 2006) and youth about sexuality (Kuyper et al., 2012). Across these studies, Mustanski said, there is “consistent evidence of very low rates of people saying that they were very uncomfortable answering such questions.”
Should Research on Risky Behaviors Be Considered Minimal Risk?
Mustanski posed the question of whether this evidence means that research on risky behaviors should no longer be considered “minimal risk” for the purpose of an IRB review, acknowledging that it is not an easy question to answer. When he asked the participants in one of his studies to compare their level of comfort answering survey questions with a typical visit to a doctor or counselor, 54 percent said it was more comfortable answering the survey questions, another 35 percent said it was about the same, and 11 percent said it was more uncomfortable answering the survey questions.
Mustanski suggested that it is not really clear what it means for research to pose more than minimal risk to participants. The regulations do not offer enough guidance even for cases about which there is more evidence than exists for risky behaviors. Indeed, it is not even clear whether being uncomfortable should be considered a risk in the first place, he added. Nevertheless, Mustanski said, questions about minimal risk are critically important because their answers can determine whether or not a particular study involving adolescents can even be carried out. For example, he said, his research with LGBT youth would not be possible without waivers of parental consent because many of these youth have not told their parents about their sexual orientation. Waivers of parental permission can only be issued when research risks are minimal or only a slight increase over minimal risk.
To illustrate how these issues can play out in practice, Mustanski described his experiences with IRB reviews for his Project Q2 study. Two IRBs were involved, one at the University of Illinois at Chicago, and one
at a community-based organization where the majority of the work was being done. The community-based organization’s IRB was composed of representatives from the LGBT community. He received approval from that IRB within a month, while it took six months and four rounds of review to gain approval from the university IRB. Most of the university board’s questions centered on the risks of the study, and the board ultimately decided that the study posed only a slight increase over minimal risk. The IRB never specified what the risks were, but because it had found Mustanski’s privacy and confidentiality protection plan to be adequate, it seems likely that its concerns centered on the potential for psychological harm to participants.
After receiving approval from the institutional IRB, Mustanski had to return to the community IRB to once again get its approval. “All in all,” he said, “it took 10 months out of a 24-month grant to receive IRB approval.”
Mustanski closed with a brief discussion of the benefits that participants reported from being included in the study. The Common Rule specifies that a basic element of informed consent is letting participants know of any benefits—to the participants or to others—that can reasonably be expected from the study, but “benefit” is not clearly defined. When the participants in his study were questioned about how they benefited from taking part, they said things like “It made me feel like I’m part of something important” and “It helped me to talk to somebody about my experiences” (see Table S2-1). However, Mustanski said, “We were not allowed to actually mention these benefits in our consent form because the IRB pushed back saying, ‘Well, that’s not a defined benefit, that’s not a personal benefit.’” Mustanski observed that the participants in his study might disagree.
Steven Breckler, of the American Psychological Association, focused on the issues of calibrating level of review to the risk of harm and of defining and assessing minimal risk. He spoke first about the concept of risk. Noting that the title of the session was “Risks and Harms,” he suggested that it made more sense to speak of the “risks of harms” because that phrasing points to the fact that there is a second, parallel issue to consider—benefits. Instead of focusing entirely on the risks of harm, he said, the goal would be to find the proper balance between the probability of harm on the one hand and the probability of benefit on the other. This idea of balance is particularly important in discussions of minimal risk, he said, “because the benefits of research participation often get set aside in our preoccupation with harm, and I think it harms us in the process by doing that.”
TABLE S2-1 Benefits of Research Participation in Crew 450 Study
| Benefit | Ages 16–17 (n = 52), Mean (SD) | Ages 18–20 (n = 221), Mean (SD) |
| --- | --- | --- |
| It made me feel like part of something important. | 1.6 (0.7) | 1.8 (0.8) |
| It made me feel like I am helping my community. | 1.8 (0.7) | 1.7 (0.8) |
| It helped me to have someone to talk to about my experiences. | 1.8 (0.8) | 2.0 (0.9) |
| It made me feel like I am helping other young men like myself. | 1.8 (0.8) | 1.8 (0.8) |
| It gave me the opportunity to meet successful LGBT adults. | 2.0 (0.9) | 2.0 (0.9) |
| It helped to know people care about other young men like myself. | 1.6 (0.7) | 1.8 (0.8) |
| Answering the questions helped me reflect on who I am. | 1.8 (0.8) | 1.8 (0.9) |
| Participating in Crew 450 made me feel supported. | 1.6 (0.8) | 1.8 (0.9) |
| Participating in Crew 450 helped me to think about my behavior. | 1.8 (0.9) | 1.7 (0.9) |
NOTES: Scale: 1 = strongly agree; 2 = agree; 3 = neutral; 4 = disagree; 5 = strongly disagree; SD = standard deviation.
SOURCE: Unpublished data, Mustanski (2013).
Breckler then turned to the issue of determining the appropriate level of review based on the risk of harm. “We must have a better delineation of research studies that qualify for expedited review and for those that would be considered exempt from review,” he observed. In particular, he added, such delineations need to be based on evidence concerning what the risks of harm truly are.
Such evidence already exists for most of what happens in social and behavioral research, he said. Data are available on many foreseeable sources of harm: the methods and procedures used, the particular topics chosen, the features of the populations under study, and interactions among these factors. Data are also available on many foreseeable types
of harm: economic, physical, psychological, and social. “We can estimate the probability and magnitude of many of these reasonably foreseeable harms,” he said. Furthermore, he added, in those cases where the data do not yet exist, it would be straightforward to obtain them. Thus, in his view, it makes sense to use those data to make decisions about which research studies would qualify for expedited review or be classified as exempt from review.
The ANPRM contains a proposal that the continuing review of expedited studies be eliminated. This is a change “that many of us desire,” Breckler said. Exceptions could be made in certain circumstances, he said, but IRBs should treat the continuing review of expedited studies as an exception, not as the norm.
Another proposal in the ANPRM is to expand the list of studies to be considered exempt from review. Many researchers would agree that such an expansion is a good idea, Breckler said, but the determination of which studies are exempt should not be made purely on the basis of the methodologies being used because a methodology by itself does not provide a sufficient basis on which to judge the risk of harm. In reality, he added, the risk of harm is really determined by an interaction among many factors, such as the topic of study, the population being studied, the person conducting the study, and the methodology.
Breckler echoed earlier speakers in highlighting the importance of how minimal risk is defined and assessed for social and behavioral research. The existing definition, as Richard Campbell had explained, is rooted in the concept of the risks ordinarily encountered in daily life. Although many researchers are comfortable with this definition or some close variant of it, Breckler suggested that it is worth considering another standard for dealing with risk: the “relative standard.” Under the relative standard, the probability and magnitude of harm or discomfort caused by the research are assessed in comparison with those ordinarily encountered by typical individuals in the study population in their daily lives. In other words, the relative standard determines minimal risk by looking at the risks that the individuals enrolled in the study—rather than individuals in the general population—experience in their daily lives.
For Breckler, one of the most pressing issues—and one for which there is very little guidance for researchers and IRBs—is how to assess minimal risk in the context of a particular study. He noted that a potentially useful approach to the issue was developed at a 2005 conference sponsored by the American Psychological Association and Fordham University. To help researchers and reviewers deal with the cognitively complex task
of assessing minimal risk, participants at the conference developed a flowchart. The flowchart lays out a step-by-step process for determining whether a study is minimal risk (Fisher and Panicker, 2005).
“It gets us to focus first on whether a study involves any reasonably foreseeable sources of harm or any reasonably foreseeable types of harm,” Breckler explained. “In the absence of any reasonably foreseeable sources or types of harm, we are done. We have a minimal risk study.” On the other hand, he explained, if possible sources or types of harms do exist, the flowchart points to a new set of questions: What are those harms? What are their probabilities and magnitudes? Are the probabilities and magnitudes typical of the harms found in daily life? If so, then the study is of minimal risk.
A major advantage of such an approach, Breckler said, is that it forces a researcher or an IRB to focus on both probability and magnitude. These are difficult issues, he noted, and ones that IRBs very often fail to understand, but they are important. If the researcher determines that the probability and magnitude of the reasonably foreseeable harm are greater than those encountered in daily life, then the flowchart points to questions about how the protections in the study can reduce the risk to the level that would be encountered in daily life. This balance between the risk of harms and the protections included in the study is key to determining minimal risk, Breckler said. “This point is often lost on IRBs—that it’s possible to mitigate reasonably foreseeable harms with protections that render those potential harms as minimal risk.” That is why decision-making tools of this sort can be so valuable in determining which studies are minimal risk, he suggested.
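The step-by-step logic of the flowchart Breckler described can be sketched as a small decision function. This is a hypothetical rendering of the flowchart’s overall structure, not the actual Fisher and Panicker (2005) tool; the function and parameter names are invented for illustration.

```python
# Hypothetical sketch of a minimal-risk decision procedure in the spirit of
# the flowchart Breckler described; names and structure are illustrative only.

def is_minimal_risk(foreseeable_harms, typical_of_daily_life, mitigated_by_protections):
    """
    foreseeable_harms: list of reasonably foreseeable sources or types of harm.
    typical_of_daily_life: dict mapping each harm to True if its probability
        and magnitude are typical of those encountered in daily life.
    mitigated_by_protections: dict mapping each harm to True if the study's
        protections reduce it to a daily-life level.
    """
    # Step 1: no reasonably foreseeable harm -> minimal risk.
    if not foreseeable_harms:
        return True
    for harm in foreseeable_harms:
        # Step 2: are the harm's probability and magnitude typical of daily life?
        if typical_of_daily_life.get(harm, False):
            continue
        # Step 3: do the study's protections reduce the risk to that level?
        if mitigated_by_protections.get(harm, False):
            continue
        return False  # an unmitigated, above-daily-life harm remains
    return True

# Example: one foreseeable harm (distress from sensitive questions) that the
# consent and referral procedures are judged to mitigate.
print(is_minimal_risk(
    ["distress from sensitive questions"],
    {"distress from sensitive questions": False},
    {"distress from sensitive questions": True},
))  # -> True
```

The sketch mirrors the point Breckler emphasized: a harm that exceeds the daily-life threshold does not by itself disqualify a study from minimal-risk status if the study’s protections bring that risk back down to a daily-life level.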
The possibility of developing and using such decision-making tools suggests, Breckler said, “that there is hope that the Common Rule and all of the guidance that goes with it can be revised without introducing wild new interventions.” Some of the problems with the system may be less about the rules and regulations themselves and more about the availability of “clear, useful, and pragmatic guidance and tools.”
In closing, Breckler referred both to Citro’s comment that there is no need to reinvent the wheel and to the data presented by Rodamar suggesting that researchers are not particularly dissatisfied with the current IRB system. All of this, he said, suggests “that the regulations can be improved and that they should evolve but that draconian changes may not be needed.”
Charles Plott, of the California Institute of Technology, spoke on risks and harms in economics and related areas and asked whether it might be possible to exempt all research in certain areas from IRB review. As an economist dealing with mathematical economics, experimental economics, and political science, Plott became interested in the question of whether and what types of harms might occur in the areas of economics, political science, game theory, and judgment and decision making. “I suspect that there are no risks and no harms associated with experimental research in these areas,” he said. “The questions are: What are the researchers doing? How do they avoid risks and harms? Is there anything special about these particular research areas? Are there analogies with other areas that provide hints about the limitations on exposure to risk?”
Research in economics and political science is particularly interesting to examine in the context of risks and harms, Plott commented, because these fields are different from the medical sciences and because they account for a tremendous amount of the research being carried out in the behavioral and social sciences. Economics and political science, for example, are large areas with many researchers working in them, he said, and much of what is studied in business schools can be found in these areas, including operations research, management science, economics, applied economics, and antitrust studies.
Furthermore, the research done in these areas has significant effects on society, he noted. For example, cell phone licenses are sold using a process based on many years of research on the best ways to carry out auctions of complex goods. The Kidney Exchange, a program that allows transplant kidneys to be “traded” so that their recipients get the best possible matches, was designed using basic science and experiments in economics. Pollution permit markets, the auctioning of toxic assets in financial markets, and the buying of network access for such things as phones and electricity are, Plott added, additional examples drawn from research in these areas. “These are large, important areas in which billions of dollars’ worth of decisions are made,” Plott said, and they depend upon the experimental use of human subjects.
Risks and Harms in Economics and Related Areas
To examine the risks and harms posed to subjects in the areas of economics, political science, game theory, and judgment and decision making, Plott surveyed major researchers and laboratories in these areas, as well as members of the Society for Judgment and Decision Making. He asked three questions: What are the potential subject risks and harms
that exist in these sciences? What are the experiences of these particular scientific communities with respect to potential harms? Are there other scientific disciplines that have similar features concerning risks and harms? He asked this last question, he explained, because a number of areas of science, including sociology, psychology, and social psychology, have similar features with respect to risks and harms, and it might be possible to share the lessons learned.
Plott’s survey of major research groups and researchers in economics and political science drew 30 respondents, who together reported on experiments in which more than 104,000 subjects participated. Across this set of studies there was only a single adverse event, and there were no reports of harm and no reports of risk of any kind: physical, psychological, social, or informational.
Plott also surveyed members of the Society for Judgment and Decision Making. Eighty-five respondents reported on studies in which a total of 680,000 people participated. No adverse incidents were reported, but there were 73 reports of harm. Of these, 60 came from one researcher, who characterized the harm as “stress due to negative feedback about personal performance.” The other 13 reports of harm in the survey were varied and very minor in nature. For example, one person could not understand the nature of lotteries, became very frustrated, and asked to leave the experiment. Some people complained about photos they were shown, and another felt guilty about defecting in a study involving a prisoner’s dilemma. There were complaints about equipment that did not work. “One person was irritated because he was asked about the value of life,” Plott said, adding that “another … was upset because there was a mix-up on the addresses. That is it.”
If there were true risks involved in such research, Plott said, they would show up in a study that involved this many people. As it was, the only harms reported were extremely mild, and Plott suggested that they are not what would be considered “real harms.”
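Plott’s point about sample size can be made concrete with some back-of-the-envelope arithmetic. The following sketch is purely illustrative, using only the figures quoted above, and the variable names are of course my own:

```python
# Illustrative rates implied by the survey figures Plott reported.
econ_subjects = 104_000   # economics/political science survey participants
econ_adverse = 1          # the single adverse event reported

jdm_subjects = 680_000    # Society for Judgment and Decision Making survey
jdm_harms = 73            # all characterized as very minor

econ_rate = econ_adverse / econ_subjects
jdm_rate = jdm_harms / jdm_subjects

print(f"Adverse-event rate, economics/political science: {econ_rate:.6f}")
print(f"Minor-harm rate, judgment and decision making:  {jdm_rate:.6f}")
```

Even counting every minor complaint as a “harm,” the reported rate is on the order of one in ten thousand participants, which is consistent with Plott’s argument that true risks would have surfaced in samples this large.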
Part of the reason there is no risk or harm involved in these studies, Plott said, is that the research topics are drawn from daily life. Some studies look at how markets operate, for example, and have participants buy and sell, motivated by financial incentives. Others involve participants in voting or in playing computerized games. He explained that the researchers are “studying processes and the way individuals are coordinated by complex systems of institutions. So the individual never shows up.”
Another reason these sorts of studies carry no risk of harm, Plott continued, is that the methods used generally involve no risk. Research into judgment and decision making, for example, primarily relies on questionnaires. Other research uses computer and Internet games that
have no consequences for the participants, aside from the possibility that they may win money. Before they participate, the subjects know what to expect, he observed. They are trained and tested on the rules “because understanding the rules is a primary reason for doing the research to start out with,” he explained. Plott added that in experimental economics there is a belief that deception should not be used in designing studies because it could affect the subjects’ trust of the researchers. Finally, no confidential data are collected from the participants beyond what is needed for accounting.
One possible exception to the general rule that the methods pose no risk of harm, Plott said, is studies that use functional magnetic resonance imaging to observe people’s brains as they respond to stimuli and make decisions. However, this technology is used only in a small percentage of studies in these areas.
Exempting Large Areas of Research
Plott argued that the possibility of an exemption or an excused category should be pursued. “We should ask ourselves,” he commented, whether there “are large areas that might not be part of this [IRB] process.” He added that, “if no evidence of risk or harm exists, then the appropriate techniques, methods, areas, and fields of the social sciences might be identified and exempted, or excused.” These considerations may hold not just for economics, political science, game theory, and judgment and decision making, he said. “I suspect that many social sciences have similar types of features and themselves should be exempt or excused, depending on what those categories are,” he noted. “Understanding risk and harm means recognizing when they do not exist,” Plott added. “So maybe that’s one of the places we might start: Are we dealing with areas of research where risks to subjects do not exist? If so, we should identify them and move from there,” he concluded.
References
Blum, R.W. (2009). Young people: Not as healthy as they seem. Lancet 374(9693):853–854.
Centers for Disease Control and Prevention. (2012). Compendium of evidence-based HIV prevention interventions. Available: http://www.cdc.gov/hiv/topics/research/prs/evidence-based-interventions.htm [July 2012].
Eaton, D.K., L. Kann, S. Kinchen, et al. (2011). Youth risk behavior surveillance—United States, 2011. Morbidity and Mortality Weekly Report: Surveillance Summaries 61(4):1–162.
Feigelman, W., and B.S. Gorman. (2010). Prospective predictors of premature death: Evidence from the National Longitudinal Study of Adolescent Health. Journal of Psychoactive Drugs 42(3):353–361.
Fendrich, M., A.M. Lippert, and T.P. Johnson. (2007). Respondent reactions to sensitive questions. Journal of Empirical Research on Human Research Ethics 2:31–37.
Fisher, C.B., and S. Panicker. (2005). Assessing and balancing research risks in social, behavioral, and educational research. Presentation at the 25th Annual HRPP Conference, December 4–6, Boston, MA.
Fisher, C.B., S.Z. Kornetsky, and E.D. Prentice. (2007). Determining risk in pediatric research with no prospect of direct benefit: Time for a national consensus on the interpretation of federal regulations. American Journal of Bioethics 7:5–10.
Fisher, C.B., D.J. Brunnquell, D.L. Hughes, V. Maholmes, P. Plattner, S.T. Russell, S. Liben, and E.J. Susman. (2013). Preserving and enhancing the responsible conduct of research involving children and youth: A response to proposed changes in federal regulations. Social Policy Report 27(1):1, 3–15.
Jacomb, P.A., A.F. Jorm, B. Rodgers, A.E. Korten, A.S. Henderson, and H. Christensen. (1999). Emotional response of participants to a mental health survey. Social Psychiatry and Psychiatric Epidemiology 34:80–84.
Jewkes, R., Y. Sikweyiya, M. Nduna, N.J. Shai, and K. Dunkle. (2012). Motivations for, and perceptions and experiences of participating in, a cluster randomized controlled trial of an HIV-behavioral intervention in rural South Africa. Culture, Health and Sexuality 14:1167–1182.
Kuyper, L., J. de Wit, P. Adam, and L. Woertman. (2012). Doing more good than harm? The effects of participation in sex research on young people in the Netherlands. Archives of Sexual Behavior 41:497–506.
Langhinrichsen-Rohling, J., C. Arata, N. O’Brien, D. Bowers, and J. Klibert. (2006). Sensitive research with adolescents: Just how upsetting are self-report surveys anyway? Violence and Victims 21:425–444.
Mustanski, B. (2011). Ethical and regulatory issues with conducting sexuality research with LGBT adolescents: A call to action for a scientifically informed approach. Archives of Sexual Behavior 40(4):673–686.
Mustanski, B. (2013). Risks and harms in the context of research with LGBT youth. Presentation to the National Research Council Workshop on Proposed Revisions to the Common Rule in Relation to the Behavioral and Social Sciences, Washington DC. Available: http://www.tvworldwide.com/events/nas/130321/# [June 2013].
Santelli, J.S., A. Smith Rogers, W.D. Rosenfeld, et al. (2003). Guidelines for adolescent health research. A position paper of the Society for Adolescent Medicine. Journal of Adolescent Health 33:396–409.