The Return of Individual-Specific Research Results from Laboratories: Perspectives and Ethical Underpinnings1
Over the past two decades one of the more challenging ethical questions in research has concerned what obligations investigators have, if any, to share information with those who serve as research subjects. Should they share aggregate results once the study is complete? If so, should this occur pre- or post-publication? Should they share incidental findings that, while not part of the study’s objectives, could carry important health implications for an individual subject? If so, how important, how well verified, and how actionable must such incidental findings be to warrant the extra effort of (re)contacting that particular subject? And what about individual-specific results? If those should be shared, are there limits, or must all person-specific data be individually shared? And should it be shared incrementally as it is gathered, or only upon completion of the study?
In addressing such questions, a critical issue concerns the theoretical ethical basis on which the answers are determined. An assertion that “Of course we must (or must not) share X” is vacuous if not supported by a persuasive “because.” In recognition of that, a variety of theories have emerged. As briefly described below, some are grounded in the relationship between investigators and their research subjects and propose varying obligations on that basis. Others look to more basic concepts such as the rule of rescue, the duty to warn, or a “common humanity” duty to be helpful.
1 A white paper commissioned by the National Academies of Sciences, Engineering, and Medicine’s Committee on the Return of Individual-Specific Research Results Generated in Research Laboratories, written by Haavi Morreim, J.D., Ph.D., University of Tennessee.
Unfortunately, as this paper will argue, these theories largely turn out to be little more than collections of intuitions, flat assertions, and thinly supported inferences. Nevertheless, that does not mean that we can only shrug helplessly. As explained by Jonsen and Toulmin (1988) many years ago, we need not always agree on our theories in order to reach reasonable consensus regarding what to do in a given situation.
As we proceed, a few caveats can be noted. First, a definition or two. An incidental finding (IF) is commonly defined as a “finding concerning an individual research participant that has potential health or reproductive importance and is discovered in the course of conducting research but is beyond the aims of the study” (Wolf et al., 2012, p. 364). An individual research result (IRR) is a “finding concerning an individual contributor that has potential health or reproductive importance and is discovered in the course of conducting research and is not beyond the aims of the study” (Wolf et al., 2012, p. 364). Although the current NAS project focuses mainly on the return of individual results, this paper encompasses the entire range because theories about ethical underpinnings are essentially the same across the board. That is, the focus here is on return of results (RoR) generally.
A second caveat is that for any proposed theory to tell us how we should manage the returns of research results, numerous practical questions then arise concerning how best to craft an adequate consent process up front, how much funding should be built into research projects to cover the costs of (re)identifying and (re)contacting subjects, whether and in what ways an apparently important finding should be verified before sharing, how best to share news that the person may find difficult to hear, etc. These issues, though important, are not the focus of this paper.
Third, although this paper refers to research subjects and questions about whether and when their individual research results or incidental findings should be returned to them, technically such persons may not be research subjects at all. Once one’s information—e.g., genetic information stored in a biobank—has been de-identified, the use to which that information is put is no longer defined as research, and the person who contributed it is no longer deemed by the Common Rule to be a research subject (Richardson and Cho, 2012; Wolf, 2013; Wolf et al., 2012). Nevertheless, economy of language suggests that we refer here to “investigators” and research “subjects” or “participants.”
Fourth, the focus here is limited to a narrow, fundamental question: When, if ever, is returning results, whether IFs or IRRs, morally imperative for all human subjects research, solely by virtue of the fact that it is research and it involves human beings—and if so, why would such returns be morally required?
Note that discussing whether RoR is morally imperative is not equivalent to asking whether it is ever permissible, perhaps even desirable, to share results. Surely there are very good reasons to support sharing. Many projects move forward far better when participants are active as partners. For some particularly
devastating diseases, people banding together in a mode of “entrepreneurial philanthropy” may require that, as a condition of receiving funding and other assistance from the group, researchers must agree to share whatever they learn with the scientific community as a whole and individually with participants as well. Building on each other’s work can then promote progress more quickly than if investigators operate in secretive separate silos, desperately competing to see who publishes first. Similar utilitarian considerations support building RoR into other kinds of research. It may, for instance, be difficult to recruit enough people to participate in a research effort unless they are promised that they will learn what the scientists ultimately learn, perhaps including their own personal results. In such cases RoR happens not because of any global moral imperative, but via express decisions to incorporate RoR into the protocol. In essence, returning results becomes a kind of contractual right. That said, this paper explores only the narrower questions whether and why RoR might be imperative in any human subjects research.
Finally, an important regulatory issue could upend even the most thoughtful discussion. Its resolution must be regulatory, not philosophical, hence the issue will be only briefly noted here. According to the Secretary’s Advisory Committee on Human Research Protections (SACHRP, a unit of the Department of Health and Human Services), a conflict has emerged between regulations pursuant to the Health Insurance Portability and Accountability Act (HIPAA) of 1996 and mandates rooted in the Clinical Laboratory Improvement Amendments (CLIA) of 1988 (OHRP, 2015). On one hand, HIPAA can effectively preempt questions about whether to return individual research results because it requires that individuals gain access, upon request, to any records generated by a HIPAA-covered laboratory. The adverse implications for single- or double-blinded research are obvious. At the same time, CLIA prohibits returning results from non-CLIA-certified laboratories—which are used in many research projects. Thus, a non-CLIA lab existing within a HIPAA-covered entity is in an impossible predicament: it both must return, and must not return, individual results. Having noted this potential regulatory snag, we turn to various proposed ethical underpinnings that would require RoR.
EVOLVING CONSENSUS PERSPECTIVES
We begin by recalling that sometimes it is possible to reach a consensus even without agreeing on its moral basis. A number of working groups have produced powerful consensus documents over the years. Among the first was the National Bioethics Advisory Commission (NBAC) in 1999 (NBAC, 1999; see also Wolf et al., 2012). NBAC recommended returning only those genetic or genomic research findings (whether IFs or IRRs) that are scientifically valid and confirmed and which have significant health implications and a readily available treatment. Similarly, a 2001 paper sponsored by the Centers for Disease Control and Prevention
proposed that IRRs in population-based research should only be returned when they are valid and when a proven intervention is available for reducing risk (Beskow et al., 2001; see also Wolf et al., 2012).
In 2006 a working group for the National Heart, Lung, and Blood Institute (NHLBI) produced a position paper holding that genetic results should be returned “when the associated risk for the disease is significant; the disease has important health implications such as premature death or substantial morbidity or has significant reproductive implications; and proven therapeutic or preventive interventions are available” (Bookman et al., 2006, p. 1033; also see discussion in Jarvik et al., 2014). And in 2008 a symposium sponsored by the National Institutes of Health (NIH)/National Human Genome Research Institute published recommendations regarding IFs, distinguishing among “should return” (strong net benefit), “may return” (possible net benefit), and “should not return” (unlikely net benefit), with the recommendation in each category dependent on the degree of analytic and clinical validity and on the likelihood that reporting could actually make an important health difference in the person’s life (Wolf et al., 2008).
In 2010 a follow-up NHLBI group offered updated guidelines, suggesting that genetic research results should be returned to study participants if the information is valid, has important health implications, is actionable (with established therapeutic or preventive interventions available), and the participant has consented to receive it. This group likewise distinguished among “should return,” “may return,” and “should not return” (Fabsitz et al., 2010; see also Wolf et al., 2012).
A 2-year NIH project focused on biobanks and archived datasets, evaluating responsibilities to return results, whether they were IFs or IRRs. In 2012 the members of that project offered the CARR approach: “(1) clarify the criteria for evaluating findings and the roster of returnable findings, (2) analyze a particular finding in relation to this, (3) reidentify the individual contributor, and (4) recontact the contributor to offer the finding” (Wolf et al., 2012).
A year later the American College of Medical Genetics and Genomics (ACMG) recommended that all clinical laboratories that conduct genetic sequencing should seek out and report pathogenic mutations for 56 specified genes (Green et al., 2013; Jarvik et al., 2014; McGuire et al., 2013). Importantly, the group believed that this information should be provided regardless of the patient’s preferences. “[I]n selecting a minimal list that is weighted toward conditions where prevalence may be high and intervention may be possible, we felt that clinicians and laboratory personnel have a fiduciary duty to prevent harm by warning patients and their families about certain incidental findings and that this principle supersedes concerns about autonomy” (Green et al., 2013, p. 6).2
2 With this recommendation in mind the Working Group emphasized the importance of talking with patients in advance about the possibility of uncovering certain kinds of important genetic findings.
Meanwhile, in 2014 the Clinical Sequencing Exploratory Research Consortium and the Electronic Medical Records and Genomics Network—multisite research programs—offered a consensus statement regarding practical strategies for when to return genomic results to research participants (Jarvik et al., 2014). Their principles included
- research differs significantly from clinical care, hence standards of disclosure differ;
- researchers have no duty to use limited funds affirmatively to hunt for actionable genomic findings;
- analytically and clinically valid information should, if actionable and important, be returned to research participants; and
- participants have a right to refuse such information.
Across the board, certain themes are common. The finding, whether an IF or IRR, must be analytically and clinically valid. Speculative possibilities do not warrant return. The finding must be important to the person’s health, although there is not universal agreement about whether results implicating reproductive decisions should be returned (Fabsitz et al., 2010; see also Wolf, 2013). And the results should be actionable, meaning that there must be some sort of meaningful intervention that can prevent or at least ameliorate the course that would likely occur without the information and intervention.
Fairly broad agreement about what to do, then, appears possible. The reasons why we might embrace such conclusions, however, are open to far greater dispute.
PROFFERED BASES OF INVESTIGATORS’ OBLIGATIONS
A. Bases with Little Support
Two potential justifications for requiring researchers to share IFs and IRRs have little support. First, although empirically it seems fairly well established that many people would like to receive such results, the bare fact that that desire exists does not, of itself, mean that investigators must ipso facto comply. The reasons are numerous. Returning results can be costly, from the process to verify whether an apparent result is clinically valid to the challenges in re-identifying someone whose data has been anonymized and the difficulties of locating someone whose contact information may have changed. Moreover, even if someone has said “I want all the information,” such a broad statement does not necessarily tell us what the person’s more nuanced preferences would be, under more specific circumstances (Beskow and Burke, 2010).
Second, the goals of research are very different from those of clinical medicine. Although investigators have obligations to protect research subjects from harm, those obligations stem from a very different relationship. Unlike the case
for a clinical physician–patient relationship, there is very little enthusiasm for the notion that investigators could be deemed fiduciaries of subjects (Clayton and McGuire, 2012; Miller et al., 2008; Morreim, 2005; Richardson and Belsky, 2004; Wolf, 2012). Whereas a physician’s loyalty and primary obligation are to promote the patient’s best interests, as in classic fiduciary relationships, the investigator’s primary allegiance is necessarily pinned on something else—namely, on the science: high-quality methods, data and inferences.
[T]he physician owes the patient a robust duty of clinical care. The physician’s goal is to serve the patient’s interests. A great deal follows from this, including informational obligations to disclose to the patient the diagnosis, treatment options, and other information material to treatment decisions. However, on the research side, the researcher’s core goal is to seek generalizable knowledge for the benefit of the many. The researcher owes a much thinner duty of clinical care, focused on averting and addressing research-caused harm. (Wolf, 2013, p. 561)
B. Investigator–Subject Relationship
More commonly, ethical analyses of investigators’ obligations to return IFs and IRRs have relied on particular conceptions of the investigator–subject relationship, from which specific ethical obligations are then said to flow. Several such theories have emerged, and we begin with the one most commonly described: partial entrustment. Critical evaluation of this and the other theories is reserved for Part IV.
1. Partial Entrustment
The theory of “partial entrustment,” articulated by Richardson and colleagues (Richardson, 2008; Richardson and Belsky, 2004; Richardson and Cho, 2012; Richardson et al., 2017), proposes that “participants permit researchers access to their private data, specimens, and bodies, access that researchers otherwise would not have. This grant of access represents an act of partial entrustment (‘partial’ because participants are not fully entrusting their medical welfare to the researcher, as they would to a clinician)” (Wolf, 2013, p. 561). Investigators therefore shoulder certain duties of ancillary care—not clinicians’ full duties of care, but not the “no duty of care” that we attribute to pure scientists (Wolf, 2013, p. 561).
“The model’s core argument is this: Having gotten the participants to waive their rights against such access to private aspects of their bodies, the researchers obtain special responsibilities to look after the fundamental values that those rights normally protect” (Richardson and Cho, 2012, p. 470). That core argument stems from two basic realities: participants’ vulnerability and investigators’ discretion (Richardson and Belsky, 2004). Participants authorize the researcher to
“employ significant personal judgment in deciding how to act on the behalf of something the beneficiary cares about,” so that “how the entrusted person chooses to exercise this discretion may considerably affect the beneficiary’s wellbeing.” Participants allow researchers “to collect confidential medical information about them; to touch, poke, or cut them; to collect bodily samples from them; or to undertake medical procedures on them. In addition, they may agree to give up some of their normal control over their own health, as happens if they agree to participate in blinded studies or in psychiatric drug trials involving washout phases” (Richardson and Belsky, 2004, pp. 27–28).
Such “broad discretionary control over someone’s wellbeing” also means the investigator is forbidden conflicting loyalties, hence “will count as trustees and take on a trustee’s fiduciary obligation to decide matters solely on the basis of the beneficiary’s best interests” (Richardson and Belsky, 2004, p. 28). The situation is analogous to the old legal concept of a bailment: someone who has accepted custody of another’s property (or here, specified areas of one’s body and privacy) has accepted an accompanying responsibility to take due care to protect that property, and must use one’s superior position to discern how best to protect the vulnerable one (Richardson and Belsky, 2004).
The moral obligations arising from such entrustment are compassion, engagement, and gratitude. Compassion requires being attentive and responsive to the person’s needs; engagement means engaging with research participants as whole people and not limiting the relationship just to the research interaction; and gratitude can require recognizing participants’ other health needs (Richardson and Belsky, 2004). The resulting duties include returning any IFs or IRRs that could make a difference to the participant (Richardson and Cho, 2012)—so long as those results fall within the range of entrustment—and, beyond this, providing medical care for any health conditions that are discovered within the range of entrustment. All such duties, however, are said to be constrained by various factors that affect the strength of the participant’s claim: the degree of the participant’s vulnerability, dependence on the research team for receiving care, the intensity of the engagement between investigator and participant, the level of gratitude the investigators owe participants, and the costs to the research enterprise that would arise as investigators try to honor their obligations (Richardson and Cho, 2012).
2. Professional Relationship
Miller, Mello, and Joffe (2008) also offer a relationship-based rationale for investigators’ obligations to return findings, but with broader roots and narrower obligations than the partial entrustment model. Rather than focusing only on the investigator–participant relationship, they reflect on professional relationships generally. A professional is
a person who possesses specialized knowledge, whose work involves the frequent exercise of discretion, and who can claim membership in a learned profession with a regulatory structure and ethical code of conduct. The hallmarks of a professional relationship are that the professional is entrusted by another with access to private information and/or other domains of individual privacy, such as the home or the body. Professional relationships are often, though not always, characterized by a service role, and may, but do not necessarily, involve a fiduciary relationship. (Miller et al., 2008, p. 274)
Research subjects entrust their bodies and private information to investigators who are professionals with enhanced capacities to recognize the significance of such things as incidental findings, which in turn shapes the obligation to respond to them. Miller et al. (2008) provide the example of a plumber who enters someone’s basement and sees signs of termites. His professional relationship with the homeowner and his superior ability to recognize this problem, it is suggested, create an obligation to disclose this finding. Similarly, a company physician performing a work physical on a prospective employee is not the fiduciary of that person. And neither is an insurance physician examining an injured person to determine how large the insurance payment should be. But in those cases, too, the professional’s greater capacity to recognize a problem—e.g., an aortic aneurysm—combined with the privileged access to private information that the person has granted, creates an obligation for that professional to share important findings with the vulnerable person.3
Miller et al. (2008) distinguish their approach from Richardson’s partial entrustment. Whereas Richardson focuses solely on the investigator–subject relationship, Miller et al. derive obligations for any professional relationship in which privileged access to private matters has been conveyed. More narrowly, however, Miller et al. do not demand that the investigator actually care for the health of the research participant (within the domain of entrustment). Conveying one’s findings is one thing; taking on clinical care responsibilities is quite another. This is because clinical research does not aim to promote participants’ health, hence participants are not entrusting their health to the investigator. Nevertheless, a professional relationship plus privileged access to private information provides sufficient basis, they argue, to warrant an obligation to return IFs. Although their writing does not specifically address IRRs, it is reasonable to suppose the same rationale would warrant returning IRRs, or at least those that are valid, important, and
3 “We argue that if (but not only if) A is in a professional relationship with B, such that A has consensual access to private information bearing on the welfare of B, then A has a limited obligation to intervene to help B based on incidental findings outside the scope of the contractual professional relationship. In contrast, when A and B are strangers, unless the conditions that trigger the rescue principle apply, the fact that A detects a potential problem pertaining to B does not give rise to an obligation to help” (Miller et al., 2008, p. 276).
actionable. Additionally, the distinction between whether the result was within the research aims or incidental to them would seem superfluous for Miller et al.
3. Additional Relationship-Focused Theories
Several other approaches are somewhat less far-reaching than the ones discussed so far, but nevertheless base an obligation to return IFs and IRRs on some aspect of the investigator–subject relationship. One such view deems the investigator and the subject to be partners working toward a common goal. In this view, research participants are not simply disenfranchised providers of material; they are in some sense actively collaborating on the project and should be treated as such (Kohane et al., 2007; Partridge and Winer, 2002).
Another perspective suggests that the fundamental bioethical principle of “respect for persons” requires that investigators bear special obligations to treat their subjects in certain ways—for instance, to exhibit gratitude for the subjects’ contributions. Shalowitz and Miller maintain that respect for research participants requires, at minimum, that investigators should not coerce or deceive the participants and that they must obtain informed consent to honor participants’ self-determination (Shalowitz and Miller, 2005). Accordingly, with respect to IFs and IRRs, “[i]t would be disrespectful to treat research volunteers as conduits for generating scientific data without giving due consideration to their interest in receiving information about themselves derived from their participation in research” (Shalowitz and Miller, 2005, p. 738). Sharing IRRs respects self-determination, permitting subjects to use such information for their health care and giving special consideration to the information those subjects helped to generate. Indeed, investigators should not merely respond to requests for information sharing; with IRB oversight they should affirmatively invite such requests (Shalowitz and Miller, 2005). The obligation is not absolute, however, as IRRs could appropriately be withheld if their disclosure might compromise someone’s safety (for instance, in cases of misattributed paternity).
Finally, Illes et al. (2006) extend the theme of respect for participants’ autonomy and interests to encompass the need to recognize participants’ generosity with appropriate reciprocity. Investigators can only proceed with their scientific mission if they receive subjects’ contributions, and it is only right to recognize that in some concrete way. Here, reciprocity is said to require communicating those findings that may affect participants’ health or, at the very least, to share aggregate findings (Clayton and McGuire, 2012; Ossorio, 2012).
C. Obligations Not Based on the Investigator–Subject Relationship
Several commentators suggest that we need not refer at all to the investigator–subject relationship in order to find an obligation to return certain IFs or IRRs. The duty to warn, for instance, comes from the age-old principle that if one
person sees that another is unwittingly about to enter a grave danger that he or she quite likely would not voluntarily embrace, then the person seeing the danger has an obligation to warn the other. John Stuart Mill, in On Liberty, gives the famous bridge example: If a person sees that someone is unwittingly about to cross a bridge that is terribly unsafe, it may even be acceptable to “seize him and turn him back, without any real infringement of his liberty; for liberty consists in doing what one desires, and he does not desire to fall into the river” (Mill, 1859, p. 57).
In the research setting no “seizing” is contemplated, but only a duty to inform someone of a serious, validated, actionable hazard.4 Indeed, as noted above, the ACMG concluded that investigators need not even obtain subjects’ prior consent to be warned about serious incidental findings. Rather, subjects should be counseled, in advance, that if such IFs are found, they will be relayed.5
The rule of rescue is a somewhat broader, also very basic moral precept. Beskow and Burke (2010) emphasize that the “duty to rescue is based on the premise that, when confronted with a clear and immediate need, an individual who is in a position to help must take action to try to prevent serious harm when the cost or risk to self is minimal” (Beskow and Burke, 2010, p. 1). It applies mainly if not exclusively to rather dire situations (Miller et al., 2008). If an investigator discovers, e.g., that a research participant has a gene that carries a high risk of early-onset colorectal cancer in the absence of any family history for that disease, then conveying that information to that participant could be life-saving.
Rescuing is typically a more involved process than merely warning. The rescuer may himself incur cost or risk if the rescue is to be successful. Hence, ordinarily the rule of rescue is said to apply only when the burden on the rescuer is minimal (Beskow and Burke, 2010). “Although the duty to rescue is a legal concept, our intent is to propose an ethical underpinning for what participants have called basic ‘human decency’ when discussing researchers’ obligations concerning genetic information” (Beskow and Burke, 2010, p. 2). These cases, it is suggested, will be “exceptionally rare” (Beskow and Burke, 2010, p. 2).
The duty to help, or to be helpful, is a still broader concept implying an obligation to produce positive benefit, not just to avoid a clear and imminent harm. The principle applies when we can be of great benefit to someone else, without significant sacrifice to ourselves (Miller et al., 2007). Ossorio, for instance,
4 In the Tarasoff case, somewhat similarly, the California Supreme Court found that a mental health professional had a duty to warn a family about an imminent threat of grave danger posed by a patient. Tarasoff v. Regents of the University of California, 17 Cal. 3d 425, 551 P.2d 334, 131 Cal. Rptr. 14 (Cal. 1976).
5 See Green et al. (2013): “The Working Group therefore recommended that whenever clinical sequencing is ordered, the ordering clinician should discuss with the patient the possibility of incidental findings, and that laboratories seek and report findings from the list described in the Table without reference to patient preferences. Patients have the right to decline clinical sequencing if they judge the risks of possible discovery of incidental findings to outweigh the benefits of testing.” See also Wolf et al. (2008, p. 229), discussing duty to warn of foreseeable harm.
examines the duty to help as grounding a duty even for secondary researchers (those working with tissues or data gathered by others) to return certain kinds of findings, so long as doing so poses little or no risk or burden to the helper, and does not interfere with that person’s legitimate aims (Ossorio, 2012). Ossorio cites philosophers Frances Kamm and Thomas Scanlon: “[I]f a person can be of great help to somebody else (i.e., save her a great deal of time, money, irritation) in pursuing an important life project, at essentially no cost/burden to the helper, it would be wrong not to help absent a compelling reason not to help” (p. 462).
Across the duty to warn, the rule of rescue, and the duty to help, the unifying theme seems to be common decency, or a shared sense of common humanity. There are some things we do for each other, simply because we are moral beings who can, do, and in some sense must think beyond our own selfish interests.
A related but somewhat distinctive approach, stemming partly from contractual elements of the investigator–subject relationship, is the concept of stewardship (Ossorio, 2012; Richardson and Cho, 2012). Someone who shares his time and information and even permits bodily invasion should legitimately be able to expect at the least that the terms on which he shared will be honored with due care: that a tissue specimen will not be wasted or lost; a biobank will store samples at proper temperatures; the analysis will not be so poorly done that it is useless; and in general, the promises made by those who asked for subjects’ participation will be kept, and a fruitful research effort will be pursued.
CRITIQUE OF THE PROFFERED BASES

If we are to conclude that investigators are morally required to return IFs or IRRs under certain circumstances, then we should be able to adduce fairly forceful reasoning to support that supposed obligation. “Well, isn’t it just obvious?!” is not good enough. A flat assertion—a “Hey, presto!” move, as one of my professors in graduate school used to call it—is insufficient. Unfortunately, many of the theories discussed in Part III rely heavily on “Hey, presto.”
Let us begin with Richardson et al.’s theory of partial entrustment. As noted above, it embraces several core moves:
(1) participants waive certain rights to privacy and bodily integrity, rendering themselves vulnerable;
(2) such waivers are defined and circumscribed by the informed consent;
(3) these waivers grant investigators discretion over the health of participants, within the identified range;
(4) hence, such waivers count as partially entrusting one’s health to the researchers, within the identified range of waiver;
(5) therefore, investigators have obligations to care for participants’ health, within the waiver and discretion; and
(6) the strength of those obligations can be adjusted by such factors as cost, degree of vulnerability, etc.
Virtually all of these moves are open to challenge, once we get past (1) and (2). Although research participants do waive certain rights and grant certain permissions as specified in the consent form, by no means does this grant researchers vast discretion over their health. In reality, most research protocols afford investigators very little discretion, because high-quality medical science commonly involves tight controls. Protocols are designed to control as many variables as possible, so that the results in the end can be attributed specifically to the factors under study. Thus, for instance, subjects in a Phase III trial of a new hypertension drug will commonly be limited to people who have only hypertension—not also diabetes, heart failure, cirrhosis, and cancer. The more variables are at play—the more complex the enrolled subjects’ health—the less it will be possible to pin the results just on the new drug. So the protocol keeps the variables to a minimum.
In this sense, the more scientifically pristine (well controlled) the study, the less discretion the investigator actually has. He or she might perhaps have discretion to decide which laboratory personnel to hire, or perhaps which shipping company will carry specimens to an out-of-state laboratory. The investigator may or may not have the discretion to decide which laboratory equipment to use. After all, where the choice of laboratory equipment could affect results, especially in a multisite study, individual investigators may have no discretion at all to deviate from the protocol-specified laboratory equipment. As another example, a genetic study that simply seeks to list associations between certain genotypes and certain phenotypes may grant the investigator no discretion whatever over participants’ health. It is one thing to have discretion over certain processes in the research, and another thing entirely to have discretion over someone’s health. To move from the former to the latter is simply a non sequitur.
Even where investigators do have some discretion that can affect participants’ health, that leeway will ordinarily be closely limited. For instance, an investigator may need to make a judgment call when it is not clear whether someone is eligible to enter a study. Perhaps the blood pressure fluctuates between being “high enough” and “not quite high enough.” This, however, hardly amounts to direct discretion over that person’s health. It simply addresses the question whether the person can enter this particular study and incur whatever potential risks or benefits the study carries. At a later stage, if an enrolled participant experiences problems related to the study, investigators typically have only two sorts of health-affecting discretion: whether to remove the person from the study entirely, or whether to avail oneself of protocol-permitted ancillary care. A drug study might, e.g., permit symptom-relieving medications for a cold, even while forbidding other medications during the course of the trial.
These limited forms of discretion hardly amount to vast control over someone’s health, or even over the person’s health within the scope of the trial. After all, research avowedly does not seek to benefit any particular person. Rather, it seeks generalizable knowledge that is hoped to benefit future persons, even if by fortunate happenstance it also might benefit some of the current study subjects. And the investigator may have little control even over the aspects of health under study. A study that simply, e.g., adds a new seizure medication to one’s usual regimen may leave ordinary care up to the participant’s usual clinical physician, leaving the investigator with little if any discretion over the person’s health, or even his seizure-related health.
In sum, the leap from “I let you look at my genes” to “You looked at my genes, so now you must take care of all my genetic illnesses” or even “Now you must tell me everything my genes say about my health: give me a freebie 23andMe!” simply does not logically follow. Hey, presto.
The gap becomes even clearer when we look at other arenas of information sharing. Many people disclose remarkable amounts of highly private information to friends and to “friends” on social media, or even to a stranger-seatmate on a subway. Such waivers of privacy convey no discretion to the other person, other than presumably the right to tell my friends all about “that crazy person I sat next to on the subway this morning.” Moreover, my sharing intimate, excruciating details of my noxious, oozing skin disease hardly makes you responsible to care for my health. Even if we are friends. Or “friends.”
Richardson and Belsky (2004) actually come close to acknowledging that their schema is built on little more than intuition. As they reject the polar opposites between “Investigators are responsible for every health need of their subjects” at one end and “Investigators are mere scientists with no obligations whatever” at the other, they recognize that these are “intuitive grounds for rejecting polar positions” (p. 26). In the end we are left with intuition posing as some sort of elaborate inference. Richardson and colleagues provide no particular reason for rejecting, as an alternative, Clayton and McGuire’s option of simply stating, in the informed consent, “We will tell you nothing about what we find,” perhaps reserving the option to share an IF or IRR under the most extraordinary circumstances (Clayton and McGuire, 2012).6
6 As a side note it should be observed that Richardson and Belsky err when they claim that the investigator’s relationship with the subject is essentially parallel to a bailment. First, there are many types of bailment relationships (gratuitous bailment, bailment for hire, bailment for mutual benefit, involuntary bailment, etc.; see Black’s Law Dictionary: “bailment”). “A bailment relationship can be implied by law whenever the personal property of one person is acquired by another and held under circumstances in which principles of justice require the recipient to keep the property safely and return it to the owner.” Black’s Law Dictionary, citing 8A Am. Jur. 2d Bailment § 1 (1997). In the research setting, virtually never is it contemplated to return the original property, intact, to the owner. At most, some down-the-road product might, or might not, be returned. Importantly, as we consider what sort of “property” might be returned to a research participant, we need to recognize that bailments are ordinarily defined by contract, and that the specific type of bailment circumscribes the bailee’s duties and discretion. A bailment in which I have entrusted/loaned my car to you for the evening would generally mean you must take reasonable care of it and return it to me at the end of the evening. You do not become responsible for all its mechanical defects—or even the defects that crop up while you are driving it. And yet this is precisely what Richardson and Belsky seem to want: the entrustment to you somehow makes you responsible for the problems that emerge while you use it, at least if you’re using my car for the agreed-on purpose. The analogy quickly falls apart and is best abandoned.
We turn next to the “professional relationship” approach proposed by Miller et al. (2008). As we recall, their theory focuses on professional relationships and privileged access to private information.
We argue that if (but not only if) A is in a professional relationship with B, such that A has consensual access to private information bearing on the welfare of B, then A has a limited obligation to intervene to help B based on incidental findings outside the scope of the contractual professional relationship. In contrast, when A and B are strangers, unless the conditions that trigger the rescue principle apply, the fact that A detects a potential problem pertaining to B does not give rise to an obligation to help. (p. 276)
Thus, if the plumber I hired to work on a problem down in my basement sees termites there, he is obligated to tell me.
The thesis seems to be overkill in several respects, as evidenced by the plumber example. The fact that the person has some sort of expertise does not imply that he or she is a “professional.” And it is not clear on what basis a plumber would have any particular expertise regarding termites. Yet Miller et al. have placed a firm moral duty of disclosure on the poor plumber. Perhaps we can appreciate this overkill better by exploring a series of hypotheticals.
- An employee at a quickie oil change shop may know barely more than the customer, if that, about changing oil. And yet, because my car is on the hoist, he may see something (let’s say, a worn brake line about to rupture) that I am unlikely to see, simply because I don’t spend much time underneath my car. Actually none, if I can help it. Is the oil change guy suddenly to be deemed a professional? And is his look at the underside of my car somehow a “privileged” access? Likely not. It’s just that not many people are likely to spend time underneath my car. That peek at the underbelly is not some sort of sacred conveyance or privilege. Rather, it’s simply a matter of (un)likelihood: so few people are under my car that, if I’m to get any early warning at all about the failing brake line, then the oil change guy pretty much has to be the source. The same goes for my dark, dank, dungeony, now also termite-infested basement—very few people (including me) will spend time there. So if the plumber says nothing about my termites, or if the oil guy doesn’t comment on my about-to-rupture brake line, the consequences could be disastrous.
- Suppose a woman visits a luxury lingerie boutique, staffed by people who are trained to help customers find the best fit for their undergarments. The woman consults with a bra-fitter who sees a mole on the woman’s skin, in a place usually covered by a bra. Because her aunt recently died of melanoma, the bra-fitter knows all too well that this mole is quite possibly a melanoma. Is “bra-fitter” a profession? The two women are not strangers (the woman shops here several times per year), and the bra-fitter has some specialized knowledge, so are they somehow in a “professional relationship”? Hardly. This is quite an ordinary relationship between a customer and a service person. The bra-fitter has “privileged access” only because the woman in this story is modest enough that she does not wear clothing that reveals cleavage. Otherwise the mole would be quite public. But in this case, if the bra-fitter does not say something, it is quite possible the melanoma may remain unrecognized for quite some time. As above, the reality seems to be more that, by happenstance, not many people are likely to see the problem and, equally by happenstance, the bra-fitter recognizes the mole’s foreboding significance. One would hope that the fitter would mention something to the woman. If so, we need not resort to elaborate theories of professions and privilege. It just seems like common human decency.
- Now suppose the woman’s likely-melanoma mole is located instead on her forearm. She is on a plane sitting next to someone who happens to be a dermatologist. They exchange the usual seatmate pleasantries. As the woman settles in and pushes her sweater sleeves up toward her elbows, the mole is revealed. It is unmistakable to any dermatologist even though, to the less-trained eye, it probably just looks unattractive. As luck would have it, her seatmate is a dermatologist from Florida, where melanoma is quite common. Here, simply being seatmates is hardly a “professional relationship” even though the dermatologist is a professional. And there is no “privileged access” because the mole is exposed for anyone to see. And yet we may well wish the dermatologist would suggest that she have it checked out.
Once again, it is not highly likely that anyone else will notice the problem in a timely way, especially if it is winter and the woman normally wears her sleeves at full length. But again, circumstances that actually are just a matter of happenstance could create something of an obligation. If they do, that obligation stems from the fact that the situation could be serious and, for various reasons, no one else is likely to warn in time. We need not strain to posit a “professional relationship”—an oil change guy, a bra-fitter, or even a plumber—and we need not insist on “privileged access” to recognize that any of these people might be in a situation where (1) they happen to recognize something serious and (2) it is not likely anyone else will see the problem in time to avoid an adverse outcome. Conversely, if a problem is highly visible and widely recognizable as being serious, then any responsibility to warn becomes more widely diffused. In that setting we are hard-pressed to insist that any specific person bears the moral obligation.
In sum, in the research setting the main reason the investigator may have a specific, personal duty to return an IF or IRR need not rely on any sort of professional relationship or privileged access. It is enough that (1) the investigator is among the few who will actually see the relevant data and (2) the investigator may be the only one who will recognize the significance of such data for the individual research subject. These two factors are sufficient to trigger a duty to convey the information. Occam’s razor: there is no need for high-flying philosophical pirouettes to accomplish something very basic.
We turn next to Shalowitz and Miller (2005), who focus on respect for participants. “[I]t would be disrespectful,” they wrote, “to treat research volunteers as conduits for generating scientific data without giving due consideration to their interest in receiving information about themselves derived from their participation in research” (p. 738); hence, investigators must return IFs and IRRs.
The problem with this approach is that it begs the question. Logically, a question is begged when the arguer presupposes as true the very thing he or she is trying to prove. Here, the authors build their conclusion—“Investigators should share IFs and IRRs”—into the very definition of “respect.” However, it is not clear why respect must necessarily be exhibited in this particularly rich way. One could alternatively respect subjects and their autonomy by informing them, up front: “If you sign up for this trial we will not return any results to you [barring exceptional circumstances].” Or one could say “We will only share aggregate results at the conclusion of the trial.” In that way, those who wanted IRRs could simply decline to participate. Or one could pay subjects financially for their time and trouble, as is often done for normal volunteers in Phase I trials of new pharmaceuticals.
In some cases, subjects enroll in research for purely altruistic reasons. Jesse Gelsinger, for instance, was said to have enrolled in a gene transfer study solely to help infants who had far more devastating cases of ornithine transcarbamylase deficiency than his own (Wilson, 2009). Similarly, those who supplied tissue for research on Canavan disease (a fatal, incurable genetic disorder most commonly seen in Ashkenazi Jewish families) had hoped that their donations would be used to further scientific understanding of the disease and to develop ways of testing for it prenatally.7 They had expected that
any carrier and prenatal testing developed in connection with the research for which they were providing essential support would be provided on an affordable and accessible basis, and that [the investigator’s] research would remain in the public domain to promote the discovery of more effective prevention techniques and treatments and, eventually, to effectuate a cure for Canavan disease.
7 Greenberg v. Miami Children’s Hosp Research Institute, Inc., 208 F.Supp.2d 918 (2002).
Their expectation was based on similar efforts to address Tay-Sachs disease.8 Participants were bitterly disappointed when the investigators later applied for a patent on the gene and its application, which would mean significantly restricting access to the fruits of their contributions. In this instance “respect” for participants meant not that they would receive any sort of individual-specific return of information, but rather that their efforts would help scientists detect and treat this deadly disease in a way that would make breakthroughs widely accessible to everyone who was suffering.
More broadly, good stewardship of resources may be an important way to exhibit respect for those who provide the resources (Ossorio, 2012, p. 465). If I donate money to the Humane Society my expectation is not that they will show gratitude by spending lots of my money thanking me, but rather by helping as many animals as possible. A thank-you note, or even an automated email of thanks, may be appropriate and wise. But exercising good stewardship may be the best way to respect my donation.
All this is not to say that any of these alternatives is the “correct” way to exhibit respect. To the contrary, the upshot is simply that one cannot credibly insist that returning IRRs and IFs is the one and only, or even a required, way of exhibiting respect (see also Clayton and McGuire, 2012, p. 475).
Our response must be the same regarding appeals to reciprocity as the justification for a mandate to return IFs and IRRs (Illes et al., 2006, p. 783). Reciprocity is essentially an expression of gratitude and respect. Even if gratitude is appropriate in return for someone’s contribution to research, that gratitude might take many different forms. In reality, the contribution of any one individual is often minuscule relative to the broader research project (Ossorio, 2012, p. 465), even if in some other studies the contribution is substantial and ongoing. And even if the person is making a significant sacrifice, there are other ways to recognize it, from paying money, to returning aggregate results, to exercising the utmost good stewardship. To assert that the reason we must return IRRs and IFs is that we must exhibit gratitude, and then to define gratitude exclusively as requiring return of IRRs and IFs, begs the question.
As noted at the outset, and as explained by Jonsen and Toulmin (1988) many years ago, we need not always agree on our theories in order to reach reasonable consensus regarding what to do in a given situation. Here, quite a broad consensus has emerged suggesting that investigators should return IFs and IRRs when they are valid, clinically important, and actionable. Perhaps one day we might identify a clear moral underpinning—a universally agreed-on, clear and helpful moral keystone—that can tell us, in more controversial situations, just what to do.
8 Greenberg v. Miami Children’s Hosp Research Institute, Inc., 208 F.Supp.2d 918, 921 (2002).
Unfortunately, that appears unlikely. The more detailed and prescriptive the theories we have seen, the more they seem to rely on leaps of faith, non sequiturs, and question-begging. Essentially, each is the product of diverging moral intuitions. Even the ostensibly simple rule of rescue can easily take us beyond a supportable consensus. Active rescue, after all, does not merely warn someone of a danger. It involves actively delivering help—here, presumably some form of clinical care to address the medical peril uncovered in the IF or IRR. Admittedly, the rule of rescue requires such assistance only if the risk and burden to oneself is minimal. But once that threshold into active assistance is crossed, we must then consider how great that “minimal” burden may be, ushering us into controversies analogous to those discussed above.
Accordingly, it appears that once we venture beyond a duty to warn and the “common human decency” concept on which it is based, we risk several problems. We can end up reinforcing the therapeutic misconception (Clayton and McGuire, 2012); burdening researchers with heavy costs (Illes et al., 2006; Ossorio, 2012; Partridge and Winer, 2002; Shalowitz and Miller, 2005; Wolf et al., 2006), potentially in the form of asking researchers to make up for lack of access to health care elsewhere in the system; and potentially diverting legitimate research into some sort of chimeric entity that does not distinguish well between research and clinical care (Clayton and McGuire, 2012; Miller et al., 2008). The upshot is not hopeless; it is simply a recognition that the more complex the situation, the less likely we are to achieve any solid theoretical basis on which to ground a strong consensus. And that should come as no surprise.
Beskow, L. M., and W. Burke. 2010. Offering individual genetic research results: Context matters. Science Translational Medicine 2(38):38cm20.
Beskow, L. M., W. Burke, J. F. Merz, P. A. Barr, S. Terry, V. B. Penchaszadeh, L. O. Gostin, M. Gwinn, and M. J. Khoury. 2001. Informed consent for population-based research involving genetics. JAMA 286:2315–2321.
Bookman, E. B., A. A. Langehorne, J. H. Eckfeldt, K. C. Glass, G. P. Jarvik, M. Klag, G. Koski, A. Motulsky, B. Wilfond, T. A. Manolio, R. R. Fabsitz, and R. V. Luepker. 2006. Reporting genetic results in research studies: Summary and recommendations of an NHLBI working group. American Journal of Medical Genetics A 140(10):1033–1040.
Clayton, E. W., and A. L. McGuire. 2012. The legal risks of returning results of genomics research. Genetics in Medicine 14:473–477.
Fabsitz, R. R., A. McGuire, R. R. Sharp, M. Puggal, L. M. Beskow, L. G. Biesecker, E. Bookman, W. Burke, E. G. Burchard, G. Church, E. W. Clayton, J. H. Eckfeldt, C. V. Fernandez, R. Fisher, S. M. Fullerton, S. Gabriel, F. Gachupin, C. James, G. P. Jarvik, R. Kittles, J. R. Leib, C. O’Donnell, P. P. O’Rourke, L. L. Rodriguez, D. Schully, A. R. Shuldiner, R. K. F. Sze, J. V. Thakuria, S. M. Wolf, and G. L. Burke. 2010. Ethical and practical guidelines for reporting genetic research results to study participants: Updated guidelines from an NHLBI working group. Circulation: Cardiovascular Genetics 3:574–580.
Green, R. C., J. S. Berg, W. W. Grody, S. S. Kalia, B. R. Korf, C. L. Martin, A. L. McGuire, R. L. Nussbaum, J. M. O’Daniel, K. E. Ormond, H. L. Rehm, M. S. Watson, M. S. Williams, L. G. Biesecker, and the American College of Medical Genetics and Genomics. 2013. ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing. Genetics in Medicine 15(7):565–574.
Illes, J., M. P. Kirschen, E. Edwards, L. R. Stanford, P. Bandettini, M. K. Cho, P. J. Ford, G. H. Glover, J. Kulynych, R. Macklin, D. B. Michael, and S. M. Wolf. 2006. Incidental findings in brain imaging research. Science 311:783–784.
Jarvik, G. P., L. M. Amendola, J. S. Berg, K. Brothers, E. W. Clayton, W. Chung, B. J. Evans, J. P. Evans, S. M. Fullerton, C. J. Gallego, N. A. Garrison, S. W. Gray, I. A. Holm, I. J. Kullo, L. S. Lehmann, C. McCarty, C. A. Prows, H. L. Rehm, R. R. Sharp, J. Salama, S. Sanderson, S. L. Van Driest, M. S. Williams, S. M. Wolf, W. A. Wolf; eMERGE Act–ROR Committee and CERC Committee; CSER Act–ROR Working Group, and W. Burke. 2014. Return of genomic results to research participants: The floor, the ceiling, and the choices in between. American Journal of Human Genetics 94(6):818–826.
Jonsen, A., and S. E. Toulmin. 1988. The abuse of casuistry: A history of moral reasoning. Berkeley: University of California Press.
Kohane, I. S., K. D. Mandl, P. L. Taylor, I. A. Holm, D. J. Nigrin, and L. M. Kunkel. 2007. Reestablishing the researcher–patient compact. Science 316:836–837.
McGuire, A. L., S. Joffe, B. A. Koenig, B. B. Biesecker, L. B. McCullough, J. S. Blumenthal-Barby, T. Caulfield, S. F. Terry, and R. C. Green. 2013. Ethics and genomic incidental findings. Science 340(6136):1047–1048.
Mill, J. S. 1859. On liberty (People’s edition, 1913). New York: Longmans, Green, and Co.
Miller, F. G., M. M. Mello, and S. Joffe. 2008. Incidental findings in human subjects research: What do investigators owe research participants? The Journal of Law, Medicine & Ethics 36:271–279.
Morreim, E. H. 2005. The clinical investigator as fiduciary: Discarding a misguided idea. The Journal of Law, Medicine & Ethics 33(3):586–598.
NBAC (National Bioethics Advisory Commission). 1999. Research involving human biological materials: Ethical issues and policy guidance, Vol. 1. Rockville, MD: NBAC.
OHRP (Office for Human Research Protections). 2015. Attachment C: Return of individual results and special consideration of issues arising from amendments of HIPAA and CLIA. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/2015-september-28-attachment-c/index.html (accessed April 17, 2018).
Ossorio, P. 2012. Taking aims seriously: Repository research and the limits on the duty to return individual research findings. Genetics in Medicine 14:461–466.
Partridge, A. H., and E. P. Winer. 2002. Informing clinical trial participants about study results. JAMA 288:363–365.
Richardson, H. S. 2008. Incidental findings and ancillary-care obligations. The Journal of Law, Medicine & Ethics 36:256–270.
Richardson, H. S., and L. Belsky. 2004. The ancillary-care responsibilities of medical researchers. Hastings Center Report 34(1):25–33.
Richardson, H. S., and M. K. Cho. 2012. Secondary researchers’ duties to return incidental findings and individual research results: A partial-entrustment account. Genetics in Medicine 14(4):467–472.
Richardson, H. S., N. Eyal, J. I. Campbell, and J. E. Hargerer. 2017. When ancillary care clashes with study aims. New England Journal of Medicine 377:1213–1215.
Shalowitz, D. I., and F. G. Miller. 2005. Disclosing individual results of clinical research: Implications of respect for participants. JAMA 294:737–740.
Wilson, J. M. 2009. Lessons learned from the gene therapy trial for ornithine transcarbamylase deficiency. Molecular Genetics and Metabolism 96:151–157.
Wolf, S. M. 2013. Return of individual research results and incidental findings: Facing the challenges of translational science. Annual Review of Genomics and Human Genetics 14:557–577.
Wolf, S. M., F. P. Lawrenz, C. A. Nelson, J. P. Kahn, M. K. Cho, E. W. Clayton, J. G. Fletcher, M. K. Georgieff, D. Hammerschmidt, K. Hudson, J. Illes, V. Kapur, M. A. Keane, B. A. Koenig, B. S. Leroy, E. G. McFarland, J. Paradise, L. S. Parker, S. F. Terry, B. Van Ness, and B. S. Wilfond. 2008. Managing incidental findings in human subjects research: Analysis and recommendations. The Journal of Law, Medicine & Ethics 36(2):219–248.
Wolf, S. M., B. N. Crock, B. Van Ness, F. Lawrenz, J. P. Kahn, L. M. Beskow, M. K. Cho, M. F. Christman, R. C. Green, R. Hall, J. Illes, M. Keane, B. M. Knoppers, B. A. Koenig, I. S. Kohane, B. Leroy, K. J. Maschke, W. McGeveran, P. Ossorio, L. S. Parker, G. M. Petersen, H. S. Richardson, J. A. Scott, S. F. Terry, B. S. Wilfond, and W. A. Wolf. 2012. Managing incidental findings and research results in genomic research involving biobanks and archived datasets. Genetics in Medicine 14:361–384.