KEY SPEAKER THEMES
• Policy constraints are currently a greater barrier to large simple trials (LSTs) than technical limitations, which have largely been resolved.
• The current ethical framework for clinical research focuses on the risks of research and ignores the risks of clinical care practices, most of which are based on weak evidence.
• A new ethical framework is needed that allows the joining of regular clinical care and research activities, including LSTs, by appropriately balancing the risks and benefits of research when the safety and effectiveness of routine clinical practices are often unknown.
• Current privacy regulations may unnecessarily hinder needed reuses of clinical data for learning purposes because they stringently regulate activities that fall under the definition of research — that is, activities intended to contribute to generalizable knowledge — while only minimally regulating activities considered routine, such as treatment and internal operations.
• A new, more rational approach to the oversight of the use of patient data should depend on trade-offs between expected benefits and risks to patients and should be developed through the use of the basic principles of fair information practices.
Large simple trials (LSTs), like all clinical trials, are governed by policies designed to protect the health, safety, and privacy of participants. The session of the workshop described in this chapter focused on whether these policies are appropriate or need to be revised in a way that still protects participants adequately but facilitates the conduct of LSTs as an important evidence-generating activity in the learning health care system.
Robert M. Califf, director of the Duke Translational Medicine Institute, professor of medicine, and vice chancellor for clinical and translational research at the Duke University Medical Center, provided an overview of the current policy context and highlighted issues that might be candidates for review and revision to better balance ethical and privacy concerns to facilitate LSTs. Ruth R. Faden, Philip Franklin Wagley Professor of Biomedical Ethics and executive director of the Johns Hopkins Berman Institute of Bioethics at The Johns Hopkins University, spoke about the shortfalls of the current ethical framework that guides oversight of clinical research and suggested that a new one might be necessary. Deven McGraw, director of the Health Privacy Project at the Center for Democracy and Technology, discussed the development and promotion of more workable privacy and security protections for electronic personal health information.
Robert M. Califf gave an overview of some of the major policy barriers to the more widespread performance of LSTs. He pointed out that the vision of a national clinical research system that extended into the community was not a new concept; in fact, it had been a 10-year goal of the National Institutes of Health (NIH) Roadmap in 2002. Although most recommendations in clinical practice guidelines (85 percent) are based on low-quality evidence, the current clinical trials enterprise cannot produce results fast enough to close the gap. Noting that technology is no longer the limiting factor in conducting LSTs and producing this evidence, he asked, what is preventing the conduct of LSTs?
Califf posited that the barriers to implementation are not technical because it is possible to collect standardized reliable data from electronic records. Rather, the barriers have to do with policy constraints. Califf suggested that a window of opportunity to revise policies that impede LSTs and other clinical research exists, because the leaders of federal health agencies have shown that they are open to reforming the national clinical trials system, health care providers have proved willing to participate if the studies answer questions important to them and they do not lose money, and the experience with potential trial participants is that the majority of individuals will participate if they are asked.
So, the question is, what policies will more quickly allow the collection of standardized, reliable data that could serve as a backbone for a learning system that, Califf would argue, includes LSTs? Califf cited his experiences as principal investigator of the NIH-funded Health Care Systems Research Collaboratory Coordinating Center, where he was charged with helping pilot projects navigate these challenges. He indicated that those involved in the pilot projects contend that the top issues are interfacing with the health system and regulatory ethics.
Looking at the challenges of interactions with health systems, Califf asked, what policies can motivate health systems administrators to participate in research, ensure that the trials done answer questions of interest to clinicians and patients, and, assuming that the clinical trials are relevant, motivate providers to participate? Highlighting the major regulatory changes that are needed, Califf asked if the ethical review and institutional requirements for oversight of research could be streamlined without putting research participants at undue risk and if the U.S. Food and Drug Administration could actively encourage streamlining of procedures. He suggested that key federal agencies could encourage novel approaches to institutional review board review and informed consent, highlighting cluster randomization as a major challenge if LSTs are to become widespread in the community.
Califf closed his presentation by asking whether, given the harm that the current system has caused patients through its failure to answer critical questions, the underlying construct of separating research and practice is appropriate and reasonable.
Ruth R. Faden addressed whether the separation of clinical research and clinical care is ethically appropriate. She explained that the current framework for health care ethics was developed in the 1970s, when abuses of participants in research projects were salient. This was exemplified by
the Tuskegee syphilis study, conducted from 1932 to 1972, in which treatment was withheld without the subjects' knowledge from men who had syphilis, even after penicillin was proven to be an effective cure. In the wake of the Tuskegee study, regulations to protect human subjects in health research, principally the Common Rule, were developed, and the Office for Human Research Protections was established in the U.S. Department of Health and Human Services.
Given this context, the framework for regulation of clinical research with human participants was based on a sharp distinction between research and clinical care, since the focus was protecting individuals’ rights and interests when they participated in clinical research.
Faden noted that the concept of a learning health system calls this division into question, as it proposes that it is essential to learn from care, thereby integrating research and practice. Reconciliation of the division requires a different way of thinking about the relationship between research ethics and clinical ethics. One of the bases for this distinction is the assumption that research places patients at higher risk than regular clinical care. However, she noted that many approaches, such as LSTs, are likely to challenge this assumption. It is now known that many commonly accepted clinical practices have a weak evidence base, and some have been proven to have no benefit or even to be harmful to patients. Paradoxically, Faden argued, the current framework, intended to protect patients from harm, constrains the ability to conduct research that could demonstrate that some commonly used treatments are less effective than alternatives, or even harmful compared with them.
Faden noted that the moral requirement to show respect for patients and honor their necessary role in this research also exists. This, she explained, has implications for how consent is thought about in a learning health system, as alternative approaches to individual consent must also fulfill this requirement.
Faden and other health ethicists have been working for several years to develop new ethical guidelines that would support a learning health care system. She noted that these guidelines would have to include a sense of reciprocity in which researchers commit to respect for patients and patients commit to contribute to the process of knowledge generation. She suggested that this reciprocity must be expressed through practices, such as transparency, disclosure, and oversight, that are acceptable to patients and involve patients in a different way than is currently the norm, but these practices remain to be determined.
Deven McGraw’s presentation about privacy issues paralleled Faden’s, in that McGraw found that the distinction between research and clinical practice in current regulations stands in the way of the conduct of LSTs and other important clinical research as part of the process of providing regular health care. She described how the Health Insurance Portability and Accountability Act classifies the use of patient data as either routine or nonroutine. Patient data can be used routinely without additional authorization, for example, as a basis for the patient's own treatment, as a basis for the treatment of another patient, or for health care operations (such as billing and quality improvement). In fact, McGraw pointed out, patients are not able to opt out of the use of their information for these purposes. Research, however, is considered a nonroutine use of data, and in most cases, researchers are required to obtain specific authorization.
The distinction between routine use and research is based on the definition of “research” as something that contributes to generalizable knowledge. This creates a situation in which, for example, the use of patient information for quality improvement is considered routine if it is kept within the treating organization but nonroutine if the results obtained with those data are shared more broadly. McGraw pointed out that these two scenarios involve the exact same use of data in terms of what data are accessed, who accesses the data, and the questions being posed. This, she argued, is problematic for a learning health system, for which dissemination of learning is critical.
McGraw suggested two possible remedies. One, which she noted would be problematic, would be to reclassify as routine certain individual data uses that are currently considered nonroutine. An alternative approach would be to develop criteria to guide the use of data so that oversight is decided according to the trade-offs between the benefits to patients expected from the use of those data and the risks that the data might be leaked and misused. In this setting, oversight would be driven by issues such as how many and which people would have access to the data, how individually identifiable the data are, and what mechanisms for data security would be used, rather than by whether the results will be shared. McGraw suggested that fair information practices, which are already the basis for most uses of personal data, could serve as a good foundation for the legal frameworks and systems of accountability necessary to put this into practice. For example, oversight would emphasize that the data be kept in the least identifiable form possible and that the fewest possible number of people have access to the data. Mechanisms such as distributed networks could help ensure that the data are used effectively without a loss of control over their use.
McGraw noted that as different approaches are considered, piloting their use and studying their effects will be crucial both to the development of practical approaches and to persuading policy makers, as well as patient and consumer groups, to permit the use of patient data. As with the ethics guiding research with human subjects in a learning health care system, new ways of engaging patients would be needed that respect their right to have a say in what research is done and to learn how their data will be used to improve health care.