The Regulatory Framework for Protecting Humans in Research
Through federal regulations, the U.S. government has established a system of protections for research participants. Eighteen federal agencies and departments adhere to the Federal Policy for the Protection of Human Subjects, or the Common Rule (45 CFR 46), a set of identical regulations codified by each agency. This system of protections, however, applies only to research that is conducted or funded by an agency that is subject to the Common Rule or that is subject to Food and Drug Administration (FDA) review and approval. Many institutions hold assurances of compliance with the Common Rule, which are negotiated with the federal government. Such assurances cover all of the institution’s research involving humans that is conducted or supported by one of the federal departments or agencies that have adopted the Federal Policy.
In considering the appropriate oversight of third-party human research conducted for Environmental Protection Agency (EPA) regulatory purposes, it is useful to understand the development of the system of protections to which EPA must adhere under the Common Rule, as well as the practices of other federal agencies in this regard, as lessons learned from the past and in other research contexts can inform the development
and improvement of EPA regulatory policy for third-party studies. Of note, EPA has not previously applied the Common Rule protections to privately sponsored (third-party) studies of regulated substances. Were EPA to include such studies in its oversight system, it would be useful to consider how those regulations might apply.
HISTORY OF THE DEVELOPMENT OF FEDERAL REGULATIONS
Public policies regarding the ethical treatment of humans in research began forming in the late 1940s, largely in response to atrocities committed by Nazi investigators who were tried before the Nuremberg Military Tribunal (United States v. Karl Brandt et al.). In 1946, the American Medical Association adopted its first code of research ethics (AMA, 1946), which ultimately influenced the Nuremberg Tribunal’s standards for ethical research (Moreno, 1999), embodied in the ten “basic principles” for human research, now known as the Nuremberg Code.
The first principle of the Nuremberg Code states that “the voluntary consent of the human subject is absolutely essential.” This absolute requirement reflects the code’s origins in discussions about research with healthy individuals, particularly those who had no opportunity to refuse. According to the code, investigators alone are responsible for obtaining informed consent and for deciding whether their research accords with the ethical principles.
Following the issuance of the Nuremberg Code, several federal agencies began establishing policies for human research. In 1953, Department of Defense Secretary Charles Wilson issued a directive outlining a policy for human research related to atomic, biological, and chemical warfare (Wilson, 1953). Wilson’s policy included a prohibition on research involving prisoners of war and a requirement that the secretary of the appropriate military service approve human research studies. Also in 1953, the National Institutes of Health (NIH) Clinical Center established a policy requiring independent review of research and participants’ written consent, at least for research involving patient volunteers and/or “unusual hazard” (NIH, 1953). In 1954, these dual protections of independent review and written informed consent were extended to all NIH intramural research involving “normal volunteers.”
However, widespread adoption of ethical principles in the conduct of human studies was slow to develop. Some believed that the Nuremberg
Code was meant to apply only to research with healthy individuals and not to research with patients as participants. Moreover, U.S. policy makers were concerned about intruding into the doctor-patient relationship, and until national attention focused on some research scandals in the 1960s, specific human protections in that context seemed unnecessary.
In 1962, Congress passed the Kefauver-Harris amendments to the Federal Food, Drug, and Cosmetic Act. The amendments are best known for requiring FDA to evaluate new drugs for efficacy in addition to safety (P.L. 87-781). The amendments also required the informed consent of participants in the testing of investigational drugs, although permissible exemptions applied, and they emphasized the need for investigators to control the drug supply.
Then, a series of events began to focus attention on the need for closer regulation of human studies. In early 1964, newspapers began to report on an NIH-funded study at the Brooklyn Jewish Chronic Disease Hospital in which investigators had injected cancerous cells into elderly patients. The investigators claimed to have obtained informed consent from the study participants, but many were incapacitated or did not speak English, and those able to give consent were not told that the cells to be injected were cancerous (Faden and Beauchamp, 1986; Jonsen, 1998).
In 1966, Henry Beecher published a startling indictment of research practices in the United States, presenting 22 examples of “unethical or questionably ethical studies” published in major medical journals (Beecher, 1966). One of the studies described by Beecher was an investigation of hepatitis involving the injection of a mild strain of the virus into children at the time of their admission to the Willowbrook State School for the Retarded in New York. Parental consent had been obtained, but the consent form might have been misleading, and parents may have been unduly influenced by the fact that research participants were put at the top of a long waiting list for admission (Faden and Beauchamp, 1986).
In response to growing concerns about documented and alleged research abuses, NIH developed policies to force NIH units to take more responsibility for research ethics (Faden and Beauchamp, 1986). In 1966, the Public Health Service (PHS) issued a new policy for studies sponsored but not conducted by the agency, requiring independent review of research by a committee of the investigator’s “institutional associates” (PHS, 1966). A memorandum accompanying the policy stated that a group of people from different disciplines, familiar with the investigator but “free to assess his judgment without placing in jeopardy their own goals,” would be required for the review (Stewart, 1966). NIH initiated a system in which it negotiated assurances of compliance with the PHS policy from each institution receiving funding. As an enforcement measure, NIH could withhold funds.
NIH would later formally establish the Office for Protection from Research Risks (OPRR) in 1972 to implement and enforce these policies, and eventually this office—renamed the Office for Human Research Protections (OHRP) in 2000—assumed a lead role in the protection of research participants within the entire Department of Health and Human Services (DHHS).
Initially, the PHS Policy for Clinical Investigations with Human Subjects applied only to extramural research, and only to NIH grantees. In 1971, 5 years after the PHS policy was established, what was then the Department of Health, Education and Welfare (DHEW) developed more detailed guidance and justification for review committees in the form of the “Yellow Book” (DHEW, 1971).
Perhaps the most significant event to force the development and use of a more uniform and systematic approach to protecting research participants came in the aftermath of a 1972 New York Times article that reported the details of the Tuskegee Syphilis Study, sponsored by PHS since the early 1930s (Heller, 1972). Although a formal protocol never existed, the study aimed to trace the natural history of syphilis in poor African American males living in Macon County, Alabama. Participants were not told of the purpose of the study and were actually misled into believing that they were being treated for syphilis. Investigators continued the study even after penicillin became widely available and prescribed for the treatment of syphilis. In exchange for participation, the men received some unrelated health care services, free meals, and transportation, and later in the study a $50 burial stipend (Jones, 1981). A PHS investigation in 1973 found the study to be ethically unjustified, and it was halted. The surviving participants were offered treatment. In addition, a PHS advisory panel determined that existing procedures for protecting research participants were not adequate. The panel recommended that “Congress should establish a permanent body with the authority to regulate at least all Federally-supported research involving human subjects” (Tuskegee Syphilis Study Ad Hoc Advisory Panel, 1973).
In 1973, the Senate Labor and Public Welfare Committee began a series of hearings on human experimentation, which led to an agreement that DHEW would issue regulations governing research with humans (ACHRE, 1995). The resulting regulations were promulgated in May 1974 (DHEW, 1974) (45 CFR Part 46), and the National Research Act was signed in July of that year (P.L. 93-348). The National Research Act also established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (National Commission) to provide ethical and policy analysis related to human research. The National Commission is perhaps best known for its Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (National Commission, 1979). This report identified three fundamental ethical principles applicable to research with humans—respect for persons, beneficence, and justice—which translated respectively into provisions for informed consent, assessment of risk and potential benefits, and selection of participants. For example, the application of the ethical principle of respect for persons gives rise to the concern that consent be properly obtained from fully informed participants and that special consideration be given to vulnerable persons who may lack the capacity to consent. The application of the principle of beneficence leads to the necessity of assessing and balancing risks and potential benefits. The principle of justice requires investigators to attend to the process of recruiting research participants, with particular attention to vulnerable populations. The National Commission also recommended that special regulations be adopted to protect children in research, which formed the basis of Subpart D of the Common Rule.
DHEW regulations already contained specific provisions for obtaining and documenting informed consent and guidance on assessing risk and benefit. The Belmont Report recommended that additional attention be given to the equitable selection of participants. In response to the Belmont Report, DHHS and FDA revised their regulations (45 CFR 46; 21 CFR 50, 56). The revised regulations placed primary emphasis on obtaining and documenting voluntary informed consent, but provided little guidance on assessment of risk and potential benefit or the selection of research participants.
In 1981, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research (President’s Commission) was established. In several reports the President’s Commission examined the general structure and implementation of existing research protections (President’s Commission, 1981; President’s Commission, 1983). Its notable recommendations from its 1981 and 1983 reports include the following:
All federal agencies should adopt the regulations of DHHS (45 CFR 46).
Each federal agency should apply one set of rules consistently to all of its subunits and funding mechanisms.
Principal investigators should be required to submit annual data on the number of subjects in their research and on the number and nature of adverse events.
Federal agencies should clarify the meaning of certain procedural requirements of existing regulations, particularly what is meant by “Institutional Review Board (IRB) review.”
Federal agencies that do not already do so should, as soon as practicable, identify the IRBs responsible for the initial and continuing review of research for which they have regulatory authority.
The prospective review of institutional assurances of compliance with applicable regulations should consider the amount and types of research that each IRB anticipates reviewing and should determine that requirements regarding IRB composition are met, that sound procedures have been established for the IRB’s review of the research, and that the institution understands its responsibilities for protecting participants.
A broad educational and monitoring program covering the protection of research participants and designed to reach investigators, IRB members, and research administrators should be conducted. Among the various activities included in the program should be site visits of research institutions using experienced IRB members and staff as site visitors.
The President’s Commission also recommended, as did the National Commission, that special protections be codified for children. In response, DHHS promulgated regulations in 1983 governing research with children (Subpart D).
In response to the President’s Commission’s concern about the lack of standardization of regulations across federal agencies and departments, the White House convened an interagency ad hoc committee to develop what would become the Common Rule (the Federal Policy for the Protection of Human Subjects), a set of identical regulations codified by various agencies. The standardization process was slow, taking nearly 10 years. In 1991, the regulations known as the Common Rule were simultaneously published in the Federal Register by 15 departments and agencies. The Office of Science and Technology Policy in the Executive Office of the President did not codify the Common Rule, even though it signed the Federal Policy, because it did not conduct or sponsor research (NBAC, 2001). The Common Rule also regulates research conducted or sponsored by two other federal agencies that are not signatories but that are bound nonetheless through public law (the Social Security Administration [P.L. 103-296]) or by Executive Order (the Central Intelligence Agency [E.O. 12333]). Thus, the Common Rule has 15 codifications and 16 signatories, and it covers 18 federal agencies (see Table 2.1). The rule expanded the scope of regulated research and provided some standardization across departments, with DHHS, primarily through OPRR, playing a key role in its development.
THE COMMON RULE
The Common Rule applies to all research involving humans “conducted, supported or otherwise subject to regulation by any federal department or agency which takes appropriate administrative action to make this policy applicable to such research.” Thus, it specifically allows agencies with regulatory authority to apply the Common Rule to regulated research (40 CFR 26.101(a)).

TABLE 2.1 Federal Agencies Subject to the Common Rule
Even though the federal regulations cover a large portion of human research conducted domestically, and in some cases overseas, they are limited in their reach. In fact, if federal funds are not involved or if regulatory approval is not required, research activities involving humans might not be subject to any form of oversight. The regulations also do not apply to many areas of research funded and conducted by businesses, private nonprofit organizations, and state or local agencies, although such research is subject to federal regulation if it involves the development of medical devices or drugs requiring approval by the FDA or if it is conducted at an institution that has voluntarily agreed to apply Common Rule requirements to all research it conducts (see the discussion of assurances below).
Moreover, the Common Rule did not create a shared mechanism for interpreting and implementing the regulations at the federal level. Some departments have not established offices for interpreting and implementing the regulations; in some cases, a single individual is responsible for oversight activities (NBAC, 2001). In 2001, the National Bioethics Advisory Commission (NBAC) found that departments and agencies bound to the Common Rule sometimes interpret the regulatory requirements differently.
Finally, the Common Rule has four subparts. Subpart A is the only part signed on to by all participating agencies. Subparts B through D address specific additional protections and considerations for research involving fetuses, pregnant women, and human in vitro fertilization (Subpart B), prisoners (Subpart C), and children (Subpart D). Only DHHS and the Department of Education are signatories to Subpart D, and only DHHS adheres to Subparts B and C. EPA has signed on to Subpart A only.
Nonetheless, there are basic concepts contained in the regulations that provide a framework and guidance for federal oversight, even though the specific policies and procedures adopted by a department or agency for implementation might differ.
Determining whether a study poses more than minimal risk is a central ethical and procedural function of the IRB as outlined in the federal regulations (40 CFR 26.102(i)). The regulations call for the classification of research as involving either minimal risk or greater than minimal risk. When used as a sorting mechanism, this classification determines the level of review required of an IRB. For example, under the current regulations, if a research study is determined to pose only minimal risk and involves a procedure contained on an expedited review list, it may be evaluated using the expedited review process in which the IRB chair or a designee may review the research study in accordance with all the required regulations (40 CFR 26.110(b)).
Research involving more than minimal risk requires full IRB review. As the risk of research increases above the minimal risk threshold, protections for participants become more stringent. For example, with greater than minimal risk research, the process of informed consent cannot be waived or altered (40 CFR 26.116(d)).
The language of the regulations, however, provides an ambiguous standard for minimal risk, under which risks involved in a research study
are compared to those encountered in daily life. As defined in the federal regulations:
Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests (40 CFR 26.102(i)).
It is unclear whether this applies to those risks found in the daily lives of healthy individuals or those of individuals who belong to the group targeted by the research. In 2001, NBAC recommended that IRBs use a standard related to the risks of daily life that are familiar to the general population for determining whether the level of risk is minimal or more than minimal, rather than a standard that refers to the risks encountered by particular persons or groups. At present, minimal risk is most commonly applied to studies in which there is no pharmacologic intervention (e.g., epidemiological studies or studies in which drug blood levels are measured in people already receiving the drug for a therapeutic purpose). Venipuncture is generally considered a minimal-risk procedure. There are, however, many kinds of studies that would seem to involve a very small movement above minimal risk, such as most bioavailability studies of marketed drugs or very short studies of the effects of a usual dose of a drug on a biomarker (e.g., blood pressure, blood sugar). These sorts of risks are not extensively discussed, although the concept of “a minor increase over minimal risk” appears in Subpart D of the Common Rule related to children.
Institutional Review Board Approval of Research
The current regulations at 40 CFR 26.111 provide IRBs with the following instructions:
In order to approve research … the IRB shall determine that all of the following requirements are satisfied:
(1) Risks to subjects are minimized: (i) by using procedures which are consistent with sound research design and which do not unnecessarily expose subjects to risk, and (ii) whenever appropriate, by using procedures already being performed on the subjects for diagnostic or treatment purposes.

(2) Risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result. In evaluating risks and benefits, the IRB should consider only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research). The IRB should not consider possible long-range effects of applying knowledge gained in the research (for example, the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility.

(3) Selection of subjects is equitable. In making this assessment the IRB should take into account the purposes of the research and the setting in which the research will be conducted and should be particularly cognizant of the special problems of research involving vulnerable populations, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons.

(4) Informed consent will be sought from each prospective subject or the subject’s legally authorized representative, in accordance with, and to the extent required by, 26.116.

(5) Informed consent will be appropriately documented, in accordance with, and to the extent required by, 26.117.

(6) When appropriate, the research plan makes adequate provision for monitoring the data collected to ensure the safety of subjects.

(7) When appropriate, there are adequate provisions to protect the privacy of subjects and to maintain the confidentiality of data.

When some or all of the subjects are likely to be vulnerable to coercion or undue influence, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons, additional safeguards have been included in the study to protect the rights and welfare of these subjects (40 CFR 26.111).
Investigators and IRBs often struggle with the meaning of crucial terms, such as “minimal risk,” “minor change,” and “minor increase over minimal risk,” on which key ethical and regulatory decisions rest (NBAC, 2001). Applying these regulatory requirements to nonclinical research (e.g., surveys) is even more difficult and cumbersome, because the limited regulatory detail provided is written in the context of clinical research (i.e., “that the research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside of the research context” (40 CFR 26.117(c)(2))). As discussed in Chapter 4, the committee finds the concept of “minimal risk” to be of limited value as a guide to decision making in the context of the human dosing studies typically conducted for EPA regulatory purposes.
Balancing Risks and Probable Benefits
The principle of beneficence as elucidated in the Belmont Report states that persons should be “treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being” (National Commission, 1979, 6). The principle requires that investigators attempt to maximize possible benefits and minimize possible harms. Federal regulations incorporate the obligation of beneficence by requiring IRBs to ensure that risks are minimized to the extent possible, given the research question, and are reasonable in relation to potential benefits to the participant or to the importance of the knowledge to be gained through the research (40 CFR 26.111(a)(1)-(2)).
Continual Review and Monitoring
Continual review and monitoring of research in progress is a critical part of the oversight system. Regular, continual review is necessary to ensure that emerging data or evidence have not altered the risk-benefit assessment so that risks are no longer reasonable. In addition, mechanisms should be in place to monitor adverse events, unanticipated problems, and changes to a protocol.
The regulations currently require that “an IRB shall conduct continuing review…at intervals appropriate to the degree of risk, but not less than once per year” (40 CFR 26.109(e)). However, the regulations do not specify the purpose or content of that review. In addition to the periodic reevaluation of risks and potential benefits as part of continuing review, IRBs conduct as-needed reviews when investigators request an amendment to approved protocols or in the event of unanticipated problems with a research study. Current regulations require institutions to create written procedures for “ensuring prompt reporting to the IRB of proposed changes in a research activity, and for ensuring that such changes in approved research, during the period for which IRB approval has already been given, may not be initiated without IRB review and approval except when necessary to eliminate apparent immediate hazards to the subject” (40 CFR 26.103(b)(4)(iii)). Institutions also are required to ensure that they report to the IRB “any unanticipated problems involving risks to subjects or…any suspension or termination of IRB approval” (40 CFR 26.103(b)(5)).
Other entities not considered in the federal Common Rule regulations, such as Data and Safety Monitoring Boards (DSMBs) or Data Monitoring Committees (DMCs), are beginning to play an increasingly important role in safety monitoring (DeMets et al., 1999; Fleming et al., 2002; FDA, 2001;
Gordon et al., 1998). These boards review data primarily from Phase 2 and 3 clinical trials from all participating sites and have access to unblinded data.
Reporting Adverse Events
As mentioned previously, one of the requirements for approval of research is that IRBs must ensure that as “…appropriate, the research plan makes adequate provision for monitoring the data collected to ensure the safety of subjects” (40 CFR 26.111(a)(6)). FDA regulations are more specific than the Common Rule in delineating what must be reported and when. For FDA, all adverse events must be reported to sponsors during the three phases of product development, and serious unexpected adverse events must be reported by sponsors promptly to FDA and to all investigators. There are also mandatory postapproval reporting requirements. FDA may require sponsors to conduct Phase 4 (postapproval) studies to obtain further information about risks, potential benefits, and optimal use of a drug (21 CFR 312.85). Accumulating information on the public’s experience with the approved drug or other FDA-regulated product can be reported to manufacturers, in which case it must be reported to FDA, or consumers may report their experiences directly to FDA (21 CFR 314.80, 314.81, 814.82, 814.34). FDA refers to this phase as postmarketing reporting.
EPA also has statutory requirements for postmarket reporting by industry of adverse events resulting from the use of regulated chemicals or products (Federal Insecticide, Fungicide, and Rodenticide Act §6(a)(2) and Toxic Substances Control Act §8(e)).
MONITORING BY FEDERAL AGENCIES
Current mechanisms for monitoring include assurances of compliance issued by DHHS and several other federal departments, site inspections of IRBs conducted by FDA, other types of site inspections conducted by the funding agency, and institutional audits. Two primary federal agencies take the lead in monitoring human studies subject to the Common Rule: OHRP and FDA, both housed within DHHS.
Office for Human Research Protections
OHRP is charged with protecting research participants in biomedical and behavioral research conducted or sponsored by DHHS and other federal agencies that follow the Common Rule. The office operates on a system of Written Assurances of Compliance, in which the institution assures its compliance with the regulations as a condition of receiving federal research funds. If OHRP finds an institution to be noncompliant, it can suspend or revoke the institution’s assurance, stopping all or a portion of research activities at that institution.
Assurances are negotiated with each institutional grantee, with the negotiations allowing each institution to create its own policies and procedures for protection as long as they are fully consistent with federal regulations. The negotiation process also allows federal officials to educate institutions about requirements and procedures for participant protection.
The assurance indicates what an institution intends to do to protect research participants. In essence, it is a commitment on behalf of the institution to comply with all appropriate regulations and guidance in the conduct of all of its human research. Each federal department and agency may issue its own assurance, although many rely on DHHS assurances (NBAC, 2001). Separate assurance documents are required for domestic and for foreign institutions.
Food and Drug Administration
The most extensive system of data and safety monitoring exists in the area of clinical trials of drugs, medical devices, and other products subject to FDA review and approval. FDA inspects investigators, IRBs, and occasionally sponsors to verify compliance with Good Clinical Practice (GCP) guidelines (FDA, 2003). FDA does not have the resources to inspect every investigator and thus is more likely to focus inspections on those entities that enroll large numbers of participants. Foreign investigators also are subject to inspection, but U.S. investigators are more likely to be scrutinized because of the logistics and available resources involved. Routine (not-for-cause) audits are essential elements of FDA’s oversight. Research sponsors are expected to monitor the progress of studies, and investigators are required to maintain case histories for enrolled participants that include reports of serious adverse events. A distinct oversight unit within FDA, the Bioresearch Monitoring Program, provides ongoing surveillance of clinical research investigations, auditing the activities of clinical investigators, monitors, sponsors, and nonclinical (animal) laboratories. Its mission is to ensure the quality and integrity of data submitted to FDA for regulatory decisions, as well as to protect research participants.
The regulations that permit FDA to consider the protocols submitted to it during drug development are contained in 21 CFR 312 (human drugs) and 21 CFR 812 (medical devices). Federal regulations require that protocols submitted under an Investigational New Drug Application include detailed descriptions of the “clinical procedures, laboratory tests, or other measures to be taken to monitor the effects of the drug in human subjects and to minimize risk” (21 CFR 312.23). The submission of data, including the results of studies intended to support marketing, is required under 21 CFR 314. All relevant studies, including drug studies that fail (i.e., that do not support the application or are incomplete), must be identified and submitted to FDA. FDA inspects study data to ensure their validity in support of an application and to verify that the individuals from whom the data were collected were adequately protected. FDA may also audit the IRB of record for an inspected study, as well as investigate consumer complaints or reports from whistleblowers. If FDA finds that an investigator is noncompliant, the investigator can be disqualified from conducting future studies.
In the case of drugs and medical device trials, FDA inspections of clinical investigators generally are conducted after the trial is completed and a new drug application or premarket approval application for a medical device has been submitted for review, reflecting FDA’s focus on assuring data quality.
In November 2001, FDA issued draft guidance entitled Guidance for Clinical Trial Sponsors: On the Establishment and Operation of Clinical Trial Data Monitoring Committees (FDA, 2001). According to FDA, the sponsor is responsible for ensuring that a data monitoring committee (DMC) or data and safety monitoring board (DSMB), if applicable, operates under appropriate procedures. These boards are charged with reviewing interim data to determine whether a study should continue or be stopped for safety or therapeutic reasons according to preestablished stopping rules. The
guidance document offers some perspective on criteria for establishing a DMC/DSMB, including committee composition, conflict of interest considerations, and other general considerations.
FDA also conducts surveillance (routine) and directed (when information “calls into question” regulated practices) inspections of IRBs. Usually IRB inspections are scheduled every five years, although if there are major problems, inspections can occur more frequently (FDA, 1998). During an inspection, an FDA field investigator (inspector) chooses a few studies that received initial IRB review within the past three years and follows them through the IRB review process. Inspectors look at IRB policies and procedures; minutes; membership; and records of studies, including protocol, consent form, investigator’s brochure, and correspondence between the IRB and investigator. IRBs that are found to be out of compliance may be subjected to sanctions ranging from a warning letter to rejection of the data from the trial to prosecution (FDA, 1998).
The agency requires investigators to provide a written commitment that, before initiating an investigation subject to an institutional review requirement under 21 CFR 56, an IRB will review and approve the investigation in accordance with the regulations.
NONGOVERNMENTAL ACCREDITATION PROGRAMS
In recent years, there has been growing interest in nongovernmental, performance-based accreditation systems that emphasize outcome measures in institutional research participant protection programs and can adapt to evolving program needs. Participation in accreditation programs is a form of quality assurance: preparing to meet accreditation standards should ordinarily have beneficial effects and, at a minimum, can help ensure that research programs conduct self-assessments, presumably noting and addressing deficient areas (IOM, 2001).
New accreditation organizations, such as the Association for the Accreditation of Human Research Protection Programs and the National Committee for Quality Assurance (NCQA), have appeared and are in the early phases of setting and testing standards, with several institutions already having applied for accreditation. In 2003, NCQA joined forces with the Joint Commission on Accreditation of Healthcare Organizations to form a new entity, the Partnership for Human Research Protection.
Each federal department that adheres to the Common Rule has the authority to enforce its own codification of the rule for research it conducts or sponsors. However, federal agencies and institutions with assurances of compliance from OHRP are subject to enforcement from that office as well. In the case of DHHS grantees and contractors, the enforcement authority is clear because OHRP is part of DHHS. But when the assurance holder is the grantee of another department, OHRP decisions come from outside the regular reporting line of authority. Additionally, departments that use the OHRP assurance process may also have their own separate systems for enforcement, and there is little coordination among the various offices responsible for ensuring compliance with the Common Rule.
Federal regulations give department and agency heads the authority to terminate or suspend funding for research projects that are not in compliance with the regulations (40 CFR 26.123(a)). Common enforcement tools include requiring written responses or specific changes to address identified deficiencies; agencies that grant assurances also can restrict or suspend institutional assurances. Under its regulations, FDA, for example, can put new studies on hold (i.e., not permit them to proceed), prohibit enrollment of new participants, and terminate studies. FDA also can issue warning letters and restrict or disqualify investigators, IRBs, or institutions from conducting or reviewing research with investigational products.
RECENT CONCERNS ABOUT HUMAN RESEARCH PARTICIPANTS
Recent debate and analysis concerning the protection of research participants have focused on the federal and local institutions and agencies charged with this task, including federal regulatory agencies, academic and industrial laboratories, IRBs, and funding organizations. In particular, in the late 1990s, examinations focused on IRBs. In June 1998, the Office of Inspector General (OIG) of DHHS issued a report, Institutional Review Boards: A Time for Reform (DHHS OIG, 1998), which stated that the effectiveness of IRBs was in jeopardy because of overwhelming demands. OIG concluded that the system, originally devised as a voluntary effort to oversee a much smaller research enterprise in the 1970s, was struggling to contend with a growing and broadening workload on scant resources.
At the institutional level, OHRP increasingly imposed sanctions on institutions when it found systematic deficiencies or had concerns regarding systemic protections for research participants. The deficiencies concerned IRB membership; education of IRB members and investigators; institutional commitment; initial and continuing review of protocols by IRBs; review of protocols involving vulnerable persons; and procedures for obtaining voluntary informed consent. In 2001, NBAC issued a comprehensive report on ethical and policy issues in human research. The report recommended that federal oversight be centralized and that various components of the oversight system be revised to clarify regulatory responsibilities and to provide more guidance to assist institutions in formulating and implementing policies (NBAC, 2001).
In 2003, the Institute of Medicine (IOM) issued a report, Responsible Research: A Systems Approach to Protecting Research Participants, which provided an ethical and regulatory framework for institutions to create a system of protections involving investigators, research sponsors, research institutions, health care providers, federal agencies, and patient and consumer groups. The report was written in part in response to system-wide concerns expressed by investigators, research institutions, IRBs, and others. Investigators and research institutions complained of a lack of national guidance on the administrative and ethical requirements of providing adequate protections, and charged that the federal posture was reactive and punitive rather than proactive and positive. Institutions also objected to an overemphasis on documentation, which can lead to unproductive use of time that would be better spent on substantive protections. IRBs complained that the regulatory language is not easily understood and that federal regulators and research sponsors often interpret it in ways that differ from local views. Because the IRB system operates at the local level, variation exists in how these boards operate and in the decisions they might make regarding a given protocol. Although this variation reflects the intent of the original regulations to insert local norms into the review process, some are concerned that this decentralization creates an untenable diversity of expectations for the approval of multisite studies (IOM, 2003; NBAC, 2001).
IRBs themselves are overburdened and at times focus on avoiding risk in the face of rising regulatory pressures. IRB members, who must also fulfill other professional duties and who are often ill rewarded for their IRB service, are reviewing growing numbers of increasingly complex studies that may be conducted at multiple sites and reviewed by multiple IRBs (IOM, 2003).
The IOM committee also noted that research participants too often report that “they do not understand the nature or risks of research, that they find the informed consent process confusing, and that they are frequently divorced from the decision-making processes involved in the conduct of research” (2003, 39-40). It noted that informed consent documents have become increasingly complex and legalistic and too often are used inappropriately to protect the institution rather than the participant. The committee suggested that legal issues be separated from the consent process.
Finally, the IOM committee asserted that the scientific and ethical review of protocols should be equally rigorous. Because IRBs often are not equipped to assess the technical merits of a proposal and because scientific issues can become the focus of debate rather than ethical considerations, the committee recommended that a separate, distinctive review of the scientific merit of a protocol be conducted prior to review by an IRB.
OTHER ETHICAL FRAMEWORKS
Of note, other nonfederal, nonbinding guidelines for the protection of humans in research also are available, many of which were developed by the international community. In addition to the Nuremberg Code (1949), the Declaration of Helsinki (WMA, 2002) specifies requirements for voluntary participation of research participants, informed consent, and independent review of protocols. The declaration contains 32 statements of principle to guide medical research. Its conceptual framework is the medical ethics of the doctor-patient relationship, which is extended to research through the investigator-participant relationship. Other international guidelines, such as those of the International Conference on Harmonisation (ICH) and the Council for International Organizations of Medical Sciences, offer detailed guidance specific to drug trials and GCP. ICH was formed in 1990 and involves government drug regulation authorities and pharmaceutical trade organizations from the European Union, Japan, and the United States. Its guidelines have been adopted formally by FDA (ICH, 1996).
Thus, even though a particular study might not be subject to U.S. regulatory requirements, sponsors or investigators might voluntarily comply with the regulations or with the guidelines widely accepted in the international research community. Moreover, if the study is to be used to support marketing or investigational use in the United States, it must show compliance with ethical and scientific norms (21 CFR 312.120).
The federal government regulates research involving humans through the Common Rule, which builds on the ethical principles articulated in international and national documents over the past 50 years. The regulations rest on two principal objectives in the oversight of human research: the conduct of independent review of research protocols by IRBs and the provision of voluntary informed consent to participate in research. The regulations are enforced by 16 agencies that conduct or sponsor human research.
The federal regulations provide a framework for considering risks and
potential benefits, conducting review and monitoring activities, and reporting adverse events. They also specify the conditions under which informed consent must be obtained and the substantive requirements of consent. Monitoring of institutional activities is conducted at the federal level, and agencies employ various mechanisms for enforcement.
Advisory Committee on Human Radiation Experiments (ACHRE). 1995. Advisory Committee on Human Radiation Experiments—Final Report. Washington, D.C.: U.S. Government Printing Office.
American Medical Association (AMA) Judicial Council. 1946. Supplementary Report of the Judicial Council of the American Medical Association. Journal of the American Medical Association 132:1090.
Beecher, H. K. 1966. Ethics and Clinical Research. New England Journal of Medicine 274(24):1354-1360.
DeMets, D. L., S. J. Pocock, and D. G. Julian. 1999. The agonizing negative trend in monitoring clinical trials. The Lancet 354(9194):1983-1988.
Department of Health and Human Services. Office of Inspector General (DHHS OIG). 1998. Institutional Review Boards: A Time for Reform. Report No. OEI-01-97-00193. Washington, D.C.: DHHS.
Department of Health, Education, and Welfare (DHEW). 1971. The Institutional Guide to DHEW Policy on Protection of Human Subjects. Washington, D.C.: U.S. Government Printing Office.
DHEW. 1974. Protection of Human Subjects. Federal Register 39:18914-18920.
Faden, R. R., and T. L. Beauchamp. 1986. A History and Theory of Informed Consent. New York: Oxford University Press.
Fleming, T. R., S. Ellenberg, and D. L. DeMets. 2002. Monitoring clinical trials: issues and controversies regarding confidentiality. Statistics in Medicine 21(19):2843-2851.
Food and Drug Administration (FDA). 1998. Guideline for the Monitoring of Clinical Investigators. Available at www.fda.gov/ora/compliance_ref/bimo/clinguid.html.
FDA. 2001. Draft Guidance for Clinical Trial Sponsors: On the Establishment and Operation of Clinical Trial Data Monitoring Committees. Available at www.fda.gov/cber/gdlns/clindatmon.pdf.
FDA. 2003. Good Clinical Practices. Available at www.fda.gov/oc/gcp.
Gordon, V., J. Sugarman, and N. Kass. 1998. Toward a more comprehensive approach to protecting human subjects. IRB: A Review of Human Subjects Research 20(1):1-5.
Heller, J. July 26, 1972. Syphilis victims in U.S. study went untreated for 40 years. New York Times, Sec. A-1.
Institute of Medicine (IOM). 2001. Preserving Public Trust: Accreditation and Human Research Participation Protection Programs. Washington, D.C.: National Academy Press.
IOM. 2003. Responsible Research: A Systems Approach to Protecting Research Participants. Washington, D.C.: The National Academies Press.
International Conference on Harmonisation (ICH) of Technical Requirements for Registration of Pharmaceuticals for Human Use. 1996. ICH Harmonized Tripartite Guideline. Guideline for Good Clinical Practice. Geneva: ICH Secretariat, International Federation for Pharmaceutical Manufacturers Association.
Jones, J. H. 1981. Bad Blood: The Tuskegee Syphilis Experiment. New York: The Free Press.
Jonsen, A. R. 1998. The Birth of Bioethics. New York: Oxford University Press.
Moreno, J. D. 1999. Undue Risk: Secret State Experiments on Humans. New York: W. H. Freeman.
National Bioethics Advisory Commission (NBAC). 2001. Ethical and Policy Issues in Research Involving Human Participants: Vol. 1. Bethesda, MD: U.S. Government Printing Office.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (National Commission). 1979. Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: U.S. Government Printing Office.
National Institutes of Health (NIH). 1953. Group consideration of clinical research procedures deviating from accepted medical practice or involving unusual hazard. In: Final Report, Supplemental Vol. 1, 321-324. Washington, D.C.: U.S. Government Printing Office.
Nuremberg Code. 1949. Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2, 181-182. Washington, D.C.: U.S. Government Printing Office.
President’s Commission for the Study of Ethical Problems in Medical and Biomedical and Behavioral Research (President’s Commission). 1981. Protecting Human Subjects. First Biennial Report on the Adequacy and Uniformity of Federal Rules and Policies, and of Their Implementation for the Protection of Human Subjects. Washington, D.C.: U.S. Government Printing Office.
President’s Commission. 1983. Implementing Human Research Regulations. Second Biennial Report on the Adequacy and Uniformity of Federal Rules and Policies, and of Their Implementation for the Protection of Human Subjects. Washington, D.C.: U.S. Government Printing Office.
Public Health Service (PHS). 1966. Clinical investigations using human subjects. In: Final Report, Supplemental Vol. 1, 473-474. Washington, D.C.: U.S. Government Printing Office.
Stewart, W. H. 1966. Clinical research investigations using human subjects. In: Final Report, Supplemental Vol. 1, 473-474. Washington, D.C.: U.S. Government Printing Office.
Tuskegee Syphilis Study Ad Hoc Advisory Panel. 1973. Final Report of the Tuskegee Syphilis Study Ad Hoc Advisory Panel. Washington, D.C.: DHEW.
Wilson, C. 1953. Memorandum for the Secretary of the Army, Secretary of the Navy, Secretary of the Air Force. In Final Report, Supplemental Vol. 1, 308-310. Washington, D.C.: U.S. Government Printing Office.
World Medical Association (WMA). 2002. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects (adopted 18th WMA General Assembly. Helsinki, Finland, June 1964; amended: 29th WMA General Assembly, Tokyo, Japan, October 1975; 35th WMA General Assembly, Venice, Italy, October 1983; 41st WMA General Assembly, Hong Kong, September 1989; 48th WMA General Assembly, Somerset West, Republic of South Africa, October 1996; and 52nd WMA General Assembly, Edinburgh, Scotland, October 2000. Note of Clarification on Paragraph 29 added by the WMA General Assembly, Washington 2002). Ferney-Voltaire, France: WMA.