In the second week of March 1990, headlines across the country described an unusual scientific controversy over the distribution of an investigational drug called dideoxyinosine (ddI) to thousands of patients with acquired immune deficiency syndrome (AIDS). Newspapers reported that patients receiving the drug through a new expanded access program had a much higher death rate than patients enrolled in conventional clinical trials of the drug. In one case, a Harvard faculty member was quoted as saying that death rates in the expanded access program were "a disgrace, an absolute disgrace." But many physicians advised their HIV (human immunodeficiency virus)-infected patients to keep taking the drug. Officials at the Food and Drug Administration (FDA), advocates for people with AIDS, and the drug's sponsor, Bristol-Myers Squibb, attributed most or even all of the disparity in death rates to the fact that patients enrolled in the expanded access program were sicker to begin with than those in the clinical trials.
The ddI controversy exposed sharp differences of opinion within the medical community about the appropriateness of making investigational drugs—drugs not yet approved for marketing by the FDA—available for therapeutic purposes. In August 1989, the Public Health Service (PHS) convened a committee to formalize procedures for making promising investigational agents available to people with AIDS and other HIV-related disorders who could not participate in controlled clinical trials and who had no therapeutic alternatives. The committee's recommendations were still in draft form seven months later, but many people regarded the ddI trial as the prototype of the new "parallel track system." (On May 21, 1990, the Department of Health and Human Services published a proposed policy statement, "Expanded Availability of Investigational New Drugs Through a Parallel Track Mechanism for People with AIDS and HIV-related Disease," in the Federal Register.)

This chapter is based primarily on the presentation of Peter Barton Hutt. Other contributors include Jay Lipner, Lawrence Corey, James Allen, Louis Lasagna, James Eigo, Daniel Hoth, and Ellen Cooper.
The controversy continues today. Opponents of the parallel track worry that it will disrupt efforts to assess the safety and efficacy of drug candidates through conventional clinical trials. They question the value of information gathered through the parallel track system and express concern about exposing large numbers of people to relatively unknown agents. Advocates of parallel track acknowledge that increasing access to investigational drugs without definitive evidence of either safety or efficacy carries serious potential risks, but they believe that many desperately ill patients are willing to assume such risks. After all, they say, investigational drugs are the only hope for thousands of AIDS patients who either cannot tolerate or fail to respond to zidovudine (commonly known as AZT), the only anti-HIV drug licensed in the United States.
One fact often ignored by both sides is that access to investigational drugs for therapeutic purposes is not new in this country. In fact, it is as old as the history of drug regulation itself. Two features that make the current situation somewhat different from the past are (1) the desire to establish a written policy and (2) the large number of people who could receive a single investigational drug in a short period of time. A brief review of earlier approaches to expanded access and a summary of the drug approval process prior to the start of the AIDS epidemic help place the debate over the parallel track mechanism in perspective.
EARLY DEVELOPMENT OF EXPANDED ACCESS
Modern drug regulation in the United States began in 1938 with enactment of the Federal Food, Drug, and Cosmetic Act, prompted by the elixir sulfanilamide tragedy of November 1937 (more than 100 people died when a drug containing the poisonous solvent diethylene glycol was marketed without animal tests). The new act contained one brief section, labeled 505(i), in which Congress authorized the FDA to issue rules governing investigational use of drug candidates. The FDA regulations that resulted from this authorization contained four requirements: (1) an experimental drug had to be labeled "for investigational use only"; (2) the drug could be delivered only to
experts and could be used by them solely for investigational purposes; (3) each expert had to have adequate facilities for investigation; and (4) the sponsor had to have a signed statement from the investigator indicating that the drug would be used solely for investigational purposes until it had been fully licensed.
The regulations did not describe "expert" qualifications or specify the nature of "adequate facilities." In fact, they did not even define "investigational use." Thus, in practice, the sponsor could provide an investigational drug to any physician who was willing to sign the required statement. Questions of expanded access did not arise because there were no substantive barriers to obtaining investigational drugs for therapeutic purposes.
Drug Amendments of 1962
Public attention did not focus again on the drug regulatory apparatus until July 1962, when a story in the Washington Post disclosed links between the experimental drug thalidomide and severe birth defects. Three months later, the U.S. Congress unanimously passed the first major drug amendments.
Surprisingly, the 1962 amendments did not radically alter section 505(i). They authorized regulations for investigational new drugs but did not require the submission of study plans, record keeping, or statements from investigators. The only mandatory provision was that investigators had to obtain informed consent from every subject.
The regulations issued by the FDA in response to the thalidomide tragedy and the new statute provided the first formal structure for the drug development process. Before beginning clinical trials, all sponsors would have to submit an investigational new drug application, or IND. The IND would describe the chemical structure of the new compound and its probable mode of action in the body, identify investigators, describe the results of laboratory and animal tests, and outline specific elements of the study protocol.
Access for Therapeutic Purposes
The FDA press release that accompanied the final regulations in January 1963 addressed for the first time the issue of access to investigational drugs for therapeutic purposes. In an analysis of objections that had been raised to the regulations in draft form, the press release noted, "The proposed regulations were said to deny
extremely important new drugs not yet approved for general distribution to patients who might need them urgently as a lifesaving measure."
The FDA's response set the tone for the next two decades. The press release explained, "The increased flexibility in the regulations will allow the sponsor of a new drug investigation to add new investigators after the program is started. There is no bar in the regulations to giving the necessary instructions to, and obtaining the necessary commitments from, a new investigator by telephone in case this is needed to save a life."
From 1962 until the beginning of the 1980s, access to investigational drugs was an informal process governed primarily by telephone. The FDA had no written policies. If a physician determined that a severely ill patient had no recourse other than an experimental drug, the physician called the FDA and requested access to that drug. Medical officers in the agency evaluated each situation separately and either approved or denied the request. The criteria were simple. Approval required four basic elements: a manufacturer willing to supply the drug, a physician willing to prescribe it, a patient willing to give informed consent, and some basis for believing that the treatment was not an outright fraud or poison.
The flexibility of this system enabled many very sick patients to receive drugs with a minimum of delay and paperwork. But there were also drawbacks to the informal approach. First, the system only worked for patients whose physicians knew what drugs were under investigation; patients treated by physicians outside the mainstream of academic medicine were less likely to have access to experimental therapies. Second, some ineffective or even toxic drugs, such as DMSO (dimethyl sulfoxide), attained widespread distribution among patients whose original illnesses did not justify extreme measures. Finally, the lack of written policies spawned a confusing array of terms and concepts that still cloud discussions and interfere with efforts to develop a more uniform approach to the access problem.
In the 1960s, FDA medical officers permitted access to investigational drugs under several mechanisms: orphan drug INDs, individual investigator INDs, and compassionate use INDs. The orphan drug concept actually predated the 1962 amendments and remains in use today. It refers to drugs developed to treat rare or unusual conditions. The "permanent" orphan drug IND was conceived to provide
access to drugs that would never meet licensure requirements because there were simply too few patients to collect adequate data. (In 1983, Congress passed the Orphan Drug Act to provide certain tax and other financial incentives to the sponsors of therapies for rare diseases.)
The individual investigator IND enabled physicians to obtain experimental drugs for therapeutic purposes when it was not possible to enroll their patients in existing clinical trials. By the end of the 1960s, this concept had been incorporated into the compassionate use IND, which also covered the provision of experimental drugs to patients during FDA review of a new drug application, or NDA (the document submitted by a sponsor after the completion of clinical trials to request permission for marketing).
Two more expanded access concepts arose during the 1970s. Sponsors of controlled trials were permitted to develop concurrent open-label safety studies (also called open enrollment or open protocol). Through these studies, which continue today, thousands of patients received access to experimental drugs at various stages of investigation. Although the FDA requires sponsors of these studies to collect safety data, many observers of FDA policy believe that the primary purpose of the open-label studies is to provide therapy to patients. In 1976, the FDA also accepted the concept of the Group C cancer drug IND, which provides increased access to certain investigational cancer drugs distributed by the National Cancer Institute.
It is important to remember that all of these concepts evolved in the absence of any written policy. Over the years, several groups in Congress and the FDA attempted to develop a more rational approach to the use of investigational drugs for therapeutic purposes, but changes in administration and other political events intervened. Meanwhile, the drug development and approval process itself grew increasingly formal. By 1980, it took an average of 10 years for a new drug to progress from the laboratory to the medicine chest.
Modern Clinical Trials (Non-AIDS Drugs)
With some important exceptions, the basic framework of the drug evaluation process today is similar to that of 10 years ago (although a study by the Pharmaceutical Manufacturers Association suggests that the average time to FDA approval now may be closer to 12 years). If preclinical investigations indicate that a drug has biological activity against a targeted disease and does not cause unacceptable damage to healthy tissues, the drug sponsor requests permission from the FDA
to begin the first of three phases of clinical trials—that is, the sponsor files an IND.
Phase 1 studies usually take a year and may involve up to 50 normal, healthy volunteers. These are short-term tolerance and clinical pharmacology studies; their goals are to begin to establish the drug's safety in human beings and to determine appropriate dose levels and routes of administration. (Phase 1 studies of drugs for life-threatening conditions, such as AIDS and cancer, or of drugs that are very toxic may involve patients with the target disease rather than healthy volunteers. Patient studies are also preferred when investigators shorten preclinical studies to speed drug development; in such cases, the potential for toxicity may be too great to justify giving the drug to someone who has no chance of benefiting from it.)
Phase 2 trials, which usually take two years or more, involve 100 to 300 consenting patients. Investigators gather additional information about possible adverse effects and begin to assess a drug's clinical potential. Most phase 2 studies are randomized, controlled trials. A group of patients receiving the drug, a "treatment" group, is matched with a group that is similar in important respects, such as age, gender, and disease state (factors that could affect the course of the disease or the effect of the investigational drug). The second, or "control," group receives another treatment such as standard therapy or a placebo (an inert substance). Many phase 2 studies are double blind—that is, neither the patient nor the researchers know who is getting the experimental drug. The purpose of double-blind studies is to reduce errors in interpretation caused by unwarranted enthusiasm or other forms of bias.
Phase 3 clinical trials involve many more volunteer patients—several hundred to several thousand—and last about three years. The larger trials allow researchers to acquire more information about efficacy and to identify some of the less common side effects associated with an experimental drug.
If the net results of all three phases of clinical trials are favorable and the sponsor decides to market the drug, it submits a new drug application to the FDA. The NDA must contain all the scientific information gathered in the previous years and typically runs 100,000 pages or more. The average time between the submission of an NDA and final FDA approval is close to three years.
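The rough averages quoted above can be tallied in a short sketch. The stage durations below are the approximate figures cited in this chapter, not fixed regulatory timelines:

```python
# Approximate average durations (in years) for each stage of non-AIDS drug
# development, using the rough figures quoted in this chapter.
stage_years = {
    "phase 1 (tolerance, clinical pharmacology)": 1,
    "phase 2 (randomized controlled trials)": 2,
    "phase 3 (large-scale trials)": 3,
    "NDA review by the FDA": 3,
}

clinical_and_review = sum(stage_years.values())
print(f"clinical testing plus NDA review: about {clinical_and_review} years")
# prints: clinical testing plus NDA review: about 9 years
```

The remaining years of the 10- to 12-year average are accounted for by the preclinical laboratory and animal studies conducted before the IND is filed.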
THE ADVENT OF AIDS
The AIDS epidemic has drawn unprecedented attention to the entire drug approval process and prompted or accelerated a variety of changes—some of which were under consideration before the epidemic began. These changes fall into three categories: efforts to broaden patient and community involvement in developing and testing new products, efforts to shorten the overall development and review process, and efforts to increase access to promising drugs before FDA approval (expanded access).
Throughout most of the 1980s, people with AIDS and their advocates were highly critical of the FDA and other government agencies involved in drug development. There was a perception that government scientists were more interested in maintaining the scientific standards of clinical trials than in providing new options for the thousands of patients who were dying as a result of HIV infection. Government scientists, on the other hand, were frustrated by misconceptions surrounding the drug development process. For example, the role of the FDA is to ensure that drugs marketed in the United States meet established standards of safety and efficacy; the FDA could not initiate or conduct clinical trials on its own, as some patient advocates were suggesting.
Over time, the adversarial relationship has relaxed somewhat, although strong disagreements remain. Persons with AIDS and their advocates now participate on advisory committees within the Public Health Service to provide practical advice about the optimal design and implementation of clinical trials from the patient's perspective. In addition, scientists at the helm of the research effort in AIDS have recognized the need for creative approaches to the problems associated with HIV infection.
One result of this cooperation was the establishment in October 1989 of a new AIDS treatment research initiative called Community Programs for Clinical Research on AIDS (CPCRA), funded by the National Institute of Allergy and Infectious Diseases (NIAID). Before the advent of CPCRA, all federally funded clinical trials of experimental AIDS drugs were conducted by investigators at the National Institutes of Health or at the 47 university-based research hospitals associated with the AIDS Clinical Trials Group (ACTG). (Of course, pharmaceutical companies and community-based physicians have also
conducted important clinical trials of AIDS drugs.) The ACTG consortium was created by NIAID in 1986 to perform the complex multidisciplinary clinical and laboratory studies required for development of new antiviral drugs.
Although AIDS activists and community care providers recognized the contributions made by ACTG investigators, they questioned the need to restrict federally funded clinical trials to university medical centers. They claimed that many important research and clinical questions could be addressed in settings that lacked the technological sophistication of the ACTG institutions. Also, demographic information on patients in ACTG studies revealed that, although some of the large medical centers are also inner-city hospitals that treat underserved patient populations, other ACTUs (AIDS clinical trial units) were not reaching certain patient groups. (Underserved populations have included people of color, women, and intravenous drug users infected with HIV.) As a result, these groups did not have access to potentially beneficial investigational drugs.
CPCRA was designed to address these issues. The 18 diverse CPCRA sites give community care providers and their HIV-infected patients opportunities to participate in clinical trials. The program is designed to take advantage of the clinical expertise acquired by physicians in private practice, in community clinics, and at larger inner-city hospitals. In addition, NIAID seeks, through these new sites, to increase access for underserved populations to experimental therapies. As noted in Chapter 7, however, much more work remains to be done to solve the access problem.
Accelerating the Pace of Drug Development
One of the hardest messages to convey to desperately ill patients has been that no changes in regulations or clinical trials can increase access to drugs unless potential drug candidates are already in the pipeline. Historically, medical science has not fared well in the battle against chronic viral infections such as herpes, hepatitis B, cytomegalovirus, and AIDS. The successes against HIV infection—represented by zidovudine, and perhaps ddI—have resulted from very recent advances in virology, cell culture, and molecular biology.
In 1986, NIAID started the National Cooperative Drug Discovery Group to stimulate new research on targeted development of AIDS drugs. The group's efforts have complemented work by the Preclinical AIDS Drug Development Program at the National Cancer Institute, which screens thousands of natural and synthetic compounds each year
for activity against HIV. As of January 1990, the FDA had granted permission for IND studies involving more than 80 different AIDS-related antiviral or immunomodulating drugs. Experience suggests, however, that fewer than 20 percent of these will survive the trials and approval process.
Improving Response Capabilities
Recognizing that FDA would be called upon to respond rapidly to the new challenges posed by AIDS, then-Commissioner Frank E. Young made a number of administrative and organizational changes at the agency. First, he assigned all AIDS treatments a special 1-AA designation, giving them top review priority. This meant that the FDA intended to act on all AIDS-related NDAs within 180 days of their submission. A new division of antiviral drug products was created within FDA's Center for Drug Evaluation and Research to expedite the review and evaluation of potential AIDS therapies. In addition, FDA established the AIDS Coordination Staff to integrate the agency's various AIDS-related activities and to interact with other agencies and outside groups interested in AIDS drug development.
Perhaps the most fundamental change, however, involved the clinical trials process itself. In October 1988, Dr. Young announced immediate implementation of a formal plan to reduce the time required for human testing of drugs for life-threatening and severely debilitating diseases, such as AIDS, Parkinson's disease, and certain aggressive cancers. The primary effect of the new "expedited development" process is to eliminate phase 3 clinical trials for drugs shown to improve survival or prevent irreversible morbidity. With careful planning of the critical phase 2 studies, the development and review process might be shortened by two to three years.
Expedited development follows a pattern established by the development of zidovudine. In February 1986, after a promising phase 1 trial at the National Cancer Institute and Duke University, researchers started a phase 2 study of zidovudine at 12 medical centers across the United States (the placebo-controlled randomized trial involved patients with AIDS or advanced AIDS-related complex [ARC]). The phase 2 study was stopped in September of that year, when an independent data safety monitoring board found a dramatic
difference in outcomes between the 145 patients receiving zidovudine and the 137 patients receiving placebos (19 patients in the placebo arm of the trial had died, compared with only a single death in the zidovudine group). Burroughs Wellcome, the manufacturer, submitted a new drug application for zidovudine in December 1986. The FDA approved the NDA without a phase 3 clinical trial on March 20, 1987. At the time, officials explained that one reason for the rapid approval of zidovudine was that FDA scientists had had an opportunity to work closely with the drug's sponsor from the very beginning of the development process.
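The disparity that led the monitoring board to halt the trial can be checked with a back-of-the-envelope calculation. The sketch below applies a one-sided Fisher exact test to the published counts (19 deaths among 137 placebo patients versus 1 death among 145 zidovudine patients); it is an illustration of why the difference was considered dramatic, not a reconstruction of the board's actual analysis:

```python
from math import comb

# Outcome counts from the phase 2 zidovudine trial described above:
# 19 of 137 placebo patients died, versus 1 of 145 zidovudine patients.
placebo_n, placebo_deaths = 137, 19
azt_n, azt_deaths = 145, 1

total_n = placebo_n + azt_n                 # 282 patients in both arms
total_deaths = placebo_deaths + azt_deaths  # 20 deaths overall

# One-sided Fisher exact test: under the null hypothesis of no treatment
# effect, the number of deaths falling in the placebo arm follows a
# hypergeometric distribution. Sum the tail at or beyond the observed count.
def tail_probability(k_min: int) -> float:
    return sum(
        comb(total_deaths, k) * comb(total_n - total_deaths, placebo_n - k)
        for k in range(k_min, total_deaths + 1)
    ) / comb(total_n, placebo_n)

p_value = tail_probability(placebo_deaths)
print(f"one-sided p-value: {p_value:.1e}")  # far below any conventional threshold
```

A split this lopsided would arise by chance far less than one time in ten thousand, which helps explain why the data and safety monitoring board recommended stopping the placebo arm early.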
Current procedures for expedited development specify that the FDA will meet with drug sponsors to help devise efficient animal and human studies—studies that answer vital questions about safety and efficacy in the least amount of time possible. The FDA also monitors the progress of clinical trials and, if necessary, helps the sponsor develop appropriate postmarketing studies to provide additional information about risks, benefits, optimal uses, and dosages. The FDA approval process for drugs in the expedited pathway takes into consideration the severity of the disease being treated and the availability of alternative therapies, as well as the statutory criteria for approval.
The urgency created by the AIDS epidemic also has focused attention on two approaches to expanded access: the treatment IND and the parallel track protocol. These mechanisms, which incorporate the expanded use practices that began in the 1960s, evolved from a growing awareness on the part of drug sponsors, government scientists, and others that the informal procedures of the past would not be sufficient to handle the distribution of investigational drugs to AIDS patients. The complexity of HIV infection and the potential toxicity of some drug candidates discouraged FDA medical officers from approving expanded access protocols for AIDS drugs on the basis of a few quick telephone conversations. There also was concern that the volume of requests might become overwhelming.
Treatment Investigational New Drugs
The treatment IND first emerged as part of a long-term effort to incorporate the concept of expanded access into the IND regulations.
In June 1983, the FDA issued proposed regulations that included a very broad interpretation of the use of investigational drugs for therapeutic purposes: at any time during the investigational process, the FDA could approve a treatment protocol for any patient with a serious disease (the definition of "serious" was left to the discretion of the patient and physician). The proposed interpretation would have incorporated virtually all of the older versions of expanded access, including the compassionate use IND and the orphan drug IND.
Some critics believe that when the final IND regulations emerged in 1987, the definition of the treatment IND was much narrower than the 1983 proposal had envisioned. The treatment IND mechanism allows patients suffering from serious or life-threatening conditions for which there is no satisfactory alternative therapy to obtain a promising experimental drug. Clinical evidence must be available to show that the drug is relatively safe and that it "may be effective." In addition, controlled clinical trials must be completed or ongoing, and the sponsor must be pursuing marketing with "due diligence." Others at the FDA argue that the only real difference between the 1983 and 1987 versions of the regulations was that the 1987 announcement received a great deal of publicity, which reminded the public that the treatment IND was an available mechanism.
A government scientist reports that, as of March 12, 1990, the FDA had approved 18 treatment INDs for conditions ranging from AIDS to respiratory distress syndrome in infants. Almost 20,000 patients had obtained access to drugs not yet approved for marketing. Nevertheless, persons with AIDS and their advocates say that the treatment IND has fallen far short of their expectations. They suggest that the FDA's interpretation and implementation of "may be effective" have been too rigorous—too close to the standard used for final approval of a drug. With one exception, they say, treatment INDs have simply bridged the gap between the end of clinical trials and full FDA approval. They have not increased access to drugs at earlier stages of development or helped patients who were ineligible for conventional clinical trials.
Another criticism of the treatment IND regulations has been that they increased, rather than decreased, confusion about the parameters of expanded access. People inside and outside the government had hoped that the regulations would furnish a framework for all of the different approaches to providing experimental drugs to desperately ill patients. Instead, the regulations defined one particularly narrow approach and left other options open. Early dissatisfaction with the
treatment IND led to calls for a more flexible solution to the access problem.
For almost a year after the release of the new IND regulations, patient advocates, community physicians, and government scientists exchanged ideas about other possible ways to expand access to experimental drugs. Finally, at a meeting in San Francisco in June 1989, Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, presented the concept of the "parallel track" protocol. The parallel track would make selected drugs available to HIV-infected patients who could not participate in conventional clinical trials and who had no therapeutic alternatives, without disrupting the progress of controlled clinical trials. Parallel track protocols could be approved for promising investigational drugs when the evidence for effectiveness was less than that required for a treatment IND.
Several months later, an advisory committee convened by the FDA and a subgroup convened by the National AIDS Program Office began efforts to define the structure of the parallel track system. After a lengthy review process, they decided that parallel track protocols could be implemented within the framework of existing regulations. In December 1989, they submitted a proposed policy statement explaining the basic outlines of the parallel track to the Office of the Secretary of Health and Human Services. Drugs would be considered for the new track only if manufacturers could provide the following:
information showing promising evidence of efficacy based on an assessment of all available laboratory and clinical data, as well as sufficient information to recommend an appropriate starting dose and preliminary pharmacokinetic and dose-response data;
evidence that the investigational drug is reasonably safe, taking into consideration the intended use and the prospective patient population;
a description of the intended patient population;
evidence that the defined patient population lacks satisfactory alternative therapies;
assurance that the manufacturer is willing and able to produce sufficient quantities of the drug for both controlled clinical trials and the parallel track;
a statement of the status of existing controlled clinical trial protocols (drugs will be considered for parallel track only after protocols for phase 2 controlled clinical trials have been approved by the FDA; also, patient enrollment in phase 2 controlled trials must start before or concurrently with the release of drugs for parallel track);
an assessment of the impact that the parallel track study may have on patient enrollment in controlled clinical trials and a proposed plan for monitoring progress of the controlled trials; and
information describing the educational efforts that will be undertaken by the manufacturer or the sponsor to ensure that participating physicians and potential recipients have sufficient knowledge of the potential risks and benefits of the investigational agent.
Evidence for safety and efficacy might come in part from expanded phase 1 trials. As noted earlier, phase 1 trials for drugs for AIDS and other life-threatening diseases often involve persons with the disease instead of healthy volunteers. The expedited development process and the potential increase in the number of people who might get very early access to an experimental drug for therapeutic purposes have placed pressure on investigators to get as much information as possible from phase 1 trials. For example, the authors of the proposed policy statement on the parallel track indicate that expanded phase 1 trials should provide some information about potential interactions between an investigational drug and other drugs commonly used in the patient population. Other physicians suggest that expanded phase 1 trials should compare different doses of an experimental drug, primarily to avoid problems similar to those that arose with zidovudine. (Two years after the FDA approved zidovudine, a randomized trial carried out by the ACTG revealed that patients taking 600 milligrams per day of the drug did as well as patients taking the recommended dose of 1.2 grams per day. If this had been known sooner, some patients might have avoided adverse reactions, and many more would have been spared unnecessary expense.)
The proposed policy statement on parallel track also outlines eligibility requirements for patients. First, patients must have clinically significant HIV-related illness or be at imminent health risk as a result of HIV-related immunodeficiency. Second, patients must be unable to participate in related controlled clinical trials, either because they do not meet entry criteria (for example, laboratory test results are not within specified limits), because they are too sick, or because
participation would create undue hardship (the nature of possible hardships, such as travel time to a research center, must be described in the parallel track protocol). Finally, physicians who wish to enroll a patient in the parallel track must provide evidence that existing FDA-approved therapies for the condition are contraindicated for that patient, that the patient cannot tolerate them, or that they are no longer effective.
Close monitoring of the parallel track will be essential to ensure that serious adverse effects (or, conversely, unexpected benefits) are recognized at the earliest possible moment. According to the proposed policy statement, sponsors will be required to establish a data and safety monitoring board (DSMB) with responsibility for overseeing the parallel track protocol and for comparing information gathered from the parallel track with information gathered from related clinical trials. The recent experience with ddI, described at the beginning of this chapter, underscores the importance of reviewing all available materials. Although data collection in the parallel track will be minimal compared with data collection in controlled trials, the DSMB should have a sufficient basis for comparison. If necessary, the DSMB or its equivalent may recommend to the FDA, to the sponsor, or to the NIAID AIDS Research Advisory Committee that the parallel track protocol—and possibly related clinical trials—be terminated.
In conventional clinical trials, educational materials and informed consent documents that describe the potential risks and benefits associated with an experimental drug must be approved by an institutional review board (IRB) at each participating institution. The PHS working group, however, determined that such an arrangement might be impractical for parallel track protocols, in part because many community physicians who wished to participate in the parallel track would not have access to IRBs. In addition, the time required to provide sufficient information to hundreds of IRBs around the country would defeat the main purpose of the parallel track—rapid dissemination of investigational drugs to desperately ill patients.
To overcome this problem, the working group has proposed a national human subjects protection review panel to provide continuing ethical oversight of all parallel track protocols. The panel would have a diverse membership, including persons with AIDS, physicians, government scientists, and others. It would be responsible for establishing the types of information that must be given to patients and for approving all informed consent procedures.