Operations, Oversight, and Funding of Cancer Clinical Trials
Cancer clinical trials are highly complex and represent a major research undertaking. They require hundreds of steps with numerous decision points, and their review processes are multilayered and iterative because multiple oversight bodies have jurisdiction over a trial. The primary focus of the Cooperative Group Program is large, definitive, randomized Phase III studies and the development efforts preceding these trials (NCI, 2006). Phase III trials are considered the “gold standard” for changing medical practice because their results are used to obtain Food and Drug Administration (FDA) approval, establish practice guidelines, and make insurance coverage decisions. They are also the most complex and costly trials to conduct. These large-scale clinical trials necessitate interactions among numerous stakeholders, including multiple governmental agencies, academic medical centers, community practices, patients, and industry. To improve the system as a whole, a revision of the roles of all these stakeholders must be considered.
This chapter describes the organization, oversight, and funding of the National Cancer Institute (NCI) Cooperative Group Program, as well as the processes and collaborations needed to develop, launch, and complete a large-scale cancer clinical trial. The chapter identifies inefficiencies and limitations of the current system and describes the committee’s recommendations, which aim to improve the speed, efficiency, and effectiveness of cancer clinical trials, especially those that the Cooperative Groups undertake.
ORGANIZATION OF THE COOPERATIVE GROUP PROGRAM
The Cancer Therapy Evaluation Program (CTEP), which is part of the Division of Cancer Treatment and Diagnosis (DCTD) of NCI, administers the Cooperative Group Program, which represents a major component of DCTD’s extramural research activities. The NCI Cooperative Groups were originally organized by geographic area or, in some cases, by type of disease or therapeutic modality. Each Cooperative Group includes a large network of physicians, statisticians, nurses, clinical research associates, pharmacists, patient advocates, and other affiliated investigators. The Groups operate independently and have their own administrative structures, operating procedures, and committees. Each Group has an operations office and statistical center overseen by the Group chair and Group statistician, respectively. To be involved with a Cooperative Group, institutions must apply for membership and meet that Group’s eligibility criteria, including accrual potential and the ability to comply with Group standards and federal requirements. Each institution participating in a Cooperative Group is represented by a principal investigator, who manages the institution’s activities within the Group (Mauer et al., 2007).
Institutions participate in the Cooperative Groups as main member institutions, affiliates of a main member institution, or members of participating Community Clinical Oncology Programs (CCOPs). The main member institutions are generally academic medical centers or other major medical centers that are centrally involved in Cooperative Group activities. Main member institutions enroll a significant number of patients in clinical trials and also contribute scientific expertise and other resources to Group activities. Affiliate members, designated by the main member institutions, include community-based organizations and physicians’ practices and have lower patient accrual rates.
Created in 1983, “the CCOP network allows patients and physicians to participate in state-of-the-art clinical trials for cancer prevention and treatment while in their local communities,” according to NCI (2009b). The CCOP network can include hospitals, clinics, health maintenance organizations, groups of practicing physicians, or a consortium that agrees to work with a principal investigator through a single administrative unit (Mauer et al., 2007). Each CCOP chooses to join one or more CCOP Research Bases, which are NCI-designated Cancer Centers or Cooperative Groups that design, develop, and conduct clinical trials (NCI, 2009b).
OVERSIGHT OF CLINICAL TRIALS
Cancer clinical trials are highly regulated activities. Multiple agencies of the U.S. Department of Health and Human Services (HHS) review and
provide oversight of cancer clinical trials, including NCI, FDA, the Office for Human Research Protections (OHRP), and the Office for Civil Rights (OCR). Many reviews are required before a Cooperative Group clinical trial can begin. These include reviews undertaken by the disease site and other scientific committees of the Cooperative Groups, various committees and branches of NCI, institutional review boards (IRBs), comprehensive cancer centers, CCOPs and their affiliates, and, in some cases, FDA and industry sponsors (Table 3-1). Additional oversight is required during the conduct of the trial and at the closure of the trial. The many oversight bodies have different objectives and responsibilities, and thus they seek similar and overlapping, but not identical, information and actions for compliance. This section provides a brief overview of Cooperative Group clinical trials oversight, with emphasis on issues that the committee considered most relevant to improving the clinical trials system.
TABLE 3-1 Types of Reviews Required to Develop a Cooperative Group Clinical Trial, by Stakeholder
NCI Oversight of Cooperative Group Trials
The cooperative agreements that provide funding to the Cooperative Groups stipulate NCI review and oversight at each step of the clinical trial process, including selection of trials to be conducted, protocol development, and trial operations (NCI, 2006). The role of CTEP staff, as described in the NCI clinical trials Cooperative Group Program Guidelines (NCI, 2006), is to “assist, facilitate, and assure optimal coordination of Group activities. CTEP staff have very specific and well-defined responsibilities for the oversight and review of Group clinical trials and for investigational agent development.” Given this central position of NCI in the clinical trials system, the committee recommends that the current roles of NCI as well as the Cooperative Groups be reevaluated.
The 2005 report by the Clinical Trials Working Group (CTWG) recommended several ways to improve NCI oversight of cancer clinical trials (NCI, 2005b; see also Appendix A). In response to the recommendations of the CTWG, NCI created a number of offices, committees, and subcommittees, as indicated in Table 3-2 and Figure 3-1.
Trial Concept Selection
Investigators within the Cooperative Groups develop ideas for new cancer clinical trials, and these suggestions percolate through Cooperative Group committees to the Group leadership. Funding for the Cooperative Groups is based on past accomplishments but is not provided on a per trial basis or on the basis of specific trial proposals (see the section on funding for cancer clinical trials). However, all trial concepts that the Groups generate must be reviewed and approved by CTEP before they are launched. Because an excess of trials with poor enrollment raised concerns that prioritization of the trials was inadequate, the CTWG recommended the creation of a network of scientific steering committees (Box 3-1) that would leverage Cooperative Group, inter-Group, Specialized Programs of Research Excellence, and Cancer Center structures to work with NCI staff on the design and prioritization of Phase III trials to better allocate resources, increase scientific quality, and reduce duplication (NCI, 2005b; see also Appendix A). With this new organizational setup, principal investigators submit the concept for a clinical trial to CTEP for review and approval by the appropriate steering committees, with the goal of prioritizing them.
TABLE 3-2 NCI Oversight of Cancer Clinical Trials

Office, Committee, or Subcommittee: Role

NCI Office, Coordinating Center for Clinical Trials: Established in 2006; supports the implementation of the initiatives of the CTWG and the Translational Research Working Group (TRWG)

Clinical and Translational Research Operations Committee (CTROC): Established in 2005; an internal committee that provides strategic oversight for NCI clinical trials and translational research

Clinical Trials and Translational Research Advisory Committee (CTAC): Established in 2007; provides extramural oversight for implementation of the CTWG and TRWG initiatives, including steering committees

CTAC Subcommittees/Working Groups

Investigational Drug Steering Committee (IDSC): Provides strategic input into the clinical development (early-phase) plans for new agents for which the Cancer Therapy Evaluation Program holds the investigational new drug application

Disease-Specific Scientific Steering Committees (SCs): Prioritize concepts for Phase III and selected Phase II therapeutic clinical trials; refine and collaborate on concepts through the use of task forces, when appropriate

Patient Advocate Steering Committee: Develops and shares best practices for patient advocate participation in steering committees; identifies common concerns and needs and proposes potential solutions; disseminates information from steering committees to the appropriate communities; ensures that concept evaluations consider the patient community at large, with a special focus on minority and underserved populations

Clinical Trials Management System (CTMS) Steering Committee: Provides strategic advice for the CTMS work space, advising on project selection, prioritization, and oversight

Ad Hoc Coordination Subcommittee: Provides advice on how to foster collaboration among the various components of the NCI-sponsored clinical trials infrastructure, to develop a fully integrated clinical trials system

Ad Hoc Public/Private Partnership Subcommittee: Provides advice on how to enhance NCI-sponsored clinical trials through collaborative interactions with the private sector

Cooperative Group Clinical Trials Funding Model/Complexity Model Working Group: Charged with developing a model for aligning reimbursement of Phase III treatment trials with trial complexity, to compensate for the additional costs

Correlative Science Working Group: Charged with developing validation standards and prioritization criteria for correlative science studies associated with Phase III trials

Operational Efficiency Working Group: Charged with developing approaches to cut timelines in half

BOX 3-1 Clinical Trials and Translational Research Advisory Committee Steering Committees

Investigational Drug Steering Committee (IDSC) for Early-Phase Trial Prioritization

Membership includes principal investigators of NCI’s early-phase U01 grants and N01 contracts, representatives from the Cooperative Groups, and other content experts. The committee has task forces in the areas of signal transduction, biomarkers, angiogenesis, clinical trial design, pharmacology, immunotherapy, PI3K/Akt/mTOR (PAM), cancer stem cells, DNA repair, and programmed cell death. The IDSC has also developed recommendations in several of these areas.

Disease-Specific Scientific Steering Committees

NCI established disease-specific scientific steering committees with the goals of increasing information exchange at an early stage of trial development; increasing the efficiency of clinical trial collaboration; reducing trial redundancy; and developing, evaluating, and prioritizing trial concepts. These committees are charged with prioritizing, refining, and collaborating on concepts for Phase III and selected Phase II therapeutic clinical trials. The committees use task forces when appropriate, convene planning meetings to identify the critical issues and questions about the disease to be studied, and periodically review accrual and unforeseen implementation issues.

The initial committees included the Gastrointestinal Cancer, Gynecologic Cancer, and Head and Neck Cancer Committees. Subsequent committees included the Genitourinary Cancer, Breast Cancer, and Thoracic Malignancy Committees and three committees for adult Hematologic Malignancies (Leukemia, Lymphoma, and Myeloma). Committees on brain cancers and pediatrics are in development. The full transition to disease-specific steering committees is expected in 2010.

SOURCES: NCI, 2009a,f.

This approach to concept review remains inefficient and is not sufficiently effective in prioritizing trials. Since the steering committees were formed, concept proposals have grown significantly longer (they are now about 25 pages), making the review process more arduous. Multiple layers of review still slow the process, and trial concepts are still not ranked against each other, as is usually done in peer review. Steering committees review and vote up or down on trial concepts as they are submitted, and NCI staff actively participate in the review process, unlike other NCI peer review groups. As of January 1, 2010, 62 percent of concepts for Phase III trials reviewed by the steering committees had been approved, whereas the historic approval rate before the implementation of the committees was about 65 percent. The approval rate for Phase II trial concepts was 53 percent. In addition, there is little interaction among the disease-specific steering committees to determine trial priorities across disease categories, nor is there consideration of how the trial portfolio should be balanced between Phase II and Phase III trials, although
they are charged with guiding the development of “strategic priorities” (NCI, 2005b). A possible alternative approach might be for the steering committees to identify research priorities and then issue requests for proposals to address them. In any case, the trial concept review process should be strengthened and streamlined, and it should entail the evaluation of concise proposals (including the intended statistical design) that are ranked against each other. The emphasis should be on scientific strength and opportunity, innovation, feasibility, and importance to improving patient outcomes. In addition, the steering committees should operate independently of NCI staff, with NCI taking a more traditional role of facilitating the review process rather than actively participating in it, and they should focus primarily on the prioritization of clinical needs and scientific opportunities and on facilitating communication and cooperation among the Cooperative Groups.
Once CTEP approves a trial concept, the principal investigator and other key staff develop a full study protocol that must again be reviewed and approved by various branches within CTEP (Table 3-1). Although the Cooperative Group guidelines state that protocols can be “approved with recommendations,” in which case investigators are asked to give serious consideration to any recommendation included in the consensus review but are not obligated to amend the study, reviewers generally do not distinguish between major and minor review concerns. The committee recommends that all review bodies distinguish between major concerns (those regarding patient safety and critical scientific flaws, which must be addressed) and minor concerns (which should be considered but are not obligatory).
Moreover, if changes are made before activation of the study, the investigators must send CTEP a revised protocol for review that details any changes in the previous CTEP-approved document. This policy includes changes to the protocol that are requested by an IRB subsequent to CTEP approval (see also the section on oversight of trials by IRBs). Similarly, minor changes requested by NCI can trigger iterative reviews by IRBs. Additional duplicative and iterative reviews can further slow the process when a trial involves an investigational new drug (IND) or an investigational device exemption (IDE), as both FDA and NCI are involved in protocol review and development (see also the section on FDA oversight). The committee recommends that federal oversight be more flexible in allowing minor amendments to the protocol or consent form to fast-track the chain of reapprovals.
In sum, the protocol development process is arduous and time-consuming. Months are often consumed by multiple re-reviews that sometimes address only minor changes. Given the funding limits and voluntary nature of the Cooperative Group Program, it can be difficult for the Groups to devote sufficient staff time to rapidly develop and amend a protocol as the process proceeds, further compounding delays caused by expectations for revisions and re-review (IOM, 2009c). The provision of funds for professional project managers could ease the workloads of principal investigators and greatly facilitate a rapid review process and adherence to timelines. As described in subsequent sections of this chapter, improved processes are also needed to reduce the time required for protocol development and trial launch. For example, the use of standardized templates for some portions of the protocol might result in fewer iterative reviews and speed the review process.
Once a trial is launched, NCI takes a direct role in overseeing quality control, data and safety monitoring, data management and analysis, and compliance with federal regulatory requirements (NCI, 2006). For example, an NCI program director assisted by the Biometric Research Branch (BRB) staff assesses Cooperative Group compliance with NCI-established policies on data and safety monitoring boards for all Cooperative Group Phase III trials. At the request of CTEP, the BRB staff also review mechanisms established by the Cooperative Group for data management and analysis. BRB staff make recommendations with the goal of ensuring that data collection and management procedures are adequate for quality control and analysis yet are sufficiently simple to encourage maximum participation of physicians entering patients into studies and to avoid unnecessary expense. Data must be made available for external monitoring as well, as required by NCI’s agreement with FDA relative to NCI’s responsibility as sponsor of a therapeutic agent (NCI, 2006).
The Clinical Trials Monitoring Branch (CTMB) of CTEP provides direct oversight of each Cooperative Group’s monitoring program, which includes on-site auditing. CTMB is responsible for establishing guidance for the conduct of quality assurance audits and for overseeing and monitoring the compliance of the Groups, the CCOP research bases, and the Cancer Trials Support Unit (CTSU) with NCI’s monitoring guidelines. CTMB also monitors compliance with applicable federal regulations. CTMB staff may attend certain on-site audits, and they review audit reports and findings and assess the adequacy and acceptability of any corrective actions. CTMB staff also review and provide advice regarding the mechanisms established by the Group for quality control of the therapeutic and diagnostic modalities that it uses in its trials (NCI, 2006).
In addition to overseeing the conduct of Cooperative Group clinical trials, NCI provides some logistical support (NCI, 2006). For example, the Pharmaceutical Management Branch distributes investigational new agents for which DCTD is the sponsor; however, NCI does not provide those services for other agents. Faster trials could be fostered through more active and consistent support from NCI. Thus, the committee recommends that NCI file more IND applications for agents to be tested in approved protocols and that NCI devote more funds to the distribution of drugs for approved protocols to ensure an adequate drug supply for high-priority studies. These are time- and resource-intensive tasks, but an expanded support role for NCI would help Group investigators gain access to more experimental therapeutic agents, reduce the time that the Groups spend negotiating with industry to acquire agents before the launch of a trial, and help ensure the availability of the agent during the trial.
NCI could facilitate the more timely completion of clinical trials in other ways as well. NCI should provide resources and technical assistance to facilitate the rapid adoption of a common patient registration system. For example, the Oncology Patient Enrollment Network would provide a standardized Internet-based environment for the enrollment of all patients in all Cooperative Group trials. NCI should also provide a common remote data capture system. The availability of such a system would permit sites to enter patient-level data into a clinical database over the Internet. The implementation and adoption of these structured electronic tools would increase consistency across trials, Groups, and sites; conserve resources by reducing the workload associated with patient enrollment and follow-up; allow more timely data review; and enhance the knowledge gained from a trial. However, these transitions can be costly and temporarily disruptive, so support from NCI to facilitate rapid implementation is important.
NCI should also facilitate the establishment of more efficient and timely methods for ensuring that trial data are complete and accurate while the trial is ongoing. Many Groups wait until completion of a trial before beginning the necessary steps to ensure data quality because they lack the resources to check the data more frequently, but this can result in significant delays in analyzing and publishing the results. NCI should also develop standardized case report forms that meet regulatory requirements. The language for most clinical data elements in NCI-sponsored trials has been standardized by the NCI Common Data Elements, but standardized report formats would also simplify reporting across multiple trials and multiple sites.
Oversight of Trials by IRBs
In the 1970s, concern about the inadequate protection of human subjects in research led to federal regulations and the establishment of IRBs (Beecher, 1966; HEW, 1979). At that time, most clinical research was done at single sites by single investigators. Since then, the increasing emphasis on evidence-based clinical practice has greatly increased the number of clinical trials. There has also been substantial growth in the number of multicenter trials as well as an increase in the complexity of clinical trials. In addition, the purview of IRBs has been expanded as additional regulations regarding human subjects research have been developed, such as the Privacy Rule promulgated under the provisions of the Health Insurance Portability and Accountability Act (HIPAA). These combined changes have overburdened IRBs and have fostered long delays in the review of study protocols and informed-consent forms (ICFs) (IOM, 2002).
IRB Oversight of Multicenter Trials
In many cases, each site participating in a multicenter trial will have its own IRB review of a study, which causes “unnecessary duplication of effort, delays and increased expenses in the conduct of multi-center trials,” as noted in a recent FDA guidance (FDA, 2006). For example, one study (Greene and Geiger, 2006) found that one-quarter of the 20 trials reviewed experienced delays (of up to 8 months) because of multiple IRB negotiations.
Multiple IRB reviews do not necessarily improve patient protection, as evidenced by the numerous inconsistencies in the rulings of local IRBs reviewing the same study (Gold and Dewa, 2005; Greene and Geiger, 2006). One survey of participating sites in a multicenter genetic epidemiology study found that the participating local IRBs used different evaluation criteria, which resulted in requirements for the use of different numbers of consent forms at each institution participating in the trial (McWilliams et al., 2003). Another analysis found that of 20 multicenter clinical trials reviewed, 17 experienced inconsistencies both in the IRBs’ review processes and in their recommendations (Greene and Geiger, 2006). McWilliams and colleagues concluded, “Lack of uniformity in the review process creates uneven human subjects protection and incurs considerable inefficiency” (McWilliams et al., 2003). The lack of consistency in consent requirements among IRBs can also lead to selection bias and decrease statistical power (Jamrozik, 2000).
In addition, the bulk of the changes that IRBs request are often minor changes to ICFs that increase the reading level of the forms, thus making them more difficult to understand (Burman et al., 2003). Furthermore, local IRBs often ask for changes that are not local in nature (Burman et al., 2003; Tully et al., 2000). One review found that less than 2 percent of the changes made to consent forms were due to local context issues (Burman et al., 2003).
Many local IRBs also lack the expertise needed to evaluate certain studies with complex scientific and ethical dimensions, such as those using genetic tests (McWilliams et al., 2003). Finally, the integrity of patient protections is also threatened by excessive IRB workloads (HHS, 1998).
Recognizing these shortcomings, in 1998 the deputy inspector general of HHS published a report calling for the reform of IRBs (HHS, 1998). The Armitage report from the NCI Clinical Trials Program Review Group, commissioned by the NCI director, similarly recommended that NCI streamline or eliminate redundant processes and procedures (NCI, 1997; see also Appendix A). NCI responded in 2001 by establishing two central IRBs (CIRBs) for NCI Phase III multicenter trials (first one for adult trials and, later, one for pediatric trials), to avoid the need for such a trial to be reviewed separately by dozens of IRBs throughout the country. CIRB members include patient advocates, physicians, nurses, pharmacists, statisticians, and an ethicist.
The CIRB does the initial and continuing review of national studies (without charge) while allotting to local IRBs the responsibility of ensuring that the protocol and ICF are appropriate for the local population and institutional requirements. With this “facilitated review,” a local IRB reviews the CIRB-approved study for considerations that apply only to the local context. A subcommittee or the chair can therefore perform the local IRB review, so there is no need to wait for the next meeting of the full local IRB.
Such facilitated reviews should allow local sites to open studies within days, making it easier to conduct trials of treatments for rare diseases and for patients nearing the end of the eligibility window to participate in clinical trials. In theory, a CIRB also enhances the protection of research participants by “providing consistent expert IRB review at the national level before the protocol is distributed to local investigators” (Adler, 2009). A centralization of ethical review is ongoing in other countries for similar reasons. For example, the United Kingdom has transitioned to a more centralized system that is faster and has freed up resources for reviewer training to ensure consistently high-quality ethical reviews. Clinician investigators and academic and commercial sponsors in the United Kingdom generally agree that this new, more centralized ethics system has been a major improvement. However, it should also be noted that faster and more consistent Ethics Committee reviews had the effect of highlighting delays that subsequently arose with other aspects of regulatory review (research and development [R&D] approval) at each participating site. In effect, the delays previously seen in ethics review were shifted to what is now the slowest component of the full system. The latter delays are now being addressed with a more centrally coordinated R&D review system, but that transition is not yet far enough along to demonstrate whether total study start-up time will be shortened substantially.
Several evaluations have revealed the benefits of NCI’s CIRB. A survey in 2006 found that 80 percent of primary investigators who responded to the survey believed that participation in the CIRB saved them some or a lot of time and effort, with 65 percent rating their overall experience with the review board as good or very good (RTI International, 2007). Another analysis of the costs and benefits of CIRBs showed that the CIRB saves the local IRB and investigators time and effort (Wagner et al., 2009). Wagner and colleagues estimated that institutions using the CIRB for the initial review save $563 per study. One study that compared the use of the NCI CIRB to the use of local IRB methods found an “increase in productivity with fewer staff hours after initiation of the Central IRB” and that the CIRB process “is most efficient and provides increased benefits in terms of time, costs, and patient safety as well as other measures” (Hahn, 2009). Another study found that although a CIRB increased the workload for IRB administrators, IRB chairs, and others who conduct facilitated reviews, it improved the efficiency of the review for local IRB members, investigators, and research coordinators (McArthur et al., 2008). In addition, the study found that the use of the CIRB enabled local IRBs to focus on high-risk (earlier-phase) trials.
The NCI CIRB has been sanctioned by OHRP, which helped NCI develop its CIRB, and is officially endorsed by the American Society of Clinical Oncology. In addition, FDA issued guidance in 2006 stating that “use of a centralized IRB review process is consistent with the requirements of existing IRB regulations” (FDA, 2006) and urged those involved in multicenter clinical research to consider the use of a CIRB.
NCI data indicate that, as of April 2009, more than 300 institutions had enrolled to participate in the CIRB, nearly 9,000 facilitated reviews had been conducted for adult or pediatric studies, and the number of accepted facilitated reviews had steadily increased over the preceding decade (Adler, 2009). However, although more than half of NCI Cooperative Group pediatric sites participate in the central IRB, only about one-quarter of the adult sites do (IOM, 2009c). An American Association of Medical Colleges (AAMC) survey of U.S. medical schools found that most had never used a CIRB (Loh and Meyer, 2004).
Numerous reasons have been given for the lack of participation in a CIRB, including concerns about liability and accountability, an unwillingness to take the additional steps or provide the additional documentation needed for a facilitated review, and local concerns (AAMC, 2006; McArthur et al., 2008; McNeil, 2005; OHRP et al., 2005). On the basis of the information gathered by the Science and Technology Policy Institute (STPI), the major barriers to the use of a CIRB were divided into two categories: those that could be mitigated through efforts by NCI and its CIRB, and those that would be more difficult to resolve. In regard to the former, a number of suggestions were made, including working with OHRP to develop official guidance on implementing the CIRB process at local sites; developing a set of best practices for CIRB implementation at sites, including model standard operating procedures; decreasing the time required to post materials; posting complete review materials; improving the response time for questions; and designating a single point of contact for each CIRB site (McArthur et al., 2008). NCI is taking action on many of these suggestions.
The barriers identified as more difficult to resolve included the increased workload for the local IRB chair and administrative staff, legal issues, and a loss of full local control. For example, the STPI analysis found that about half of the Cancer Centers that responded cited increased workload for IRB administrators, legal liability, regulatory compliance or control concerns, and local issues as the main barriers to using a CIRB. In addition, the U.S. Department of Veterans Affairs (VA) chose not to allow VA hospitals and other sites enrolling veterans to use NCI’s CIRB (McArthur et al., 2008) but, instead, recently implemented its own CIRB. This variability, even among federal agencies, makes it more difficult to undertake clinical research.
Unless contractual agreements state otherwise, many local IRBs view themselves as being accountable and legally liable for any harm incurred by patients during a trial that had a facilitated review. This makes some IRBs resistant to parceling out any of the review responsibilities to a CIRB that will not be responsible for any patient harm that develops (Wechsler, 2007). There also is concern about the potential for regulatory noncompliance, given the inconsistencies among the federal regulations regarding the protection of human research subjects (AAMC, 2006; McArthur et al., 2008; OHRP et al., 2005). As noted above, multiple agencies within HHS review or have regulatory jurisdiction over cancer clinical trials, including NCI,
10 Personal communication, Jeffrey Abrams, National Cancer Institute, September 23, 2009.
FDA, OHRP, and OCR; and at times, different federal regulations conflict with one another, as well as with state regulations. Indeed, the HHS Secretary’s Advisory Committee on Human Research Protections (SACHRP) and the Institute of Medicine (IOM) have recommended harmonization of the regulatory language, guidance, and policies associated with the Common Rule12 and the HIPAA Privacy Rule13 because of the difficulties that investigators and IRBs encounter when they try to reconcile discrepancies between the two (IOM, 2009a; SACHRP, 2005). For example, the Common Rule allows patients to provide consent for future research to be performed with the biosamples collected from the patient in a clinical trial, whereas the Privacy Rule does not. In addition, the two rules define “deidentified data” quite differently.
At a national conference on alternative IRB models in 2006, participants called for harmonization among federal laws and regulations and “recommended that regulatory agencies give clear signals that alternative forms of review are acceptable.” The executive summary of that conference also called for HHS to consider policies akin to those of FDA, which link regulatory liability to the organization responsible for the alleged problem, as opposed to the current OHRP policy that holds institutions responsible for all compliance issues that occur under their Federalwide Assurance, regardless of where the alleged violation occurred (AAMC, 2006). Alternatively, OHRP could issue a statement that “when institutions use due diligence in selecting an external IRB, they will not be held responsible for that IRB’s decisions” (AAMC, 2006).
OHRP is considering a rule that would “enable OHRP to hold IRBs and the institutions or organizations operating the IRBs directly accountable for meeting certain regulatory requirements.” Such a rule could encourage institutions to rely on CIRBs or other IRBs operated by another institution or organization, when appropriate, which OHRP believes would reduce the administrative burdens of ensuring adequate protection of human subjects in research without diminishing that protection (OHRP, 2009). SACHRP also believes that OHRP “should continue its efforts to develop guidance on IRB models,” including model agreements for use by institutions considering a CIRB review (SACHRP, 2008). In a letter to the HHS secretary, SACHRP requested that the secretary encourage the NIH director “to explore more widespread use of collaborative IRB models, including expanded use of Centralized IRBs for NIH-sponsored
research” (SACHRP, 2008). The NCI director’s Consumer Liaison Group also believes that OHRP should provide more guidance that enhances the acceptance of CIRBs (Director’s Consumer Liaison Group, 2008). The committee concurs. The committee thus recommends that OHRP develop guidance that clearly establishes the accountability of the NCI CIRB to encourage its wider use and acceptance by local institutions.
Two HHS regulations14 require researchers supported by HHS funding to obtain and document informed consent from patients participating in their clinical trials. In addition, researchers who want to use and report on protected health information may have to obtain HIPAA authorization from research subjects.15 Both consent processes are designed to “inform potential subjects about the research, and the use and sharing of their health information in terms that the patients can understand” (AHRQ, 2009).
Despite the requirement that informed-consent forms (ICFs) be written in “understandable” language,16 one study of 107 oncology ICFs found that all of them were written above the recommended eighth-grade reading level (Sharp, 2004), which is the reading level of nearly half of the U.S. population (Kirsch et al., 2002). One study showed that even IRBs failed to meet their own standards for readability (Paasche-Orlow et al., 2003). Several studies confirm that research subjects often do not understand fundamental concepts required for their participation in clinical trials (Coletti et al., 2003; Joffe et al., 2001; Sudore et al., 2006).
The HIPAA authorization form is also typically written at a higher reading level than most Americans have attained. One study assessed the readability of HIPAA authorization forms from the 125 academic medical centers that receive the most funding from NIH and found that the median reading level for the authorization templates was the 13th grade (i.e., freshman year in college) (Breese et al., 2004). A similar study found that NIH’s model authorization form was written at a 12th-grade reading level (Nosowsky and Giordano, 2006). The authors concluded that many research participants cannot understand the forms that they are required to sign.
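The grade levels cited in these studies come from standard readability formulas. As a minimal illustration (the function name and the example counts below are hypothetical, not drawn from the studies themselves), the widely used Flesch-Kincaid grade level can be computed from three raw counts of a document's text:

```python
def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level (U.S. school grade) from raw text counts.

    Formula: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    words_per_sentence = words / sentences
    syllables_per_word = syllables / words
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Short sentences with mostly one-syllable words score in the early grades;
# long sentences dense with polysyllabic words score at college level.
print(round(fk_grade(words=100, sentences=10, syllables=130), 2))  # simple prose
print(round(fk_grade(words=100, sentences=4, syllables=170), 2))   # dense legal prose
```

The formula makes the tradeoff visible: shortening sentences and replacing polysyllabic terms are the two levers that bring a consent form down toward the recommended eighth-grade level.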
Not only are HIPAA authorization forms and ICFs written at a higher level of reading than most of the public has attained, but they also are often too lengthy, which is a burden for both the research subjects who need to read and understand them and the physicians who need to spend
extra time explaining them to their patients. Studies show that the length of informed-consent documents has increased over time (LoVerde et al., 1989; Tarnowski et al., 1990). The HIPAA authorization form alone adds an average of two pages of additional material to the ICF. At a recent IOM workshop, one clinical researcher noted that because of the increasing complexity of cancer clinical trials, his average ICF is between 30 and 35 pages long, which is too long for patients to digest unless medical staff devote a considerable amount of time to summarizing it verbally (IOM, 2009c). The extra time required to do this, he pointed out, can deter physicians from engaging in clinical research.
Research confirms that ICFs that are too long and complex hinder patients’ understanding and often prevent patients from reading the forms completely (Dresden and Levitt, 2001; Sharp, 2004). This can hamper efforts to adequately protect research subjects, as studies involving greater risk tend to have longer and more complex ICFs (Dresden and Levitt, 2001). Several researchers have tried to address the shortcomings of ICFs by creating simpler or shorter forms that are easier to read. Most of those studies have found that these simpler forms foster better comprehension among potential research participants (Campbell et al., 2004; Dresden and Levitt, 2001; Epstein and Lasagna, 1969; Kaufer et al., 1983; Tait et al., 2005; Young et al., 1990). One particularly telling study found an inverse relationship between the length and degree of detail of an ICF and the study subjects’ comprehension of the form (Epstein and Lasagna, 1969). Those subjects who received the shorter, less detailed form scored the highest on comprehension. As an AAMC report concluded, “This study reinforced the concept that ICFs are most comprehensible when they are as concise as possible” (AAMC, 2007a).
Several organizations have tried to remedy the ICF comprehension problem by creating guidelines and templates that call for ICFs to be more concise and written in simpler language. These organizations include the Agency for Healthcare Research and Quality (AHRQ), AAMC, the Coalition of Cancer Cooperative Groups, the Children’s Oncology Group (COG), NCI, and the Group Health Center for Health Studies (Table 3-3). In addition, participants at a recent IOM workshop suggested providing a short form that can be layered on top of a long, complicated consent form (IOM, 2009c). The short form would state in a few words what is going to happen to the patient and then provide links to the rest of the document for those who want more detail. AAMC is trying to develop such a short-form approach to consent forms. SACHRP is also examining ways to improve ICFs and the consent process (HHS, 2007).
Current regulations and guidance (HHS, 2009), however, do not allow the use of a shortened summary document to obtain informed consent. The committee concluded that guidance from OHRP and OCR to allow simplified summaries of consent forms would improve patient communication and decision making.

TABLE 3-3 Examples of Past and Ongoing Efforts to Simplify Informed-Consent Documents and Improve the Informed-Consent Process

Organization: Activity to Simplify Informed Consent

AHRQ: Developed sample documents and guidance for the informed-consent process

AAMC: Has an ongoing project to promote universal use of short and simple informed-consent documents

SACHRP: Has an ongoing panel that will make recommendations on how to improve the informed-consent form and process

Group Health Center for Health Studies: Developed a “readability tool kit” that includes template language for common topics in informed-consent forms

Coalition of Cancer Cooperative Groups: Published About Clinical Trials: Informed Consent

NCI: Developed informed-consent document templates with simple language for Phase I, II, and III trials; published Guide to Understanding Informed Consent; conducted a joint project with the Office for Protection from Research Risks (now OHRP) to simplify informed-consent forms

SOURCES: AAMC, 2007b; AHRQ, 2009; caBIG, 2007; CCCG, 2007; Ridpath et al., 2007.
FDA Oversight of Cancer Clinical Trials
Part of FDA’s mission is to ensure the safety and effectiveness of therapeutics and diagnostics on the market. To achieve this mission, FDA reviews clinical trial data on therapeutic agents and diagnostics that sponsors provide and then approves or clears those products that meet the agency’s standards for safety and efficacy. Before the launch of some clinical trials, FDA may also review and provide advice about a study’s protocol or a sponsor’s data collection proposal, including annotated case report forms (FDA, 2001).
According to Margaret Mooney, chief of CTEP’s Clinical Investigations Branch, initiatives undertaken in response to the recommendations in the CTWG report aim to increase cooperation and communication among NCI, FDA, and the pharmaceutical industry (CTAC, 2008). Cooperative Group Phase III trial concepts that are specifically identified as supporting a licensing indication are forwarded to FDA at the concept stage, and some efforts have been made to integrate and coordinate special protocol assessments with the CTEP review processes. However, other concepts for Phase
III trials with INDs or commercial agents are also forwarded to FDA for informational purposes, even if the study has not been specifically identified as supporting a potential licensing indication. The intent is to allow FDA to provide input at the agency’s discretion, but FDA does not have the staff or resources to examine proposals for trials that may or may not have registration implications. The committee recommends that NCI do more to coordinate reviews and oversight with FDA in trials involving an IND or investigational device exemption to eliminate iterative review steps.
FDA is a complex agency comprising five product centers and many offices. More than one FDA unit is often involved in reviewing Cooperative Group cancer clinical trials. Although the Office of Oncology Drug Products was recently established within the Center for Drug Evaluation and Research to review most oncology drugs, some cancer therapeutics and diagnostics may be reviewed by several offices of the Center for Biologics Evaluation and Research,17 or the Office of In Vitro Diagnostic Device Evaluation and Safety within the Center for Devices and Radiological Health (FDA, 2009).
Because more than one center may have jurisdiction over an oncology product, there may be conflicting regulatory expectations. In addition, no single FDA center or office offers the full range of specialized oncologic expertise needed to review all types of cancer therapeutics and diagnostics, including biologics (such as monoclonal antibody-based products), standard chemotherapies, genetic tests and other in vitro diagnostics, or imaging modalities. The Office of Combination Products is charged with facilitating reviews that involve more than one center. However, that office is not oncology specific, and more than coordinated review is needed. A coordinated cancer program at FDA would bring together relevant areas of science and regulation to both advise sponsors and enable the efficient review of applications that involve either combinations of agents (some of which might not have independent activity, as described in Chapter 2) or drugs that are developed together with diagnostic devices to facilitate their use. Such a program could provide more consistency and expertise in the review of oncology products (Epstein, 2009). FDA has committed in principle to the formation of such a cancer program to “facilitate cross agency expert consultation,” but it has yet to follow through on that commitment (FDA, 2004). A major challenge of placing responsibility for all aspects of the regulation of cancer products in one place within FDA is that the many types of expertise needed, which currently reside in different parts of FDA, would have to be duplicated in the new oversight unit, possibly requiring substantial additional resources for FDA. Nonetheless, the committee recommends that FDA establish a coordinated Cancer Program across its centers that regulate oncology products to improve both the efficiency and the consistency of regulatory standards for the review of oncology products.
FDA Data Requirements
To approve a product, FDA requires data that indicate its effectiveness for a specific indication, as well as data on adverse effects. The types and amounts of data required, however, are not specified in detail in FDA guidance because expectations may vary according to what is already known about a drug and how different a proposed new use of the drug is. A guidance document developed in 2001 noted that fewer data may be necessary if extensive safety data on a drug already exist because it has been on the market for another indication, if a drug has been tested in other trials with similar patient populations, or if the proposed new use of the drug is similar to already approved uses (FDA, 2001). However, that guidance document has had little influence on FDA’s data requirements.
The lack of a standard required data set leads to inconsistency in the data collected for cancer trials that can affect the quality of the study and limit cross-study comparisons (Curt, 2009; Epstein, 2009; McClellan and Benner, 2009). For example, studies on the collection of data on adverse events (AEs) find that the rates of reported AEs depend on how the information is gathered. Patients reported more AEs if they received a checklist of AEs rather than being asked open-ended questions about AEs (Bent et al., 2006). Other factors that may affect the reporting of adverse events include the frequency of follow-up visits (Ioannidis et al., 2006).
The validity of progression-free survival as an indicator of treatment effectiveness can also vary according to the frequency of assessment and can be further confounded by the variability of tumor measurements, as noted in Chapter 2, particularly in unblinded trials (Amit et al., 2009). The use of blinded independent central review (BICR) of imaging to assess tumor progression in randomized clinical trials has been advocated to control the bias that might result from errors in progression assessments. A review of the literature for studies of breast, colorectal, lung, and renal cell cancer using retrospective BICR found high rates of discrepancy between the local and the central reviews, but these differences did not lead to different conclusions about treatment efficacy. The authors concluded that although BICR reduces some potential biases, it does not remove all biases from evaluations of treatment effectiveness. Furthermore, they found that BICRs,
as typically conducted, may introduce bias because of informative censoring,18 which results from having to censor unconfirmed locally determined progressions (Dodd et al., 2008).
Although the data requirements are not detailed in guidance, industry sponsors often expect the collection of more data than may be needed for FDA approval so that they “cover all bases.” There is an inherent tradeoff, however, in determining how much data to collect in a trial. Although investigators intuitively wish to collect as much data as possible, there is a risk that the magnitude of data collection may compromise the overall quality of the data by creating an enormous burden on investigators and clinical study sites (Schilsky et al., 2008). The collection of excess data increases the cost and duration of clinical trials, and the administrative burden of both collecting the data and ensuring its quality contributes to the reluctance of investigators to participate in trials and enroll patients. The extensive collection of unused data can be detrimental to the overall quality of the data and the subsequent data analysis (Abrams et al., 2009). For example, all data collected must be quality controlled and edited, if necessary, so the collection of nonessential data is a drain on limited resources. In a poll of several Cooperative Group and industry trial sites, more than 85 percent noted that data optimization would moderately or significantly ease demands on trial site resources, allowing the collection of higher-quality, targeted data and greater participation in clinical trials (Abrams et al., 2009). The committee recommends that FDA update its regulatory guidelines for the minimum data required to establish the safety and efficacy of experimental therapies (including combinations of products).
Standards for data collection that differ according to whether the clinical trial is for a primary or a secondary indication could reduce the collection of excess data and improve the quality of the data collected, studies suggest. A retrospective review of the data sets from completed Phase III cancer trials, many of which were used for FDA supplemental approvals, found that gathering toxicity data for a subsample of the participants in a trial for a drug for which a substantial toxicity profile already exists led to the same conclusions that were reached in the original study that gathered this information for all patients enrolled (Abrams et al., 2009).
A similar retrospective analysis of the Avastin Non-Small Cell Lung Cancer Trial found that if toxicity data on Grade 1 and 2 AEs were collected from a subset of 200 patients per arm rather than from all 650 trial participants, there would have been a time savings of 2,500 hours and no
important AE in those categories would have been missed. The collection of Grade 3 and 4 AE data from a subset of such patients found that those AEs that occurred at least 5 percent more frequently in the study drug arm were almost always seen in the smaller subset, whereas those AEs that occurred at an increased frequency of 2 percent were missed about half the time (Schilsky et al., 2008).
Whether such subset analyses will be adequate depends on what is already known about the safety of the drug and is likely to be sufficient for many clinical trials undertaken for supplemental indications. At a recent IOM workshop, Richard Pazdur of FDA concurred that a clearer definition of an optimal safety database would be helpful (IOM, 2009c), and FDA is currently developing new guidance material on this issue.
A panel of experts convened at the Brookings Institution concluded, “Clinical trials could be designed and conducted more efficiently, and the regulatory review process could be more uniform and rapid if a set of data collection and reporting standards were consistently applied to clinical trials conducted by industry, academia, and the NCI’s Cooperative Groups” (McClellan and Benner, 2009; Schilsky et al., 2008). That panel suggested that a core set of data elements be identified, along with how those data elements need to be modified for certain situations. Ideally, such standards would be recognized by regulatory agencies worldwide. Increased investment in regulatory science studies that assess how best to craft regulations on the basis of the scientific evidence, as recently advocated by the FDA commissioner, might aid with the determination of such data standards (Christel, 2009; Grant, 2009).
OPERATIONAL INEFFICIENCIES IN TRIAL DEVELOPMENT, LAUNCH, AND CONDUCT
The complexity of the collaborative process and multi-institutional oversight of Cooperative Groups has fostered inefficiencies and long start-up times for clinical trials, with many investigators raising concerns about burdensome bureaucratic procedures that create undue delays (NCI, 2005a). To provide insight into the organizational challenges in the development of clinical trials, several studies have been undertaken to document all the steps and time required to launch Cooperative Group clinical trials opened by the Cancer and Leukemia Group B (CALGB) (Dilts et al., 2006) and the Eastern Cooperative Oncology Group (Dilts et al., 2008), as well as the steps and timing required for CTEP and the CIRB to evaluate and approve Phase III clinical trials (Dilts et al., 2009).
Many of the steps in the startup process are redundant and do not improve the value of the study, according to these analyses (Dilts et al., 2006, 2008, 2009). The problem is not how much time each step takes but
how many repetitive steps with looping there are, such that the same person or institution keeps reviewing the same study after minor alterations that other reviewers required were made. These repetitive steps create an inefficient system; the process would be more efficient if all parties (e.g., FDA and IRBs) discussed a proposed trial at the same time. Often, there is also “scope creep,” which occurs when one group or organization expands the scope of its authority or power beyond what was originally intended, triggering re-reviews by the other review bodies. Furthermore, minor changes often do not significantly improve the clinical trial yet trigger another lengthy series of reviews. Contributing to the inordinate amount of time required to develop a clinical trial is the fact that many of the steps are conducted serially rather than in parallel.
Although synchronicity is an issue for any clinical trial, it is exacerbated in Cooperative Group trials because of the need to deal with multiple external agencies (Dilts et al., 2008). Startup times for Phase III Cooperative Group trials ranged from 1.25 to almost 7 years (Dilts and Sandler, 2006; Dilts et al., 2006, 2008), during which time the science can change tremendously. Because of these scientific developments, the protocol may no longer be relevant when the trial is launched. New scientific findings might also require that additional changes be made to the protocol, and these changes, in turn, require additional reviews. The length of the development process for a clinical trial also appears to affect the accrual success of the trial. The longer that trials take to be developed, the less likely it is that they will meet their minimum accrual goals (Cheng et al., 2009) (Figure 3-2). The ultimate inefficiency is a clinical trial that is never completed because of insufficient patient accrual, and this happens far too often. One analysis19 found that 40 percent of CTEP-approved trials (Phase I-III) failed to achieve minimum accrual goals. A total of 8,723 patients (17 percent of the accruals) were accrued to studies that were unable to achieve the projected minimum accrual goal (Cheng et al., 2009). Among the Phase III trials, 63.9 percent (n = 39) did not achieve accrual success, and a large number (49.2 percent, n = 30) closed to accrual with enrollments of less than 25 percent of the originally stated accrual goal. It should also be noted, however, that some trials close early because of unanticipated side effects or because the results from another trial unexpectedly make it no longer ethical to continue the trial. Another study, a survey of the study chairs and lead statisticians for 248 Phase III trials by five national Cooperative Groups open in 1993-2002 (response rate, 62 percent), found a 65 percent accrual success rate (Schroen et al., 2009). The findings of these studies are congruent with those of Ramsey and Scoggins (2008), who reported that 59 percent of the clinical trials performed by NCI-supported clinical trials networks had been published during a similar time period.
A computer model that was developed on the basis of those analyses found that if individual Cooperative Groups or CTEP each tried singly to improve its own processes, each would cut only a few days off the trial development timeline, but that if they worked together to improve the entire process, the timeline could be substantially shortened. For example, a process map
for CALGB showed that 63 percent of the decision-making steps reside with multiple organizations and agencies, none of which is under the direct control of the Cooperative Group (Dilts et al., 2006).
NCI funded those analyses in response to the CTWG report (NCI, 2005b). NCI also established the Operational Efficiency Working Group (OEWG), which was charged with identifying ways to reduce the study activation time for Cooperative Group and Cancer Center trials by 50 percent. That Group established specific, measurable goals that the IOM committee endorses. The OEWG’s report recommends strategies and implementation plans that aim to reduce the time from submission of the trial protocol to final approval of the protocol to 300 work days for Phase III trials (Figure 3-3) and 210 work days for Phase II trials (Doroshow and Hortobagyi, 2009). Those recommendations include staffing changes; more coordinated, parallel reviews; and improved project management and protocol tracking (see also Appendix A for more details). The recommendations also include time-date goals that specify, for example, that a clinical trial must open and accrue patients within 18 calendar months for Phase II trials or 2 years for Phase III trials or it will be closed (although some exceptions may be necessary, for example, in the case of rare diseases). The IOM committee concurs with the findings of the OEWG and recommends that NCI work with the extramural community to coordinate and streamline the protocol development process, as recommended by the OEWG.
Potential Ways to Improve Trial Quality and Efficiencies
Reports indicate that the review of operational data on the development of clinical trials can reveal steps that are redundant and do not add value to the resulting protocol, and could thus be eliminated (Kurzrock et al., 2009; McJoynt et al., 2009). For example, when the Mayo Clinic reviewed the steps and time taken from receipt of a new trial protocol through submission to an approving authority such as NCI or the IRB, it discovered numerous redundant review steps, as well as delays caused by waiting for e-mail responses. It then eliminated steps that added no value and provided deadlines for responding to e-mails. A review of 64 protocols submitted since the implementation of this streamlining process revealed that the mean turnaround time for both internally and externally authored protocols dropped by about 60 percent (McJoynt et al., 2009). The M.D. Anderson Cancer Center used a similar approach to streamline the steps needed to initiate Phase I trials, once FDA approved the IND. In one recent Phase I trial at the center, the study was activated and the first patient enrolled 46 days after completion of the final study protocol and about 48 hours after final FDA approval of the IND, reducing the overall timeline by about 3 months (Kurzrock et al., 2009). Real-time electronic tracking of the steps in trial protocol development, with the same protocol tracking number for each review step, would help with these evaluations and enable problems to be detected more quickly as trial development proceeds (Steensma, 2009).
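The real-time electronic tracking described above amounts to keeping a timestamped event log per protocol and computing elapsed time between review steps. A minimal sketch follows; the event log, step names, protocol identifier, and 90-day bottleneck threshold are all hypothetical, not NCI's actual tracking system:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (protocol tracking number, review step, completion date).
EVENTS = [
    ("CALGB-001", "concept approved", "2009-01-05"),
    ("CALGB-001", "protocol drafted", "2009-03-20"),
    ("CALGB-001", "CTEP review done", "2009-07-01"),
    ("CALGB-001", "IRB approval", "2009-11-15"),
]

def step_durations(events):
    """Elapsed days between consecutive completed steps, per protocol."""
    by_protocol = defaultdict(list)
    for pid, step, date in events:
        by_protocol[pid].append((datetime.fromisoformat(date), step))
    durations = {}
    for pid, stamps in by_protocol.items():
        stamps.sort()  # chronological order
        durations[pid] = [
            (later_step, (later - earlier).days)
            for (earlier, _), (later, later_step) in zip(stamps, stamps[1:])
        ]
    return durations

def bottlenecks(events, threshold_days=90):
    """Flag steps whose elapsed time exceeds the threshold."""
    return {
        pid: [(step, days) for step, days in steps if days > threshold_days]
        for pid, steps in step_durations(events).items()
    }
```

Using one tracking number per protocol across every review body, as suggested above, is what makes this kind of cross-organization timing analysis possible; each reviewing office only needs to append events to a shared log.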
The creation of standard operational metrics and best practices for the clinical trial development process for use across institutions could further facilitate improvements in the process. The operational processes used to conduct clinical trials are idiosyncratic to individual institutions or Cooperative Groups, with little sharing of best practices or lessons learned. Although Good Clinical Practice guidelines (ICH, 1996) provide an international ethical and scientific quality standard for designing, conducting, recording, and reporting on the results of clinical trials that involve the participation of human subjects, there is currently no mechanism for the systematic identification of best management and administrative practices that can be used as benchmarks by a clinical trials office in a Cancer Center or a Cooperative Group, nor can such best practices be used to aid up-and-coming Cancer Centers. Furthermore, there are few standard processes or metrics of what constitutes operational quality in the development or management of clinical trials. Organizations need to know how they are performing, independently over time and in comparison with their peer institutions. Thus, the operational performance metrics used to evaluate Cancer Centers and Cooperative Groups need to be enhanced and redefined to include metrics for the measurement of quality, outcomes, and timing.
The committee recommends that NCI work with governmental and nongovernmental agencies with relevant expertise to facilitate the identification of best practices in the management of clinical research logistics and develop, publish, and use performance, process, and timing standards and metrics to assess the efficiency and operational quality of a clinical trial.
There is also a need to make interagency processes more efficient. For example, simplifying and harmonizing regulatory methods (such as the reporting of AEs), to the extent possible within the constraints of the responsibilities of the different agencies involved, could be beneficial. Inefficiencies could also be reduced by standardizing the information technology infrastructure, as well as data elements, collection, and reporting, as noted above in the section on trial oversight.
Some steps are already being taken to streamline reviews. For example, NCI recently created a parallel approval process for initial IRB review for adult clinical trials. Once CTEP approves a study protocol, the CIRB review can be done concurrently while the Cooperative Group Operations Office makes final study arrangements and submits the protocol to local IRBs that do not use the CIRB. In addition, no post-CIRB review is required from CTEP to activate the study. As a result, final approval of the initial review by CTEP could potentially be received 8 to 12 weeks earlier, and local IRBs that are not CIRB members should be able to begin their reviews sooner (Abrams, 2008a).
However, there is a need for bolder changes. For example, some consolidation of the Cooperative Groups and of common activities could increase operational efficiencies and conserve resources, ease the workloads of the Cooperative Groups, and offer more consistency to providers enrolling patients in trials launched by different Cooperative Groups. Each Cooperative Group devotes significant resources to support similar administrative structures and activities in what is defined in the operations management literature as “back-office operations” (Chase and Tansik, 1983). Back-office operations, such as information technology support and payroll systems, primarily occur outside the view of customers and do not differentiate the product or the service provided to the customer, so they have been the focus of consolidation in many industries and other organizations, including banking, nonprofit organizations, and governmental agencies (Dare and Reeler, 2005; Davis, 2009; Grosser, 2008; Kraus and Marjanovic, 1995; Lacity et al., 2003; Leith, 2002; Rhoades, 1998; Shortell et al., 1998; Taheri et al., 2000).
In clinical trials, back-office operations include activities such as data collection and management, data queries and reviews to ensure that the data collected are complete and accurate, patient registration, audit functions, processing of case report forms, training of clinical research associates, image storage and retrieval, drug distribution, and credentialing of
sites. Although the ways in which the Cooperative Groups accomplish these functions vary, there is little technical rationale for why they must be unique to the scientific focus of each Group. The consolidation of offices and personnel to conduct these information-based activities across all the Cooperative Groups should help to streamline operations, reduce redundancy, lead to greater consistency, and conserve resources. The committee recommends that NCI require and facilitate the consolidation of these back-office administration and data management operations of the Cooperative Groups. It will be essential, however, to maintain high-quality service and a high level of responsiveness to the principal investigators and Cooperative Groups.
In addition, some consolidation of the current front-office operations of the Cooperative Groups, which primarily entail the Groups’ committees that generate and vet potential concepts for clinical trials, as well as the experts responsible for statistical design and analysis, would further reduce redundancy in the Program, enable the pooling of resources, and reduce competition for enrollment in trials on the basis of Group-specific priorities. The committee thus recommends that NCI facilitate some consolidation of the Cooperative Group front-office operations to conserve resources while still maintaining rigorous competition for trial ideas.
One possible way to reorganize the Group front offices would be by disease type. For example, there could be four multidisciplinary Groups dedicated to adult cancers, with the task of performing trials for different diseases and with true cooperation occurring among all the Groups. Each Group could perhaps have four disease-specific committees to ensure broad coverage and some overlap for each disease. In other words, two Groups would undertake trials for lung cancer, two for colon cancer, two for breast cancer, two for head and neck cancer, two for hematology, and so on. One way to achieve consolidation would be to alter the peer-review process for the Cooperative Groups to focus on the accomplishments of disease committees. The committee recommends that the Cooperative Groups be reviewed and ranked using defined metrics on a similar timetable and that funding be linked to the review scores. The key planning and scientific evaluations should be at the disease site committee level, with a focus on the quality and success of the clinical trial concepts developed and on the committee’s record of developing new investigators. Committees that do well in review should be funded, and committees with low scores should be eliminated. Committees should be organized with a multidisciplinary focus on disease sites, and Group leaders should consolidate disease site committees from different Groups to strengthen their productivity and review scores. This approach would ensure that only the most innovative and successful disease site committees would thrive and expand their membership. The logical extension of the proposed consolidations will be a reduction in the number
of Cooperative Groups. For example, Groups focused on a single disease site or modality would likely need to merge with multidisciplinary Groups under this system. It will, however, be important to preserve a sense of community among the investigators focused on a particular disease.
The recent consolidation of the four Cooperative Groups focused on pediatric cancers into a single new Children’s Oncology Group (COG) is informative in this regard (Box 3-2). The goal of that merger was to consolidate talent and resources to minimize duplication, make better use of dwindling funds, and increase the efficiencies of conducting clinical trials (Benowitz, 2000; Murphy, 2009). Although concerns were raised about creating a scientific monopoly that would stifle innovation and deter involvement by young investigators, who would have fewer opportunities for leadership and recognition (Benowitz, 2000), according to current Group leadership, there is still competition at the international level (Reaman, 2009). In addition, total accruals have increased, and the national childhood cancer mortality rate continues to fall. To nurture young investigators, COG has developed a formal mentoring program, and each study must have an early-career investigator as the chair, with a more seasoned investigator serving as the cochair or vice chair. Another recent example of program consolidation with the goal of improving the design, conduct, and support of clinical studies involving large numbers of patients from multiple centers is the merger of the National Marrow Donor Program with the Medical College of Wisconsin’s International Bone Marrow Transplant Registry and Autologous Blood and Marrow Transplant Registry to form the Center for International Blood and Marrow Transplant Research (CIBMTR, 2008).
Although some could argue that consolidation is unnecessary because it is now possible for members of one Group to enroll patients in trials undertaken by another Group via NCI’s Cancer Trials Support Unit (CTSU), and cross-group accruals have increased as a result (personal communication, Margaret Mooney, NCI, November 2009), current Cooperative Group peer-review guidelines and priorities still favor the recruitment of patients into trials that originated within that Group (NCI, 2006). Furthermore, the CTSU does not address the issue of redundancy in the activities supported by the front offices of the Cooperative Groups.
BOX 3-2
Overview of the Creation of the Children’s Oncology Group

The first pediatric cancer clinical trials group was the Children’s Cancer Group (CCG), one of the original Groups formed in the 1950s, previously known as CCGA or Group A, to distinguish it from Group B, the forerunner of Cancer and Leukemia Group B (CALGB). The Southwest Cancer Chemotherapy Study Group, the forerunner of the Southwest Oncology Group (SWOG), was originally organized as a pediatric oncology group in 1956 and only later expanded to include evaluation of adult malignancies. In 1979–1980, the pediatric division of SWOG elected to separate and seek independent status, and thus, the Pediatric Oncology Group (POG) was formed. POG grew to be virtually equal in size to CCG in terms of institutional members and patient accruals. Both POG and CCG were multidisciplinary, multidisease Groups. There were also two single-disease pediatric cancer Cooperative Groups, the National Wilms’ Tumor Study Group and the Intergroup Rhabdomyosarcoma Study Group, whose membership comprised the investigators and member institutions of both POG and CCG, although each maintained a separate Cooperative Group statistical center, had its own chair, and underwent separate peer review. By the late 1990s, the four pediatric Groups had a long history and tradition of both friendly competition and close collaboration.

In 1998, the leadership of all four of the pediatric Groups, including the chairs, vice chairs, statisticians, and Cooperative Group administrators, gathered to discuss ways to improve the efficiency of the intergroup process. There had been long-standing frustration with the cumbersome intergroup process, and a number of ongoing changes led to the decision to eliminate the intergroup mechanism entirely and merge into one Group. First, because of the significant success with the treatment of all forms of childhood cancer, survival rates had successively improved, such that larger and larger numbers of patients needed to be enrolled in randomized clinical trials to achieve the study objective of demonstrating significant improvements in overall results within a reasonable time frame. Given the relative rarity of pediatric cancers in general and the increasing sophistication of the stratification of trials into smaller and smaller risk-adapted subgroups, it had become necessary to increase collaboration to accrue sufficient numbers of patients. By merging, the Groups would provide seamless geographic coverage of North America, which also enabled epidemiological studies not possible as separate entities, including the formation of a national children’s cancer registry.

Second, at that time, NCI was requiring all of the cancer Cooperative Groups to make extensive changes to their informatics infrastructures: to adopt common toxicity codes and data dictionaries, to streamline and harmonize data reporting, and to migrate from paper forms to electronic forms. This work was both onerous and expensive, and the Group leaders thought it would be better to work together to accomplish all the upgrades to the informatics systems. Third, the Groups hoped that providing a single source for pediatric clinical trials, a single point of service, and the promise of increased accruals and more rapid completion of Phase I and II trials would improve interactions with the pharmaceutical industry, which was necessary to gain access to promising new agents for testing. This process of working with industry was inherently challenging because the pharmaceutical industry had relatively little interest in developing and licensing drugs for childhood cancers, given the small market. Fourth, the Group leaders believed that by working together, they could articulate a stronger case to the public for pediatric cancer clinical trials. Parents, the public, and philanthropic foundations and individuals were often confused about why there were multiple Groups and what the differences among them were.

The merger took 3 years and proved to be very challenging, with perhaps the biggest challenge being the merging of the very different cultures of the Groups. A transition team was created, consisting of the Group chairs, vice chairs/executive officers, administrators, and Group statisticians; the heads of the committees in surgery, pathology, radiation therapy, and nursing; and clinical research associates. The merger was labor-intensive, entailing a memorandum of understanding, an interim governing council, a new constitution, transitional committees for every disease and discipline, a new membership committee to review the performance and qualifications of each institutional member, new rosters, greatly increased communications, and many additional interim meetings. NCI provided some additional funding to cover some of the travel costs associated with the interim meetings, but no extra staff was hired, and it was difficult to retain valued staff who were concerned that their jobs would be eliminated by the merger (many ultimately were).

Reaching consensus on Group data management and statistics was a major challenge. The transition team sought external assessment and guidance, and the result was a distributed network of statistical offices and staff. Another major challenge was the merging of disease-specific committees, which had historically been competitive, often on the basis of competing scientific strategies developed over the course of serial studies. Of necessity, compromises were reached, and some stakeholders were not satisfied with the outcome. A great deal of work was also involved in revising the budgeting for the Group U10 grants during the transition, and an additional challenge entailed merging the foundations that CCG and POG had established for private funding, which had very different structures for their 501(c)(3) corporations. POG’s foundation was very simple, with no additional paid staff, but CCG had established a corporation with a fairly large staff, the National Children’s Cancer Foundation (NCCF), to act as its grantee organization and to engage in active fundraising from the public. Thus, POG had to merge with both NCCF and CCG.

The resultant Group, the Children’s Oncology Group (COG), is now the world’s largest childhood cancer research organization. It united with NCCF under an umbrella 501(c)(3) to form CureSearch, with offices in Arcadia, California; Gainesville, Florida; Omaha, Nebraska; and Bethesda, Maryland, and 235 member institutions throughout the United States and Canada, plus five other countries. COG now has more than 5,000 individual members.

SOURCE: Murphy, 2009.

Other Informative Models and Ongoing Activities

Several organizations may serve as models for the efficient conduct of clinical trials. One is the Multiple Myeloma Research Consortium (MMRC), which integrates the research efforts of 15 member institutions and whose mission is to accelerate the development of novel and combination treatments for multiple myeloma by facilitating clinical trials and correlative studies (MMRC, 2009). As described at an IOM workshop, MMRC has assessed and devised solutions to many of the inefficiencies commonly encountered in the clinical trials process (IOM, 2009b). MMRC has also incorporated metrics and reward systems into its clinical research endeavors to improve its processes. For example, a scorecard tracks the time required to open and accrue clinical trials. It also tracks the level of engagement of the principal investigators, which is determined by monitoring their participation in monthly calls and face-to-face meetings and how often they bring new ideas to the consortium. The centers performing in the top one-third receive funding to cover the full salary of a clinical research coordinator, who provides dedicated oversight of all MMRC clinical trials (100 percent of a full-time equivalent [FTE]). The second tier receives 50 percent of an FTE, and the third tier receives 25 percent of an FTE (IOM, 2009b). After the release of the first scorecard results at the end of 2007, 100 percent of the principal investigators participated in the monthly call for the first time. The speed and efficiency of its clinical trials are also priorities, and MMRC sets aggressive goals in this regard: only 3 months are allotted for protocol development or IRB approval, 2 months for contracting, and 8 to 14 months for patient accrual (IOM, 2009b).
Other informative examples include the Center for International Blood and Marrow Transplant Research, mentioned in the previous section, and the HIV Prevention Trials Network, a worldwide collaborative clinical trials network that develops and tests the safety and efficacy of primarily nonvaccine interventions designed to prevent the transmission of HIV.
Several initiatives and centers are dedicated to studying and improving the efficiencies of clinical trials (Box 3-3).
BOX 3-3
Initiatives to Improve the Efficiency of Clinical Trials for Cancer and Beyond

AACI Clinical Research Working Group and Clinical Research Initiative

To support the improved operation of clinical trials and expand patient enrollment, the Association of American Cancer Institutes (AACI) has launched a communications forum for administrative leaders and managers of cancer center clinical research facilities across the AACI network. The forum, called the Clinical Research Working Group, will examine the systems and procedures that clinical trials offices use to perform management and oversight functions and compare the office metrics used for clinical trials benchmarking, evaluation, and best practices. The forum aims to promote efficient use of resources and personnel. AACI has also established a network for cancer center clinical research leaders called the AACI Clinical Research Initiative (CRI). The AACI CRI will examine and share best practices that promote the efficient operation of cancer center clinical research facilities and will leverage the ability of the AACI cancer center network to advocate for improvement in the national clinical trials enterprise (http://www.aaci-cancer.org/).

FDA Clinical Trials Transformation Initiative

The recently created FDA Clinical Trials Transformation Initiative (CTTI) brings together all interested stakeholders to identify practices that, through their broad adoption, will increase the quality and efficiency of clinical trials. CTTI is currently assessing ways to improve the system of reporting and interpreting serious adverse events. In addition, CTTI’s Clinical Trial Monitoring project aims to identify best practices and to provide sensible criteria for effective monitoring while eliminating practices that may not be of value for ensuring reliable and informative trial results or human subjects protection.

Clinical and Translational Science Awards Network

The Clinical and Translational Science Awards (CTSA) program, led by the National Center for Research Resources (part of the National Institutes of Health), creates a definable academic home for clinical and translational research. CTSA institutions work together as a national consortium with the goal of improving human health by transforming the research and training environment to enhance the efficiency and quality of clinical and translational research across the country. The consortium currently includes 46 medical research institutions located throughout the nation; when fully implemented by 2012, about 60 institutions will be linked together to strengthen the discipline of clinical and translational science. To set a national research agenda, the CTSA consortium established five overarching strategic goals to guide consortium-wide activities: (1) Build National Clinical and Translational Research Capability; (2) Provide Training and Improve the Career Development of Clinical and Translational Scientists; (3) Enhance Consortium-Wide Collaborations; (4) Improve the Health of Communities and the Nation; and (5) Advance T1 Translational Research.

Tufts Center for the Study of Drug Development

The Tufts Center for the Study of Drug Development (Tufts CSDD) has a mission to develop strategic information to help drug developers, regulators, and policy makers improve the quality and efficiency of pharmaceutical development, review, and utilization. An independent, academic, nonprofit research group affiliated with Tufts University, Tufts CSDD provides independent analyses of the nature and pace of new drug development. The center has conducted studies of drug development operational processes, including a benchmark analysis of activities related to the initiation of clinical research studies.

Center for Management Research in Healthcare

The Center for Management Research in Healthcare was designed to apply advances in management disciplines to health care by integrating theory founded on academic principles with industry best practices. Its goals include the transfer of management knowledge to health care settings and the dissemination of findings that arise at the intersection of health care and management.

Sensible Guidelines for the Conduct of Clinical Trials

In 2007, several clinical trials groups from McMaster, Duke, and Oxford Universities organized an international meeting, “Sensible Guidelines for the Conduct of Clinical Trials,” to discuss the difficulties involved in initiating and running randomized trials efficiently. The organizers concluded that solutions to many of the problems would require a coordinated response from academic trialist groups, regulatory agencies, pharmaceutical companies, and health care providers worldwide. A follow-up meeting of the Sensible Guidelines group took place in Oxford on September 5–6, 2009. Its principal aims were to (1) update the review of the main barriers preventing efficient trials; (2) share the experiences of those who are attempting to deal with these barriers; and (3) agree on possible solutions to the main difficulties and encourage their promotion through international collaboration.

SOURCES: AACI, 2009; CMRHC, 2009; CTSI, 2010; CTSU, 2010; CTTI, 2009; NCI, 2009a; Tufts University, 2009; Yusuf et al., 2008.

COST OF CANCER CLINICAL TRIALS

It has been difficult to accurately document the costs of all the various components and procedures of clinical trials; these costs vary significantly, depending on the nature of the trial. Additionally, there is a great deal of unfunded volunteerism in developing and conducting trials, particularly by investigators who are deeply committed to the assessment of cancer therapies; these investigators are not fully compensated for their time and effort.

Several groups have attempted to discern the various steps involved in the successful conduct of a clinical trial and the costs linked to carrying out those steps. Clinical trials can be broken down into seven basic functional steps (C-Change and Coalition of Cancer Cooperative Groups, 2007):

1. Protocol selection and development
2. Study and site feasibility assessment, including scientific review and evaluations of budgets and timelines
3. Regulatory submission of the protocol and ICFs to IRBs and the trial sponsor(s)
4. Legal and financial review and approval
5. Site activation, including site approval and preparation for study execution
6. Study execution and data collection and review (accrual and follow-up)
7. Study closure, including document retention
Of these seven steps, four are related to federal regulations: regulatory submission of the protocol, site activation, study execution and data review, and study closure. An average of 35 percent of clinical research costs is spent on compliance with such regulations (C-Change, 2005).
The time and effort spent on all these steps of a clinical trial can be considerable. One study estimated that the time required to conduct a 12-month randomized, placebo-controlled trial of a new chemotherapeutic agent was, on average, more than 4,000 hours, with the costs of nonclinical activities amounting to between $2,000 and $4,000 (in 2002 dollars) per study subject when overhead costs were excluded (Emanuel et al., 2003).
About half of the time spent on a clinical trial is devoted to study startup endeavors (IOM, 2009c). Startup costs for clinical trials include staff training, IRB approval, time for reviews, and staff time for startup visits and the completion of forms (C-Change and Coalition of Cancer Cooperative Groups, 2006). For Cooperative Group trials, some startup costs may be somewhat lower because of the existing infrastructure and operating procedures, but many unique aspects of each clinical trial also contribute to these costs. Many of the startup steps can involve several iterations, because changes made in response to one review body trigger re-reviews by other bodies. For example, protocols and ICFs often undergo multiple reviews by local or central IRBs, as well as by NCI and FDA. Contracts among multiple parties can require many layers of review that may take months to complete, and the financial review of a study may be done separately from a contract review (C-Change and Coalition of Cancer Cooperative Groups, 2007). Numerous steps are also involved in the initial execution of clinical trials, including on-site training of personnel, the establishment of billing and budget procedures, and the screening and recruitment of patients (C-Change and Coalition of Cancer Cooperative Groups, 2007). These fixed startup costs are independent of the number of subjects enrolled in a clinical trial, so trials become more economically efficient when large numbers of patients are enrolled.
However, one study found that only about half of open government-sponsored trials have any subjects enrolled (C-Change, 2005), and an NCI study of four NCI-funded Comprehensive Cancer Centers found that many trials accrue few or even no patients. As noted earlier in this chapter, a review of these four Cancer Centers, along with two large Cooperative Groups and CTEP, revealed that starting up a study takes nearly 3 years (Dilts and Sandler, 2007; Dilts et al., 2006, 2008, 2009). The substantial startup costs of trials with low rates of accrual often go unappreciated. One assessment found that Cooperative Group trials accruing two patients or fewer cost more than $700 annually, but it did not consider the $5,000 to $8,000 in startup costs documented in other studies (nor did it include the costs for research nurses or long-term follow-up) (Waldinger, 2008).
Once a clinical trial is under way, in addition to administering the experimental treatment to patients, much time is spent on patient follow-up. This follow-up is much more involved for clinical studies than it would be for standard patient care, as detailed case report forms, as well as forms that report adverse events, must be filled out (C-Change and Coalition of Cancer Cooperative Groups, 2007). In addition, new requirements from OHRP specify that if a substantial new toxicity becomes apparent during a clinical trial, the trial must again be reviewed by the IRB at the local institution, and the written consent form must then be modified accordingly (Abrams and Mooney, 2008; Goldberg, 2008). Even billing is more complex for patients in clinical trials, with Medicare requiring the costs of routine patient care to be listed separately from the research costs on the bills submitted to Medicare (IOM, 2009c). The data centers also have many tasks, such as quality control efforts (editing data, sending out queries, updating the database), creating and circulating reports on the progress of the study to investigators and funders, and preparing reports for data and safety monitoring boards.
Many of the costs of clinical trials are overlooked or understated (Waldinger, 2008), such as the costs of specimen collection, processing, and shipping, especially if the processing of the specimens is time sensitive and the specimens must be shipped individually, as well as the costs of standard imaging and pathology evaluations. These are increasingly important economic issues, as Cooperative Group studies are doing more genetic and other analyses of tumor or blood samples in the movement toward personalized medicine, which depends on the collection and analysis of such samples. This focus on personalized medicine increases the complexity and cost of clinical trials, as there is a greater need for the documentation of patient characteristics, imaging, and biomarker tests (see also Chapter 2).
In addition, for trials that Cooperative Groups undertake with industry support, there can be lengthy negotiations over the ownership and use of the biological specimens collected during the trial because they might be useful for future studies (IOM, 2009c). The use of such biospecimens can also require additional time to craft more complex ICFs and explain them to patients. Furthermore, current NCI policies require that research studies that propose to use specimens collected from intergroup protocols undergo scientific review by a scientific steering committee before specimens are made available. However, this review is not linked to funding, and thus, investigators must often seek funding by other mechanisms. This process
creates many review loops, time delays, and significant double jeopardy, as each proposal requires at least two scientific reviews, one to receive the specimens and one to receive funding, conducted by different review groups involving many people at different times.
The increasing number of global clinical trials adds more complexity and costly bureaucratic burdens as researchers try to comply with the wider range of regulations that vary from country to country (C-Change and Coalition of Cancer Cooperative Groups, 2006). Even variations in local regulations can add to the complexity and can be burdensome in multicenter trials, especially because many participating sites contribute 10 patients or fewer, yet they must still undergo cumbersome regulatory reviews. One study estimated that 30 to 40 percent of all funding for cancer clinical trials is used to cover the costs of local regulatory compliance (C-Change and Coalition of Cancer Cooperative Groups, 2006). For example, an investigator who participates in just one clinical trial over 7 years may be required to have between 35 and 50 interactions with the IRB, each of which requires about 100 hours of staff preparation time (C-Change and Coalition of Cancer Cooperative Groups, 2006). As one research group summed it up, “Regulations governing the conduct of clinical research have become more and more complex, placing a greater burden on investigators in terms of compliance, documentation, and training” (Glickman et al., 2009). In addition, the workload associated with audits, data queries, and blinded central reviews has been increasing (see also the previous section on oversight of clinical trials).
Further insight into the costs of conducting Cooperative Group trials in particular is expected in 2010, when NCI will publish its analysis of the costs of Cooperative Group clinical trials. This comprehensive study will document the Groups’ infrastructure costs linked to operational functions, statistics and data management, scientific leadership, and core support services, including specimen bank and laboratory services. The analysis is expected to identify areas of inadequate funding, as well as best practices and opportunities for enhanced efficiency. The ultimate goal of the study is “to develop an improved funding model for the Cooperative Group Program that aligns funding more closely with actual costs and enhances overall cost effectiveness” (Hautala, 2008). Preliminary results indicate that most Groups spend about 50 percent of their budgets on infrastructure and about 50 percent on accruing and managing patients. Most allocate the largest portion of their infrastructure funds to statistics and data management, but the percentage allocated to other infrastructure components and subcomponents, such as administration, varies widely. Some of this variation may be due to the way in which expenses are described in the grant applications. The analysis also found that the amounts awarded were always less than the amounts requested and that no Group spent its award at exactly the same percentage allocation that was originally requested (CTAC, 2008; Hautala, 2008).
Research and patient care costs must be met if a trial is to be completed efficiently and effectively. As one participant at an IOM workshop noted, it may be unethical to attempt a clinical trial when those who are running it are not paid enough to do it well (IOM, 2009c). However, despite the long history of accomplishments of the NCI Cooperative Group Program (as described in Chapter 1), the Program has been chronically underfunded because of limitations in NCI funding and the increasing complexity and costs of clinical trials. The lack of sufficient funds for the Program was noted with concern more than 10 years ago, in the 1997 Armitage report (NCI, 1997), but the funding situation has not substantially improved since that report recommended increased funding. When the budget for NIH was doubled between 1998 and 2003, the Cooperative Group Program experienced 40 percent growth in funding (when adjusted for inflation), although the Program’s share of the total NCI budget actually decreased, from 3.8 percent in 1998 to 3.4 percent in 2003 and to slightly less than 3 percent (2.98 percent) by FY2008 (see http://www.cancer.gov/aboutnci/servingpeople/snapshot and Figure 3-4). Furthermore, because of NCI budget constraints, funding for the Cooperative Groups has been flat or declining in recent years. Funding for the Program declined after 2002 and leveled off at about $145 million a year (Figure 3-4), a 20 percent decline since 2002 when the effects of inflation are considered. In real dollars, the current funding level is less than it was in 1999. The CCOP funding that many Cooperative Groups rely on is also declining. This situation is increasingly unsustainable. The committee recommends that NCI allocate a larger portion of its research portfolio to the Clinical Trials Cooperative Group Program to ensure that the Program has sufficient resources to achieve its unique mission.
FUNDING FOR CANCER CLINICAL TRIALS
Overview of Federal Funding for Cancer Research
The U.S. Congress determines the total funding allotment for NCI each year, but the NCI director is responsible for proposing a budget and for allocating the available funds among the various programs and funding mechanisms within NCI. Unlike other institutes at the NIH, NCI’s budget priorities and allocations are independent of those of the NIH director because of the budgetary bypass provision of the National Cancer Act of 1971. This provision permits the NCI director to submit NCI’s annual budget request directly to the U.S. president. The NIH director and secretary of HHS may comment on the NCI bypass budget, but they cannot change the proposal (reviewed by IOM, 2003).
23 The Cooperative Group Program’s share of the NCI budget decreased by 10 percent, from 3.8 percent in 1998 to 3.4 percent in 2003. By FY2008, the Program’s share of the NCI budget had decreased to 2.98 percent. See http://www.cancer.gov/aboutnci/servingpeople/snapshot and also Figure 3-4.
Allocation of NCI funds among the competing needs of its various programs is a major challenge for the NCI director, who must take many factors into consideration. Decisions must be made about how much funding to devote to basic, laboratory research versus clinically oriented research across several major categories that include cancer causation, prevention, and control; cancer biology; detection, diagnosis, and treatment; and resource development (reviewed by IOM, 2003). Furthermore, the clinical trials program supported by NCI is multifaceted, with the Cooperative Group Program being just one of several clinical research endeavors that NCI supports (Figure 3-5). In addition to its intramural Clinical Center, NCI has grants that can support either investigator-initiated studies or the Cancer Centers at which trials are conducted, as well as U10 cooperative agreements, such as those that underlie the Cooperative Group Program and CCOP (Box 3-4).
To determine how funding should be parceled out among the many intramural and extramural programs of NCI, the NCI director can, in principle, draw on the expertise of external advisory boards (IOM, 2003; NCI, 2009c). Notably, the National Cancer Advisory Board (NCAB) is charged with advising “the NCI director with respect to the activities carried out by NCI, including reviewing and recommending for support grants and cooperative agreements, such as the agreement that funds the Cooperative Groups, following technical and scientific peer review.” All members of this group are appointed by the president, with the intent of providing oversight for all NCI activities to ensure that NCI programs maintain goals focused on the nation’s interests and needs in cancer research. The Board of Scientific Advisors (BSA) could also influence allocations within the NCI budget, as one of its charges is to advise the NCI director on the policy, progress, and future directions of the extramural scientific research program within each division. This includes evaluations of awarded grants, cooperative agreements, and contracts and examination of extramural programs and their infrastructures to evaluate whether changes are necessary to ensure that NCI is positioned to effectively guide and administer the needs of scientific research in the foreseeable future (reviewed by IOM, 2003).
[Figure 3-5: Funding Mechanisms for NCI Clinical Trials Program, comprising extramural and intramural research activities. NOTES: R01 = research project grant, R03 = small research grant, R21 = exploratory/developmental grant, R37 = method to extend time in research (MERIT) award, P01 = research program grant, P50 = specialized center grant (see http://deainfo.nci.nih.gov/flash/awards.htm). SOURCE: Abrams, 2008b.]
However, NCAB and BSA currently have little input in setting budget priorities and ensuring that the Cooperative Group Program has sufficient funds to operate effectively. The committee recommends that these external advisory boards have a greater role in advising NCI on how it allocates its funds to support a national clinical trials program. This would help to ensure the most rational distribution of funds, in light of such factors as scientific opportunity and clinical need.
Funding of the Cooperative Group Program
The NCI Cooperative Groups receive funding from NCI’s Division of Cancer Treatment and Diagnosis. In 1980–1981, the mechanism of support for the Cooperative Group Program was converted from a grant to a cooperative agreement (U10 award). This was a major change for the program because the cooperative agreement funding mechanism is intended to be a cross between a grant and a contract and thus allowed NCI to have a much more active role in the conduct, management, and oversight of research than grants typically require. Investigators funded through NCI’s other peer-reviewed grant mechanisms (the bulk of NCI extramural funding) are not subject to such oversight. There is considerable variability across the NIH with regard to the balance between oversight and support of trials by the sponsoring institution, and unlike many other NIH clinical trials arrangements, funding for the Cooperative Groups is not linked to specific clinical trials but, rather, to the infrastructure that supports the trials. The U10 award supports the operations, statistical offices, and committees of the Cooperative Groups (CTEP, 1996; IOM, 2009c).
Funding for the CCOP infrastructure is independent of the budget of the Cooperative Groups and comes from the Division of Cancer Prevention rather than the Division of Cancer Treatment and Diagnosis. In fiscal year 2009, the program supported 47 community oncology sites and 12 research bases, as well as 14 minority-based CCOP sites. NCI has proposed increasing the number of CCOP sites to 50, with 1 additional research base, at a cost of $13.6 million over 5 years, and increasing the number of minority-based CCOP sites to 20, at a cost of $6.2 million over 3 years (Goldberg, 2009).
The Cooperative Groups are evaluated at intervals of no more than 6 years on the basis of various performance criteria. The criteria include the numbers of publications and accruals; the scientific merit and innovation of trial proposals and whether they meet national priorities; timeliness of study completion; leadership; and whether there is a strong commitment to active, meaningful participation in NCI Phase III treatment trials (NCI, 2006). However, the Cooperative Groups have different timelines for review and so are not compared directly with each other in the evaluation process. In addition, the amount of funding received is not directly linked to the review score, and because of NCI funding limitations, the Cooperative Groups usually receive 30 to 50 percent less than the total grant money requested on their applications and approved by peer review.
CCOP grantees get funds for research costs in advance and earn credits against this funding by enrolling patients into trials (NCI, 2009b). CCOP grants also undergo a peer-review process, based largely on accruals and data quality, that is separate from the review process for the Cooperative Groups that they have joined (NCI, 2006). CCOP funding also covers only about two-thirds of the actual costs of conducting clinical trials in community settings (IOM, 2009c).
Such insufficient funding has become unsustainable as trials have become more complex. For example, as noted above and in Chapter 2, the funding does not adequately support the collection and molecular characterization of tumor specimens and their storage in biospecimen banks, so Cooperative Groups must supplement support for these activities from a variety of sources, including repository users’ fees, other grants, contracts, and institutional commitments. As noted in Chapter 2, such activities are increasingly part of Cooperative Group clinical trials to assess patient subgroups for whom therapy is especially effective or especially toxic. Recognizing the increasing importance of correlative studies that use biospecimens collected during clinical trials to realize the promise of targeted therapies and personalized medicine, NCI set aside $1.6 million in 2007 for biomarker studies run by the Cooperative Groups. However, that funding may be insufficient for these efforts, as tests may cost thousands of dollars per patient. NCI also recently introduced the Biomarker, Imaging, and Quality of Life Supplemental Funding Program to support correlative science and quality-of-life studies that are integral to Phase III clinical trials, with $5 million being allocated for this program in 2009 (NCI, 2008).
Another major factor contributing to the underfunding of Cooperative Groups is inadequate reimbursement of per patient costs. This shortfall was recognized in the U.S. House of Representatives appropriations report for fiscal year 2010,24 albeit only in regard to gynecologic oncology trials. NCI provides per patient reimbursements to individual Cooperative Group sites in addition to the funding that it provides for the Cooperative Group’s infrastructure. However, since 1999, the reimbursement for sites has remained fixed at $2,000 per patient in treatment trials, which is about one-third to one-quarter of the amount needed to cover the costs of these studies (C-Change and Coalition of Cancer Cooperative Groups, 2006). Although the average per patient cost in industry trials is higher (median costs range from $4,700 for Phase III trials to $8,450 for Phase II trials [C-Change and Coalition of Cancer Cooperative Groups, 2006]), industry-sponsored trials may provide $15,000 or more in reimbursement per patient enrolled (Comis, 2008). A recent survey of Cooperative Group sites found that of the 155 respondents (32 percent) who were planning to limit their Cooperative Group participation, three-quarters cited inadequate per case reimbursement as the reason for the decline in their level of participation (Blayney, 2009). Some cancer centers have also capped the number of accruals in Cooperative Group trials because participation is too great an economic burden and because Cooperative Group accruals are not highly valued in reviews for the renewal of Cancer Center support grants, which place more emphasis on individual investigator-initiated, NCI-funded research undertaken by center personnel (IOM, 2009c). The committee recommends that NCI increase the per case reimbursement and adequately fund highly ranked trials to cover the costs of the trial, including the costs for biomedical imaging and other biomarker tests that are integral to the trial design.
24 See House Report (H. Rpt. 111-220) at http://www.access.gpo.gov/congress/legislation/10appro.html.
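The per patient gap described above lends itself to a simple illustration. The 50-patient annual accrual figure below is hypothetical, and the cost range merely restates the text’s claim that the fixed $2,000 payment covers roughly one-quarter to one-third of actual per patient costs:

```python
# Back-of-envelope illustration (not from the report's methodology) of the
# per-patient funding gap, using the figures cited in the text.
NCI_REIMBURSEMENT = 2_000               # fixed per-patient payment since 1999 ($)
# $2,000 is about one-third to one-quarter of actual cost, per the text:
EST_COST_LOW = NCI_REIMBURSEMENT * 3    # ~$6,000 per patient
EST_COST_HIGH = NCI_REIMBURSEMENT * 4   # ~$8,000 per patient

def site_shortfall(patients_per_year: int) -> tuple[int, int]:
    """Unreimbursed cost range a site absorbs for a given annual accrual."""
    low = (EST_COST_LOW - NCI_REIMBURSEMENT) * patients_per_year
    high = (EST_COST_HIGH - NCI_REIMBURSEMENT) * patients_per_year
    return low, high

# A hypothetical site enrolling 50 patients a year absorbs $200,000-$300,000:
print(site_shortfall(50))  # (200000, 300000)
```

Even at this modest accrual level, the unreimbursed cost is on the order of several hundred thousand dollars a year, which helps explain why some sites cap Cooperative Group accruals.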
In addition, the new focus on targeted and combination therapies tends to make the process for obtaining informed consent more difficult and to increase the structural complexity of trials, as well as the complexity of data collection and analysis, all of which increase the costs and personnel time devoted to a trial (NCI, 2009e). Recognizing this, NCI recently implemented a trial complexity scoring model, under which studies deemed “complex” on the basis of various elements described in the model may be eligible to receive additional funds (if they are available) to supplement their base capitation. The complexity elements evaluated include the informed-consent and randomization process, the complexity and length of the course of investigational treatment, the duration of follow-up required and the follow-up testing to be done, the complexity of data collection, and whether ancillary studies (such as correlative science or quality-of-life studies) will be conducted (NCI, 2009e). This initiative is designed to align reimbursement for Phase III treatment trials with their complexity to compensate the trial sites for additional expenses. However, the maximum reimbursement under the new system for trial complexity payments is $3,000 per study subject. For many cancer clinical trials, this amount appears to be inadequate to cover most labor costs, per subject enrollment costs, and additional research-related paperwork and reporting requirements (ACS CAN, 2009).
The lack of adequate reimbursement is further exacerbated by the refusal on the part of many health insurers to pay for the health care costs linked to a clinical trial, even though many of the same costs would be reimbursed if the patient were not receiving experimental treatment. The costs linked to treatment within cancer clinical trials are substantial and include physician visits, blood work, and X rays (IOM, 2009c). This issue is addressed in more detail in Chapter 4.
Because of insufficient funding from NCI, the Cooperative Groups must leverage other sources of funding to accomplish their work. The Cooperative Groups are permitted to accept funds from nongovernment sources for research activities not supported by NCI (NCI, 2006). Via this mechanism, the Cooperative Groups can accept support for their trials from industry or charitable organizations. A 2004 survey found that, on
average, 29 percent of a U.S. Cooperative Group site’s clinical research revenue originates from sources other than the trial sponsors. These sources include donations, contributions from philanthropic organizations, and community and non-trial-specific grants, as well as institutional discretionary funds of the institutions conducting the research (C-Change, 2005). Private funding, however, is usually allocated to specific trials and not to support the infrastructure of the Cooperative Groups. Consequently, private funds cannot always compensate for insufficient public funding (IOM, 2009c).
The committee recommends that to ensure sufficient funding of high-priority clinical trials, the total number of trials undertaken by the Cooperative Groups should be reduced to a quantity that can be adequately supported.
COLLABORATION AMONG STAKEHOLDERS
As noted throughout this chapter, cancer clinical trials often necessitate effective collaboration among diverse stakeholders, but numerous challenges to achieving such collaborations remain (NCI, 2005a). By leveraging the strengths and abilities of different partners, effective collaborations can offer many benefits, including greater efficiency, by pooling skills, technologies, and other resources and by sharing costs and risks. Public-private partnerships in particular can more effectively leverage public funding and resources, increase the breadth and depth of research, and effect a more rapid translation from basic discoveries to public health applications. Industry, government, and nonprofit organizations all have a potential role to play in such partnerships and could each make important and unique contributions to the endeavor. NIH, NCI, and FDA have all recognized the value of these collaborative activities (Niederhuber, 2009; NIH, 2009; Woodcock and Woosley, 2008). As noted in Table 3-2, CTAC recently established an ad hoc Public-Private Partnership Subcommittee charged with providing advice to the director of NCI on how to enhance NCI-sponsored clinical trials through collaborative interactions with the private sector.
Two recent reports from the President’s Council of Advisors on Science and Technology also acknowledge the importance and value of strengthening public-private collaborations to enhance innovation (PCAST, 2008a,b). The latter report noted that “the accelerating speed of technological development requires new methods of knowledge exchange between universities and industry so as to capture the societal and economic benefits of these innovations” (PCAST, 2008b). That council recommended that guidance and educational tools on intellectual property and technology transfer practices be developed for university and private-sector partners (PCAST, 2008b).
One recent example of multiple stakeholders working for a common good is a meeting sponsored by the Brookings Institution, in which professional societies and a cancer advocacy organization provided a “safe harbor” to facilitate an evidence-based review of safety data from several pharmaceutical groups and a Cooperative Group to better determine what safety data are needed for supplemental FDA approvals (Curt, 2009; McClellan and Benner, 2009).
Collaborative Funding Mechanisms
Inadequate funding of the Cooperative Groups, combined with the growing interest by industry in developing and clinically testing new therapeutics and diagnostics for cancer, has also led to more industry-Cooperative Group collaborative cancer clinical trials in the past decade. Both parties stand to benefit from such public-private partnerships. Industry provides Cooperative Group investigators with access to its new agents and supplements the currently insufficient per case payments that NCI provides for patients enrolled in a Cooperative Group trial. The Cooperative Groups provide industry with the extensive infrastructure, expertise, and scientific credibility that enable companies to conduct high-quality, large-scale, multicenter clinical trials without the burden of vetting and contracting with multiple academic or private institutions. In addition, industry can use some of the public resources of the Cooperative Groups, such as NCI’s central IRB (Bressler and Schilsky, 2008).
As noted at an IOM workshop, when a clinical trial done by an NCI-funded Cooperative Group has regulatory implications (e.g., if it will be a trial for registration of a drug), the added costs linked to regulatory requirements are increasingly borne by the drug’s sponsor (IOM, 2008). With judicious negotiation and planning with industry, the Cooperative Groups could perhaps use this model to double their budgets so that half comes from drug companies and half comes from NCI. A similar model for the funding of clinical trials is already in use in Canada (IOM, 2008). To avoid perceptions of bias because of industry involvement, it would be important for Cooperative Groups to retain control of the design and analysis of such clinical trials and to ensure that industry partners, like the trial investigators, are not allowed access to the clinical trial database until the trial is completed. Several Cooperative Group-industry clinical trials have successfully used such a procedure (Bressler and Schilsky, 2008; IOM, 2009c).
One Cooperative Group, the National Surgical Adjuvant Breast and Bowel Project, has used a similar approach to industry-Cooperative Group clinical trial collaborations (Wickerham, 2009). It has used a hybrid model to fund Cooperative Group trials, whereby NCI provides funds for fixed
infrastructure costs, such as the costs for the design, production, conduct, and analysis of clinical trials, but industry funds variable costs, such as per case costs at trial sites, the cost of nonstandard patient care, and the cost of ancillary studies. The potential advantage of using this collaborative model is that it allows the Cooperative Groups to maintain the ability to independently design, conduct, and publish the findings of clinical trials and make biospecimens available for public access; provides NCI and FDA review and oversight; and provides adequate resources for the proper conduct of studies in a timely fashion without avoidable fiscal barriers. Another example of a hybrid funding arrangement is the international partnership between CALGB and Novartis to test a leukemia drug within select genetic populations. While CALGB is the IND holder for the United States and North America, Novartis holds the IND for the rest of the world. Novartis organized international sites while CALGB organized the North American sites (IOM, 2009c). The committee recommends that NCI do more to facilitate the use of appropriate hybrid funding models, in which NCI and industry support clearly defined components of trials that are of mutual interest.
This approach is common in other countries as well. The United Kingdom uses a form of collaborative hybrid funding for most of its medical research, and the European Organisation for Research and Treatment of Cancer (EORTC) has used a collaborative funding model for years. Multicenter cancer trials groups in the United Kingdom and Europe generally use methods of organization that are somewhat different from those that the U.S. Cooperative Groups use.25
In the United Kingdom, for instance, a government agency (the Department of Health) covers the costs of laboratory and imaging services and the administration of therapeutic agents at no charge for approved clinical trials, and provides the required national infrastructure in the form of salaries for research staff and clinical research associates at National Health Service hospitals. The United Kingdom disease groups can then work collaboratively with pharmaceutical and biotech companies in one of two ways: either with the industry partner covering all costs for a study primarily intended to support registration of a drug and done under commercial sponsorship, or with only partial support (or even just the provision of a drug at no cost) from a company for a trial that has been developed by the investigator and that may or may not ever support drug registration. In the latter situation, the database and the analysis are entirely controlled by the academic investigators and the company involved otherwise remains at arm’s length from the trial. A set of characteristics is used to identify which trials funded by industry should be considered commercial and which should be considered
“investigator-initiated” and effectively academic research. An academic trial usually has most or all of the following characteristics:
•	The primary purpose of the trial is not for licensing. (After completion, a company may decide that it wants to use the data for licensing, but if so, new financial and practical arrangements will need to be negotiated.)
•	It usually does not collect as full and complete a data set as commercial trials do (e.g., concomitant medications, detailed data on less critical blood tests), and it does not employ on-site monitoring beyond the usual standard (such as CTEP’s 10 percent auditing every 3 years).
•	The database, analysis, and publication are independent of the company, and the data are released to the company only after an independent data monitoring committee has agreed.
•	These studies almost always have an academic or public sponsor (sponsorship is very formally defined in EU regulations), not the company.
These criteria almost always clarify which studies are investigator-initiated (even if the company may have provided considerable advice). When a trial does not fit the characteristics above, it is considered a “commercial” study, and the company must reimburse the National Health Service for the full costs of all aspects of the study. A costing template ensures appropriate reimbursement. This system provides value to the public and taxpayer because the resultant trials are considered scientifically interesting and potentially beneficial for improving patient care but are less likely to be conducted by industry on its own.
EORTC generally works in a similar way, although it does not have the level of funding support for staff, imaging, and so on, that is available in the United Kingdom. Most EORTC studies therefore require industry support at a higher level than studies in the United Kingdom do. It nonetheless generally uses a model whereby investigators independently control the study database and analysis and may do so even for studies with full funding from commercial sources and with the goal of product registration.
Other public-private models have also been developed or proposed, including that of the Foundation for the National Institutes of Health (FNIH), which was established by the U.S. Congress in 1996 to support NIH’s mission of improving health through scientific discovery. According to its website (FNIH, 2009), “The foundation identifies and develops opportunities for innovative public-private partnerships involving industry, academia, and the philanthropic community. A non-profit corporation, the foundation raises
private-sector funds for a broad portfolio of unique programs that complement and enhance NIH priorities and activities.” The Foundation, which receives between $70 million and $100 million in revenues per year from such benefactors as pharmaceutical companies and the Bill and Melinda Gates Foundation, has funded large-scale initiatives, such as the Grand Challenges in Global Health and the Collaboration for AIDS Vaccine Discovery, as well as smaller-scale endeavors, such as the Biomarkers Consortium of the FNIH (FNIH, 2009). Under the auspices of this Biomarkers Consortium, several pharmaceutical and biotechnology companies are collaborating with NCI, FDA, and academic investigators to further the use of biomarkers in breast cancer treatments; the I-SPY2 trial26 aims to simultaneously and serially test several different targeted treatments and biomarker tests to more rapidly assess which biomarkers best predict the likelihood of a therapeutic response (Barker et al., 2009; see also Chapter 2).
If funding is provided in a transparent way, both industry and foundations could make important contributions to the publicly funded clinical trials system. If the clinical trials system were streamlined and less complicated (through the adoption of the recommendations in this report and those of the OEWG), these stakeholders might be more willing to support trials conducted by the Cooperative Groups. Similarly, if the core funding provided by NCI adequately supported the clinical trials infrastructure, industry and foundations would be more willing participants, as they could simply cover the costs of individual studies. The committee recommends that NCI facilitate more public-private partnerships and precompetitive consortia, guided in part by successful models.
Contract Negotiations and Intellectual Property
A major stumbling block to the development of potentially fruitful private-public partnerships is the complex, multiparty contractual negotiations that can be extremely lengthy and consume substantial staff resources of all parties involved (Dilts and Sandler, 2006). These negotiations often stall over issues related to intellectual property, publication rights, and data or biospecimen ownership and access (Bressler and Schilsky, 2008). NCI has provided guidelines for Cooperative Group-industry collaborations that broadly outline the relationship between the two parties and NCI with regard to confidentiality, publication rights, access to data, indemnification and liability, and intellectual property rights (NCI, 2009d). Although the guidelines do specify some rights, such as the right of an industry collaborator to review a manuscript of the study prior to submission but not
to edit or require changes other than to request the removal of proprietary information, much of the detail in the guidelines is left for negotiations. For example, the guidelines state that “when a clinical protocol involves either an agent, which is proprietary to another company, or involves another NCI collaborative effort, the NCI, the Collaborator, and all other Collaborators will jointly determine a reasonable and appropriate mechanism for data access and sharing prior to initiation of the clinical trial” (NCI, 2009d).
To expedite the negotiations required between industry and the publicly funded investigators before the launch of a collaborative trial, NCI and the CEO Roundtable on Cancer27 recently reviewed copies of 78 clinical trial agreements from participating organizations and identified 45 key concepts related to intellectual property, study data, subject injury, indemnification, confidentiality, and publication rights. They then gleaned from those agreements the exact language that embodied the key concepts and used it to create standardized and harmonized clauses for clinical trial agreements that are designed to serve as a starting point for contract negotiations (CEO Roundtable on Cancer and NCI, 2008). The U.S. Department of Justice gave the proposed clauses a favorable review and indicated that it had no intention to challenge the initiative (DOJ, 2008). However, its adoption is not yet widespread.
Nevertheless, no proposed clauses in this document specifically detail the ownership of and access to biospecimens and related data collected during a clinical trial. In addition, drugmakers must still negotiate the rights to patented discoveries stemming from biomarker research involving their agents. In November 2009, NCI proposed language for technology transfer agreements, which states that sponsors would obtain a royalty-free, worldwide, nonexclusive license for commercial purposes and a time-limited first option to negotiate an exclusive or coexclusive, if applicable, worldwide, royalty-bearing license for commercial purposes for inventions arising from clinical or nonclinical studies involving a collaborator’s therapeutic agent (Ansher et al., 2009). The committee recommends that NCI develop standard licensing language and contract templates for material and data transfer and intellectual property ownership in biospecimen-based studies and trials that combine intellectual property from multiple sources.
The Life Science Consortium of the CEO Roundtable on Cancer has also initiated the creation of a precompetitive pool of intellectual property for cancer drug development (Curt, 2009). This effort is modeled in part on the successful example of SEMATECH (SEmiconductor MAnufacturing TECHnology) in the semiconductor industry, in which the U.S. government and 13 firms representing 80 percent of the U.S. semiconductor manufacturing capacity contributed $500 million over 5 years to a public-private partnership to solve the critical problems in computer chip manufacture (reviewed by Curt, 2009). That group developed new ways to standardize the equipment, supply chain, and manufacture of semiconductors in a way that benefited all companies.28 A lack of standardization and qualification of biomarkers has been cited as a major impediment in the development of targeted cancer treatments as well (IOM, 2007). More public-private collaboration in a precompetitive environment could facilitate the development and use of biomarkers in cancer therapeutics, and the codevelopment of a biomarker diagnostic with a targeted cancer drug.
Recognizing this, the Life Science Consortium has been working to establish a new precompetitive environment in which major drug companies can present their biomarker programs for cancer drug development, under confidentiality, to NCI (Curt, 2009). This precompetitive safe harbor allows NCI to gain a unique perspective that is not available to any individual industry partner and to identify areas of overlap and redundancy, as well as gaps. By selecting the most promising partners for further biomarker development and then sharing the validated markers with the academic and industry communities at large, NCI provides a neutral platform that can enable cancer drug development across companies and academia because the risks are shared and collaboration replaces competition. This new approach has already come to fruition. NCI identified a promising assay for measuring the activity of poly(ADP-ribose) polymerase (PARP) inhibitors and worked to further develop and validate the assay, which has since been used in a Phase 0 human trial (Kinders et al., 2008; Kummar et al., 2009; Yang et al., 2009).
Grand Challenges to Stimulate Innovation
Philanthropic and government challenge prizes are undergoing a renaissance because of the growing awareness that, when such prizes are properly applied, they can be a powerful tool for change that can tap new, multidisciplinary, and widespread resources to solve problems (McKinsey & Company, 2009). In addition, the growing science on prizes is improving prize economics and practices for managing execution challenges and risks. Unlike Nobel Prizes, which recognize prior achievement, a growing number of big-prize challenges focus on achieving a specific future goal, and they are often awarded to those who help solve complex problems that have not responded well to activities funded by standard grants. Challenge grants may be especially useful for solving problems for which the goals are clear, but the ways to achieve them are not. By attracting diverse talent and
a range of potential solutions, challenge grants can foster innovative and often unexpected solutions (McKinsey & Company, 2009).
Grand challenge grant competitions are increasingly being used with great success to help solve large-scale problems or achieve goals that improve society at large (McKinsey & Company, 2009). For example, the X PRIZE Foundation is offering a multimillion-dollar award to the first team to improve the speed of human genome sequencing to better realize the promise of personalized medicine. Another X PRIZE has been established to find ways to change health care delivery, financing, and incentives to measurably improve the health value in a 10,000-person community during a 3-year trial. Rather than directly funding research, an X PRIZE aims to spur innovation by tapping into competitive and entrepreneurial spirits. A report on such incentive prizes concluded that they are unique and powerful tools that can produce change not only by identifying new levels of excellence and by encouraging specific innovations but also by changing wider perceptions, improving the performance of communities of problem solvers, building the skills of individuals, and mobilizing new talent or capital (McKinsey & Company, 2009). Examples of technology development spurred by this mechanism include the first commercial space flight, increased supercomputer speed, and the first autonomous vehicle to drive 100 kilometers.
The use of novel approaches and the application of the best minds in multiple disciplines (engineering, social science, management, marketing, etc.) could help to solve some of the well-known problems described in this report. The potential for impact can often be a strong motivator of good science, and competition can foster both innovative solutions and rapidity in their discovery, much as occurred with the sequencing of the human genome. Thus, one promising approach would be to develop a major, influential grand challenge to improve cancer clinical trials.
The National Institutes of Health Reform Act of 2006 specifies that
the Secretary of HHS, acting through the Director of NIH, may allocate funds for the national research institutes and national centers to make awards of grants or contracts or to engage in other transactions for demonstration projects for high-impact, cutting edge research that fosters scientific creativity and increases fundamental biological understanding leading to the prevention, diagnosis, and treatment of diseases and disorders. The head of a national research institute or national center may conduct or support such high-impact, cutting edge research [using the previously described awards].
The committee recommends that NCI use this authority to implement a grand-challenge grant competition with the goal of dramatically increasing the efficiency and innovation of critical cancer clinical trials and clinical trials processes.
The recommendations in this chapter support the committee’s goal to improve the speed and efficiency of the design, launch, and conduct of clinical trials as well as the goal to improve prioritization, selection, support, and completion of cancer clinical trials. The committee concluded that a robust, standing cancer clinical trials network is essential to effectively translate discoveries into clinical benefits for patients. Multi-institutional collaborations are necessary to conduct large Phase III trials for indications such as adjuvant therapy, first-line therapy of metastatic disease, and prevention; single institutions are not capable of undertaking such large-scale trials. Because cancer encompasses more than 100 different diseases, the treatment regimens are complex and diverse (and becoming more so), and hundreds of experimental therapies for cancer are in development, there is a continuous need for the design and implementation of new trials, and it would be highly inefficient to fund and develop infrastructures and research teams separately for each new clinical trial.
If NCI is to achieve the goal of improving outcomes for patients with cancer, it is imperative to preserve and strengthen the unique capabilities of the NCI Clinical Trials Cooperative Group Program as a critical component of NCI’s translational continuum. However, the current structure and operating processes of the entire trials system need to be reevaluated to reduce redundancy and improve effectiveness and efficiency. Clinical oncology research has changed a great deal since the early days of the Cooperative Group Program in the 1950s. The process of conducting large-scale trials has become highly complex, with the incorporation of new technologies and trial designs, the increasing number of therapeutic agents to be tested, the increase in the number of Cooperative Groups, and the evolving regulatory environment. All of the stakeholders, including NCI and other federal agencies, such as FDA, as well as the Cooperative Groups need to reevaluate their current roles and responsibilities in cancer clinical trials and work together to develop a more effective and efficient multidisciplinary trials system. Modifying any particular element of the Program or the clinical trials process will not suffice; changes across the board are urgently needed.
Implementation of the committee’s recommendations would move the Cooperative Group Program beyond cooperation to integration for many functions, and would significantly alter the definition, structure, and operations of Cooperative Groups. First, some consolidation of the Cooperative Group front offices would reduce redundancy in the Program, enable the pooling of resources, and reduce competition for enrollment in trials on the basis of Group-specific priorities. NCI should facilitate front office consolidation by reviewing and ranking the Groups using defined metrics on a similar timetable and by linking funding to review scores. Key planning and scientific evaluations should be at the level of multidisciplinary disease site committees, with a focus on the quality and success of the clinical trial concepts developed and the committee’s record of development of new investigators. Committees that do well in review should be funded, and committees with low review scores should be eliminated. Group leaders should consolidate disease site committees from different Groups to strengthen their productivity and review scores. Changing the timeline and focus of the review process to facilitate direct comparisons of the front office operations would ensure that only the most innovative and successful disease site committees would thrive, expand their membership, and maintain a sense of community. The logical extension of the proposed consolidations will be a reduction in the number of Cooperative Groups. For example, Groups focused on a single disease site or modality would likely need to merge with multidisciplinary Groups under this system. Such a system would ideally maintain strong competition for trial concepts among a smaller number of disease site committees and thus help to ensure that only the highest-priority trials are undertaken.
Second, NCI should require and facilitate the consolidation of administration and data management operations across all of the Cooperative Groups (the back office operations), including such activities as data collection and management, data queries and reviews to ensure that the data collected are complete and accurate, patient registration, audit functions, submission of case report forms, training of clinical research associates, image storage and retrieval, drug distribution, credentialing of sites, and funding and reimbursement for patient accrual. Each Cooperative Group devotes significant resources to supporting similar administrative structures and activities, even though consolidated back office operations work very successfully in other industries. The consolidation of offices and personnel to conduct these information-based activities across all the Cooperative Groups would streamline the operations, reduce redundancy, conserve resources, and offer greater consistency to providers enrolling patients in trials launched by different Cooperative Groups. However, it will be imperative to ensure high service quality and responsiveness to the principal investigators and Cooperative Groups, through periodic peer review of formal metrics of performance.
In addition, NCI should work with the extramural community to make process improvement in the operational and organizational management
of clinical trials a priority. For example, NCI should work with governmental and nongovernmental agencies with relevant expertise to facilitate the identification of best practices in the management of clinical research logistics and develop, publish, and use performance, process, and timing standards and metrics to assess the efficiency and operational quality of clinical trials. The operational processes used to conduct clinical trials are idiosyncratic to individual institutions or Cooperative Groups, with little sharing of best practices or lessons learned. Because these operational issues can significantly delay clinical trials and the evaluation of innovative therapies for all types of cancer, the operational performance metrics used to evaluate Cancer Centers and Cooperative Groups need to be enhanced and redefined to include quality, outcome, and timing metrics for clinical trials. A transparent process that could be used to measure and reward the conduct of meaningful and efficient clinical research would greatly facilitate the adoption and use of best practices and metrics.
One of the most time-consuming and complex activities in the clinical trials process is the development of a scientific concept into a viable and approvable clinical trial protocol. NCI’s Operational Efficiency Working Group, which was charged with identifying ways to reduce the study activation time for Cooperative Group and Cancer Center trials by 50 percent, has recently put forth specific, measurable goals that include reducing the time from protocol submission to final protocol approval to 300 workdays for Phase III trials and eliminating trials that do not open and accrue patients within 2 years. To achieve those goals, the working group recommended staffing changes, more coordinated, parallel reviews, improved project management, and better tracking of the trial protocol. The IOM committee endorses these recommendations.
More active and consistent support from NCI to facilitate trial operations would also be beneficial. For example, NCI should devote more funds to drug distribution, provide resources and technical assistance to facilitate the rapid adoption of a common patient registration system as well as a common remote data capture system, facilitate more efficient and timely methods for ensuring that trial data are complete and accurate, and develop standardized case report forms that meet regulatory requirements. However, all these activities will require additional NCI staff and resources to support the Cooperative Group Program.
Compliance with regulatory requirements for the conduct of clinical trials is another major challenge for clinical investigators. Multiple agencies and institutional bodies of HHS review and provide oversight for cancer clinical trials, including NCI, FDA, OHRP, OCR, and IRBs. The many oversight bodies have different objectives and responsibilities and thus seek similar, overlapping, but not identical information and action for compliance. Moreover, the review processes are serial and iterative. This
delays the trial process and increases the burdens on investigators. The committee recommends that HHS lead a transagency effort to streamline and harmonize government oversight and regulation of cancer clinical trials. For example, all review bodies should distinguish between major review concerns (regarding patient safety and critical scientific flaws, which must be addressed) and minor concerns (which should be considered, but are not obligatory). Also, NCI should coordinate with FDA for the review and oversight of trials involving an investigational new drug or investigational device exemption to eliminate iterative review steps. Harmonizing, coordinating, and streamlining the oversight and review processes could significantly improve the speed and efficiency of clinical trials, ease the burden on investigators, and better protect patients.
Changes within individual agencies would also be beneficial. For example, FDA may have multiple centers with jurisdiction over trials testing combination products, such as drug-biologic combinations or therapeutic-diagnostic combinations. Thus, FDA should establish a coordinated Cancer Program across its centers that regulate oncology products to reduce the conflicting expectations that may arise when sponsors seek approval through multiple centers. FDA committed in principle to the formation of such a cancer program in 2004, but it has yet to follow through on that commitment. In addition, FDA should update its regulatory guidelines for the minimum data required to establish the safety and efficacy of experimental therapies (including combinations of products) and eliminate requirements for nonessential data, particularly for supplemental new drug and biologic license applications. Defining a core set of data elements, along with guidance on how those elements could be modified under certain circumstances, would speed the FDA review process and lead to greater uniformity in data requirements. Eliminating unnecessary and onerous data requirements would also conserve resources and, in particular, encourage the testing of more combination therapies.
A major challenge unique to large multi-institutional studies is the involvement of many local IRBs. Regulatory language is often complex and subject to interpretation, so decisions by IRBs can be highly variable, which can cause delays and lead to protocol variations at different sites. Local IRBs can defer to a central IRB (CIRB), but in practice, many institutions are reluctant to rely on decisions made by the NCI CIRB, in large part because of concerns about being held accountable for the decisions that the CIRB makes. The committee recommends that OHRP develop guidance that clearly establishes the accountability of the NCI CIRB, to encourage its wider use and acceptance by local institutions. This would increase the efficiency and reduce the costs of clinical trials, as well as increase consistency in patient protections across sites. Another way to better protect patients, through improved patient communication and decision making,
would be to develop federal guidance that allows the use of a shortened and simplified summary to enhance the provision of informed consent, as consent forms have become very lengthy and complex. Federal oversight should also be more flexible in allowing minor amendments to the protocol or consent form, to expedite the chain of reapprovals.
The progress of clinical oncology research is also impeded by numerous obstacles that are well known but have eluded solution, despite decades of discussion and multiple reports by review panels. A novel approach is required to solve these intractable problems, with application of the best minds in multiple disciplines. The potential for impact can often be a stronger motivator of good science than money per se, and competition can foster rapid and innovative solutions, much as occurred with the sequencing of the human genome. Thus, NCI should implement a highly visible grand challenge competition to engage experts in cancer and noncancer fields (e.g., engineering, social science, management, and marketing) and to reward significant innovation leading to increased efficiency in clinical trials processes. Models for the development of such grand challenges exist and have shown some successes. A recent report on such incentive prizes, which spur innovation by tapping into competitive and entrepreneurial spirits rather than directly funding research, concluded that they are unique and powerful tools that can produce change not only by identifying new levels of excellence and by encouraging specific innovations but also by changing wider perceptions, improving the performance of communities of problem solvers, building the skills of individuals, and mobilizing new talent or capital.
Cancer clinical trials often necessitate effective collaboration among diverse stakeholders, but there are numerous challenges to achieving such collaborations. Thus, NCI should take steps to facilitate more collaboration among the various stakeholders in cancer clinical trials. For example, negotiations to reach contract and licensing agreements to transfer or share materials, data, and intellectual property (IP) are complex and can cause lengthy and costly delays in the launch of clinical trials. Pharmaceutical companies in particular may be reluctant to share IP or data and patient samples with academic collaborators and may require IP rights that are unacceptable to collaborators. However, valuable insights and discoveries may be lost and progress toward clinical advances may be slowed if important data or samples are withheld from collaborating institutions that could explore novel, additional hypotheses with those resources. Thus, NCI should develop standard licensing language and contract templates for material and data transfer and for intellectual property ownership in biospecimen-based studies and trials that combine intellectual property from multiple sources.
It is also necessary to examine the contributions of and interactions between NCI and the Cooperative Groups in developing and implementing
large-scale cancer clinical trials. NCI’s coordination role within the current environment is quite complex and challenging, and inefficient interactions between NCI and the Groups contribute to delays in the system. To improve the speed of advances in oncology care, streamlined processes are needed for the prioritization, selection, and support of trials and for rapid patient accrual after a trial is launched. Thus, NCI should reevaluate its role in the clinical trials system. NCI has crucial responsibilities in the clinical trials system, for example, by providing a framework for both cooperatively and competitively organized interactions between Groups and their committees and in the management of IND sponsorship. Helping Group investigators gain access to more experimental therapeutic agents for high-priority trials by filing an IND application would reduce the time that the Groups spend in negotiations with industry to acquire agents before a trial is launched and also ensure the availability of the agent during the trial. NCI should file more investigational new drug applications for agents to be tested in high-priority trials and provide a leadership role to ensure the success of those studies.
However, in cases in which NCI does not hold the investigational new drug application, the primary focus of NCI should be on supporting high-priority trials, with less emphasis on oversight of the selection and implementation process and greater focus on facilitating the launch and execution of the trial. Since the funding mechanism for the Cooperative Group Program was changed from grants to cooperative agreements in 1980, NCI has exercised oversight of every aspect of the clinical trials process, including trial selection, protocol development, and trial operations. But this is not the best use of NCI’s limited funds. A Cooperative Group whose trial concept has scored well in peer review should be able to request assistance from NCI as needed to develop and implement the protocol, but it should have the necessary expertise to develop and run the trial without extensive oversight by NCI, which can delay the process. Specific research projects funded through other grant mechanisms on the basis of peer review (the bulk of NCI extramural funding) are not subjected to such oversight.
The role of the steering committees should also be reevaluated. A major challenge that the Cooperative Group Program faces is the prioritization and selection of trial concepts before a trial is launched. The effective prioritization and selection of trial concepts is critical to ensure that limited public funds are used in ways that are likely to have the greatest impact on patient care. However, the disease-specific steering committees set up in response to the CTWG report do not appear to have fully achieved that goal. The approval rate for trial concepts has not changed substantially since implementation of the steering committees, but the length of concept proposals has increased considerably, making the review process more arduous. Moreover, multiple layers of review still slow the process, and
trial concepts are still not ranked against each other with consistent criteria, as is usually done in peer review. Steering committees review and vote up or down on trial concepts as they are submitted, and, unlike in other NCI peer review groups, NCI staff actively participate in the review process. In addition, there is little interaction among the disease-specific steering committees to determine trial priorities across disease categories, although the steering committees are charged with “guiding the development of strategic priorities.” The committee recommends that steering committees administered by NCI operate independently of NCI staff. These committees should focus on the prioritization of clinical needs and scientific opportunities, selection of trial concepts proposed by the Cooperative Group disease site committees, and facilitation of communication and cooperation among the Groups. In addition, the process of peer review for trial concepts should be strengthened and streamlined and should entail the evaluation of concise proposals (including the intended statistical design) that are ranked against each other. The emphasis should be on scientific strength and opportunity, innovation, feasibility, and the importance to improving patient outcomes. Launching only the highest-ranked trials would improve quality, speed advances, and ensure that patients are enrolling in the most meaningful and potentially beneficial trials.
Prioritization alone, however, is not sufficient. At present, only about 60 percent of cancer clinical trials supported by NCI are completed and published. This represents a tremendous waste of very limited resources, including time, effort, and money. Once a priority trial has been launched, resources and effective procedures are needed to ensure rapid patient accrual and completion of the study.
The NCI Clinical Trials Cooperative Group Program has been chronically underfunded for the work that it performs, and current funding does not cover the cost of the clinical trials undertaken. For the past 3 years, the annual budget for the Program has been held at about $145 million, but, adjusted for inflation, it has declined in real terms to less than the 1999 funding level of $119 million. Despite this decrease in funding, the Cooperative Group Program has maintained patient accrual, with several hundred clinical trials ongoing at any given point. This level of funding is simply not sufficient to support the number of trials that the Groups undertake. As a result, the Cooperative Group Program is highly dependent on the voluntary efforts of participating investigators and on supplemental funding from other sources, such as foundations, the pharmaceutical industry, and the institutional contributions of Cooperative Group members. Especially in light of the new focus on targeted therapy and personalized medicine, which raises the complexity and cost of clinical trials, the Cooperative Group funding process is becoming increasingly unsustainable.
High-priority trials must be adequately funded to efficiently and effectively attain results that can move the field forward. NCI has an obligation to adequately fund trials identified as being of high priority. Thus, NCI should allocate a larger portion of its research portfolio to the Clinical Trials Cooperative Group Program, both to ensure the effective translation of discoveries made with public funding into improved clinical care and to provide the Program with sufficient resources to achieve its unique mission. The allocation of NCI funds among the competing needs of its various programs is a major challenge for the NCI director, who must take many factors into consideration. Greater input from the broad expertise and experience of external advisory boards would be helpful to ensure the most rational distribution of funds across the major NCI programs, in light of such factors as scientific opportunity and clinical need. External advisory boards, such as the National Cancer Advisory Board and the Board of Scientific Advisors, should have a greater role in advising NCI on how it allocates its funds to support a national clinical trials program. These high-level boards should not be involved in the oversight of individual trials or in concept review, which would further slow the process, but rather, they should have a greater influence on how much funding is allocated to the overall Cooperative Group Program.
Given the limits of the NCI budget, the total number of NCI-funded trials undertaken by the Cooperative Groups should be reduced to a quantity that can be adequately supported, to ensure sufficient funding for high-priority trials. Compromising the science to launch more trials than the available funding can support is detrimental to progress. However, even in the absence of a substantial increase in the overall funding of the Program, the funds saved by launching fewer but higher-priority trials could be allocated to raise the per case reimbursement rate for trial sites, which has been set at $2,000 since 1999, well below the estimated median cost per patient. The many duties required of clinicians and other key research staff to participate in clinical trials are costly in terms of both time and resources. These voluntary contributions constitute a substantial value and strength of the Program. However, when the discrepancy between the per case reimbursement and the actual cost of participation is excessive, as it is now, it becomes a major disincentive to participation. The existing system also often does not provide the resources required to thoroughly characterize each patient’s tumor and carefully match that profile to targeted therapeutics. Biomedical imaging and other biomarker tests are increasingly becoming integral components of modern cancer clinical trials, but supplemental funding for these tests must be obtained by the Cooperative Groups through other support mechanisms. Thus, NCI should increase the per case reimbursement and adequately fund highly ranked trials to cover the costs
of the trial, including the costs of biomedical imaging and other biomarker tests that are integral to the trial design.
Given the limited funding capacity of NCI, it would also be beneficial to leverage the resources of industry to support the work of the Cooperative Groups in a transparent way to benefit patients, for example, in comparison trials or for secondary indications. Two recent reports from PCAST acknowledge the importance and value of strengthening public-private collaborations to enhance innovation, particularly for discovery and translational research in personalized medicine. However, industry funding for Cooperative Group trials has been limited for a variety of reasons, including industry’s concern about the inherent inefficiencies in the Program and the Groups’ concern about maintaining independence in study design and execution. These concerns may contribute to the increasing tendency of pharmaceutical companies to conduct trials in other countries.
Thus, NCI should facilitate the creation of more public-private partnerships and precompetitive consortia, guided in part by successful models. NCI should also facilitate the development of appropriate hybrid funding models, in which NCI and industry support clearly defined components of trials that are of mutual interest. Commercial firms might be more interested in collaborations with the Cooperative Groups if the review and operational procedures of the Program were streamlined, as recommended in this report. However, novel hybrid funding mechanisms, as well as new efforts to establish public-private partnerships and precompetitive consortia, would further aid progress toward effective collaboration, to the benefit of patients, who desire access to new and promising cancer therapies. Maintaining a critical mass of clinical trials in the United States via appropriate collaborations is important to ensure that patients in this country gain access to promising therapies as they develop, that trials address questions and generate data that are relevant and meaningful to patients in the United States, and that the nation retains a sufficient number of properly trained clinical trial specialists.
AACI (Association of American Cancer Institutes). 2009. Association of American Cancer Institutes. http://www.aaci-cancer.org/ (accessed December 28, 2009).
AAMC (Association of American Medical Colleges). 2006. National Conference on Alternative IRB Models: Optimizing Human Subject Protection. http://www.aamc.org/research/irbreview/irbconf06rpt.pdf (accessed April 7, 2009).
AAMC. 2007a. Universal Use of Short and Readable Informed Consent Documents: How Do We Get There? http://www.aamc.org/research/clinicalresearch/hdickler-mtgsumrpt53007.pdf (accessed December 23, 2009).
AAMC. 2007b. Universal Use of Short and Readable Informed Consent Documents: How Do We Get There? Summary of Strategic Planning Meeting, May 30, 2007. Washington, DC: Association of American Medical Colleges.
Abrams, J. 2008a. NCI’s Central Institutional Review Board. Presented at the Director’s Consumer Liaison Group Meeting, Bethesda, MD.
Abrams, J. 2008b. Organization of the NCI Clinical Trials System. Presentation to the National Cancer Policy Forum Workshop on Multi-Center Phase III Clinical Trials and the NCI Cooperative Groups, July 1, 2008, Washington, DC.
Abrams, J., and M. Mooney. 2008. Memorandum on OHRP Regulations on Changes in Clinical Trial Informed Consent Documents and Continued Enrollment of New Participants. National Cancer Institute, Bethesda, MD, March 20.
Abrams, J., R. Erwin, G. Fyfe, R. L. Schilsky, and R. Temple. 2009. Data Submission Standards and Evidence Requirements. Presented at the Conference on Clinical Cancer Research, Panel 1. Engelberg Center for Health Care Reform, Brookings Institution, Washington, DC.
ACS CAN (American Cancer Society Cancer Action Network). 2009. Barriers to Provider Participation in Clinical Cancer Trials: Potential Policy Solutions (draft). Washington, DC: American Cancer Society Cancer Action Network.
Adler, J. 2009. NCI’s CIRB: Streamlining IRB Processes. Presentation to Cancer and Leukemia Group B Clinical Research Associates. http://www.calgb.org/Public/meetings/presentations/2009/summer_group/cra_cont_ed/03a_CIRB-Presentation_062009.pdf (accessed December 23, 2009).
AHRQ (Agency for Healthcare Research and Quality). 2009. The AHRQ Informed Consent and Authorization Toolkit for Minimal Risk Research. http://www.ahrq.gov/fund/informedconsent/ (accessed December 23, 2009).
Amit, O., W. Bushnell, L. Dodd, R. Pazdur, N. Roach, and D. Sargent. 2009. Blinded Independent Central Review of PFS Endpoint. Presented at the Conference on Clinical Cancer Research, Panel 2. Engelberg Center for Health Care Reform, Brookings Institution, Washington, DC.
Ansher, S., J. Abrams, and J. Cristofaro. 2009. Issues Related to the Revision of the IP Option in DCTD Sponsored Clinical Trials. Presented at the 9th Clinical Trials Advisory Committee Meeting, Bethesda, MD.
Barker, A. D., C. C. Sigman, G. J. Kelloff, N. M. Hylton, D. A. Berry, and L. J. Esserman. 2009. I-SPY 2: An adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clinical Pharmacology and Therapeutics 86(1):97–100.
Beecher, H. K. 1966. Ethics and clinical research. New England Journal of Medicine 274(24):1354–1360.
Benowitz, S. 2000. Children’s Oncology Group looks to increase efficiency, numbers in clinical trials. Journal of the National Cancer Institute 92(23):1876–1878.
Bent, S., A. Padula, and A. L. Avins. 2006. Brief communication: Better ways to question patients about adverse medical events: A randomized, controlled trial. Annals of Internal Medicine 144(4):257–261.
Blayney, D. W. 2009. Results of Cooperative Group Site Poll. Presented at the NCI National Cancer Advisory Board Meeting, September, Bethesda, MD.
Breese, S., W. Burman, C. Rietmeijer, and D. Lezotte. 2004. The Health Insurance Portability and Accountability Act and the informed consent process. Annals of Internal Medicine 141(11):897–898.
Bressler, L. R., and R. L. Schilsky. 2008. Collaboration between Cooperative Groups and industry. Journal of Oncology Practice 4(3):140–141.
Burman, W., S. Breese, N. Weis, J. Bock, A. Vernon, and Tuberculosis Trials Consortium. 2003. The effects of a local review on informed consent documents from a multicenter clinical trials consortium. Controlled Clinical Trials 24(3):245–255.
caBIG (Cancer Biomedical Informatics Grid). 2007. Informed Consent Standardization and Simplification Projects. http://docs.google.com/gview?a=v&q=cache:VXz9ZL7sGnIJ:https://cabig-kc.nci.nih.gov/DSIC/uploaded_files/b/bb/Informed_Consent_Simplification_Projects.pdf+cabig,+informed+consent+standardization&hl=en&gl=us (accessed December 28, 2009).
Campbell, F. A., B. D. Goldman, M. L. Boccia, and M. Skinner. 2004. The effect of format modifications and reading comprehension on recall of informed consent information by low-income parents: A comparison of print, video, and computer-based presentations. Patient Education and Counseling 53(2):205–216.
CCCG (Coalition of Cancer Cooperative Groups). 2007. Informed Cancer Patient Consent. http://www.cancertrialshelp.org/Icare_content/icMainContent.aspx?intAppMode=10 (accessed December 28, 2009).
C-Change. 2005. A Guidance Document for Implementing Effective Cancer Clinical Trials: Version 1.2. Washington, DC: C-Change.
C-Change and Coalition of Cancer Cooperative Groups. 2006. Enhancing Cancer Treatment Through Improved Understanding of the Critical Components, Economics and Barriers of Cancer Clinical Trials. Washington, DC: C-Change; Philadelphia, PA: Coalition of Cancer Cooperative Groups.
C-Change and Coalition of Cancer Cooperative Groups. 2007. The Elements of Success: Conducting Cancer Clinical Trials: A Guide. Washington, DC: C-Change; Philadelphia, PA: Coalition of Cancer Cooperative Groups.
CEO Roundtable on Cancer and NCI. 2008. Proposed Standardized Harmonized Clauses for Clinical Trial Agreements. Rockville, MD: CEO Roundtable on Cancer and NCI.
Chase, R. B., and D. A. Tansik. 1983. The customer contact model for organizational design. Management Science 29(9):1037–1050.
Cheng, S., M. Dietrich, S. Finnigan, A. Sandler, J. Crites, L. Ferranti, A. Wu, and D. Dilts. 2009. A sense of urgency: Evaluating the link between clinical trial development time and the accrual performance of CTEP-sponsored studies. Journal of Clinical Oncology 2009 ASCO Annual Meeting Proceedings 27(18 Suppl.): CRA6509.
Christel, M. 2009. More Muscle Needed for Regulatory Science. http://blog.rddirections.com/index.php/2009/09/17/more-muscle-needed-for-regulatory-science (accessed November 3, 2009).
CIBMTR (Center for International Blood and Marrow Transplant Research). 2008. Center for International Blood and Marrow Transplant Research. http://www.cibmtr.org (accessed December 8, 2009).
CMRHC (Center for Management Research in Healthcare). 2009. Center for Management Research in Healthcare. http://www.cmrhc.org/ (accessed December 28, 2009).
Coletti, A. S., P. Heagerty, A. R. Sheon, M. Gross, B. A. Koblin, D. Metzger, G. R. Seage, and International Conference on AIDS. 2003. Randomized controlled evaluation of a prototype informed consent process for HIV vaccine efficacy trials. Journal of Acquired Immune Deficiency Syndromes 32(2):161–169.
CTAC (Clinical Trials Advisory Committee). 2008. 5th Clinical Trials Advisory Committee Meeting. http://deainfo.nci.nih.gov/advisory/ctac/0608/25jun08mins.pdf (accessed December 28, 2009).
CTEP (Cancer Therapy Evaluation Program). 1996. Clinical Trials Cooperative Group Program Guidelines. http://ctep.cancer.gov/resources/clinical/guidelines1-3.html (accessed November 19, 2008).
CTSA (Clinical and Translational Award) Network. 2010. Clinical & Translational Science Awards. http://www.ctsaweb.org/ (accessed February 25, 2010).
CTSU (Clinical Trial Service Unit & Epidemiological Studies Unit). 2010. Sensible Guidelines for the Conduct of Clinical Trials. http://www.ctsu.ox.ac.uk/projects/sg (accessed February 25, 2010).
CTTI (Clinical Trials Transformation Initiative). 2009. Clinical Trials Transformation Initiative. https://www.trialstransformation.org/ (accessed April 1, 2009).
Curt, G. 2009. Step change in safe harbors: Public-private partnerships. The Oncologist 14(4):308–310.
Dare, L., and A. Reeler. 2005. Health systems financing: Putting together the “back office.” British Medical Journal 331(7519):759–762.
Davis, K. E. 2009. What-if: Back Office Consolidation. http://18.104.22.168/search?q=cache:yOgvCqTeMWsJ:advancingthenonprofit.blogspot.com/2009/10/nonprofit-what-if-back-office.html+%22back+office+consolidation%22&cd=2&hl=en&ct=clnk&gl=us (accessed January 7, 2010).
Dilts, D. 2008. CTEP/CIRB Process Flow and Timing Study. Presented at the National Cancer Policy Forum Workshop on Multi-Center Phase III Clinical Trials and NCI Cooperative Groups, July 1, 2008, Washington, DC.
Dilts, D. M., and A. B. Sandler. 2006. Invisible barriers to clinical trials: The impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. Journal of Clinical Oncology 24(28):4545–4552.
Dilts, D. M., and A. B. Sandler. 2007. In reply to “Barriers to clinical trials vary according to the type of trial and the institution.” Journal of Clinical Oncology 25(12):1634.
Dilts, D. M., A. B. Sandler, M. Baker, S. K. Cheung, S. L. George, K. S. Karas, S. McGuire, G. S. Menon, J. Reusch, D. Sawyer, M. Scoggins, A. Wu, K. Zhou, and R. L. Schilsky. 2006. Processes to activate phase III clinical trials in a cooperative oncology group: The case of Cancer and Leukemia Group B. Journal of Clinical Oncology 24(28):4553–4557.
Dilts, D. M., A. B. Sandler, S. Cheng, J. Crites, L. Ferranti, A. Wu, R. Gray, J. MacDonald, D. Marinucci, and R. Comis. 2008. Development of clinical trials in a cooperative group setting: The Eastern Cooperative Oncology Group. Clinical Cancer Research 14:3427–3433.
Dilts, D. M., A. B. Sandler, S. K. Cheng, J. S. Crites, L. B. Ferranti, A. Y. Wu, S. Finnigan, S. Friedman, M. Mooney, and J. Abrams. 2009. Steps and time to process clinical trials at the Cancer Therapy Evaluation Program. Journal of Clinical Oncology 27(11):1761–1766.
Director’s Consumer Liaison Group. 2008. NCI’s Central Institutional Review Board. Presentation at the Director’s Consumer Liaison Group Meeting, October 14, 2008.
Dodd, L. E., E. L. Korn, B. Freidlin, C. C. Jaffe, L. V. Rubinstein, J. Dancey, and M. M. Mooney. 2008. Blinded independent central review of progression-free survival in phase III clinical trials: Important design element or unnecessary expense? Journal of Clinical Oncology 26(22):3791–3796.
DOJ (U.S. Department of Justice). 2008. Response to the CEO Roundtable on Cancer’s Request for Business Review Letter. http://www.justice.gov/atr/public/busreview/237311.htm (accessed December 28, 2009).
Doroshow, J. 2008. Restructuring the National Cancer Clinical Trials Enterprise: Institute of Medicine Update. Presented to the Committee on Cancer Clinical Trials and the NCI Cooperative Group Program, December 16, 2008, Washington, DC.
Doroshow, J., and G. N. Hortobagyi. 2009. Operational Efficiency Working Group Clinical Trials Advisory Committee Report. Bethesda, MD: National Cancer Institute.
Dresden, G. M., and M. A. Levitt. 2001. Modifying a standard industry clinical trial consent form improves patient information retention as part of the informed consent process. Academic Emergency Medicine 8(3):246–252.
Emanuel, E. J., L. E. Schnipper, D. Y. Kamin, J. Levinson, and A. S. Lichter. 2003. The costs of conducting clinical research. Journal of Clinical Oncology 21(22):4145–4150.
Epstein, D. 2009. Vision and will: The future of the FDA. The Oncologist 14(4):317–319.
Epstein, L. C., and L. Lasagna. 1969. Obtaining informed consent: Form or substance. Archives of Internal Medicine 123(6):682–688.
FDA (Food and Drug Administration). 2001. Guidance for Industry: Cancer Drug and Biological Products—Clinical Data in Marketing Applications. http://www.fda.gov/CbER/gdlns/canclin.htm (accessed April 7, 2009).
FDA. 2004. FDA to Establish New Cancer Office and Program: Changes Designed to Improve Efficiency and Consistency of Cancer Product Reviews. http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/2004/ucm108326.htm (accessed December 28, 2009).
FDA. 2006. Guidance for Industry: Using a Centralized IRB Review Process in Multicenter Clinical Trials. http://www.fda.gov/RegulatoryInformation/Guidances/ucm127004.htm (accessed December 23, 2009).
FDA. 2009. U.S. Food and Drug Administration. http://www.fda.gov (accessed December 28, 2009).
FNIH (Foundation for the National Institutes of Health). 2009. Foundation for the National Institutes of Health: About Us. http://www.fnih.org/index.php?option=com_content&task=section&id=6&Itemid=37 (accessed December 28, 2009).
Glickman, S. W., J. G. McHutchison, E. D. Peterson, C. B. Cairns, R. A. Harrington, R. M. Califf, and K. A. Schulman. 2009. Ethical and scientific implications of the globalization of clinical research. New England Journal of Medicine 360(8):816–823.
Gold, J. L., and C. S. Dewa. 2005. Institutional review boards and multisite studies in health services research: Is there a better way? Health Services Research 40(1):291–307.
Goldberg, K. B. 2008. New policy on minor changes in trials requires halt in patient enrollment. The Cancer Letter 34(16):1–4.
Goldberg, K. B. 2009. Advisors approve increase to add new CCOP sites. The Cancer Letter 35(42):6.
Grant, B. 2009. The Scientist: NewsBlog: More Regulatory Science: FDA Chief. http://www.the-scientist.com/blog/print/55984/ (accessed December 28, 2009).
Greene, S. M., and A. M. Geiger. 2006. A review finds that multicenter studies face substantial challenges but strategies exist to achieve institutional review board approval. Journal of Clinical Epidemiology 59(8):784–790.
Grosser, J. M. 2008. Sustaining a Non-Profit in Tough Economic Times. Restructuring: A Possible Solution. Philadelphia, PA: Philadelphia Chamber of Commerce.
Hautala, J. 2008. Analysis of Cooperative Group Clinical Trial Costs. Paper presented to the Clinical Trials and Translational Research Advisory Committee, Bethesda, MD.
HEW (U.S. Department of Health, Education, and Welfare). 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. http://ohsr.od.nih.gov/guidelines/belmont.html (accessed December 23, 2009).
HHS (U.S. Department of Health and Human Services). 1998. Institutional Review Boards: A Time for Reform. Washington, DC: Office of Inspector General, U.S. Department of Health and Human Services.
HHS. 2007. Secretary’s Advisory Committee on Human Research Protections (SACHRP). http://www.hhs.gov/ohrp/sachrp/mtgings/mtg07-07/present.htm (accessed December 28, 2009).
HHS. 2009. Office for Human Research Protections (OHRP). http://www.hhs.gov/ohrp/policy (accessed December 28, 2009).
ICH (International Conference on Harmonisation). 1996. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Harmonised Tripartite Guideline. Guideline for Good Clinical Practice E6. http://www.ich.org/LOB/media/MEDIA482.pdf (accessed October 29, 2009).
Ioannidis, J. P. A., C. D. Mulrow, and S. N. Goodman. 2006. Adverse events: The more you search, the more you find. Annals of Internal Medicine 144(4):298–300.
IOM (Institute of Medicine). 2002. Responsible Research: A Systems Approach to Protecting Research Participants. Washington, DC: The National Academies Press.
IOM. 2003. Large-Scale Biomedical Research: Exploring Strategies for Future Research. Washington, DC: The National Academies Press.
IOM. 2007. Cancer Biomarkers: The Promises and Challenges of Improving Detection and Treatment. Washington, DC: The National Academies Press.
IOM. 2008. Improving the Quality of Cancer Clinical Trials: Workshop Summary. Washington, DC: The National Academies Press.
IOM. 2009a. Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research. Washington, DC: The National Academies Press.
IOM. 2009b. Breakthrough Business Models: Drug Development for Rare and Neglected Diseases and Individualized Therapies. Workshop summary. Washington, DC: The National Academies Press.
IOM. 2009c. Multi-Center Phase III Clinical Trials and NCI Cooperative Groups: Workshop Summary. Washington, DC: The National Academies Press.
Jamrozik, K. 2000. The case for a new system for oversight of research on human subjects. Journal of Medical Ethics 26(5):334–339.
Joffe, S., E. F. Cook, P. D. Cleary, J. W. Clark, and J. C. Weeks. 2001. Quality of informed consent in cancer clinical trials: A cross-sectional survey. The Lancet 358(9295):1772–1777.
Kaufer, D. S., E. R. Steinberg, and S. D. Toney. 1983. Revising medical consent forms: An empirical model and test. Law, Medicine & Health Care 11(4):155–162.
Kinders, R., M. Hollingshead, S. Khin, L. Rubinstein, J. E. Tomaszewski, J. H. Doroshow, R. E. Parchment, and National Cancer Institute Phase 0 Clinical Trials Team. 2008. Preclinical modeling of a phase 0 clinical trial: Qualification of a pharmacodynamic assay of poly (ADP-ribose) polymerase in tumor biopsies of mouse xenografts. Clinical Cancer Research 14(21):6877–6885.
Kirsch, I., A. Jungeblut, L. Jenkins, and A. Kolstad. 2002. Adult Literacy in America: A First Look at the Results of the National Adult Literacy Survey, 3rd ed. Washington, DC: National Center for Education Statistics, U.S. Department of Education.
Kraus, J. R., and J. R. Marjanovic. 1995. Following private-sector lead, Fed to overhaul back office. (Federal Reserve System). American Banker, May 17. http://www.americanbanker.com/issues/160_95/-58613-1.html (accessed February 2, 2010).
Kummar, S., R. Kinders, M. E. Gutierrez, L. Rubinstein, R. E. Parchment, L. R. Phillips, J. Ji, A. Monks, J. A. Low, A. Chen, A. J. Murgo, J. Collins, S. M. Steinberg, H. Eliopoulos, V. L. Giranda, G. Gordon, L. Helman, R. Wiltrout, J. E. Tomaszewski, and J. H. Doroshow. 2009. Phase 0 clinical trial of the poly (ADP-ribose) polymerase inhibitor ABT-888 in patients with advanced malignancies. Journal of Clinical Oncology 27(16):2586–2588.
Kurzrock, R., S. Pilat, M. Bartolazzi, D. Sanders, J. Van Wart Hood, S. D. Tucker, K. Webster, M. A. Mallamaci, S. Strand, E. Babcock, and R. C. Bast, Jr. 2009. Project Zero Delay: A process for accelerating the activation of cancer clinical trials. Journal of Clinical Oncology 27(26):4433–4440.
Lacity, M. C., D. Feeny, and L. P. Willcocks. 2003. Transforming a back-office function: Lessons from BAE Systems’ experience with an enterprise partnership. MIS Quarterly Executive 2(2):86–103.
Leith, W. 2002. How to lose a billion. The Guardian, October 26, p. 34.
Loh, E. D., and R. E. Meyer. 2004. Medical schools’ attitudes and perceptions regarding the use of central institutional review boards. Academic Medicine 79(7):644–651.
LoVerde, M. E., A. V. Prochazka, and R. L. Byyny. 1989. Research consent forms: Continued unreadability and increasing length. Journal of General Internal Medicine 4(5):410–412.
Mauer, A. M., E. S. Rich, and R. L. Schilsky. 2007. The role of Cooperative Groups in cancer clinical trials. Cancer Treatment and Research 132:111–129.
McArthur, M., A. Hodges, A. Wilson, and J. Hautala. 2008. Analysis of Barriers to Acceptance of NCI Central Institutional Review Board Facilitated Review Process by Institutions Conducting NCI-Funded Clinical Trials. Final report. Washington, DC: Science and Technology Policy Institute.
McClellan, M., and J. S. Benner. 2009. Four important steps toward 21st century care for patients with cancer. The Oncologist 14(4):313–316.
McJoynt, T. A., M. A. Hirzallah, D. V. Satele, J. H. Pitzen, S. R. Alberts, and S. V. Rajkumar. 2009. Building a protocol expressway: The case of Mayo Clinic Cancer Center. Journal of Clinical Oncology 27(23):3855–3860.
McKinsey & Company. 2009. “And the winner is …” Capturing the Promise of Philanthropic Prizes. New York: McKinsey & Company.
McNeil, C. 2005. Central IRBs: Why are some institutions reluctant to sign on? Journal of the National Cancer Institute 97(13):953–955.
McWilliams, R., J. Hoover-Fong, A. Hamosh, S. Beck, T. Beaty, and G. Cutting. 2003. Problematic variation in local institutional review of a multicenter genetic epidemiology study. Journal of the American Medical Association 290(3):360–366.
MMRC (Multiple Myeloma Research Consortium). 2009. Welcome to the MMRC. http://www.themmrc.org/ (accessed December 28, 2009).
Murphy, S. 2009. Overview of the Consolidation of the Pediatric Oncology Groups. Washington, DC: Committee on Cancer Clinical Trials and the NCI Cooperative Group Program. http://www.iom.edu/~/media/Files/Activity%20Files/Disease/NCPF/2009-Coop-Groups-Study/Murphy_On_the_Merger_of_the_Pediatric_Cancer_Clinical_Trials_Cooperative_Groups.ashx (accessed March 9, 2010).
NCI (National Cancer Institute). 1997. Report of the National Cancer Institute Clinical Trials Program Review Group. http://deainfo.nci.nih.gov/ADVISORY/bsa/bsa_program/bsactprgmin.htm#8a (accessed November 19, 2008).
NCI. 2005a. President’s Cancer Panel 2004–2005 Annual Report: Translating Research into Cancer Care: Delivering on the Promise. Bethesda, MD: National Cancer Institute.
NCI. 2005b. Report of the Clinical Trials Working Group of the National Cancer Advisory Board: Restructuring the National Cancer Clinical Trials Enterprise. Bethesda, MD: National Cancer Institute.
NCI. 2006. National Cancer Institute Clinical Trials Cooperative Group Program Guidelines. Bethesda, MD: National Cancer Institute.
NCI. 2008. Biomarker, Imaging and Quality of Life Studies Funding Program. http://restructuringtrials.cancer.gov/files/BIQSFP_Announcement_12_12_08.pdf (accessed December 28, 2009).
NCI. 2009a. Clinical Trials and Translational Research Advisory Committee Meeting Minutes Menu. http://deainfo.nci.nih.gov/advisory/ctac/ctacminmenu.htm (accessed December 21, 2009).
NCI. 2009b. Community Clinical Oncology Program (CCOP). http://prevention.cancer.gov/programs-resources/programs/ccop/about (accessed December 23, 2009).
NCI. 2009c. Division of Extramural Activities: Advisory Boards and Groups. http://deainfo.nci.nih.gov/advisory/boards.htm (accessed December 28, 2009).
NCI. 2009d. NCI-Cooperative Group-Industry Relationship Guidelines. http://ctep.cancer.gov/industryCollaborations2/guidelines.htm (accessed December 28, 2009).
NCI. 2009e. NCI Trial Complexity Elements & Scoring Model (Version 1.2). http://ctep.cancer.gov/protocoldevelopment/docs/trial_complexity_elements_scoring.doc (accessed October 20, 2009).
NCI. 2009f. Restructuring the NCI Clinical Trials Enterprise. http://restructuringtrials.cancer.gov/steering/overview (accessed December 21, 2009).
Niederhuber, J. E. 2009. Facilitating patient-centered cancer research and a new era of drug discovery. The Oncologist 14(4):311–312.
NIH (National Institutes of Health). 2009. Public Private Partnership Program. http://ppp.od.nih.gov/ (accessed November 4, 2009).
Nosowsky, R., and T. J. Giordano. 2006. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule: Implications for clinical research. Annual Review of Medicine 57(1):575–590.
OHRP (Office for Human Research Protections). 2009. Advanced notice of proposed rulemaking; requests for public comment. Federal Register 74(42):9578–9583.
OHRP, NIH (National Institutes of Health), AAMC (American Association of Medical Colleges), and ASCO (American Society of Clinical Oncology). 2005. Alternative Models of IRB Review: Workshop Summary Report. http://www.dhhs.gov/ohrp/sachrp/documents/AltModIRB.pdf (accessed April 7, 2009).
Paasche-Orlow, M. K., H. A. Taylor, and F. L. Brancati. 2003. Readability standards for informed-consent forms as compared with actual readability. New England Journal of Medicine 348(8):721–726.
PCAST (President’s Council of Advisors on Science and Technology). 2008a. Priorities for Personalized Medicine. Washington, DC: Office of Science and Technology Policy.
PCAST. 2008b. University-Private Sector Research Partnerships in the Innovation Ecosystem. Washington, DC: Office of Science and Technology Policy.
Reaman, G. 2009. Children’s Oncology Group: A National/International Infrastructure for Pediatric Cancer Clinical Translational Research. Presentation to the Institute of Medicine Committee on Cancer Clinical Trials and the NCI Cooperative Group Program, April 23, 2009, Washington, DC.
Rhoades, S. A. 1998. The efficiency effects of bank mergers: An overview of case studies of nine mergers. Journal of Banking & Finance 22(3):273–291.
Ridpath, J. R., S. M. Greene, and C. J. Wiese. 2007. PRISM Readability Toolkit, 3rd ed. Seattle, WA: Group Health Center for Health Studies.
RTI International. 2007. Evaluation of the NCI Central Institutional Review Board to Improve Cancer Clinical Trials System; CIRB User Satisfaction Survey Research. Final report. Research Triangle Park, NC: RTI International.
SACHRP (Secretary’s Advisory Committee on Human Research Protections). 2005. Summary of SACHRP’s Recommendations on the HIPAA Privacy Rule. Washington, DC: Secretary’s Advisory Committee on Human Research Protections.
SACHRP. 2008. Letter to HHS Secretary, September 18. Secretary’s Advisory Committee on Human Research Protections, Washington, DC.
Schilsky, R. L., J. Abrams, J. Woodcock, G. Fyfe, and R. Erwin. 2008. Data Submissions Standards and Evidence Requirements, Conference on Clinical Cancer Research, Panel 1. Washington, DC: Brookings Institution, Engelberg Center for Health Care Reform.
Schroen, A. T., G. R. Petroni, H. Wang, B. Djulbegovic, C. L. Slingluff, X. F. Wang, R. Gray, D. J. Sargent, W. Cronin, and J. Benedetti. 2009. Challenges to accrual predictions to phase III cancer clinical trials: A survey of study chairs and lead statisticians of 248 NCI-sponsored trials. Journal of Clinical Oncology 27:15s (suppl; abstr 6562).
Sharp, S. M. 2004. Consent documents for oncology trials: Does anybody read these things? American Journal of Clinical Oncology 27(6):570–575.
Shortell, S. M., T. M. Waters, K. W. B. Clarke, and P. P. Budetti. 1998. Physicians as double agents: Maintaining trust in an era of multiple accountabilities. Journal of the American Medical Association 280(12):1102–1108.
Steensma, D. P. 2009. The ordinary miracle of cancer clinical trials. Journal of Clinical Oncology 27(11):1737–1739.
Sudore, R. L., C. S. Landefeld, B. A. Williams, D. E. Barnes, K. Lindquist, and D. Schillinger. 2006. Use of a modified informed consent process among vulnerable patients: A descriptive study. Journal of General Internal Medicine 21(8):867–873.
Taheri, P. A., D. Butz, L. C. Griffes, D. R. Morlock, and L. J. Greenfield. 2000. Physician impact on the total cost of care. Annals of Surgery 231(3):432–435.
Tait, A. R., T. Voepel-Lewis, S. Malviya, and S. J. Philipson. 2005. Improving the readability and processability of a pediatric informed consent document: Effects on parents’ understanding. Archives of Pediatric and Adolescent Medicine 159(4):347–352.
Tarnowski, K. J., D. M. Allen, C. Mayhall, and P. A. Kelly. 1990. Readability of pediatric biomedical research informed consent forms. Pediatrics 85(1):58–62.
Tufts University. 2009. Tufts Center for the Study of Drug Development. http://csdd.tufts.edu/Default.asp (accessed April 1, 2009).
Tully, J., N. Ninis, R. Booy, and R. Viner. 2000. The new system of review by multicentre research ethics committees: Prospective study. British Medical Journal 320:1179–1182.
Wagner, T., C. Murray, J. Goldberg, J. Adler, and J. Abrams. 2009. Costs and benefits of the National Cancer Institutional Review Board. Journal of Clinical Oncology. Epub ahead of print, October 19, 2009.
Waldinger, M. 2008. Cost Out. Presented at the National Cancer Policy Forum Workshop on Multi-Center Phase III Clinical Trials and NCI Cooperative Groups, July 2, 2008, Washington, DC.
Wechsler, J. 2007. Central vs. local: Rethinking IRBs. Applied Clinical Trials Online (February 1, 2007). http://appliedclinicaltrialsonline.findpharma.com/appliedclinicaltrials/article/articleDetail.jsp?id=401619 (accessed April 7, 2009).
Wickerham, L. 2009. A “Modest Proposal” and a Few Suggestions Concerning Cooperative Group Clinical Trials. Presented to the Committee on Cancer Clinical Trials and the NCI Cooperative Group Program, April 23, 2009, Washington, DC.
Woodcock, J., and R. Woosley. 2008. The FDA critical path initiative and its influence on new drug development. Annual Review of Medicine 59(1):1–12.
Yang, S. X., S. Kummar, S. M. Steinberg, A. J. Murgo, M. Gutierrez, L. Rubinstein, D. Nguyen, G. Kaur, A. P. Chen, V. L. Giranda, J. E. Tomaszewski, J. H. Doroshow, and The National Cancer Institute Phase 0 Working Group. 2009. Immunohistochemical detection of poly(ADP-ribose) polymerase inhibition by ABT-888 in patients with refractory solid tumors and lymphomas. Cancer Biology and Therapy 8(21):2004–2009.
Young, D. R., D. T. Hooker, and F. E. Freeberg. 1990. Informed consent documents: Increasing comprehension by reducing reading level. IRB: A Review of Human Subjects Research 12(3):1–5.
Yusuf, S., J. Bosch, P. J. Devereaux, R. Collins, C. Baigent, C. Granger, R. Califf, and R. Temple. 2008. Sensible guidelines for the conduct of large randomized trials. Clinical Trials 5(1):38–39.