Presubmission and Submission

Data Collection

Presented by John R. Schultz, Ph.D.

Vice President and General Manager, Neuroclinical Trials Center

University of Virginia


A central issue in data collection is how to identify relevant, high-quality data that are readily available for appropriate decision making and to do so in a cost-effective manner. In this case, the phrase "high-quality data" refers to data that can be used without further revisions or data that will produce conclusions and interpretations that are equivalent to those that would be derived from error-free data, that is, data that are accurate, reliable, and fit for use. The key to producing such data is to engineer data quality into the entire clinical trial process.

Retrieval of High-Quality Data

The factors critical to the successful retrieval of high-quality data begin far upstream from the clinical trial and affect all stages of the clinical trial, as outlined in the following sections.

Scientifically Valid Protocol

The protocol should have clear, specific objectives in the form of a testable hypothesis. There should be a well-defined target population with specific criteria for inclusion and exclusion of study subjects. The study design should be relatively simple, because complexity frequently introduces error. The protocol should include all of the relevant endpoints with an identification of primary and secondary endpoints, and a detailed schedule of the activities and observations that will be included in the study. The protocol should also address those steps taken to assure data quality.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





Comprehensive Data Management and Analysis Plan

Elements of a data management and analysis plan include an annotated case report form with instructions on how to complete and code the form, a data entry manual, and a flowchart that describes the location and custodian of the data. A list of data variables, an analysis grid, and samples of the tables and graphics used for data presentation should also be available. Additionally, an explicit statement of data quality requirements should be developed to provide a standard for auditing purposes. The plan should also address the editing and auditing procedures that will be used, the methods used to calculate derived variables and to validate study software, and the procedures for data security, system backups, and archiving.

User-Friendly Data Capture Instruments

Appearance is important for a paper form and even more important for a computer screen used for electronic data capture. The instrument should allow data to be collected in parallel with the clinical routine; check-off boxes should be used where possible, and narratives should be avoided. Units of measure should be specified. Above all, the data capture instrument should be kept as simple as possible.

Good Clinical Testing Site Selection and Training Procedures

The overriding issues in site selection are access to the target patient population and whether the site has the qualifications and expertise to meet the protocol requirements. Particularly important is the quality of the support personnel responsible for completing the case report form. One often overlooked question is whether the site under consideration has concurrent studies, and how those studies may affect its quality and capacity to conduct the proposed study.

Training of personnel at the site begins with the investigator's brochure and continues with a review of the schedule of protocol activities and instructions for completing the case report form. Investigator and study coordinator meetings are recommended, particularly when they bring together personnel from different sites who are working on the same protocol.

Defined Site-Monitoring Procedures

Monitoring is required before, during, and after the study. At study initiation, monitoring involves protocol review, drug storage and accountability, and construction of the study file. After the first patient has been processed and the first set of case report forms has been completed, the data flow is reviewed to reduce the likelihood of errors. The results of the review are shared with collaborating sites.

Attention to Study-Specific Issues

Coding of adverse events and concomitant medications is an important component of the study that requires careful planning and consideration. Laboratory procedures and data should be standardized, preferably through a central laboratory. Clinical supplies for the study should also be standardized.

Enhancing the Process

Three principal areas have the potential to enhance the quality and efficiency of this process.

Process Standardization

A great deal of literature supports the proposition that quality is achieved by the standardization of processes. In clinical trials for drugs, this means providing an integrated framework for study management, work flow, drug supply, and software development. It also means having defined approaches to the handling of medical events and to data verification and validation.

Resource Review

More attention should be given to ensuring that the necessary resources are available to carry out the study. This includes identification of key personnel, additional human resources both at and away from the site, financial commitments outlined in a budget, and even clinical supplies and equipment. The entire activity must also have a time limit.

Enabling Technologies

Electronic mail and shared databases are very effective means of linking the personnel who monitor the trial at each site. Remote data capture has enormous potential, although problems remain in the interface between data capture and data management packages. Videotape instruction is effective and helps standardize procedures across sites. Videoconferencing can also reinforce the contact between personnel at the different study sites. Document sharing and interactive voice recognition are other enabling technologies.

Data Cleanup

A major cost element is validation of the completeness, accuracy, and reliability of the data after the study, or data cleanup, particularly when the data capture form has open-ended sections for adverse events and concomitant medications. In addition, different variables might allow different quality standards. For example, primary variables that are closely related to the protocol objectives (e.g., patient identification and mortality rate) should have a much lower rate of error than secondary variables (e.g., leukocyte counts). In general, there is still a tendency to collect too many data.

Monitoring: Industry Perspective

Presented by Eleanor Segal, M.D.

Senior Director, Drug Safety and Clinical Quality Assurance, Chiron Corporation

Industry (e.g., pharmaceutical companies) monitors and audits clinical trial data for three reasons: first, to ensure the safety of the human subjects; second, to ensure that the company's investment results in a marketable product; and third, because it is required by regulatory agencies in the Code of Federal Regulations under 21 CFR 312.50:

Sponsors are responsible for selecting qualified investigators, providing them with the information they need to conduct an investigation properly, ensuring proper monitoring of the investigation(s), ensuring that the investigation(s) is conducted in accordance with the general investigational plan and protocols contained in the IND [Investigational New Drug], maintaining an effective IND with respect to the investigations, and ensuring that FDA [Food and Drug Administration] and all participating investigators are promptly informed of significant new adverse effects or risks with respect to the drug. Additional specific responsibilities of sponsors are described elsewhere in this part.
Although each company may structure its activities in different ways, responsibility for monitoring is typically borne by the following five principal groups:

1.   The clinical research department includes medical monitors, often M.D.s, with a considerable amount of clinical experience. An even greater burden of monitoring, however, falls to the clinical research associates, who go out into the field to make sure that sites are properly initiated and that data are collected appropriately.

2.   Most companies also have a separate clinical quality assurance department that conducts in-house file audits (to ensure that protocols are written correctly) and site and investigator audits (to match case report forms with patient charts), and that reviews informed-consent forms. Another function of the clinical quality assurance department is facilitation of audits performed by regulatory agencies.

3.   Members of the biostatistics and data management group, which is usually separate from the clinical research group, monitor all of the data received from the field and investigate emerging trends that might affect safety.

4.   The drug safety department collects data on serious adverse effects. In many cases, clinical trial drug safety is handled separately from postmarketing drug safety.

5.   The regulatory affairs group compiles expedited serious adverse effects reports and sends them to the appropriate U.S. and international regulatory agencies within 15 calendar days after learning about the event.

Organizational structure, procedures, and data forms vary among pharmaceutical companies. For example, each company is responsible for assessing the relationship of a serious adverse event to the experimental drug; some companies record five to six subjective observations, whereas others simply use a "yes" or "no." These processes generate an enormous volume of data: a single draft serious adverse effects form may comprise five or more pages, and reconciliation of the data on this form with those on case report forms requires hours of effort by a study coordinator. It is reasonable to ask whether all of these data are necessary and relevant; the more data collected, the greater the risk of error and the harder the task of reconciling the separate data streams.

Although many companies profess to use data automation and electronic reconciliation techniques, the complexity of the data requires trained personnel to compare the data manually on separate printouts. To assure data quality and validity, regulatory decision making relies on the careful monitoring and review of data collection and data processing.

Monitoring: National Cancer Institute Perspective

Presented by Michaele C. Christian, M.D.

Associate Director, Cancer Therapy Evaluation Program, Division of Cancer Treatment, National Cancer Institute

The National Cancer Institute's (NCI's) Cancer Therapy Evaluation Program sponsors up to 200 Investigational New Drug applications for investigational agents and has up to 1,000 treatment trials, accruing over 20,000 new patients each year into studies at thousands of sites with thousands of investigators. In 1978, NCI filed quality control procedures with the Food and Drug Administration (FDA) that included guidelines for monitoring Phase 1 and 2 drug trials, a responsibility that is shared with a dozen multi-institutional Cooperative Clinical Trials Groups (CCTGs). These guidelines were revised and refined in 1982 and again in 1995. The goal of the program is to prevent data problems and to detect them when they occur. Its components include training, study monitoring, data safety and monitoring committees, and on-site auditing.

Audits serve as an educational tool because of the interactions between the auditors and clinical investigators. Each participating institution is audited at least once every 3 years, although Phase 1 trials in which patients have increased risks are audited every 3 months. Institutions are notified 3 to 6 months before an audit, and a list of the protocols and patient records to be audited is provided 2 to 4 weeks beforehand. Each audit covers at least three protocols and 10 percent of the patients who have accrued since the previous audit. The audit assesses Institutional Review Board approval, consent forms, and compliance and accuracy with regard to eligibility, treatment administration, response assessment, and toxicity reporting. The goals are to ensure the quality and accuracy of the data, compliance with federal regulations, and protection of the rights and welfare of human subjects. A preliminary report is issued within 24 hours of completion of the audit, and the final report is submitted within 70 days. NCI has established a computerized audit database to track the thousands of audits conducted by CCTGs.
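The stated sampling rule (at least three protocols and 10 percent of the patients accrued since the previous audit) translates directly into a small helper. The sketch below is purely illustrative of how such a rule composes; it is not NCI's actual procedure, and the function name and tie-breaking choices are assumptions:

```python
import math

def audit_sample(protocols_open, patients_accrued):
    """Illustrative version of the stated rule: audit at least three
    protocols (or all of them, if fewer are open) and 10 percent of the
    patients accrued since the previous audit, rounded up."""
    protocols_to_audit = min(protocols_open, 3)
    patients_to_audit = math.ceil(0.10 * patients_accrued)
    return protocols_to_audit, patients_to_audit

# A large institution: 3 protocols and 25 of 250 accrued patients.
large = audit_sample(protocols_open=12, patients_accrued=250)
# A small institution: both open protocols, and at least 1 patient record.
small = audit_sample(protocols_open=2, patients_accrued=7)
```

Rounding up ensures that even a site with a handful of accrued patients contributes at least one record to the audit.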
Audits are rated "acceptable," meaning that no deficiencies or only a few minor deficiencies were detected; "acceptable needs follow-up," meaning that there were multiple minor deficiencies or a major deficiency that was not corrected before the audit; or "unacceptable." An assessment other than "acceptable" requires a written explanation and submission of a corrective plan to the CCTG and NCI. These rules, and the standards for major and lesser deficiencies in each audit focus area, were included in the 1995 guidelines. From June 1995 through March 1998 there were 2,057 audits involving 675 protocols and 17,668 patient records. During that period, 6 percent of the institutions were rated "exceptional," 46 percent "acceptable," 37 percent "acceptable needs follow-up," and 11 percent "unacceptable." Compared with the period from 1985 to 1994, the total number of deficiencies increased slightly, but most of the difference was in lesser deficiencies. The increase may be due to greater and more consistent scrutiny, increased oversight by the Office for Protection from Research Risks, or increased demands on Institutional Review Boards.

CCTGs are spending about $1.3 million, or 1.5 percent of their budgets, on audits. These expenditures do not reflect the full costs of auditing, however, because the auditors are volunteers and the sum does not include the costs for the sites to prepare for the audit. In addition, NCI sends auditors on CCTG audits and contracts with a clinical trials monitoring service to audit its cancer centers and selected grantees. NCI spends an additional $575,000 on CCTG audits, $193,000 on auditing Phase 1 clinical trials, and $104,000 on audits of Phase 2 and selected Phase 3 clinical trials.

NCI is making a major investment in electronic data reporting and has already instituted World Wide Web-based reporting for Phase 2 clinical trials and adverse event reporting. The final system may take one of several forms, but a primary objective is to reduce errors by removing the need to reenter data that are already housed in another repository. One study across five clinical trials found that shifting from manual data entry to remote data entry with computerized edit checks can reduce error rates from 80,000 per million to only 200 per million and can cut the time to database closure from 22 weeks to 10 days.

Industry and NCI have different goals and procedures for data monitoring. Industry wants to get the drug to market in the shortest possible time, so it audits the individual trial and may not tolerate any errors with regard to efficacy or safety. NCI, on the other hand, wants to identify effective cancer therapies; it audits the institution in order to detect and prevent problems and to educate and train clinical investigators as a future resource for the conduct of clinical trials.

Although FDA regulations and guidance documents allow flexibility in the design of data management procedures, the requirements can become more rigid when they are standardized across a large institution or company. Industry, in particular, often has too much invested in a trial and is unwilling to take a chance on new or innovative approaches to data quality or site monitoring. In response, FDA is preparing final guidance for remote data entry.
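The error-rate reduction attributed above to remote data entry with computerized edit checks comes from catching problems at the moment of entry rather than at cleanup. A minimal sketch of such checks, in which the field names, plausible ranges, and rules are hypothetical examples rather than any cited system's specification:

```python
# Sketch of field-level edit checks of the kind used in remote data entry.
# Field names, ranges, and rules are hypothetical, for illustration only.

def edit_check(record):
    """Return a list of human-readable problems found in one CRF record."""
    problems = []
    # Required-field check: a missing value is flagged at entry, not months
    # later during data cleanup.
    for field in ("patient_id", "visit_date", "systolic_bp"):
        if record.get(field) in (None, ""):
            problems.append(f"{field}: missing")
    # Range check: flag physiologically implausible values for review.
    sbp = record.get("systolic_bp")
    if isinstance(sbp, (int, float)) and not 60 <= sbp <= 260:
        problems.append(f"systolic_bp: {sbp} outside plausible range 60-260 mmHg")
    # Consistency check: units of measure must be specified, as the chapter
    # recommends for data capture instruments.
    if record.get("weight") is not None and not record.get("weight_unit"):
        problems.append("weight: value present but unit not specified")
    return problems

# A clean record passes; a flawed one is flagged immediately.
ok = edit_check({"patient_id": "P001", "visit_date": "1998-06-01",
                 "systolic_bp": 128, "weight": 72, "weight_unit": "kg"})
bad = edit_check({"patient_id": "P002", "visit_date": "",
                  "systolic_bp": 400, "weight": 72, "weight_unit": None})
```

Each check is trivial on its own; the reported gains come from applying many such checks uniformly at every site at entry time.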
These guidance efforts provide an opportunity for industry to work with FDA to develop guidance on other subjects, such as defining a minimal data set that will meet regulatory requirements. These are important initiatives because the costs of collecting excess data and extensive monitoring are substantial and divert human and financial resources from other meritorious clinical trials.

Data Handling and Cleanup

Presented by Kristin O'Connor, M.P.H.

Director, Data Management, Boehringer Ingelheim Pharmaceuticals

Greater communication and trust between industry and the Food and Drug Administration (FDA) are needed in terms of data handling and cleanup. Both go to great lengths and great expense to ensure data quality; the question is whether each is doing enough or too much, and whether the system could be simpler. Industry's efforts toward ensuring data quality include the checking of source data in the field, the use of double data entry and computerized data checks, and review of data listings to identify outliers. These efforts also include means of validating systems and programs, as well as maintaining extensive documentation on data-handling plans, data inconsistencies, and agreement changes. FDA verifies these data using individuals with expertise in several areas: medical reviewers, statisticians, and auditors. In some cases, FDA statisticians reanalyze industry's data. Yet some of these duplicative efforts by industry and FDA may be increasing the costs of clinical trials.

An alternative model for data management is demonstrated by the AIDS Clinical Trials Group, which cleans up data selectively by focusing on the fields important to the analysis. As another example, European regulatory agencies spend less time reanalyzing the data and more time evaluating the arguments set forth in the expert reports. The initiatives undertaken by the International Committee on Harmonization (ICH) are important steps toward standardization, although the rate at which ICH guidelines are implemented varies among nations.

During an audit, FDA reviews all of the data and not just the primary endpoints of efficacy and safety. One perception among members of industry is that medical reviewers do not like to find any errors or inconsistencies in the data, even in minor secondary variables, and that the application will lose credibility should minor errors occur. Industry's view is thus that there is no acceptable error rate, and it spends additional money on further data cleanup regardless of the effect on the key analyses. FDA has been proactive in developing guidelines on archival submissions, electronic data formats, and other related topics. Obstacles to both the sponsor and FDA, however, include inconsistencies among FDA divisions in terms of hardware, software, computer literacy, and review standards. There is a need for greater communication and collaboration between industry and FDA to develop data management and data quality guidelines.
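The selective cleanup practiced by the AIDS Clinical Trials Group can be pictured as a per-field error budget: primary variables are held to a far tighter tolerance than secondary ones, and cleanup effort is spent only where the budget is exceeded. The field names and thresholds below are invented for illustration, not drawn from any actual trial:

```python
# Hypothetical per-field error tolerances (errors per million fields).
# Primary endpoints get near-zero budgets; secondary variables are looser,
# echoing the chapter's patient-identification vs. leukocyte-count example.
TOLERANCE_PER_MILLION = {
    "patient_id": 0,          # primary: no errors tolerated
    "mortality": 10,          # primary: near zero
    "leukocyte_count": 1000,  # secondary: a looser standard is acceptable
}

def fields_needing_cleanup(observed_errors_per_million):
    """Return the fields whose observed error rate exceeds its tolerance,
    i.e., where further cleanup effort should be focused."""
    return sorted(
        field
        for field, rate in observed_errors_per_million.items()
        if rate > TOLERANCE_PER_MILLION.get(field, 1000)
    )

# Only patient_id exceeds its budget, so cleanup is focused there.
focus = fields_needing_cleanup(
    {"patient_id": 5, "mortality": 3, "leukocyte_count": 800})
```

The design point is that an explicit, documented budget makes "focused data cleanup" auditable, rather than an unstated judgment call.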
An industry-FDA partnership could develop guidelines on the following: procedures for assessing the robustness of key analyses and the effects of data inconsistencies; acceptable error rates for different fields in the database; minimal standard operating procedures for data management, as called for by ICH (e.g., how corrections are made and by whom, documentation of the process, and validation of programs that produce data tables and listings); and effective communication of quality assurance issues to FDA.

At present, a major cost factor may be an incomplete understanding by industry of FDA requirements for data quantity, quality, and cleanup. As long as technology continues to advance through faster computers and global databases without a corresponding improvement in the process, costs will continue to increase. FDA, for its part, may not have a clear enough understanding of the data handling and cleanup process. If a sponsor clearly documents focused data cleanup as well as error rates, FDA should accept the sponsor's data unless it believes that safety, efficacy, or another important aspect may be adversely affected. Increased communication and collaboration involving medical, statistical, and regulatory specialists from both industry and FDA are important means of developing a common understanding of the requirements and processes that are acceptable to both. These efforts will build trust and achieve a common goal: production of a quality report with well-documented proof of safety and efficacy.

Preparation and Content of Marketing Applications

Presented by Nicholas Pelliccione, Ph.D.

Senior Director, Worldwide Regulatory Affairs, Schering-Plough

A more active and rapid drug approval process provides earlier benefits to the patient, the medical community, and the pharmaceutical industry. The decision on when to file a New Drug Application (NDA) must therefore strike a balance between the collection of enough data to support a complete, high-quality application and the time and expense involved in obtaining those data. Although some submissions comprise more than 500 volumes of data, an NDA often requires less documentation. The size of an NDA is determined primarily by the number of trials required to prove a drug's safety and efficacy. A study involving a drug for the treatment of cancer might require only a few hundred subjects, whereas one involving a drug for the treatment of cardiovascular disease can require more than 10,000 subjects. Other sections of an NDA that require extensive documentation include those on preclinical pharmacology and toxicology, carcinogenicity, chemistry, manufacturing and controls, and labeling information.

The scope of formal meetings between industry sponsors and the Food and Drug Administration (FDA) is defined in the NDA regulations. In addition, the sponsor's regulatory staff are frequently in verbal contact with FDA. The timing of these meetings, before a request for an Investigational New Drug application, at the end of Phase 2, and before submission of an NDA, helps advance a drug through the process; failure to take advantage of these meetings can lead to expensive delays.
A key consideration in filing an NDA is the number of pivotal Phase 3 trials necessary to demonstrate safety and efficacy. In most cases, two adequate and well-controlled trials are needed to establish the safety and efficacy of a drug; for oncology or orphan drugs, however, the acceptability of a single pivotal trial may be negotiated with FDA. Another consideration involves the distinction between superiority and equivalence trials. Superiority trials may require fewer subjects because the difference in effect that investigators are trying to demonstrate is much larger, whereas studies performed to demonstrate the equivalence of existing drugs require more subjects and thus more time. Consequently, discussions with FDA early in the NDA process contribute significantly to proper experimental design, the use of an adequate number of subjects, and a sufficient number of clinical trials.
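The sample-size contrast between superiority and equivalence designs follows from the standard two-sample normal-approximation formula, n per arm = 2((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2: the smaller the difference delta that must be demonstrated, the larger the trial. The sketch below is illustrative only; the effect sizes are invented, and real equivalence trials are usually sized with two one-sided tests (TOST) rather than this simplification:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Approximate subjects per arm to detect a difference of size delta
    between two means with common standard deviation sigma, using the
    standard two-sample normal-approximation formula."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# Superiority: a large expected difference keeps the trial small.
n_superiority = n_per_arm(delta=10, sigma=20)
# Equivalence: the margin to rule out is small, so enrollment balloons.
n_equivalence = n_per_arm(delta=2, sigma=20)
```

Shrinking the detectable difference fivefold multiplies the required enrollment roughly twenty-five-fold, which is the arithmetic behind the observation that equivalence trials need more subjects and more time.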

FDA and, more recently, the International Committee on Harmonization (ICH) have issued documents that provide guidance on the preparation of an NDA (21 CFR 201, 312, 314, 600, and 601). These guidelines should be followed when clinical reports, statistical analyses, labeling, or other documentation is being prepared; moreover, FDA recommends that companies build these guidelines into their development programs. Adherence to ICH guidelines is equally important for companies seeking international approval of their drugs. More information on these guidelines may be found on the World Wide Web at the following URLs: Center for Drug Evaluation and Research guidance: http://www.fda.gov/cder/guidance/index.htm; Center for Biologics Evaluation and Research guidelines: http://www.fda.gov/cber/; and ICH guidelines: http://dg3.eudra.org.

FDA has improved its turnaround time on applications for new drugs, biologics, and devices. The Prescription Drug User Fee Act of 1992 (PDUFA) requires sponsors to pay a fee to have their applications reviewed; these fees were used to expand FDA staff to meet the performance goals implemented by PDUFA. Both the fees and the performance goals were renewed in the FDA Modernization Act of 1997. As a result, a company can now depend on receiving an action letter in 12 months for a standard review and 6 months for a priority review. A key factor in the effectiveness of PDUFA has been industry's understanding that NDAs must be submitted in accordance with established rules and that the application must be complete upon filing or it will not be reviewed. Since no sponsor wants its submission to be refused, the quality of submissions has increased significantly in the past 5 years.

Panel Discussion

During the first panel discussion, participants identified three key trends that may affect the response to issues addressed by workshop presenters.

First, economic constraints on the health care system and safety concerns for patients suggest a need for more information regarding tangible outcomes. Second, the pace of discovery is accelerating, and with it the number of trials under way. Third, at least one effective therapy is now available for most diseases, suggesting an increase in the number of equivalency trials, which require large numbers of subjects in order to detect differences in clinical outcomes as a function of the particular therapy. These trends may lead to a collision between attention to minutiae, with its accompanying costs, and the information that society needs to choose safe treatments rationally. Four responses to this conflict are identified below:

1.   The intent of medical records is to provide a factual account of a subject's response and therapy. Auditing should critically review these records without biased assumptions about the data's accuracy.

2.   There is a need for standards in data management, including standard nomenclature and differentiation between critical, high-quality data and secondary or background data.

3.   More credentialed investigators and study coordinators are needed.

4.   Sponsors may need to become more involved in paying patient care costs that are accrued specifically during approved clinical trials.

Participants discussed the development of a "decision science" that could establish quality control procedures on a scientific basis as one way to improve data quality and validity. For example, understanding of the kinds of errors that currently occur, and of their impacts on the interpretation of data and the conclusions drawn from those data, needs to be improved. A means of quantifying the level of acceptable data inaccuracy, and the resulting sample size for clinical trials, would enhance these efforts. Improved technological capabilities have enabled many investigators to acquire data rapidly and in volume; a sharper focus is needed on data quality and relevance rather than data quantity, especially with regard to the level of acceptable data quality. Quality data were then defined as data that support the same conclusions and interpretations as those derived from error-free data. Gathering quality data requires greater transparency and sharing among study sponsors, as well as with the Food and Drug Administration (FDA), to achieve greater standardization and to increase confidence in innovative approaches. To accomplish this, there must be a greater level of communication between those who monitor the trials and the data analysts.

A perspective from the managed care industry was that the number of equivalency trials requiring relatively large patient populations is increasing. An opportunity provided by managed care is organized access to large numbers of potential enrollees in clinical trials. A concern expressed by the managed care industry was the possibility that long-term side effects of drugs would go undetected after an accelerated review process. These concerns involve both regulatory issues and industry's commitment to evaluating the long-term effects of these drugs.

The remaining discussion focused on the impacts of (1) multinational trials and (2) outsourcing of study coordination. Most panelists agreed that auditing of international data presents more of a challenge than auditing of data from studies conducted in the United States because of the use of different definitions for disease states and other variables. Studies conducted in developing countries (e.g., AIDS treatment trials) raise other sets of issues, among them ethical concerns. To reduce the cost of multinational clinical trials, some companies have set up European divisions, if only to decrease the expense of international travel. The use of International Committee on Harmonization (ICH) guidelines as the starting point for the standardization of operating procedures and laboratory practices, and cross-training of personnel from different quality assurance (QA) groups, are approaches undertaken by most companies. These companies have learned that a high level of communication and cooperation is necessary to influence the ICH guidelines.

Contract research organizations (CROs) are not used by all sponsors, and when they are used, it is usually for study coordination rather than for auditing or QA. Outsourcing does not completely replace internal resources, however, and high levels of communication and cooperation within a company are still required. When data originate from CROs, most sponsors apply the same computerized validation and other QA procedures that they would use if the data originated from internally performed studies.

Workshop participants discussed broader questions regarding the ultimate goals of data quality and the purpose of various QA procedures. For example, there was general agreement that the purpose of monitoring is to identify and correct data inconsistencies immediately upon their occurrence; consequently, monitoring positively affects training and QA efforts. The National Cancer Institute model, as discussed earlier, addresses QA by focusing on the institution rather than on an individual study protocol. This is similar to the idea of having certified investigators conduct clinical trials. Because each trial is unique, the panelists felt that it is inappropriate to take a uniform approach to data quality; perhaps, instead, each protocol should have an explicit data quality plan that addresses the need for monitoring and audits in terms of the characteristics and complexity of that particular study. Multicenter and multinational studies, for instance, typically require higher levels of monitoring.
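The panelists' suggestion that each protocol carry an explicit data quality plan can be pictured as a small structured record whose fields drive monitoring intensity. The shape below is entirely hypothetical, a sketch of the idea rather than any agency's format; the field names and monitoring tiers are invented:

```python
from dataclasses import dataclass

@dataclass
class DataQualityPlan:
    """Hypothetical per-protocol data quality plan, scaled to the study's
    characteristics and complexity as the panel suggested."""
    protocol_id: str
    multicenter: bool
    multinational: bool
    primary_variables: list
    acceptable_error_per_million: dict  # per-variable error tolerance
    audit_fraction: float = 0.10        # share of records to audit

    def monitoring_level(self):
        # The panel's point: complexity should drive monitoring intensity,
        # with multicenter and multinational studies monitored more closely.
        if self.multinational:
            return "high"
        if self.multicenter:
            return "elevated"
        return "standard"

plan = DataQualityPlan(
    protocol_id="P-001",
    multicenter=True,
    multinational=False,
    primary_variables=["mortality"],
    acceptable_error_per_million={"mortality": 10, "leukocyte_count": 1000},
)
```

Making the plan an explicit artifact, rather than an implicit habit, is what would let monitoring and audit effort be justified study by study.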
This need for study-specific planning reinforces the desirability of developing a "decision science" with methodologies or procedures that provide guidance on (1) protocol design, (2) acceptable error rates for different variables, and (3) the impact of data errors on the conclusions and decisions that result from a trial. This might provide industry an alternative way to achieve "data credibility" for its studies and for FDA applications. Finally, there was general agreement that although monitoring improves data quality, too many data are collected in disparate formats, and the increased cost of monitoring has not necessarily brought about a corresponding increase in data quality. Three options for further action were identified:

1.   Simplify the process by designing the protocol correctly, selecting the right population for study, and identifying the right endpoints.

2.   Decide which measurements of quality data should be used on the basis of evaluations of the impact of data errors.

3.   Simplify data collection and data monitoring by developing simpler and more consistent collection forms (e.g., for adverse events) and by standardizing approaches to monitoring.

A detailed evaluation of an intensively monitored and audited trial may be an initial step toward better understanding and communication of the process. Such a report could identify the nature of errors and their impacts on the outcome of the study and could be used to develop theories about the impacts of undetected errors. Although standardization and simplification are desirable goals, industry may be reluctant to be the test case for innovative approaches unless FDA approved such approaches by prior agreement.