Summary of Issues

Janet Woodcock, M.D.


The workshop was successful in broadening the dialogue between the Food and Drug Administration (FDA) and industry on the subject of data quality. This dialogue clarified many expectations on both sides, notably by dispelling the myth that FDA cannot accept errors in a submission. Major points stressed repeatedly throughout the workshop included recognition by FDA (1) that there will be errors in the clinical trial process, (2) that the existence of errors does not mean that there is fraud, and (3) that a reasonable number of minor errors is acceptable, as long as they do not compromise the reliability of the overall data set or the inferences drawn from the data about the safety and effectiveness of the product. FDA reviewers described several instances in which they approved products despite sloppiness and even fraud at isolated sites, in large part because the agency went to extraordinary lengths to reconstruct data sets and restore confidence in the reliability of the inferences drawn from those data.

FDA deals with errors in almost every submission that it reviews. Foreign trials seem to have particularly high error rates. Although outright fraud is extremely rare, investigators who commit it are likely to be conducting trials for other sponsors as well. Although FDA feels that many of the errors it finds would have been detected by adequate monitoring, several speakers questioned whether the entire system (industry in its monitoring and FDA in its reviews) is devoting too much attention to minor details when it should instead be looking for a better way to assess data quality, or even a way to build data quality into the system.


