on the type of activities supported. Pointing to the extremes, she suggested that those projects investigating nonmedical sales and marketing have far different data quality requirements than those involving regulated research and clinical decisions.
FIGURE 3-1 Spectrum of data quality requirements based on intended use.
SOURCE: Reprinted with permission from Rebecca Kush.
Currently, Kush said, clinical research (especially regulated clinical research) presents a plethora of logistical challenges to clinical investigators. The average active study site has 3 or more disparate data entry systems; 50-60 percent of trials involve paper data collection on 3- or 4-part forms, while the remaining 40-50 percent involve electronic data capture tools. Data are entered 4-7 times total on average, including 2-3 times by the clinicians or study coordinators, creating ample opportunity to introduce transcription errors. In addition, reporting an unexpected or serious adverse event does not fit into the normal clinical care workflow and takes excessive time, so researchers often refrain from doing so. Given these inefficiencies and labor-intensive procedures, Kush emphasized, most clinicians participate in one regulated clinical research study and no more. Further exacerbating the data quality issue, efforts to ensure that study data are clean can consume significant resources and carry financial consequences; depending on the stage of the research process at which an error is identified, correcting a single error in the database can cost upward of $8,000.
However, promise lies in the growing industry appreciation of the power of standardization. Kush described CDISC’s progress in this area through the development of integration profiles that enable the extraction of a standardized clinical research dataset (CDASH, Clinical Data Acquisition Standards Harmonization) from EHRs. The resulting interoperability specification (a set of standards) meets global regulations for the collection of electronic research data and produces the minimum dataset needed for regulated purposes in any clinical trial. This combination of workflow enablers and standards has been used in safety reporting, regulatory reporting, and Phase 4 trials, and it presents an opportunity to support research with EHRs and to help research inform clinical decisions faster, with higher-quality information.
Kush then pointed to the Coalition Against Major Diseases (CAMD), an effort initiated by the Critical Path Institute (C-Path) to pool Alzheimer’s-related trial data from multiple sources with the goal of generating better information from a larger, aggregated database. By standardizing these data into the CDISC format and then pooling them across sources, C-Path was able to create a database of more than 6,000 patients, which it has now made available to researchers around the world. A standard guide has been developed so that researchers moving forward can collect Alzheimer’s data in the CDISC format from the start, allowing easy comparison with the current database. As such, this standardization effort has given researchers the ability to easily break out different cohorts