
Sharing Clinical Research Data: Workshop Summary (2013)

Chapter: 5 Standardization to Enhance Data Sharing

Suggested Citation:"5 Standardization to Enhance Data Sharing." Institute of Medicine. 2013. Sharing Clinical Research Data: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/18267.


5 Standardization to Enhance Data Sharing

Key Messages Identified by Individual Speakers

• Standardization can improve clinical research through increased data quality, better data integration and reusability, facilitation of data exchange with partners, increased use of software tools, improvements in team communication, and facilitation of regulatory reviews and audits.
• Collection of clinical research data using predetermined standards is preferable to post hoc conversion of data to meet a standard.
• The development of standards requires collaborative expert input, analysis, and consensus.
• Clinical and scientific expertise is also needed to determine how to fit data retroactively to standards and harmonize terminology.
• Standards need to be used to the greatest extent possible, but they do not ensure data quality.

Speakers at the workshop addressed issues of standardization in two ways: they described the general principles that should underlie such efforts, and they drew lessons from specific projects that could be applied more broadly.

Standards can be applied to clinical research data in multiple ways. Standardized core datasets can provide a minimum set of variables that should be measured or recorded during a trial. Standards can also specify how demographic (e.g., gender) and clinical information is recorded or defined.

The value of shared clinical data is undermined when those data cannot be used to answer new questions through secondary analysis. Standards can help facilitate pooling of data from disparate sources, either to increase sample sizes or for comparison purposes. By harmonizing vocabularies, standards can also help to ensure that researchers are “speaking the same language.”

HOW STANDARDS BENEFIT SHARING

Meredith Nahm, associate director for clinical research informatics at the Duke Translational Medicine Institute, emphasized that a major function of data sharing is reuse of data for purposes other than those intended by the people who collected the data. If the data are not defined well enough that others can use them, then the original researchers have not done their jobs well, she said. Data reuse requires both standards and a level of rigor and semantic specificity sufficient not just for human but also for computational analysis. For example, she briefly described an effort by the Clinical Trials Network at the National Institute on Drug Abuse to de-identify data, align the data to Clinical Data Interchange Standards Consortium (CDISC) standards, and make the data available on the Web. Because data elements and tools were defined and implemented uniformly across the network, the mapping of the data onto the CDISC Study Data Tabulation Model (SDTM) standard was relatively straightforward, facilitating pooled analysis and cross-product comparisons.

As an example of the difficulties in synthesizing a common set of data element standards retrospectively from case reports, rather than having them defined upfront, she mentioned the different ways in which sponsors operationalized critical variables in clinical trials on treatment of schizophrenia. As a result, each trial examined yielded fewer and fewer instances of new semantic content.
Authoritative clinical definitions are essential, she said, to reduce the burden on clinical investigational sites and to support the compilation and reuse of data for health care, research, and regulatory decision making. “It all depends on the data element as the atomic level of information exchange.”

To demonstrate the need for standards to ensure that shared data can be pooled and compared, Rebecca Kush, CDISC president and chief executive officer, described the many different systems for reporting the gender of study participants. Some use 1 and 0 for male and female, others 1 and 2, others M and F, and others an arbitrary designation. Health Level 7 (HL7) has about 15 options for the gender field, she said, depending on how people define themselves. With so many systems and no standards for data collection and reporting, data often have to be examined by hand just to determine something as simple as how many males or females are in a study. Using data standards, such as those being developed by CDISC and other standards development organizations (SDOs), can save significant time and cost, especially when implemented in the early stages of the study, said Kush. She reemphasized the value of developing standards a priori around a core dataset that is required across all trials. Information is lost when data are gathered in different ways and later mapped to common standards. Standardization also provides opportunities for additional impact on clinical research through increased data quality, better data integration and reusability, facilitation of data exchange and communication with partners, interoperability of software tools, and facilitation of regulatory reviews and audits.

Laura Lyman Rodriguez, director of the Office of Policy, Communications, and Education at the National Human Genome Research Institute, observed that the Institute has been thinking about issues of standardization as it has constructed large data repositories that combine genomic information with phenotype information. For example, the PhenX project, through a consensus process, has been working to create standard measures of phenotypes and environmental exposures for use in population-based genomic studies to facilitate cross-study comparisons and analysis. Standardized taxonomies to describe phenotypes ensure that different studies share a common vocabulary.
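Kush’s gender-coding example and the PhenX shared vocabularies both come down to the same operation: mapping each study’s local codes into one controlled vocabulary before data are pooled. A minimal sketch of that operation, not drawn from the workshop itself (the per-study code maps and the target codes M/F/U are hypothetical examples):

```python
# Hypothetical per-study code maps documenting each source study's local
# coding, as in Kush's examples (1/0 in one study, 1/2 in another, M/F in
# a third). The shared target vocabulary here is "M", "F", and "U".
STUDY_CODE_MAPS = {
    "study_a": {"1": "M", "0": "F"},   # 1 = male, 0 = female
    "study_b": {"1": "M", "2": "F"},   # 1 = male, 2 = female
    "study_c": {"M": "M", "F": "F"},   # letter codes already
}

def harmonize_sex(study_id: str, raw_value) -> str:
    """Map a study-specific sex code to the shared vocabulary.

    Unmappable values become "U" so they can be counted and reviewed
    rather than silently dropped.
    """
    code_map = STUDY_CODE_MAPS.get(study_id, {})
    return code_map.get(str(raw_value).strip().upper(), "U")

# Once every record speaks the same language, the "how many males or
# females are in the study" question no longer requires hand inspection.
records = [
    ("study_a", 1), ("study_a", 0), ("study_b", 2),
    ("study_c", "F"), ("study_b", "9"),   # "9" is an undocumented code
]
counts = {"M": 0, "F": 0, "U": 0}
for study_id, raw in records:
    counts[harmonize_sex(study_id, raw)] += 1

print(counts)  # {'M': 1, 'F': 3, 'U': 1}
```

The expensive part in practice is not the lookup but producing the code maps: when standards are agreed on before collection, each study documents its own map upfront; otherwise the codes must be reverse-engineered from case report forms afterward.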
Agreeing with several earlier speakers, Rodriguez emphasized that standards do not ensure quality and that the value of standardization is best realized when it is done upfront. However, aligning interests in the development of data standards and the sharing of data is not easy, she said. The search for common interests requires identifying common values and integrating them into the research enterprise. Communication and transparency can help identify and spread these common values while also building public trust.

Sharing and accessing clinical information is a global issue, said Neil de Crescenzo, senior vice president and general manager at Oracle Health Sciences. For a number of projects in which Oracle has been involved, there has been heavy emphasis on data standardization. Innovation and progress in clinical research and care will depend on immense quantities of complex data being passed among organizations, and standardization can help overcome some of the challenges posed by the “3 Vs of big data”—variety, volume, and the velocity at which data are needed. Outside the United States, de Crescenzo has seen great progress in implementing requirements for the use of standards in national-level research projects and electronic health record (EHR) systems. These efforts over the past decade have yielded many lessons.

Cautions on Standardization

While acknowledging the value of data standards, Vicki Seyfert-Margolis, senior advisor for science innovation and policy at the Food and Drug Administration’s (FDA’s) Office of the Chief Scientist, brought up some points researchers should remember when thinking about standardizing data. “Standardization does not ensure quality,” she said. If not done well, conversion to a standard format has the potential to adversely affect data quality and analysis. For example, standardized formats for indicating patient race can still lead to inaccurate information if the categories used in questionnaires do not adequately capture the complexity of a person’s racial identity. Conversion can also result in loss of traceability from the source. Standardization does not imply that data are fit for purpose either, she warned. Standardized data may or may not answer the questions of interest and may or may not be useful for future analysis. It may not be possible to predefine all standards, and not all data must be standardized. FDA is working to identify minimum sets of data points that must be standardized for analysis. The effort devoted to standards needs to be weighed against these other considerations, she said, to determine how much time and money to invest in standardization, especially given that the data gathered will never be perfect.
Standards solve some problems, Seyfert-Margolis said, but they do not solve problems with data quality, disease definition, basic understanding, or data analysis. She emphasized the importance of defining diseases and having a clear understanding of clinical phenotypes as part of the standardization process. Especially as genomics begins to play a larger role in medicine, a taxonomy of disease will be needed to define patient subpopulations, “because we know not every type 2 diabetes patient is the same, yet we call them all that.” In that respect, case report forms should not treat all patients identically, because patient characteristics need to be probed carefully to clarify patient populations.

DEVELOPING STANDARDS TO ENABLE DATA SHARING

The Role of Standards Development Organizations

Kush described the desired criteria for data standards to facilitate clinical research. They should be fit for purpose; global; based on good clinical practices, guidelines, and regulations; harmonized and semantically consistent; developed through a recognized standards development process; consensus based; and platform independent. They should also encourage innovation and support links with health care. “There is no right or wrong in standards,” said Kush, “[just] how are we going to agree to go forward.”

CDISC, a nonprofit SDO, works to create standards that meet these criteria in order to support the acquisition, sharing, submission, and archiving of clinical research data. Such standards enable information system interoperability, thereby improving the efficiency and quality of medical research. The development of CDISC consensus standards requires expert input from thousands of volunteers around the world, said Kush. In some cases, CDISC also works with other SDOs, like HL7 International, which generates standards for the exchange, integration, sharing, and retrieval of electronic health information to support clinical practice and health services. For example, CDISC and HL7, along with partners at FDA and the National Cancer Institute, developed the Biomedical Research Integrated Domain Group (BRIDG) model to ensure that standards spanning the entire clinical research process are harmonized. “The idea is to have all these standards working together and go from end to end in the clinical research process,” said Kush.
The BRIDG model, which serves to bridge standards and research organizations, as well as the gap between clinical research and health care, provides a shared view of the semantics of protocol-driven research and associated regulatory activities (e.g., postmarketing adverse event reporting). It was predicated on the need to pass information seamlessly between patient care and clinical research arenas in order to shorten the time lag between basic research and the implementation of new knowledge in patient care processes, said Charles Jaffe, chief executive officer of HL7.

An important current challenge in clinical research is how health care providers at different sites can use different electronic health records yet share a core, high-quality set of clinical research data. One solution discussed by Kush, developed by CDISC and its partners at Integrating the Healthcare Enterprise (IHE), is a tool that allows the EHR user to remotely collect a key set of data out of electronic health records, making it available for secondary uses like research and adverse event reporting. While complying with 21 CFR Part 11—regulations that require clinical researchers to implement controls such as audit trails and system validations to ensure that electronic records are trustworthy and reliable—it can produce a standard core clinical research dataset, such as that defined by the CDISC Clinical Data Acquisition Standards Harmonization (CDASH) standard. This integration profile is being implemented in Europe and in Japan and has been used in some postmarketing studies in the United States. More than just a standard, Kush explained, it is a workflow tool. When implemented among ambulatory care physicians at Harvard, the time to report an adverse event dropped from 35 minutes to less than a minute, and the number of reports increased dramatically. “This is a real workflow improvement effort,” said Kush.

FDA Efforts

New drug application data submitted to FDA have extremely variable and unpredictable formats and content, which presents a major obstacle to timely, consistent, and efficient review within currently mandated time frames. “This has been the problem for years,” said Ron Fitzmartin, senior advisor in the Office of Planning and Informatics at FDA’s Center for Drug Evaluation and Research (CDER). “It is unbelievable that in 2012 we are still saying that.”

This lack of standardization has serious implications for FDA reviewers. It limits their ability to address in-depth questions and late-emerging issues in a timely manner. It also impedes timely safety analysis to inform risk evaluation and mitigation strategy decisions, and limits the ability to transition to more standardized and quantitative approaches to benefit–risk assessment.
Given the “tremendous workload” facing reviewers at FDA, there is a great need for standards and tools that can expedite the review process, Fitzmartin said. Toward this end, the Prescription Drug User Fee Act (PDUFA), which Fitzmartin noted was reauthorized in 2012 by the FDA Safety and Innovation Act (FDASIA), mandates that FDA develop “standardized clinical data terminology through open standards development organizations with the goal of completing clinical data terminology and detailed implementation guides by FY 2017.” It also calls for FDA to “periodically publish final guidance specifying the completed data standards, format, and terminologies that sponsors must use to submit data in applications.” Fitzmartin presented a list of 58 therapeutic and disease areas where standards will be developed by the end of 2017, which will require extensive collaboration between FDA and other organizations.

Already, a number of organizations are converging on this challenge, said Fitzmartin. Recently, for example, CDISC and the Critical Path Institute, an independent, nonprofit organization committed to accelerating the pace of drug and diagnostics development through data, method, and measurement standards, collaboratively formed an entity called the Coalition for Accelerating Standards and Therapies (CFAST). CFAST was established with the objective of defining, developing, and maintaining an initial set of data standards for priority therapeutic areas identified by FDA. HL7 is also working with CDISC and CFAST on clinical data standards, as is a recently formed industry group called TransCelerate BioPharma. FDA is participating in these efforts by providing scientific and technical direction to prioritize therapeutic areas, advising on work streams, and publishing draft and final guidance on completed standards. Because the standards are coming out under PDUFA, Fitzmartin said, they will be enforceable.

Fitzmartin concluded his presentation with three examples showing how standardization can help expedite the review process. In the first, he described a clinical data integration tool developed by SAS that can map clinical trial submission data in the CDISC-developed SDTM format to the format required by a liver toxicity assessment product called Electronic Drug-Induced Serious Hepatotoxicity, or eDISH.
Using this tool, reviewers do not have to spend time piecing these data together and can quickly drill down to the patient-level data to look at outliers and elevated values.

In his second example, Fitzmartin described another adverse event diagnostic tool, this one developed by reviewers at FDA’s CDER, which uses input data in the SDTM format to perform more than 200 automated, complex safety signal detection assessments. Within 1 month of the tool being available, medical officers using it discovered multiple cases of adverse events that previously had gone undetected, including anaphylaxis and pancreatitis.

Finally, Fitzmartin discussed how more than 50 common CDER review analyses, including demographics, exposure, adverse events, disposition, and liver toxicity, can now be automated through the use of standard review analysis panels. A typical clinical review has not previously been able to produce this degree of output, which can corroborate sponsors’ analyses and improve reviewer efficiency, consistency, and quality. However, standard panels require standardized data to run successfully, Fitzmartin said.

RETROSPECTIVE VERSUS PROSPECTIVE APPROACHES TO DATA STANDARDIZATION

Seyfert-Margolis described an experiment at FDA that was designed to evaluate the return on investment from normalization of raw data on an as-needed basis as compared with conversion of legacy data into a standard format. In this experiment, two different approaches were taken to the standardization of data. In the first, legacy data from about 100 new drug applications (NDAs) were converted to a standard data format with no predetermined scientific questions, and the standardized dataset was then used for research. In the second approach, the converted data and the unconverted data were both used to answer a specific scientific question using a program called Amalga, Microsoft software designed to integrate patient data from disparate sources and in different formats.

The first approach yielded several key insights regarding the standardization of legacy data to allow for integration and comparison of studies and products, according to Seyfert-Margolis. First, scientific questions drove the details of the conversion. Clinical and scientific subject-matter expertise was essential to determine how to reorganize the data into the standard format required to address a particular question and to harmonize the terminology. Statisticians were needed to translate scientific questions into analyzable components. In addition, quality control of the converted data was essential but time consuming, and the conversion activity in general was resource intensive and expensive, costing about $7 million in this case.
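The terminology-harmonization step described above can be sketched in miniature: legacy terms are checked against a standard dictionary, and anything unmapped is queued for clinical subject-matter review rather than guessed at by software. This is an illustrative sketch only, not FDA’s actual pipeline; the dictionary entries and function names are hypothetical.

```python
# Hypothetical mini-dictionary mapping legacy free-text terms to a
# standard vocabulary. In a real conversion this would be a curated,
# clinician-approved terminology, not three hand-written entries.
STANDARD_TERMS = {
    "heart attack": "Myocardial infarction",
    "mi": "Myocardial infarction",
    "high blood pressure": "Hypertension",
}

def harmonize_terms(legacy_terms):
    """Split legacy terms into (mapped, needs_expert_review)."""
    mapped, review_queue = [], []
    for term in legacy_terms:
        key = term.strip().lower()
        if key in STANDARD_TERMS:
            mapped.append(STANDARD_TERMS[key])
        else:
            # A clinician decides how this maps -- the code never guesses.
            review_queue.append(term)
    return mapped, review_queue

mapped, review = harmonize_terms(["Heart attack", "MI", "chest discomfort"])
print(mapped)   # ['Myocardial infarction', 'Myocardial infarction']
print(review)   # ['chest discomfort']
```

The review queue is where the cost reported by Seyfert-Margolis concentrates: building and adjudicating the dictionary requires exactly the clinical and scientific expertise the text describes, which is why legacy conversion is resource intensive even when the mechanical mapping is trivial.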
Seyfert-Margolis went on to describe lessons learned from the second approach to data standardization, in which FDA used a tool that could take data from a variety of formats and transform them on the fly depending on the questions being asked of the data. Such tools allow integration of multiple types of data without having to spend the time to standardize them first. Using this approach, FDA was able to integrate disparate regulatory datasets, including postmarket and premarket data, and in the process answer many interesting questions. Together, standardization and advanced tools that integrate data on the fly could help move advanced analytics forward, she said, whether for the evaluation of multiple clinical trials or single product applications. Collection of data using standards, versus conversion to a standard, is optimal, Seyfert-Margolis concluded. These standards should be implemented in the same way across studies, which would facilitate analysis across studies at FDA.

DATA-SHARING APPROACHES THAT HAVE BENEFITED FROM THE USE OF STANDARDS

The Alzheimer’s Clinical Trials Database

Without standards, said Carolyn Compton, president and chief executive officer of the Critical Path Institute (C-Path), integrating datasets and pooling data is difficult. The Critical Path Institute acts as a trusted third party that works with partners in FDA, industry, and academia to develop consensus measurement, method, and data standards. The standards are then submitted to FDA for qualification. After qualification is achieved, the standards enter the public domain and can be used by everyone. C-Path “convenes consortiums to bring together the best science and in this fashion create shared risk and shared cost for the creation of these standards.”

Compton went on to describe one of six such global consortiums organized by C-Path, the Coalition Against Major Diseases (CAMD), which focuses on diseases of the brain and peripheral nervous system. The coalition seeks to advance drug development tools and standards as a means of addressing the unsustainable time and cost required to get a new drug to market. In particular, its efforts focus on process improvement to advance effective treatments for Alzheimer’s and Parkinson’s diseases. CAMD is working to qualify biomarkers as drug development tools and has also been developing standards to create integrated databases drawn from clinical trials.
These databases have been used to model clinical trials to optimize trial design. Nine member companies agreed to share placebo control data from 22 clinical trials on Alzheimer’s disease, but the data were not in a common format and needed to be combined in a consistent manner, Compton explained. All data were remapped to the CDISC standards and pooled. The resulting database was used to develop a new computerized clinical trial simulation and modeling tool. To get there, however, the contributing companies had to go through a corporate approval process to share and de-identify the data, after which C-Path did further de-identification to ensure compliance with Health Insurance Portability and Accountability Act requirements.

The modeling tool allowed for accurate quantitative predictions of defined patient populations, Compton said. By merging data from diverse sources, 65-year-old males who looked alike in the databases could be divided into three classes with different trajectories of disease. “Seeing this kind of distinction emerge from the modeling tool would allow you to design a trial much more wisely,” said Compton. “It would inform patient selection, study size, study duration, study feasibility, and even study costs.”

Compton cited several key insights gained from the project. First, as others noted previously, legacy data conversion is resource intensive, but worthwhile for specific projects. In this case, de-identifying the data and converting them to a standard format took 9 months but generated a database covering 6,100 Alzheimer’s disease patients. To get value back from the conversion process, it is important to assess upfront whether the database will be useful for achieving specific objectives, like qualifying a new tool. If it will be, selectivity is beneficial, she recommended: “Convert the data you need, [but] maybe not everything.” Once data are converted to a common standard and aggregated, the addition of standardized data from other sources, whether prospective or retrospective, becomes simplified and expands the power and utility of a standardized data resource. “Your database continues to grow over time and in power,” Compton said.
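The remap-then-pool workflow Compton described can be illustrated in miniature. This sketch is hypothetical: the field names, per-trial maps, and scores are invented for illustration, and the real CAMD remapping targeted CDISC SDTM domains rather than flat records.

```python
# Hypothetical per-trial field maps to a shared schema. Documenting one
# such map per contributing trial is the "remap to the CDISC standards"
# step in miniature.
TRIAL_FIELD_MAPS = {
    "trial_1": {"subj": "subject_id", "age_yrs": "age", "adas": "adas_cog"},
    "trial_2": {"id": "subject_id", "age": "age", "adas_cog_11": "adas_cog"},
}

def remap(trial_id, record):
    """Rename one trial record's fields to the shared schema."""
    field_map = TRIAL_FIELD_MAPS[trial_id]
    return {field_map[k]: v for k, v in record.items() if k in field_map}

# Once every trial speaks the shared schema, pooling is concatenation,
# and cross-trial analyses become possible.
pooled = []
for trial_id, rows in {
    "trial_1": [{"subj": "A1", "age_yrs": 71, "adas": 18.0}],
    "trial_2": [{"id": "B7", "age": 66, "adas_cog_11": 22.5}],
}.items():
    for row in rows:
        row = remap(trial_id, row)
        row["trial"] = trial_id   # keep provenance for traceability
        pooled.append(row)

mean_score = sum(r["adas_cog"] for r in pooled) / len(pooled)
print(len(pooled), round(mean_score, 2))  # 2 20.25
```

Keeping the source trial on every pooled record preserves the traceability that Seyfert-Margolis warned can be lost in conversion, and it is what lets a growing database absorb new standardized contributions over time, as Compton noted.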
Based on the success with Alzheimer’s, the approach is now being applied to other research projects, including the development of new tools for Parkinson’s disease, polycystic kidney disease, and tuberculosis. According to Compton, this approach could cut drug development times “by 4 to 5 years.” Such tools also have applications to postapproval safety monitoring and data gathering.

Translational Medicine Mart

Eric Perakslis, chief information officer and chief scientist for informatics at FDA, provided an overview of an initiative he spearheaded while working at Johnson & Johnson in 2008-2009. The company asked him to bring together data and informatics across its immunology, oncology, and biotechnology franchises, which originally had been different companies with many different clinical trials and standards. Rather than reinventing the wheel, he and his colleagues built their system on a data warehousing tool called i2b2 that had been developed by researchers at Harvard for data from electronic health records. They made it open source and ran it through Amazon’s cloud computing service, which Perakslis termed “heretical” for the time.

The system, known as Translational Medicine Mart (tranSMART), was designed for research and development, specifically to generate hypotheses for biomarker research. The requirements for this kind of system are different from those for automating premarket review of FDA submission data, said Perakslis. Johnson & Johnson wanted to be able to ask secondary research questions using the substantial amount of clinical trials data it had already collected. For example, many clinical trials have been done on potential asthma medications. What else can be learned from those trials?

At the time, the Innovative Medicines Initiative in Europe had gotten under way, and its first project was to look at severe asthma in 5,000 patients. Perakslis worked with the consortium to integrate the system he had helped build with the European effort. Within 3 months, the group had set up a pilot study and was able to combine data from several pharmaceutical companies and begin analyzing them. “Nobody could believe it had happened so early,” said Perakslis, “but what happened more than anything else was the incentives aligned. We all had one goal.”

Several lessons emerged from the experience, according to Perakslis.
First, use the standards that are available, because “patients are waiting.” At some point, human curators are going to be necessary to align the data and insert them into a database, but to get the project moving forward, start with what already works. Second, an important goal for a project such as this one is to rule out options quickly. Clinical trials should not waste patients’ time on drugs that are not going to work. “Get me 60 or 70 hypotheses that I can rule out, and then I can be really interested in the one that I cannot.”

Perakslis concluded that he prefers light and agile data “marts,” or databases generated to answer specific questions or test hypotheses, over large data warehouses. “That sounds like IT speak, but what I am saying is aggregate the source around the question quickly and effectively.” That way, as technologies, standards, and definitions change, tools are flexible and can change accordingly.

ePlacebo

Michael Cantor, senior director of clinical informatics and innovation at Pfizer Inc., described an ongoing data-sharing project being undertaken by Pfizer as part of its “Data Without Borders” initiative. The project, called ePlacebo, pools data from placebo and control arms across multiple clinical trials in a variety of therapeutic areas. The result is a large comparison group that can be used to evaluate events that might not be seen in a single trial, study placebo effects, and possibly reduce the size of placebo arms needed in future clinical trials. So far, data from about 20,000 patients have been compiled from hundreds of trials, and Pfizer is hoping to expand the utility of this data source by soliciting participation from other organizations.

The goal for ePlacebo is to provide a resource that is inclusive, rests on standards, and spans disease areas. The intent is to set it up as a self-service dataset that could be used for any legitimate research purpose. However, consistent data standards have been implemented at Pfizer only within the past decade; as a result, only relatively recent studies were used for ePlacebo, because of the difficulties of combining data from trials that did not use standards or that implemented them in different ways.

GOVERNANCE ISSUES

Compton discussed several important governance issues that arose during the CAMD initiative and in other C-Path efforts. First, rules for developing data standards require collaborative expert input and consensus. Disease definitions need to come from the bottom up, said Compton, from the clinicians who are dealing with patients and diseases. A system cannot be imposed on them from the outside. However, the National Institutes of Health can use its purse strings to enforce clinician-driven, evidence-based guidelines, and perhaps some degree of evidence-based standardization could be regulated.
Also, best practices for merging the data call for the use of high-quality data and FDA-accepted standards that work together across the entire process, from beginning to end. With regard to rules for accessing the data, the broadest possible data use agreements are needed, and access controls need to be appropriate to the use objectives. Finally, qualified drug development tools should be placed in the public domain to maximize their use.

With data as complex as those produced by clinical trials, standardization is needed upfront, added Cantor. But which standards should be used and how should they be implemented? Political will is needed to enforce standards—for example, by using the funding process to encourage standardization. Standards make it much easier to overcome the technical hurdles to broad-based cooperative projects, but people and institutions need the right incentives to contribute their data. As detailed in the next chapter, the social and cultural aspects of sharing clinical data are much more challenging than the technical issues.

