Omics technologies have ushered in a new era in biomedical research. Omics data are extremely complex and multidimensional, with a high risk of inaccuracies being introduced by inappropriate methods, human error, conflicts of interest, or acts of commission or omission. Omics research requires a multidisciplinary team with specialized expertise, which adds to the challenge of conducting scientifically rigorous research and makes overseeing and reviewing omics studies difficult. This multidimensionality introduces an inherent risk of overfitting the data, making independent validation critical. While other fields such as high-energy physics, astrophysics, and cosmology also require specialized expertise and multidisciplinary collaboration, and deal with data complexity and high dimensionality, the development of omics-based tests is different in that the tests have commercial value and developers stand to reap financial gains. In addition, patient safety is paramount for omics-based tests that are used to aid patient treatment decisions. Although these characteristics are also true of drug development, that process has more uniform and more stringent oversight from the U.S. Food and Drug Administration (FDA); all new drugs must demonstrate safety and efficacy in well-designed clinical trials to gain FDA approval. Thus, those responsible for the integrity of omics research—investigators, institutions, funders, FDA, and journals—should rethink the processes and protections designed to ensure that omics research is scientifically rigorous, transparent, and conducted ethically, with proper institutional and regulatory oversight.
The failures of the omics research at Duke University illustrate that current practices and safeguards can easily fall short (see Appendix B).
The Duke events thus provide a watershed illustration—reminiscent of the Gelsinger gene therapy case at the University of Pennsylvania, the Santillan mismatched heart transplant case at Duke University Hospital, the Johns Hopkins asthma trial death, and the claimed viral link to chronic fatigue syndrome at the Whittemore Peterson Institute for Neuro-Immune Disease—of how such research can go awry even though institutions and other responsible parties have extensive systems in place to ensure research integrity, with roles and responsibilities delineated (Enserink, 2011; Kolata, 2001; Nelson and Weiss, 1999; Sloane, 2003; Yarborough and Sharp, 2009). These processes need to be rethought in the omics era. In short, the ability of health care decision makers to rely on the trustworthiness of omics-based tests to predict disease risk and treatment response will be limited unless renewed efforts are made by all parties responsible for the integrity of this research.
The committee makes four recommendations related to defining the roles and responsibilities of the key parties involved in the conduct and evaluation of omics research (Recommendations 4-7). These recommendations are directed toward investigators and institutions (i.e., intrainstitutional responsibilities), funders, FDA, and biomedical journals. Recommendations 1 through 3, which are discussed in Chapters 2-4, refer to responsibilities of investigators, and focus on recommended best practices for the development, validation, and clinical utility assessment of candidate omics-based tests. Recommendations 4 to 7 are similarly critical because, without the participation of institutions, investigators, funders, FDA, and journals, the committee’s recommended evaluation processes for omics technologies intended for clinical use (Recommendations 1-3) cannot be implemented. The committee recognized that the recommendations presented in this chapter may increase the oversight requirements for omics research in some cases, but decided that these potential costs were offset by the added safeguards to the integrity of this research. If an institution does not have the infrastructure or capability to follow the recommended Test Development and Evaluation Process defined in this report, then the committee believes that the institution should consider not engaging in the translation of omics-based discoveries into validated tests for use in clinical trials and potentially clinical practice.
The committee developed the recommendations discussed in this chapter by reviewing the available literature on the design, conduct, analysis, and reporting of omics research and by identifying lessons learned from case studies of the development of omics-based tests (see Appendixes A and B). This chapter emphasizes lessons learned from the Duke University case study (Appendix B) in particular, both because the most information about this case is publicly available and because the Duke case was specifically highlighted in the committee’s statement of task. The committee also relied heavily on the work of previous National Academies reports
that have reviewed the roles and responsibilities of the parties involved in research. It is imperative that all responsible parties prepare for the omics research era, with its promise as well as its perils. This chapter discusses the details of how this preparation can be accomplished.
The roles and responsibilities of investigators and institutions that are involved in omics-based research are discussed together because both parties contribute to the scientific research culture in which omics research is conducted. They are also the two most responsible and the most knowledgeable parties in the entire evaluation process. Investigators control the culture of individual laboratories embedded within the larger institution. Individual laboratories can have unique values and cultural norms that are separate from the broader institutional culture. These variables become more complex as the research becomes more interdisciplinary, with the lead investigators setting the culture for the investigational team. Institutions and the institutional leadership, on the other hand, have the primary responsibility for the policies and procedures, reward systems, and values that contribute to the overarching institutional culture as well as for the infrastructure of oversight and support for research. Institutions and their leaders also have the greatest responsibility for in-depth investigation of potential lapses in scientific integrity because they employ, promote, and supervise the investigators who conduct these studies.
The National Academies defined integrity in the research process as “the adherence by scientists and their institutions to honest and verifiable methods in proposing, performing, evaluating, and reporting research activities” (NAS, 1992, p. 27). The challenge is that science is a self-regulating community, with few comprehensive guidelines for responsible research practices (Steneck, 2006). The guidelines that do exist often contradict each other (Emanuel et al., 2000). For example, there are inconsistencies in the rules governing the deidentification of personal health information, obtaining individual consent for future research, and the recruitment of research volunteers (IOM, 2009a). The 2011 report of the Presidential Commission for the Study of Bioethical Issues recommended that the Common Rule be revised to include a section on investigators’ responsibilities in order to bring it into harmony with FDA regulations for clinical research and international standards (PCSBI, 2011). Moreover, when ethical standards and best practices are available to guide behavior, some investigators may still be unaware of these rules, or simply breach them. For example, Martinson and colleagues (2005) conducted a series of focus groups with investigators from top-tier research universities to identify the top 10 misbehaviors of greatest concern in science. They then surveyed more than 7,000 early- and
mid-career U.S. investigators who have funding from the National Institutes of Health (NIH) and asked them to report on their own behavior. Thirty-three percent of the respondents reported engaging in at least 1 of the 10 misbehaviors during the previous 3 years. The three most common misbehaviors were: (1) overlooking other researchers’ use of flawed data or questionable interpretations of data; (2) changing the design, methodology, or results of a study in response to pressure from a funding source; and (3) circumventing certain minor aspects of human-subjects research requirements (Martinson et al., 2005). This situation is problematic because the underlying science must be sound if patients are going to participate in clinical trials and, eventually, in consultation with their physicians, use research results for medical care decisions.
Responsible conduct in any research, including omics research, starts with the investigators, both junior and senior. This section of the chapter describes the roles and responsibilities of investigators who conduct biomedical omics research with the goal of improving patient care. These responsibilities include the most basic principles of science, such as a serious and in-depth consideration in the discussion section of a journal article of “what might be wrong with the data and conclusions I have just reported” (Platt, 1964). The specific responsibilities discussed below include fostering a culture of scientific rigor and welcoming constructive criticism, comprehensively reporting the methods and results of a study, and making data and code publicly available so that a third party can verify the data and results. Box 5-1 highlights themes extracted from several representative case studies for investigators to consider.
All investigators have a responsibility to promote a culture of scientific rigor and to transmit ethical principles of science to future generations of investigators. Scientific rigor can be fostered by developing clear standards of behavior, disseminating those standards through education and mentoring, and reinforcing the standards through exemplary practice at all levels of the research community (Frankel, 1995). Investigators who do not adhere to these values are not fulfilling their ethical responsibilities. Although many cultural issues are not unique to omics research, taking steps to improve scientific culture is particularly important in omics research because of the nature of omics discoveries, which depend on large datasets, complex analyses, and a specialized multidisciplinary team.
A number of influential reports have recommended sets of values,
BOX 5-1
The Duke Case Study
Several questions have emerged regarding the degree to which key tenets of scientific rigor (for both laboratory-based research and clinical trials) were followed in the Nevins laboratory at Duke University. First, there were numerous errors in the primary data (Baggerly and Coombes, 2009; Coombes et al., 2007). Predictors derived from the training datasets were not locked down, leading to flaws in the validation process and the omics-based tests that were developed. Second, major results in the papers published by the Duke investigators were not reproducible. For example, figures in the Hsu et al. paper could not be reproduced with the data provided (McShane, 2010b). Third, the Lancet Oncology paper states that the investigators had access to unblinded data, as indicated by the statement that “MD, P F, A P, CA, SM, JRN, and RDI had full access to the raw data”; it was subsequently confirmed that the data files had not been blinded by the European investigators when the data were originally sent (Goldberg, 2009). Fourth, the Duke investigators did not provide the public with full access to their data and code (Baggerly and Coombes, 2009; Baron et al., 2010). They also failed to address, to the mutual satisfaction of all parties involved, the questions and challenges of external investigators who were trying to reproduce their work (Baggerly, 2011; McShane, 2010c). In response to the National Cancer Institute’s queries, the Duke investigators acknowledged that their tests were unreproducible and retracted the original papers (Bonnefoi et al., 2011; Hsu et al., 2010; Potti et al., 2011). Dr. Joseph Nevins, senior mentor of the investigators whose genomic predictors were used in the three clinical trials named in the IOM committee’s statement of task, stated during discussions with the committee that “a critical flaw in the research effort was one of data corruption” (Nevins, 2011).
Throughout this process, the responsibilities of the coinvestigators on the research team and lines of accountability were apparently unclear.
The OvaCheck Case Study
The investigators made their initial datasets publicly available. Independent investigators found numerous problems with the statistical and experimental methods and concluded that the results were unreproducible (Baggerly et al., 2004). Thus, in this case, making the data publicly available may have helped prevent the routine clinical use of an unvalidated screening test.
Commercially Developed Tests: Data and Code Availability
A review of the six commercially available tests discussed in Appendix A illustrates that public availability of all omics-based test data has not been standard practice. The field of omics is early in its development, and standards for data sharing have been unclear and are only now slowly evolving toward greater transparency.
Commercial interests and protection of proprietary information also may have limited the public availability of some data and information.
These six cases highlight several examples in which test developers explicitly note the availability of data. For example, Paik et al. (2004), Deng et al. (2006), and Rosenberg et al. (2010) reported the computational models for Oncotype DX, AlloMap, and Corus CAD, respectively. Both tests developed as LDTs had published computational models (Oncotype DX and Corus CAD); only one FDA-cleared test has a published computational model (AlloMap). Discovery microarray data are available for MammaPrint, AlloMap, and Corus CAD (Deng et al., 2006; van ‘t Veer et al., 2002).* Buyse et al. (2006) reports that raw microarray data and clinical data for the MammaPrint clinical validation study were deposited in the European Bioinformatics Institute’s ArrayExpress database. Although there are examples of developers reporting the availability of a test’s computational model or data used in discovery or validation, sufficient information is often not publicly available for external investigators to fully reproduce a test.
NOTE: See Appendixes A and B on the case studies for more information.
traditions, and standards that investigators should embody to promote a culture of scientific rigor. The National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine (IOM) collaborated in producing the report, Responsible Science, Volume I: Ensuring the Integrity of the Research Process (NAS, 1992). This report highlighted the importance of investigators upholding the highest standards of honesty, integrity, objectivity, and collegiality. The authoring committee directed individual investigators to accept formal responsibility for ensuring the integrity of the research process and creating an environment, a reward system, and a training system that encourage responsible research practices. A more recent National Academies report, On Being a Scientist: A Guide to Responsible Conduct in Research (NAS, 2009), identified three sets of obligations for investigators: (1) an obligation to merit the trust that their colleagues place in them (i.e., science is cumulative and investigators build on previous work); (2) an obligation to themselves (i.e., investigators should adhere to professional standards and develop personal integrity); and (3) an obligation to act in ways that serve the public (i.e., the public uses science to make policy decisions). The Office of Research Integrity (ORI) of the Department of Health and Human Services (HHS) also has outlined several values that investigators should share in promoting a culture of scientific
rigor, including (1) honesty: conveying information truthfully and honoring commitments; (2) accuracy: reporting findings precisely and taking care to avoid errors; (3) efficiency: using resources wisely and avoiding waste; and (4) objectivity: letting the facts speak for themselves and avoiding improper bias (Steneck, 2006). These reports outline general guiding principles for investigators’ behavior. However, identifying the values and obligations that investigators should possess does not directly inform investigators on how they should respond in specific situations and conflicts. Ultimately, investigators’ actions need to be informed by good judgment and personal integrity.
Two of the major influences on the development of investigators’ values and integrity are advisors and mentors (Bird, 2001; NAS, 1992, 2009), who define, explain, and exemplify scientific norms and ethics. All members of the research team, including biostatisticians and bioinformatics scientists, should have access to mentors with the appropriate expertise and credentials. Senior investigators’ conduct can reinforce or weaken the importance of complying with these scientific norms and values. Sprague and colleagues (2001), for example, conducted a study to identify the methods by which ethical beliefs are passed on to students. They surveyed faculty and graduate students and asked respondents to rank methods of teaching about ethics; 1,451 surveys were distributed to faculty and 627 were returned (45.2 percent return rate). An additional 6,000 surveys were sent to academic departments to be distributed to graduate students and 1,152 were returned (19.2 percent return rate). A major weakness of this study is the low response rates. However, both faculty and students ranked courses dealing with ethical issues as most influential in teaching students ethical beliefs. Mentors in graduate school also were highly ranked, with graduate students ranking mentors as more important than faculty did. Other important influences included discussions in courses, laboratories, and seminars as well as interactions with other graduate students (Sprague et al., 2001). In other words, young investigators’ interactions with other investigators shaped their beliefs and values.
Another important component of promoting a scientifically rigorous culture, which falls to investigators, is valuing teamwork and mutual respect and empowering people at lower levels in the hierarchy to speak up if they observe a problem or have a concern regarding research practices. The aviation and energy industries provide evidence for the pivotal importance of creating cultures that value these characteristics and consistently expect and laud persons who speak up to alert the group to problems and concerns. For example, the aviation industry has recognized for some time that errors are more likely to happen when there is suboptimal teamwork and communication (Helmreich, 2000). Thus, improvements in aviation safety have been attributed to training crews on how to address and prevent human
error, the role of leadership, the need for monitoring and cross-checking decision-making processes, and the use of checklists. This same approach has been applied successfully in the patient safety improvement movement to reduce the effects of human error (Gawande, 2009; Hudson, 2003; Longo et al., 2005; Pronovost et al., 2003) and can be applied equally to the biomedical research enterprise.
Fully reporting the methods and results of a study is essential for the reproducibility of research and for reviewers’ and readers’ evaluation of the validity of a study. Thus, investigators have a fundamental responsibility to provide a complete and accurate report of their methods and findings (NAS, 1992, 2009; Steneck, 2006). All publications—and omics publications in particular—should present a full and detailed description of the study methodology, the statistical analysis plan that was finalized before the validation data were analyzed, an accurate report of the results, and an honest assessment of the findings, including an explanation of limitations that may affect the conclusions (Platt, 1964; Steneck, 2006). This level of transparency should allow an independent third party to verify the data and results.
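The discipline of finalizing a statistical analysis plan and "locking down" a fully specified predictor before validation data are touched can be made mechanically checkable. The sketch below is illustrative only, not a procedure prescribed in this report; the function names and the toy model specification are our own. The idea is simply that the finalized model is hashed when the plan is frozen, and the hash is verified again before the model is applied to validation data, so any later change to the predictor is detectable.

```python
import hashlib
import json

def fingerprint(model_params: dict) -> str:
    """Return a SHA-256 digest of a canonically serialized model specification."""
    canonical = json.dumps(model_params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# At lock-down time: record the digest alongside the finalized analysis plan.
# (Toy specification: gene list, fitted weights, and a decision cutoff.)
locked_model = {"genes": ["A", "B", "C"], "weights": [0.4, -1.2, 0.7], "cutoff": 0.5}
locked_digest = fingerprint(locked_model)

def verify_locked(model_params: dict, expected_digest: str) -> None:
    """Refuse to score validation samples if the model has changed since lock-down."""
    if fingerprint(model_params) != expected_digest:
        raise RuntimeError("Model no longer matches the locked-down specification")

verify_locked(locked_model, locked_digest)  # passes only while the model is unchanged
```

A digest published with the analysis plan also lets independent reviewers confirm, after the fact, that the model reported in a validation paper is the one that was pre-specified.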
As discussed in Appendix D, reporting guidelines are tools to help investigators meet this obligation and report the essential information and elements of a study. All investigators who are coauthors on a report—and particularly a senior investigator or mentor—also are responsible for understanding the specific aims, methods, major findings, and implications of the interdisciplinary research. They are responsible for reading the complete manuscript, suggesting edits, being alert to misinterpretation, such as misrepresentation of findings and limitations, and discussing such observations with appropriate members of the team or oversight groups.
Data and Code Availability and Transparency
The scientific community widely agrees that investigators should make the research data and code supporting a manuscript, as well as the statistical analysis plan that had been finalized before data were unblinded and available for analysis, publicly available at the time of publication (NAS, 1992, 2009; NRC, 1985, 2003). Transparency is essential for the interpretability and reproducibility of research and a tenet of any good scientific method. Indeed, the purpose of methods sections in journal publications is to provide enough detail so that other investigators can interpret the results and, if they wish, reproduce the study and obtain the same results. Thus, providing sufficient detail of methods allows independent investigators to
verify published findings and conduct alternative analyses of the same data. It also discourages fraud and helps expedite the exchange of ideas (Peng et al., 2006). Investigators who refuse to share the evidentiary basis behind their conclusions, or the materials and analytical methods needed to replicate published experiments, fail to uphold transparency as a basic standard of science. In an era when much of the Methods section, and often much of the supporting data, appears only in the Supplementary Materials, more attention is needed to guide the reader through well-annotated supplementary material. This problem is perpetuated by the brevity of articles published in higher-impact journals.
The National Academies has issued numerous reports emphasizing the importance of data sharing. Sharing Research Data recommended that sharing research data at the time of publication should be a regular practice in science (NRC, 1985). A later report, Sharing Publication-Related Data and Materials, developed a uniform principle for sharing integral data and materials expeditiously (NRC, 2003). It recommended that authors include the code, algorithms, or other information that are central to verifying or replicating the claims in a publication. If the data and code cannot be included in the actual publication (e.g., because the data files are too large), the report recommended that the data and code be made freely available through other means in a format that allows an independent investigator to manipulate, analyze, and combine the data with other scientific data. The report also stipulated that, if publicly accessible repositories for data have been developed and are in general use, the relevant data should be deposited in those repositories. Investigators are responsible for anticipating which materials are most likely to be requested and should include a statement on how to access the materials in the published paper.
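One lightweight way to keep deposited data verifiable by independent investigators, in the spirit of these recommendations, is to publish a checksum manifest alongside the dataset. The sketch below is a generic illustration, not a procedure prescribed by the report; the directory layout and file names used are hypothetical.

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Map each file under a dataset directory to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(data_dir))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest

def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return the files whose current contents no longer match the published digests."""
    current = build_manifest(data_dir)
    return [name for name, digest in manifest.items() if current.get(name) != digest]
```

An independent investigator who downloads the deposited files can recompute the manifest and confirm, before reanalysis, that nothing has been altered or truncated since publication.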
In On Being a Scientist, the National Academies addressed the challenge of sharing research data in the current environment, where the quantity and complexity of data are increasing and the cost of sharing data is high (NAS, 2009). The complications and cost of sharing large datasets also were recently highlighted in an issue of Science dedicated entirely to data collection, curation, and access issues (Science Staff, 2011). The National Academies concluded that, despite these challenges, investigators have a responsibility to develop methods to share their data and materials at the time of publication (NAS, 2009). Investigators may share data through centralized facilities or undertake collaborative efforts to form large databases, such as the database of Genotypes and Phenotypes (dbGaP), the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL-EBI), the National Library of Medicine’s Gene Expression Omnibus (NLM/GEO), Compendia Bioscience, the UCSC Genome Browser, and ProteomeXchange. When data undergo extensive analysis as part of a scientific study, the requirements to share those data also include a requirement
to share the software, code, and sometimes the hardware used in the analyses (NAS, 2009). Authors can facilitate the use of such information by providing graphical user interfaces to the dataset, for example, through nanoHUB (Klimeck, 2011).
Nevertheless, many investigators are unwilling to share their data and code. For example, in an article for the New York Times, Andrew Vickers, a biostatistician at Memorial Sloan-Kettering Cancer Center, documented his lack of success in requesting cancer data from investigators at numerous institutions (Vickers, 2008). Vickers also referenced a survey conducted by John Kirwan of the University of Bristol on investigators’ attitudes toward sharing data from clinical trials. Three-quarters of the investigators surveyed stated that they were opposed to making original trial data available. They cited several reasons for refusing, such as the difficulty of putting together a dataset and the risk of their data being analyzed with invalid methods. Vickers concluded that investigators are often opposed to the potential use of their data by other independent investigators who may make influential discoveries, and often resist challenges to their conclusions that emerge from new analyses. Investigators may also be reluctant to share their data and code because of the time and effort needed to curate and annotate a dataset and to support other investigators’ access to the material.
The obstacles to sharing data and code may seem particularly daunting in omics research. However, the fields of molecular biology and structural biology widely use web-based genomic and proteomic databases (e.g., GenBank and Protein Data Bank) (Brown, 2003). These databases allow investigators to share DNA and amino acid sequences, as well as protein structure data, and many journals mandate deposition of these data as a condition of publication. Microarray assays do produce an enormous quantity of data (Quackenbush, 2009), with as many as 1 million variant positions on the genome assayed across thousands of samples, and next-generation RNA sequencing methods raise further challenges.
The scheme for Minimum Information About a Microarray Experiment (MIAME) was created and adopted by investigators in this field to improve the annotation of microarray data (Brazma et al., 2001). It established standard, comprehensive annotation requirements that have been adopted by most scientific journals. Data from more than 10,000 microarray studies have been deposited into public repositories designed to archive MIAME-compliant data (Brazma, 2009). MIAME also has stimulated the proteomics and metabolomics scientific communities to develop reporting standards and formats. In fact, the Minimum Information for Biological and Biomedical Investigations (MIBBI) project has cataloged more than 30 different reporting standards for biological and biomedical data (Taylor et al., 2008). Nevertheless, many investigators still fail to provide fully annotated
data (Brazma, 2009; Quackenbush, 2009). Thus, further steps need to be taken to ensure investigators share their data and code. The committee’s recommendations to journals and funders (discussed below) are intended to create additional incentives for investigators to comply with data- and code-sharing norms. Issues of proprietary information can be dealt with by depositing the materials with a responsible third party that can ensure confidentiality and protection of the material (e.g., FDA). The patent system also protects private investments in omics research (SACGHS, 2010) (see Box 2-1 for a more detailed discussion on intellectual property law and related challenges associated with data sharing).
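Completeness checks of the kind MIAME envisions can be automated at submission time, which is one way repositories and journals can enforce annotation norms. The field list below is an illustrative subset inspired by MIAME's required elements, not the authoritative requirement set, and the sample record is invented.

```python
# Illustrative subset of MIAME-style annotation fields; the actual standard
# (Brazma et al., 2001) is more extensive and more detailed.
REQUIRED_FIELDS = [
    "experiment_design",
    "array_design",
    "samples",
    "hybridization_procedures",
    "measurement_data",
    "normalization_controls",
]

def missing_annotation(record: dict) -> list:
    """Return the required fields that are absent or empty in a submission."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

submission = {
    "experiment_design": "two-group comparison",
    "array_design": "ExampleChip v2",
    "samples": ["tumor_01", "tumor_02"],
    "hybridization_procedures": "standard protocol",
    "measurement_data": "raw files deposited",
    "normalization_controls": "",  # left empty: should be flagged
}
print(missing_annotation(submission))  # -> ['normalization_controls']
```

A repository gatekeeper running a check like this could reject the submission above until the missing annotation is supplied, rather than relying on reviewers to notice the gap.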
Institutions and Institutional Leaders
This section describes the roles and responsibilities for institutions that conduct biomedical omics research aimed at improving patient care, including: fostering a culture of scientific integrity, overseeing research, increasing awareness of reporting systems for lapses in research integrity, investigating credible concerns about scientific integrity, monitoring and managing financial and non-financial conflicts of interest, and supporting and protecting the intellectual independence of biostatisticians, bioinformatics scientists, pathologists, and other collaborators in omics research. These responsibilities lie ultimately with institutional leadership. Indeed, any institutional attempt to meet these responsibilities will fail without explicit and visible support and direction from institutional leadership (Schein, 2004). Some of these responsibilities are closely related to the responsibilities of the investigators.
Institutions, such as universities and companies, and the institutional leaders, in collaboration with their investigators, play an essential role in promoting a culture that encourages investigators to act ethically and conduct scientifically rigorous research. Institutions and their leadership bear direct responsibility for complying with existing rules and regulations governing research; overseeing and creating reward systems for investigators; providing training and education to investigators on relevant topics; and producing an environment of trust, openness, and honesty. The integrity of the research enterprise depends on investigators, collaborators, and observers feeling encouraged and supported when they identify and report either routine scientific disagreements or potential breaches of scientific integrity, regardless of their position within the institution. Institutional leaders also have direct responsibility, when concerns are raised, for establishing and supervising a “process of evaluation” of specific research results and claims by their investigators.
In the Duke University case, inadequacies in the institutional oversight processes and a lack of sufficient checks and balances allowed invalid
omics-based tests to progress to clinical trials (see Appendix B). Therefore, the committee believes that explicitly defining the roles and responsibilities of all of the parties involved in omics research is essential to ensuring that omics-based tests are credible and can be used to inform real-world clinical questions. Any overlap in responsibilities can be an added layer of protection to ensure that omics research is scientifically rigorous, transparent, and conducted with proper oversight. Box 5-2 highlights relevant themes from several case studies for institutions to consider.
BOX 5-2
The Duke Case Study
Although the three clinical trials named in the IOM statement of task involved cancer patients, the trials were not overseen by the Duke Cancer Center, which has a substantial infrastructure of biostatistics, bioinformatics, and data management support for all studies conducted under its purview. Rather, these trials were overseen by the Duke Institute of Genomic Sciences and Policy (IGSP). According to Robert Califf, M.D., vice chancellor for clinical research and director of the Duke Translational Medicine Institute, “there was ambiguity” in the lines of authority and oversight in the IGSP during the conduct of the three clinical trials, and there were “numerous missed signals” that there were problems with the research (Califf, 2011b). Moreover, there was discontinuity in the statistical team, which may have contributed to the research team’s failure to follow proper data management practices (Kornbluth and Dzau, 2011). Junior investigators on the team either did not recognize what was wrong or did not feel comfortable expressing their concerns even though whistle-blowing systems were in place. Some members of the laboratory did ultimately come forward with concerns about the research, but only after the University began an investigation (Kornbluth, 2011).
Despite review of the clinical trials by a scientific review committee and approval by the Institutional Review Board (IRB), the trials were initiated using omics-based tests that were not “locked down” or properly validated and turned out to be unreliable. Three years later, the Duke IRB initiated an investigation of the three clinical trials based on the concerns of the National Cancer Institute (NCI) and external statisticians. The IRB formed an external review committee composed of two statisticians to conduct an independent evaluation of the data but did not inform the external reviewers of the scientific questions raised by the MD Anderson biostatisticians and the NCI biostatisticians (Kornbluth and Dzau, 2011).
The reviewers concluded that the omics-based tests were viable and likely to succeed based on the data provided to them by the Duke investigators. The university resumed the three clinical trials following this report (Kornbluth and Dzau, 2011). Sally Kornbluth, vice dean for research, stated in discussions with the Institute of Medicine in August 2011 that she wished, in retrospect, that they had directed the external reviewers to give more in-depth consideration to the specific concerns of the outside parties (Kornbluth, 2011). Califf explained to the IOM committee that the university administration exercised caution in investigating the work of Nevins, partly out of deference to a well-regarded, tenured professor (Califf, 2011a). This illustrates that institutions, in conducting such reviews, face non-financial conflicts (protecting the reputations of the institution or of investigators) in addition to financial conflicts (those of individuals, and those of institutions with patents and spin-off companies).
There was also confusion about what constitutes individual and institutional financial conflicts of interest (COIs). The investigators had an intellectual property interest and financial stake in the omics-based tests they were developing and evaluating. After reviewing multiple versions of the trial protocols, the IOM committee concluded that the consent forms for the clinical trials did not always include disclosure that some of the investigators held patents and had a financial stake in the omics-based tests being studied (which is recommended practice; Kim et al., 2004; OHSR, 2006). The institution itself was invested in the two spin-off companies related to the omics studies; it divested itself of those interests after the misconduct investigation was initiated.
In response to this situation, the university formed the Translational Medicine Quality Framework Committee to make recommendations to university leadership on appropriate oversight policies for omics research being tested in clinical trials (TMQF Committee, 2011).
Commercially Available Tests
Various types of institutions were involved in the development of the omics-based tests discussed in Appendix A, including universities (e.g., MammaPrint) and industry. However, the committee did not explore the institutional roles and responsibilities in these tests because of the lack of publicly available information and limited resources.
NOTE: See Appendixes A and B on the case studies for more information.
The NIH mission clearly includes accelerating translational research. Its mission statement is articulated as “seek[ing] fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce the burdens of illness and disability” (NIH, 2011). This emphasis on translation creates stresses on the research oversight system and requires complex collaborations between clinical and basic investigators. Thus, it is timely for institutional leaders to undertake a careful reappraisal of the research culture in their institution so that a culture of scientific integrity and transparency is promoted, particularly with regard to translational research that could have an impact on patients.
One challenge for institutions is expanding beyond a compliance-based culture, where the focus is on following the letter of the law, to a culture that emphasizes the spirit of the law and highlights the ethical principles underlying research-related behaviors (Geller et al., 2010; Yarborough et al., 2009). Institutions’ interests in preserving academic freedom can also make it difficult to impose rules that promote scientific rigor. Yarborough and colleagues (2009) conducted a workshop to identify strategies used by industries outside of biomedical research to promote ethically based cultures. The workshop participants emphasized the importance of self-regulation above and beyond what is required by the law. Also, the participants stated that, when problems occur (from errors to misconduct), the institutions involved need to conduct root-cause analyses to understand how the system allowed the problems to occur and take steps to correct the systemic problems to avoid similar lapses in the future. These suggestions are particularly relevant to omics research because it is a quickly developing field, and merely complying with existing rules may be inadequate for the responsible conduct of research. Moreover, omics-based test development sweeps across basic research, translational research, clinical research, and regulatory requirements for clinical applications.
Although there is no guidebook detailing exactly what steps and actions institutions should implement to encourage good behavior, a recent report from the National Research Council and Institute of Medicine (NRC and IOM, 2002) identified a list of practices that institutions can engage in to promote responsible conduct in research, including
- Providing leadership in support of responsible conduct in research;
- Encouraging respect for everyone involved in research;
- Promoting productive interactions between trainees and mentors;
- Advocating adherence to the rules regarding all aspects of the conduct of research;
- Anticipating, revealing, and managing individual and institutional conflicts of interest;
- Arranging timely and thorough inquiries and investigations of allegations of scientific misconduct and applying appropriate administrative sanctions;
- Offering education pertaining to integrity in the conduct of research; and
- Monitoring and evaluating the institutional environment supporting integrity in the conduct of research and using this knowledge for continuous quality improvement (NRC and IOM, 2002).
Oversight of Research
Biomedical research falls under the purview of multiple federal regulations. Institutions are responsible for ensuring that research conducted at their facilities or by their investigators complies with these regulations. Three major federal regulations governing human subjects research are: (1) the Common Rule, which protects the safety, autonomy, privacy, and fair treatment of patient-participants in federally funded research conducted on humans, and the cultural groups from which they are recruited; (2) the Health Insurance Portability and Accountability Act Privacy Rule, which protects the privacy of personally identifiable health information created or received by healthcare professionals, health plans, or health care clearinghouses; and (3) the FDA Protection of Human Research Subjects regulations, which protect the rights, safety, and welfare of human subjects involved in research on products that FDA regulates, including drugs and medical devices (which include omics-based tests). The two main regulations governing animal research are: (1) the Animal Welfare Act, which sets standards for the transportation, care, and use of animals in research; and (2) the Health Research Extension Act, which delegates authority to the Secretary of HHS for animals used in biomedical research. Within an institution, multiple oversight bodies are involved in supervising research (see Box 5-3). The exact organization of the oversight bodies and policies is specific to each institution. Ultimately, every institution that conducts biomedical research should ensure that patient-participant safety and privacy are protected and that conflict of interest (COI) and good data management and analysis practices are followed.
- Institutional Review Boards (IRBs): Protect human safety, privacy, and autonomy; ensure informed consent
- Privacy Boards: Protect the privacy of individuals involved in research (an IRB can also serve as a privacy board)
- Scientific Review Boards: Evaluate the science underlying an intervention or test proposed for assessment in a clinical trial
- Data and Safety Monitoring Boards (also called Data Monitoring Committees): Independently monitor clinical trials to ensure the continuing safety of human subjects and the validity and integrity of the data
- Conflict of Interest Committees: Review individuals’ and institutions’ possible conflicts of interest
Institutions are responsible for establishing, supporting, and overseeing the infrastructure and research processes for omics-based test development and evaluation as well as best practices for clinical trials and observational research, including those incorporating omics technologies, and should assure that the evaluation process outlined in this report is followed for omics-based test development and evaluation at their institution (Recommendation 4a). Given the complexity of omics research and omics-based tests, the multidisciplinary nature of omics research, and the potential for conflicts of interest in developing and evaluating tests for clinical use, institutional leaders should pay heightened attention to providing appropriate oversight and promoting a culture of scientific integrity and transparency (Recommendation 4b). These recommendations aim to emphasize and enhance institutional awareness of existing responsibilities to ensure the integrity of the scientific process; this includes optimally organizing oversight bodies. Omics research may introduce novel research designs and other considerations for institutional oversight bodies. To facilitate compliance with FDA’s regulation of new devices (including omics-based tests), the committee recommends that institutional leaders designate specific Institutional Review Board (IRB) member(s) to be responsible for considering investigational device exemption (IDE) and investigational new drug (IND) requirements as a component of ensuring the proper conduct of clinical omics research (Recommendation 4[b][i]). The committee makes specific recommendations to FDA on how to improve this process (discussed below).
IRBs are also required by law to include members “with varying backgrounds to promote complete and adequate review of research activities commonly conducted by the institution,” such as expertise in clinical trial design or omics research.1 In addition, institutions may need to develop infrastructure to protect data provenance in omics research (see Box 5-4 on Clinical Trial Management Systems).
Institutions also are responsible for overseeing COI policies. A recent IOM report defined a COI as “a set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest” (IOM, 2009b, p. 46). The potential for bias and COI to compromise the scientific rigor of a study can be particularly great for industry-sponsored studies. A substantial body of evidence suggests that biomedical studies funded by industry are open to systematic bias (Bekelman et al., 2003; Blumenthal et al., 1996; Rennie, 1997; Stelfox et al., 1998). The new financial COI policy governing all research funded by NIH or other U.S. Public Health Service (PHS) agencies states that institutional officials, rather than the investigator, should determine if payments from drug companies or other outside sources constitute a conflict (Kaiser, 2011).2 This policy requires institutions receiving PHS funding to develop specific COI policies. Commonly accepted practices include requiring the disclosure of financial relationships, the prohibition of certain relationships, and the management of COIs that have been identified. There is also a push in biomedical research to broaden the definition of COI to include secondary interests beyond financial interests, such as personal, professional, political, institutional, religious, or other associations (Drazen et al., 2009, 2010). At the same time, under the Bayh-Dole Act3 and other laws and policies, there is a competing desire for universities to transfer technology to the private sector, which can result in financial profits for the institution and investigator(s). Thus, there is high interest in establishing spin-off companies, such as the two companies4 formed by the relevant investigators at Duke (see Appendix B). This situation adds to the complexity of managing COIs effectively.
BOX 5-4 Clinical Trial Management Systems
Best practices for clinical trials that provide the definitive evaluation and utility assessment of an omics-based test include the use of a clinical trial management system (CTMS), in which data are entered, edited, and stored in a controlled environment where audit trails can protect the data from inadvertent or intentional corruption. A CTMS is now required for cancer centers funded by the National Cancer Institute and for institutes funded through the Clinical and Translational Science Awards (CTSA) of the National Institutes of Health, which includes many leading academic research institutions. Although use of these systems is now widely accepted in clinical research, their use in omics research is not standard practice. Rather, various data management strategies are often developed locally by basic omics research groups where quality control and data integrity and security are not optimal. However, there are commercial software systems that can provide such data management security requirements, and institutions can readily purchase a system that is used by a central data management office for any omics research that is intended to lead to a product for clinical use, and, thus, should go through a rigorous clinical evaluation such as a randomized clinical trial. In addition, the omics data and computational models that were used to develop an omics-based test should be available and auditable to allow external review and validation.
SOURCES: Chahal, 2011; Choi et al., 2005; Philip et al., 2003.
1 Protection of Human Subjects, 45 CFR 46.107 (2009).
2 Responsibility of Applicants for Promoting Objectivity in Research for Which Public Health Service Funding Is Sought and Responsible Prospective Contractors, 76 Fed. Reg. 53256 (2011).
3 Patent and Trademark Act Amendments of 1980. Public Law No. 96-517 (December 12, 1980).
4 Expression Analysis and CancerGuideDX.
The IOM report on COI recommended that some basic policies should be implemented by all institutions, such as a presumption that investigators with COI should not conduct human subjects research, with some exceptions permitted but managed (IOM, 2009b). Steps also need to be taken to manage the COIs of participants on institutional oversight bodies. Individuals on these bodies should not be associated in any way with the research they are supervising. In addition, these individuals should disclose their COIs to their colleagues, other committee members, and their trainees. NIH policy for Data and Safety Monitoring, for example, requires that the individuals charged with monitoring a trial at an institution receiving NIH funding are not associated with the trials and recommends that institutions evaluate and manage any existing conflicts (NIH, 1998).
Institutional COIs are just as important as individual COIs and, thus, must be managed as part of the oversight of research. According to the IOM, an institutional COI arises “when an institution’s own financial interests or those of its senior officials pose risks of undue influence on decisions involving the institution’s primary interests” (IOM, 2009b, p. 218). These COIs are often due to the licensing of intellectual property owned by an institution itself, the institution’s partial ownership of companies arising from its research, or COIs as the result of endowed chairs and scholarships. Institutional COIs also can occur when members of an institution’s leadership have personal financial interests that may affect their decision making on behalf of the institution.
In addition, institutions can be influenced by secondary interest beyond financial interests, such as factors that impact an institution’s reputation. In research, such reputational factors can be quite prominent and difficult to manage, including deference to esteemed and well-funded investigators and the importance to both investigators and institutions of faculty publications in high-impact journals. Few federal laws and regulations oversee institutional COIs. The IOM report on COI recommended that an institution’s board of trustees or an equivalent governing body be given authority to make judgments about institutional COIs (IOM, 2009b).
A key lesson learned from the Duke case study is that COI, though subject to multiple layers of oversight in most institutions, can still affect research integrity (see Appendix B). Thus, the committee recommends that institutional leaders designate an institutional official who is responsible for comprehensive and timely documentation, disclosure, and management of financial and non-financial conflicts of interest, both individual and institutional (Recommendation 4[b][ii]). This official should have the power to act independently of other parts of the institution. Institutions should pay particular attention to non-financial interests such as loyalty to
the institution, as well as promotion policies that reward publication in high-impact journals and grant funding but do not emphasize research integrity. In addition, the confirmation and replication of other investigators’ work are frequently poorly rewarded. Incentives need to be established to ensure appropriate oversight by knowledgeable professionals. Institutions need to be particularly sensitive to institutional COI when investigating scientific controversies (see discussion below). When there are substantial COIs, including non-financial COIs, it may be impossible for the institution to fairly conduct investigations into controversies. All of these considerations may be compounded in multi-institutional research studies, including the need for interinstitutional communication and disclosures.
When credible questions arise regarding the reliability of scientific work, institutions have the primary responsibility for investigating the merits of such questions. This responsibility has two aspects: (1) creating a safe system for reporting potential lapses and (2) conducting an investigation into credible concerns about scientific integrity. The National Academies has stated that whistleblowing can be valuable in preserving the integrity of the research process and should be supported by the entire research community (NAS, 1992). It recommended that institutions establish a central office for handling allegations of potential scientific irregularities in research and develop clear policies for reviewing these allegations. Most institutions have implemented this recommendation and established central offices that are charged with promoting the ethical conduct of research and protecting good-faith whistleblowers from retaliation. However, there is variability across institutions with respect to which office has this responsibility. At the Johns Hopkins University School of Medicine, for example, the Office of Policy Coordination is charged with these responsibilities, whereas at the University of Virginia, the Office of the Vice President for Research is responsible. At the College of William and Mary, the Office of Sponsored Research is responsible. Furthermore, the degree to which investigators are aware of such whistleblowing systems at their institution is unclear, and their use is uneven. This is problematic because individual investigators and trainees are in the best position to identify potential research irregularities (Frankel, 1995; NAS, 1992). At the same time, there may be concerns about unfounded allegations or personal grudges triggering troubling inquiries, as well as confusion about the acceptability of certain practices. 
Many institutions have designated ombudspersons whom investigators can consult when they need guidance about research integrity (IOM, 2002).
The regulations of the Office of Research Integrity (ORI) state that each extramural entity that applies for a biomedical or behavioral research grant or cooperative agreement should
establish policies and procedures that provide for “undertaking diligent efforts to protect the positions and reputations of those persons who, in good faith, make allegations” in accordance with the Code of Federal Regulations (42 CFR Part 50.103[d]). This helps ensure that all individuals, regardless of rank, feel comfortable raising questions about the integrity of a research study (Geller et al., 2010). However, institutions inherently have a strong incentive to discourage whistleblowing or to not fully investigate allegations of research irregularities; institutions risk forfeiting industry support and grants, harming their reputation, garnering negative publicity, and being subject to retaliatory litigation when there are breaches in research ethics (Rhodes and Strain, 2004). These negative incentives illustrate again the influence of non-financial COI. Many investigators perceive the reluctance of institutions to investigate potential misconduct and fear negative consequences. Thus, a major barrier to whistleblowing is the perceived threat of institutional recriminations against whistleblowers (NAS, 2009). In addition, whistleblowers might harbor guilt about coming forward too late if they were aware of inappropriate activity for some time but did not report it. They might fear being accused of covering something up or of being held legally liable.
A recent survey of 4,298 investigators at 605 institutions with NIH extramural research funding asked respondents about observed instances of likely misconduct over a 3-year period from 2002 to 2005 (Titus et al., 2008). Eight percent of the respondents indicated that they had observed or suspected investigators in their own department of committing research misconduct. Only 58 percent of the incidents, however, were reported to officials at their institutions (either by the respondent or someone else); 37 percent of the incidents were not reported by anyone; and in 5 percent of the cases, the respondent did not know if the incidents were reported. An earlier survey conducted by the International Society for Clinical Biostatistics also found that a majority of respondents were aware of instances of medical fraud, but many respondents did not know whether their organization had a formal system for reporting suspected fraud (note: the response rate in this survey was only 37 percent) (Ranstam et al., 2000). In a multimodal needs assessment conducted at the Johns Hopkins University to inform the development of research ethics education and services, a major theme that emerged was fear of punishment for reporting research breaches and the difficulties of the power differential between the levels of the organizational hierarchy (Geller et al., 2010). Fifteen percent of the respondents in this assessment indicated they would not feel comfortable reporting suspected breaches in research integrity out of fear of professional repercussions.
Thus, the committee recommends that institutional leaders designate an institutional official who is responsible for establishing and managing a
safe system for preventing, reporting, and adjudicating lapses in scientific integrity, to enhance patient safety (Recommendation 4[b][iii]). The lines of reporting within the whistleblowing system should be independent from the institutional offices responsible for developing intellectual property in order to minimize the pressure of any financial COIs. Academic medical centers may be able to draw upon their growing experience with “blame-free reporting” of hazards to improve patient safety and quality care to implement corresponding systems-type oversight and inquiries for basic, translational, and clinical research (Pronovost et al., 2003).
Investigating Scientific Controversies
Little guidance is available on how strong the allegations and their basis should be before triggering an institution’s investigation of a scientific controversy. There also are no existing criteria to inform an institution’s decision to initiate an internal versus external review of a potential scientific controversy. As discussed in the IOM report on COI, “no decision maker in an institution is fully free of conflict in the case of institutional conflicts of interest.” The Duke leadership recognized this problem and stated that in some instances an institution’s COI may be too substantial to conduct an effective internal investigation. In such cases an external institution should conduct the investigation (see Appendix B) (Kornbluth, 2011). Ultimately, these decisions depend on the circumstances of each case and require careful attention and judgment by the institutional leadership.
The committee recommends that institutional leaders designate an institutional official who is responsible for establishing clear procedures for response to inquiries and/or serious criticism about the science being conducted at the institution (Recommendation 4[b][iv]). For example, this individual would be the responsible official for journals to contact with a serious concern about a manuscript, to ensure that relevant information is provided to external investigators to help resolve issues of methods and data transparency, and to inform funders when an investigation of potential scientific misconduct is initiated. ORI, within the Department of Health and Human Services, will become involved in an institution’s investigation only if the scientific controversy rises to the level of misconduct (ORI, 2011). All institutions receiving federal grants are required to have assurances on file with ORI stating that they have developed and will comply with an administrative process for responding to allegations of research misconduct. When problems do arise, ORI monitors the institution’s investigation of the misconduct, but ultimately the institution is responsible for addressing and resolving any controversies (NIH, 2010).
These investigations require financial and other resources, and sometimes, outside expertise. Adequate resources and time should be made available for an investigation to be done thoroughly and fairly. The review
should be conducted by individuals with the necessary expertise, and these experts should be completely independent in their review. Access to all relevant information from within and outside the institution, such as information from external investigators who have expressed concerns and from funders and journal editors, is essential to the success of the review. If an institution has a COI affecting the case, special protections should be in place to ensure that the review is unbiased. One example is to use only external experts who have free access to all data relevant to the case. In addition, some institutional COIs, such as substantial financial investment in the research or the potential for a high-impact breakthrough that can greatly enhance the institution’s stature, should be more carefully managed and acknowledged during investigations and may indicate the need for the investigation to be conducted completely independent of the institution.
Biostatisticians, Bioinformatics Scientists, Pathologists, and Other Collaborators
Omics research is multidisciplinary and requires effective teamwork. Institutions play a pivotal role in promoting teamwork, training faculty in effective team-based practices, and in rewarding collaborative accomplishments (Altshuler and Altshuler, 2004). Institutions also have the responsibility to ensure that the research team includes individuals with all of the required expertise. This section emphasizes the important role that biostatisticians, bioinformatics scientists, and pathologists play in omics research. However, many of the issues discussed below apply more broadly to the various collaborators who are involved in omics research and test development, including experts in omics technology and clinical trials.
The complementary disciplines of biostatistics and bioinformatics are both required in order to analyze and interpret the large multidimensional datasets used in omics research. Although there is overlap in the principles and methods of these two disciplines, they are distinct. Biostatisticians are trained in experimental design and data analysis. Bioinformatics faculty focus on developing fast, efficient algorithms for data reduction, data mining, and literature search techniques, and formulating biologically informative annotations relating to DNA or RNA sequence, gene or protein expression, and the interaction of pathways, networks, phenotypes, and druggable targets. Biostatisticians and bioinformatics scientists publish in distinct sets of journals. In recent years, biostatisticians have tended to focus their careers either on classical clinical research, including clinical trials, or on the newer fields of genomics and statistical genetics. Most biostatisticians do not possess expertise and experience in both realms. Given the nature of omics research and omics-based clinical trials in particular, it is important that biostatisticians with expertise in both statistical genomics
and clinical trials be involved, as well as individuals with bioinformatics expertise.
The shortage of these quantitative scientists is well known, and the gap between supply and demand has been growing since the genomics era began (DeMets, 2009; DeMets et al., 1998). Reasons for this shortage are numerous, but include the fact that the supply of Ph.D.-trained experts in these fields has remained relatively constant for the past two or three decades while the demand has skyrocketed. Unfortunately, NIH, which funds most of the doctoral training programs in this country, does not have a unified approach to training biostatisticians and bioinformatics scientists. Rather, training grants for these fields are scattered across the disease-oriented institutes, and the review of these training grants does not always include peers who are quantitative scientists. Further compounding the professional staffing crisis is the new set of challenges for the design, conduct, and analysis of research in the era of genomics, much like that experienced in the field of clinical trials four decades ago. Biostatisticians and bioinformatics scientists need to develop new experimental designs and methods for analysis because existing methods are not optimal or even adequate for current challenges (Apweiler et al., 2009; Mischak et al., 2007; Simon, 2008, 2010). Trained bioinformatics scientists are also needed to perform the complex analyses required by omics research, a collaborative task that may not promote career advancement.
Investigators developing new biomarkers for clinical use often do not include in their collaboration teams the pathologists and clinical laboratory scientists with expertise in proper methods for tissue diagnosis and selection for testing, test development, test validation, and ongoing test performance in compliance with clinical laboratory standards. In the worst of circumstances, investigators are not aware of the benefits of collaboration with pathology experts and the contributions such faculty can make to the translational process of omics-based test development, validation, and implementation in clinical use. Alternatively, pathologists might be viewed as technicians who simply perform tests, rather than physicians with knowledge and experience who can facilitate the translational aspects of an omics-based discovery into a clinical test. The inclusion of pathologists or clinical laboratory scientists in the proper validation of a new omics-based test prior to use in a clinical trial to direct patient care enhances patient safety and the quality of the testing during the clinical trial. Thus, the committee recommends that institutions that conduct biomedical omics research, including test development and clinical trials, should train, recognize, and support the faculty-level careers of individuals from the multiple collaborating disciplines, including biostatistics, bioinformatics, pathology, omics technologies, and clinical trialists (Recommendation 4c).
The critical roles for these disciplines in omics research should compel institutions to develop analytical units, sections, or departments for biostatistics, bioinformatics, and pathology faculty, staff, and trainees. Involving these faculty and staff only at selected steps in the omics-based test development process is inadequate. Rather, they need to be viewed as equal partners on the research team. The committee recommends that biostatisticians, bioinformatics scientists, and pathologists, as well as other collaborators in omics research, be treated as equal co-investigators and co-owners of responsibility (Recommendation 4[c][i]). In the NRC report Catalyzing Inquiry at the Interface of Computing and Biology (NRC, 2005), devaluation of the contributions of collaborators from different fields was discussed as an important cultural issue for research taking place at the so-called BioComp (biomedical–computational) interface. This concept can be extended to biostatisticians, bioinformatics scientists, pathologists, and other collaborators who participate in omics research. As an example, the National Cancer Institute (NCI)-supported Cancer Centers and NIH-supported Clinical and Translational Science Award (CTSA) units at many of the leading academic research institutions provide support for both biostatistics and bioinformatics cores (Berry, 2012; DeMets, 2009). In doing so, they have created an expectation and tradition of including these faculty members as collaborators, starting with the experimental design.
The committee also recommends that institutions ensure that biostatisticians, bioinformatics scientists, pathologists, and individuals from the other multiple disciplines that collaborate in omics research are represented on all relevant review and oversight bodies within the institutions (Recommendation 4[c][ii]). Omics-based test development with serious design or analysis flaws will ultimately fail the clinical validation process and will waste investigator time and resources. Minimizing false-positive leads in omics research is in the best interests of the investigators, institutions, and, most importantly, patients. The same is true for the test validation steps. Trials with serious design flaws or incorporating tests that have not yet been fully defined and validated should not be approved or implemented. If conducted, they may produce data that erroneously lead to an omics-based test being used to guide clinical decision making, with potential adverse consequences for patients. As omics-based grant applications and clinical protocols are being prepared, biostatistics, bioinformatics, and pathology faculty should be part of both the research team and the review team. This will ensure that only the most appropriate and most rigorously designed and analytically sound plans and evaluation processes are being proposed. Funders and journals that are involved in omics research also need to ensure that biostatistics, bioinformatics, pathology, and other faculty collaborating in omics research are involved in reviewing grant proposals and submitted manuscripts.
Biostatistics, bioinformatics, pathology, and other faculty collaborating in omics research ideally should be part of a larger unit, such as a section or department, where they can be mentored. Being part of a larger unit also provides them with support from a more senior leader so they can contribute all of their expertise to the omics research effort without feeling pressured to deviate from best practices. Faculty, and especially non-faculty staff, working in isolation may not know how to defend themselves when they are asked to conduct incomplete or flawed analyses that would create biased or misleading results and interpretations. Furthermore, such isolation does not foster academic development or promotions and should be avoided. The committee recommends that institutions ensure that individuals from the multiple disciplines that collaborate on omics research and test development are intellectually independent, preferably reporting to an independent mentor and/or department chair as well as to the project leader (Recommendation 4[c][iii]). The key concept here is to ensure that collaborators in omics research can act as independent scientists in applying their specific analytical expertise. However, they should be heavily integrated into their scientific research team in order to be effective. This arrangement enhances independence and reduces the risk of inappropriate pressure and COI.
Multiple types of organizations fund omics research, including government agencies, for-profit institutions, private foundations, public nonprofit organizations, and international organizations. NIH is by far the largest funder of research at academic and independent research institutions in the United States. The principal roles and responsibilities of funders of omics research are the same regardless of the type of entity. However, international funding organizations are outside the scope of this report.
Funders have influence over the conduct of research because they determine which projects are funded, and thus, which projects ultimately are conducted. They have a responsibility to sponsor scientifically rigorous research and to develop policies that promote the responsible conduct of research among their grantees. Funders also can use their relationship with investigators and institutions to encourage these parties to adopt and adhere to standards and best practices, such as sharing data and code. For example, NIH and the National Science Foundation now require data-sharing plans for large grants (NIH, 2010; NSF, 2001). The challenge is to balance the funder’s interest in promoting innovative science and advancing a field of study with the need for oversight. Funders may find that fulfilling an oversight role is particularly difficult when financial support for a
specific project comes from multiple funders, and it is unclear which aspect of a project any given funder is supporting.
This section presents the roles and responsibilities of funders. It focuses primarily on NIH, and specifically NCI for the Duke case, because more information is known about the practices of NIH than about those of for-profit institutions, private foundations, and public nonprofit organizations. The committee makes several recommendations for funders of omics research that address the availability of data and code, the support for data repositories and test validation, and the role of funders in responding to scientific controversies. Box 5-5 highlights themes from the case studies that are relevant to funders.
Role and Responsibility in Research
Funders, including those who support omics research, are responsible for screening prospective research projects and for monitoring funded projects. The peer review processes that funders use to select projects to support generally rely on committees made up of scientific peers. NIH, for example, uses a dual-level peer review process. The first-level review is conducted by a Scientific Review Group (called a study section), which is composed of non-federal scientists and lay members, and focuses on the scientific merit of the proposal. The reviewers are directed to consider the following criteria in evaluating a proposal: significance, investigator expertise, innovation, approach, and research environment. The second level of review is performed by NIH staff within the specific institute of NIH that is considering the proposal, and assesses whether the proposal is consistent with the institute’s programmatic and funding priorities. The institute directors make the final funding decisions based on the advice of reviewers (NIH, 2010). A similar process is used by many non-federal funding agencies. At the American Cancer Society, for example, proposal review is conducted by a peer review committee made up of 12 to 25 cancer investigators and non-scientists. Each application for funding is assigned to at least two committee members who consider the application’s scientific merit, originality, and feasibility; the qualifications and expertise of the investigative team; the facilities and resources available for the project; and the potential of the research to improve cancer treatment (ACS, 2011).
In omics research, whether funded by a federal or non-federal entity, it is important that the peer review process involve biostatisticians and bioinformatics scientists who can assess the research methods, including the quality of complex biomarker trial designs if necessary, and the proposed data collection and analysis plans (see discussion on biostatistics/bioinformatics in the institutions section above). In addition, some very large grants, such as program projects or Center grants, have subprotocols embedded within them or developed during the life of the grant. These subprotocols also should be reviewed by biostatisticians and/or bioinformatics scientists, as appropriate, if they involve omics research.
The Duke Case Study
This case highlights significant barriers that funders face with regard to effective oversight and communication. For example, the National Cancer Institute (NCI) has a policy that requires investigators to make their data and code publicly available at the time of publication (NIH, 2010), but that policy was not followed by Duke investigators. It is unclear whether the other funders of the clinical trials had similar policies in place, but it is clear that, where such policies exist, funders face a challenge in overseeing compliance with them. In 2009, NCI contacted Duke regarding its concerns about the validity of the omics-based tests being used in the three clinical trials named in the IOM statement of task. NCI initially relied on Duke’s investigation of the concerns. However, NCI was unable to review the university’s charge to the external reviewers, or the draft report generated by the external reviewers, to ensure that it was responsive to NCI’s concerns. Duke had told NCI it would notify the sponsors of the trials about the actions it was taking when it initially suspended those trials. NCI’s review of its trials databases and www.clinicaltrials.gov did not reveal NCI sponsorship for any of the three trials (McShane, 2010b,c), but when the NCI staff determined in April 2010 that it was providing partial funding through an R01 grant to Potti for the tests for sensitivity to cisplatin and pemetrexed, it requested the resulting data and computer code necessary to reproduce results in the paper cited in the grant as providing validations for the cisplatin and pemetrexed predictors (Hsu et al., 2007). NCI staff evaluated the cisplatin test and were unable to reproduce the results (McShane, 2010a). NCI then asked Duke to produce the original raw data that would reproduce the findings in the papers. On October 22, 2010, Duke notified NCI that multiple validation datasets associated with the cisplatin predictor were corrupted (McShane, 2010a). Thus, the trials were closed, and retraction of the paper was initiated.
Commercially Available Tests
Various types of funders supported the omics-based tests described in Appendix A, including government funders, private nonprofit organizations, and industry. However, the committee did not explore the roles and responsibilities of funders in the development process of these tests because of the lack of publicly available information and limited resources.
NOTE: See Appendixes A and B on the case studies for more information.
Funders also should have a method to track a study once it is funded. This is important because it allows funders to oversee the research process and ensure that investigators and institutions are applying best research practices. There are several mechanisms for tracking research studies that are mandated by law. The Federal Funding Accountability and Transparency Act (FFATA) created the website www.USASpending.gov, which provides information on federal grants and contracts over $25,000. Similarly, www.clinicaltrials.gov is a repository of information on most clinical trials involving a drug, biological product, or device (see discussion on trial registration below). However, FFATA is limited to federal contracts and grants, and www.clinicaltrials.gov only includes clinical trials (not other study designs). Thus, privately funded omics studies that are not clinical trials are not included in either of these repositories.
Most funders use more active methods of monitoring their funded studies to address these gaps and to provide an additional level of oversight. For example, NIH conducts active monitoring by reviewing progress reports and correspondence from grantees, requiring audits, and conducting site visits (NIH, 2010). Funders with more limited budgets are likely to rely heavily on progress reports as their mechanisms of oversight (ACS, 2011; PCF, 2011; PhRMA Foundation, 2011). Some also may require meetings with the grantees to monitor the research (PCF, 2011).
Data and Code Availability
Many funders have policies requiring grantees to make their data and code publicly available prior to publication (Sherpa, 2011). Requiring investigators to share their data and code can maximize the societal benefit resulting from a funder’s support and contributions to a project. The policies of the Wellcome Trust, for example, specifically state that sharing data and code leads to: (1) faster progress in translating research results into practices and products that improve human health, (2) better value for the money, and (3) higher-quality science (Wellcome Trust, 2011). The requirement to share data and code is also consistent with the 2003 NRC report on data sharing, which recommended that sponsors of research “clearly and prominently state their policies for the distribution of publication-related materials and data” (NRC, 2003, p. 11). It also recommended that sponsors provide the recipients of research grants with the financial resources needed to support the dissemination of data and code.
Funders’ existing policies on data and code sharing vary in stringency and level of detail. NIH policy endorses the “timely release” of final
research data (i.e., at the time of publication) (NIH, 2010). Unfortunately, the phrase “final research data” is ambiguous, because a whole series of publications over many years may be based on ongoing analyses before the investigators deem the data “final.” NIH policy also requires investigators applying for projects with direct costs of $500,000 or more in a given year to address data sharing in their applications. NSF’s data-sharing policy specifically addresses the availability of algorithms and code and requires investigators to share any corresponding software and materials that are necessary to interpret the data (NSF, 2001). The UK Medical Research Council’s policy states that when investigators believe the data arising from their studies are not amenable to sharing, investigators should provide an explicit explanation in their proposal for not making the data available (Lowrance, 2006).
However, many funders still do not require grantees to share their data and code. A group of 33 research universities with an interest in open access have created a website that tracks research funders’ policies on data sharing (Sherpa, 2011). Of the 80 funders’ policies assessed on this website, only 18 have data-archiving policies. In addition, many funders with data-sharing policies do not enforce them (Piwowar, 2011). The NRC report on data sharing recognized this problem and recommended that funding organizations have published procedures for resolving problems of non-compliance with data sharing (NRC, 2003).
The committee recommends that funders require investigators to make all data, metadata, prespecified analysis plans, computer code, and fully specified computational procedures publicly available and readily interpretable either at the time of publication or, if not published, at the end of funding, and funders should financially support this requirement (Recommendation 5[a][i]). If the investigators make this information available at the end of funding, it could be held in escrow for 2 years to allow the investigators an opportunity to publish their research. Issues of proprietary information can be dealt with by depositing the materials with a responsible third party that can ensure confidentiality and protection of the material. Funders also should provide continuing support for independent repositories to guarantee ongoing access to relevant omics and clinical data (Recommendation 5[a][ii]). Although the methodology and funding for making data publicly available from sponsored research is still under discussion, such efforts are needed to move the field of omics research forward. Transparency always is healthy in research, and the more individuals who can examine the available data, the more robust the conclusions will be. NIH Director Francis Collins has declared it an NIH priority that the genomic data generated be accessed and harvested (Collins, 2010). Omics-based tests, and the data on which they are based, clearly fall in this realm.
Funding of Test Validation
A crucial step in developing an omics-based test to guide patient management in a clinical trial setting is appropriate validation in a CLIA-certified laboratory (described in Chapter 3). A candidate omics-based test may be applied to patient samples from a completed trial or even from an ongoing trial as part of the validation process as long as the testing does not interfere with the conduct of the clinical trial or impose undue hazards to patients. Before an omics-based test is considered ready to direct patient management in a clinical trial, the investigators from the discovery phase should identify a CLIA-certified laboratory, either a commercial or an academic medical center clinical laboratory, to confirm that the candidate omics-based test is stable, reproducible, and validated appropriately for the intended study design for assessment of the clinical utility of the test (see Chapter 4). Investigators are responsible for arranging for the independent validation and, very importantly, sharing the evidence and methods necessary for a CLIA-certified laboratory to validate the candidate omics-based test in preparation for use in a clinical trial to direct patient management. However, the cost for validation of the candidate omics-based test in a CLIA-certified laboratory must be funded. In addition to the test validation described in Chapter 3, confirmation of the discovery phase findings (see Chapter 2) that are the basis of the candidate omics-based test may be worthy of independent replication by another research laboratory. Thus, the committee recommends that funders should support test validation in a CLIA-certified laboratory (as described in Chapter 3) and consider the usefulness of an independent confirmation of a candidate omics-based test prior to evaluation for clinical use (Recommendation 5[a][iii]).
If an independent confirmation is funded, the study should be conducted using fully independent specimens or datasets, to provide a fully independent test of the omics-based discovery. All data, metadata, prespecified analysis plans, code, and fully specified computational models of the independent study should again be made publicly available either at the time of publication or at the end of funding, and funders should financially support this requirement (Recommendation 5a above). Confirmation of a candidate omics-based test either by a CLIA-certified laboratory in preparation for use in a clinical trial or by an independent research laboratory is particularly important for complex omics-based tests because of the complexity and quantity of the data, the high likelihood of overfitting, and the great potential for investigators’ bias to influence the results. The adage “trust but verify” is appropriate for this setting.
Responding to Scientific Controversies
The committee recommends that funders should designate an official to alert the institutional leadership when serious allegations or questions have been raised that may warrant an institutional investigation; if the funder has initiated that question, then the funder and institution should communicate during the investigation (Recommendation 5[a][iv]). As stated above, the committee recommends that all institutions that conduct omics research identify an administrator or office that outside parties, such as funders, can approach with serious concerns about the validity of work conducted by investigators within the institution, including problems that do not rise to the level of misconduct. The committee also recommends that institutions should inform the funder(s) when investigations are initiated on a study they have funded based on other parties’ concerns regarding the integrity of that research. When the funding agency requests the review, it may ask the institution to conduct either an internal or an external review. The research institution and funding institution should communicate effectively to ensure that the funder’s specific concerns are fairly heard and considered. The funders should be prepared to evaluate the research institution’s review to decide if they believe it is thorough and convincing. In some cases, the funding institution may ask for its own appointed external reviewers to at least review the institution’s report. Funding institutions may need to set aside a small fund to support investigations of serious allegations.
In the case of the Duke clinical trials evaluating omics-based tests, funding came from multiple sources including NCI and the Department of Defense (DOD). Yet the committee could find no evidence of communication between NCI and DOD. To address this problem, the committee recommends that funding agencies should establish lines of communication with other funders to be used when serious problems appear to involve interdependent research sponsored by another funder along the omics-based test development process (Recommendation 5[a][v]). Establishing such communication channels should help to alleviate confusion when multiple funders support various stages of omics-based test development. The committee recognizes that it will be easier to establish lines of communication between federal funders of omics research than it will be for the many private funders of omics research. However, all funders of omics research have a responsibility to communicate with each other. The Interagency Oncology Task Force (IOTF) is an example of federal agencies communicating with each other and could serve as a model for communication among funders of omics research. Through the IOTF, NCI and FDA are jointly sponsoring fellowship programs to train scientists in both preclinical and clinical research, as well as in FDA’s policies and regulations that
govern research (IOTF, 2011). In addition, the committee recommends that federal funders of omics-based translational research have the authority to exercise the option of investigating any research being conducted by a funding recipient after requesting an investigation by the institution (Recommendation 5b). The investigation by NCI is what led to the discovery of the underlying problems with the data in the Duke University case study (see Appendix B).
Two federal agencies have regulatory authority relevant to omics-based tests: FDA and the Centers for Medicare & Medicaid Services (CMS). FDA oversees the marketing of devices, including in vitro diagnostics (which encompass most omics-based tests).5 However, FDA has exercised enforcement discretion with regard to laboratory-developed tests (LDTs), meaning it does not oversee the development of tests that fall into this category. Laboratories that provide LDT services are regulated by CMS under the Clinical Laboratory Improvement Amendments6 (CLIA) to ensure the quality of the laboratory testing services.
It is challenging for all parties involved in the development of omics-based tests (e.g., investigators, institutions, and IRB committees) to understand and correctly navigate FDA’s current oversight system. This is primarily due to the rapidly changing technological landscape and to FDA’s longstanding and unclear practice in its use of enforcement discretion, as described in Chapter 4. In recent years, FDA has taken some initial steps to clarify regulatory policy for these tests, but more could be done to guide investigators. For example, FDA developed draft guidance on In Vitro Diagnostic Multivariate Assays (FDA, 2007), but that guidance was never finalized, and FDA is now moving away from that terminology. Box 5-6 highlights the great variability in mechanisms that omics-based test developers have used to bring a test to the market. However, uncertainty about FDA’s enforcement discretion does not excuse test developers who fail to submit to FDA, for discussion and consideration of an IDE, a test that directs therapy and poses significant risk to the health, safety, or welfare of trial participants.
In order to enable investigators and institutions to have a clear understanding of their regulatory responsibilities, the committee recommends that FDA develop and finalize a risk-based guidance or regulation on bringing omics-based tests to FDA for review and on the oversight of laboratory-developed tests (Recommendation 6a). Specific areas that need clarification include the circumstances when: (1) an omics-based test qualifies as an LDT, (2) an omics-based test and computational model qualify as a device, (3) devices are exempt from submission and review by FDA, and (4) devices are considered to pose significant risk (e.g., does this determination take into consideration the state of health of the intended patients?). Clarification of enforcement discretion in the LDT arena, particularly for highly complex omics-based tests, is of paramount importance.
5 The Medical Device Amendments of 1976. Public Law No. 94-295 (May 28, 1976).
6 The Clinical Laboratory Improvement Amendments of 1988. Public Law No. 100-578 (October 31, 1988).
The Duke Case Study
In 2009, FDA sent a letter to the Duke investigators stating that the omics-based tests being studied in the three clinical trials named in the IOM statement of task needed to go through the investigational device exemption (IDE) process (Chan, 2009). In response, the investigators made some changes to the protocol of the studies and contacted FDA for further clarification about whether an IDE was still required (FDA, 2011b; Potti, 2009). The Duke Institutional Review Board (IRB) determined that an IDE was not needed because it did not receive a response from FDA (FDA, 2011b). However, in retrospect, the Duke IRB recognized that an IDE should have been obtained for the omics-based tests because the tests were used to direct patient management in the clinical trials (FDA, 2011b).
Commercially Available Tests
A review of the six commercially available tests discussed in Appendix A demonstrates that companies have pursued both laboratory-developed test (LDT) and FDA pathways for translation of an omics-based test. The availability of multiple pathways indicates a lack of clarity and consistency in the regulatory requirements for omics-based tests. Five of the commercially available tests that the committee examined are performed exclusively as LDTs by each company’s proprietary laboratory, which has certification under the Clinical Laboratory Improvement Amendments of 1988 (CLIA).a Two companies did not seek FDA clearance and market their tests as LDTs: Genomic Health (Oncotype DX) and CardioDx (Corus CAD). Four companies received FDA 510(k) clearance of their tests: Agendia (MammaPrint), Pathwork Diagnostics (Tissue of Origin), Vermillion (OVA1), and XDx (AlloMap).
In several of the case studies, the company and FDA held a pre-IDE meeting to determine whether an IDE would be required for the test under development and validation. FDA determined that an IDE was not needed for the AlloMap and Tissue of Origin tests because the tests were not directing patient therapy in the studies proposed to assess the tests. Physicians can now use these tests for that purpose, however.b Agendia reported that it received an IDE for MammaPrint that helped clarify the process and requirements for the de novo 510(k),c and Vermillion reported that it received an IDE for OVA1.d Two ongoing prospective studies direct patient management on the basis of the Oncotype DX Recurrence Score. For both trials, information required for approval of investigational use of Oncotype DX in the trial was submitted as part of an investigational new drug application to FDA.e Regardless of which pathway is taken to market, consultation with FDA can be beneficial. For example, the developers of the OVA1 test sought FDA input, and this early dialogue with FDA prompted Vermillion to include two different cut-off values for the test, depending on a patient’s menopausal status (Fung, 2010).
NOTE: See Appendixes A and B on the case studies for more information.
a OVA1 is performed exclusively by Quest Diagnostics, which is subject to CLIA certification (Quest Diagnostics, 2011). Currently Pathwork Diagnostics offers Tissue of Origin exclusively through its CLIA-certified laboratory, but is developing an in vitro diagnostics test kit for other laboratories (Pathwork Diagnostics, 2010).
b Personal communication, Mitch Nelles, XDx, October 12, 2011; personal communication, Ed Stevens, Pathwork Diagnostics, October 18, 2011.
c Personal communication, Laura van ‘t Veer, November 28, 2011.
d Personal communication, Scott Henderson, Vermillion, November 1, 2011.
e Personal communication, Lisa McShane, National Cancer Institute, February 9, 2012.
Two specific areas where FDA operations could be more transparent include (1) the pre-IDE process and (2) the quality reporting system. FDA’s pre-IDE process often is very helpful to test developers by providing clarity about the regulatory requirements for marketing an omics-based test. For example, the evidence supporting an omics-based test derived from a prospective–retrospective trial can be very strong and does not require an IDE because the test is not used to direct choice of therapy. A pre-IDE meeting resulting in an agreement that this design is sufficient for market (or an agreement that this design would be part of a package of data submitted for FDA review) would be extremely beneficial to test developers. FDA could improve its transparency by continuing to use the pre-IDE process, making it as widely available as possible within reasonable resource constraints, and publicly advertising its willingness to hold pre-IDE meetings. However,
the committee recognizes that this type of pre-IDE agreement may be challenging in omics research because the science is changing rapidly.
FDA also could improve its transparency by clarifying when its quality system requirements and manufacturers’ quality management systems (QMSs) are required. These systems provide a high level of assurance about a product’s production integrity and safety. However, the requirements are quite demanding, and many academic laboratories do not meet the requirements. In addition, many aspects of test development (e.g., analytical validation) are needed regardless of whether manufacturers go through the FDA process (and use a QMS) or whether the manufacturers develop a test in a CLIA-certified laboratory as an LDT.
Communication of IDE Requirements
The committee recommends that FDA communicate the IDE requirements for use of omics-based tests in clinical trials to the Office of Human Research Protections (OHRP), IRBs, and other relevant institutional leadership (Recommendation 6b). IRBs often are less familiar with the IDE requirements than with the IND requirements; thus, clarification and education by FDA about IDE requirements are necessary. This communication could be conducted online and via technologies such as webcasting in order to reduce FDA’s cost and time requirements. However, this educational outreach effort should be adequately resourced and updated on a regular basis.
Although omics technologies are being developed in a rapidly changing environment, it would be helpful to the scientific community if FDA developed guidance about emerging technologies in advance of the technologies coming to the market. The committee recognizes that this will be challenging. When it is impossible to have FDA guidance keep pace with technological advances, the committee encourages FDA to organize forums with members of the scientific community and have an open and publicly accessible dialogue, as FDA has done on other matters. This will provide test developers with some insight into FDA’s thinking and potential next steps.
The challenges faced by FDA with emerging technologies are particularly salient with respect to companion diagnostic tests. The committee applauds FDA for its recently issued guidance on companion diagnostics (FDA, 2011a). However, where possible, further clarification of the relationship between IND and IDE requirements in the presence of a combination product or a companion diagnostic test would be of assistance to the scientific community.
Data compiled for the Wall Street Journal by Thomson Reuters suggest that the number and percentage of papers that journals are retracting have increased significantly in recent years, from 22 retractions in 2001 to 339 in 2010 (Naik, 2011a). A larger percentage of these retractions has occurred in high-impact journals than in low-impact journals (Cokol et al., 2007). It is unclear whether this increase in retractions is due to an increase in mistakes and inappropriate methodologies by investigators, to the growing number of articles published in an increasing number of journals, or to increased vigilance by journals and the scientific community. Regardless, journal editors play a key role in overseeing the quality of published research, including omics research.
Journal editors have a responsibility to use due diligence to ensure that the information reported in an omics study is consistent with what the investigators actually did and that the conclusions are supported by the evidence. Journal articles of omics studies should accurately document the steps in the omics-based test development process in enough detail to allow other investigators to reproduce the methods and results. The challenge for journal editors is ensuring that this standard is met and that omics studies are conducted in a transparent and scientifically rigorous manner. In addition, journals play a significant role in overseeing and minimizing the effects of bias and COI in published research. The Manual of Style: A Guide for Authors and Editors (MoS), for example, outlines several requirements that journals could implement to prevent bias and COI from undermining the credibility of reports containing original data (Fontanarosa et al., 2011; MoS, 2007).
If patients and clinicians are to rely on omics studies and tests to guide treatment decisions, journals need to ensure that the omics studies they publish adhere to best practices. Through their editorial policies, journals can institute quality control measures for assessing the merit of articles submitted for publication. The instructions for authors also can direct authors to abide by certain standards that advance science as a condition of publication (CSE, 2009). Specific policies may include requiring registration of clinical trials involving omics-based tests in www.clinicaltrials.gov; ensuring data and code availability; protecting the scientific integrity of published research; and developing a process to respond to significant scientific concerns. These requirements for publication are not unique to omics research. However, the importance of journal policies that promote quality and transparency is magnified in omics research, where the methodologies are highly complex and rapidly advancing. Box 5-7 highlights themes from the case studies for journals. The challenges for the reproducibility of science are increasingly
BOX 5-7 Themes from the Case Studies for Journals
The Duke Case Study
Keith Baggerly and Kevin Coombes, two MD Anderson biostatisticians who tried to reproduce the research results of Potti and Nevins, submitted letters to the journal editors of Nature Medicine, the Journal of Clinical Oncology (JCO), and Lancet Oncology raising concerns about the omics-based tests being studied in the three clinical trials named in the IOM statement of task. In general, correspondence with the journals and letters to the editor did not resolve the questions about reproducibility, because the information contained in Potti and Nevins’ responses still did not enable investigators to reproduce the results, and the journals declined to pursue the issues further following additional inquiries by Baggerly and Coombes. Nature Medicine published their letter along with the authors’ reply (Coombes et al., 2007; Potti and Nevins, 2007); JCO published one of their letters and the authors’ reply (Baggerly et al., 2008; Dressman et al., 2008), but declined to publish their letter regarding the Hsu et al. (2007) article; Lancet Oncology rejected their letter (Baggerly, 2011). The Duke investigators maintained in their replies that, with only a few exceptions, the errors were clerical errors that had no impact on the actual tests developed or the reported test performance results (Dressman et al., 2008; Potti and Nevins, 2007). Meanwhile, their papers were used and cited by hundreds of other investigators.a
After deciding that the originating journals would not help to address and resolve the remaining questions, Baggerly and Coombes published their alternative analysis and detailed critique of each of the papers in a specialty statistics journal (Baggerly and Coombes, 2009). Prompted by simultaneous inquiries from the National Cancer Institute (NCI), Duke decided to undertake independent analyses of the omics-based tests. After NCI identified problems with the data in the Hsu et al. (2007) JCO paper, it directed Duke to find the original raw data underlying that paper and to check for potential data corruption in that paper and others. The original papers were retracted after these analyses uncovered the data corruption (Bonnefoi et al., 2011; Hsu et al., 2010; Potti et al., 2011). Duke University also took steps to retract multiple additional papers involving original data analysis coauthored by Potti (Califf, 2011b).
Commercially Available Tests
Many of the case studies described in Appendix A document important steps in the tests’ development processes in the peer-reviewed literature. However, the committee did not explore the roles and responsibilities of journals in these case studies because of the lack of publicly available information and limited resources.
NOTE: See Appendixes A and B on the case studies for more information.
a The Potti et al. (2006a,b) articles were cited 306 and 350 times, respectively, the Hsu et al. (2007) article was cited 60 times, the Dressman et al. (2007a) article was cited 111 times, and the Bonnefoi et al. (2007) article was cited 95 times in Scopus (all as of October 28, 2011).
discussed in high-visibility professional literature and the lay press (Ioannidis and Khoury, 2011; Naik, 2011b; Peng et al., 2006).
The FDA Modernization Act of 1997 created www.clinicaltrials.gov to increase the transparency of clinical trials. It requires registration of trials of drug effectiveness for “serious and life threatening diseases and conditions.”7 The FDA Amendments Act, Section 801, broadened the scope of www.clinicaltrials.gov to include a results database. All clinical investigations involving a drug, biological product, or device (other than Phase I trials), regardless of sponsor, are required to register results in this database.8 Many journal editors have accommodated this requirement by stipulating that posting summary results will not interfere with publication if the results are presented as an abstract or table (Laine et al., 2007). As of 2011, 108,000 trials had been registered in www.clinicaltrials.gov, and results for 3,600 trials had been reported (Marshall, 2011). There is also a recent push to create registries for new types of studies, such as tumor biomarker studies (Andre et al., 2011).
The rationale for trial registries is strong. Individuals agree to participate in clinical trials based on the understanding that trials will improve medical knowledge and potentially lead to improved health for others. This only can happen if the public is knowledgeable about ongoing trials and the results are disseminated (Zarin and Tse, 2008). Currently, the reporting of biomedical research findings is often incomplete and biased (IOM, 2011). Investigators are most likely to publish positive findings, often report only a subset of the relevant data and outcomes, and may fail to report relevant adverse events (Chan and Altman, 2005; Chan et al., 2004a,b; Curfman et al., 2006; Dickersin and Chalmers, 2010; Dwan et al., 2008; Song et al., 2009; Turner et al., 2008; Vedula et al., 2009). Trial registries have the potential to address reporting bias by creating a public record of ongoing and completed trials (DeAngelis et al., 2005).
The impact of www.clinicaltrials.gov in addressing reporting bias was initially limited because the information submitted to the database was often inaccurate and incomplete, many investigators failed to comply with the registration mandate, and the government instituted few quality control mechanisms (Marshall, 2011; Zarin and Tse, 2008). The International Committee of Medical Journal Editors (ICMJE) increased registration by
7 Food and Drug Administration Modernization Act of 1997, Public Law No. 105-115 § 113 (1997).
8 Food and Drug Administration Amendments Act of 2007, Public Law No. 110-85 § 801 (2007).
requiring all clinical trials to register at www.clinicaltrials.gov or an appropriate trial registry at the onset of patient enrollment to be considered for publication (DeAngelis et al., 2004, 2005; Laine et al., 2007). The ICMJE defined a clinical trial broadly to include “any research project that prospectively assigns human subjects to intervention and comparison groups to study the cause-and-effect relationship between a medical intervention and health outcome” (DeAngelis et al., 2004, p. 2436). This policy led to a 73 percent increase in trial registrations at www.clinicaltrials.gov for all intervention types (Zarin, 2005). More recently, every protocol registered at www.clinicaltrials.gov is required to undergo an automated review to identify missing information and a quality review to assess whether the experiment is presented accurately (Zarin et al., 2011). These practices are likely to improve the quality of the entries.
However, despite the recent increase in trial registration, not all trials are registered at www.clinicaltrials.gov, and many journals still publish studies that have not been posted in the database. Those entries that are posted often still lack essential information about the trial. Also, some trials fail to register prior to patient enrollment, as required by law and the ICMJE policy (Meldrum and DeCherney, 2011; Zarin et al., 2011). Although FDA is trying to encourage sponsors to present more information on the website, “the usefulness of www.clinicaltrials.gov ultimately depends on whether responsible investigators and sponsors make diligent efforts to submit complete, timely, accurate, and informative data about their studies” (Zarin et al., 2011, p. 860). Journal policies are one mechanism to encourage comprehensive trial registration, including trials of omics studies. Thus, the committee recommends that journal editors require authors who submit manuscripts describing clinical evaluations of omics-based tests to register all clinical trials at www.clinicaltrials.gov or another clinical trial registry acceptable to the journal (Recommendation 7[a][i]). The peer review process should confirm that authors have registered their trials and that any data posted in the registry is consistent with the data submitted for publication (Meldrum and DeCherney, 2011).
Data and Code Availability
Baggerly and Coombes, two statisticians from MD Anderson who wanted to reproduce the omics-based tests being used in clinical trials at Duke University, reported spending more than 1,500 person-hours trying without success to replicate the statistical analyses. If all the data used to develop the omics-based tests had been transparent and publicly available, checking the validity of the results would have been much faster and easier. To facilitate the reproducibility of omics research, Baggerly and Coombes recommended that journals require authors to make the following five items
available prior to the publication of an omics study: (1) raw data, (2) the computer code used to derive the results from the raw data, (3) evidence of the provenance of the raw data so that data labels can be checked, (4) written descriptions of any nonscriptable analysis steps, and (5) the prespecified analysis plans (Baggerly and Coombes, 2011). The ability of independent investigators to access data and computer code for omics-based tests is particularly important because of the complexity of the data and analyses. As Baggerly and Coombes’ experience suggests, without sufficient access to the data and code, it is very difficult to judge the scientific integrity of the data and conclusions drawn. A recent edition of the journal Science was dedicated to data reproducibility (Jasny et al., 2011), and several of the articles emphasized the importance of journals demanding that authors make their data and computer code available to improve the reproducibility of published research (Ioannidis and Khoury, 2011; Peng, 2011). Other investigators are currently developing methods to make computational research data readily available to the public (Stodden and Yale Roundtable Participants, 2010).
Journals have widely divergent policies on data sharing. Piwowar and Chapman (2008) investigated data-sharing policies at 70 journals that published more than 15 articles on gene expression in 2006. Eighteen (26 percent) of the journals did not mention data sharing in the instructions to the authors, 11 (16 percent) included requirements for sharing non-microarray data but no requirement for data in general, and 42 (60 percent) included data-sharing policies applicable to microarrays. Oncology journals often lacked any microarray data-sharing policies. In another study, Piwowar examined the percentage of gene expression microarray journal articles reporting an associated dataset published in a data repository from 2000 to 2009 (Piwowar, 2011). Of the 11,603 articles identified over the entire time period, only 2,901 articles (25 percent) indicated that the data were deposited in a data repository. However, the percentage of microarray journal articles reporting that data were submitted to a data repository increased each subsequent year, with less than 5 percent of articles reporting data submission in 2001, but 30-35 percent reporting data submission in 2007-2009.
The strength of the data-sharing policies among journals that include instructions to authors on data and code availability also varies greatly. For example, Science’s and Nature’s instructions to authors state that all data, materials, and associated code and protocols should be available to the reader as a condition of publication. After publication, the journals stipulate that the authors should fulfill all reasonable requests for the data and materials necessary for independent investigators to replicate the findings (Nature, 2011; Science, 2011). By contrast, the Annals of Internal Medicine’s policy on reproducible research only requires authors to publish a
statement of their willingness to share data and code and to specify any conditions to sharing (Annals of Internal Medicine, 2010). There is no actual requirement to share the data and code. This difference in policy is likely due to the different nature of the articles that these journals publish (basic science versus clinical research) because the challenges associated with data and information sharing vary across different fields of research. The journal Biostatistics offers authors the opportunity to request a “reproducibility review,” in which the journal runs the data and code and confirms that the results can be reproduced. Articles are rated as R (reproducible), D (data provided), C (code provided), or none of the above (Peng, 2009). See Box 5-8 for lessons on data and code sharing from the banking industry.
The preferred method of many journals for sharing datasets is for authors to deposit the data in an approved database, such as www.clinicaltrials.gov, dbGaP, or GEO, and to include instructions on accessing the datasets in the published paper (Nature, 2011; Science, 2011). Some journals also have websites that can host supplementary materials, including data and code (MoS, 2007). When necessary, journal editors may use their influence to encourage authors to more fully share their data and code and respond to queries from individuals who have a legitimate interest in understanding the methods of a study (MoS, 2007). The challenge for journal editors with limited resources and limited access to the necessary review expertise is overseeing authors’ compliance with this policy and ensuring that the data and code deposited in repositories are accurate and complete. The committee recommends that journal editors require authors who submit manuscripts describing clinical evaluations of omics-based tests to
BOX 5-8 Lessons on Data and Code Sharing from the Banking Industry
The Journal of Money, Credit, and Banking instituted a policy in 1982-1984 requiring authors to submit the data and code supporting their manuscripts to the journal. Despite this policy, investigators showed that the majority of the published studies could not be replicated with the data and code provided. In response, the journal changed its policy, and in 1996 it mandated that authors deposit their data and code into an archive. From 1996 to 2003, the journal published 193 empirical articles whose authors should have deposited their data and code into the archive. However, an analysis of the archive showed that authors had deposited information, often incomplete, in only 69 of the cases. Replication could be achieved in only 14 cases.
SOURCES: Dewald et al., 1986; McCullough, 2007.
make their data, metadata, prespecified analysis plans, computer code, and fully specified computational models publicly available in an independently managed database (e.g., dbGaP) in standard format (Recommendation 7[a][ii]). Journals should not accept papers on complex biomarkers for publication if the corresponding data and software are not independently certified as available. This requirement is particularly important in omics research, where the datasets are enormous and the code is complex. When this policy is not effectively implemented, it may be useful for journals to contact not only the individual investigators but also the deans of the investigators’ institutions. A system that journals could use to verify that the code is reproducible from the starting data has been developed recently (Segal et al., 2012).
Safeguards for Scientific Integrity
The committee recommends that journal editors require authors who submit manuscripts describing clinical evaluations of omics-based tests to provide the journal with the sections of the research protocol relevant to their manuscript (Recommendation 7[a][iii]). The research protocol and statistical analysis plan provide a detailed description of the objectives and methods developed at the outset of a study. These include the prespecified primary and secondary outcome measures as well as the prespecified primary analyses of each of these measures. These documents are likely to be more detailed for prospective clinical trials of omics-based tests than for other types of clinical omics studies. However, at a minimum, these documents should specify the research questions being addressed, primary outcomes of interest, and the data analysis strategy.
Previous statements made by organizations about the benefits of journal editors requiring authors to submit their research protocols for clinical trials with their manuscripts have had little impact on journals’ practices (Korn and Ehringhaus, 2006). However, requiring authors to share their research protocol and statistical analysis plan with journal editors is an important mechanism for ensuring the integrity of an omics study. It allows journal editors and the referees to compare the prespecified outcome measures and analysis plans to the manuscript to make sure they are the same. For trials, a comparison between the sections of the protocol that the authors submit to a registry and what is included in the manuscript also is useful. Amendments to the protocol and statistical analysis plan often are necessary during the conduct of a trial; however, authors should document any amendments and provide an explanation for these changes. Access should be provided to the version of these documents that was in place at the time the outcome data were unblinded to allow data analysis. This prevents investigators from retrospectively modifying the prespecified analyses and principal findings after seeing the results of the study. Both acts of omission (e.g., incomplete
reporting of primary and secondary outcome measures) and acts of commission (e.g., unacknowledged changes to prespecified outcome measures) can bias the study (Fleming, 2010; MoS, 2007; Zarin et al., 2011).
Journal editors also can institute policies that enforce the authors’ responsibility for the scientific integrity of the manuscript. The IOM committee was informed by the MoS in formulating its recommendation on this topic. The MoS requires journals to obtain a statement from at least one author declaring that he/she “had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis” (MoS, 2007, p. 29). When a study is industry sponsored, this statement should come from an independent investigator (preferably the principal investigator) who is not employed by any commercial funding source. This policy does not absolve the other authors of responsibility for the integrity of the data. Best practices in science require that data be checked repeatedly for quality by multiple members of a research team. An individual author’s ability to guarantee the validity of the data is based on his or her trust in the internal data-checking process used in an omics study.
The MoS also states that journals should not publish any industry-sponsored study whose data analysis was conducted solely by statisticians employed by the company sponsoring the research, although industry-employed statisticians can be listed as authors. To be eligible for publication, the study should include an independent analysis of the data conducted by a statistician at an academic institution or government research institute. The independent statistician should have access to the entire raw dataset and the study protocol, verify the appropriateness of the analytic plan, and conduct an independent analysis of the data. The manuscript should include the results of this independent analysis (Fontanarosa et al., 2011; MoS, 2007).
The IOM committee also reviewed the policies of the ICMJE in formulating its recommendations. The ICMJE policy states that “an author must take responsibility for at least one component of the work, should be able to identify who is responsible for each other component, and should ideally be confident in their coauthors’ ability and integrity” (ICMJE, 2009a). Some of the ICMJE journals require that one or more authors guarantee the integrity of the work as a whole, from inception to publication (i.e., the guarantors). The ICMJE also recognizes the importance of reporting guidelines in preparing a manuscript for publication and documenting important information from a study. It encourages authors to consult the reporting guidelines relevant to their specific research design (ICMJE, 2009b). Appendix D discusses reporting guidelines in more detail.
Based on this review of existing journal policies, the committee recommends that journals require every author to identify their role in the development, conduct, analysis, writing, and editing of the manuscript.
Journals also should require the lead and senior authors to attest to the integrity of the study and the coauthors to confirm shared responsibility for study integrity (Recommendation 7[a][iv]). In addition, the committee recommends that journal editors require authors who submit manuscripts describing clinical evaluations of omics-based tests to use appropriate reporting guidelines (e.g., the Consolidated Standards of Reporting Trials [CONSORT] [Moher et al., 2010] and the REporting recommendations for tumor MARKer prognostic studies [REMARK] [Altman et al., 2012a,b; McShane et al., 2005]) and to submit checklists certifying guideline use (Recommendation 7[a][v]).
Responding to Credible Concerns About Published Manuscripts
Evidence suggests that many biomarker studies inadequately document important aspects of the scientific process (Brundage et al., 2002; Burton and Altman, 2004; Riley et al., 2003). Omics studies also may fail to meet the requirements for transparency and scientific rigor. For example, it has been reported that many tumor biomarker studies are poorly designed; fail to standardize the omics-based test; conduct inappropriate or misleading statistical analyses; and are based on inadequate study sample sizes (Burke and Henson, 1993; Concato et al., 1993; Fielding et al., 1992; Gasparini et al., 1993; Hall and Going, 1999; McGuire, 1991; McShane et al., 2005; Ransohoff, 2002; Ransohoff and Feinstein, 1978; Simon and Altman, 1994). In extreme cases, journals may respond to such technical shortcomings by issuing corrections, retractions, or expressions of concern (ICMJE, 2009c; MoS, 2007). Journals have a responsibility to respond to credible questions about the scientific integrity and accuracy of the research they publish. However, a recent study found that of 122 leading biomedical journals, only 21 had a retraction policy, 76 had no policy, and the remainder either did not post a policy or did not respond to the researcher’s inquiry (Atlas, 2004).
Many existing journal policies recognize that work demonstrating problems in reproducing the analysis in a paper is fundamentally different from disagreements about interpretation of the results of a study—and should be treated differently. For example, the MoS states that if there is an error in a published paper that can be proved with data, a correction should be published and attached to the original article in PubMed. If there is merely a difference of opinion, a reader can submit a letter through the normal peer review process (MoS, 2007). The challenge for journal editors is determining the correct response and deciding whether a dispute reflects an error or a difference of opinion.
The committee identified a lack of a consistent and clear route to publication of reader-initiated comments and corrections to published papers.
Significant scientific disputes rarely have obviously right or wrong solutions, and journal editors often have to become deeply involved to adjudicate disagreements, which may be beyond the resources of a journal. It may be useful for journals to use the peer review process to assess the nature and seriousness of challenges raised by readers. In general, letters to the editor have serious limitations. Letters are not linked to the original articles in PubMed, and the original authors normally are given the opportunity to respond last. Sometimes there is no provision for further correspondence or inquiry regarding disputes after the initial letters are exchanged. Because journals are reluctant to publish a letter without an author response, authors can delay or stonewall the publication of a letter. Thus, the committee recommends that journal editors develop mechanisms to resolve possible serious errors in published data, metadata, code, and/or computational models and establish clear procedures for the management of error reports (Recommendation 7b). Potential solutions include creating a mechanism by which investigators who identify a substantive issue in a published paper can submit an erratum notice for peer review and possible publication, with the refereed erratum linked to the original publication in PubMed and on the journal’s website. Alternatively, journal editors could ask the original peer reviewers to consider a particularly cogent criticism or proposed correction if the authors are unresponsive to a letter. Establishing a way to link the letter to the original PubMed article is important. Journal editors also could invite peer-reviewed commentaries or editorials that are linked to the key primary articles in PubMed. The committee also recommends that journals alert the institutional leadership and all authors when a serious question of accuracy or integrity has been raised (Recommendation 7c).
The data and other information needed to investigate an allegation are under the domain of the author’s institution (see discussion above on institutional investigations of scientific controversies).
Both investigators and institutions contribute to the scientific research culture in which omics research is conducted; investigators control the culture of individual laboratories, and institutions put policies and procedures in place that support scientific integrity. Recommendation 4 focuses on the necessary institutional policies and procedures that will guide all members of the institution, including investigators.
Institutions that conduct omics research to improve patient care have responsibilities for supporting the integrity of the omics research and the test development process. Although the committee does not intend for these recommendations to create new barriers to innovation in this promising technology, it is clear that in the era of omics research, with its multidisciplinary,
highly specialized teams and complex data, standard procedures in some institutions do not currently assure the integrity of the scientific process in omics-based test development. If an institution does not feel it has the infrastructure or capability to follow these recommendations, then the committee believes that such an institution should consider not engaging in research aimed at the development of omics-based tests for use in medical practice, including clinical trials. While this may reduce the number of clinically oriented studies and publications in omics research, if the end result is higher-quality publications, this would be a positive change, given the limited resources for research.
RECOMMENDATION 4: Institutions
4a: Institutions are responsible for establishing, supporting, and overseeing the infrastructure and research processes for omics-based test development and evaluation as well as best practices for clinical trials and observational research, including those incorporating omics technologies, and should assure that the evaluation process outlined in this report is followed for omics-based test development and evaluation at their institution.
4b: Given the complexity of omics research and omics-based tests, the multidisciplinary nature of the research, and the potential for conflicts of interest in developing and evaluating tests for clinical use, institutional leaders should pay heightened attention to providing appropriate oversight and promoting a culture of scientific integrity and transparency. They should designate:
i. A specific IRB member(s) to be responsible for considering IDE and IND requirements as a component of ensuring the proper conduct of clinical omics research.
ii. An institutional official who is responsible for comprehensive and timely documentation, disclosure, and management of financial and non-financial conflicts of interest, both individual and institutional.
iii. An institutional official who is responsible for establishing and managing a safe system for preventing, reporting, and adjudicating lapses in scientific integrity, to enhance patient safety.
iv. An institutional official who is responsible for establishing clear procedures for responding to inquiries and/or serious criticism about the science being conducted at the institution. (For example, this individual would be the responsible official for journals to contact with a serious concern about a manuscript; would ensure that relevant information is provided to external scientists to help resolve issues of transparency of methods and data; and would inform funders when an investigation of potential scientific misconduct is initiated.)
4c: Institutions that conduct omics research, including clinical trials, should train, recognize, and support the faculty-level careers of individuals from the multiple disciplines that collaborate on omics research and test development, including, among others, omics technology, biostatistics, bioinformatics, pathology, and clinical trials, and should ensure that these individuals are:
i. Treated as equal co-investigators and co-owners of responsibility.
ii. Represented on all relevant review and oversight bodies within the institutions.
iii. Intellectually independent, preferably reporting to an independent mentor and/or department chair as well as to the project leaders.
The committee also addressed the responsibilities of funders, FDA, and journals in ensuring rigorous development of omics-based tests. Funders play a leadership role in encouraging a culture of integrity and transparency in science as they seek to accelerate progress through discovery, translation, and clinical application. Among the funder responsibilities the committee highlighted, funding of independent verification and validation of these tests is particularly important because funders have generally not supported such work, which they do not consider to be original, innovative science. Without this support, replication and validation will be difficult, and the field will be left with promising ideas, published in journals, that may be adopted in clinical practice prematurely or not at all. FDA should take steps to improve understanding of the regulatory requirements for omics-based tests, both by communicating directly with investigators and academic institutions and by developing a guidance or regulation that spells out the relevant requirements in this dynamic field. Finally, the responsibilities of journal editors with respect to adoption of and adherence to the omics-based test development and evaluation process are complicated by the wide spectrum of policies adopted and resources available to individual journals.
RECOMMENDATION 5: Funders
5a: All funders of omics-based translational research should:
i. Require investigators to make all data, metadata, prespecified analysis plans, computer code, and fully specified computational procedures publicly available and readily interpretable either at the time of publication or, if not published, at the
end of funding, and funders should financially support this requirement.
ii. Provide continuing support for independent repositories to guarantee ongoing access to relevant omics and clinical data.
iii. Support test validation in a CLIA-certified laboratory and consider the usefulness of an independent confirmation of a candidate omics-based test prior to evaluation for clinical use.
iv. Designate an official to alert the institutional leadership when serious allegations or questions have been raised that may warrant an institutional investigation; if the funder (e.g., NIH) has raised that question, then the funder and institution should communicate during the investigation.
v. Establish lines of communication with other funders to be used when serious problems appear to involve interdependent research sponsored by another funder along the omics-based test development process.
5b: Federal funders of omics-based translational research should have the authority to investigate any research being conducted by a funding recipient after first requesting an investigation by the institution.
RECOMMENDATION 6: FDA
6a: In order to enable investigators and institutions to have a clear understanding of their regulatory responsibilities, FDA should develop and finalize a risk-based guidance or a regulation on:
i. Bringing omics-based tests to FDA for review.
ii. Oversight of laboratory-developed tests (LDTs).
6b: FDA should communicate the IDE requirements for use of omics-based tests in clinical trials to the Office for Human Research Protections (OHRP), IRBs, and other relevant institutional leadership.
RECOMMENDATION 7: Journals
7: Journal editors should:
7a: Require authors who submit manuscripts describing clinical evaluations of omics-based tests to:
i. Register all clinical trials at www.clinicaltrials.gov or another trial registry acceptable to the journal.
ii. Make data, metadata, prespecified analysis plans, computer code, and fully specified computational procedures publicly available in an independently managed database (e.g., dbGaP) in a standard format.
iii. Provide the journal with the sections of the research protocol relevant to their manuscript.
iv. Identify each author’s role in the development, conduct, analysis, writing, and editing of the manuscript. Require the lead and senior authors to attest to the integrity of the study and the coauthors to confirm shared responsibility for study integrity.
v. Use appropriate reporting guidelines (e.g., CONSORT, REMARK) and submit checklists to certify guideline use.
7b: Develop mechanisms to resolve possible serious errors in published data, metadata, code, and/or computational models and establish clear procedures for management of error reports.
7c: Alert the institutional leadership and all authors when a serious question of accuracy or integrity has been raised.
ACS (American Cancer Society). 2011. Pilot and Exploratory Projects in Palliative Care of Cancer Patients and Their Families. http://www.cancer.org/acs/groups/content/@researchadministration/documents/document/acspc-023897.pdf (accessed August 10, 2011).
Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012a. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. BMC Medicine 10:51.
Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012b. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. Public Library of Science Medicine 9(5):e1001216.
Altshuler, J. S., and D. Altshuler. 2004. Organizational challenges in clinical genomic research. Nature 429(6990):478-481.
Andre, F., L. M. McShane, S. Michiels, D. F. Ransohoff, D. G. Altman, J. S. Reis-Filho, D. F. Hayes, and L. Pusztai. 2011. Biomarker studies: A call for a comprehensive biomarker study registry. Nature Reviews Clinical Oncology 8(3):171-176.
Annals of Internal Medicine. 2010. Information for Authors. http://www.annals.org/site/misc/ifora.xhtml (accessed August 22, 2011).
Apweiler, R., C. Aslanidis, T. Deufel, A. Gerstner, J. Hansen, D. Hochstrasser, R. Kellner, M. Kubicek, F. Lottspeich, E. Maser, H. W. Mewes, H. E. Meyer, S. Müllner, W. Mutter, M. Neumaier, P. Nollau, H. G. Nothwang, F. Ponten, A. Radbruch, K. Reinert, G. Rothe, H. Stockinger, A. Tárnok, M. J. Taussig, A. Thiel, J. Thiery, M. Ueffing, G. Valet, J. Vandekerckhove, C. Wagener, O. Wagner, and G. Schmitz. 2009. Approaching clinical proteomics: Current state and future fields of application in cellular proteomics. Cytometry, Part A 75(10):816-832.
Atlas, M. C. 2004. Retraction policies of high-impact biomedical journals. Journal of the Medical Library Association 92(2):242-250.
Baggerly, K. A. 2011. Forensics Bioinformatics. Presented at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
Baggerly, K. A., and K. R. Coombes. 2009. Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Annals of Applied Statistics 3(4):1309-1334.
Baggerly, K. A., and K. R. Coombes. 2011. What information should be required to support clinical “omics” publications? Clinical Chemistry 57(5):688-690.
Baggerly, K. A., J. S. Morris, and K. R. Coombes. 2004. Reproducibility of SELDI-TOF protein patterns in serum: Comparing datasets from different experiments. Bioinformatics 20(5):777-785.
Baggerly, K. A., K. R. Coombes, and E. S. Neeley. 2008. Run batch effects potentially compromise the usefulness of genomic signatures of ovarian cancer. Journal of Clinical Oncology 26(7):1186-1187.
Baron, A. E., K. Bandeen-Roche, D. A. Berry, J. Bryan, V. J. Carey, K. Chaloner, M. Delorenzi, B. Efron, R. C. Elston, D. Ghosh, J. D. Goldberg, S. Goodman, F. E. Harrell, S. Galloway Hilsenbeck, W. Huber, R. A. Irizarry, C. Kendziorski, M. R. Kosorok, T. A. Louis, J. S. Marron, M. Newton, M. Ochs, J. Quackenbush, G. L. Rosner, I. Ruczinski, S. Skates, T. P. Speed, J. D. Storey, Z. Szallasi, R. Tibshirani, and S. Zeger. 2010. Letter to Harold Varmus: Concerns about Prediction Models Used in Duke Clinical Trials. Bethesda, MD, July 19, 2010. http://www.cancerletter.com/categories/documents (accessed January 18, 2012).
Bekelman, J. E., Y. Li, and C. P. Gross. 2003. Scope and impact of financial conflicts of interest in biomedical research. Journal of the American Medical Association 289(4):454-465.
Berry, D. 2012. Statisticians and clinicians: Collaborations based on mutual respect. Amstat News. http://magazine.amstat.org/blog/2012/02/01/collaborationpolic/ (accessed February 9, 2012).
Bird, S. J. 2001. Mentors, advisors and supervisors: Their role in teaching responsible research conduct. Science and Engineering Ethics 7(4):455-467.
Blumenthal, D., N. Causino, E. Campbell, and K. S. Louis. 1996. Relationships between academic institutions and industry in the life sciences—an industry survey. New England Journal of Medicine 334(6):368-374.
Bonnefoi, H., A. Potti, M. Delorenzi, L. Mauriac, M. Campone, M. Tubiana-Hulin, T. Petit, P. Rouanet, J. Jassem, E. Blot, V. Becette, P. Farmer, S. Andre, C. R. Acharya, S. Mukherjee, D. Cameron, J. Bergh, J. R. Nevins, and R. D. Iggo. 2007. Validation of gene signatures that predict the response of breast cancer to neoadjuvant chemotherapy: A substudy of the EORTC 10994/BIG 00-01 clinical trial. Lancet Oncology 8(12):1071-1078.
Bonnefoi, H., A. Potti, M. Delorenzi, L. Mauriac, M. Campone, M. Tubiana-Hulin, T. Petit, P. Rouanet, J. Jassem, E. Blot, V. Becette, P. Farmer, S. Andre, C. Acharya, S. Mukherjee, D. Cameron, J. Bergh, J. R. Nevins, and R. D. Iggo. 2011. Retraction-validation of gene signatures that predict the response of breast cancer to neoadjuvant chemotherapy: A substudy of the EORTC10994/BIG 00-01 clinical trial. Lancet Oncology 12(2):116.
Brazma, A. 2009. Minimum Information About a Microarray Experiment (MIAME)—successes, failures, challenges. Scientific World Journal 9:420-423.
Brazma, A., P. Hingamp, J. Quackenbush, G. Sherlock, P. Spellman, C. Stoeckert, J. Aach, W. Ansorge, C. A. Ball, H. C. Causton, T. Gaasterland, P. Glenisson, F. C. P. Holstege, I. F. Kim, V. Markowitz, J. C. Matese, H. Parkinson, A. Robinson, U. Sarkans, S. Schulze-Kremer, J. Stewart, R. Taylor, J. Vilo, and M. Vingron. 2001. Minimum Information About a Microarray Experiment (MIAME)—toward standards for microarray data. Nature Genetics 29(4):365-371.
Brown, C. 2003. The changing face of scientific discourse: Analysis of genomic and proteomic database usage and acceptance. Journal of the American Society for Information Science and Technology 54(10):926-938.
Brundage, M. D., D. Davies, and W. J. Mackillop. 2002. Prognostic factors in non-small cell lung cancer: A decade of progress. Chest 122(3):1037-1057.
Burke, H. B., and D. E. Henson. 1993. Criteria for prognostic factors and for an enhanced prognostic system. Cancer 72:3131-3135.
Burton, A., and D. G. Altman. 2004. Missing covariate data within cancer prognostic studies: A review of current reporting and proposed guidelines. British Journal of Cancer 91(1):4-8.
Buyse, M., S. Loi, L. J. van ‘t Veer, G. Viale, M. Delorenzi, A. M. Glas, M. S. d’Assignies, J. Bergh, R. Lidereau, P. Ellis, A. Harris, J. Bogaerts, P. Therasse, A. Floore, M. Amakrane, F. Piette, E. T. Rutgers, C. Sortiriou, F. Cardoso, and M. J. Piccart. 2006. Validation and clinical utility of a 70-gene prognostic signature for women with node-negative breast cancer. Journal of the National Cancer Institute 98(17):1183-1192.
Califf, R. M. 2011a. Discussion at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
Califf, R. M. 2011b. Discussion at the Discovery of Process Working Group Meeting with Representatives of Duke Faculty and Administration, Washington, DC, August 22.
Chahal, A. P. S. 2011. Informatics in clinical research in oncology: Current state, challenges, and a future perspective. Cancer Journal 17(4):239-245.
Chan, A. W., and D. G. Altman. 2005. Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. British Medical Journal 330(7494):753.
Chan, A. W., K. Krleza-Jeric, I. Schmid, and D. G. Altman. 2004a. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. Canadian Medical Association Journal 171(7):735-740.
Chan, A. W., A. Hrobjartsson, M. T. Haahr, P. C. Gotzsche, and D. G. Altman. 2004b. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association 291(20):2457-2465.
Chan, M. M. 2009. Letter to Division of Medical Oncology, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289102.pdf (accessed February 9, 2012).
Choi, B., S. Drozdetski, M. Hackett, C. Lu, C. Rottenberg, L. Yu, D. Hunscher, and D. Clauw. 2005. Usability comparison of three clinical trial management systems. AMIA Annual Symposium Proceedings 2005:921.
Cokol, M., I. Iossifov, R. Rodriguez-Esteban, and A. Rzhetsky. 2007. How many scientific papers should be retracted? EMBO Reports 8(5):422-423.
Collins, F. 2010. Has the revolution arrived? Nature 464 (7289):674-675.
Concato, J., A. R. Feinstein, and T. R. Holford. 1993. The risk of determining risk with multi-variable models. Annals of Internal Medicine 118(3):201-210.
Coombes, K. R., J. Wang, and K. A. Baggerly. 2007. Microarrays: Retracing steps. Nature Medicine 13(11):1276-1277.
CSE (Council of Science Editors). 2009. CSE’s White Paper on Promoting Integrity in Scientific Journal Publications, 2009 Update. http://www.councilscienceeditors.org/i4a/pages/index.cfm?pageid=3331 (accessed August 4, 2011).
Curfman, G. D., S. Morrissey, and J. M. Drazen. 2006. Response to expression of concern regarding VIGOR study. New England Journal of Medicine 354(11):1196-1199.
DeAngelis, C. D., J. M. Drazen, F. A. Frizelle, C. Haug, J. Hoey, R. Horton, S. Kotzin, C. Laine, A. Marusic, A. J. P. M. Overbeke, T. V. Schroeder, H. C. Sox, and M. B. Van Der Weyden. 2004. Clinical trial registration. Journal of the American Medical Association 292(11):1363-1364.
DeAngelis, C. D., J. M. Drazen, F. A. Frizelle, C. Haug, J. Hoey, R. Horton, S. Kotzin, C. Laine, A. Marusic, A. J. P. M. Overbeke, T. V. Schroeder, H. C. Sox, and M. B. Van Der Weyden. 2005. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. Journal of the American Medical Association 293(23):2927-2929.
DeMets, D. L. 2009. “Minding the Gap”: Driving Clinical and Translation Research by Eliminating the Shortage of Biostatisticians. Bethesda, MD: Clinical Translational Science Award (CTSA) Consortium.
DeMets, D. L., R. Woolson, C. Brooks, and R. Qu. 1998. Where the jobs are: A study of Amstat News job advertisements. American Statistician 52(4):303-307.
Deng, M. C., H. J. Eisen, M. R. Mehra, M. Billingham, C. C. Marboe, G. Berry, J. Kobashigawa, F. L. Johnson, R. C. Starling, S. Murali, D. F. Pauly, H. Baron, J. G. Wohlgemuth, R. N. Woodward, T. M. Klingler, D. Walther, P. G. Lal, S. Rosenberg, S. Hunt, and for the CARGO Investigators. 2006. Noninvasive discrimination of rejection in cardiac allograft recipients using gene expression profiling. American Journal of Transplantation 6(1):150-160.
Dewald, W. G., J. G. Thursby, and R. G. Anderson. 1986. Replication in empirical economics: The journal of money, credit and banking project. American Economic Review 76(4):587-603.
Dickersin, K., and I. Chalmers. 2010. Recognising, Investigating and Dealing with Incomplete and Biased Reporting of Clinical Research: From Francis Bacon to the World Health Organization. http://www.jameslindlibrary.org (accessed June 11, 2010).
Drazen, J., M. B. Van Der Weyden, P. Sahni, J. Rosenberg, A. Marusic, C. Laine, S. Kotzin, R. Horton, P. C. Hebert, C. Haug, F. Godlee, F. A. Frozelle, P. W. Leeuw, and C. D. DeAngelis. 2009. Uniform format for disclosure of competing interests in ICMJE journals. New England Journal of Medicine 361(19):1896-1897.
Drazen, J. M., P. W. de Leeuw, C. Laine, C. D. Mulrow, C. D. DeAngelis, F. A. Frizelle, F. Godlee, C. Haug, P. C. Hébert, S. Kotzin, A. Marusic, H. Reyes, and J. Rosenberg. 2010. Toward more uniform conflict disclosures: The updated ICMJE conflict of interest reporting form. Annals of Internal Medicine 153(4):268-269.
Dressman, H. K., A. Berchuck, G. Chan, J. Zhai, A. Bild, R. Sayer, J. Cragun, J. Clarke, R. S. Whitaker, L. Li, G. Gray, J. Marks, G. S. Ginsburg, A. Potti, M. West, J. R. Nevins, and J. M. Lancaster. 2007. An integrated genomic-based approach to individualized treatment of patients with advanced-stage ovarian cancer. Journal of Clinical Oncology 25(5):517-525.
Dressman, H. K., A. Potti, J. R. Nevins, and J. M. Lancaster. 2008. In reply. Journal of Clinical Oncology 26(7):1187-1188.
Dwan, K., D. G. Altman, J. A. Arnaiz, J. Bloom, A. Chan, E. Cronin, E. Decullier, P. J. Easterbrook, E. Von Elm, C. Gamble, D. Ghersi, J. P. A. Ioannidis, J. Simes, and P. R. Williamson. 2008. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 3(8):e3081.
Emanuel, E. J., D. Wendler, and C. Grady. 2000. What makes clinical research ethical? Journal of the American Medical Association 283(20):2701-2711.
Enserink, M. 2011. Authors pull the plug on second paper supporting viral link to chronic fatigue syndrome. Science December 28.
FDA (Food and Drug Administration). 2007. Draft Guidance for Industry, Clinical Laboratories, and FDA Staff—In Vitro Diagnostic Multivariate Index Assays. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm079148.htm (accessed February 1, 2012).
FDA. 2011a. Draft Guidance for Industry and Food and Drug Administration Staff—In Vitro Companion Diagnostic Devices. http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm262292.htm (accessed December 15, 2011).
FDA. 2011b. FDA Establishment Inspection Report, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289106.pdf (accessed February 9, 2012).
Fielding, L. P., C. M. Fenoglio-Preiser, and S. Freedman. 1992. The future of prognostic factors in outcome prediction for patients with cancer. Cancer 70:2367-2377.
Fleming, T. R. 2010. Clinical trials: Discerning hype from substance. Annals of Internal Medicine 153:400-406.
Fontanarosa, P. B., A. Flanagin, and C. D. DeAngelis. 2011. Reporting conflicts of interest, financial aspects of research, and role of sponsors in funded studies. Journal of the American Medical Association 294(1):110-111.
Frankel, M. S. 1995. Commission on research integrity: Origins and charge. In Professional Ethics Report. http://www.aaas.org/spp/sfrl/per/per3.htm (accessed August 3, 2011).
Fung, E. T. 2010. A recipe for proteomics diagnostic test development: The OVA1 test, from biomarker discovery to FDA clearance. Clinical Chemistry 56(2):327-329.
Gasparini, G., F. Pozza, and A. L. Harris. 1993. Evaluating the potential usefulness of new prognostic and predictive indicators on node-negative breast cancer patients. Journal of the National Cancer Institute 85(15):1206-1219.
Gawande, A. 2009. The Checklist Manifesto: How to Get Things Right. New York: Metropolitan Books.
Geller, G., A. Boyce, D. E. Ford, and J. Sugarman. 2010. Beyond “compliance”: The role of institutional culture in promoting research integrity. Academic Medicine 85(8):1296-1302.
Hall, P. A., and J. J. Going. 1999. Predicting the future: A critical appraisal of cancer prognosis studies. Histopathology 35:489-494.
Helmreich, R. L. 2000. On error management: Lessons from aviation. BMJ 320(7237):781-785.
Hsu, D. S., B. S. Balakumaran, C. R. Acharya, V. Vlahovic, K. S. Walters, K. Garman, C. Anders, R. F. Riedel, J. Lancaster, D. Harpole, H. K. Dressman, J. R. Nevins, P. G. Febbo, and A. Potti. 2007. Pharmacogenomic strategies provide a rational approach to the treatment of cisplatin-resistant patients with advanced cancer. Journal of Clinical Oncology 25(28):4350-4357.
Hsu, D. S., B. S. Balakumaran, C. R. Acharya, V. Vlahovic, K. S. Walters, K. Garman, C. Anders, R. F. Riedel, J. Lancaster, D. Harpole, H. K. Dressman, J. R. Nevins, P. G. Febbo, and A. Potti. 2010. Retraction to Journal of Clinical Oncology 25(28):4350-4357.
Hudson, P. 2003. Applying the lessons of high risk industries to health care. Quality & Safety in Health Care 12(Suppl 1):i7-i12.
ICMJE (International Committee of Medical Journal Editors). 2009a. Ethical Considerations in the Conduct and Reporting of Research: Authorship and Contributorship. http://www.icmje.org/ethical_1author.html (accessed February 2, 2012).
ICMJE. 2009b. Manuscript Preparation and Submission: Preparing a Manuscript for Submission to a Biomedical Journal. http://www.icmje.org/manuscript_1prepare.html (accessed February 2, 2012).
ICMJE. 2009c. Publishing and Editorial Issues Related to Publication in Biomedical Journals: Corrections, Retractions and “Expressions of Concern.” http://www.icmje.org/publishing_2corrections.html (accessed August 11, 2011).
Ioannidis, J. P. A., and M. J. Khoury. 2011. Improving validation practices in “omics” research. Science 334(6060):1230-1232.
IOM (Institute of Medicine). 2002. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: The National Academies Press.
IOM. 2009a. Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health through Research. Washington, DC: The National Academies Press.
IOM. 2009b. Conflict of Interest in Medical Research, Education, and Practice. Edited by B. Lo and M. Field. Washington, DC: The National Academies Press.
IOM. 2011. Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press.
IOTF (Interagency Oncology Task Force). 2011. Joint Fellowship Training Program. http://iotftraining.nci.nih.gov/index.html (accessed October 28, 2011).
Jasny, B. R., G. Chin, L. Chong, and S. Vignieri. 2011. Again, and again, and again. Science 334(6060):1225.
Kaiser, J. 2011. Public Health Service Issues Final Conflicts of Interest Rule. http://news.sciencemag.org/scienceinsider/2011/08/new-us-conflict-of-interest-rule.html (accessed September 8, 2011).
Kim, S., R. Millard, P. Nisbet, C. Cox, and E. Caine. 2004. Potential research participants’ views regarding researcher and institutional financial conflicts of interest. Journal of Medical Ethics 30(1):73-79.
Klimeck, G. 2011. Platform for Collaborative Research with Quantifiable Impact on Research and Education. Paper presented at Cyberinfrastructure Days Conference, Ann Arbor, MI.
Kolata, G. 2001. Johns Hopkins death brings halt to U.S.-financed human studies. New York Times, July 20.
Korn, D., and S. Ehringhaus. 2006. Principles for strengthening the integrity of clinical research. PLoS Clinical Trials 1(1):e1.
Kornbluth, S. 2011. Discussion at the Discovery of Process Working Group Meeting with Representatives of Duke Faculty and Administration, Washington, DC, August 22.
Kornbluth, S. A., and V. Dzau. 2011. Predictors of Chemotherapy Response: Background Information: Draft. Duke University.
Laine, C., R. Horton, C. D. DeAngelis, J. M. Drazen, F. A. Frizelle, F. Godlee, C. Haug, P. C. Hébert, S. Kotzin, A. Marusic, P. Sahni, T. V. Schroeder, H. C. Sox, M. B. Van Der Weyden, and F. W. A. Verheugt. 2007. Clinical trial registration—looking back and moving ahead. New England Journal of Medicine 356(26):2734-2736.
Longo, D. R., J. E. Hewett, B. Ge, and S. Schubert. 2005. The long road to patient safety. Journal of the American Medical Association 294(22):2858-2865.
Lowrance, W. W. 2006. Access to Collections of Data and Materials for Health Research: A Report to the Medical Research Council and the Wellcome Trust. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@msh_grants/documents/web_document/wtx030842.pdf (accessed August 10, 2011).
Marshall, E. 2011. Unseen world of clinical trials emerges from U.S. database. Science 333(6039):145.
Martinson, B. C., M. S. Anderson, and R. de Vries. 2005. Scientists behaving badly. Nature 435(7043):737-738.
McCullough, B. D. 2007. Got replicability? The journal of money, credit, and banking archive. Econ Journal Watch 4(3):326-337.
McGuire, W. L. 1991. Breast cancer prognostic factors: Evaluation guidelines. Journal of the National Cancer Institute 83:154-155.
McShane, L. M. 2010a. NCI Address to the Institute of Medicine Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials. Presented at Meeting 1. Washington, DC, December 20.
McShane, L. 2010b. Reanalysis Report for Cisplatin Chemosensitivity Predictor. Bethesda, MD: NCI.
McShane, L. M. 2010c. December 20. NCI Address to Institute of Medicine Committee Convened to Review Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials. Meeting 1: Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC.
McShane, L. M., D. G. Altman, W. Sauerbrei, S. E. Taube, M. Gion, and G. M. Clark. 2005. REporting recommendations for tumor MARKer prognostic studies (REMARK). Journal of the National Cancer Institute 97(16):1180-1184.
Meldrum, D. R., and A. H. DeCherney. 2011. The who, why, what, when, where, and how of clinical trial registries. Fertility and Sterility 96(1):2-5.
Mischak, H., R. Apweiler, R. Banks, M. Conaway, J. Coon, A. Dominiczak, J. Ehrich, D. Fliser, M. Girolami, H. Hermjakob, D. Hochstrasser, J. Jankowski, B. Julian, W. Kolch, Z. Massy, C. Neusuess, J. Novak, K. Peter, K. Rossing, J. Schanstra, J. Semmes, D. Theodorescu, V. Thongboonkerd, E. Weissinger, J. Van Eyk, and T. Yamamoto. 2007. Clinical proteomics: A need to define the field and to begin to set adequate standards. PROTEOMICS—Clinical Applications 1(2):148-156.
Moher, D., S. Hopewell, K. F. Schulz, V. Montori, P. C. Gotzsche, P. J. Devereaux, D. Elbourne, M. Egger, and D. G. Altman. 2010. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology 63(8):e1-e37.
MoS (Manual of Style). 2007. AMA Manual of Style: A Guide for Authors and Editors, 10th ed. New York: Oxford University Press, Inc.
Naik, G. 2011a. Mistakes in scientific studies surge. Wall Street Journal, August 10.
Naik, G. 2011b. Scientists’ elusive goal: Reproducing study results. Wall Street Journal, December 2.
NAS (National Academy of Sciences). 1992. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: National Academy Press.
NAS. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research, 3rd ed. Washington, DC: The National Academies Press.
Nature. 2011. Availability of Data and Material. http://www.nature.com/authors/policies/availability.html (accessed August 15, 2011).
Nelson, D., and R. Weiss. 1999. Hasty Decisions in the Race to a Cure? Gene Therapy Study Proceeded Despite Safety, Ethics Concerns. http://www.washingtonpost.com/wp-srv/WPcap/1999-11/21/101r-112199-idx.html (accessed October 27, 2011).
Nevins, J. 2011. Genomic Strategies to Address the Challenge of Personalizing Cancer Therapy. Presented at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
NIH (National Institutes of Health). 1998. NIH Policy for Data and Safety Monitoring. http://grants.nih.gov/grants/guide/notice-files/not98-084.html (accessed July 22, 2011).
NIH. 2010. NIH Grants Policy Statement. http://grants.nih.gov/grants/policy/nihgps_2010/index.htm (accessed July 22, 2010).
NIH. 2011. Mission. http://www.nih.gov/about/mission.htm (accessed October 19, 2011).
NRC (National Research Council). 1985. Sharing Research Data. Washington, DC: National Academy Press.
NRC. 2003. Sharing Publication-Related Data and Materials: Responsibilities of Authorship in the Life Sciences. Washington, DC: The National Academies Press.
NRC. 2005. Catalyzing Inquiry at the Interface of Computing and Biology. Washington, DC: The National Academies Press.
NRC and IOM. 2002. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: The National Academies Press.
NSF (National Science Foundation). 2001. Grant General Conditions (GC-1). http://www.nsf.gov/pubs/2001/gc101/gc101rev1.pdf (accessed August 11, 2011).
OHSR (Office of Human Subjects Research). 2006. Sheet 6: Guidelines for Writing Informed Consent Documents. http://ohsr.od.nih.gov/info/sheet6.html (accessed October 27, 2011).
ORI (Office of Research Integrity). 2011. About ORI. http://ori.hhs.gov/about/index.shtml (accessed September 21, 2011).
Paik, S., S. Shak, G. Tang, C. Kim, J. Baker, M. Cronin, F. L. Baehner, M. G. Walker, D. Watson, T. Park, W. Hiller, E. R. Fisher, D. L. Wickerham, J. Bryant, and N. Wolmark. 2004. A multigene assay to predict recurrence of tamoxifen-treated, node-negative breast cancer. New England Journal of Medicine 351(27):2817-2826.
Pathwork Diagnostics. 2010. Pathwork Tissue of Origin Test for FFPE Cleared by U.S. Food and Drug Administration. http://www.pathworkdx.com/News/M129_FDA_Clearance_Final.pdf (accessed November 17, 2011).
PCF (Prostate Cancer Foundation). 2011. Prostate Cancer Research. http://www.pcf.org/site/c.leJRIROrEpH/b.5780289/k.D2E4/Research.htm (accessed August 10, 2011).
PCSBI (Presidential Commission for the Study of Bioethical Issues). 2011. Moral Science: Protecting Participants in Human Subjects Research. http://bioethics.gov/cms/sites/default/files/Moral%20Science%20-%20Final.pdf (accessed December 21, 2011).
Peng, R. D. 2009. Reproducible research and Biostatistics. Biostatistics 10(3):405-408.
Peng, R. D. 2011. Reproducible research in computational science. Science 334(6060):1226-1227.
Peng, R. D., F. Dominici, and S. L. Zeger. 2006. Reproducible epidemiologic research. American Journal of Epidemiology 163(9):783-789.
Philip, R. O., M. A. Payne, W. Andrew, B. S. Greaves, and T. J. Kipps. 2003. CRC clinical trials management system (CTMS): An integrated information management solution for collaborative clinical research. AMIA Annual Symposium Proceedings 2003:967.
PhRMA Foundation. 2011. 2012 Awards in Pharmacology. http://phrmafoundation.org/download/PhRMA%20Bro_pharmacology.pdf (accessed August 10, 2011).
Piwowar, H. A. 2011. Who shares? Who doesn’t? Factors associated with openly archiving raw research data. PLoS ONE 6(7):e18657.
Piwowar, H. A., and W. W. Chapman. 2008. A review of journal policies for sharing research data. AMIA Annual Symposium Proceedings 2008:596-600.
Platt, J. R. 1964. Strong inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 146(3642):347-353.
Potti, A. 2009. Letter to FDA’s CDER from Division of Medical Oncology, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289103.pdf (accessed February 9, 2012).
Potti, A., and J. R. Nevins. 2007. Potti et al. Reply. Nature Medicine 13(11):1277-1278.
Potti, A., H. K. Dressman, A. Bild, R. F. Riedel, G. Chan, R. Sayer, J. Cragun, H. Cottrill, M. J. Kelley, R. Petersen, D. Harpole, J. Marks, A. Berchuck, G. S. Ginsburg, P. Febbo, J. Lancaster, and J. R. Nevins. 2006a. Genomic signatures to guide the use of chemotherapeutics. Nature Medicine 12(11):1294-1300.
Potti, A., S. Mukherjee, R. Petersen, H. K. Dressman, A. Bild, J. Koontz, R. Kratzke, M. A. Watson, M. Kelley, G. S. Ginsburg, M. West, D. H. Harpole, and J. R. Nevins. 2006b. A genomic strategy to refine prognosis in early-stage non-small-cell lung cancer. New England Journal of Medicine 355(6):570-580.
Potti, A., H. K. Dressman, A. Bild, G. Chan, R. Sayer, J. Cragun, H. Cottrill, M. J. Kelley, R. Petersen, D. Harpole, J. Marks, A. Berchuck, G. S. Ginsburg, P. Febbo, J. Lancaster, and J. R. Nevins. 2011. Retraction: Genomic signatures to guide the use of chemotherapeutics. Nature Medicine 17(1):135.
Pronovost, P. J., B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B. Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin. 2003. Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Quality & Safety in Health Care 12:405-410.
Quackenbush, J. 2009. Data reporting standards: Making the things we use better. Genome Medicine 1(11):111.
Quest Diagnostics. 2011. Licenses and Accreditation. http://www.questdiagnostics.com/brand/company/b_comp_licenses.html (accessed November 21, 2011).
Ransohoff, D. F. 2002. Challenges and opportunities in evaluating diagnostic tests. Journal of Clinical Epidemiology 55(12):1178-1182.
Ransohoff, D. F., and A. R. Feinstein. 1978. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. New England Journal of Medicine 299(17):926-930.
Ranstam, J., M. Buyse, S. L. George, S. Evans, N. L. Geller, B. Scherrer, E. Lesaffre, G. Murray, L. Edler, J. L. Hutton, T. Colton, and P. Lachenbruch. 2000. Fraud in medical research: An international survey of biostatisticians. Controlled Clinical Trials 21(5):415-427.
Rennie, D. 1997. Thyroid storm. Journal of the American Medical Association 277(15):1238-1243.
Rhodes, R., and J. J. Strain. 2004. Whistleblowing in academic medicine. Journal of Medical Ethics 30:35-39.
Riley, R. D., K. R. Abrams, A. J. Sutton, P. C. Lambert, D. R. Jones, D. Heney, and S. A. Burchill. 2003. Reporting of prognostic markers: Current problems and development of guidelines for evidence-based practice in the future. British Journal of Cancer 88(8):1191-1198.
Rosenberg, S., M. R. Elashoff, P. Beineke, S. E. Daniels, J. A. Wingrove, W. G. Tingley, P. T. Sager, A. J. Sehnert, M. Yau, W. E. Kraus, K. Newby, R. S. Schwartz, S. Voros, S. G. Ellis, N. Tahirkheli, R. Waksman, J. McPherson, A. Lansky, M. E. Winn, N. J. Schork, E. J. Topol, and for the PREDICT (Personalized Risk Evaluation and Diagnosis In the Coronary Tree) Investigators. 2010. Multicenter validation of the diagnostic accuracy of a blood-based gene expression test for assessing obstructive coronary artery disease in nondiabetic patients. Annals of Internal Medicine 153(7):425-434.
SACGHS (Secretary’s Advisory Committee on Genetics, Health, and Society). 2010. Gene Patents and Licensing Practices and Their Impact on Patient Access to Genetic Tests. http://oba.od.nih.gov/oba/sacghs/reports/SACGHS_patents_report_2010.pdf (accessed January 5, 2012).
Schein, E. 2004. Organizational Culture and Leadership, 3rd ed. The Jossey-Bass Business & Management Series. San Francisco, CA: John Wiley & Sons, Inc.
Science. 2011. General Information for Authors. http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml#dataavail (accessed August 15, 2011).
Science Staff. 2011. Challenges and opportunities. Science 331(6018):692-693.
Segal, M. R., H. Xiong, H. Bengtsson, R. Bourgon, and R. Gentleman. 2012. Querying genomic databases: Refining the connectivity map. Statistical Applications in Genetics and Molecular Biology 11(2):1-34.
Sherpa. 2011. Research Funders’ Open Access Policies. http://www.sherpa.ac.uk/juliet/ (accessed September 9, 2011).
Simon, R. 2008. The use of genomics in clinical trial design. Clinical Cancer Research 14(19):5984-5993.
Simon, R. 2010. Clinical trials for predictive medicine: New challenges and paradigms. Clinical Trials 7(5):Epub 2010 Mar.
Simon, R., and D. G. Altman. 1994. Statistical aspects of prognostic factor studies in oncology. British Journal of Cancer 69(6):979-985.
Sloane, A. 2003. Grading Duke: “A” for acknowledgment. Journal of Health Law 36(4):627-645.
Song, F., S. Parekh-Bhurke, L. Hooper, Y. Loke, J. Ryder, A. Sutton, C. Hing, and I. Harvey. 2009. Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology 9(1):79.
Sprague, R. L., J. Daw, and G. C. Roberts. 2001. Influences on the ethical beliefs of graduate students concerning research. Science and Engineering Ethics 7(4):507-516.
Stelfox, H. T., G. Chua, K. O’Rourke, and A. S. Detsky. 1998. Conflict of interest in the debate over calcium-channel antagonists. New England Journal of Medicine 338(2):101-106.
Steneck, N. H. 2006. ORI Introduction to the Responsible Conduct of Research. http://ori.hhs.gov/education/products/RCRintro/index.html (accessed August 3, 2011).
Stodden, V., and Yale Roundtable Participants. 2010. Reproducible research: Addressing the need for data and code sharing in computational science. Computing in Science and Engineering 12(5):8-13.
Taylor, C. F., D. Field, S.-A. Sansone, J. Aerts, R. Apweiler, M. Ashburner, C. A. Ball, P.-A. Binz, M. Bogue, T. Booth, A. Brazma, R. R. Brinkman, A. Michael Clark, E. W. Deutsch, O. Fiehn, J. Fostel, P. Ghazal, F. Gibson, T. Gray, G. Grimes, J. M. Hancock, N. W. Hardy, H. Hermjakob, R. K. Julian, M. Kane, C. Kettner, C. Kinsinger, E. Kolker, M. Kuiper, N. Le Novère, J. Leebens-Mack, S. E. Lewis, P. Lord, A.-M. Mallon, N. Marthandan, H. Masuya, R. McNally, A. Mehrle, N. Morrison, S. Orchard, J. Quackenbush, J. M. Reecy, D. G. Robertson, P. Rocca-Serra, H. Rodriguez, H. Rosenfelder, J. Santoyo-Lopez, R. H. Scheuermann, D. Schober, B. Smith, J. Snape, C. J. Stoeckert, K. Tipton, P. Sterk, A. Untergasser, J. Vandesompele, and S. Wiemann. 2008. Promoting coherent minimum reporting guidelines for biological and biomedical investigations: The MIBBI project. Nature Biotechnology 26(8):889-896.
Titus, S. L., J. A. Wells, and L. J. Rhoades. 2008. Repairing research integrity. Nature 453(7198):980-982.
TMQF Committee (Translational Medicine Quality Framework Committee). 2011. A Framework for the Quality of Translational Medicine with a Focus on Human Genomics Studies: Principles from the Duke Translational Medicine Quality Framework Committee. Durham, NC: Duke University.
Turner, E. H., A. M. Matthews, E. Linardatos, R. A. Tell, and R. Rosenthal. 2008. Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine 358(3):252-260.
van ’t Veer, L. J., H. Dai, M. J. van de Vijver, Y. D. He, A. A. M. Hart, M. Mao, H. L. Peterse, K. van der Kooy, M. J. Marton, A. T. Witteveen, G. J. Schreiber, R. M. Kerkhoven, C. Roberts, P. S. Linsley, R. Bernards, and S. H. Friend. 2002. Gene expression profiling predicts clinical outcome of breast cancer. Nature 415(6871):530-536.
Vedula, S. S., L. Bero, R. W. Scherer, and K. Dickersin. 2009. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. New England Journal of Medicine 361(20):1963-1971.
Vickers, A. 2008. Cancer data? Sorry, can’t have it. New York Times, January 22.
Wellcome Trust. 2011. Sharing Research Data to Improve Public Health: Full Joint Statement by Funders of Health Research. http://www.wellcome.ac.uk/About-us/Policy/Spotlight-issues/Data-sharing/Public-health-and-epidemiology/WTDV030690.htm (accessed August 11, 2011).
Yarborough, M., and R. R. Sharp. 2009. Public trust and research a decade later: What have we learned since Jesse Gelsinger’s death? Molecular Genetics and Metabolism 97(1):4-5.
Yarborough, M., K. Fryer-Edwards, G. Geller, and R. S. Sharp. 2009. Transforming the culture of biomedical research from compliance to trustworthiness: Insights from nonmedical sectors. Academic Medicine 84(4):472-476.
Zarin, D. A. 2005. Clinical trial registration. New England Journal of Medicine 352(15):1611.
Zarin, D. A., and T. Tse. 2008. Moving toward transparency of clinical trials. Science 319(5868):1340-1342.
Zarin, D. A., T. Tse, R. J. Williams, R. M. Califf, and N. C. Ide. 2011. The ClinicalTrials.gov results database—update and key issues. New England Journal of Medicine 364(9):852-860.