5

Responsible Parties

Omics technologies have ushered in a new era in biomedical research. Omics data are extremely complex and multidimensional, with a high risk of inaccuracies being introduced by inappropriate methods, human error, conflicts of interest, or acts of commission/omission. Omics research requires a multidisciplinary team with specialized expertise, which adds to the challenge of conducting scientifically rigorous research and makes overseeing and reviewing omics studies difficult. This multidimensionality introduces an inherent risk of overfitting the data, making independent validation critical. While other fields such as high-energy physics, astrophysics, and cosmology also require specialized expertise and multidisciplinary collaboration, and deal with data complexity and high dimensionality, the development of omics-based tests is different in that the tests have potential commercial value and there is potential for developers to reap financial gains. In addition, patient safety is paramount for omics-based tests that are used to aid patient treatment decisions. Although these characteristics are also true of drug development, that process has more uniform and more stringent oversight from the U.S. Food and Drug Administration (FDA); all new drugs must demonstrate clinical utility in well-designed clinical trials to gain FDA approval. Thus, those responsible for the integrity of omics research—investigators, institutions, funders, FDA, and journals—should rethink the processes and protections designed to ensure that omics research is scientifically rigorous, transparent, and conducted ethically, with proper institutional and regulatory oversight.
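The overfitting risk noted above can be made concrete with a toy simulation (illustrative only, not any published analysis): when thousands of candidate features are screened against a handful of samples, a "predictor" built from pure noise can look impressive on its training data yet perform at chance on independent validation data.

```python
import random

random.seed(0)  # deterministic toy example

def simulate(n_train=30, n_test=400, n_features=5000):
    """Build a one-feature 'signature' from pure noise and score it twice."""
    def sample(n):
        # Every feature is random noise and every label is a coin flip,
        # so there is no true signal for any predictor to find.
        X = [[random.gauss(0.0, 1.0) for _ in range(n_features)] for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        return X, y

    X_tr, y_tr = sample(n_train)
    X_te, y_te = sample(n_test)

    def class_means(j):
        ones = [X_tr[i][j] for i in range(n_train) if y_tr[i] == 1]
        zeros = [X_tr[i][j] for i in range(n_train) if y_tr[i] == 0]
        return sum(ones) / len(ones), sum(zeros) / len(zeros)

    # "Discovery": scan every feature and keep the one that best separates
    # the two classes in the training set -- a caricature of signature hunting.
    best = max(range(n_features),
               key=lambda j: abs(class_means(j)[0] - class_means(j)[1]))
    m1, m0 = class_means(best)
    sign = 1.0 if m1 > m0 else -1.0
    cutoff = (m1 + m0) / 2.0

    def accuracy(X, y):
        hits = sum((1 if sign * (x[best] - cutoff) > 0 else 0) == label
                   for x, label in zip(X, y))
        return hits / len(y)

    return accuracy(X_tr, y_tr), accuracy(X_te, y_te)

train_acc, test_acc = simulate()
print(f"apparent (training) accuracy:    {train_acc:.2f}")
print(f"independent validation accuracy: {test_acc:.2f}")
```

The apparent accuracy reflects only the selection of the luckiest feature out of 5,000; the validation accuracy hovers near chance, which is why a predictor must be evaluated on data that played no role in its discovery.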

The failures of the omics research at Duke University illustrate that current practices and safeguards can easily fall short (see Appendix B).



EVOLUTION OF TRANSLATIONAL OMICS

The Duke events thus provide a watershed illustration—reminiscent of the Gelsinger gene therapy case at the University of Pennsylvania, the Santillan mismatched heart transplant case at Duke University Hospital, the Johns Hopkins asthma trial death, and the viral link to chronic fatigue syndrome at the Whittemore Peterson Institute for Neuro-Immune Disease—of how such research can go awry even though institutions and other responsible parties have extensive systems in place to ensure research integrity, with roles and responsibilities delineated (Enserink, 2011; Kolata, 2001; Nelson and Weiss, 1999; Sloane, 2003; Yarborough and Sharp, 2009). These processes need to be rethought in the omics era. In short, the ability of health care decision makers to rely on the trustworthiness of omics-based tests to predict disease risk and treatment response will be limited unless renewed efforts are made by all parties responsible for the integrity of this research.

The committee makes four recommendations related to defining the roles and responsibilities of the key parties involved in the conduct and evaluation of omics research (Recommendations 4-7). These recommendations are directed toward investigators and institutions (i.e., intrainstitutional responsibilities), funders, FDA, and biomedical journals. Recommendations 1 through 3, which are discussed in Chapters 2-4, refer to responsibilities of investigators, and focus on recommended best practices for the development, validation, and clinical utility assessment of candidate omics-based tests. Recommendations 4 to 7 are similarly critical because, without the participation of institutions, investigators, funders, FDA, and journals, the committee's recommended evaluation processes for omics technologies intended for clinical use (Recommendations 1-3) cannot be implemented.
The committee recognized that the recommendations presented in this chapter may increase the oversight requirements for omics research in some cases, but decided that these potential costs were offset by the added safeguards to the integrity of this research. If an institution does not have the infrastructure or capability to follow the recommended Test Development and Evaluation Process defined in this report, then the committee believes that the institution should consider not engaging in the translation of omics-based discoveries into validated tests for use in clinical trials and potentially clinical practice.

The committee developed the recommendations discussed in this chapter by reviewing the available literature about the design, conduct, analysis, and reporting of omics research and by identifying lessons learned from case studies of the development of omics-based tests (see Appendixes A and B). This chapter emphasizes lessons learned from the Duke University case study (Appendix B) in particular because the most information is publicly available about this case and because the Duke case was specifically highlighted in the committee's statement of task. The committee also relied heavily on the work of previous National Academies reports that have reviewed the roles and responsibilities of the parties involved in research.

It is imperative that all responsible parties prepare for the omics research era, with its promise as well as its perils. This chapter discusses the details of how this preparation can be accomplished.

INTRAINSTITUTIONAL PARTIES

The roles and responsibilities of investigators and institutions that are involved in omics-based research are discussed together because both parties contribute to the scientific research culture in which omics research is conducted. They are also the two most responsible and the most knowledgeable parties in the entire evaluation process. Investigators control the culture of individual laboratories embedded within the larger institution. Individual laboratories can have unique values and cultural norms that are separate from the broader institutional culture. These variables become more complex as the research becomes more interdisciplinary, with the lead investigators setting the culture for the investigational team. Institutions and the institutional leadership, on the other hand, have the primary responsibility for the policies and procedures, reward systems, and values that contribute to the overarching institutional culture, as well as for the infrastructure of oversight and support for research. Institutions and their leaders also have the greatest responsibility for in-depth investigation of potential lapses in scientific integrity because they employ, promote, and supervise the investigators who conduct these studies.

The National Academies defined integrity in the research process as "the adherence by scientists and their institutions to honest and verifiable methods in proposing, performing, evaluating, and reporting research activities" (NAS, 1992, p. 27). The challenge is that science is a self-regulating community, with few comprehensive guidelines for responsible research practices (Steneck, 2006).
The guidelines that do exist often contradict each other (Emanuel et al., 2000). For example, there are inconsistencies in the rules governing the deidentification of personal health information, obtaining individual consent for future research, and the recruitment of research volunteers (IOM, 2009a). The 2011 Report of the Presidential Commission for the Study of Bioethical Issues recommended that the Common Rule be revised to include a section on investigators' responsibilities in order to bring it into harmony with FDA regulations for clinical research and international standards (PCSBI, 2011).

Moreover, when ethical standards and best practices are available to guide behavior, some investigators may still be unaware of these rules, or simply breach them. For example, Martinson and colleagues (2005) conducted a series of focus groups with investigators from top-tier research universities to identify the top 10 misbehaviors of greatest concern in science. They then surveyed more than 7,000 early- and mid-career U.S. investigators who have funding from the National Institutes of Health (NIH) and asked them to report on their own behavior. Thirty-three percent of the respondents reported engaging in at least 1 of the 10 misbehaviors during the previous 3 years. The three most common misbehaviors were: (1) overlooking other researchers' use of flawed data or questionable interpretations of data; (2) changing the design, methodology, or results of a study in response to pressure from a funding source; and (3) circumventing certain minor aspects of human-subjects research requirements (Martinson et al., 2005). This situation is problematic because the underlying science must be sound if patients are going to participate in clinical trials and, eventually, in consultation with their physicians, use research results for medical care decisions.

Investigators

Responsible conduct in any research, including omics research, starts with the investigators. This includes both junior and senior investigators. This section of the chapter describes the roles and responsibilities of investigators who conduct biomedical omics research with the goal to improve patient care. These responsibilities include the most basic principles of science, such as a serious and in-depth consideration in a discussion section of a journal article of "what might be wrong with the data and conclusions I have just reported" (Platt, 1964). The specific responsibilities discussed below include fostering a culture of scientific rigor and welcoming constructive criticism, comprehensively reporting the methods and results of a study, and making data and code publicly available so that a third party can verify the data and results. Box 5-1 highlights themes extracted from several representative case studies for investigators to consider.
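One theme from the case studies in Box 5-1 is that predictors were not locked down before validation. As a rough sketch of the idea (the model contents and workflow here are hypothetical, not the Duke predictor), a fully specified computational model can be serialized and fingerprinted at the end of the discovery phase, so that any later modification is detectable:

```python
# Hypothetical illustration: freezing ("locking down") a predictor before
# validation by recording a cryptographic fingerprint of its full definition.
import hashlib
import json

def fingerprint(model: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the model definition."""
    canonical = json.dumps(model, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# End of discovery phase: the predictor is fully specified...
locked_model = {
    "genes": ["GENE_A", "GENE_B", "GENE_C"],   # illustrative feature list
    "weights": [0.41, -1.30, 0.77],
    "decision_threshold": 0.5,
}
# ...and its fingerprint is recorded (e.g., deposited with a third party)
# before any validation sample is analyzed.
locked_hash = fingerprint(locked_model)

def verify_locked(model: dict, recorded_hash: str) -> bool:
    """True only if the model is byte-for-byte the one that was locked down."""
    return fingerprint(model) == recorded_hash

assert verify_locked(locked_model, locked_hash)

# Any silent edit during validation -- even a nudged threshold -- is caught.
tampered = dict(locked_model, decision_threshold=0.6)
assert not verify_locked(tampered, locked_hash)
print("lock-down verified")
```

Recording the fingerprint with an independent party before validation begins makes the lock-down auditable: reviewers can later confirm that the model evaluated in the validation study is exactly the one produced in discovery.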
BOX 5-1
Themes from the Case Studies for Investigators

The Duke Case Study

Several questions have emerged regarding the degree to which key tenets of scientific rigor (for both laboratory-based research and clinical trials) were followed in the Nevins laboratory at Duke University. First, there were numerous errors in the primary data (Baggerly and Coombes, 2009; Coombes et al., 2007). Predictors derived from the training datasets were not locked down, leading to flaws in the validation process and the omics-based tests that were developed. Second, major results in the papers published by the Duke investigators were not reproducible. For example, figures in the Hsu et al. paper could not be reproduced with the data provided (McShane, 2010b). Third, the Lancet Oncology paper states that the investigators had access to unblinded data, as indicated by the statement that "MD, PF, AP, CA, SM, JRN, and RDI had full access to the raw data"; it was subsequently confirmed that the data files had not been blinded by the European investigators when the data were originally sent (Goldberg, 2009). Fourth, the Duke investigators did not provide the public with full access to their data and code (Baggerly and Coombes, 2009; Baron et al., 2010). They also failed to address the questions and challenges of external investigators who were trying to reproduce their work to the mutual satisfaction of all parties involved (Baggerly, 2011; McShane, 2010c). In response to the National Cancer Institute's queries, the Duke investigators acknowledged that their tests were unreproducible and retracted the original papers (Bonnefoi et al., 2011; Hsu et al., 2010; Potti et al., 2011). Dr. Joseph Nevins, senior mentor of the investigators whose genomic predictors were used in the three clinical trials named in the IOM committee's statement of task, stated during discussions with the committee that "a critical flaw in the research effort was one of data corruption" (Nevins, 2011). Throughout this process, the responsibilities of the coinvestigators on the research team and lines of accountability were apparently unclear.

The OvaCheck Case Study

The investigators made their initial datasets publicly available. Independent investigators found numerous problems with the statistical and experimental methods and concluded that the results were unreproducible (Baggerly et al., 2004). Thus, in this case, making the data publicly available may have helped prevent the routine clinical use of an unvalidated screening test.

Commercially Developed Tests: Data and Code Availability

A review of the six commercially available tests discussed in Appendix A illustrates that public availability of all omics-based test data has not been standard practice. The field of omics is early in its development, and the standards for data sharing have been unclear and are only now slowly evolving toward more transparency. Commercial interests and protection of proprietary information also may have limited the public availability of some data and information.

These six cases highlight several examples in which test developers explicitly note the availability of data. For example, Paik et al. (2004), Deng et al. (2006), and Rosenberg et al. (2010) reported the computational model for Oncotype DX, AlloMap, and Corus CAD, respectively. Both tests developed as LDTs had published computational models (Oncotype DX and Corus CAD); only one FDA-cleared test has a published computational model (AlloMap). Discovery microarray data are available for MammaPrint, AlloMap, and Corus CAD (Deng et al., 2006; van 't Veer et al., 2002).* Buyse et al. (2006) reports that raw microarray data and clinical data for the MammaPrint clinical validation study were deposited with the European Bioinformatics Institute ArrayExpress database. Although there are examples of developers reporting the availability of a test's computational model or data used in discovery or validation, often sufficient information is not publicly available for external investigators to fully reproduce a test.

NOTE: See Appendixes A and B on the case studies for more information.
*Microarray data from Corus CAD are available, but PCR data used in test development are unavailable. Personal communication, Steve Rosenberg, October 21, 2011.

Culture

All investigators have a responsibility to promote a culture of scientific rigor and to transmit ethical principles of science to future generations of investigators. Scientific rigor can be fostered by developing clear standards of behavior, disseminating those standards through education and mentoring, and reinforcing the standards through exemplary practice at all levels of the research community (Frankel, 1995). Investigators who do not adhere to these values are not fulfilling their ethical responsibilities. Although many cultural issues are not unique to omics research, taking steps to improve scientific culture is particularly important in omics research because of the nature of omics discoveries, which depend on large datasets, complex analyses, and a specialized multidisciplinary team.

A number of influential reports have recommended sets of values, traditions, and standards that investigators should embody to promote a culture of scientific rigor. The National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine (IOM) collaborated in producing the report Responsible Science, Volume I: Ensuring the Integrity of the Research Process (NAS, 1992). This report highlighted the importance of investigators upholding the highest standards of honesty, integrity, objectivity, and collegiality.
The authoring committee directed individual investigators to accept formal responsibility for ensuring the integrity of the research process and creating an environment, a reward system, and a training system that encourage responsible research practices. A more recent National Academies report, On Being a Scientist: A Guide to Responsible Conduct in Research (NAS, 2009), identified three sets of obligations for investigators: (1) an obligation to merit the trust that their colleagues place in them (i.e., science is cumulative and investigators build on previous work); (2) an obligation to themselves (i.e., investigators should adhere to professional standards and develop personal integrity); and (3) an obligation to act in ways that serve the public (i.e., the public uses science to make policy decisions). The Office of Research Integrity (ORI) of the Department of Health and Human Services (HHS) also has outlined several values that investigators should share in promoting a culture of scientific rigor, including (1) honesty: conveying information truthfully and honoring commitments; (2) accuracy: reporting findings precisely and taking care to avoid errors; (3) efficiency: using resources wisely and avoiding waste; and (4) objectivity: letting the facts speak for themselves and avoiding improper bias (Steneck, 2006).

These reports outline general guiding principles for investigators' behavior. However, identifying the values and obligations that investigators should possess does not directly inform investigators on how they should respond in specific situations and conflicts. Ultimately, investigators' actions need to be informed by good judgment and personal integrity.

Two of the major influences on the development of investigators' values and integrity are advisors and mentors (Bird, 2001; NAS, 1992, 2009), who define, explain, and exemplify scientific norms and ethics. All members of the research team, including biostatisticians and bioinformatics scientists, should have access to mentors with the appropriate expertise and credentials. Senior investigators' conduct can reinforce or weaken the importance of complying with these scientific norms and values. Sprague and colleagues (2001), for example, conducted a study to identify the methods by which ethical beliefs are passed on to students. They surveyed faculty and graduate students and asked respondents to rank methods of teaching about ethics; 1,451 surveys were distributed to faculty and 627 were returned (45.2 percent return rate). An additional 6,000 surveys were sent to academic departments to be distributed to graduate students and 1,152 were returned (19.2 percent return rate). A major weakness of this study is the low response rates. However, both faculty and students ranked courses dealing with ethical issues as most influential in teaching students ethical beliefs.
Mentors in graduate school also were highly ranked, with graduate students ranking mentors as more important than faculty did. Other important influences included discussions in courses, laboratories, and seminars, as well as interactions with other graduate students (Sprague et al., 2001). In other words, young investigators' interactions with other investigators shaped their beliefs and values.

Another important component of promoting a scientifically rigorous culture, which falls to investigators, is valuing teamwork and mutual respect and empowering people at lower levels in the hierarchy to speak up if they observe a problem or have a concern regarding research practices. The aviation and energy industries provide evidence for the pivotal importance of creating cultures that value these characteristics and consistently expect and laud persons who speak up to alert the group to problems and concerns. For example, the aviation industry has recognized for some time that errors are more likely to happen when there is suboptimal teamwork and communication (Helmreich, 2000). Thus, improvements in aviation safety have been attributed to training crews on how to address and prevent human error, the role of leadership, the need for monitoring and crosschecking decision-making processes, and the use of checklists. This same approach has been applied successfully to the patient safety improvement movement to reduce the effects of human errors (Gawande, 2009; Hudson, 2003; Longo et al., 2005; Pronovost et al., 2003) and can be applied equally to the biomedical research enterprise.

Full Reporting

Fully reporting the methods and results of a study is essential for the reproducibility of research and for reviewers' and readers' evaluation of the validity of a study. Thus, investigators have a fundamental responsibility to provide a complete and accurate report of their methods and findings (NAS, 1992, 2009; Steneck, 2006). All publications—and omics publications in particular—should present a full and detailed description of the study methodology, the statistical analysis plan that was finalized before the validation data were analyzed, an accurate report of the results, and an honest assessment of the findings, including an explanation of limitations that may affect the conclusions (Platt, 1964; Steneck, 2006). This level of transparency should allow an independent third party to verify the data and results. As discussed in Appendix D, reporting guidelines are tools to help investigators meet this obligation and report the essential information and elements of a study. All investigators who are coauthors on a report—and particularly a senior investigator or mentor—also are responsible for understanding the specific aims, methods, major findings, and implications of the interdisciplinary research. They are responsible for reading the complete manuscript, suggesting edits, and being alert to misinterpretation, such as misrepresentation of findings and limitations, and for discussing such observations with appropriate members of the team or oversight groups.
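Reporting guidelines of the kind discussed above function, at bottom, as checklists. A minimal sketch of how such a checklist could be machine-checked before submission (every field name here is illustrative, not drawn from any actual guideline):

```python
# Hypothetical pre-submission check against a reporting guideline.
# Every field name below is illustrative, not taken from an actual guideline.
REQUIRED_ITEMS = [
    "study_design",
    "specimen_source",
    "analysis_plan_locked_before_validation",
    "data_availability_statement",
    "code_availability_statement",
    "limitations_discussed",
]

def missing_items(manuscript: dict) -> list:
    """Return the required reporting items that are absent or left empty."""
    return [item for item in REQUIRED_ITEMS if not manuscript.get(item)]

submission = {
    "study_design": "blinded validation of a locked gene-expression predictor",
    "specimen_source": "archived tumor specimens, single institution",
    "analysis_plan_locked_before_validation": True,
    "data_availability_statement": "raw data deposited in a public repository",
    "limitations_discussed": True,
    # note: no code availability statement is declared
}

print("missing:", missing_items(submission))
# → missing: ['code_availability_statement']
```

The value of such a check, as with the aviation checklists mentioned above, is less in its sophistication than in its consistency: every submission is tested against the same explicit list, so an omitted item is flagged before rather than after publication.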
Data and Code Availability and Transparency

The scientific community widely agrees that investigators should make the research data and code supporting a manuscript, as well as the statistical analysis plan that had been finalized before data were unblinded and available for analysis, publicly available at the time of publication (NAS, 1992, 2009; NRC, 1985, 2003). Transparency is essential for the interpretability and reproducibility of research and a tenet of any good scientific method. Indeed, the purpose of methods sections in journal publications is to provide enough detail so that other investigators can interpret the results and, if they wish, reproduce the study and obtain the same results. Thus, providing sufficient detail of methods allows independent investigators to verify published findings and conduct alternative analyses of the same data. It also discourages fraud and helps expedite the exchange of ideas (Peng et al., 2006). Investigators who refuse to share the evidentiary basis behind their conclusions, or the materials and analytical methods needed to replicate published experiments, fail to uphold transparency as a basic standard of science. In an era when much of the Methods section and/or elaborate data appear only in the Supplementary Materials section, more attention is needed to guide the reader through well-annotated supplementary material. This problem is perpetuated by the brevity of articles published in higher-impact journals.

The National Academies has issued numerous reports emphasizing the importance of data sharing. Sharing Research Data recommended that sharing research data at the time of publication should be a regular practice in science (NRC, 1985). A later report, Sharing Publication-Related Data and Materials, developed a uniform principle for sharing integral data and materials expeditiously (NRC, 2003). It recommended that authors include the code, algorithms, or other information that is central to verifying or replicating the claims in a publication. If the data and code cannot be included in the actual publication (e.g., because the data files are too large), the report recommended that the data and code be made freely available through other means in a format that allows an independent investigator to manipulate, analyze, and combine the data with other scientific data. The report also stipulated that, if publicly accessible repositories for data have been developed and are in general use, the relevant data should be deposited in those repositories. Investigators are responsible for anticipating which materials are most likely to be requested and should include a statement on how to access the materials in the published paper.
In On Being a Scientist, the National Academies addressed the challenge of sharing research data in the current environment, where the quantity and complexity of data are increasing and the cost of sharing data is high (NAS, 2009). The complications and cost of sharing large datasets also were recently highlighted in an issue of Science dedicated entirely to data collection, curation, and access issues (Science Staff, 2011). The National Academies concluded that, despite these challenges, investigators have a responsibility to develop methods to share their data and materials at the time of publication (NAS, 2009). Investigators may share data through centralized facilities or undertake collaborative efforts to form large databases, such as the database of Genotypes and Phenotypes (dbGaP), the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL/EBI), the National Library of Medicine's Gene Expression Omnibus (NLM/GEO), Compendia Bioscience, the UCSC Genome Browser, and ProteomeXchange. When data undergo extensive analysis as part of a scientific study, the requirements to share those data also include a requirement to share the software, code, and sometimes the hardware used in the analyses (NAS, 2009). Authors can facilitate the use of such information with graphical user interfaces introduced into the dataset, for example, as facilitated through nanoHUB (Klimeck, 2011).

Ultimately, many investigators are unwilling to comply with the requirement to share their data and code. For example, in an article for the New York Times, Andrew Vickers, a biostatistician at Memorial Sloan-Kettering Cancer Center, documented his lack of success in requesting cancer data from investigators at numerous institutions (Vickers, 2008). Vickers also referenced a survey conducted by John Kirwan of the University of Bristol on investigators' attitudes toward sharing data from clinical trials. Three-quarters of the investigators surveyed stated that they were opposed to making original trial data available. They cited several reasons for refusing, such as the difficulty of putting together a dataset and the risk of their data being analyzed using invalid methods. Vickers concluded that investigators are often opposed to the potential use of their data by other independent investigators who may make influential discoveries, and often resist challenges to their conclusions that emerge from new analyses. Investigators may also be resistant to sharing their data and code because of the time and effort needed to curate and annotate a dataset and support other investigators' access to the material.

The obstacles to sharing data and code may seem particularly daunting in omics research. However, the fields of molecular biology and structural biology widely use web-based genomic and proteomic databases (e.g., GenBank and the Protein Data Bank) (Brown, 2003). These databases allow investigators to share DNA and amino acid sequences, as well as protein structure data, and many journals mandate deposition of these data as a condition of publication.
Microarray assays produce an enormous quantity of data (Quackenbush, 2009), with as many as 1 million variant positions on the genome measured across thousands of samples, and next-generation RNA sequencing methods raise further challenges.

The scheme for Minimum Information About a Microarray Experiment (MIAME) was created and adopted by investigators in this field to improve the annotation of microarray data (Brazma et al., 2001). It established standard, comprehensive annotation requirements that have been adopted by most scientific journals. Data from more than 10,000 microarray studies have been deposited into public repositories designed to archive MIAME-compliant data (Brazma, 2009). MIAME also has stimulated the proteomics and metabolomics scientific communities to develop reporting standards and formats. In fact, the Minimum Information for Biological and Biomedical Investigations (MIBBI) project has cataloged more than 30 different reporting standards for biological and biomedical data (Taylor et al., 2008). Nevertheless, many investigators still fail to provide fully annotated data (Brazma, 2009; Quackenbush, 2009). Thus, further steps need to be taken to ensure that investigators share their data and code. The committee's recommendations to journals and funders (discussed below) are intended to create additional incentives for investigators to comply with data- and code-sharing norms. Issues of proprietary information can be dealt with by depositing the materials with a responsible third party that can ensure confidentiality and protection of the material (e.g., FDA). The patent system also protects private investments in omics research (SACGHS, 2010) (see Box 2-1 for a more detailed discussion of intellectual property law and related challenges associated with data sharing).

Institutions and Institutional Leaders

This section describes the roles and responsibilities of institutions that conduct biomedical omics research aimed at improving patient care, including: fostering a culture of scientific integrity, overseeing research, increasing awareness of reporting systems for lapses in research integrity, investigating credible concerns about scientific integrity, monitoring and managing financial and non-financial conflicts of interest, and supporting and protecting the intellectual independence of biostatisticians, bioinformatics scientists, pathologists, and other collaborators in omics research. These responsibilities lie ultimately with institutional leadership. Indeed, any institutional attempt to meet these responsibilities will fail without explicit and visible support and direction from institutional leadership (Schein, 2004). Some of these responsibilities are closely related to the responsibilities of the investigators.

Institutions, such as universities and companies, and their leaders, in collaboration with their investigators, play an essential role in promoting a culture that encourages investigators to act ethically and conduct scientifically rigorous research.
Institutions and their leadership bear direct responsibility for complying with existing rules and regulations governing research; overseeing and creating reward systems for investigators; providing training and education to investigators on relevant topics; and producing an environment of trust, openness, and honesty. The integrity of the research enterprise depends on investigators, collaborators, and observers feeling encouraged and supported when they identify and report either routine scientific disagreements or potential breaches of scientific integrity, regardless of their position within the institution. Institutional leaders also have direct responsibility, when concerns are raised, for establishing and supervising a "process of evaluation" of specific research results and claims by their investigators.

In the Duke University case, inadequacies in the institutional oversight processes and a lack of sufficient checks and balances allowed invalid

end of funding, and funders should financially support this requirement.

ii. Provide continuing support for independent repositories to guarantee ongoing access to relevant omics and clinical data.

iii. Support test validation in a CLIA-certified laboratory and consider the usefulness of an independent confirmation of a candidate omics-based test prior to evaluation for clinical use.

iv. Designate an official to alert the institutional leadership when serious allegations or questions have been raised that may warrant an institutional investigation; if the funder (e.g., NIH) has initiated that question, then the funder and institution should communicate during the investigation.

v. Establish lines of communication with other funders to be used when serious problems appear to involve interdependent research sponsored by another funder along the omics-based test development process.

5b: Federal funders of omics-based translational research should have authority to exercise the option of investigating any research being conducted by a funding recipient after requesting an investigation by the institution.

RECOMMENDATION 6: FDA

6a: In order to enable investigators and institutions to have a clear understanding of their regulatory responsibilities, FDA should develop and finalize a risk-based guidance or a regulation on:

i. Bringing omics-based tests to FDA for review.

ii. Oversight of LDTs.

6b: FDA should communicate the IDE requirements for use of omics-based tests in clinical trials to the OHRP, IRBs, and other relevant institutional leadership.

RECOMMENDATION 7: Journals

7: Journal editors should:

7a: Require authors who submit manuscripts describing clinical evaluations of omics-based tests to:

i. Register all clinical trials at www.clinicaltrials.gov or another trial registry acceptable to the journal.

ii. Make data, metadata, prespecified analysis plans, computer code, and fully specified computational procedures publicly available in an independently managed database (e.g., dbGaP) in standard format.

iii. Provide the journal with the sections of the research protocol relevant to their manuscript.

iv. Identify each author's role in the development, conduct, analysis, writing, and editing of the manuscript. Require the lead and senior authors to attest to the integrity of the study and the coauthors to confirm shared responsibility for study integrity.

v. Use appropriate guidelines (e.g., CONSORT, REMARK) and submit checklists to certify guideline use.

7b: Develop mechanisms to resolve possible serious errors in published data, metadata, code, and/or computational models and establish clear procedures for management of error reports.

7c: Alert the institutional leadership and all authors when a serious question of accuracy or integrity has been raised.

REFERENCES

ACS (American Cancer Society). 2011. Pilot and Exploratory Projects in Palliative Care of Cancer Patients and Their Families. http://www.cancer.org/acs/groups/content/@researchadministration/documents/document/acspc-023897.pdf (accessed August 10, 2011).
Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012a. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. BMC Medicine 10:51.
Altman, D. G., L. M. McShane, W. Sauerbrei, and S. E. Taube. 2012b. Reporting recommendations for tumor marker prognostic studies (REMARK): Explanation and elaboration. PLoS Medicine 9(5):e1001216.
Altshuler, J. S., and D. Altshuler. 2004. Organizational challenges in clinical genomic research. Nature 429(6990):478-481.
Andre, F., L. M. McShane, S. Michiels, D. F. Ransohoff, D. G. Altman, J. S. Reis-Filho, D. F. Hayes, and L. Pusztai. 2011. Biomarker studies: A call for a comprehensive biomarker study registry. Nature Reviews Clinical Oncology 8(3):171-176.
Annals of Internal Medicine. 2010. Information for Authors. http://www.annals.org/site/misc/ifora.xhtml (accessed August 22, 2011).
Apweiler, R., C. Aslanidis, T. Deufel, A. Gerstner, J. Hansen, D. Hochstrasser, R. Kellner, M. Kubicek, F. Lottspeich, E. Maser, H. W. Mewes, H. E. Meyer, S. Müllner, W. Mutter, M. Neumaier, P. Nollau, H. G. Nothwang, F. Ponten, A. Radbruch, K. Reinert, G. Rothe, H. Stockinger, A. Tárnok, M. J. Taussig, A. Thiel, J. Thiery, M. Ueffing, G. Valet, J. Vandekerckhove, C. Wagener, O. Wagner, and G. Schmitz. 2009. Approaching clinical proteomics: Current state and future fields of application in cellular proteomics. Cytometry, Part A 75(10):816-832.

Atlas, M. C. 2004. Retraction policies of high-impact biomedical journals. Journal of the Medical Library Association 92(2):242-250.
Baggerly, K. A. 2011. Forensic Bioinformatics. Presented at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
Baggerly, K. A., and K. R. Coombes. 2009. Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Annals of Applied Statistics 3(4):1309-1334.
Baggerly, K. A., and K. R. Coombes. 2011. What information should be required to support clinical "omics" publications? Clinical Chemistry 57(5):688-690.
Baggerly, K. A., J. S. Morris, and K. R. Coombes. 2004. Reproducibility of SELDI-TOF protein patterns in serum: Comparing datasets from different experiments. Bioinformatics 20(5):777-785.
Baggerly, K. A., K. R. Coombes, and E. S. Neeley. 2008. Run batch effects potentially compromise the usefulness of genomic signatures of ovarian cancer. Journal of Clinical Oncology 26(7):1186-1187.
Baron, A. E., K. Bandeen-Roche, D. A. Berry, J. Bryan, V. J. Carey, K. Chaloner, M. Delorenzi, B. Efron, R. C. Elston, D. Ghosh, J. D. Goldberg, S. Goodman, F. E. Harrell, S. Galloway Hilsenbeck, W. Huber, R. A. Irizarry, C. Kendziorski, M. R. Kosorok, T. A. Louis, J. S. Marron, M. Newton, M. Ochs, J. Quackenbush, G. L. Rosner, I. Ruczinski, S. Skates, T. P. Speed, J. D. Storey, Z. Szallasi, R. Tibshirani, and S. Zeger. 2010. Letter to Harold Varmus: Concerns about Prediction Models Used in Duke Clinical Trials. Bethesda, MD, July 19, 2010. http://www.cancerletter.com/categories/documents (accessed January 18, 2012).
Bekelman, J. E., Y. Li, and C. P. Gross. 2003. Scope and impact of financial conflicts of interest in biomedical research. Journal of the American Medical Association 289(4):454-465.
Berry, D. 2012. Statisticians and clinicians: Collaborations based on mutual respect. Amstat News. http://magazine.amstat.org/blog/2012/02/01/collaborationpolic/ (accessed February 9, 2012).
Bird, S. J. 2001. Mentors, advisors and supervisors: Their role in teaching responsible research conduct. Science and Engineering Ethics 7(4):455-467.
Blumenthal, D., N. Causino, E. Campbell, and K. S. Louis. 1996. Relationships between academic institutions and industry in the life sciences—an industry survey. New England Journal of Medicine 334(6):368-374.
Bonnefoi, H., A. Potti, M. Delorenzi, L. Mauriac, M. Campone, M. Tubiana-Hulin, T. Petit, P. Rouanet, J. Jassem, E. Blot, V. Becette, P. Farmer, S. Andre, C. R. Acharya, S. Mukherjee, D. Cameron, J. Bergh, J. R. Nevins, and R. D. Iggo. 2007. Validation of gene signatures that predict the response of breast cancer to neoadjuvant chemotherapy: A substudy of the EORTC 10994/BIG 00-01 clinical trial. Lancet Oncology 8(12):1071-1078.
Bonnefoi, H., A. Potti, M. Delorenzi, L. Mauriac, M. Campone, M. Tubiana-Hulin, T. Petit, P. Rouanet, J. Jassem, E. Blot, V. Becette, P. Farmer, S. Andre, C. Acharya, S. Mukherjee, D. Cameron, J. Bergh, J. R. Nevins, and R. D. Iggo. 2011. Retraction: Validation of gene signatures that predict the response of breast cancer to neoadjuvant chemotherapy: A substudy of the EORTC 10994/BIG 00-01 clinical trial. Lancet Oncology 12(2):116.
Brazma, A. 2009. Minimum Information About a Microarray Experiment (MIAME)—successes, failures, challenges. Scientific World Journal 9:420-423.
Brazma, A., P. Hingamp, J. Quackenbush, G. Sherlock, P. Spellman, C. Stoeckert, J. Aach, W. Ansorge, C. A. Ball, H. C. Causton, T. Gaasterland, P. Glenisson, F. C. P. Holstege, I. F. Kim, V. Markowitz, J. C. Matese, H. Parkinson, A. Robinson, U. Sarkans, S. Schulze-Kremer, J. Stewart, R. Taylor, J. Vilo, and M. Vingron. 2001. Minimum Information About a Microarray Experiment (MIAME)—toward standards for microarray data. Nature Genetics 29(4):365-371.

Brown, C. 2003. The changing face of scientific discourse: Analysis of genomic and proteomic database usage and acceptance. Journal of the American Society for Information Science and Technology 54(10):926-938.
Brundage, M. D., D. Davies, and W. J. Mackillop. 2002. Prognostic factors in non-small cell lung cancer: A decade of progress. Chest 122(3):1037-1057.
Burke, H. B., and D. E. Henson. 1993. Criteria for prognostic factors and for an enhanced prognostic system. Cancer 72:3131-3135.
Burton, A., and D. G. Altman. 2004. Missing covariate data within cancer prognostic studies: A review of current reporting and proposed guidelines. British Journal of Cancer 91(1):4-8.
Buyse, M., S. Loi, L. J. van 't Veer, G. Viale, M. Delorenzi, A. M. Glas, M. S. d'Assignies, J. Bergh, R. Lidereau, P. Ellis, A. Harris, J. Bogaerts, P. Therasse, A. Floore, M. Amakrane, F. Piette, E. T. Rutgers, C. Sotiriou, F. Cardoso, and M. J. Piccart. 2006. Validation and clinical utility of a 70-gene prognostic signature for women with node-negative breast cancer. Journal of the National Cancer Institute 98(17):1183-1192.
Califf, R. M. 2011a. Discussion at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
Califf, R. M. 2011b. Discussion at the Discovery of Process Working Group Meeting with Representatives of Duke Faculty and Administration, Washington, DC, August 22.
Chahal, A. P. S. 2011. Informatics in clinical research in oncology: Current state, challenges, and a future perspective. Cancer Journal 17(4):239-245.
Chan, A. W., and D. G. Altman. 2005. Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. British Medical Journal 330(7494):753.
Chan, A. W., K. Krleza-Jeric, I. Schmid, and D. G. Altman. 2004a. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. Canadian Medical Association Journal 171(7):735-740.
Chan, A. W., A. Hrobjartsson, M. T. Haahr, P. C. Gotzsche, and D. G. Altman. 2004b. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association 291(20):2457-2465.
Chan, M. M. 2009. Letter to Division of Medical Oncology, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289102.pdf (accessed February 9, 2012).
Choi, B., S. Drozdetski, M. Hackett, C. Lu, C. Rottenberg, L. Yu, D. Hunscher, and D. Clauw. 2005. Usability comparison of three clinical trial management systems. AMIA Annual Symposium Proceedings 2005:921.
Cokol, M., I. Iossifov, R. Rodriguez-Esteban, and A. Rzhetsky. 2007. How many scientific papers should be retracted? EMBO Reports 8(5):422-423.
Collins, F. 2010. Has the revolution arrived? Nature 464(7289):674-675.
Concato, J., A. R. Feinstein, and T. R. Holford. 1993. The risk of determining risk with multivariable models. Annals of Internal Medicine 118(3):201-210.
Coombes, K. R., J. Wang, and K. A. Baggerly. 2007. Microarrays: Retracing steps. Nature Medicine 13(11):1276-1277.
CSE (Council of Science Editors). 2009. CSE's White Paper on Promoting Integrity in Scientific Journal Publications, 2009 Update. http://www.councilscienceeditors.org/i4a/pages/index.cfm?pageid=3331 (accessed August 4, 2011).
Curfman, G. D., S. Morrissey, and J. M. Drazen. 2006. Response to expression of concern regarding VIGOR study. New England Journal of Medicine 354(11):1196-1199.

DeAngelis, C. D., J. M. Drazen, F. A. Frizelle, C. Haug, J. Hoey, R. Horton, S. Kotzin, C. Laine, A. Marusic, A. J. P. M. Overbeke, T. V. Schroeder, H. C. Sox, and M. B. Van Der Weyden. 2004. Clinical trial registration. Journal of the American Medical Association 292(11):1363-1364.
DeAngelis, C. D., J. M. Drazen, F. A. Frizelle, C. Haug, J. Hoey, R. Horton, S. Kotzin, C. Laine, A. Marusic, A. J. P. M. Overbeke, T. V. Schroeder, H. C. Sox, and M. B. Van Der Weyden. 2005. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. Journal of the American Medical Association 293(23):2927-2929.
DeMets, D. L. 2009. "Minding the Gap": Driving Clinical and Translational Research by Eliminating the Shortage of Biostatisticians. Bethesda, MD: Clinical Translational Science Award (CTSA) Consortium.
DeMets, D. L., R. Woolson, C. Brooks, and R. Qu. 1998. Where the jobs are: A study of Amstat News job advertisements. American Statistician 52(4):303-307.
Deng, M. C., H. J. Eisen, M. R. Mehra, M. Billingham, C. C. Marboe, G. Berry, J. Kobashigawa, F. L. Johnson, R. C. Starling, S. Murali, D. F. Pauly, H. Baron, J. G. Wohlgemuth, R. N. Woodward, T. M. Klingler, D. Walther, P. G. Lal, S. Rosenberg, S. Hunt, and for the CARGO Investigators. 2006. Noninvasive discrimination of rejection in cardiac allograft recipients using gene expression profiling. American Journal of Transplantation 6(1):150-160.
Dewald, W. G., J. G. Thursby, and R. G. Anderson. 1986. Replication in empirical economics: The journal of money, credit and banking project. American Economic Review 76(4):587-603.
Dickersin, K., and I. Chalmers. 2010. Recognising, Investigating and Dealing with Incomplete and Biased Reporting of Clinical Research: From Francis Bacon to the World Health Organization. http://www.jameslindlibrary.org (accessed June 11, 2010).
Drazen, J., M. B. Van Der Weyden, P. Sahni, J. Rosenberg, A. Marusic, C. Laine, S. Kotzin, R. Horton, P. C. Hebert, C. Haug, F. Godlee, F. A. Frizelle, P. W. Leeuw, and C. D. DeAngelis. 2009. Uniform format for disclosure of competing interests in ICMJE journals. New England Journal of Medicine 361(19):1896-1897.
Drazen, J. M., P. W. de Leeuw, C. Laine, C. D. Mulrow, C. D. DeAngelis, F. A. Frizelle, F. Godlee, C. Haug, P. C. Hébert, S. Kotzin, A. Marusic, H. Reyes, and J. Rosenberg. 2010. Toward more uniform conflict disclosures: The updated ICMJE conflict of interest reporting form. Annals of Internal Medicine 153(4):268-269.
Dressman, H. K., A. Berchuck, G. Chan, J. Zhai, A. Bild, R. Sayer, J. Cragun, J. Clarke, R. S. Whitaker, L. Li, G. Gray, J. Marks, G. S. Ginsburg, A. Potti, M. West, J. R. Nevins, and J. M. Lancaster. 2007. An integrated genomic-based approach to individualized treatment of patients with advanced-stage ovarian cancer. Journal of Clinical Oncology 25(5):517-525.
Dressman, H. K., A. Potti, J. R. Nevins, and J. M. Lancaster. 2008. In reply. Journal of Clinical Oncology 26(7):1187-1188.
Dwan, K., D. G. Altman, J. A. Arnaiz, J. Bloom, A. Chan, E. Cronin, E. Decullier, P. J. Easterbrook, E. Von Elm, C. Gamble, D. Ghersi, J. P. A. Ioannidis, J. Simes, and P. R. Williamson. 2008. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 3(8):e3081.
Emanuel, E. J., D. Wendler, and C. Grady. 2000. What makes clinical research ethical? Journal of the American Medical Association 283(20):2701-2711.
Enserink, M. 2011. Authors pull the plug on second paper supporting viral link to chronic fatigue syndrome. Science December 28.

FDA (Food and Drug Administration). 2007. Draft Guidance for Industry, Clinical Laboratories, and FDA Staff—In Vitro Diagnostic Multivariate Index Assays. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm079148.htm (accessed February 1, 2012).
FDA. 2011a. Draft Guidance for Industry and Food and Drug Administration Staff—In Vitro Companion Diagnostic Devices. http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm262292.htm (accessed December 15, 2011).
FDA. 2011b. FDA Establishment Inspection Report, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289106.pdf (accessed February 9, 2012).
Fielding, L. P., C. M. Fenoglio-Preiser, and S. Freedman. 1992. The future of prognostic factors in outcome prediction for patients with cancer. Cancer 70:2367-2377.
Fleming, T. R. 2010. Clinical trials: Discerning hype from substance. Annals of Internal Medicine 153:400-406.
Fontanarosa, P. B., A. Flanagin, and C. D. DeAngelis. 2011. Reporting conflicts of interest, financial aspects of research, and role of sponsors in funded studies. JAMA 294(1):110-111.
Frankel, M. S. 1995. Commission on research integrity: Origins and charge. In Professional Ethics Report. http://www.aaas.org/spp/sfrl/per/per3.htm (accessed August 3, 2011).
Fung, E. T. 2010. A recipe for proteomics diagnostic test development: The OVA1 test, from biomarker discovery to FDA clearance. Clinical Chemistry 56(2):327-329.
Gasparini, G., F. Pozza, and A. L. Harris. 1993. Evaluating the potential usefulness of new prognostic and predictive indicators on node-negative breast cancer patients. Journal of the National Cancer Institute 85(15):1206-1219.
Gawande, A. 2009. The Checklist Manifesto: How to Get Things Right. New York: Metropolitan Books.
Geller, G., A. Boyce, D. E. Ford, and J. Sugarman. 2010. Beyond "compliance": The role of institutional culture in promoting research integrity. Academic Medicine 85(8):1296-1302.
Hall, P. A., and J. J. Going. 1999. Predicting the future: A critical appraisal of cancer prognosis studies. Histopathology 35:489-494.
Helmreich, R. L. 2000. On error management: Lessons from aviation. BMJ 320(7237):781-785.
Hsu, D. S., B. S. Balakumaran, C. R. Acharya, V. Vlahovic, K. S. Walters, K. Garman, C. Anders, R. F. Riedel, J. Lancaster, D. Harpole, H. K. Dressman, J. R. Nevins, P. G. Febbo, and A. Potti. 2007. Pharmacogenomic strategies provide a rational approach to the treatment of cisplatin-resistant patients with advanced cancer. Journal of Clinical Oncology 25(28):4350-4357.
Hsu, D. S., B. S. Balakumaran, C. R. Acharya, V. Vlahovic, K. S. Walters, K. Garman, C. Anders, R. F. Riedel, J. Lancaster, D. Harpole, H. K. Dressman, J. R. Nevins, P. G. Febbo, and A. Potti. 2010. Retraction to Journal of Clinical Oncology 25(28):4350-4357.
Hudson, P. 2003. Applying the lessons of high risk industries to health care. Quality & Safety in Health Care 12(Suppl 1):i7-i12.
ICMJE (International Committee of Medical Journal Editors). 2009a. Ethical Considerations in the Conduct and Reporting of Research: Authorship and Contributorship. http://www.icmje.org/ethical_1author.html (accessed February 2, 2012).
ICMJE. 2009b. Manuscript Preparation and Submission: Preparing a Manuscript for Submission to a Biomedical Journal. http://www.icmje.org/manuscript_1prepare.html (accessed February 2, 2012).
ICMJE. 2009c. Publishing and Editorial Issues Related to Publication in Biomedical Journals: Corrections, Retractions and "Expressions of Concern." http://www.icmje.org/publishing_2corrections.html (accessed August 11, 2011).
Ioannidis, J. P. A., and M. J. Khoury. 2011. Improving validation practices in "omics" research. Science 334(6060):1230-1232.

IOM (Institute of Medicine). 2002. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: The National Academies Press.
IOM. 2009a. Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health through Research. Washington, DC: The National Academies Press.
IOM. 2009b. Conflict of Interest in Medical Research, Education, and Practice. Edited by B. Lo and M. Field. Washington, DC: The National Academies Press.
IOM. 2011. Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press.
IOTF (Interagency Oncology Task Force). 2011. Joint Fellowship Training Program. http://iotftraining.nci.nih.gov/index.html (accessed October 28, 2011).
Jasny, B. R., G. Chin, L. Chong, and S. Vignieri. 2011. Again, and again, and again. Science 334(6060):1225.
Kaiser, J. 2011. Public Health Service Issues Final Conflicts of Interest Rule. http://news.sciencemag.org/scienceinsider/2011/08/new-us-conflict-of-interest-rule.html (accessed September 8, 2011).
Kim, S., R. Millard, P. Nisbet, C. Cox, and E. Caine. 2004. Potential research participants' views regarding researcher and institutional financial conflicts of interest. Journal of Medical Ethics 30(1):73-79.
Klimeck, G. 2011. Platform for Collaborative Research with Quantifiable Impact on Research and Education. Paper presented at Cyberinfrastructure Days Conference, Ann Arbor, MI.
Kolata, G. 2001. Johns Hopkins death brings halt to U.S.-financed human studies. New York Times, July 20.
Korn, D., and S. Ehringhaus. 2006. Principles for strengthening the integrity of clinical research. PLoS Clinical Trials 1(1):e1.
Kornbluth, S. 2011. Discussion at the Discovery of Process Working Group Meeting with Representatives of Duke Faculty and Administration, Washington, DC, August 22.
Kornbluth, S. A., and V. Dzau. 2011. Predictors of Chemotherapy Response: Background Information: Draft. Duke University.
Laine, C., R. Horton, C. D. DeAngelis, J. M. Drazen, F. A. Frizelle, F. Godlee, C. Haug, P. C. Hébert, S. Kotzin, A. Marusic, P. Sahni, T. V. Schroeder, H. C. Sox, M. B. Van Der Weyden, and F. W. A. Verheugt. 2007. Clinical trial registration—looking back and moving ahead. New England Journal of Medicine 356(26):2734-2736.
Longo, D. R., J. E. Hewett, B. Ge, and S. Schubert. 2005. The long road to patient safety. JAMA 294(22):2858-2865.
Lowrance, W. W. 2006. Access to Collections of Data and Materials for Health Research: A Report to the Medical Research Council and the Wellcome Trust. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@msh_grants/documents/web_document/wtx030842.pdf (accessed August 10, 2011).
Marshall, E. 2011. Unseen world of clinical trials emerges from U.S. database. Science 333(6039):145.
Martinson, B. C., M. S. Anderson, and R. de Vries. 2005. Scientists behaving badly. Nature 435(7043):737-738.
McCullough, B. D. 2007. Got replicability? The journal of money, credit, and banking archive. Econ Journal Watch 4(3):326-337.
McGuire, W. L. 1991. Breast cancer prognostic factors: Evaluation guidelines. Journal of the National Cancer Institute 83:154-155.
McShane, L. M. 2010a. NCI Address to the Institute of Medicine Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials. Presented at Meeting 1. Washington, DC, December 20.
McShane, L. 2010b. Reanalysis Report for Cisplatin Chemosensitivity Predictor. Bethesda, MD: NCI.

McShane, L. M. 2010c. NCI Address to Institute of Medicine Committee Convened to Review Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials. Meeting 1: Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, December 20.
McShane, L. M., D. G. Altman, W. Sauerbrei, S. E. Taube, M. Gion, and G. M. Clark. 2005. REporting recommendations for tumor MARKer prognostic studies (REMARK). Journal of the National Cancer Institute 97(16):1180-1184.
Meldrum, D. R., and A. H. DeCherney. 2011. The who, why, what, when, where, and how of clinical trial registries. Fertility and Sterility 96(1):2-5.
Mischak, H., R. Apweiler, R. Banks, M. Conaway, J. Coon, A. Dominiczak, J. Ehrich, D. Fliser, M. Girolami, H. Hermjakob, D. Hochstrasser, J. Jankowski, B. Julian, W. Kolch, Z. Massy, C. Neusuess, J. Novak, K. Peter, K. Rossing, J. Schanstra, J. Semmes, D. Theodorescu, V. Thongboonkerd, E. Weissinger, J. Van Eyk, and T. Yamamoto. 2007. Clinical proteomics: A need to define the field and to begin to set adequate standards. PROTEOMICS—Clinical Applications 1(2):148-156.
Moher, D., S. Hopewell, K. F. Schulz, V. Montori, P. C. Gotzsche, P. J. Devereaux, D. Elbourne, M. Egger, and D. G. Altman. 2010. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology 63(8):e1-e37.
MoS (Manual of Style). 2007. AMA Manual of Style: A Guide for Authors and Editors, 10th ed. New York: Oxford University Press, Inc.
Naik, G. 2011a. Mistakes in scientific studies surge. Wall Street Journal, August 10.
Naik, G. 2011b. Scientists' elusive goal: Reproducing study results. Wall Street Journal, December 2.
NAS (National Academy of Sciences). 1992. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: National Academy Press.
NAS. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research, 3rd ed. Washington, DC: The National Academies Press.
Nature. 2011. Availability of Data and Material. http://www.nature.com/authors/policies/availability.html (accessed August 15, 2011).
Nelson, D., and R. Weiss. 1999. Hasty Decisions in the Race to a Cure? Gene Therapy Study Proceeded Despite Safety, Ethics Concerns. http://www.washingtonpost.com/wp-srv/WPcap/1999-11/21/101r-112199-idx.html (accessed October 27, 2011).
Nevins, J. 2011. Genomic Strategies to Address the Challenge of Personalizing Cancer Therapy. Presented at the Workshop of the IOM Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials, Washington, DC, March 30-31.
NIH (National Institutes of Health). 1998. NIH Policy for Data and Safety Monitoring. http://grants.nih.gov/grants/guide/notice-files/not98-084.html (accessed July 22, 2011).
NIH. 2010. NIH Grants Policy Statement. http://grants.nih.gov/grants/policy/nihgps_2010/index.htm (accessed July 22, 2010).
NIH. 2011. Mission. http://www.nih.gov/about/mission.htm (accessed October 19, 2011).
NRC (National Research Council). 1985. Sharing Research Data. Washington, DC: National Academy Press.
NRC. 2003. Sharing Publication-Related Data and Materials: Responsibilities of Authorship in the Life Sciences. Washington, DC: The National Academies Press.
NRC. 2005. Catalyzing Inquiry at the Interface of Computing and Biology. Washington, DC: The National Academies Press.
NRC and IOM. 2002. Integrity in Scientific Research: Creating an Environment That Promotes Responsible Conduct. Washington, DC: The National Academies Press.
NSF (National Science Foundation). 2001. Grant General Conditions (GC-1). http://www.nsf.gov/pubs/2001/gc101/gc101rev1.pdf (accessed August 11, 2011).

OHSR (Office of Human Subjects Research). 2006. Sheet 6: Guidelines for Writing Informed Consent Documents. http://ohsr.od.nih.gov/info/sheet6.html (accessed October 27, 2011).
ORI (Office of Research Integrity). 2011. About ORI. http://ori.hhs.gov/about/index.shtml (accessed September 21, 2011).
Paik, S., S. Shak, G. Tang, C. Kim, J. Baker, M. Cronin, F. L. Baehner, M. G. Walker, D. Watson, T. Park, W. Hiller, E. R. Fisher, D. L. Wickerham, J. Bryant, and N. Wolmark. 2004. A multigene assay to predict recurrence of tamoxifen-treated, node-negative breast cancer. New England Journal of Medicine 351(27):2817-2826.
Pathwork Diagnostics. 2010. Pathwork Tissue of Origin Test for FFPE Cleared by U.S. Food and Drug Administration. http://www.pathworkdx.com/News/M129_FDA_Clearance_Final.pdf (accessed November 17, 2011).
PCF (Prostate Cancer Foundation). 2011. Prostate Cancer Research. http://www.pcf.org/site/c.leJRIROrEpH/b.5780289/k.D2E4/Research.htm (accessed August 10, 2011).
PCSBI (Presidential Commission for the Study of Bioethical Issues). 2011. Moral Science: Protecting Participants in Human Subjects Research. http://bioethics.gov/cms/sites/default/files/Moral%20Science%20-%20Final.pdf (accessed December 21, 2011).
Peng, R. D. 2009. Reproducible research and Biostatistics. Biostatistics 10(3):405-408.
Peng, R. D. 2011. Reproducible research in computational science. Science 334(6060):1226-1227.
Peng, R. D., F. Dominici, and S. L. Zeger. 2006. Reproducible epidemiologic research. American Journal of Epidemiology 163(9):783-789.
Philip, R. O., M. A. Payne, W. Andrew, B. S. Greaves, and T. J. Kipps. 2003. CRC clinical trials management system (CTMS): An integrated information management solution for collaborative clinical research. AMIA Annual Symposium Proceedings 2003:967.
PhRMA Foundation. 2011. 2012 Awards in Pharmacology. http://phrmafoundation.org/download/PhRMA%20Bro_pharmacology.pdf (accessed August 10, 2011).
Piwowar, H. A. 2011. Who shares? Who doesn't? Factors associated with openly archiving raw research data. PLoS ONE 6(7):e18657.
Piwowar, H. A., and W. W. Chapman. 2008. A review of journal policies for sharing research data. AMIA Annual Symposium Proceedings 2008:596-600.
Platt, J. R. 1964. Strong inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 146(3642):347-353.
Potti, A. 2009. Letter to FDA's CDER from Division of Medical Oncology, Duke University Medical Center. http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/InVitroDiagnostics/UCM289103.pdf (accessed February 9, 2012).
Potti, A., and J. R. Nevins. 2007. Potti et al. Reply. Nature Medicine 13(11):1277-1278.
Potti, A., H. K. Dressman, A. Bild, R. F. Riedel, G. Chan, R. Sayer, J. Cragun, H. Cottrill, M. J. Kelley, R. Petersen, D. Harpole, J. Marks, A. Berchuck, G. S. Ginsburg, P. Febbo, J. Lancaster, and J. R. Nevins. 2006a. Genomic signatures to guide the use of chemotherapeutics. Nature Medicine 12(11):1294-1300.
Potti, A., S. Mukherjee, R. Petersen, H. K. Dressman, A. Bild, J. Koontz, R. Kratzke, M. A. Watson, M. Kelley, G. S. Ginsburg, M. West, D. H. Harpole, and J. R. Nevins. 2006b. A genomic strategy to refine prognosis in early-stage non-small-cell lung cancer. New England Journal of Medicine 355(6):570-580.
Potti, A., H. K. Dressman, A. Bild, G. Chan, R. Sayer, J. Cragun, H. Cottrill, M. J. Kelley, R. Petersen, D. Harpole, J. Marks, A. Berchuck, G. S. Ginsburg, P. Febbo, J. Lancaster, and J. R. Nevins. 2011. Retraction: Genomic signatures to guide the use of chemotherapeutics. Nature Medicine 17(1):135.
Pronovost, P. J., B. Weast, C. G. Holzmueller, B. J. Rosenstein, R. P. Kidwell, K. B. Haller, E. R. Feroli, J. B. Sexton, and H. R. Rubin. 2003. Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Quality & Safety in Health Care 12:405-410.
Quackenbush, J. 2009. Data reporting standards: Making the things we use better. Genome Medicine 1(11):111.
Quest Diagnostics. 2011. Licenses and Accreditation. http://www.questdiagnostics.com/brand/company/b_comp_licenses.html (accessed November 21, 2011).
Ransohoff, D. F. 2002. Challenges and opportunities in evaluating diagnostic tests. Journal of Clinical Epidemiology 55(12):1178-1182.
Ransohoff, D. F., and A. R. Feinstein. 1978. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. New England Journal of Medicine 299(17):926-930.
Ranstam, J., M. Buyse, S. L. George, S. Evans, N. L. Geller, B. Scherrer, E. Lesaffre, G. Murray, L. Edler, J. L. Hutton, T. Colton, and P. Lachenbruch. 2000. Fraud in medical research: An international survey of biostatisticians. Controlled Clinical Trials 21(5):415-427.
Rennie, D. 1997. Thyroid storm. Journal of the American Medical Association 277(15):1238-1243.
Rhodes, R., and J. J. Strain. 2004. Whistleblowing in academic medicine. Journal of Medical Ethics 30:35-39.
Riley, R. D., K. R. Abrams, A. J. Sutton, P. C. Lambert, D. R. Jones, D. Heney, and S. A. Burchill. 2003. Reporting of prognostic markers: Current problems and development of guidelines for evidence-based practice in the future. British Journal of Cancer 88(8):1191-1198.
Rosenberg, S., M. R. Elashoff, P. Beineke, S. E. Daniels, J. A. Wingrove, W. G. Tingley, P. T. Sager, A. J. Sehnert, M. Yau, W. E. Kraus, K. Newby, R. S. Schwartz, S. Voros, S. G. Ellis, N. Tahirkheli, R. Waksman, J. McPherson, A. Lansky, M. E. Winn, N. J. Schork, E. J. Topol, and for the PREDICT (Personalized Risk Evaluation and Diagnosis In the Coronary Tree) Investigators. 2010. Multicenter validation of the diagnostic accuracy of a blood-based gene expression test for assessing obstructive coronary artery disease in nondiabetic patients. Annals of Internal Medicine 153(7):425-434.
SACGHS (Secretary’s Advisory Committee on Genetics, Health, and Society). 2010. Gene Patents and Licensing Practices and Their Impact on Patient Access to Genetic Tests. http://oba.od.nih.gov/oba/sacghs/reports/SACGHS_patents_report_2010.pdf (accessed January 5, 2012).
Schein, E. 2004. Organizational Culture and Leadership, 3rd ed. The Jossey-Bass Business & Management Series. San Francisco, CA: John Wiley & Sons, Inc.
Science. 2011. General Information for Authors. http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml#dataavail (accessed August 15, 2011).
Science Staff. 2011. Challenges and opportunities. Science 331(6018):692-693.
Segal, M. R., H. Xiong, H. Bengtsson, R. Bourgon, and R. Gentleman. 2012. Querying genomic databases: Refining the connectivity map. Statistical Applications in Genetics and Molecular Biology 11(2):1-34.
Sherpa. 2011. Research Funders’ Open Access Policies. http://www.sherpa.ac.uk/juliet/ (accessed September 9, 2011).
Simon, R. 2008. The use of genomics in clinical trial design. Clinical Cancer Research 14(19):5984-5993.
Simon, R. 2010. Clinical trials for predictive medicine: New challenges and paradigms. Clinical Trials 7(5):Epub 2010 Mar.
Simon, R., and D. G. Altman. 1994. Statistical aspects of prognostic factor studies in oncology. British Journal of Cancer 69(6):979-985.
Sloane, A. 2003. Grading Duke: “A” for acknowledgment. Journal of Health Law 36(4):627-645.
Song, F., S. Parekh-Bhurke, L. Hooper, Y. Loke, J. Ryder, A. Sutton, C. Hing, and I. Harvey. 2009. Extent of publication bias in different categories of research cohorts: A meta-analysis of empirical studies. BMC Medical Research Methodology 9(1):79.
Sprague, R. L., J. Daw, and G. C. Roberts. 2001. Influences on the ethical beliefs of graduate students concerning research. Science and Engineering Ethics 7(4):507-516.
Stelfox, H. T., G. Chua, K. O’Rourke, and A. S. Detsky. 1998. Conflict of interest in the debate over calcium-channel antagonists. New England Journal of Medicine 338(2):101-106.
Steneck, N. H. 2006. ORI Introduction to the Responsible Conduct of Research. http://ori.hhs.gov/education/products/RCRintro/index.html (accessed August 3, 2011).
Stodden, V., and Yale Roundtable Participants. 2010. Reproducible research: Addressing the need for data and code sharing in computational science. Computing in Science and Engineering 12(5):8-13.
Taylor, C. F., D. Field, S.-A. Sansone, J. Aerts, R. Apweiler, M. Ashburner, C. A. Ball, P.-A. Binz, M. Bogue, T. Booth, A. Brazma, R. R. Brinkman, A. Michael Clark, E. W. Deutsch, O. Fiehn, J. Fostel, P. Ghazal, F. Gibson, T. Gray, G. Grimes, J. M. Hancock, N. W. Hardy, H. Hermjakob, R. K. Julian, M. Kane, C. Kettner, C. Kinsinger, E. Kolker, M. Kuiper, N. L. Novere, J. Leebens-Mack, S. E. Lewis, P. Lord, A.-M. Mallon, N. Marthandan, H. Masuya, R. McNally, A. Mehrle, N. Morrison, S. Orchard, J. Quackenbush, J. M. Reecy, D. G. Robertson, P. Rocca-Serra, H. Rodriguez, H. Rosenfelder, J. Santoyo-Lopez, R. H. Scheuermann, D. Schober, B. Smith, J. Snape, C. J. Stoeckert, K. Tipton, P. Sterk, A. Untergasser, J. Vandesompele, and S. Wiemann. 2008. Promoting coherent minimum reporting guidelines for biological and biomedical investigations: The MIBBI project. Nature Biotechnology 26(8):889-896.
Titus, S. L., J. A. Wells, and L. J. Rhoades. 2008. Repairing research integrity. Nature 453(7198):980-982.
TMQF Committee (Translational Medicine Quality Framework Committee). 2011. A Framework for the Quality of Translational Medicine with a Focus on Human Genomics Studies: Principles from the Duke Translational Medicine Quality Framework Committee. Durham, NC: Duke University.
Turner, E. H., A. M. Matthews, E. Linardatos, R. A. Tell, and R. Rosenthal. 2008. Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine 358(3):252-260.
van ‘t Veer, L. J., H. Dai, M. J. van de Vijver, Y. D. He, A. A. M. Hart, M. Mao, H. L. Peterse, K. van der Kooy, M. J. Marton, A. T. Witteveen, G. J. Schreiber, R. M. Kerkhoven, C. Roberts, P. S. Linsley, R. Bernards, and S. H. Friend. 2002. Gene expression profiling predicts clinical outcome of breast cancer. Nature 415(6871):530-536.
Vedula, S. S., L. Bero, R. W. Scherer, and K. Dickersin. 2009. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. New England Journal of Medicine 361(20):1963-1971.
Vickers, A. 2008. Cancer data? Sorry, can’t have it. New York Times, January 22.
Wellcome Trust. 2011. Sharing Research Data to Improve Public Health: Full Joint Statement by Funders of Health Research. http://www.wellcome.ac.uk/About-us/Policy/Spotlight-issues/Data-sharing/Public-health-and-epidemiology/WTDV030690.htm (accessed August 11, 2011).
Yarborough, M., and R. R. Sharp. 2009. Public trust and research a decade later: What have we learned since Jesse Gelsinger’s death? Molecular Genetics and Metabolism 97(1):4-5.
Yarborough, M., K. Fryer-Edwards, G. Geller, and R. S. Sharp. 2009. Transforming the culture of biomedical research from compliance to trustworthiness: Insights from nonmedical sectors. Academic Medicine 84(4):472-476.
Zarin, D. A. 2005. Clinical trial registration. New England Journal of Medicine 352(15):1611.
Zarin, D. A., and T. Tse. 2008. Moving toward transparency of clinical trials. Science 319(5868):1340-1342.
Zarin, D. A., T. Tse, R. J. Williams, R. M. Califf, and N. C. Ide. 2011. The ClinicalTrials.gov results database—update and key issues. New England Journal of Medicine 364(9):852-860.