3  Experimental Design and Data Analysis

The greatest challenge of toxicogenomics is no longer data generation but effective collection, management, analysis, and interpretation of data. Although genome sequencing projects have managed large quantities of data, genome sequencing deals with producing a reference sequence that is relatively static, in the sense that it is largely independent of the tissue type analyzed or a particular stimulation. In contrast, transcriptomes, proteomes, and metabolomes are dynamic, and their analysis must be linked to the state of the biologic samples under analysis. Further, genetic variation influences the response of an organism to a stimulus. Although the various toxicogenomic technologies (genomics, transcriptomics, proteomics, and metabolomics) survey different aspects of cellular responses, the approaches to experimental design and high-level data analysis are universal.

This chapter describes the essential elements of experimental design and data analysis for toxicogenomic experiments (see Figure 3-1) and reviews issues associated with experimental design and data analysis. The discussion focuses on transcriptome profiling using DNA microarrays. However, the approaches and issues discussed here apply to the various toxicogenomic technologies and their applications. This chapter also defines the term "biomarker."

EXPERIMENTAL DESIGN

The types of biologic inferences that can be drawn from toxicogenomic experiments are fundamentally dependent on experimental design. The design must reflect the question that is being asked, the limitations of the experimental system, and the methods that will be used to analyze the data. Many experiments using global profiling approaches have been compromised by inadequate consideration of experimental design issues. Although experimental design for toxicogenomics remains an area of active research, a number of universal principles have emerged.

First and foremost is the value of broad sampling of biologic variation (Churchill 2002; Simon et al. 2002; Dobbin and Simon 2005). Many early experiments used far too few samples to draw firm conclusions, possibly because of the cost of individual microarrays. As the cost of using microarrays and other toxicogenomic technologies has declined, experiments have begun to include sampling protocols that provide better estimates of biologic and systematic variation within the data. Still, high costs remain an obstacle to large, population-based studies. It would be desirable to introduce power calculations into the design of toxicogenomic experiments (Simon et al. 2002). However, uncertainties about the variability inherent in the assays and in the study populations, as well as interdependencies among the genes and their levels of expression, limit the utility of power calculations.

FIGURE 3-1 Overview of the workflow in a toxicogenomic experiment. Regardless of the goal of the analysis, all analyses share some common elements. However, the underlying experimental hypothesis, reflected in the ultimate goal of the analysis, should dictate the details of every step in the process, starting from the experimental design and extending beyond what is presented here to the methods used for validation.
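To make concrete what a power calculation involves, the sketch below estimates by simulation the power of a two-group comparison for a single gene. Everything here is illustrative, not from the report: the function name is invented, the effect sizes and group sizes are arbitrary, and a normal approximation stands in for a proper t-test.

```python
import random
from statistics import NormalDist, mean, stdev

def simulated_power(effect_size, n_per_group, alpha=0.05,
                    n_sims=2000, seed=0):
    """Monte Carlo power estimate for a two-group comparison of one gene,
    using a normal approximation to the two-sample t-test."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # Standard error of the difference in group means.
        se = ((stdev(control) ** 2 + stdev(treated) ** 2) / n_per_group) ** 0.5
        z = (mean(treated) - mean(control)) / se
        if abs(z) > z_crit:
            hits += 1
    return hits / n_sims

# A one-standard-deviation effect is underpowered with 5 animals per
# group; doubling the group size helps substantially.
print(simulated_power(1.0, n_per_group=5))
print(simulated_power(1.0, n_per_group=10))
```

As the text cautions, such single-gene calculations are only a rough guide for toxicogenomics, because assay variance is uncertain and thousands of interdependent genes are tested at once.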
A second lesson that has emerged is the need for carefully matched controls and randomization in any experiment. Because microarrays and other toxicogenomic technologies are extremely sensitive, they can pick up subtle variations in gene, protein, or metabolite expression that are induced by differences in how samples are collected and handled. The use of matched controls and randomization can minimize potential sources of systematic bias and improve the quality of inferences drawn from toxicogenomic datasets.

A related question in designing toxicogenomic experiments is whether samples should be pooled to improve population sampling without increasing the number of assays (Dobbin and Simon 2005; Jolly et al. 2005; Kendziorski et al. 2005). Pooling averages out variation but may also disguise biologically relevant outliers, for example, individuals sensitive to a particular toxicant. Although individual assays are valuable for gaining a more robust estimate of gene expression in the population under study, pooling can be helpful if experimental conditions limit the number of assays that can be performed. However, the relative costs and benefits of pooling should be analyzed carefully, particularly with respect to the goals of the experiment and plans for follow-up validation of results. Generally, the greatest power in any experiment is gained when as many biologically independent samples are analyzed as is feasible.

Universal guidelines cannot be specified for all toxicogenomic experiments, but careful design focused on the goals of the experiment and adequate sampling are needed to assess both the effect and the biologic variation in a system. These lessons are not unique to toxicogenomics. Inadequate experimental designs driven by cost cutting have forced many studies to sample small populations, which ultimately compromises the quality of inferences that can be drawn.
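The trade-off between pooling and individual assays can be seen in a tiny numerical example. The expression values below are invented for illustration; the point is only that the pooled average conceals the one sensitive responder that individual assays would reveal.

```python
# Hypothetical log2 fold changes for one gene in six treated animals;
# animal 5 is an unusually sensitive responder.
individual = [0.2, 0.1, -0.1, 0.0, 4.8, 0.3]

# Assaying each animal separately preserves the outlier...
outliers = [x for x in individual if abs(x) > 2.0]

# ...while a single pooled sample reports only the average response,
# which here looks like a modest induction.
pooled = sum(individual) / len(individual)

print(outliers)
print(round(pooled, 2))
```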
TYPES OF EXPERIMENTS

DNA microarray experiments can be categorized into four types: class discovery, class comparison, class prediction, and mechanistic studies. Each type addresses a different goal and uses a different experimental design and analysis. Table 3-1 summarizes the broad classes of experiments and representative examples of the data analysis tools that are useful for such analyses. These data analysis approaches are discussed in more detail below.

Class Discovery

Class discovery analysis is generally the first step in any toxicogenomic experiment because it takes an unbiased approach to looking for new classes in the data. A class discovery experiment asks "Are there unexpected, but biologically interesting, patterns in the data?" For example, one might consider an experiment in which nephrotoxic compounds are each used individually to treat rats, and gene expression data are collected from the kidneys
TABLE 3-1 Data Analysis Approaches

Application(a)                 Algorithm(b)                        Representative References(c)

Class Discovery                Hierarchical clustering             Weinstein et al. 1997; Eisen et al. 1998; Wen et al. 1998
                               k-means clustering                  Soukas et al. 2000
                               Self-organizing maps                Tamayo et al. 1999; Toronen et al. 1999; Wang et al. 2002
                               Self-organizing trees               Herrero et al. 2001
                               Relevance networks                  Butte and Kohane 1999
                               Force-directed layouts              Kim et al. 2001
                               Principal component analysis        Raychaudhuri et al. 2000

Class Comparison               t-test                              Baggerly et al. 2001
                               Significance analysis of            Tusher et al. 2001
                                 microarrays
                               Analysis of variance                Long et al. 2001

Class Prediction               k-nearest neighbors                 Theilhaber et al. 2002
                               Weighted voting                     Golub et al. 1999
                               Artificial neural networks          Ellis et al. 2002; Bloom et al. 2004
                               Discriminant analysis               Nguyen and Rocke 2002; Orr and Scherf 2002; Antoniadis et al. 2003; Le et al. 2003
                               Classification and regression       Boulesteix et al. 2003
                                 trees
                               Support vector machines             Brown et al. 2000; Ramaswamy et al. 2001

Functional and Network         EASE                                Hosack et al. 2003
Inference for Mechanistic      MAPPFinder                          Doniger et al. 2003
Analysis                       GOMiner                             Zeeberg et al. 2003
                               Cytoscape                           Shannon et al. 2003
                               Boolean networks                    Akutsu et al. 2000; Savoie et al. 2003; Soinov 2003
                               Probabilistic Boolean networks      Shmulevich et al. 2002a,b; Datta et al. 2004; Hashimoto et al. 2004
                               Bayesian networks                   Friedman et al. 2000; Imoto et al. 2003; Savoie et al. 2003; Tamada et al. 2003; Zou and Conzen 2005

(a) Application of these analytical tools is not limited to individual datasets; they can be applied across studies if the data and relevant ancillary information (for example, about treatment and phenotype) are available.
(b) A wide range of algorithms has been developed to facilitate analysis of toxicogenomic datasets. Although most approaches have been applied in the context of gene expression microarray data, the algorithms are generally applicable to any toxicogenomic data. A representative sample is presented here; many similar approaches are being developed.
(c) In general, the citations represent the first published use of a particular method or those that are most widely cited.
of these rats after they begin to experience renal failure (Amin et al. 2004). Evaluation of the gene expression data may indicate that the nephrotoxic compounds can be grouped based on the cell type affected, the mechanism responsible for renal failure, or other common factors. This analysis may also suggest a new subgroup of nephrotoxic compounds that either affects a different tissue type or represents a new toxicity mechanism.

Class discovery analyses rely on unsupervised data analysis methods (algorithms) to explore expression patterns in the data (see Box 3-1). Unsupervised data analysis methods are often among the first techniques used to analyze a microarray dataset. Unsupervised methods do not use the sample classification as input; for example, they do not consider the treatment groups to which the samples belong. They simply group samples together based on some measure of similarity. Two of the most widely used unsupervised approaches are hierarchical clustering (Weinstein et al. 1997; Eisen et al. 1998; Wen et al. 1998) and k-means clustering (Soukas et al. 2000). Other approaches have been applied to unsupervised analysis, including self-organizing maps (Tamayo et al. 1999; Toronen et al. 1999; Wang et al. 2002), self-organizing trees (Herrero et al. 2001), relevance networks (Butte and Kohane 1999), force-directed layouts (Kim et al. 2001), and principal component analysis (Raychaudhuri et al. 2000). Fundamentally, each of these methods uses some feature of the data and a rule for determining relationships to group genes (or samples) that share similar patterns of expression. In the context of disease analysis, all the methods can be extremely useful for identifying new subclasses of disease, provided that the subclasses are reproducible and can be related to other clinical data.
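As a minimal illustration of unsupervised grouping, the sketch below implements a toy k-means clustering in Python. The profiles and deterministic initialization are invented for illustration; real analyses use far larger matrices and more careful seeding.

```python
import math

def kmeans(profiles, k, n_iter=20):
    """Minimal k-means: partition expression profiles (lists of floats)
    into k clusters by Euclidean distance; returns one label per profile.
    Centers start at the first k profiles, so results are deterministic."""
    centers = [list(p) for p in profiles[:k]]
    labels = [0] * len(profiles)
    for _ in range(n_iter):
        # Assignment step: each profile joins its nearest center.
        labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in profiles]
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(profiles, labels) if lab == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Toy profiles (log ratios across three conditions): two genes rise
# together, two fall together.
profiles = [[2.1, 1.9, 2.0], [-2.0, -1.9, -2.1],
            [1.8, 2.2, 2.1], [-1.8, -2.2, -2.0]]
labels = kmeans(profiles, k=2)
print(labels)  # genes 0 and 2 share one label; genes 1 and 3 the other
```

Note that the algorithm groups by similarity alone: nothing tells it which genes were expected to co-vary, which is exactly what makes such methods useful for discovery and for detecting unexpected structure.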
All these methods will divide data into "clusters," but determining whether the clusters are meaningful requires expert input and analysis. Critical assessment of the results is essential. There are anecdotal reports of clusters being found that separate data based on the hospital where the sample was collected, the technician who ran the microarray assay, or the day of the week the array was run. Clearly, microarrays can be very sensitive. However, recent reports suggest that adhering to standard laboratory practices and carefully analyzing data can lead to high-quality, reproducible results that reflect the biology of the system (Bammler et al. 2005; Dobbin et al. 2005; Irizarry et al. 2005; Larkin et al. 2005).

In the context of toxicogenomics, class discovery methods can be applied to understand the cellular processes involved in responding to specific agents. For example, in animals exposed to a range of compounds, analysis of gene expression profiles with unsupervised "clustering" methods can be used to discover groups of genes that may be involved in cellular responses and suggest hypotheses about the modes of action of the compounds. Subsequent experiments can confirm the gene expression effects, confirm or refute the hypotheses, and identify the cell types and mode of action associated with the response. A goal of this type of research would be to build a database of gene expression profiles of sufficiently high quality to enable a gene expression profile to be used to classify compounds based on their mode of action.
Class Comparison

Class comparison experiments compare gene expression profiles of different phenotypic groups (such as treated and control groups) to discover genes and gene expression patterns that best distinguish the groups. The starting point in such an experiment is the assumption that one knows the classes represented in the data. A logical approach to data analysis is to use information about the various classes in a supervised fashion to identify those genes that distinguish the groups. One starts by assigning samples to particular biological classes based on objective criteria. For example, the data may represent samples from treatment with neurotoxic and hepatotoxic compounds. The first question would be, "Which genes best distinguish the two classes in the data?" At this stage, the goal is to find the genes that are most informative for distinguishing the samples based on class. A wide variety of statistical tools can be brought to bear on this question, including t-tests (for two classes) and analysis of variance (for three or more classes), which assign p values to genes based on their ability to distinguish among groups.

One concern with these statistical approaches is the problem of multiple testing. Simply put, in a microarray with 10,000 genes, applying a 95% confidence limit on gene selection (p ≤ 0.05) means that one would expect to find 500 genes significant by chance alone. Stringent gene selection can minimize but not eliminate this problem; consequently, one must keep in mind that the greatest value of statistical methods is that they provide a way to prioritize genes for further analysis. Other approaches are widely used, such as significance analysis of microarrays (Tusher et al. 2001), which uses an adjusted t statistic (or F statistic) modified to correct for overestimates arising from small values in the denominator, along with permutation testing to estimate the false discovery rate in any selected significant gene set. Other methods attempt to correct for multiple testing, such as the well-known Bonferroni correction, but these methods assume independence between the measurements, a constraint that is violated in gene analysis because many genes and gene products operate together in pathways and networks and so are coregulated. Further confounding robust statistical analysis of toxicogenomic studies is the "n < p problem": the number of samples analyzed is typically much smaller than the number of genes, proteins, or metabolites assayed. For these reasons, statistical analysis of the high-dimensional datasets produced by toxicogenomic technologies remains an area of active research.

As described above, class comparison analyses provide collections of genes that the data indicate are useful in distinguishing among the various experimental groups being studied. These collections of genes can be used either as a starting point for mechanistic studies or in an attempt to classify new compounds as to their mode of action.
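The multiple-testing arithmetic, and one common correction, can be sketched briefly. The Benjamini-Hochberg step-up procedure shown here controls the false discovery rate under independence-like conditions; it is a widely used alternative, not the permutation-based estimate that significance analysis of microarrays itself uses, and the p values below are invented.

```python
def benjamini_hochberg(pvalues, fdr=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of tests
    declared significant at the given false discovery rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        # A sorted p value passes if it is below its rank-scaled cutoff.
        if pvalues[i] <= rank * fdr / m:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

# With 10,000 genes tested at p <= 0.05, chance alone yields ~500 "hits".
expected_false_positives = 10_000 * 0.05
print(expected_false_positives)  # 500.0

# Hypothetical p values: three genuinely strong signals among noise.
pvals = [0.0001, 0.0004, 0.0009] + [0.2, 0.4, 0.6, 0.8] * 5
print(benjamini_hochberg(pvals, fdr=0.05))
```

Only the three small p values survive the adjustment; the noise-level tests are discarded even though none of them individually looks alarming.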
BOX 3-1 Supervised and Unsupervised Analysis

Analysis methods for toxicogenomic data can be divided into two broad classes depending on how much prior knowledge, such as information about the samples, is used.

Unsupervised methods examine the data without benefit of information about the samples. Unsupervised methods, such as hierarchical clustering, are particularly useful in class discovery studies as they group samples without prior bias and may allow new classes to be found in samples previously thought to be "identical." These methods are also useful for quality control in large experiments as they can verify similarity among related samples or identify outliers (for example, failed assays).

Supervised methods use information about the samples being analyzed to find features in the data. Most statistical approaches are supervised; once samples are assigned to groups, the data for each gene, protein, or metabolite are compared across groups to find those that can distinguish between groups. Class comparison and classification studies use supervised methods in the early stages of analysis.

Class Prediction

Class prediction experiments attempt to predict biologic effects based on the gene expression profile associated with exposure to a compound. Such an experiment asks "Can a particular pattern of gene expression be combined with a mathematical rule to predict the effects of a new compound?" The underlying assumption is that compounds eliciting similar effects will elicit similar effects on gene expression. Typically, one starts with a well-characterized set of compounds and associated phenotypes (a database of phenotype and gene expression data) and, through a careful comparison of the expression profiles, finds genes whose patterns of expression can be used to distinguish the various phenotypic groups under analysis.
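One simple mathematical rule of this kind is a nearest-neighbor vote: a new compound is assigned the class most common among the training profiles it most resembles. The sketch below is a hypothetical illustration; the profiles, class labels, and function name are invented, and real classifiers are built from far larger training sets over statistically selected gene panels.

```python
import math
from collections import Counter

def knn_predict(training, new_profile, k=3):
    """Assign a class to a new expression profile by majority vote among
    its k nearest training profiles (Euclidean distance).
    `training` is a list of (profile, label) pairs."""
    neighbors = sorted(training, key=lambda pl: math.dist(pl[0], new_profile))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training set: profiles over three informative genes for
# compounds with known modes of action.
training = [
    ([3.0, 2.8, -1.0], "neurotoxic"),
    ([2.7, 3.1, -0.8], "neurotoxic"),
    ([3.2, 2.5, -1.2], "neurotoxic"),
    ([-2.0, 0.5, 2.9], "hepatotoxic"),
    ([-1.8, 0.2, 3.1], "hepatotoxic"),
    ([-2.2, 0.4, 2.8], "hepatotoxic"),
]
print(knn_predict(training, [2.9, 2.9, -1.1]))  # near the neurotoxic profiles
```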
Class prediction approaches then attempt to use sets of informative genes (generally selected using statistical approaches in class comparison) to develop mathematical rules (or computational algorithms) that use gene expression profiling data to assign a compound to its phenotype group (class). The goal is not merely to separate the samples but to create a rule (or algorithm) that can predict phenotypic effects for new compounds based solely on gene expression profiling data.

For example, to test a new compound for possible neurotoxicity, gene expression data for that compound would be compared with gene expression data for other neurotoxic compounds in a database and a prediction would be made about the new compound's toxicity. (The accuracy of the prediction depends on the quality of the databases and datasets.)

When developing a classification approach, the mathematical rules for analyzing new samples are encoded in a classification algorithm. A wide range
of algorithms has been used for this purpose, including weighted voting (Golub et al. 1999), artificial neural networks (Ellis et al. 2002; Bloom et al. 2004), discriminant analysis (Nguyen and Rocke 2002; Orr and Scherf 2002; Antoniadis et al. 2003; Le et al. 2003), classification and regression trees (Boulesteix et al. 2003), support vector machines (Brown et al. 2000; Ramaswamy et al. 2001), and k-nearest neighbors (Theilhaber et al. 2002). Each of these uses an original set of samples, or training set, to develop a rule that applies the gene expression data (trimmed to a previously identified set of informative genes) for a new compound to place that compound into the context of the original sample set, thus identifying its class.

Functional and Network Inference for Mechanistic Analysis

Although class prediction analysis may tell us what response a particular compound is likely to produce, it does not necessarily shed light on the underlying mechanism of action. Moving from class prediction to mechanistic understanding often relies on additional work to translate toxicogenomic-based hypotheses into validated findings. Bioinformatic tools play a key role in developing those hypotheses by integrating information that can facilitate interpretation, including gene ontology terms, which describe gene products (proteins), functions, processes, and cellular locations; pathway database information; genetic mapping data; structure-activity relationships; dose-response curves; phenotypic or clinical information; genome sequence and annotation; and other published literature. Software developed to facilitate this analysis includes MAPPFinder (Doniger et al. 2003), GOMiner (Zeeberg et al. 2003), and EASE (Hosack et al. 2003), although these tools may only provide hints about possible mechanisms.
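At their core, tools such as EASE and GOMiner ask whether a functional category is overrepresented in a list of significant genes. A minimal sketch of that idea uses a one-sided hypergeometric test (EASE itself applies a jackknifed variant of Fisher's exact test); the gene counts below are invented for illustration.

```python
from math import comb

def hypergeom_enrichment_p(total_genes, category_genes,
                           selected_genes, overlap):
    """One-sided hypergeometric p value: probability of seeing at least
    `overlap` genes from a functional category of size `category_genes`
    when `selected_genes` genes are drawn at random from `total_genes`."""
    p = 0.0
    upper = min(category_genes, selected_genes)
    for x in range(overlap, upper + 1):
        p += (comb(category_genes, x)
              * comb(total_genes - category_genes, selected_genes - x)
              / comb(total_genes, selected_genes))
    return p

# Hypothetical example: 40 of a 100-gene significant list annotate to a
# 500-gene pathway on a 10,000-gene array; chance predicts only ~5.
p = hypergeom_enrichment_p(10_000, 500, 100, 40)
print(p)  # vanishingly small: the pathway is strongly overrepresented
```

A small p value flags the category as a candidate mechanism; as the text notes, such flags are hypotheses to be tested, not mechanistic conclusions.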
There is no universally accepted way to connect the expression of genes, proteins, or metabolites to functionally relevant pathways leading to particular phenotypic end points, so a good deal of user interaction and creativity is currently required. New approaches to predict networks of interacting genes based on gene expression profiles use several modeling techniques, including Boolean networks (Akutsu et al. 2000; Savoie et al. 2003; Soinov 2003), probabilistic Boolean networks (Shmulevich et al. 2002a,b; Datta et al. 2004; Hashimoto et al. 2004), and Bayesian networks (Friedman et al. 2000; Imoto et al. 2003; Savoie et al. 2003; Tamada et al. 2003; Zou and Conzen 2005). These models treat individual objects, such as genes and proteins, as "nodes" in a graph, with "edges" connecting the nodes representing their interactions. A set of rules for each edge determines the strength of the interaction and whether a particular response will be induced. These approaches have met with some success, but additional work is necessary to convert the models from descriptive to predictive. In metabolic profiling, techniques that monitor and model metabolic flux (Wiback et al. 2004; Famili et al. 2005) also may provide predictive models.

The advent of global toxicogenomic technologies, and the data they provide, offers the possibility of developing quantitative, predictive models of biologic systems. This approach, dubbed "systems biology," attempts to bring together data from many different domains, such as gene expression data and metabolic flux analysis, and to synthesize them to produce a more complete understanding of the biologic response of a cell, organ, or individual to a particular stimulus and to create predictive biomathematical models. Whereas toxicogenomic data are valuable even when not used in a systems biology mode, achieving this systems-level understanding of organismal response and its relationship to the development of a particular phenotype is a long-term goal of toxicogenomics and other fields. The best efforts to date have allowed the prediction of networks of potentially interacting genes. However, these network models, while possibly predictive, lack the complexity of the biochemical or signal transduction pathways that mediate cellular responses. Attempts to model metabolic flux, even in simpler organisms such as yeast and bacteria, provide only rough approximations of system function, and then only under carefully controlled conditions. However, significant progress in the ability to model complex systems is likely, and additional toxicogenomic research will continue to benefit from and help advance systems biology approaches and their applications.

TOXICOGENOMICS AND BIOMARKER DISCOVERY

An opinion paper by Bailey and Ulrich (2004) outlined the use of microarrays and related technologies for identifying new biomarkers; see Box 3-2 for definitions. Within the drug industry, there is an acute need for effective biomarkers that predict adverse events earlier than otherwise could be done in every phase of drug development, from discovery through clinical trials, including a need for noninvasive biomarkers for clinical monitoring.
There is a widespread expectation that, with toxicogenomics, biomarker discovery for assessing toxicity will advance at an accelerated rate. Each transcriptional "fingerprint" reflects a cumulative response representing complex interactions within the organism that include pharmacologic and toxicologic effects. If these interactions can be significantly correlated with an end point, and shown to be reproducible, the molecular fingerprint potentially can be qualified as a predictive biomarker. Several review articles explore issues related to biomarker assay development and provide examples of the biomarker development process (Wagner 2002; Colburn 2003; Frank and Hargreaves 2003).

The utility of gene expression-based biomarkers was clearly illustrated by van Leeuwen and colleagues' 1986 identification of putative transcriptional biomarkers for early effects of smoking using peripheral blood cell profiling (van Leeuwen et al. 1986). Kim and coworkers also demonstrated a putative transcriptional biomarker in lymphoma cells that can identify genotoxic effects but not carcinogenesis, but noted that the single marker presented no clear advantage over existing in vitro or in vivo assays (Kim et al. 2005). Sawada et al. discovered a putative transcriptional biomarker predicting phospholipidosis in the HepG2 cell line, but they too saw no clear advantage over existing assays (Sawada et al. 2005). In 2004, a consortium effort based at the International Life Sciences Institute's Health and Environmental Sciences Institute identified putative gene-based markers of renal injury and toxicity (Amin et al. 2004). As has been the case for transcriptional markers, protein-based expression assays have also shown their value as predictive biomarkers. For example, Searfoss and coworkers used a toxicogenomic approach to identify a protein biomarker for intestinal toxicity (Searfoss et al. 2003).

Exposure biomarker examples also exist. Koskinen and coworkers developed an interesting model system in rainbow trout, using trout gene expression microarrays to develop biomarkers for assessing the presence of environmental contaminants (Koskinen et al. 2004). Gray and colleagues used gene expression in a mouse hepatocyte cell line to identify the presence of aromatic hydrocarbon receptor ligands in an environmental sample (Gray et al. 2003).

BOX 3-2 Defining Biomarkers

Throughout this chapter, a wide range of applications of gene expression microarray and other toxicogenomic technologies has been discussed. Many of the most promising applications involve using gene, protein, or metabolic expression profiles as diagnostic or prognostic indicators and refer to them as biomarkers. However, use of this term has been rather imprecise, in part because the term has developed a broad range of interpretations and associations with detection of a range of measurable end points.

To resolve some of the potential confusion about the term's use, the National Institutes of Health (NIH) formed a committee to provide working definitions for specific terms and a conceptual model of how biomarkers could be used (BDW Group 2001).
According to the NIH Initiative on Biomarkers and Surrogate Endpoints, a biologic marker (biomarker) is defined as follows: "A characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." A biomarker is distinguished from a clinical end point, which is defined as "a characteristic or variable that reflects how a patient feels, functions, or survives," and from a surrogate end point, which is defined as "a biomarker that is intended to substitute for a clinical end point. A surrogate end point is expected to predict clinical benefit (or harm or lack of benefit or harm) based on epidemiologic, therapeutic, pathophysiologic, or other scientific evidence."

In the terminology used in this report, this NIH definition is consistent with a "biomarker of effect," whereas the phrase "biomarker of exposure" is more consistent with the following definition from the National Research Council's 2006 report Human Biomonitoring for Environmental Chemicals: "A biomarker of exposure is a chemical, its metabolite, or the product of an interaction between a chemical and some target molecule or cell that is measured in humans" (NRC 2006d, p. 4).
Ultimately, toxic response is likely to be mediated by changes at various levels of biologic organization: gene expression, protein expression, and altered metabolic profiles. Whereas most work to date has focused on developing biomarkers based on the output of single toxicogenomic technologies (for example, transcriptomics, proteomics, or metabolomics), an integrated approach using multiple technologies provides the opportunity to develop multidomain biomarkers that are more highly predictive than those derived from any single technology. Further, existing predictive phenotypic (and genotypic) measures should not be ignored in deriving biomarkers.

Finally, particular attention must be paid to developing toxicogenomic-based biomarkers, especially those that are not tied mechanistically to a particular end point. In 2001, Pepe and colleagues outlined the stages of cancer biomarker development (Pepe et al. 2001) (see Table 3-2), suggesting that a substantial effort involving large populations would be required to fully validate a new biomarker for widespread clinical application. The Netherlands breast cancer study discussed in Chapter 9 (validation) is an example of a biomarker that has reached Phase 4, with a prospective analysis in which 6,000 women will be recruited and screened at an estimated cost of $54 million (Bogaerts et al. 2006; Buyse et al. 2006). Most toxicogenomic studies have reached only Phase 1 or Phase 2, and significant additional work and funding are necessary if toxicogenomic biomarkers are to achieve the same level of validation.

CONCLUSIONS

This chapter focused largely on questions of experimental design and the associated analytical approaches that can be used to draw biologic inferences. The published examples have largely drawn on individual studies in which datasets have been analyzed in isolation.
However, any of these methods can be applied more generally to larger collections of data than those in individual studies, provided that the primary data and the information needed to interpret them are available.

Clearly, a carefully designed database containing toxicogenomic data along with other information (such as structure-activity relationships and information about dose-response and phenotypic outcome for exposure) would allow many of the unanswered questions about the applicability of genomic technologies to toxicology to be addressed. In fact, a more extensive analysis would allow scientists to more fully address questions about reproducibility, reliability, generalizability, population effects, and potential experimental biases that might exist and that would drive the development of standards and new analytical methods.

A distinction must be drawn between datasets and a database. A database compiles individual datasets and provides a structure for storing the data that captures various relationships between elements, and it facilitates our ability to
TABLE 3-2 Phases of Cancer Biomarker Development as Defined by Pepe et al. (2001)

Phase 1  Preclinical exploratory: promising directions identified
Phase 2  Clinical assay and validation: clinical assay detects established disease
Phase 3  Retrospective longitudinal: biomarker detects disease before it becomes clinical, and a "screen-positive" rule is defined
Phase 4  Prospective screening: extent and characteristics of disease detected by the test and the false referral rate are identified
Phase 5  Cancer control: impact of screening on reducing the burden of disease on the population is quantified

investigate associations among various elements. Such a database must go beyond individual measurements and provide information about, for example, how the individual experiments are designed, the chemical properties of the individual compound tested, the phenotypes that result, and the genetic background of the animals profiled. Many considerations must go into designing such a database and populating it with relevant data; a more detailed discussion is provided in Chapter 10. However, creating such a database that captures relevant information would allow more extensive data mining and exploration and would provide opportunities currently not available. Making full use of such a database would also require a commitment to develop new analytical methods and to develop software tools to make these analytical methods available to the research and regulatory communities.

Although assembling a central toxicogenomic database would be a massive undertaking, creating such a resource, with a focus not only on data production but also on delivery of protocols, databases, software, and other tools to the community, should serve as a catalyst to encourage others to contribute to building a more comprehensive database.
Mechanisms that would facilitate the growth of such a database with data from academic and industrial partners should be investigated. When possible and feasible, attention should be paid to integrating the development of such a database and related standards with the work of parallel efforts such as caBIG (NCI 2006d) at the National Cancer Institute. The success of any toxicogenomic enterprise depends on data and information, and the National Institute of Environmental Health Sciences and other federal agencies must invest in producing those resources and providing them to the research community.

RECOMMENDATIONS

1. Develop specialized bioinformatics, statistical, and computational tools to analyze toxicogenomic data. This will require a significant body of carefully collected, controlled data, suggesting the creation of a national data resource
open to the research community. Specific tools that are needed include the following:

   a. Algorithms that facilitate accurate identification of orthologous genes and proteins in species used in toxicologic research,
   b. Tools to integrate data across multiple analytical platforms (for example, gene sequences, transcriptomics, proteomics, and metabolomics), and
   c. Computational models to enable the study of network responses and systems-level analyses of toxic responses.

2. Continue to improve genome annotation for all relevant species and elucidation of orthologous genes and pathways.

3. Emphasize the development of standards to ensure data quality and to assist in validation.