Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment

1
Introduction

Scientists, regulators, and the public all desire efficient and accurate approaches to assess the toxicologic effects of chemical, physical, and biologic agents on living systems. Yet, no single approach exists to analyze toxicologic responses, a difficult task given the complexity of human and animal physiology and individual variations. The genomic knowledge and new technologies that have emerged in the post-genomic era promise to inform the understanding of many risks as well as enlighten current approaches and lead to novel predictive approaches for studying disease risk. As biologic knowledge progresses with the science of toxicology, "toxicogenomics" (see Box 1-1 for definition) has the potential to improve risk assessment and hazard screening.

BACKGROUND

Approaches to Assessing Toxicity

Detection of toxicity requires a means to observe (or measure) specific effects of exposures. Toxicology traditionally has focused on phenotypic changes in an organism that result from exposure to chemical, physical, or biologic agents. Such changes range from reversible effects, such as transient skin reactions, to chronic diseases, such as cancer, to the extreme end point of death. Typical whole-animal toxicology studies may range from single-dose acute to chronic lifetime exposures, and (after assessment of absorption, distribution, metabolism, and excretion properties) they include assessments of end points such as clinical signs of toxicity, body and organ weight changes, clinical chemistry, and histopathologic responses.
BOX 1-1
Toxicogenomics Definition

Toxicogenomics: In this report, toxicogenomics is defined as the application of genomic technologies (for example, genetics, genome sequence analysis, gene expression profiling, proteomics, metabolomics, and related approaches) to study the adverse effects of environmental and pharmaceutical chemicals on human health and the environment. Toxicogenomics combines toxicology with information-dense[1] genomic technologies to integrate toxicant-specific alterations in gene, protein, and metabolite expression patterns with phenotypic[2] responses of cells, tissues, and organisms. Toxicogenomics can provide insight into gene-environment interactions and the response of biologic pathways and networks to perturbations. Toxicogenomics may lead to information that is more discriminating, predictive, and sensitive than that currently used to evaluate toxic exposure or predict effects on human health.

Toxicology studies generally use multiple doses that span the expected range from where no effects would be observed to where clinical or histopathologic changes would be evident. The highest dose at which no overt toxicity occurs in a 90-day study (the maximum tolerated dose) is generally used to establish animal dosing levels for chronic assays that provide insight into potential latent effects, including cancer, reproductive or developmental toxicity, and immunotoxicity. These studies constitute the mainstays of toxicologic practice.[3] In addition to animal studies, efforts to identify and understand the effects of environmental chemicals, drugs, and other agents on human populations have used epidemiologic studies to examine the relationship between dose and response to exposures.
In contrast to animal studies, in which exposures are experimentally controlled, epidemiologic studies describe exposure with an estimate of error, and they assess the relationship between exposure and disease distribution in human populations. These studies operate under the assumption that many years of chemical exposures or simple passage of time may be required before disease expression can be detected.

[1] Toxicogenomic approaches are often referred to as "high-throughput," a term that can refer to the density of information or the ability to analyze many subjects (or compounds) in a short time. Although the toxicogenomic techniques described here create information that is highly dense, the techniques do not all offer the ability to analyze many subjects or compounds at one time. Therefore, the term high-throughput is not used except in reference to gene sequencing technologies.

[2] Relating to "the observable properties of an organism that are produced by the interaction of the genotype and the environment" (Merriam-Webster's Online Dictionary, 10th Edition).

[3] For more information, see the National Research Council report Toxicity Testing for Assessment of Environmental Agents: Interim Report (NRC 2006a).
As medical science has progressed, so have the tools used to assess animal toxicity. For example, more sensitive diagnostic and monitoring tools have been used to assess organ function, including tools to detect altered heart rhythms, brain activity, and changes in hormone levels as well as to analyze changes visible by electron microscopy. Most notable, however, are the contributions of chemistry, cell and molecular biology, and genetics in detecting adverse effects and identifying cellular and molecular targets of toxicants. It is now possible to observe potential adverse effects on molecules, subcellular structures, and organelles before they manifest at the organismal level. This ability has enhanced etiologic understanding of toxicity and made it possible to assess the relevance of molecular changes to toxicity. These molecular and cellular changes have been assessed in studies of animals but have also been applied to study human populations ("molecular epidemiology") with some success. For example, our understanding of gene-environment interactions has benefited greatly from studies of lung, head, and neck cancer among tobacco users—studies that examined differences in genes (polymorphisms) that are related to carcinogen metabolism and DNA repair. Similarly, studies of UV sunlight exposure and human differences in DNA repair genes have clarified gene-environment interactions in skin cancer risk. Current technology now enables the roles of multiple genes in cell signaling pathways to be examined in human population studies aimed at assessing the interplay between environmental exposures and cancer risk.
Although current practice in toxicology continues to strongly emphasize changes observable at the level of the whole organism as well as at the level of the organ, the use of cellular and molecular end points sets the stage for applying toxicogenomic technologies to a more robust examination of how complex molecular and cellular systems contribute to the expression of toxicity.

Predictive Toxicology

Predictive toxicology describes the study of how toxic effects observed in humans or model systems can be used to predict pathogenesis, assess risk, and prevent human disease. Predictive toxicology includes, but is not limited to, risk assessment, the practical facilitation of decision making with scientific information. Many of the concepts described in this report relate to approaches to risk assessment; key risk assessment concepts are reviewed in Appendix C. Typical information gaps and inconsistencies that limit conventional risk assessment are listed in Box 1-2. These gaps and inconsistencies present opportunities for toxicogenomics to provide useful information. Although toxicogenomics includes effects on wildlife and other environmental effects, this report is limited to a discussion of toxicogenomics as it applies to the study of human health.
BOX 1-2
Typical Information Gaps and Inconsistencies That Limit Conventional Risk Assessment (Modified from NRC 2005a)

Lack of sufficient screening data—basic short-term in vitro or animal-bioassay data on toxicity or carcinogenicity of the compound.
Lack of information or inconsistent information about effects on humans—epidemiologic studies.
Paucity of accurate information on human exposure levels.
Relevance of animal data to humans—quantitative or qualitative.
Paucity of information on the relationship between dose and response, especially at low doses relevant to human environmental exposures.
Inconsistent animal-bioassay data on different species—differential responses in varied animal test models.
Paucity of information or inconsistencies in data on different exposures, particularly exposure during development and early-life exposures or by varied routes of exposure (inhalation, diet, drinking water).
Lack of data on impacts of coexposures to other chemicals—current risk assessment practices are uncertain about "how to add" coexposures to the many and varied chemicals present in real-world environments.
Paucity of data on the impact of human variability on susceptibility, including age, gender, race, disease, and other confounders.

Overview of Toxicogenomic Technologies

Toxicogenomic technologies comprise several different technology platforms for analysis of genomes, transcripts, proteins, and metabolites. These technologies are described briefly here and in more detail in Chapter 2. It is important to recognize two additional issues associated with the use of toxicogenomic technologies. First, a single experiment can generate a far larger quantity of information, and far more comprehensive information, than traditional experiments can.
Second, advances in computing power and techniques enable these large amounts of information to be synthesized from different sources and experiments and to be analyzed in novel ways.

Genomic technologies encompass both genome-sequencing technologies, which derive DNA sequences from genes and other regions of DNA, and genotype analysis, which detects sequence variations in individual genes between individuals. Whereas the sequencing of genomes was once an extraordinary undertaking, the rapid evolution of sequencing technology has dramatically increased throughput and decreased cost, now outperforming the benchmark technology standard used for the Human Genome Project. The convergence of genome-sequencing and genotyping technologies will eventually enable whole-genome sequences of individuals to be analyzed. Advances in genotyping technologies
(see Chapter 2) allow the simultaneous assessment of multiple variants across the whole genome in large populations rather than just one or a few gene polymorphisms.

Transcriptomic technologies (or gene expression profiling) measure mRNA expression in a highly parallel assay system, usually using microarrays. As the first widely available method for global analysis of gene expression, DNA microarrays are the emblematic technology of the post-genomic era. Microarray technology for transcriptomics has enabled the analysis of complex, multigene systems and their responses to environmental perturbations.

Proteomics is the study of collections of proteins in living systems. Because the same proteins may exist in multiple modified and variant forms, proteomes are more complex than the genomes and transcriptomes that encode them. Proteomic technologies use mass spectrometry (MS) and microarray technologies to resolve and identify the components of complex protein mixtures, to identify and map protein modifications, to characterize protein functional associations, and to compare proteomic changes quantitatively in different biologic states.

Metabolomics is the study of the small-molecule components of biologic systems, which are the products of metabolic processes. Because metabolites reflect the activities of RNAs, proteins, and the genes that encode them, metabolomics allows for functional assessment of diseases and of drug and chemical toxicity. Metabolomics technologies, employing nuclear magnetic resonance spectroscopy and MS, are directed at simultaneously measuring dozens to thousands of compounds in biofluids (for example, urine) or in cell or tissue extracts. A key strength of metabolomic approaches is that they can be used to noninvasively and repeatedly measure changes in living tissues and living animals and that they measure changes in the actual metabolic flow.
As with proteomics, the major limitation of metabolomics is the difficulty of comprehensively measuring diverse metabolites in complex biologic systems.

Bioinformatics is a branch of computational biology focused on applying advanced computational techniques to the collection, management, and analysis of numerical biologic data. Elements of bioinformatics are essential to the practice of all genomic technologies. Bioinformatics also encompasses the integration of data across genomic technologies, the integration of genomic data with data from other observations and measurements, and the integration of all these data in databases and related information resources. It is helpful to think of bioinformatics not as a separate discipline but as the universal means of analyzing and integrating information in biology.

Policy Context

Regulatory agencies with the biggest stake in predictive toxicology include the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), and the Food and Drug Administration (FDA). The EPA and OSHA are concerned
with potentially toxic exposures in the community and in the workplace. The mission of the EPA is to protect human health and the environment and to safeguard the nation's air, water, and land. OSHA's mission is to ensure the safety and health of America's workers by setting and enforcing standards; providing training, outreach, and education; establishing partnerships; and encouraging continual improvement in workplace safety and health. The FDA is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biologic products, medical devices, the nation's food supply, cosmetics, and products that emit radiation. The FDA is also responsible for advancing public health by facilitating innovations that make medicines and foods more effective, safer, and more affordable. Finally, the FDA is responsible for helping the public receive the accurate, science-based information it needs to use these regulated products to improve health. Working in parallel with these agencies and providing additional scientific underpinning to regulatory agency efforts is the Department of Health and Human Services (DHHS), National Institutes of Health (NIH).
The NIH (2007) mission is "science in pursuit of fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to extend healthy life and reduce the burdens of illness and disability," including research on "causes, diagnosis, prevention, and cure of human disease." The NIH National Institute of Environmental Health Sciences (NIEHS) strives to use environmental sciences to understand human disease and improve human health, including "how environmental exposures fundamentally alter human biology" and why some people develop disease in response to toxicant exposure and others do not.[4]

In sum, NIH, regulatory agencies, the chemical and pharmaceutical industries, health professionals, attorneys, the media, and the general public are all interested in knowing how new genomic technologies developed in the aftermath of the Human Genome Project can improve our understanding of toxicity and ultimately protect public health and the environment. Although the FDA and the EPA have developed planning documents on toxicogenomic policies (see Chapter 9), specific policies have not yet emerged, and it is clear that stakeholders are grappling with similar questions:

Where is the science of toxicogenomics going?
What are the potential benefits and drawbacks of using toxicogenomic information for regulatory agencies, industry, and the public?
What are the challenges in implementing toxicogenomic technologies, collecting and using the data, and communicating the results?
Can genomic technologies predict health effects?
How will government agencies, industry, academics, and others know when a particular technology is ready to be used for regulatory purposes?

[4] See http://www.niehs.nih.gov/od/fromdir.htm (accessed April 2, 2007).
Will regulatory requirements have to be changed to reap the benefits of the new technologies to protect public health and the environment?

COMMITTEE CHARGE AND RESPONSE

In April 2004, NIEHS asked the National Academies to direct its investigative arm, the National Research Council (NRC), to examine the impact of toxicogenomic technologies on predictive toxicology (see Box 1-3 for the complete statement of task). In response, the NRC formed the Committee on Applications of Toxicogenomic Technologies to Predictive Toxicology, a panel of 16 members that included experts in toxicology, molecular and cellular biology, epidemiology, law and ethics, bioinformatics (including database development and maintenance), statistics, public health, risk communication, and risk assessment (see Appendix A for committee details). The committee held two public meetings in Washington, DC, to collect information, meet with researchers and decision makers, and accept testimony from the public. The committee met five additional times, in executive session, to deliberate on findings and complete its report. The remaining chapters of this report constitute the findings of the NRC committee.

The committee approached its charge by focusing on potential uses of toxicogenomics often discussed in the broad toxicology community, with a focus on human health issues and not environmental impact. The committee determined that the applications described in Chapters 4-8 capture much of the often-cited potential value of toxicogenomics. After identifying potential toxicogenomic applications, the committee searched the scientific literature for case examples that demonstrate useful implementations of toxicogenomic technologies in these areas. This report is not intended to be a compendium of all studies that used toxicogenomic technologies and does not attempt to highlight the full range of papers published.
Peer-reviewed and published papers in the public domain were selected to illustrate applications the committee identified as worthy of consideration. For example, Box 1-4 contains brief summaries of selected studies in which toxicogenomic technologies have shown promise in predictive toxicology. New studies using toxicogenomic technologies are published almost daily. Likewise, approaches to analyzing and interpreting data are rapidly evolving, resulting in changes in attitudes toward various approaches.[5] Even while this report was being prepared, such changes were observed, and therefore the committee has attempted to provide a snapshot of this rapidly evolving field.

This report is the product of the efforts of the entire NRC committee. The report underwent extensive, independent external review overseen by the NRC Report Review Committee. It specifically addresses, and is limited to, the statement of task as agreed upon by the NRC and the DHHS.

This report consists of chapters on existing or potential "applications" and chapters that deal with broader issues. The technologies encompassed in toxicogenomics are described in Chapter 2. This is followed by a discussion of experimental design and data analysis in Chapter 3. Chapter 4 discusses the applications of toxicogenomic technologies to assess exposure: "Can toxicogenomic technologies determine whether an individual has been exposed to a substance and, if so, to how much?" Chapter 5 asks "Can toxicogenomic data be used to detect potential toxicity of an unknown compound quickly, reliably, and at a reasonable cost?" Chapter 6 addresses the assessment of individual variability in humans. The question in this context is "Can toxicogenomic technologies detect variability in response to exposures and provide a means to explain variability between individuals?" Chapter 7 addresses the question "What can toxicogenomic technologies teach us about the mechanisms by which toxicants produce adverse effects in biologic systems?" Considerations for risk assessment not covered in these four application chapters are discussed in Chapter 8. Chapter 9 focuses on validation issues that are relevant to most of the applications. In Chapter 10, sample and data collection and analysis are discussed, as are database needs. The ethical, legal, and social issues raised by use of toxicogenomics are considered in Chapter 11. Finally, Chapter 12 summarizes the recommendations from the other chapters and identifies several overarching recommendations.

[5] The value of platforms and software is also evolving rapidly, and any discussion of platforms or software should not be considered an endorsement by the committee.

BOX 1-3
Statement of Task

A committee of the NRC will examine the impact of "toxicogenomic" technologies on predictive toxicology.
These approaches include studying gene and protein activity and other biologic processes to begin to characterize toxic substances and their potential risks. For the promise of these technologies to be realized, significant challenges must be recognized and addressed. This study will provide a broad overview for the public, senior government policy makers, and other interested and involved parties of the benefits potentially arising from these technologies, identify the challenges to achieving them, and suggest approaches and incentives that may be used to address the challenges. Potential scientific benefits might include identifying susceptible populations and mechanisms of action and making better use of animal toxicity testing. Potential challenges might include scientific issues such as correlating gene expression with adverse effects; conflicting, nonexistent, or inadequate regulatory requirements; legal, social, and ethical issues; coordination between regulators and the regulated communities; organizational infrastructure for handling large volumes of data, new analytic tools, and innovative ways to synthesize and interpret results; communication with appropriate audiences about scientific and nonscientific information; and the need for scientific standards in conducting and interpreting toxicogenomic experiments. This study will highlight major new or anticipated uses of these technologies and identify the challenges and possible solutions to implementing them to improve the protection of public health and the environment.

BOX 1-4
Selected Examples of the Use of Toxicogenomic Technologies in Predictive Toxicology

Predictive toxicology is predicated on the hypothesis that similar treatments leading to the same end point will share comparable changes in gene expression. Examples where toxicogenomic technologies have shown such promise in predictive toxicology are presented below. Explanations of the technologies, methodologies, and concepts are presented in greater detail in later chapters of the report.

Steiner et al. (2004) evaluated different classes of toxicants by transcript profiling in male rats treated with various model compounds or the appropriate vehicle controls. The results of this study demonstrated the feasibility of compound classification based on gene expression profile data. Most of the compounds evaluated were either well-known hepatotoxicants or showed hepatotoxicity during preclinical testing. These compounds included acetaminophen, amiodarone, aflatoxin B1, carbon tetrachloride, coumarin, hydrazine, 1,2-dichlorobenzene, and 18 others. The aim was to determine whether biologic samples from rats treated with these various compounds could be classified based on gene expression profiles. Hepatic gene expression profiles were analyzed using a supervised learning method (support vector machines [SVM]) to generate classification rules, combined with recursive feature elimination to identify a compact set of probe sets with potential use as biomarkers.
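The general strategy described above (a linear SVM classifier combined with recursive feature elimination) can be sketched in a few lines. The example below is a minimal illustration using synthetic expression data and scikit-learn defaults; it is not the authors' actual pipeline, data, or parameter choices.

```python
# Hedged sketch: classify synthetic "expression profiles" as treated vs.
# control with a linear SVM, then use recursive feature elimination (RFE)
# to shrink the classifier to a compact candidate biomarker panel.
# All data here are simulated for illustration only.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_genes, n_informative = 60, 500, 25

# Simulated data: 500 "genes" per sample; the first 25 genes shift
# upward in the treated class, the remaining 475 are pure noise.
X = rng.normal(size=(n_samples, n_genes))
y = np.repeat([0, 1], n_samples // 2)  # 0 = vehicle control, 1 = treated
X[y == 1, :n_informative] += 1.5

svm = SVC(kernel="linear")
print("all genes, 5-fold CV accuracy:",
      cross_val_score(svm, X, y, cv=5).mean())

# RFE repeatedly drops the genes with the smallest SVM weights,
# leaving a compact set of candidate marker genes.
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=20, step=0.1)
rfe.fit(X, y)
panel = np.flatnonzero(rfe.support_)
print("selected genes:", panel)
print("panel 5-fold CV accuracy:",
      cross_val_score(svm, X[:, panel], y, cv=5).mean())
```

In practice the panel would be selected within each cross-validation fold (not once on all data, as in this simplified sketch) to avoid optimistic bias, a point the report returns to in its discussion of validation.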
In all studies, more than 150 genes were expressed above background and showed at least a 2-fold modulation with a p value of <0.05 (two-tailed, unpaired t test). The predictive models were able to discriminate between hepatotoxic and non-hepatotoxic compounds. Furthermore, they predicted the correct class of hepatotoxicant in most cases. As described in this report (Chapter 5), the predictive model produced virtually no false-positive outcomes but at the cost of some false-negative results—amiodarone, glibenclamide, and chlorpromazine were not recognized as toxic. This example is also important in that it shows that a predictive model based on transcript profiles from the Wistar rat strain can successfully classify profiles from another rat strain (Sprague-Dawley) for the peroxisome proliferator WY14643. In addition, the model identified non-responding animals (those treated with a toxicant but not exhibiting conventional toxicologic effects).

Ruepp et al. (2002) performed both genomic and proteomic analyses of acetaminophen toxicity in mouse liver. Acetaminophen overdose causes severe
centrilobular hepatic necrosis in humans and in experimental animals. In this case, the researchers explored the mechanism of toxicity by administering subtoxic and toxic doses to overnight-fasted mice. Animals were sacrificed at different time points from 15 minutes to 4 hours postinjection. Liver toxicity was assessed by plasma ALT activity (a liver enzyme marker) and by electron microscopy. Genomic expression analysis was performed using RT-PCR, and proteomic analysis was performed on liver mitochondrial subfractions using a quantitative fluorescent 2D-DIGE method. The results showed acute induction of Kupffer cell-derived GM-CSF mRNA (GM-CSF is a granulocyte-specific gene) at both doses, and the chaperone proteins Hsp10 and Hsp60 decreased in mitochondria at both doses, most likely by leaking into the cytoplasm. All of these perturbations occurred before morphologic changes. Other genomic studies of acetaminophen have shown that its hepatotoxicity can be reproduced.

Liver diseases that induce nonuniform lesions often give rise to greatly varying histopathology results in needle biopsy samples from the same patient. Heinloth et al. (2007) examined whether gene expression analysis of such biopsies could provide a more representative picture, utilizing acetaminophen as a model hepatotoxicant that gives a multifocal pattern of necrosis following toxic doses. Rats were treated with a single toxic or subtoxic dose of acetaminophen and sacrificed 6, 24, or 48 hours after exposure. Left liver lobes were harvested, and both gene expression and histopathologic analyses were performed on the same biopsy-sized samples. Although histopathologic evaluation of such small samples revealed significant sample-to-sample differences after toxic doses of acetaminophen, gene expression analysis provided a very homogeneous picture and allowed clear distinction between subtoxic and toxic doses.
The results show that genomic analysis of biopsy samples, together with histopathologic analyses, could provide a more precise representation of the overall condition of a patient's liver than histopathologic evaluation alone.

Haugen et al. (2004) examined arsenic-response networks in Saccharomyces cerevisiae by employing global gene expression and sensitivity phenotype data in a metabolic network composed of all known biochemical reactions in yeast, as well as the yeast network of 20,985 protein-protein/protein-DNA interactions. Arsenic is a nonmutagenic carcinogen to which millions of people are exposed. The phenotypic-profiling data were mapped onto the metabolic network. The two significant metabolic networks unveiled were shikimate biosynthesis and serine, threonine, and glutamate biosynthesis. Transcriptional profiling of specific deletion strains confirmed that several transcription factors strongly mediate the cell's adaptation to arsenic-induced stress. By integrating phenotypic and transcriptional profiling and mapping the data onto the metabolic and regulatory networks, the researchers demonstrated that arsenic is likely to channel sulfur into glutathione for detoxification; this leads to indirect oxidative stress by depleting glutathione pools and alters protein turnover via arsenation of sulfhydryl groups on proteins. As described by Haugen et al., "Our data show that many of the most sensitive genes … are involved in serine and threonine metabolism, glutamate, aspartate and arginine metabolism, or shikimate metabolism, which are pathways upstream of the differentially expressed sulfur, methionine and homocysteine metabolic pathways,
respectively. These downstream pathways are important for the conversion to glutathione, necessary for the cell's defense from arsenic…. This overlap of sensitive upstream pathways and differentially expressed downstream pathways provides the link between transcriptional and phenotypic profiling data."
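The pathway-mapping step in network studies of this kind is commonly an over-representation test: given a list of responsive genes, ask which pathways contain more of them than chance alone would predict, using the hypergeometric distribution. The sketch below uses invented gene names and pathway memberships purely for illustration; it is not the yeast annotation or the data used in the study above.

```python
# Hedged sketch of pathway over-representation analysis: which pathways
# are enriched for "arsenic-sensitive" genes? Gene names and pathway
# memberships here are hypothetical placeholders, not real annotations.
from scipy.stats import hypergeom

genome = {f"gene{i}" for i in range(6000)}  # background gene universe
pathways = {
    "shikimate biosynthesis": {f"gene{i}" for i in range(0, 15)},
    "serine/threonine metabolism": {f"gene{i}" for i in range(15, 40)},
    "unrelated pathway": {f"gene{i}" for i in range(40, 80)},
}
# Hypothetical hit list: genes whose deletion conferred sensitivity.
hits = ({f"gene{i}" for i in range(0, 12)}
        | {f"gene{i}" for i in range(15, 25)}
        | {"gene100", "gene200"})

N, n = len(genome), len(hits)  # universe size, number of hits drawn
for name, members in pathways.items():
    K = len(members)           # pathway size in the universe
    k = len(members & hits)    # pathway members among the hits
    # P(X >= k) for X ~ Hypergeom(N, K, n): survival function at k - 1
    p = hypergeom.sf(k - 1, N, K, n)
    print(f"{name}: {k}/{K} hits, p = {p:.2e}")
```

A full analysis would correct these p values for multiple testing (for example, Benjamini-Hochberg) before declaring pathways significant, since many pathways are tested at once.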