Alternative Approaches and Emerging Technologies
There is a coordinated international effort to develop alternatives to animals for toxicity testing of environmental agents. Numerous methods have already been developed and validated to reduce, replace, or refine animal testing, and many more are under development in the United States and Europe. The effort to reduce animal use has generated some additional benefits. For example, some nonanimal methods provide useful mechanistic information that can offer insight into the likely human relevance of observed findings or may offer the ability to predict patterns of toxicity. Furthermore, some approaches that use alternative nonmammalian species allow testing of much larger numbers of organisms, thereby increasing statistical power for evaluating dose-response relationships at the low end of the curve.
This chapter first reviews approaches specifically focused on alternatives to animal testing that reduce, replace, or refine animal use. The second part discusses some new toxicity-testing approaches (-omics technologies and computational toxicology) that may have longer-term potential for achieving greater depth, breadth, animal welfare, and conservation in toxicity testing. The chapter concludes with a discussion of validation to emphasize the importance of evaluating new toxicity-testing methods to ensure that the information obtained from them is at least as good as, if not better than, that obtained from conventional mammalian models. Validation, as defined in this chapter, is a formal process that grew out of the experience of the European Centre for the Validation of Alternative Methods (ECVAM), the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and others in evaluating the performance of new tests. The important point for this report is that validation is now seen as a formal, although flexible, process that new tests must satisfy to be accepted by regulators. The details of validation exercises may vary as one shifts from in vitro and in vivo tests to -omics and computational-toxicology techniques.
ALTERNATIVES TO CURRENT ANIMAL-TESTING APPROACHES
One of the tensions in designing new chemical-testing strategies is between reducing animal use and suffering on the one hand and meeting regulatory needs for more information on a wider array of chemicals, or more detailed information on a smaller group of chemicals, on the other. Russell and Burch (1992) provided a framework for addressing that tension. They proposed that scientists pursue techniques and approaches that follow the Three Rs, namely, methods that can replace or reduce animal use in specific procedures or refine animal use to eliminate or decrease animal suffering. Replacement, reduction, and refinement have also come to be known as alternative methods.
First proposed in 1959, the Three Rs approach (3Rs) advanced in the 1980s when cosmetics and consumer-product companies began to invest millions of dollars in alternative methods in response to consumer pressure (Stephens et al. 2001). During that same decade, national governments incorporated the Three Rs approach into their animal-protection legislation and in some cases began to fund research on and development of alternatives, academic centers devoted to alternatives began to be established, the field of in vitro toxicology blossomed, and companies began to market alternative test kits. In the 1990s, government centers devoted to the validation and regulatory acceptance of alternative methods were established in Europe and the United States, alternative tests began to be formally approved and accepted by regulatory agencies, and the triennial World Congresses on Alternatives were inaugurated. There is evidence that, owing in part to the implementation of Three Rs approaches, use of laboratory animals in research and testing in the United States decreased by about 30% in the decade after the establishment of ICCVAM in 1997, which marked the beginning of widespread efforts to implement the Three Rs. In the 21st century, as acceptance and implementation of the Three Rs approach continue to spread, a major challenge in advancing the approach is to harness the potential of new technologies, including -omics, to replace, reduce, and refine animal use.
The following sections explore in more detail the refinement, reduction, and replacement alternatives. The replacement of commonly used laboratory animals with less sentient animal species is addressed specifically.
Refinement Alternatives
Refinement alternatives are changes in existing practices that either decrease animal pain and distress or increase animal welfare. Refinements are best practices, namely, ways of carrying out animal-based procedures that ensure the best practical outcomes with respect to both animal welfare and science. The principle of refinement can be applied to any aspect of laboratory animal care and use—including anesthesia, analgesia, supportive veterinary care, and euthanasia—and to the more general aspects of animal transport, handling, housing, environmental enrichment, and personnel training (Morton 1995). Refinement approaches of particular relevance to toxicity testing include best practices in dose administration, dose-volume limits, and humane end points (Hendriksen and Morton 1999; ILAR 2000; OECD 2000; Diehl et al. 2001; Stephens et al. 2002). Humane end points in an animal experiment are early indicators of pain, distress, or death and, once validated, can be used to terminate an experiment early to preclude or lessen animal suffering without compromising study objectives (Stokes 2000). The application of humane end points is often associated with frequent monitoring of animals and scoring of their clinical signs. Scoring systems are an important tool for evaluating the efficacy of proposed refinements.
In toxicology, refinements include not only modifications of existing tests but also new animal-based tests that result in less pain or distress than conventional procedures or in no pain or distress. For example, historically the guinea pig maximization test (GPMT) was the conventional assay for allergic contact dermatitis (ACD). A new procedure, the local lymph node assay (LLNA), assesses ACD by examining local lymph node proliferation instead of the ensuing clinically evident
allergic reaction. The animals, in this case mice, are euthanized before experiencing the discomfort of ACD. The LLNA can be considered an elaboration of the humane-end-point approach, which was made possible by knowledge of the mechanism of ACD. The LLNA has been accepted by EPA, the Food and Drug Administration (FDA), and the Occupational Safety and Health Administration (OSHA) as a refinement alternative to the GPMT for assessing ACD (NTP 1999).
Refinements in toxicology obviously benefit the animals involved in testing, but they can also be advantageous from scientific and societal viewpoints. Pain or distress stemming from poor technique can cloud study outcomes (Morton et al. 2001). Refined approaches, such as the use of humane end points, can lead to earlier completion of testing. Scoring of clinical signs can reveal toxicologic outcomes that might have been overlooked if death were the only outcome noted. Finally, implementing refinement can improve the morale of laboratory personnel and help to satisfy mandates in humane legislation, such as the U.S. Animal Welfare Act, with its emphasis on minimizing pain and distress.
Reduction Alternatives
Reduction alternatives are methods that use fewer animals than conventional procedures but yield comparable levels of information. They can also include methods that use the same number of animals but yield more information, so that fewer animals are needed to complete a given project or test (Balls et al. 1995). One of the most dramatic illustrations of reduction is found in the acute systemic toxicity-testing guidelines of the Organisation for Economic Co-operation and Development (OECD), which apply primarily to industrial chemicals. The number of animals used in OECD's Test Guideline 401 for the LD50 test dropped from 100 to 25 when the guideline—adopted in 1981—was modified in 1987. OECD also adopted three new guidelines in the 1990s that reflected additional reduction approaches, which typically use fewer than 10 animals per test. The new alternatives—the up-and-down procedure, the fixed-dose procedure, and the acute-toxic-class method—led OECD to drop Guideline 401 altogether from its guidelines in 2002 (OECD 2002a).
One straightforward way to explore reduction approaches for a given animal test is through retrospective analyses of test data on individual animals. If N is the number of animals conventionally used in a test, do (N – 1) animals typically yield the same conclusion? Do (N – 2) animals yield the same conclusion? That approach has been applied to the Draize eye-irritancy test to reduce the conventional number of animals used per test from six to three (see EPA 1998).
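The retrospective analysis described here can be sketched computationally by re-scoring every k-animal subset of a test's per-animal data and asking how often the subset reproduces the full-panel conclusion. The binary outcome and the "positive if at least one animal responds" decision rule below are illustrative assumptions, not the actual Draize scoring criteria:

```python
from itertools import combinations

def classification(outcomes, min_positive=1):
    """Hypothetical decision rule: the test is 'positive' if at least
    min_positive animals respond (1 = responder, 0 = nonresponder)."""
    return sum(outcomes) >= min_positive

def concordance_with_full_panel(outcomes, k):
    """Fraction of all k-animal subsets that reach the same conclusion
    as the full N-animal panel."""
    full = classification(outcomes)
    subsets = list(combinations(outcomes, k))
    agree = sum(classification(s) == full for s in subsets)
    return agree / len(subsets)

# Illustrative per-animal results for one historical test with N = 6
results = [1, 1, 0, 1, 0, 0]
print(concordance_with_full_panel(results, 5))  # all (N - 1)-subsets -> 1.0
print(concordance_with_full_panel(results, 3))  # 3-animal subsets -> 0.95
```

Applied across an archive of historical tests, a high concordance at k < N would support reducing the panel size, which is the logic behind the six-to-three reduction cited above.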
A rigorous application of experimental design and statistical approaches is one of the best ways to pursue reduction in animal numbers (Festing et al. 1998; Vaughan 2004). Statistical aids can yield precise estimates of the number of animals needed to test a hypothesis. Block designs can lead to reduction in animal numbers. And using animals that have genetically defined backgrounds can limit statistical variance and thereby achieve a given level of statistical power with fewer animals (Russell and Burch 1992; Festing 1999).
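A standard normal-approximation sample-size formula illustrates how limiting variance translates directly into fewer animals. This is a generic power calculation for comparing two group means, not a formula prescribed by any testing guideline; the effect size and standard deviations are illustrative:

```python
import math
from statistics import NormalDist

def animals_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Smallest n per group to detect a mean difference delta with
    within-group SD sigma, two-sided test (normal approximation):
    n = 2 * ((z_{alpha/2} + z_{beta}) * sigma / delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

print(animals_per_group(1.0, 1.0))  # outbred stock -> 16 per group
# Genetically defined animals with, say, 30% lower SD need about half as many:
print(animals_per_group(1.0, 0.7))  # -> 8 per group
```

The second call shows the point made in the text: holding the detectable effect fixed, a reduction in within-group variance lowers the required number of animals roughly in proportion to the variance.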
Animal reduction can also be achieved by applying adaptive Bayesian statistical techniques to study design. Such approaches have been used in clinical trials for evaluating new drugs and have resulted in reduced numbers of subjects and early termination in specific arms of clinical trials, reducing ineffective treatments and life-threatening side effects and improving survival (Berry et al. 2002; Giles et al. 2003). The same techniques could be adapted to reduce the numbers of animals used in toxicity testing.
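A minimal sketch of such an adaptive Bayesian design uses a Beta-binomial model: after each animal, the posterior on the adverse-event rate is updated, and the dose group is stopped once the posterior probability of excessive toxicity is high. The threshold, stopping probability, and prior below are all illustrative choices:

```python
def beta_posterior_tail(alpha, beta_, threshold, grid=2000):
    """P(rate > threshold) under a Beta(alpha, beta_) posterior,
    by midpoint-rule numerical integration (no SciPy required)."""
    step = 1.0 / grid
    xs = [(i + 0.5) * step for i in range(grid)]
    dens = [x ** (alpha - 1) * (1 - x) ** (beta_ - 1) for x in xs]
    total = sum(dens)
    tail = sum(d for x, d in zip(xs, dens) if x > threshold)
    return tail / total

def monitor(outcomes, threshold=0.3, stop_prob=0.95, prior=(1, 1)):
    """Adaptive monitoring sketch: update the Beta posterior after each
    animal (1 = adverse event) and stop the arm when
    P(rate > threshold) >= stop_prob. Returns the stopping index, or
    None if the arm runs to completion. Parameters are illustrative."""
    a, b = prior
    for i, y in enumerate(outcomes, start=1):
        a += y
        b += 1 - y
        if beta_posterior_tail(a, b, threshold) >= stop_prob:
            return i
    return None

print(monitor([1, 1, 1, 1]))      # stops early, after 2 animals
print(monitor([0, 0, 0, 0, 0]))   # no stopping signal -> None
```

In a toxicity-testing context, early stopping of a clearly toxic dose group spares the remaining animals scheduled for that group, paralleling the clinical-trial applications cited above.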
Various noninvasive imaging techniques can be used to track the progression of toxic effects or disease in a cohort of animals, eliminating the need for interim killing of animals at selected times. To date, those techniques, such as biophotonic imaging (Contag et al. 1996), have been implemented primarily in biomedical research, as opposed to toxicity testing. If applied to regulatory toxicity testing, they could not only reduce animal numbers in some tests but also facilitate the refinement of tests by allowing animals to be monitored over time to gauge how close they are getting to specified humane end points, such as tumor size.
Animal use can also be limited by careful design of testing schemes. For example, EPA modified the testing scheme in its high-production-volume (HPV) chemical testing program after pressure from animal protectionists. The agency called on program participants to take a number of steps intended to reduce animal use, including grouping chemicals into appropriate categories and testing only representative chemicals from a category, avoiding some types of testing of closed-system intermediates, and encouraging a thoughtful, qualitative analysis rather than a rote checklist approach (see EPA 1999). Reducing animal numbers in toxicity tests not only subjects fewer animals to potential suffering but has the potential to lower the cost of testing.
Replacement Alternatives
Replacement alternatives use nonanimal approaches in lieu of animal-based methods. In toxicology, such nonanimal approaches include physicochemical measures, quantitative structure-activity relationship (QSAR) models, and other methods. Replacement might include substituting invertebrates for the vertebrates typically used in testing, for example, the use of Caenorhabditis elegans in chronic toxicity testing. It might also include substituting primary cultures of tissues or cells, such as neuromuscular preparations, for whole animals; however, such cultures entail animal use to harvest the tissues to be cultured and therefore do not truly replace animal use.
Some nonanimal methods can serve as screens to limit the number of chemicals that move on to later stages of testing. For example, a simple pH determination can characterize a chemical as highly acidic or alkaline and so almost certainly an eye irritant, thus obviating a Draize eye-irritancy test in rabbits (see OECD 2002b). Such a screen can be labeled a “partial replacement” to distinguish it from a nonanimal method that serves as the definitive test, a “full replacement.” Full replacements clearly are more satisfactory from a humane perspective, but partial replacements do limit animal use and suffering in toxicity tests.
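Sketched as code, a partial-replacement screen of this kind is simply a decision rule applied before any animal test is scheduled. The pH cutoffs and output labels below are illustrative, not the regulatory values:

```python
def eye_irritancy_screen(ph):
    """Hypothetical partial-replacement screen: chemicals with extreme
    pH are classified as eye irritants without animal testing; all
    others proceed to the next tier. Cutoffs are illustrative only."""
    if ph <= 2.0 or ph >= 11.5:
        return "classify as severe eye irritant; no animal test needed"
    return "inconclusive; proceed to next testing tier"

print(eye_irritancy_screen(1.5))   # strongly acidic -> classified at tier 1
print(eye_irritancy_screen(7.0))   # neutral -> passes through the screen
```

Only chemicals that fall through the screen reach the animal-based tier, which is what makes this a partial rather than a full replacement.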
Replacement approaches have been successfully implemented over the last several decades for a variety of applications, including culturing viruses, assaying vitamins, diagnosing pregnancy, and preparing monoclonal antibodies (Stephens 1989). In toxicology, in vitro tests have shown great potential as replacement alternatives. The Ames mutagenesis test, developed in 1971, was the first in vitro test used in regulatory toxicology. In vitro tests and other nonanimal methods have since been accepted in regulatory toxicology case by case after the development of the field of validation and the establishment of ICCVAM and ECVAM in the 1990s. In recent years, ICCVAM, ECVAM, and OECD have validated or accepted as validated a number of in vitro tests (see Chapter 2, Table 2-3), including the 3T3 neutral red uptake phototoxicity test, a skin-absorption assay, cytotoxicity assays for acute systemic toxicity, and skin-corrosivity assays, such as the transcutaneous electrical resistance assay, the Corrositex assay, and the Episkin and Epiderm assays (ICCVAM 2004; ECVAM 2005). Their validations have established the strengths and weaknesses of the assays and in some cases limited their applicability to particular chemical classes or to levels within tiered testing strategies.
Overall, nonanimal approaches offer several potential advantages over classical animal-based tests. First, they can be less time-consuming and more humane. Second, they can be more mechanistically relevant to human toxicity when they are selected or tailored to reflect a specific biochemical pathway or a chemical receptor that does not occur in a given animal model. Third, they can allow for higher throughput. Because of these technical advantages, such approaches are being evaluated for large-scale testing programs, including HPV chemical testing and endocrine-disruptor testing. As the large-scale testing programs are developed and implemented, nonanimal methods are being incorporated as screens into tiered-testing approaches, with animal testing reserved primarily for the highest tiers. Efforts to develop a full array of nonanimal methods to address all end points in some testing programs are under way (Worth and Balls 2002). That approach would rely on mechanistically based assays and, where appropriate, incorporate metabolic activation. Such an approach to toxicity-testing programs might eliminate the need for extrapolation from animals to humans in some cases and aid in identifying hazards to potentially sensitive human populations.
Use of Alternative Species
Nonmammalian vertebrates, such as fish, are being used increasingly in human health effects testing. To the extent that such species are less sentient than mammals, their use constitutes an example of refinement. Some nonmammalian species have a high degree of structural and physiologic similarity to higher vertebrates, enhancing the likelihood that similar toxicities would be produced. In addition, nonmammalian species have shorter developmental periods and shorter overall life spans, which are useful characteristics for simulating effects of chronic exposure. And they usually require simpler, less expensive laboratory maintenance than mammals.
The effectiveness of alternative models is well illustrated by historically prominent studies that used rainbow trout as a model for carcinogenicity and mechanistic cancer research. Trout have been shown to share many mechanisms of carcinogenesis with mammals, such as pathways of metabolic activation and production of mutagenic DNA adducts. Recently, researchers took advantage of the low cost and ease of maintenance of trout to carry out the largest dose-response study of chemically induced carcinogenesis ever conducted (Williams et al. 2003). The goal of this project, which used 42,000 trout, was to identify the dose at which one additional cancer occurred in 1,000 animals, an order-of-magnitude increase in sensitivity over the largest mouse study, which used 24,000 mice (Gaylor 1980). The dose-response data deviated significantly from linearity, although a threshold dose could not be statistically established. Studies that use large numbers of animals and thereby have increased sensitivity would have profound implications for human health risk assessment if the animal models used were found to be relevant to humans.
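The statistical motivation for such large studies can be illustrated with a generic sample-size calculation: the number of animals needed per dose group grows rapidly as the excess risk to be detected shrinks. The background tumor rate, one-sided test, and design values below are assumptions for illustration, not parameters of the trout study:

```python
import math
from statistics import NormalDist

def animals_needed(excess_risk, background=0.0005, alpha=0.05, power=0.8):
    """Normal-approximation sample size (per dose group) to detect an
    excess tumor risk above an assumed background rate with a one-sided
    test against that known background. All values are illustrative."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha)   # one-sided significance
    z_b = z.inv_cdf(power)
    p0 = background
    p1 = background + excess_risk
    num = z_a * math.sqrt(p0 * (1 - p0)) + z_b * math.sqrt(p1 * (1 - p1))
    return math.ceil((num / (p1 - p0)) ** 2)

print(animals_needed(1e-2))   # 1-in-100 excess risk: on the order of 150
print(animals_needed(1e-3))   # 1-in-1,000 excess risk: several thousand
```

Under these assumptions, pushing the detectable excess risk down one order of magnitude multiplies the required group size by roughly thirty, which is why only an inexpensive, easily maintained species makes a 42,000-animal design feasible.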
Another fish model that is gaining increased attention from toxicology researchers is the zebrafish (Sumanas and Lin 2004). Zebrafish have many features that make them highly desirable as a laboratory model, including small size, high fecundity, and rapid development. The embryos are transparent, and this allows visualization of fundamental developmental processes with a simple dissecting microscope. A generation time of only 3 months makes genetic screening practical. Furthermore, a variety of diseases have been successfully modeled in zebrafish via simple genetic alterations or mutations. Much of the zebrafish genome has been sequenced, and at least two zebrafish oligonucleotide microarrays are available, each containing over 14,000 unique sequences. Transgenic zebrafish that express green fluorescent protein (GFP) under the control of various tissue-specific promoters have also been developed.
From a toxicology perspective, zebrafish have been shown to express the aryl hydrocarbon receptor (AhR) and the AhR nuclear translocator (ARNT), two proteins that are responsible for initiating the toxic effects of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and structurally related halogenated aromatic hydrocarbons in mammals (Andreasen et al. 2002). Zebrafish respond to TCDD with induction of cytochrome P4501A, a key gene controlled by TCDD-activated AhR in all species examined (Andreasen et al. 2002). Scientists at the National Toxicology Program (NTP) are evaluating zebrafish to determine their usefulness in screening chemicals for potential toxicity and carcinogenicity. Because of their genetic uniformity and low rates of spontaneous tumors, zebrafish minimize the experimental variability normally associated with other alternative animal species.
Although nonmammalian models show great promise at both ends of the toxicity-testing spectrum (screening and mechanistic studies), there are obvious limitations on the use and applicability of such nonmammalian species in some aspects of toxicity testing. Metabolic differences may be greater between nonmammalian species and humans than between humans and other mammals, so the use of such data for human health risk assessment may be more tenuous. Substantial anatomic and physiologic differences between mammals and other species will also prevent their application to assessment of some toxic end points.
-OMICS TECHNOLOGIES AND COMPUTATIONAL TOXICOLOGY
Novel -omics technologies and computational toxicology may one day contribute to resolving much of the current tension around the objectives of toxicity testing. These new fields are developing rapidly, and their integration into traditional testing strategies is being investigated. This section provides an overview of the tools, techniques, and science that show promise for advancing toxicity testing and risk assessment.
Individuals differ in their responses to environmental toxicants, and that variability can be attributed to many factors. One possible factor is the variation in the human genome. Each person’s genome is different, and the differences are thought to influence a person’s response and susceptibility to a chemical exposure. The Human Genome Project at the National Institutes of Health has greatly facilitated the search for susceptibility genes—genes that influence a person’s response to a stimulus or probability of developing a particular disease. In the last decade, researchers have been successful in identifying genes for diseases, such as cystic fibrosis, that are due to mutations in single genes. The effect of such a mutation is large and therefore relatively easy to identify. Identifying the susceptibility genes for complex human traits has been more challenging, but recent molecular and statistical advances stimulated by the Human Genome Project have led to the identification of susceptibility genes for several complex human diseases, such as asthma and Crohn’s disease. Those advances have also led to identification of genetic variations that make some people more and other people less susceptible to environmental toxicants.
Several key developments in addition to the Human Genome Project have advanced the field of genetics. First is the characterization of DNA-sequence polymorphisms, particularly single-nucleotide polymorphisms (SNPs). Three entities—the SNP Consortium (TSC), the International HapMap Project, and the National Institute of Environmental Health Sciences (NIEHS) Environmental Genome Project (EGP)—have identified and characterized millions of SNPs. Specifically, they have provided positional information and allele frequencies and have developed assays for genotyping them. The SNPs identified by TSC and the HapMap Project are distributed across the entire genome and were not selected specifically for their functional significance. The SNPs identified by the EGP reside in environmentally responsive genes, such as genes involved in the cell cycle, DNA repair, and metabolism. The work of all three entities has provided a well-characterized set of SNPs that can be used as genetic “landmarks” to localize genes that influence one’s susceptibility to disease and sensitivity to toxicants.
The second advance is the development of technologies for high-throughput genotyping. Although millions of polymorphisms have been identified, genotyping them for routine analysis has been an expensive, labor-intensive task. Until recently, genotyping was performed marker by marker; thus, the throughput was low and the cost high. Several recent developments allow thousands of markers to be genotyped in parallel. Large numbers of genotypes can be generated from DNA samples from many individuals. That advance is particularly important because the effect of each sequence variation is likely to be small, and these small effects would be very difficult to detect without a sample of adequate size.
The third advance is the improvement of phenotyping methods. A phenotype is the set of biochemical, physiologic, or physical characteristics of an individual as determined by his or her genetic background and environment. Defining phenotypes and collecting material for study often present challenges. To determine the genetic basis of a phenotype, one must study how the phenotype is passed along in families; therefore, phenotypic measurements and DNA from family members are often needed for analysis. However, in trying to define a phenotype that would indicate susceptibility to an environmental toxicant, it is difficult or impossible to identify family members who have been exposed to the same
agents under similar circumstances. To circumvent that problem, cell cultures from family members are exposed to environmental agents. The agent and dose are controlled, and a large amount of family material can be evaluated. Those studies have demonstrated that gene expression and phenotypes, such as cellular functions, can be accurately measured in cultured cells (Schork et al. 2002; Yan et al. 2002; Lo et al. 2003). In addition, the phenotypes identified in cultured cells are amenable to genetic analysis (Schadt et al. 2003; Greenwood et al. 2004; Morley et al. 2004). In recent studies, cells from members of large three-generation families were exposed to chemotherapeutic agents, such as cisplatin (Dolan et al. 2004), 5-fluorouracil, and docetaxel (Watters et al. 2004), and the genes that influence chemotherapy toxicity were identified (Dolan et al. 2004; Watters et al. 2004). Improvements in phenotyping methods are important for elucidating the genetics of chemical response and measuring the consequences of genetic variation.
The human genome has been estimated to consist of about 25,000 genes. The gene-expression pattern varies from cell to cell and determines the identity of each cell. Cells induce or repress particular genes in response to environmental stimuli. Changes in gene expression help the cells to adapt to the “new” environment or repair damage resulting from the stimuli. One can identify genes that change in response to exposure by comparing the expression level of genes at baseline to the expression level in response to stimuli. With such technologies as microarrays, the expression levels of tens of thousands of genes can be measured accurately and efficiently. Those genes may serve as biomarkers of exposure and also aid in understanding the mechanism of action of the stimuli and the cellular pathways involved in the response.
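A baseline-versus-treated comparison of this kind reduces, at its simplest, to computing a fold change and a test statistic per gene. The sketch below uses toy expression values and hypothetical gene names; real microarray analyses add normalization and multiple-testing corrections on top of this core calculation:

```python
import math

def log2_fold_change(baseline, treated):
    """Log2 ratio of mean treated to mean baseline expression for one gene."""
    mb = sum(baseline) / len(baseline)
    mt = sum(treated) / len(treated)
    return math.log2(mt / mb)

def t_statistic(a, b):
    """Two-sample t statistic (pooled-variance form) for one gene."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

# Toy replicate expression values for two hypothetical genes (arbitrary units)
geneA_ctrl, geneA_trt = [100, 110, 95], [210, 190, 205]
geneB_ctrl, geneB_trt = [50, 55, 52], [51, 54, 53]
print(round(log2_fold_change(geneA_ctrl, geneA_trt), 2))  # -> 0.99 (induced ~2-fold)
print(round(log2_fold_change(geneB_ctrl, geneB_trt), 2))  # -> 0.01 (unchanged)
print(round(t_statistic(geneA_ctrl, geneA_trt), 1))       # -> -13.4 (strong response)
```

Repeated over tens of thousands of genes, ranking by fold change and test statistic is what turns raw microarray intensities into candidate biomarkers of exposure.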
Several groups, such as ILSI-HESI, have initiated projects to investigate the use of genomic data in risk assessment (Pennie et al. 2004; Hood 2004). Other organizations have initiated programs to investigate the use of genomic and other -omics technologies in toxicology. For example, the NRC Standing Committee on Emerging Issues and Data on Environmental Contaminants, which was convened at the request of NIEHS, currently is focused on toxicogenomics and its applications in environmental and pharmaceutical safety assessment, risk communication, and public policy. The standing committee is not tasked with developing a consensus report but has convened workshops on a variety of topics, including use of bioinformatics to manage toxicogenomics information across laboratories; identification of critical knowledge gaps in cancer risk assessment and potential application of toxicogenomics technologies to address those gaps; identification of methods for communicating toxicogenomics information to the public and other nonexpert audiences; application of toxicogenomics to cross-species extrapolation to determine whether the effects of chemicals in animals can be used to predict human responses; and investigation of strategies to overcome obstacles to sharing toxicogenomics data.
Genomic experiments have been performed to answer questions in toxicology and include the following studies:
• Studies of transcriptional changes in cells exposed to a particular agent, such as the exposure of mouse hepatoma cells to chromium (Wei et al. 2004), the exposure of the liver of transgenic mice that have constitutively active dioxin-AhR to N-nitrosodiethylamine (Moennikes et al. 2004), the exposure of MCF-7 cells to estrogen (Terasaka et al. 2004), and the exposure of human keratinocytes to inorganic arsenic (Rea et al. 2003).
• Studies of dose-response assessment, such as those to evaluate dose-dependent expression changes in human embryonic kidney (HEK293) cells exposed to arsenite (Zheng et al. 2003).
• Studies of the extent of individual variation in response to exposure, such as those to determine susceptibility genes for resistance to dichlorodiphenyltrichloroethane (DDT) in Drosophila (Daborn et al. 2002; Pedra et al. 2004).
Most of the published studies focus on identifying the transcriptional changes associated with exposure. However, studies need to examine mechanisms of action of toxicants and determine general and toxicant-specific cellular responses. Furthermore, studies need to include sufficiently large samples, assess dose-response relationships, characterize the temporal nature of gene expression in relation to the relevant end point, and, to the extent possible, examine an appropriate variety of tissues from different organisms. To identify susceptible populations and the potential hazards of chemicals with genomic technologies, large amounts of information must be accumulated and compared. Common microarray techniques and databases are being developed to make data storage and analysis feasible for a large number of experiments. Minimum Information About a Microarray Experiment for Toxicogenomics (MIAME/TOX) (EBI 2005) is a standard for microarray experiments; it is based on a particular microarray model and data-exchange format, and it allows integration of data from other sources, such as clinical data and data from histopathologic studies. The standard is intended to address at least some of the difficulties that arise in comparing datasets that have been acquired with different technologies, compiled at different times, or generated by different laboratories, and it should facilitate the construction of databases with broader utility, such as those being developed by the National Center for Toxicogenomics.
Although genomics technologies, such as transcript profiling, have considerable potential in both predictive and mechanistic toxicology, their appropriate application to a risk-benefit analysis of novel chemical entities requires an understanding of the utility of the resulting data and, ideally, regulatory guidance or policy regarding their use. Recognizing the potential of genomics approaches, a number of regulatory agencies, including FDA and EPA, have issued draft guidance on the integration of these approaches into established risk-assessment schemes. For example, in 2005, FDA released final guidance on pharmacogenomics-data submission (FDA 2005) that recognizes the research applications of genomics, such as priority-setting among chemicals in a chemical class and selection of compounds for further development. Submission of genomic data is not required except when “known” or “probable” valid biomarkers of effect are recognized. In the absence of those biomarkers, data are required only for submission in an investigational new drug or new drug application filing if they are being used to support a safety argument (for example, the relevance of an effect in humans vs animal species), as a component in clinical trial design (for example, as a method to stratify patients or to monitor patients during the trial), or to clarify a labeling issue. FDA is also seeking voluntary submission of genomic data to increase its experience in handling and interpreting the data.
In a similar vein, EPA (2002) issued a brief interim policy statement on genomics technology that recognizes that genomics data would most likely be used as supportive or research data—that is, potentially used in ranking chemicals for further testing or in supporting regulatory
action. In March 2004, EPA reviewed the potential role of genomics technologies across a broad array of issues related to toxicity testing, risk assessment, and regulation of chemicals in the environment (EPA 2004). The review was the product of the EPA Genomics Task Force, formed at the request of the EPA Science Policy Council.
The task force recognized that standardization of experimental design and the emergence of data-quality standards may be necessary for the use of the data in regulatory policy and processes. It identified four elements of regulation in which genomics activities are likely to influence regulatory practice, policy, or review: priority-setting among contaminants and contaminated sites, monitoring, reporting provisions, and enhanced risk assessment. Many research needs and activities were recognized—for example, linking the Office of Research and Development’s Computational Toxicology Research Program to genomics activities, developing an analytical framework and acceptance criteria for genomics data, and developing internal expertise and methods to evaluate such data at EPA. Throughout the review, EPA recognized the role of the emerging technologies in informing the risk-assessment process and in potentially increasing the scientific rigor of the regulatory process.
Characterizing the protein components of a biologic system and elucidating their functions are key factors in understanding the toxicity that may result from biochemical-pathway disruptions or malfunctions due to environmental exposures. Proteomic technologies, such as two-dimensional gel electrophoresis and mass spectrometry, provide avenues for measuring protein-expression changes in response to exposure, identifying the proteins, and characterizing protein modifications, function, and activity (Bandara and Kennedy 2002; Kennedy 2002). Microarray technologies can also be applied to the study of proteins, but they are still in the early development stages.
Many of the proteome investigations address issues in toxicology. The most common form of analysis is differential proteome profiling, which determines the relative expression levels of proteins within a system and may also give information on secondary modifications, such as phosphorylation. The following are examples of proteome profiling experiments:
Studies that identify protein patterns associated with toxicity, such as acetaminophen-induced toxicity (Fountoulakis et al. 2000), azoxymethane-induced colon tumors (Chaurand et al. 2001), cardiotoxicity (Petricoin et al. 2004), and drug-induced steatosis in liver (Meneses-Lorente et al. 2004). Those studies have also examined dose-response relationships.
Studies that identify protein biomarkers of effect, such as biomarkers of liver toxicity (Gao et al. 2004) and biomarkers of compound-induced skeletal muscle toxicity (Dare et al. 2002).
Studies that provide insights into toxicity mechanisms, such as those of biliary canalicular membrane injury (Jones et al. 2003).
Studies that investigate species differences by proteome characterization of organs and organelles, such as liver proteins in rats (Fountoulakis and Suter 2002) and proteins in liver mitochondrial inner membranes in mice (Da Cruz et al. 2003).
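At its simplest, the differential profiling used in studies such as those above reduces to comparing relative protein abundances between treated and control samples. The sketch below illustrates the arithmetic only; the protein names and spot intensities are invented, not drawn from any of the cited studies:

```python
import math

# Hypothetical spot intensities from a 2-D gel experiment:
# mean abundance of each protein in control vs. treated samples.
control = {"ALB": 1200.0, "GSTP1": 300.0, "HSPA5": 450.0}
treated = {"ALB": 1150.0, "GSTP1": 1250.0, "HSPA5": 110.0}

# Differential profiling at its simplest: log2 fold change per protein,
# flagging proteins beyond a 2-fold change in either direction.
for protein in control:
    log2_fc = math.log2(treated[protein] / control[protein])
    flag = "changed" if abs(log2_fc) >= 1.0 else "unchanged"
    print(f"{protein}: log2 FC = {log2_fc:+.2f} ({flag})")
```

Real analyses add replicate measurements and statistical tests, but the fold-change comparison remains the core of the approach.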
Proteome characterization—determination of the composition of the proteins in a specific system—is a first step in understanding mechanisms of action and the biochemical processes behind induced toxicities. Characterizing the protein differences between species may assist in understanding the differences in species’ responses to toxicants. Other proteomic analyses include profiling of protein isoforms and modifications, investigation of protein-protein interactions, and characterization of protein-binding sites related to toxic events (Leonoudakis et al. 2004; Nisar et al. 2004).
Major challenges in proteomics include determining the most appropriate technology to use, processing and interpreting the experimental data, and placing the findings in the correct biologic context. New technologies for differential-expression analysis continue to emerge rapidly, and many are in the validation phase (Zhu et al. 2003). However, difficulties arise when one tries to compare datasets that have been acquired with different technologies, compiled at different times, or generated from different laboratories. Those variations can produce datasets that may or may not lead to the same conclusion (Baggerly et al. 2004). Integrating other types of experimental data, such as genomics datasets, provides additional value and may aid in interpretation (Heijne et al. 2003; Ruepp et al. 2002).
Characterizing various proteomes and using the findings to identify and understand toxicologic events is an enormous undertaking. The Human Proteome Organization (HUPO) was formed in 2001 and has members in various government, industry, and academic organizations (Omenn 2004). One of HUPO’s goals is to compare the various technologies that can be used to profile proteomes. There are also plans to develop a comprehensive characterization of the proteins found in human serum and plasma, to evaluate differences within the human population, and to create a global knowledge base and data repository. Concerted efforts, such as HUPO, will expedite our understanding of the proteome, and similar efforts will be needed to answer toxicity-related questions.
Metabonomics is defined as the study of metabolic responses to drugs, environmental agents, and diseases and involves the quantitative measurement of changes in metabolites in living systems in response to internal or external stimuli or as a consequence of genetic change (Nicholson et al. 2002). The term is often used interchangeably with metabolomics, which is related more specifically to the analysis of all metabolites in a biologic sample. Metabonomics is a logical extension of the more established fields of genetics, genomics, and proteomics and is increasingly used as a research tool to characterize chemical-induced changes in physiological processes.
The technique normally involves the processing of biologic fluids—such as urine, plasma, and cerebrospinal fluid—or other tissue preparations followed by analysis with high-resolution nuclear magnetic resonance spectroscopy to identify the metabolites present. As in genomics and proteomics, a large amount of data is generated, and sophisticated computational methods are needed to reduce the “noise” and identify the important changes (Forster et al. 2002). Combining data from multiple -omics sources can give a more holistic understanding of mechanistic toxicology. Mechanistic understanding of even relatively well-characterized agents can be increased by such a combinatorial approach, as recently demonstrated in studies on acetaminophen, which have characterized both genomic and metabonomic end points (Coen et al. 2004).
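One common way to “reduce the noise” in such high-dimensional spectral data is principal-component analysis, which projects each sample’s metabolite profile onto the few directions of greatest variance. The sketch below uses simulated data standing in for binned NMR intensities, with an invented treatment effect; it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated NMR-style data: 10 samples x 50 metabolite "bins".
# Treated samples (rows 5-9) shift in five bins; the rest is noise.
data = rng.normal(0.0, 1.0, size=(10, 50))
data[5:, :5] += 3.0  # hypothetical treatment effect on five metabolites

# Principal-component analysis via SVD on the mean-centered matrix.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T  # each sample as a point in PC1/PC2 space

# Control and treated samples separate along the first component.
print("PC1 scores, controls:", scores[:5, 0].round(1))
print("PC1 scores, treated: ", scores[5:, 0].round(1))
```

In practice the projection is inspected visually, and the loadings of the separating component point back to the metabolites that changed.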
Many researchers believe that metabonomics can be used in the commercial sector to characterize potential adverse drug effects (Nicholson et al. 2002; Robosky et al. 2002) and as a complementary approach to other -omics technologies in toxicology research (Reo 2002). The pharmaceutical sector has partnered with academia in the Consortium for Metabonomic Toxicology to define appropriate methods and to
generate metabolic “fingerprints” of potential use in preclinical screening of candidate drugs (Lindon et al. 2004). Metabonomics may also be used to characterize the effect of environmental stressors in wildlife populations by identifying and characterizing metabolic biomarkers as an indication of organism health. For example, metabonomics has been used to study the “withering syndrome” in shellfish (Viant et al. 2003). The risks and benefits related to novel or engineered food products or mixtures, such as “nutriceuticals,” may also be clarified by metabolic assessment of possible consumers or appropriate model systems (German et al. 2003).
Computational toxicology, as defined by EPA (EPA 2003), is the application of mathematical and computer models to predict the effect of an environmental agent and elucidate the cascade of events that result in an adverse response. It uses technologies developed in computational chemistry (computer-assisted simulation of molecular systems), molecular biology (characterization of genetics, protein synthesis, and molecular events involved in biologic response to an agent), bioinformatics (computer-assisted collection, organization, and analysis of large datasets of biologic information), and systems biology (mathematical modeling of biologic systems and phenomena). The goals of using computational toxicology are to set priorities among chemicals on the basis of screening and testing data and to develop predictive models for quantitative risk assessment. Computational toxicology, like the other nonanimal approaches to toxicology previously discussed, holds the potential to lessen the tension between the four major objectives of regulatory testing schemes—breadth, depth, animal welfare, and conservation. Although computational-modeling approaches have the clear advantages of being rapid and of potentially reducing animal testing, the success and validation of any given computational approach clearly depend on the end point being modeled—is the end point amenable to a computational approach?—and on the quality, volume, and chemical diversity contained in the dataset used to generate the model. Some of the computational toxicology products available today are proprietary. To be valuable for risk assessment, the computational approaches must be validated, adequately explained, and made accessible to peer review. Models that are
not accessible for review may be useful for many scientific purposes but are not appropriate for regulatory use.
This section first discusses several well-defined modeling activities that have emerged, including structure-activity-relationship (SAR) models, physiologically based pharmacokinetic (PBPK) models, and biologically based dose-response (BBDR) models. It then discusses emerging computational modeling activities, including computational models that predict metabolic fate and three-dimensional models that predict protein-ligand interactions. Finally, it discusses the integration of the various technologies.
The fundamental premise of SAR analyses is that molecular structure determines chemical and physical properties, which determine biologic and toxicologic responses. SAR analyses attempt to answer the questions, What constitutes a class of molecules that are active? What determines relative activity? What distinguishes these from inactive classes? (McKinney et al. 2000). The analyses can be qualitative or quantitative. Generally, SAR analyses are qualitative analyses that predict biologic activity on an ordinal or categorical scale, whereas quantitative SAR (QSAR) analyses use statistical methods to correlate structural descriptors with biologic responses and predict biologic activity on an interval or continuous scale.
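As an illustration of the quantitative variant described above, a QSAR model in its simplest form is an ordinary least-squares fit of structural descriptors to a continuous activity measure. The compound series, descriptor values, and activities below are entirely hypothetical:

```python
import numpy as np

# Hypothetical training set: each row holds two structural descriptors
# (say, logP and molar refractivity) for a congeneric series; y is a
# continuous activity measure such as log(1/LC50). All values invented.
X = np.array([[1.2, 40.0], [2.1, 55.0], [2.9, 61.0],
              [3.5, 72.0], [4.2, 80.0], [4.8, 95.0]])
y = np.array([2.1, 2.9, 3.4, 3.9, 4.5, 5.0])

# Classic QSAR: least-squares regression relating descriptors to activity.
A = np.column_stack([X, np.ones(len(X))])  # add an intercept column
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the activity of a new, untested structure from its descriptors.
new_descriptors = np.array([3.0, 65.0, 1.0])
predicted = new_descriptors @ coeffs
print(f"predicted activity: {predicted:.2f}")
```

Commercial systems differ mainly in the descriptors they compute and the statistical machinery they apply, not in this underlying premise.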
SAR and QSAR techniques have been applied to a wide variety of toxicologic end points. They have been used to predict LD50 values, maximum tolerated doses, Salmonella typhimurium (Ames) assay results, carcinogenic potential, and developmental-toxicity effects. SAR approaches also are used by EPA to screen new industrial chemicals under the Toxic Substances Control Act (TSCA) program. However, some toxicologic end points—such as carcinogenicity, reproductive effects, and hepatotoxicity—are mechanistically ill-defined at the molecular level, and this adds complexity when one tries to build predictive models for them.
Numerous SAR and QSAR modeling packages are commercially available. They fall into two main categories: knowledge-based approaches and statistically based systems. Knowledge-based systems, such as DEREK, use rules about generalized relationships between structure and biologic activity that are derived from human expert opinion and interpretation of toxicologic data to predict the potential toxicity of novel chemicals (LHASA Ltd. 2005a). Statistically based systems use calculated measures, structural connectivity, and various statistical methods to derive mathematical relationships for a training set of noncongeneric compounds. Examples of the latter approach are MultiCASE (MultiCASE 2005) and MDL QSAR (Elsevier MDL 2005).
Physiologically Based Pharmacokinetic Models
PBPK models predict distribution of chemicals throughout the body and describe the interactions of chemicals with biologic targets. For example, PBPK models might help to predict rates of appearance of metabolites or reaction products in target tissues or cellular consequences of interactions, such as mutation or impaired proliferation. PBPK models offer great promise for extrapolating tissue doses and responses from high dose to low dose and for extrapolating from test animals to humans and from one exposure route to another. Over the past 25 years, these models have been developed for a broad array of environmental compounds and drugs and have found diverse applications in reducing uncertainties in chemical risk assessments (Reddy et al. 2005). Specifically, PBPK models have helped to define areas of uncertainty and variability in risk assessment and to show explicitly how variability and uncertainty influence toxicity testing and data interpretation. A variety of software tools are now available to support PBPK modeling, including analytic approaches for sensitivity and variability analyses and for Markov-chain Monte Carlo optimization techniques. Indeed, there is now an expectation in some quarters that PBPK models will be available for dose-response assessment and exposure analysis of all important environmental chemicals.
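A deliberately minimal PBPK-style sketch conveys the structure of such models: well-stirred compartments connected by blood flow, with elimination in a clearing organ. The two-compartment system below (blood and liver) is integrated by forward Euler; every parameter value is illustrative, not taken from any validated model, and real PBPK models add many more tissues and measured partition coefficients:

```python
# Hypothetical parameters (illustrative values only).
Q_liver = 90.0   # liver blood flow, L/h
V_blood = 5.0    # blood volume, L
V_liver = 1.8    # liver volume, L
P_liver = 4.0    # liver:blood partition coefficient
CL_int = 30.0    # intrinsic hepatic clearance, L/h

c_blood, c_liver = 10.0, 0.0  # mg/L after a bolus dose into blood
dt, t = 0.001, 0.0            # time step and clock, h
while t < 1.0:                # simulate one hour
    venous = c_liver / P_liver  # concentration leaving the liver
    # Mass balance on each compartment, as concentration derivatives.
    dc_blood = Q_liver * (venous - c_blood) / V_blood
    dc_liver = (Q_liver * (c_blood - venous) - CL_int * venous) / V_liver
    c_blood += dc_blood * dt
    c_liver += dc_liver * dt
    t += dt

print(f"blood concentration after 1 h: {c_blood:.2f} mg/L")
```

The same structure scales to whole-body models by adding compartments and, for extrapolation, by substituting species-specific flows and volumes.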
Another use of PBPK models is human exposure surveillance monitoring. Concentrations of a variety of exogenous chemicals in human tissues, blood, and excreta are often measured (CDC 2005). PBPK models can be used in a form of reverse dosimetry to reconstruct the intensity of exposure required to give specific concentrations in tissues, blood, or excreta of exposed humans. The combination of PBPK analysis with biomonitoring results promises to provide improved measurement of human exposure, which can lead to more precise estimates of risks in exposed human populations.
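In its crudest steady-state form, the reverse-dosimetry idea reduces to a one-line inversion of the relation between intake and blood concentration. The clearance value and measured concentration below are invented for illustration; in practice the inversion is done with a full PBPK model and distributions rather than point values:

```python
# Steady-state reverse dosimetry: for chronic, roughly constant exposure,
# C_ss = intake_rate / CL, so a measured biomonitoring concentration can
# be inverted to an intake estimate. All values are hypothetical.
CL = 22.5             # whole-body clearance, L/h (e.g., from a PBPK model)
c_measured = 0.002    # measured blood concentration, mg/L

intake_rate = c_measured * CL      # mg/h at steady state
daily_intake = intake_rate * 24.0  # mg/day
print(f"estimated intake: {daily_intake:.2f} mg/day")
```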
Biologically Based Dose-Response Models
PBPK models can describe the relationship of dose with an initial biologic response, whereas BBDR models describe the progression from the initial biologic response through the biologic events leading to alteration of tissue function and disease. They predict the dose-response relationship on the basis of principles of biology, pharmacokinetics, and chemistry. Development of BBDR models requires collection of specific mechanistic data and their organization through quantitative, iterative modeling of biologic processes. The datasets involved in BBDR model construction, particularly those for toxicologic evaluations, will rely increasingly on high-throughput, high-content genomic data to assess cell signaling pathways perturbed by exposure to chemicals and the concentration at which the perturbations become large enough to alter specific biologic processes.
One use emphasized in EPA’s computational-toxicology framework is characterizing pathways of toxicity (EPA 2003). The key aspect is identifying the initial biologic alteration that leads to the observed adverse effect. For example, a group of structurally similar chemicals may interact with a specific nuclear receptor and cause a cascade of events, which may be species-specific or tissue-specific but lead to a similar adverse response. Identifying the initial biologic interaction that precipitates the observed adverse effect creates a foundation on which to develop generic BBDR models, that is, BBDR models for classes of compounds.
The main goal in developing BBDR models is to use the validated models to refine low-dose and interspecies extrapolation. Such application would require careful analysis of variability, sensitivity, and robustness of various model structures. BBDR models also may be used to improve the experimental design of toxicology studies so that data needs for risk assessment are fulfilled.
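As a sketch of the kind of quantitative component a BBDR model organizes, the fractional response of a receptor-mediated pathway is often described with a Hill function linking tissue dose to activation. The parameters below are invented; a full BBDR model would chain such steps from kinetics through receptor occupancy to a tissue-level outcome:

```python
# A minimal dose-response ingredient of a BBDR model: a Hill function
# for fractional activation of a receptor-mediated pathway.
def hill(dose, e_max=1.0, ed50=10.0, n=2.0):
    """Fractional response at a given tissue dose (arbitrary units);
    e_max, ed50, and n are hypothetical parameter values."""
    return e_max * dose**n / (ed50**n + dose**n)

for dose in (1.0, 10.0, 100.0):
    print(f"dose {dose:6.1f} -> response {hill(dose):.3f}")
```

Low-dose extrapolation then amounts to evaluating such fitted functions, with their uncertainty, well below the tested dose range.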
Computational Approaches to Predicting Metabolic Fate
Predicting metabolic fate is important in determining the risks associated with environmental exposure to chemicals. In some cases, for example, the parent compounds are benign but are metabolized to reactive intermediates that form protein or DNA adducts that elicit a toxicologic
response. Therefore, identifying the potential metabolites and likely clearance routes is critical for a complete hazard and risk assessment.
Numerous metabolic-fate computational models have been reported, and several are commercially available. Examples of products are METEOR (LHASA Ltd. 2005b), Meta (MultiCASE Inc. 2005), MetaDrug (GeneGo Inc. 2005), and, more recently, MetaSite (Molecular Discovery Ltd. 2005). METEOR, Meta, and MetaDrug use a rule-based approach to biotransformation in that they recognize structural motifs that are susceptible to metabolism and use “weighting” algorithms to determine the most likely metabolic products. Those systems have focused on a general mammalian model but in some cases have been able to generate species-specific predictions where knowledge is available. MetaDrug and MetaSite can predict the most likely sites of metabolism and the responsible P450 enzyme. The predictions rely on three-dimensional models of the individual cytochrome active sites. However, the products are based largely on three-dimensional structure models that have been extrapolated from crystallography data from microbial or other nonhuman P450 enzymes and suffer from the limitations of three-dimensional modeling described below.
Three-Dimensional Modeling of Chemical-Target Interactions
Three-dimensional modeling of a protein-ligand interaction raises the possibility of assessing structures on the basis of a computed ligand docking score (see Jones et al. 1997; Abagyan and Totrov 2001). Many docking software products and scoring algorithms have been developed and are commercially available through organizations such as the Cambridge Crystallographic Data Centre, Tripos, Accelrys, and Schrödinger (CCDC 2005; Tripos 2005; Accelrys 2005; Schrödinger 2005). Those methods often assume a flexible ligand but a rigid binding site; they also assume that a single binding site is responsible for the inhibition or activation of the protein function. Such assumptions may not hold true for “promiscuous” receptors, such as the estrogen receptor, that have broad substrate specificity and multiple potential binding ligands. Recent developments in software and advances in computing power have enabled some companies to develop potential solutions to the difficult and computationally expensive problem of dealing with those receptors.
The major limitation in hazard prediction is a general lack of knowledge about protein-ligand interactions mechanistically involved in
observed toxicity. Accordingly, most of the focus has been placed on proteins that have been identified as potentially important in toxicology, namely, the P450 family of cytochromes, which is thought to be primarily responsible for most drug metabolism, and the human ether-à-go-go-related gene (hERG) potassium channel, which is thought to play a role in cardiac QT prolongation, an effect considered by several regulatory bodies, including FDA, to be a surrogate indicator of potential drug-induced cardiac arrhythmia.
No x-ray crystal structures for the human variants of the cytochrome P450 proteins are in the public domain. Therefore, most efforts have focused on homology models constructed from bacterial or other mammalian protein structures. However, some commercial vendors claim to have human x-ray structures available for use with drug-design models (Astex Technology 2005). Successful use of three-dimensional modeling techniques, other than that discussed above, has not been widely published. The value of these approaches in predictive toxicology remains to be determined.
Future Uses of Computational Toxicology
Clearly, the computational approaches discussed here represent a set of related scientific disciplines that continue to mature. Their placement in a testing strategy will depend on what questions they can address. For example, regulatory scientists, such as Richard (1998), have commented that the opportunities offered by SAR approaches are most likely to be in hazard identification; this reflects the current inability of these systems (or any other system) to rule out hazard definitively. Nevertheless, it seems clear that these evolving computational tools have an opportunity to contribute substantially to the early stages of a more holistic toxicity-testing strategy. Inevitably, the value of a specific approach will be somewhat context-sensitive and will depend on the robustness of the model and on the quality of the underlying data that support it.
There are opportunities to link PBPK models with SAR approaches. PBPK models require a variety of input parameters, including partition coefficients, metabolic parameters, and rates of metabolism of test compounds. A long-term goal has been to use SAR approaches to provide those input parameters and produce generic PBPK models for
classes of chemicals that vary quantitatively with their structures and with the associated inputs. Continuing improvements in computational methods, especially with regard to predicting metabolic rates and sites of metabolism on complex molecules, could make the technologies feasible in the relatively near future. It may eventually be possible to link SAR, PBPK, and BBDR models to predict dose-response behaviors for the perturbations of cellular signaling networks by exogenous compounds and to provide estimates of risk to exposed humans.
New or revised toxicity-testing methods for regulatory toxicology are developed for a number of reasons, such as to increase chemical throughput, to provide more detailed information about individual chemicals, to reduce animal use and suffering, and to decrease costs associated with testing. A new or revised method may satisfy one or more of those objectives, indicating that they are not necessarily in conflict. Regardless of the rationale for developing a new method, any such method should be evaluated objectively—that is, validated—to determine whether it fulfills its intended purpose.
The need for formal principles of validation in toxicity testing became evident in the middle to late 1980s when various in vitro tests were developed as potential alternatives to established in vivo tests. The question arose as to how the new tests should be objectively assessed to determine whether they were as good as or better than the existing animal tests in predicting toxicity. Scientists and regulators recognized that formal validation principles would facilitate the implementation of new testing methods that could replace, reduce, or refine animal use and of any methods that involved new and improved technologies or helped to address new regulatory needs. Such validation principles would also help to ensure that the assessment of new methods was conducted in a scientifically sound and high-quality manner.
As a result of several workshop reports that discussed the conceptual and practical aspects of validation (Balls et al. 1990, 1995; OECD 1996), key terms were defined. Validation of a test method is a process by which the reliability and relevance of the method for a specific purpose are established (Balls et al. 1990). The reliability of a test method is the extent of reproducibility of results within and between laboratories over time when the test is performed using the same protocol. Relevance
is related to the scientific basis of the test system and to its predictive capability. Predictions are sometimes made with the help of a model, which translates the results from the test system into a prediction of toxicity. Test methods should be both reliable and relevant, and their limitations duly noted. Because in vivo mammalian models are currently assumed to have some relevance to humans, they are generally used as the standard against which alternative models are validated. In the validation of new in vivo mammalian assays, there has been some confusion about how relevance should be assessed. In such cases, validation is directed primarily at determining reproducibility, although relevance remains an important consideration.
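Assessing reliability as defined above is, at bottom, a quantitative exercise in reproducibility. The toy sketch below computes within- and between-laboratory coefficients of variation for invented assay results on one reference chemical; actual validation studies use many chemicals and more formal statistics:

```python
import statistics

# Hypothetical results: one reference chemical tested three times in
# each of three laboratories (arbitrary assay units).
labs = {
    "lab_A": [4.8, 5.1, 5.0],
    "lab_B": [5.3, 5.6, 5.4],
    "lab_C": [4.6, 4.9, 4.7],
}

# Within-laboratory reproducibility: coefficient of variation per lab.
for name, values in labs.items():
    cv = statistics.stdev(values) / statistics.mean(values) * 100
    print(f"{name}: within-lab CV = {cv:.1f}%")

# Between-laboratory reproducibility: CV of the laboratory means.
means = [statistics.mean(v) for v in labs.values()]
between_cv = statistics.stdev(means) / statistics.mean(means) * 100
print(f"between-lab CV = {between_cv:.1f}%")
```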
Validation is one of several phases in the evolution of a test method from conception to application. The phases are test development, prevalidation or test optimization, formal validation, independent assessment, and regulatory acceptance. Validation is often a time-consuming and expensive process. Consequently, a prevalidation or optimization phase is considered necessary to ensure that a method is ready to enter the validation process. Prevalidation addresses, at least in a preliminary way, many of the issues addressed later in the validation phase, especially the availability of an optimized protocol.
In the validation of new in vivo mammalian bioassays, the adequacy of the test method’s end points to evaluate the biologic effect of interest in the species of interest may be difficult to determine. Ideally, the results of the in vivo mammalian bioassay should be compared with results of human studies. However, it is often difficult to validate a mammalian bioassay against health effects in humans because of the lack of high-quality data in humans and ethical constraints in conducting human clinical studies. Therefore, the validation principles discussed below are more easily applied to the validation of nonmammalian assays.
ECVAM was established in 1992 to coordinate validation activities in the European Union, and its U.S. counterpart, ICCVAM, was established in 1994. ECVAM has been more active in coordinating validation exercises, whereas ICCVAM has focused more on assessing the validation status of methods submitted for consideration. Both ECVAM and ICCVAM follow the validation principles developed at an OECD workshop in 1996 (OECD 1996). The principles are intended to apply to the validation of new or updated in vivo or in vitro test methods for hazard assessment. They address such issues as the scientific and regulatory rationale for the proposed test method, the adequacy of the test method’s end points to evaluate the biologic effect of interest in the species of interest, the availability of a detailed, formal protocol for the test method, the reproducibility of the test within and among laboratories, the performance of the test method relative to the performance of the test it is designed to replace (if appropriate), the availability of the supporting data for review, and adherence to good laboratory practices. International consistency in the validation process is important because it means that validation studies, which can be resource-intensive, expensive, and time-consuming, need not be repeated simply to satisfy differing international requirements.
ICCVAM and ECVAM have been increasingly collaborative on projects to improve their collective efficiency. However, they face several challenges in validating toxicologic methods, including those that can replace, reduce, and refine existing animal-based tests. The challenges are as follows:
The expense, time, and resources entailed by some validation efforts are impediments to more rapid progress. ICCVAM should strive for ways to overcome the logistical constraints without compromising the scientific integrity of the process. ECVAM’s “modular” approach to validation may be helpful in this regard. The modular approach decouples the stepwise process and emphasizes the data needed to address various principles of validation, such as within- and between-laboratory variability. The data needs are regarded as discrete modules, each of which can be satisfied with a distinct set of data, some of which may be derived from pre-existing data. Although new data may be needed, the number of laboratories required may be smaller than in a standard validation exercise (Hartung et al. 2004).
Many validation efforts compare a new test method undergoing validation with an existing animal-based test for the same end point. Such comparisons necessarily require comprehensive data not only on the new method but also on the existing method—the reference test. Experience has shown that the data from such reference tests are limited in availability. Test results, if published at all, are often provided in summary form, whereas ICCVAM typically needs individual animal data. The challenge to ICCVAM is to work with industry and others to assemble as complete a set of high-quality animal data as possible. Such efforts, when successful, would preclude the need to conduct further animal testing to generate new data.
Ideally, human data should serve as the standard against which to evaluate the performance of new tests. In the absence of such
human data, ICCVAM and similar entities consider the existing test to be the default standard and judge the new test against it. A challenge arises when the reference tests, typically animal-based, have considerable variability across laboratories. Such variability makes it difficult to show correlations between the results of the new test method and those of the reference test. One way to address this challenge is to make greater efforts to collect available human data as the true standard for comparison. In the absence of such data, however, approaches need to be developed to account for the inherent variability in some animal tests when conducting validation assessments.
New test methods are not always stand-alone substitutes for existing test methods. New test methods that prove to be inadequate in head-to-head comparisons with existing test methods might pass muster when combined with complementary approaches into tiered or battery approaches. Consequently, ICCVAM might benefit from providing greater guidance on developing and validating such approaches, rather than relying on one-for-one correspondence between the new and existing test methods.
Another challenge facing ICCVAM is helping to ensure a steady flow of new test methods into its validation pipeline. Without such candidate methods, ICCVAM would have nothing to validate or assess. ICCVAM or its parent agency should consider funding research to identify biomarkers or mechanisms of toxicity that could be incorporated into test methods and channeled into the ICCVAM pipeline for validation.
Meeting the challenges discussed above would enable ICCVAM to be more productive and efficient in assessing new test methods for their suitability for regulatory toxicology.
In addition to its guidance on validation principles, ICCVAM and the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) have issued practical guidance on submitting validation data for assessment and nominating promising test methods for further development or validation (ICCVAM/NICEATM 2004). Several new or revised tests have gone through the ICCVAM process and have been assessed according to its validation and regulatory acceptance criteria. For example, in 1998, after a submission by industry representatives, ICCVAM established an independent peer-review panel to review the validation status of the local lymph node assay (LLNA), a reduction and refinement alternative to the guinea pig maximization test
(GPMT) for allergic contact dermatitis. The panel judged the LLNA to be an adequate substitute for the GPMT according to the ICCVAM validation criteria. ICCVAM forwarded the results of the review to relevant federal agencies, which accepted the LLNA as a validated test for allergic contact dermatitis.
The ICCVAM-NICEATM validation and submission criteria are intended to help industry and the federal government to update and enhance the inventory of chemical testing methods. New or revised methods can be reviewed by ICCVAM and NICEATM, and the resulting recommendations can be sent to individual agencies for their consideration. Thus, the guidelines can help stakeholders to meet the challenges posed by new testing programs or needs. For example, EPA has contracted with ICCVAM and NICEATM to validate receptor-binding assays for its endocrine-disruptor program, and it is using ICCVAM and NICEATM criteria to validate some animal-based tests for the program. It should be emphasized that the formal validation process applies to methods intended for immediate regulatory testing. It is not intended for methods that, for example, are used only in-house in industry or are purely investigational or newly emerging.
References

Abagyan, R., and M. Totrov. 2001. High-throughput docking for lead generation. Curr. Opin. Chem. Biol. 5(4):375-382.
Accelrys. 2005. Products and Services. Accelrys Software Inc. [online]. Available: http://www.accelrys.com [accessed April 12, 2005].
Andreasen, E.A., J.M. Spitsbergen, R.L. Tanguay, J.J. Stegeman, W. Heideman, and R.E. Peterson. 2002. Tissue-specific expression of AHR2, ARNT2, and CYP1A in zebrafish embryos and larvae: Effects of developmental stage and 2,3,7,8-tetrachlorodibenzo-p-dioxin exposure. Toxicol. Sci. 68(2):403-419.
Aronov, A.M., and B.B. Goldman. 2004. A model for identifying HERG K+ channel blockers. Bioorg. Med. Chem. 12(9):2307-2315.
Astex Technology. 2005. Current Portfolio. Astex Technology, Cambridge, UK [online]. Available: http://www.astex-technology.com/current_portfolio.html [accessed April 12, 2005].
Baggerly, K.A., J.S. Morris, and K.R. Coombes. 2004. Reproducibility of SELDI-TOF protein patterns in serum: Comparing datasets from different experiments. Bioinformatics 20(5):777-785.
Balls, M., P. Botham, A. Cordier, S. Fumero, D. Kayser, H. Koëter, P. Koundakjian, N.G. Lindquist, O. Meyer, L. Pioda, C. Reinhardt, H. Rozemond,
T. Smyrniotis, H. Spielmann, H. Van Looy, M.T. van der Venne, and E. Walum. 1990. Report and recommendations of an international workshop on promotion of the regulatory acceptance of validated non-animal toxicity test procedures. ATLA 18:339-344.
Balls, M., A.N. Goldberg, J.H. Fentem, C.L. Broadhead, R.L. Burch, M.F.W. Festing, J.M. Frazier, C.F.M. Hendricksen, M. Jennings, M.D.O. van der Kamp, D.B. Morton, A.N. Rowan, C. Russel, W.M.S. Russell, H. Spielmann, M.L. Stephens, W.S. Stokes, D.W. Straughan, J.D. Yager, J. Zurlo, and B.F.M. van Zutphen. 1995. The three Rs: The way forward. ATLA 23(6):838-866.
Bandara, L., and S. Kennedy. 2002. Toxicoproteomics—a new preclinical tool. Drug Discov. Today 7(7):411-418.
Berry, D.A., P. Mueller, A.P. Grieve, M. Smith, T. Parke, R. Balazek, N. Mitchard, and M. Krams. 2002. Adaptive Bayesian designs for dose-ranging drug trials. Pp. 99-181 in Case Studies in Bayesian Statistics, Vol. V, C. Gatsonis, R.E. Kass, B. Carlin, A. Carriquiry, A. Gelman, I. Verdinelli, and M. West, eds. New York, NY: Springer.
CCDC (Cambridge Crystallographic Data Centre). 2005. Products. Cambridge Crystallographic Data Centre, Cambridge, UK [online]. Available: http://www.ccdc.cam.ac.uk [accessed April 12, 2005].
CDC (Centers for Disease Control and Prevention). 2005. Third National Report on Human Exposure to Environmental Chemicals. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Atlanta, GA [online]. Available: http://www.cdc.gov/exposurereport/3rd/pdf/thirdreport.pdf [accessed Sept. 26, 2005].
Chaurand, P., B.B. DaGue, R.S. Pearsall, D.W. Threadgill, and R.M. Caprioli. 2001. Profiling proteins from azoxymethane-induced colon tumors at the molecular level by matrix-assisted laser desorption/ionization mass spectrometry. Proteomics 1(10):1320-1326.
Coen, M., S.U. Ruepp, J.C. Lindon, J.K. Nicholson, F. Pognan, E.M. Lenz, and I.D. Wilson. 2004. Integrated application of transcriptomics and metabonomics yields new insight into the toxicity due to paracetamol in the mouse. J. Pharm. Biomed. Anal. 35(1):93-105.
Contag, C.H., P.R. Contag, S.D. Spilman, D.K. Stevenson, and D.A. Benaron. 1996. Photonic monitoring of infectious disease and gene regulation. Pp. 220-224 in Biomedical Optical Spectroscopy and Diagnostics, E. Sevick-Muraca, and D. Benaron, eds. Trends in Optics and Photonics Vol. 3. Washington, DC: Optical Society of America.
Daborn, P.J., J.L. Yen, M.R. Bogwitz, G. Le Goff, E. Feil, S. Jeffers, N. Tijet, T. Perry, D. Heckel, P. Batterham, R. Feyereisen, T.G. Wilson, and R.H. ffrench-Constant. 2002. A single p450 allele associated with insecticide resistance in Drosophila. Science 297(5590):2253-2256.
Da Cruz, S., I. Xenarios, J. Langridge, F. Vilbois, P.A. Parone, and J.C. Martinou. 2003. Proteomic analysis of the mouse liver mitochondrial inner membrane. J. Biol. Chem. 278(42):41566-41571.
Dare, T., H.A. Davies, J.A. Turton, L. Lomas, T.C. Williams, and M.J. York. 2002. Application of surface-enhanced laser desorption/ionization technology to the detection and identification of urinary parvalbumin-alpha: A biomarker of compound-induced skeletal muscle toxicity in the rat. Electrophoresis 23(18):3241-3251.
de Groot, M.J., M.J. Ackland, V.A. Horne, A.A. Alex, and B.C. Jones. 1999. A novel approach to predicting P450 mediated drug metabolism. CYP2D6 catalyzed N-dealkylation reactions and qualitative metabolite predictions using a combined protein and pharmacophore model for CYP2D6. J. Med. Chem. 42(20):4062-4070.
Diehl, K.H., R. Hull, D. Morton, R. Pfister, Y. Rabemampianina, D. Smith, J.M. Vidal, and C. van de Vorstenbosch. 2001. A good practice guide to the administration of substances and removal of blood, including routes and volumes. J. Appl. Toxicol. 21(1):15-23.
Dolan, M.E., K.G. Newland, R. Nagasubramanian, X. Wu, M.J. Ratain, E.H. Cook Jr., and J.A. Badner. 2004. Heritability and linkage analysis of sensitivity to cisplatin-induced cytotoxicity. Cancer Res. 64(12):4353-4356.
EBI (European Bioinformatics Institute). 2005. Microarray, Tox-MIAMExpress. European Bioinformatics Institute, European Molecular Biology Laboratory [online]. Available: http://www.ebi.ac.uk/tox-miamexpress/ [accessed April 12, 2005].
ECVAM (European Centre for the Validation of Alternative Methods). 2005. Scientifically Validated Methods [online]. Available: http://ecvam.jrc.cec.eu.int/index.htm [accessed March 16, 2005].
Ekins, S., M.J. de Groot, and J.P. Jones. 2001. Pharmacophore and three-dimensional quantitative structure activity relationship methods for modeling cytochrome p450 active sites. Drug Metab. Dispos. 29(7):936-944.
Elsevier MDL. 2005. MDL QSAR. MDL Discovery Predictive Science [online]. Available: http://www.mdl.com/products/predictive/qsar/index.jsp [accessed April 12, 2005].
EPA (U.S. Environmental Protection Agency). 1998. Health Effects Test Guidelines OPPTS 870.2400. Acute Eye Irritation. EPA 712-C-98-195. Office of Prevention, Pesticides, and Toxic Substances, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/opptsfrs/OPPTS_Harmonized/870_Health_Effects_Test_Guidelines/Series/870-2400.pdf [accessed April 7, 2005].
EPA (U.S. Environmental Protection Agency). 1999. Letters to Manufacturers/Importers, October 14, 1999. High Production Volume Challenge Program, Office of Prevention, Pesticides, and Toxic Substances, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/chemrtk/ceoltr2.htm [accessed April 11, 2005].
EPA (U.S. Environmental Protection Agency). 2002. Interim Policy on Genomics. Science Policy Council, Office of the Science Advisor, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://epa.gov/osa/spc/htm/genomics.htm [accessed April 12, 2005].
EPA (U.S. Environmental Protection Agency). 2003. A Framework for a Computational Toxicology Research Program in ORD. Draft Report. EPA/600/R-03/065. Office of Research and Development, U.S. Environmental Protection Agency. July 2003 [online]. Available: http://www.epa.gov/sab/03minutes/ctfcpanel_091203mattach_e.pdf [accessed April 7, 2005].
EPA (U.S. Environmental Protection Agency). 2004. Potential Implications of Genomics for Regulatory and Risk Assessment Applications at EPA. External Review Draft. EPA 100/B-04/002. Genomics Task Force Workgroup, Science Policy Council, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/osa/genomics-external-review-draft.pdf [accessed April 12, 2005].
FDA (Food and Drug Administration). 2005. Guidance for Industry: Pharmacogenomic Data Submissions. Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health, Food and Drug Administration. March 2005 [online]. Available: http://www.fda.gov/cder/guidance/6400fnl.htm [accessed June 3, 2005].
Festing, M.F.W. 1999. Reduction in animal use in the production and testing of biologicals. Dev. Biol. Stand. 101:195-200.
Festing, M.F.W., V. Baumans, R.D. Combes, M. Halder, C.F.M. Hendriksen, B.R. Howard, D.P. Lovell, G.J. Moore, P. Overend, and M.S. Wilson. 1998. Reducing the use of laboratory animals in biomedical research: Problems and possible solutions. ATLA 26(3):283-301 [online]. Available: http://altweb.jhsph.edu/publications/ECVAM/ecvam29.htm [accessed April 7, 2005].
Forster, J., A.K. Gombert, and J. Nielsen. 2002. A functional genomics approach using metabolomics and in silico pathway analysis. Biotechnol. Bioeng. 79(7):703-712.
Fountoulakis, M., and L. Suter. 2002. Proteomic analysis of the rat liver. J. Chromatogr. B Analyt. Technol. Biomed. Life Sci. 782(1-2):197-218.
Fountoulakis, M., P. Berndt, U.A. Boelsterli, F. Crameri, M. Winter, S. Albertini, and L. Suter. 2000. Two-dimensional database of mouse liver proteins: Changes in hepatic protein levels following treatment with acetaminophen or its nontoxic regioisomer 3-acetamidophenol. Electrophoresis 21(11):2148-2161.
Gao, J., L.A. Garulacan, S.M. Storm, S.A. Hefta, G.J. Opiteck, J.H. Lin, F. Moulin, and D. Dambach. 2004. Identification of in vitro protein biomarkers of idiosyncratic liver toxicity. Toxicol. In Vitro 18(4):533-541.
Gaylor, D.W. 1980. The ED01 study: Summary and conclusions. J. Environ. Pathol. Toxicol. 3(3 Spec.):179-183.
GeneGo Inc. 2005. MetaDrug. GeneGo Inc., St. Joseph, MI [online]. Available: http://www.genego.com/about/products.shtml#metadrug [accessed April 12, 2005].
German, J.B., M.A. Roberts, and S.M. Watkins. 2003. Genomics and metabolomics as markers for the interaction of diet and health: Lessons from lipids. J. Nutr. 133(6 Suppl. 1):2078S-2083S.
Giles, F.J., H.M. Kantarjian, J.E. Cortes, G. Garcia-Manero, S. Verstovsek, S. Faderl, D.A. Thomas, A. Ferrajoli, S. O'Brien, J.K. Wathen, L.C. Xiao, D.A. Berry, and E.H. Estey. 2003. Adaptive randomized study of idarubicin and cytarabine versus troxacitabine and cytarabine versus troxacitabine and idarubicin in untreated patients 50 years or older with adverse karyotype acute myeloid leukemia. J. Clin. Oncol. 21(9):1722-1727.
Greenwood, T.A., P.E. Cadman, M. Stridsberg, S. Nguyen, L. Taupenot, N.J. Schork, and D.T. O’Connor. 2004. Genome-wide linkage analysis of chromogranin B expression in CEPH pedigrees: Implications for exocytotic sympathochromaffin secretion in humans. Physiol. Genomics 18(1):119-127.
Hartung, T., S. Bremer, S. Casati, S. Coecke, R. Corvi, S. Fortaner, L. Gribaldo, M. Halder, S. Hoffmann, A.J. Roi, P. Prieto, E. Sabbioni, L. Scott, A. Worth, and V. Zuang. 2004. A modular approach to the ECVAM principles on test validity. ATLA 32(5):467-472.
Heijne, W., R.H. Stierum, M. Slijper, P.J. van Bladeren, and B. van Ommen. 2003. Toxicogenomics of bromobenzene hepatotoxicity: A combined transcriptomics and proteomics approach. Biochem. Pharmacol. 65(5):857-875.
Hendriksen, C.F.M., and D.B. Morton, eds. 1999. Humane Endpoints in Animal Experimentation for Biomedical Research: Proceedings of the International Conference, November 22-25, 1998, Zeist, The Netherlands. London: Royal Society of Medicine Press.
Hood, E. 2004. Taking stock of toxicogenomics: Mini-monograph offers overview [comment]. Environ. Health Perspect. 112(4):A231.
ICCVAM (Interagency Coordinating Committee on the Validation of Alternative Methods). 2004. Test Methods Evaluations [online]. Available: http://iccvam.niehs.nih.gov/methods/review.htm [accessed March 16, 2005].
ICCVAM-NICEATM (Interagency Coordinating Committee on the Validation of Alternative Methods-National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods). 2004. ICCVAM-NICEATM Documents [online]. Available: http://iccvam.niehs.nih.gov/docs/docs.htm [accessed April 12, 2005].
ILAR (Institute for Laboratory Animal Research). 2000. Humane Endpoints for Animals Used in Biomedical Research and Testing. ILAR J. 41(2):59-123.
Jones, G., P. Willett, R.C. Glen, A.R. Leach, and R. Taylor. 1997. Development and validation of a genetic algorithm for flexible docking. J. Mol. Biol. 267(3):727-748.
Jones, J.A., L. Kaphalia, M. Treinen-Moslen, and D.C. Leibler. 2003. Proteomic characterization of metabolites, protein adducts, and biliary proteins in rats exposed to 1,1-dichloroethylene or diclofenac. Chem. Res. Toxicol. 16(10):1306-1317.
Kennedy, S. 2002. The role of proteomics in toxicology: Identification of biomarkers of toxicity by protein expression analysis. Biomarkers 7(4):269-290.
Leonoudakis, D., L.R. Conti, S. Anderson, C.M. Radeke, L.M. McGuire, M.E. Adams, S.C. Froehner, J.R. Yates, and C.A. Vandenberg. 2004. Protein trafficking and anchoring complexes revealed by proteomic analysis of inward rectifier potassium channel (Kir2.x)-associated proteins. J. Biol. Chem. 279(21):22331-22346.
LHASA Ltd. 2005a. DEREK for Windows. LHASA Limited, Department of Chemistry, University of Leeds, Leeds, UK [online]. Available: http://www.chem.leeds.ac.uk/luk/derek/index.html [accessed April 12, 2005].
LHASA Ltd. 2005b. METEOR. LHASA Limited, Department of Chemistry, University of Leeds, Leeds, UK [online]. Available: http://www.chem.leeds.ac.uk/luk/meteor/index.html [accessed April 12, 2005].
Lindon, J.C., E. Holmes, M.E. Bollard, E.G. Stanley, and J.K. Nicholson. 2004. Metabonomics technologies and their applications in physiological monitoring, drug safety assessment and disease diagnosis. Biomarkers 9(1):1-31.
Lo, H.S., Z. Wang, Y. Hu, H.H. Yang, S. Gere, K.H. Buetow, and M.P. Lee. 2003. Allelic variation in gene expression is common in the human genome. Genome Res. 13(8):1855-1862.
McKinney, J.D., A. Richard, C. Waller, M.C. Newman, and F. Gerberick. 2000. The practice of structure activity relationships (SAR) in toxicology. Toxicol. Sci. 56(1):8-17.
Meneses-Lorente, G., P.C. Guest, J. Lawrence, N. Muniappa, M.R. Knowles, H.A. Skynner, K. Salim, I. Cristea, R. Mortishire-Smith, S.J. Gaskell, and A. Watt. 2004. A proteomic investigation of drug-induced steatosis in rat liver. Chem. Res. Toxicol. 17(5):605-612.
Moennikes, O., S. Loeppen, A. Buchmann, P. Andersson, C. Ittrich, L. Poellinger, and M. Schwarz. 2004. A constitutively active dioxin/aryl hydrocarbon receptor promotes hepatocarcinogenesis in mice. Cancer Res. 64(14):4707-4710.
Molecular Discovery Ltd. 2005. MetaSite. Molecular Discovery Ltd [online]. Available: http://www.moldiscovery.com/soft_metasite.php [accessed April 12, 2005].
Morley, M., C.M. Molony, T.M. Weber, J.L. Devlin, K.G. Ewens, R.S. Spielman, and V.G. Cheung. 2004. Genetic analysis of genome-wide variation in human gene expression. Nature 430(7001):743-747.
Morton, D.B. 1995. Advances in refinement in animal experimentation over the past 25 years. ATLA 23(6):812-822.
Morton, D.B., M. Jennings, A. Buckwell, R. Ewbank, C. Godfrey, B. Holgate, I. Inglis, R. James, C. Page, I. Sharman, R. Verschoyle, L. Westfall, and A.B. Wilson. 2001. Refining procedures for the administration of substances. Lab. Anim. 35(1):1-41.
MultiCASE Inc. 2005. META Program. MultiCASE Inc., Beachwood, OH [online]. Available: http://www.multicase.com/ [accessed April 12, 2005].
Nicholson, J.K., J. Connelly, J.C. Lindon, and E. Holmes. 2002. Metabonomics: A platform for studying drug toxicity and gene function. Nat. Rev. Drug Discov. 1(2):153-161.
Nisar, S., C.S. Lane, A.F. Wilderspin, K.J. Welham, W.J. Griffiths, and L.H. Patterson. 2004. A proteomic approach to the identification of cytochrome P450 isoforms in male and female rat liver by nanoscale liquid chromatography-electrospray ionization-tandem mass spectrometry. Drug Metab. Dispos. 32(4):382-386.
NTP (National Toxicology Program). 1999. The Murine Local Lymph Node Assay: A Test Method for Assessing the Allergic Contact Dermatitis Potential of Chemicals/Compounds. NIH Publication No. 99-4494. National Toxicology Program, Research Triangle Park, NC. February 1999 [online]. Available: http://iccvam.niehs.nih.gov/methods/llnadocs/llnarep.pdf [accessed March 16, 2005].
OECD (Organisation for Economic Cooperation and Development). 1996. Final Report of the OECD Workshop on Harmonization of Validation and Acceptance Criteria for Alternative Toxicological Test Methods. Paris: OECD.
OECD (Organisation for Economic Cooperation and Development). 2000. Guidance Document on the Recognition, Assessment, and Use of Clinical Signs as Humane Endpoints for Experimental Animals Used in Safety Evaluation. ENV/JM/MONO(2000)7. Paris: OECD [online]. Available: http://www.olis.oecd.org/olis/2000doc.nsf/LinkTo/env-jm-mono(2000)7 [accessed April 7, 2005].
OECD (Organisation for Economic Cooperation and Development). 2002a. OECD Test Guideline 401 will be deleted: A Major Step in Animal Welfare: OECD Reaches Agreement on the Abolishment of the LD50 Acute Toxicity Test [online]. Available: http://www.oecd.org/document/52/0,2340,en_2649_34377_2752116_1_1_1_1,00.html [accessed March 1, 2005].
OECD (Organisation for Economic Cooperation and Development). 2002b. Acute Eye Irritation/Corrosion. Chemicals Testing Guidelines No. 405. Paris: OECD. April 24, 2002.
Omenn, G.S. 2004. The Human Proteome Organization Plasma Proteome Project pilot phase: Reference specimens, technology platform comparisons, and standardized data submissions and analyses. Proteomics 4(5):1235-1240.
Payne, V.A., Y.T. Chang, and G.H. Loew. 1999. Homology modeling and substrate binding study of human CYP2C9 enzyme. Proteins 37(2):176-190.
Pedra, J.H., L.M. McIntyre, M.E. Scharf, and B.R. Pittendrigh. 2004. Genome-wide transcription profile of field- and laboratory-selected dichlorodiphenyltrichloroethane (DDT)-resistant Drosophila. Proc. Natl. Acad. Sci. U.S.A. 101(18):7034-7039.
Pennie, W., S.D. Pettit, and P.G. Lord. 2004. Toxicogenomics in risk assessment: An overview of an HESI collaborative research program. Environ. Health Perspect. 112(4):417-419.
Petricoin, E.F., V. Rajapaske, E.H. Herman, A.M. Arekani, S. Ross, D. Johann, A. Knapton, J. Zhang, B.A. Hitt, T.P. Conrads, T.D. Veenstra, L.A. Liotta, and F.D. Sistare. 2004. Toxicoproteomics: Serum proteomic pattern diagnostics for early detection of drug induced cardiac toxicities and cardio-protection. Toxicol. Pathol. 32(Suppl. 1):122-130.
Rea, M.A., J.P. Gregg, Q. Qin, M.A. Phillips, and R.H. Rice. 2003. Global alteration of gene expression in human keratinocytes by inorganic arsenic. Carcinogenesis 24(4):747-756.
Reddy, M.B., R.S.H. Yang, H.J. Clewell III, and M.E. Andersen, eds. 2005. Physiologically Based Pharmacokinetics: Science and Applications. Hoboken, NJ: John Wiley & Sons, Inc.
Reo, N.V. 2002. NMR-based metabolomics. Drug Chem. Toxicol. 25(4):375-382.
Richard, A.M. 1998. Commercial toxicology prediction systems: A regulatory perspective. Toxicol. Lett. (102-103):611-616.
Robosky, L.C., D.G. Robertson, J.D. Baker, S. Rane, and M.D. Reily. 2002. In vivo toxicity screening programs using metabonomics. Comb. Chem. High Throughput Screen. 5(8):651-662.
Ruepp, S., R.P. Tonge, J. Shaw, N. Wallis, and F. Pognan. 2002. Genomics and proteomics analysis of acetaminophen toxicity in mouse liver. Toxicol. Sci. 65(1):135-150.
Russell, W.M.S., and R.L. Burch. 1992. The Principles of Humane Experimental Technique, Special Ed. Herts, England: Universities Federation for Animal Welfare. 238 pp.
Schadt, E.E., S.A. Monks, T.A. Drake, A.J. Lusis, N. Che, V. Colinayo, T.G. Ruff, S.B. Milligan, J.R. Lamb, G. Cavet, P.S. Linsley, M. Mao, R.B. Stoughton, and S.H. Friend. 2003. Genetics of gene expression surveyed in maize, mouse and man. Nature 422(6929):297-302.
Schork, N.J., J.P. Gardner, L. Zhang, D. Fallin, B. Thiel, H. Jakubowski, and A. Aviv. 2002. Genomic association/linkage of sodium lithium counter-transport in CEPH pedigrees. Hypertension 40(5):619-628.
Schrödinger. 2005. Product Information. Schrödinger, Portland, OR [online]. Available: http://www.schrodinger.com/index.html [accessed April 12, 2005].
Stephens, M.L. 1989. Replacing animal experiments. Pp. 144-168 in Animal Experimentation: The Consensus Changes, G. Langley, ed. New York: Chapman and Hall.
Stephens, M.L., A.M. Goldberg, and A.N. Rowan. 2001. The first forty years of the alternatives approach: Refining, reducing, and replacing the use of laboratory animals. Pp. 121-135 in The State of the Animals 2001, 1st Ed., D.J. Salem, and A.N. Rowan, eds. Washington, DC: The Humane Society Press.
Stephens, M.L., K. Conlee, G. Alvino, and A.N. Rowan. 2002. Possibilities for refinement and reduction: Future improvements within regulatory testing. ILAR J. 43(Suppl.):S74-S79.
Stokes, W.S. 2000. Introduction: Reducing unrelieved pain and distress in laboratory animals using humane endpoints. ILAR J. 41(2):59-61.
Sumanas, S., and S. Lin. 2004. Zebrafish as a model system for drug target screening and validation [review]. Drug Discov. Today Targets 3(3):89-96.
Terasaka, S., Y. Aita, A. Inoue, S. Hayashi, M. Nishigaki, K. Aoyagi, H. Sasaki, Y. Wada-Kiyama, Y. Sakuma, S. Akaba, J. Tanaka, H. Sone, J. Yonemoto, M. Tanji, and R. Kiyama. 2004. Using a customized DNA microarray for expression profiling of the estrogen-responsive genes to evaluate estrogen activity among natural estrogens and industrial chemicals. Environ. Health Perspect. 112(7):773-781.
Tripos Inc. 2005. Discovery Informatics Products. Tripos Inc., St. Louis, MO [online]. Available: http://www.tripos.com/ [accessed April 12, 2005].
Vaughan, S. 2004. Optimising resources by reduction: The FRAME Reduction Committee. ATLA 32(Suppl. 1):245-248.
Viant, M.R., E.S. Rosenblum, and R.S. Tjeerdema. 2003. NMR-based metabolomics: A powerful approach for characterizing the effects of environmental stressors on organism health. Environ. Sci. Technol. 37(21):4982-4989.
Watters, J.W., A. Kraja, M.A. Meucci, M.A. Province, and H.L. McLeod. 2004. Genome-wide discovery of loci influencing chemotherapy cytotoxicity. Proc. Natl. Acad. Sci. U.S.A. 101(32):11809-11814.
Wei, Y.D., K. Tepperman, M.Y. Huang, M.A. Sartor, and A. Puga. 2004. Chromium inhibits transcription from polycyclic aromatic hydrocarbon-inducible promoters by blocking the release of histone deacetylase and preventing the binding of p300 to chromatin. J. Biol. Chem. 279(6):4110-4119.
Williams, D.E., G.S. Bailey, A. Reddy, J.D. Hendricks, A. Oganesian, G.A. Orner, C.B. Pereira, and J.A. Swenberg. 2003. The rainbow trout (Oncorhynchus mykiss) tumor model: Recent applications in low-dose exposures to tumor initiators and promoters. Toxicol. Pathol. 31(Suppl.):58-61.
Worth, A.P., and M. Balls, eds. 2002. Alternative (Nonanimal) Methods for Chemicals Testing: Current Status and Future Prospects. ATLA 30(Suppl. 1).
Yan, H., W. Yuan, V.E. Velculescu, B. Vogelstein, and K.W. Kinzler. 2002. Allelic variation in human gene expression. Science 297(5584):1143.
Zheng, X.H., G.S. Watts, S. Vaught, and A.J. Gandolfi. 2003. Low-level arsenite induced gene expression in HEK293 cells. Toxicology 187(1):39-48.
Zhu, H., M. Bilgin, and M. Snyder. 2003. Proteomics. Annu. Rev. Biochem. 72:783-812.