3
Current Practices for Assessing Risk for Developmental Defects and Their Limitations

Since the mid-1900s, various governmental agencies in the United States have taken responsibility for protecting the health of the public by regulating safe usage practices for drugs, food additives, pesticides, and environmental and occupational chemicals (Gallo 1996; Omenn and Faustman 1997). In the 1970s, risk assessment began as an organized activity of federal agencies to set acceptable exposure levels or tolerance levels. Earlier, the American Conference of Governmental Industrial Hygienists had set threshold limit values for workers, and the U.S. Food and Drug Administration (FDA) had established acceptable daily intakes for dietary pesticide residues and food additives. In 1983, the National Research Council published a report entitled Risk Assessment in the Federal Government: Managing the Process (often referred to as the “Red Book”), which provided a common framework for risk assessment (NRC 1983). In 1991, the U.S. Environmental Protection Agency (EPA) published risk assessment guidelines specific for developmental toxicity (EPA 1991).

In this chapter, the committee highlights risk assessment practices as they relate to the evaluation of chemicals as potential developmental toxicants and identifies limitations in the current risk assessment approaches.

THE DEVELOPMENTAL TOXICITY RISK ASSESSMENT PROCESS

“Human health risk assessment” refers to the process of systematically characterizing potential adverse health effects in humans that result from exposure to chemicals and physical agents (NRC 1983). For developmental toxicity, this assessment means evaluating the potential for chemical exposure to cause any of four types of adverse developmental end points: growth retardation; gross, skeletal, or visceral malformations; adverse functional outcomes; and lethality. Developmental toxicity risk assessment includes evaluating all available experimental animal and human toxicity data and the dose, route, duration, and timing of exposure to determine if an agent causes developmental toxicity (EPA 1991; Moore et al. 1995).

As discussed in the “Red Book,” risk management, in contrast to risk assessment, is the application of risk assessment information in policy and decision-making processes to balance risks and benefits (e.g., for therapeutic applications); set target levels of acceptable risk (e.g., for food contaminants and water pollutants); set priorities for the program activities of regulatory agencies, manufacturers, and environmental and consumer organizations; and estimate residual risks after a risk-reduction effort has been taken (e.g., folic acid supplementation in food).

Figure 3-1 shows the NRC paradigm for risk assessment and risk management. As shown in this figure, risk characterization refers to the synthesis of qualitative and quantitative information for both toxicity and exposure assessments (EPA 1995). It also usually includes a discussion of the uncertainties in the analysis.

FIGURE 3-1 Risk assessment and risk management paradigm from the NRC modified for developmental toxicity risk assessments. In accordance with this committee’s deliberations, the research section now includes a two-way arrow and specifically highlights emerging research on gene-environment interactions and developmental cell-signaling pathways. The iterative feedback loop between research and risk assessment is necessary to translate new findings in biology into scientifically based risk assessments. Source: Adapted from NRC 1983.

The following sections describe some of the specific approaches used for toxicity assessment. Four types of informational methods that can be used for these assessments are chemical structure-activity information, in vitro assessments, in vivo animal bioassays, and epidemiological studies. Two additional steps in risk assessment, dose-response assessment and exposure assessment, are also described. Finally, the use of toxicokinetic information and biomarkers in developmental toxicity risk assessment is discussed.

Chemical Structure-Activity Information

Information on a chemical’s structure, stability, solubility, reactivity, and electrophilicity can provide useful clues to its potential to be absorbed and distributed throughout the body and to react with biological tissues. In fact, despite early concepts of a true placental barrier, it is now appreciated that all lipid-soluble compounds have access to the developing cells of an embryo and fetus. Lipid solubility and characteristics such as molecular size and pKa can be used to predict the potential for chemicals to cross the placenta and reach conceptus tissues (Slikker and Miller 1994).

Structure-activity relationships (SARs) are developed to show the relationship between the specific chemical structures or moieties of agents and their capacity to produce certain toxic effects. For glycol ethers, retinoic acid, valproic acid, and their derivatives, and for several other commercial products and therapeutics, good SAR data exist for developmental effects. Recently, SARs were reported for valproic acid derivatives that activate the peroxisomal proliferation pathway and cause developmental toxicity (Lampen et al. 1999). Early research on receptor binding identified SARs for environmental agents such as benzo[a]pyrene and dioxin.
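The pKa-based reasoning above can be made concrete with the Henderson-Hasselbalch relationship, which gives the un-ionized (more lipid-soluble) fraction of a weak acid or base at physiological pH; the un-ionized species is the one expected to cross the placenta by passive diffusion. This is a minimal sketch, not a validated placental-transfer model; the function name and the example pKa values are illustrative assumptions, not values from the report.

```python
def unionized_fraction(pka: float, ph: float = 7.4, acid: bool = True) -> float:
    """Henderson-Hasselbalch estimate of the un-ionized fraction of a weak
    acid or base at a given pH. Un-ionized (lipid-soluble) molecules are
    the ones expected to cross the placenta by passive diffusion."""
    if acid:
        # For a weak acid: ionized/un-ionized = 10**(pH - pKa)
        return 1.0 / (1.0 + 10.0 ** (ph - pka))
    # For a weak base: ionized/un-ionized = 10**(pKa - pH)
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Hypothetical weak acid with pKa 4.8 at maternal plasma pH 7.4:
# almost fully ionized, so passive placental transfer is limited.
f = unionized_fraction(4.8, ph=7.4)
```

Under these assumptions, a weak acid with pKa 4.8 is more than 99% ionized at plasma pH, illustrating why pKa, alongside lipid solubility and molecular size, shapes the prediction of placental transfer.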
Toxicity equivalency factors (TEFs) have been developed that relate the toxicity of each compound in a class to that of a reference compound, such as benzo[a]pyrene (B[a]P) for pyrenes or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) for dioxins (Van den Berg et al. 1998). Complexities arise when different toxicity end points have different SARs. To be useful for developmental toxicity risk assessments, SARs (and TEFs) must be evaluated for each of the end points of developmental toxicity.

In Vitro Assessments

Alternatives to pregnant experimental mammals in studies of developmental toxicology have often been grouped together as in vitro approaches, but that label is misleading, because the alternatives include not only ex vivo mammalian embryos, tissues, cells, and subcellular preparations but also embryos of nonmammalian species. Broadly, such alternatives have had two applications: to test chemicals for potential effects and to analyze mechanisms of effect.

Mechanistic uses of ex vivo methods have much in common with investigative studies in other areas of biology. They have made major contributions to understanding developmental toxicity because of the manipulations possible in vitro, including the removal of the maternal environment, the ablation or transplantation of tissues and cells, the labeling and tracking of cells or molecules, biochemical and gene manipulations by the use of inhibitors and antisense RNA, and real-time physiological monitoring of the embryo. The types of information generated include the identification of proximate developmental toxicants, exact tissue sites of accumulation, initial biochemical insults, gene-expression changes, intrinsic SARs (of the parent compound), and disrupted developmental pathways.

The search for alternatives for testing purposes is driven by the need to assess a larger number of chemicals than available resources allow with in vivo methods and by the desire to reduce or replace the use of experimental mammals. Two levels of testing should be distinguished: secondary and primary. Secondary testing is the assessment of chemicals that have some known potential developmental toxicity. Most commonly, secondary testing involves analogs of prototype chemicals that have known in vivo developmental toxicity. The objective is to replicate the observed developmental toxicity in a simple system. The approach has been successful, especially for pharmaceuticals and particularly with the use of isolated mammalian embryos and embryonic cells in culture; for example, it has been used for testing retinoids (Kistler and Howard 1990) and triazoles (Flint and Boyle 1985). For that type of use, a universal validation of the method is not required. It is sufficient to show that the method replicates a specific in vivo effect for the particular chemicals under study. Primary testing, in contrast, is the testing of chemicals that have no known potential toxicity, the aim being to predict in vivo actions.
There must be confidence that the test outcome will accurately classify most chemicals by their potential to cause human developmental toxicity. Furthermore, the required sensitivity and selectivity will vary, depending on the purpose of the test. Sensitivity is the proportion of in vivo toxicants that are positive in the test, and selectivity is the proportion of inactive chemicals that are negative in the test. In some contexts, for example, in drug discovery by combinatorial chemistry, the aim is the early elimination of potential toxicants. False-positive results are not problematic there, because there are many other chemicals from which to choose. Conversely, if the context is hazard identification and the aim is to set priorities for further in vivo testing, then a high rate of false-positive results would be inappropriate. Thus, there is a drive to validate tests for screening purposes by measuring their sensitivity and selectivity (Lave and Omenn 1986). Regardless of the testing-related problems discussed below, it is worth bearing in mind that some countries have already banned the use of mammals for testing in certain situations, so there is an obligation to continue to refine in vitro approaches.

Alternative testing for developmental toxicity has a long history, encompassing regular international conferences (Ebert and Marois 1976; Kimmel et al. 1982; Schwetz 1993), comprehensive reviews (Brown and Freeman 1984; Faustman 1988; Welsch 1992; Brown et al. 1995), and much debate in print (Mirkes 1996; Daston 1996). Alternative tests for developmental toxicity are not currently used by any regulatory agency.

Intrinsic Limitations

Alternatives to in vivo testing will never detect all the developmental toxicants that act in pregnant mammals, for several reasons. First, some toxicants initiate their effects outside the embryo, in the maternal or placental compartments. Second, some effects are mediated by physiological changes represented only within the intact embryo (e.g., peripheral vascular perfusion). Third, known mechanisms of developmental toxicity are diverse, so it is unlikely that all targets will be present in a simple system. Fourth, some adverse outcomes are observable only as functional impairment postnatally. Finally, most alternative systems are static and have neither the dynamic changes in concentration associated with physiological disposition in vivo nor the metabolic transformation of the test agent.

Validation

Validation is complex and includes protocol standardization, interlaboratory consistency, and statistical prediction models, but the fundamental question remains: how well does the system mimic the susceptibility of human development? This question has yet to be answered for any system, and a number of problems are discussed below.

In Vitro Test for What Outcome? The type of adverse outcome induced by a chemical in vivo often varies between individuals, across species, and sometimes with routes or schedules of administration. Thus, although the initial aim of alternative tests was to predict the overall induction of congenital malformations, it is more appropriate to consider that in vitro tests can help to predict specified developmental toxicity and to identify potential mechanisms of disruption of particular cell-signaling and genetic regulatory pathways.

General Versus Specific Toxicity. Presumably, all chemicals would disturb development if a high enough concentration were delivered to the embryo, even though such concentrations might be unattainable in mammals because of maternal toxicity. However, chemicals vary widely in their intrinsic hazard to development. For example, high-affinity ligands for some nuclear-hormone receptors cause irreversible developmental defects (see Chapter 4). It would be helpful to be able to discriminate such chemicals from those that affect development (D) only at exposures that are simultaneously toxic to the adult (A). The A/D ratio (the ratio of the adult toxic dose to the developmentally toxic dose) attempts to measure that specificity. However, use of that value has been tempered by the demonstration that A/D ratios are not necessarily consistent across species (Daston et al. 1991).

Which In Vivo Database? The database on humans is probably too heterogeneous to use for validation studies. For example, it is biased toward pharmaceuticals, and the exposure range is too small for most chemicals, so chemicals with reliable negative results are hard to identify. By default, then, comparisons have been made with experimental mammal testing data. The information in this database is also heterogeneous in exposure times, routes, and doses; species; end points; and adverse outcomes. To avoid some of these problems but retain the use of existing data, perhaps the only option is to use data exclusively from orthodox segment II type tests, in which animals are exposed during the period of major organogenesis. This approach eliminates many of the chemicals historically used in validation studies, because they have never been formally tested in vivo.

Chemicals for Validation. Much effort has been expended on the analysis of in vivo animal test data to produce a list of chemicals for use in validation studies. A prototype list produced by Smith et al. (1983) was subsequently found to be inadequate, and an expert committee was set up to address that issue (Schwetz 1993). Because of the difficulty of the task, that committee was not able to complete it. There has been considerable disagreement over what is and what is not developmentally toxic in vivo and over the severity of that action. There is currently no consensus on how to categorize, stratify, or quantify the developmental toxicity of chemicals. Most validation studies have used a binary classification: developmental toxicants or nontoxicants (Parsons et al. 1990; Uphill et al. 1990). This is a gross oversimplification of the richness of the information available.
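Validation against such a binary classification reduces to a two-by-two confusion matrix, from which the sensitivity and selectivity defined earlier can be computed directly. A brief sketch; the counts are invented for illustration and do not come from any actual validation study.

```python
def sensitivity_selectivity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity: fraction of in vivo developmental toxicants scored
    positive by the alternative test. Selectivity: fraction of inactive
    chemicals scored negative (also called specificity)."""
    sensitivity = tp / (tp + fn)
    selectivity = tn / (tn + fp)
    return sensitivity, selectivity

# Hypothetical validation set: 20 known in vivo toxicants (16 detected,
# 4 missed) and 20 inactive chemicals (13 correctly negative, 7 false
# positives).
sens, sel = sensitivity_selectivity(tp=16, fn=4, tn=13, fp=7)
```

As the text notes, the acceptable balance depends on purpose: a combinatorial-chemistry screen tolerates low selectivity (false positives are cheap), whereas a priority-setting screen for in vivo follow-up does not.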
More recently, chemicals have often been grouped informally into three or four categories: (1) toxic to development in all species, no maternal toxicity; (2) toxic to development in some species, no maternal toxicity; (3) toxic to development in some species, some maternal toxicity; and (4) no evidence of developmental toxicity in any species tested. However, without formal definition of the categories and consistent in vivo testing, there is disagreement in assigning chemicals to such groups (Wise et al. 1990; Daston et al. 1995; Newall and Beedles 1996; Spielmann et al. 1997).

Many validation studies have been biased by the inherent inequality in toxicity of the chemicals selected (Brown 1987). It has been common to select chemicals of potent and general biological activity, such as antimetabolites, nucleotide or nucleoside analogs, and alkylating agents, as the developmental toxicants. In contrast, the chosen nondevelopmental toxicants have frequently been endogenous intermediary biochemicals, such as acetate, glutamate, and lysine, or chemicals specifically designed to be nontoxic to mammalian cells, such as antibiotics, saccharin, and cyclamate. It comes as no surprise that developmental models respond differently to two such disparate groups. The proper strategy would be to select chemicals that are largely similar in their general toxicity and potency but different in their specific developmental hazard (Schwetz and Harris 1993). This has never been achieved.

An additional problem in categorizing chemicals, even those tested according to standard protocols, is that toxicokinetics and metabolism are rarely investigated sufficiently to indicate whether a negative outcome in vivo reflects a true lack of inherent developmental toxicity potential or a low embryonic exposure. This can lead to a situation in which a chemical is correctly identified as a potential developmental toxicant in an in vitro test, but the effective exposure can never be achieved in vivo.

Existing and Extinct Alternative Tests for Primary Screening

Because there are no known common mechanisms of developmental toxicity on which to base the design of a primary screening test, three other approaches have been taken: the use of (1) mammalian embryos or parts of embryos in culture, (2) free-living nonmammalian embryos, and (3) cell cultures in which processes thought to be required for normal development are assayed (e.g., proliferation, adhesion, communication, and differentiation). More than 30 test systems have been devised and preliminarily assessed (see Table 3-1). All the test systems that use embryos monitor gross morphological end points. Few tests are actively used for screening purposes (Brown et al. 1995). Rodent embryo culture, micromass, and stem-cell assays are currently being validated in a European Union-sponsored trial (Spielmann et al. 1998). The validation of the frog embryo teratogenesis assay in Xenopus (FETAX) is being reviewed by the U.S. National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NIEHS 1997; Fort et al. 1998).
Rather than having been eliminated by objective criteria, most other systems were simply not adopted by scientists and were not pursued by their originators. For example, there have been no studies comparing several systems for relative performance or using more sophisticated molecular end points. A few systems have been eliminated by poor performance. The mouse ovarian tumor (MOT) cell-attachment method and the human embryonic palatal mesenchymal (HEPM) cell-proliferation method were simultaneously assessed by the U.S. National Toxicology Program (Steele et al. 1988) and shown to have a combined specificity of only 50%. The hydra assay is novel in having been designed specifically to estimate the A/D ratio. Although transiently popular, its usage diminished with the demonstration that the A/D ratio is not consistent across species (Daston et al. 1991) and with other concerns about comparability with mammalian responses.

TABLE 3-1 Systems Proposed as Alternatives to Pregnant Mammals to Test for Developmental Toxicity(a)

System | End Point Monitored | References

Mammalian embryos ex utero
  Rodent whole-embryo culture | Morphogenesis | Webster et al. 1997
  Rodent embryonic organ culture: limb; palate | Morphogenesis | Kochhar 1982; Abbott et al. 1992
Sub-mammalian embryos
  Aves: Chick embryotoxicity test (CHEST) | Morphogenesis | Peterka and Pexieder 1994
  Amphibia: Frog embryo test (FETAX) | Morphogenesis | Fort et al. 1998
  Fish: Medaka, zebrafish | Morphogenesis | Herrmann 1993
  Arthropods: Cricket, artemia, Drosophila | Morphogenesis | Walton 1983; Sleet and Brendel 1985; Lynch et al. 1991
  Flatworms: Planaria | Morphogenesis | Best and Morita 1991
  Echinoderms: Sea urchin | Morphogenesis | Graillet et al. 1993
  Coelenterates: Hydra | Regeneration | Johnson et al. 1988
  Protista: Slime mold | Morphogenesis | Tillner et al. 1996
Cell cultures—primary
  Micromass (limb or mid-brain, rodent or chick embryo) | Differentiation | Flint 1993
  Human amniotic and chorionic villus | Stress response | Honda et al. 1991
  Human placental explants | Proliferation, differentiation | Genbacev et al. 1993
  Drosophila embryo | Differentiation | Bournias-Vardiabasis 1990
  Chick embryo: neural retina, neural crest, brain | Differentiation | Reinhardt 1993
Cell cultures—established lines
  Mouse ovarian tumor | Attachment | Braun et al. 1982
  Human embryo palatal mesenchyme | Proliferation | Zhao et al. 1995
  Neuroblastoma | Differentiation | Kisaalita 1997
  V79/HEPM | Communication | Toraason et al. 1992
  Pox virus in infected cells | Viral replication | Keller and Smith 1982
  HeLa | Proliferation | Freese 1982
  EC cells | Differentiation | Hooghe and Ooms 1995
  ES lines | Differentiation | Spielmann et al. 1997

(a) The reference given for each system is the most recent available, or one that will lead into the appropriate literature. Italicized systems are those still in active use in 1998.

New Receptor-Based Tests

Endocrine disruptions by chemicals are beyond the scope of this report but are relevant in terms of the overlap in the receptors involved and in the in vitro approaches being pursued. Interference with estrogen, androgen, glucocorticoid, or thyroxine receptor function can result in developmental and endocrine toxicities. Major efforts are under way to devise screening methods to assess interference with those receptors (EPA 1998b). The task is complex, because chemicals can be agonists, partial agonists, antagonists, or negative antagonists (Limbird and Taylor 1998) or can interact with other steps in the pathway. Caution is therefore needed in extrapolating from simple tests. Nevertheless, a variety of tests have been devised to assess receptor binding, activation of response elements, and cellular responses (e.g., proliferation). Similar approaches could be devised for other signaling-pathway receptors involved in developmental toxicity.

Animal Bioassays

In vivo animal bioassays are a critical component of human health risk assessment. A basic underlying premise of risk assessment is that mammalian bioassays are predictive of potential adverse human health effects. This assumption, together with the assumption that humans are the most sensitive mammalian species, has served as the basis for human health risk assessment. Several study protocols to test for developmental toxicity in animals are accepted and used by regulatory agencies such as EPA and FDA. Describing the various protocols goes beyond the scope of this report, and the reader is referred to the original guidelines (EPA 1991, 1996a, 1998c,d,e; FDA 1994; OECD 1998) for detailed descriptions. T.F. Collins et al. (1998) provide a discussion and comparison of the EPA, FDA, and OECD guidelines. Information obtained from in vivo bioassays includes the identification of potentially sensitive target organ systems; maternal toxicity; embryonic and fetal lethality; specific types of malformations, including gross, visceral, and skeletal malformations; and altered birth weight and growth retardation.
These assays can also provide information on reproductive effects, multigenerational effects, and prenatal and postnatal function. In vivo bioassays determine the critical effects that are used for quantitative assessments by taking the no-observed-adverse-effect level (NOAEL) for the most sensitive effect.

The focus of animal bioassays has primarily been toxicity assessment, including hazard identification and dose-response assessment. The aim of such studies is to identify qualitatively what spectrum of effects a test chemical can produce and to put those effects in the context of dose-response relationships. Because there is uncertainty in extrapolation from animal studies to humans, several assumptions are made, including the following: (1) an agent that causes an adverse developmental effect in experimental animals might cause an effect in humans; (2) all end points of developmental toxicity (i.e., death, structural abnormalities, growth alterations, and functional deficits) are of potential concern; and (3) specific types of developmental effects observed in experimental animals might not be manifested in exactly the same manner in humans.
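Although the derivation is not shown in this excerpt, NOAEL-based quantitative assessment in regulatory practice typically divides the NOAEL for the most sensitive effect by uncertainty factors (commonly 10 for animal-to-human extrapolation and 10 for human variability) to obtain a reference dose. The sketch below assumes that standard scheme; the NOAEL value and factor choices are hypothetical, not values from the report.

```python
def reference_dose(noael: float, uf_interspecies: float = 10.0,
                   uf_intraspecies: float = 10.0, uf_other: float = 1.0) -> float:
    """Derive a reference dose (mg/kg-day) from a NOAEL (mg/kg-day) by
    dividing by uncertainty factors for animal-to-human extrapolation and
    human variability; any additional factor (e.g., for database
    deficiencies) goes in uf_other. Defaults are illustrative, not policy."""
    return noael / (uf_interspecies * uf_intraspecies * uf_other)

# Hypothetical rodent developmental NOAEL of 25 mg/kg-day with the two
# default 10-fold factors: 25 / 100 = 0.25 mg/kg-day.
rfd = reference_dose(25.0)
```

The design choice worth noting is that each uncertainty factor is an explicit, auditable multiplier, so a risk assessor can document why a given composite factor (100, 1,000, and so on) was applied.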

Much of the literature before 1975 concerning studies of in utero-induced adverse developmental outcomes is troubled by small sample sizes, inappropriate routes and modes of exposure, inconsistent methodology, and excessively high dose or concentration exposures. Many of those deficiencies have been corrected by the regulatory mandate of adhering to Good Laboratory Practices (OECD 1987; FDA 1987; EPA 1990).

Several studies of the concordance between perturbed developmental outcomes in experimental animal studies and the human clinical experience have been made (Nisbet and Karch 1983; Kimmel et al. 1984; Francis et al. 1990; Hemminki and Vineis 1985; Newman et al. 1993). The most rigorous and earliest of those was done in the early 1980s and is contained in a technical report for the National Center for Toxicological Research (NCTR) (C.A. Kimmel, EPA, unpublished report, 1984).[1] In general, these studies concluded that there is concordance of developmental effects between animals and humans and that humans are as sensitive as or more sensitive than the most sensitive animal species. The NCTR study was notable because it employed criteria of acceptance for both human and experimental animal reports that included study design and statistical power considerations. Additionally, the authors held to the premise that adverse developmental effects represent a continuum of responses—or at least a number of interrelated effects—including in utero growth retardation, death of the products of conception, frank malformations, and functional deficits that manifest themselves at later stages in life. Hence, an effect on any one of these end points in experimental animals or human studies was considered a basis for concordance. Concordance did not require an exact mimicry of response among species, because exposure conditions (e.g., timing and duration of exposure and toxicokinetic differences) and tissue sensitivity (e.g., toxicodynamic differences) could differ enough between experimental animals and humans to result in a different type of effect.

[1] Dr. Kimmel presented this information to the committee during its meeting on October 6, 1997, in Washington, D.C.

Many different agents—mostly chemical agents but also physical agents—have been evaluated to determine their capacity to produce developmental toxicity in experimental animal models, such as the rat, mouse, and rabbit. Most of those studies have been conducted by private industry and federal government-funded research programs and involved test agents that had not yet entered the market. Schwetz and Harris (1993) provide a good review of 50 chemicals that the National Toxicology Program has evaluated for developmental toxicity using rodent bioassays.

As discussed in Chapter 2, humans were never exposed to many of the materials that have been evaluated in rodent bioassays and shown to affect animal prenatal development adversely. Thus, it will never be known whether comparable adverse effects would have been caused had similar human exposure occurred. Summary compilations of published data covering more than 4,000 different exposure conditions indicate that more than 1,200 agents, predominantly chemicals, have produced adverse developmental outcomes by the end-point criteria stated above, often including congenital anomalies, in one or more species of experimental animals (Shepard 1998). Among this large number are about 50 agents (almost exclusively drugs) that are known to cause adverse developmental effects in human beings. For most of the agents evaluated for developmental hazard potential in experimental animals, human exposures will never occur. Thus, public health was protected, but the concordance of animal and human responses could not be ascertained for those agents. When exposures do occur, human assessments have rarely been sufficient for definitive evaluation and establishment of cause-and-effect associations. Because of the background incidence of human developmental abnormalities (addressed in Chapter 2) and the difficulties in conducting epidemiology studies, such associations are extremely difficult to establish unless the outcome is unusual and striking, as was the case with thalidomide.

Among the industrial chemicals and environmental contaminants that have been studied in pregnant animal models, the estimated maximum tolerated dose (MTD) was often given repeatedly in conformance with the testing guidelines. In many instances, regulatory authorities internationally require that the MTD, even up to maternally toxic concentrations, be administered to demonstrate that no developmental toxicity occurs. The underlying principle is that, if regulatory standards are set to protect against maternal toxicity, no adverse effects will occur in offspring.
Unfortunately, all too frequently the focus of developmental toxicity testing has been to study the effects of an agent only at high doses that are most likely irrelevant to environmental and occupational exposures. For industrial and environmental chemicals, the dosing regimens at or even above MTDs, as applied in hazard identification studies, typically contrast sharply with anticipated human exposures, which are commonly much lower in extent or magnitude, often uncertain, or even entirely unknown. Because of the design of developmental hazard identification studies, the overwhelming majority of the more than 1,200 agents found to elicit adverse developmental outcomes in experimental animals were tested at doses many times higher than anticipated human exposures during pregnancy and have often elicited extreme maternal toxicity. Furthermore, exposure of the pregnant animals was sustained throughout all of organogenesis by daily repeated administrations, and little or no regard was given to toxicokinetic considerations (see the toxicokinetics section of this chapter for details). Therefore, there are problems associated with the application of these assays for assessing human developmental toxicity potential. Repeated administration of an MTD might produce adverse results that are not indicative of risk from ambient exposure concentrations or intermittent exposures. It is a continuing […]mental toxicity and for assessing the validity of negative in vivo studies for compounds otherwise suspected to have high potential for developmental toxicity.

FIGURE 3-3 Two chemicals with different toxicokinetic properties are schematically illustrated. The concentration in maternal plasma of Chemical 1 (solid line) rises rapidly to reach its maximum (Cmax). Chemical 1 is then eliminated from the blood-plasma compartment in less than 2 hr after administration. In contrast, the plasma concentration of Chemical 2 (dashed line) rises more gradually and is slowly cleared from maternal plasma in an apparently biphasic fashion. The area under the curve (AUC) is defined by the plot of concentration of the chemical against time after administration.

One chemical for which toxicokinetic data have been collected from maternal and conceptus compartments at two stages of pregnancy is 2-methoxyacetic acid, the proximate developmental toxicant derived from the maternal oxidation of 2-methoxyethanol, an ethylene glycol ether used as an industrial solvent. This chemical produces gross malformations in several test animal species examined, including nonhuman primates. Depending on the developmental age of an embryo at the time of exposure to sufficiently high concentrations of 2-methoxyacetic acid, the target tissues are either the developing anterior neuropore or the differentiating paw skeleton of the limbs, and exposure causes exencephaly or digit malformations, respectively (Terry et al. 1994). In the case of 2-methoxyethanol, the maternal plasma AUC of 2-methoxyacetic acid was highly indicative of that in the embryo and might serve as a surrogate for separate conceptus toxicokinetic measurements (Welsch et al. 1995, 1996).

Toxicokinetic information could be helpful in judging the extent of the hazard to humans from exposures if human kinetics are known (Yacobi et al. 1993). For example, the anticonvulsant drug valproic acid given to pregnant mice induces exencephaly in their embryos when a certain maternal plasma threshold concentration is surpassed for even a very brief duration (Nau 1986). Larger total exposure over time (larger AUC) achieved by constant maternal drug infusion causes a dramatically lower incidence of exencephaly, indicating that the peak concentration (Cmax), rather than total exposure over time (AUC), induces the teratogenic response in mice. In contrast, clinical use of valproic acid for antiepileptic therapy requires the maintenance of valproic acid concentrations in an effective therapeutic range at which the required human doses produce serum Cmax values that are 6-10-fold lower than the teratogenic concentrations in mice (Nau 1986). A similar inference regarding Cmax as a cause of embryotoxic effects was made for caffeine in mice. A large single dose (100 mg/kg) induced a teratogenic response, whereas the same total amount divided into four separate administrations did not cause any malformations (Sullivan et al. 1987). The embryotoxicity of other agents appears to depend on the total exposure over time (AUC). For example, the developmental toxicity of all-trans retinoic acid and cyclophosphamide (a chemotherapeutic alkylating agent) in the rat correlates best with duration of exposure (Tzimas et al. 1997; Reiners et al. 1987).
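The Cmax and AUC measures discussed above can be made concrete with a short calculation. The sketch below uses hypothetical concentration-time samples (the numbers do not come from any study cited here) and the trapezoidal rule, a standard way to approximate AUC from sampled plasma concentrations:

```python
# Illustrative only: the concentration-time samples are hypothetical and
# loosely mimic the two schematic chemicals of Figure 3-3.

def cmax(times, concs):
    """Peak plasma concentration observed across the sampled time points."""
    return max(concs)

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum(
        (t1 - t0) * (c0 + c1) / 2.0
        for (t0, c0), (t1, c1) in zip(zip(times, concs), zip(times[1:], concs[1:]))
    )

# "Chemical 1": absorbed and cleared quickly -> high Cmax, modest AUC.
t1 = [0, 0.25, 0.5, 1.0, 2.0]        # hours after dosing
c1 = [0.0, 40.0, 25.0, 8.0, 0.0]     # mg/L in maternal plasma

# "Chemical 2": absorbed and cleared slowly -> lower Cmax, larger AUC.
t2 = [0, 1, 2, 4, 8, 12]
c2 = [0.0, 10.0, 14.0, 12.0, 6.0, 2.0]

print(cmax(t1, c1), auc_trapezoid(t1, c1))   # 40.0 25.375
print(cmax(t2, c2), auc_trapezoid(t2, c2))   # 14.0 95.0
```

The rapidly cleared chemical has the higher Cmax while the slowly cleared chemical accumulates the larger AUC, which is why the two measures can rank the same pair of exposures in opposite order, exactly the distinction that matters for valproic acid versus retinoic acid above.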
Caution in the interpretation of maternal AUC information without concomitant conceptus toxicokinetics is necessary because a single agent might act through both toxicokinetic exposure patterns, depending on the stage of development. 2-Methoxyacetic acid, for example, induces mouse digit malformations that correlate best with maternal and conceptus AUC (Clarke et al. 1992, 1993; Welsch et al. 1995, 1996). However, additional toxicokinetic data from both the maternal and the conceptus compartments at an earlier stage of mouse embryogenesis indicate that the agent induces neural tube defects that correlate best with Cmax in the conceptus tissues (Terry et al. 1994; Welsch et al. 1996). What is still lacking in these data is information on the toxicodynamic interaction of 2-methoxyacetic acid with a specific and still unknown recognition site (receptor) in the conceptus.

Considering both AUC and Cmax measurements is especially important for developmental toxicity risk assessment because of known temporal differences in tissue susceptibility. In cancer risk assessment, Haber's law (the product of concentration and time equals a constant) is used to normalize risk impacts. Such generalizing concepts cannot be applied in developmental toxicity risk assessment. A recent study by Weller et al. (1999) illustrated these differences for ethylene oxide developmental toxicity.
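Haber's law, and its failure for peak-driven developmental toxicants, can be illustrated with trivial arithmetic. The exposure numbers and the peak threshold below are hypothetical, chosen only to mirror the valproic acid example above:

```python
# Illustrative arithmetic only; all exposure values are hypothetical.

def haber_product(conc, hours):
    """Haber's law assumes a constant effect whenever C x t is constant."""
    return conc * hours

# Two regimens with identical C x t products:
acute   = haber_product(100.0, 1.0)   # brief, high-peak exposure
chronic = haber_product(12.5, 8.0)    # prolonged, low-level exposure
print(acute, chronic)                 # 100.0 100.0

# For a Cmax-driven teratogen, however, only the regimen that exceeds a
# (hypothetical) peak-concentration threshold matters, so the two
# "equivalent" regimens carry very different developmental risk.
threshold_cmax = 50.0
print(100.0 > threshold_cmax)   # True: acute peak crosses the threshold
print(12.5 > threshold_cmax)    # False: chronic peak never does
```

Haber's law treats the two regimens as interchangeable; a Cmax-driven mechanism does not, which is the committee's point about why such normalizations cannot be carried over from cancer risk assessment.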

The toxicokinetic patterns that have been important in discriminating developmental toxicity are described here in terms of AUC and Cmax, and not in terms of metabolite profile (i.e., the qualitative similarities between a parent compound and its metabolites). Species are known to differ in the rates at which they absorb, distribute, and excrete compounds (i.e., the metabolic rate manifested in AUC and Cmax). Pharmaceutical studies have demonstrated that metabolite profiles between species are often similar (Nau et al. 1994), and this similarity is one of the reasons that it is common practice to use various animal models to assess the potential toxicity of chemicals. The committee will later propose that human DME genes be introduced into model animals to further reduce differences in metabolism. These transgenic animals are likely to have metabolite profiles similar to those of humans but will differ considerably from humans in metabolic rate.

In summary, the correct application of toxicokinetic information in the determination of hazard and in judgments concerning risk characterization requires a broad view of pharmacological, toxicological, and embryological principles. These principles have guided the committee in its considerations of how most effectively to incorporate recent advances in molecular and developmental biology into risk assessment.

BIOMARKERS

As the committee has outlined in the previous sections of this chapter, key challenges facing risk assessors include the need to understand critical initial events caused by toxicants (events that occur at low doses and early stages of toxicity) and to understand the implications of animal toxicity for human health. Ideally, appropriate biomarkers could serve as indicators to link exposure and early biological effects, and ultimately to link those early effects with disease or pathogenesis.
As numerous NRC reports have indicated, biomarkers of exposure, effects, and susceptibility are exactly the types of indicators that are needed to address these risk assessment challenges. Biomarkers for developmental toxicity were reviewed in the context of reproductive toxicology in a previous NRC (1989) report, Biologic Markers in Reproductive Toxicology. Three types of biomarkers have been defined (NRC 1989):

A biologic marker of exposure is an exogenous substance or its metabolite(s) or the product of an interaction between a foreign chemical and some target molecule or cell. The biomarker is measured in a compartment within an organism.

A biologic marker of effect represents a measurable biochemical, physiological, or other alteration within an organism that, depending on magnitude, can be recognized as causing an established or potential health impairment or disease.

A biologic marker of susceptibility is an indicator of an inherent or acquired limitation of an organism's ability to respond to the challenge of exposure to a specific foreign chemical substance.

It is easy to see from those definitions that biomarkers of exposure and effect should be useful for linking early, low-dose exposures with pathogenesis and for providing a platform for cross-species and cross-compound comparisons. Likewise, biomarkers of susceptibility could be especially useful for assessing differences in the temporal sensitivity of developing tissues and for cross-species and intraspecies comparisons. The validity of a biomarker for risk assessment depends on a demonstration that it is highly associated with the outcome, in this context a developmental defect. At present, few biomarkers meet that test. When evaluating biomarkers, one must investigate the mechanistic basis of the association between the biomarker and the adverse events and then determine the reliability of the comparison in a large and varied population for specificity, sensitivity, and reproducibility. A biomarker does not have to be the definitive end point for defining the problem, although that is preferable; even a tool to identify candidate individuals for more definitive testing can be helpful. For example, maternal serum α-fetoprotein levels are useful in clinical screens for neural tube defects, but serum α-fetoprotein is not a definitive test for such defects.

Temporal considerations are important in using biomarkers for developmental toxicity. When considering the validity of a screening test, the gestational age at the time of assessment and, more important, the gestational age at the time of exposure to the toxicant must be considered. Such issues have growing importance as fetal therapeutic interventions become increasingly available (Miller 1991).
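The screening (rather than diagnostic) role described above follows from simple test arithmetic. The sketch below uses hypothetical sensitivity, specificity, and prevalence values (they are not measured α-fetoprotein performance figures) to show why even a good screen for a rare defect mostly flags unaffected pregnancies and must be followed by definitive testing:

```python
# Hypothetical screening-test numbers; not actual assay performance data.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule: P(affected | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A screen with assumed 85% sensitivity and 95% specificity, applied to a
# defect with an assumed prevalence of 1 in 1,000 pregnancies:
print(round(ppv(0.85, 0.95, 0.001), 4))   # 0.0167

# The same test applied where the defect is 10x more common:
print(round(ppv(0.85, 0.95, 0.01), 4))
```

With these assumed numbers, fewer than 2% of screen-positive pregnancies are actually affected, so a positive screen identifies candidates for more definitive testing rather than serving as a diagnosis.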
Accessibility to the biological material of interest is temporally determined. For example, invasive (e.g., percutaneous umbilical blood sampling, PUBS) and noninvasive (e.g., ultrasound and Doppler) biomarker procedures for assessing the developmental state of the fetus have made possible interventions that have revolutionized the clinical capabilities to treat the affected fetus. At the same time, physicians can now predict, based upon patterns of uterine blood flow, which pregnancies have a greater risk for a poor reproductive outcome (Jaffe 1998).

Biomarkers of Exposure

The pre-eminent example in this category is methylmercury (MeHg). Maternal hair concentrations, as well as blood concentrations, of MeHg correlate with adverse developmental outcomes in children exposed in utero (Clarkson 1987). Different threshold exposures for detrimental effects have been observed in the adult and the fetus, based upon hair analyses for MeHg. In fact, temporal records of MeHg exposure can be reconstructed by measuring MeHg levels at various places along the human hair shaft.

Unfortunately, not all substances are comparable to MeHg in lending themselves to use as exposure biomarkers. For example, a debate continues concerning the dose of vitamin A, as retinyl esters versus retinol, that will produce malformations in humans. Of particular interest is the discussion about what dosages of vitamin A are needed to increase the blood concentrations of retinoic acid metabolites significantly above those seen in normal pregnant women. Doses of 30,000 international units of retinyl palmitate per day administered orally did not increase the concentrations of retinoic acid in nonpregnant women significantly above those already circulating in untreated pregnant women (R.K. Miller et al. 1998). Still, for many agents (e.g., ethanol, solvents, and retinoids) that cause developmental toxicity at or near adult toxic dosages, one might be able to monitor concentrations of the compound (or its metabolites) in the exposed individual and thereby establish possible risk. Thus, biomarkers of exposure have the potential to be critical in establishing potential risk at a sensitive period during development. For developmental toxicants that can produce developmental defects at dosages or concentrations not causing identifiable immediate adult toxicity (e.g., thalidomide and cigarette smoking), biomarkers of exposure that reveal actual concentrations of parent compounds or metabolites (e.g., cotinine as a nicotine-metabolite measure of cigarettes smoked) might be the only available indicators of risk. Subtle changes in gene expression, as assayed by large-scale microarray analyses, are good examples of newly developing biomarkers of exposure.
Those biomarkers still need to relate expression changes with early biological effects, occurring well before toxicity. In fact, there are extensive discussions to determine whether these are truly "biomarkers of exposure" or "biomarkers of effect." Current efforts are under way to improve the detection of differences in patterns of gene expression for various chemical classes (e.g., peroxisomal proliferators and oxidants), with the aim of using patterns rather than single changes as exposure biomarkers. In cases in which maternal toxic effects occur, the patterns of expression changes might be especially useful biomarkers for distinguishing developmental from maternal toxicity.

Biomarkers of exposure often are used in occupational and molecular epidemiology. Aniline-hemoglobin adducts, benzo[a]pyrene-DNA adducts, aflatoxin B1-DNA adducts, elevated metallothionein, and elevated urinary 8-hydroxydeoxyguanosine levels have been useful biomarkers for specific exposures. The cancer risk of exposure to dangerous concentrations of foreign or endogenous chemicals is assessed by the activation of a proto-oncogene or the inactivation of a tumor-suppressor gene (e.g., p53), reflecting the mutation of these genes in somatic tissues. In this usage, the biomarker of exposure also comes close to being a biomarker of effect, insofar as mutagenesis is thought to be an important step in carcinogenesis. Still other biomarkers, such as aryl hydrocarbon hydroxylase (AHH) and CYP1A1 at high concentrations, are taken to reflect induction of the enzymes by high internal concentrations of potentially toxic agents and are used to predict whether a population or individual might be at risk for perinatal morbidity or mortality.

Biomarkers of Effect

Biomarkers of effect at the molecular level are becoming as important as monitoring metabolites or a parent compound. Recently, Perera et al. (1998) confirmed an inverse relationship between plasma concentrations of cotinine, a metabolite of nicotine, in newborns and birth weight and length. They also demonstrated a significant association between decreased body size at birth (body weight and head circumference) and concentrations of polycyclic aromatic hydrocarbon (PAH)-DNA adducts in umbilical cord blood above the median. This had previously been demonstrated for PAH-DNA adducts measured in the human placenta (Everson 1987; Everson et al. 1988). Such associations were related to cigarette smoking and environmental pollution. Those examples show that there can be a practical use of biomarkers of effect at the molecular level to assess exposure. Such measurements not only allow for epidemiological evaluations of environmental pollutants, such as cigarette smoke and air pollution, but they also help identify subpopulations of individuals that might be at risk. Critical applications for such biomarkers in developmental toxicology are in the identification of those at risk, with the hope of reducing that risk by modifying exposure and by developing other intervention strategies to decrease the incidence of developmental defects.
Other biomarkers include indicators of normal cell processes (e.g., cell proliferation that occurs at inappropriate times or at different levels of expression). Proliferation markers are often used for assessing immunological impacts, where proliferation status is evaluated in the context of differentiation status. Such immunological studies present issues similar to those in biomarker studies in developmental toxicology. Likewise, biomarkers of the apoptotic process (e.g., early biomarkers, such as annexin and enzymatic changes in the levels or types of various caspases, and late biomarkers, such as DNA fragmentation) can provide temporal, mechanistic biomarkers of effect that are also highly relevant for developmental toxicity assessments. Other biomarkers of effect include increased concentrations of α-fetoprotein in amniotic fluid as indicative of neural tube defects, because delayed closure of the tube is thought to allow escape of this protein. Still other biomarkers might be used in combination to enhance the collective ability to diagnose or predict possible developmental anomalies (e.g., the triple assay of human chorionic gonadotropin, estriol, and α-fetoprotein for trisomy 21).

Biomarkers of Susceptibility

These biomarkers are used to identify individuals or populations that might have a different risk based upon differences that are inherent (i.e., genetic) or acquired (i.e., from life history and conditions). The inherent category includes the polymorphisms for genes encoding DMEs and for genes for the receptors and transcription factors regulating the expression of the genes for DMEs, as discussed in a previous section of this chapter. The category also includes polymorphisms for genes encoding components of developmental processes, although the latter are still not well understood. The acquired category includes previous disease conditions, antibody immunity, nutrition, other chemical and pharmaceutical exposures, and various capacities for homeostasis.

As a monitor, the placenta has been a key test organ for identifying such sensitive populations and their responses to environmental exposures. For example, Welch et al. (1969) and Nebert et al. (1969) demonstrated that AHH is induced in the human placenta of cigarette smokers. With the ever-improving tools for investigation, biomarkers have now moved from proteins and enzyme activities induced by polycyclic aromatic hydrocarbons (e.g., benzo[a]pyrene) and dioxin (Manchester et al. 1984; Gurtoo et al. 1983) to biomarkers of combined effect and exposure, such as mRNAs (e.g., CYP1A1) plus DNA adducts (Everson et al. 1987, 1988; Perera et al. 1998). The molecular probes used to identify such subpopulations are useful as biomarkers not only for identifying individuals at risk but also for exploring the underlying mechanisms by which those individuals or populations are at risk by demonstrating allelic polymorphisms in a particular gene. As discussed above, gene-environment interactions have been noted for the induction of cleft palate in humans through a combination of cigarette smoking and TGFα genotype (Hwang et al. 1995). Alone, neither variable demonstrates an association with cleft palate. Such an example demonstrates the possibility of understanding why only a small percentage of a population exposed to a developmental toxicant might be at risk; more important, it identifies a biological association that might lead to a mechanistic understanding of how a particular developmental defect occurs.

There are serious concerns about using the term "biomarkers of susceptibility" to describe a person's particular set of alleles because these have a hereditary basis (e.g., slow acetylator activity, low G6PD activity, low 5,10-methylenetetrahydrofolate reductase activity, or high CYP1A1 activity). The committee emphasizes the need for a distinction between biomarkers of susceptibility reflecting inherent limitations and those reflecting acquired limitations. The former require a full understanding of the complex genetic implications before they can be used. An allele encoding an altered DME, for example, might put an individual at increased risk for toxicity caused by one environmental chemical but at decreased risk for toxicity caused by another drug or environmental chemical. Combinations of alleles might also have exaggerating or compensating effects.

LIMITATIONS IN DEVELOPMENTAL TOXICITY RISK ASSESSMENTS

Although it can be argued that the current approach to risk assessment has worked reasonably well for hazard identification, many assumptions must be made before it can be applied. One such default assumption is that outcomes of rodent tests are relevant for human risk prediction. Such assumptions are used generically because information on the mechanisms of action of specific developmental toxicants is inadequate, and the lack of mechanistic information results in the use of default uncertainty factors. The most important limitation is the paucity of human data and the lack of methodology to assess humans adequately. Mechanism of action can be pursued in animal models, but it is also the lack of an understanding of human development that hampers risk assessment.

For risk characterization, the bioassays used for regulatory assessment have provided limited dose-response information. The information is limited because the focus is on the effects of high doses at or near maternal toxicity to emphasize identification of hazards. That focus has provided little quantitative information on the dose-response relationship in the low-dose region, the region of greatest importance for extrapolation in human risk assessment. The lack of useful dose-response data has had several impacts. As mentioned previously, conservative use of uncertainty factors predominates for converting NOAELs and BMDs to RfDs for determination of acceptable safe exposure levels.
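The NOAEL-to-RfD conversion mentioned above is, at its core, division by a product of default uncertainty factors. A minimal sketch, using the conventional 10-fold defaults and a hypothetical NOAEL:

```python
# Sketch of the standard reference-dose arithmetic. The uncertainty-factor
# values are the conventional 10x defaults; the NOAEL is hypothetical.

def reference_dose(noael_mg_kg_day, uf_interspecies=10, uf_intraspecies=10,
                   uf_extra=1):
    """RfD = NOAEL divided by the product of the applied uncertainty factors."""
    return noael_mg_kg_day / (uf_interspecies * uf_intraspecies * uf_extra)

# A hypothetical NOAEL of 50 mg/kg-day from a rodent developmental study,
# with the default 10x animal-to-human and 10x human-variability factors:
print(reference_dose(50.0))                # 0.5 mg/kg-day

# Adding a further 10x factor (e.g., for database deficiencies) drops the
# acceptable level another order of magnitude:
print(reference_dose(50.0, uf_extra=10))   # 0.05 mg/kg-day
```

The arithmetic makes the committee's point visible: each default factor changes the answer tenfold, so replacing defaults with mechanistic data can shift an acceptable exposure level by orders of magnitude.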
The dominance of animal testing at high doses has also had the unfortunate consequence of providing minimal useful mechanistic information, because assessments are frequently conducted at doses at which homeostatic mechanisms are overwhelmed (Nebert 1994) and mechanistic clues about critical toxicant-induced changes are hidden. The lack of mechanistic information has also resulted in assumptions about sensitivity among humans. Present practice in risk assessment almost always makes use of a default factor of 10 to take into account the variability in sensitivity (i.e., it assumes a 10-fold difference in susceptibility between the most sensitive individual and the average individual). This assumption has been experimentally addressed for relatively few chemicals (for a review, see Neumann and Kimmel 1998). However, the default assumption could change as researchers gain more information about the underlying basis for responses to toxicants. To date, the greatest progress in characterizing human variability has come from research on DMEs. With time, there will also be data on other factors that influence susceptibility. For example, as discussed in detail in Chapter 5, a particular allele of transforming growth factor conveys more than a 10-fold increase in risk of oral clefts in infants whose mothers smoke cigarettes (Hwang et al. 1995; Shaw et al. 1996).

As knowledge of human variation in responses increases with the results of the Human Genome Project, a risk-assessment framework is needed in which these default factors are replaced with mechanistic data on relevant toxicant-induced changes.

Molecular approaches should be useful in resolving the issue of extrapolation across species. There is general agreement that the molecular control of development is highly conserved, although the pattern of development of structures at higher levels of biological organization can be very dissimilar. The committee discusses such conservation in Chapter 7 and suggests that models that assess a small number of those control points and pathways might be relevant for evaluating the potential for chemicals to affect the critical pathways in development, regardless of the species from which the model is derived. The same principle applies to extrapolation from rat or mouse to humans. The critical data to support predictions of ultimate effect in human embryos include a description of the pathogenetic steps that ensue from toxicant actions at the molecular level and lead to structural malformations. Pattern-formation genes and signal-transduction pathways are so highly conserved across groups of animals that actions of toxicants on those gene products and processes are likely to be comparable and to have similar toxicodynamic impacts. The events that follow perturbations at the molecular level are likely to be more prone to interspecies variability, for example, the toxicokinetic differences observed with chemicals that require metabolic activation, as metabolic rates often vary markedly between species. At this point, the predictive value of hazard identification data in alternative models, particularly those that are phylogenetically removed from humans, becomes limited.
Therefore, characterization of the pathogenetic events that result in dysmorphogenesis will lead to better prediction of (1) whether the critical events are present and when they are functional in humans, predisposing them to an adverse outcome; and (2) the kinds of adverse outcomes that are possible, based on the temporal and spatial locations of the critical events.

A better understanding of the molecular and cellular mechanisms involved in the pathogenesis of abnormal development might provide a method for answering such questions as whether a residual level of risk exists at the RfD or ADI and what that level might be. It might also provide a method for determining what exposure concentration can be permitted before the probability of an adverse event begins to increase. The resolving power of animal studies to distinguish an increase in the rate of frank malformations is relatively weak. For example, in a study with 20 pregnant rats per dose group, an increase in the malformation rate must double from the background rate to be statistically significant. Mechanistic and pathogenetic events may prove to be much more sensitive and, therefore, provide a data-driven means to extend the dose-response curve below the NOAEL or BMD for malformations. Although these effects might not be adverse, they might be biomarkers or early indicators for the process of pathogenesis and might help to determine the shape and slope of the dose-response curve at doses that approach relevance for human exposure. It then becomes a matter of conducting further research to understand the magnitude of response on a molecular or cellular end point that is needed to produce a structural defect with adverse physiological, structural, or functional developmental effects.

A method has been proposed for constructing dose-response curves that combine data on frank malformations with data on less-severe effects that are not considered adverse. For example, Allen et al. (1996) combined data on rib malformations with those on rib variations in rats prenatally exposed to boric acid. This method could easily be adapted to include molecular events. Although it can be postulated that many molecular and cellular events that are the precedents of abnormal development are unlikely to have strictly linear dose-response curves, there is minimal information on development-specific processes. There have been extensive discussions on the shape of receptor-based versus nonreceptor-based responses, initiated directly by recent advances in the understanding of molecular events; yet little is known about actual events at low-dose exposures, as opposed to hypothesized dose-response relationships at low doses. Hypothetical biologically based dose-response models have been proposed on a toxicant-specific basis (Shuey et al. 1995; Leroux et al. 1996). What appears to be most relevant for this report is a call for increased understanding of toxicant-induced molecular changes and an investigation of how these early events are linked to manifestations of adverse developmental outcomes. Empirical work will be needed to establish the magnitude of response at each level of organization required to provoke a response at the next level.
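The earlier point about resolving power (20 pregnant rats per dose group, with the malformation rate needing roughly to double before it is statistically significant) can be illustrated with a rough power calculation. This sketch uses a normal-approximation two-proportion test and hypothetical rates, and it naively pools fetuses across litters, which overstates the true power because litter effects are ignored:

```python
import math

# Rough, illustrative power sketch. All rates are hypothetical, and
# treating fetuses as independent observations (rather than using the
# litter as the statistical unit) makes these estimates optimistic.

def two_proportion_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    z_alpha = 1.959963985  # two-sided 5% critical value of the normal
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = abs(p2 - p1) / se - z_alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF at z

n = 200  # e.g., 20 litters x ~10 fetuses, naively pooled

# Doubling of an assumed 5% background malformation rate:
print(round(two_proportion_power(0.05, 0.10, n), 2))   # ~0.48

# A 50% increase over background is usually missed:
print(round(two_proportion_power(0.05, 0.075, n), 2))  # ~0.18
```

Even under these optimistic assumptions, a doubling of the malformation rate is detected less than half the time, which is why more sensitive mechanistic and pathogenetic end points are attractive for extending the dose-response curve below the NOAEL.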
Such investigations should be conducted to obtain quantitative information on the kinetics of the toxicant and the dynamics of the toxicological interaction in the temporal context of development (Faustman et al. 1999; Faustman et al. 2000). Current practices in developmental toxicity risk assessment recognize the concept of "critical windows of sensitivity" in development, but a fundamental understanding of the molecular and developmental biological events that define those windows is lacking. This lack of understanding again results in the application of additional child-specific uncertainty factors in efforts to address the sensitivity of the developing conceptus, rather than emphasizing the search for a biological understanding of critical windows of susceptibility.

A corollary to the problem of low-dose extrapolation is the assumption that effects observed at high-dose concentrations in experimental animals are relevant to the prediction of risk of adverse effects at ambient exposure concentrations. As discussed previously, testing of chemicals presents an inherent dilemma: exaggerated doses and concentrations are required to maximize the chances of detecting the potential for adverse effects, yet the interpretability of the results might be limited because physiological processes in the pregnant animal may have been so overwhelmed that the observed responses are qualitatively different from the responses at lower doses. The uneasy resolution of the dilemma has been to assume that the high-dose and concentration effects are predictive of all effects at low exposure doses and concentrations unless they are proved to be secondary to maternal toxicity. Some research has been conducted to demonstrate the existence of maternally mediated mechanisms of adverse development (Daston et al. 1991a, 1994). Examples are the induction of transitory zinc deficiencies in the dam by metallothionein inducers (Daston and Lehman-McKeeman 1996) and the overwhelming of acid-base buffering by acidic metabolites of ethylene glycol (Carney et al. 1996).

Understanding the molecular processes that lead to specific developmental abnormalities will be useful in determining the low-dose relevance of high-dose effects. In those instances in which the high-dose effects are predictive of low-dose responses, the relevant molecular processes would be expected to increase with dose (i.e., to involve higher levels of gene expression or cellular response and to involve more cells). For those instances in which the high-dose effects are the result of a secondary mechanism, the dose-response curve for the adverse effect and the underlying molecular perturbation would be expected to be steep, with an inflection at the dose at which maternal homeostasis was overwhelmed.

SUMMARY

This chapter has defined developmental toxicity risk assessment and outlined issues that regulators face as they strive to protect the human population from chemically induced birth defects. Each section also identified limitations in the current knowledge and methodologies. Biomarkers for developmental toxicity were also discussed. They hold great potential for epidemiological analysis of developmental defects, especially those defects due to complex gene-environment interactions.
The information presented in this chapter, and in the next chapter on mechanisms of developmental toxicants (Chapter 4), will be used to define the current state of developmental toxicology and will provide a context for how advances in developmental biology and genomics can improve the approaches for protecting public health.