Developing the Science Base and Assays to Implement the Vision
Rapid advances in the understanding of the organization and function of biologic systems provide the opportunity to develop innovative mechanistic approaches to toxicity testing. In comparison with the current system, the new approaches should provide wider coverage of chemicals of concern, reduce the time needed for generating toxicity-test data required for decision-making, and use animals to a far smaller extent. Accordingly, the committee has proposed development of a testing structure that evaluates perturbations in toxicity pathways and relies on a mix of high- and medium-throughput assays and targeted in vivo tests as the foundation of its vision for toxicity testing. This chapter discusses the kinds of applied and basic research needed to support the new toxicity-testing approach, the institutional resources required to support and encourage it, and the valuable products that can be expected during the transition from the current apical end-point testing to a mechanistically based in vivo and in vitro test system.
Most tests in the committee’s vision would be unlike current toxicity tests, which generate data on apical end points. The mix of tests in the vision includes in vitro tests that assess critical mechanistic end points involved in the induction of overt toxic effects rather than the effects themselves and targeted in vivo tests that ensure adequate testing of metabolites and coverage of end points. The move toward a mechanism-oriented testing paradigm poses challenges. Implementation will require (1) the availability of suites of in vitro tests—preferably based on human cells, cell lines, or components—that are sufficiently comprehensive to evaluate activity in toxicity pathways associated with the broad array of possible toxic responses; (2) the availability of targeted tests to complement the in vitro tests and ensure overall adequate data for decision-making; (3) models of toxicity pathways to support application of in vitro test results to predict general-population exposures that could potentially cause adverse perturbations; (4) infrastructure changes to support the basic and applied research needed to develop the tests and the pathway models; (5) validation of tests and test strategies for incorporation into chemical-assessment guidelines that will provide direction on interpreting and drawing conclusions from the new assay results; and (6) acceptance of the idea that the results of tests based on perturbations in toxicity pathways are adequately predictive of adverse responses and can be used in decision-making. Development of the new assays and the related basic research—the focus of this chapter—requires a substantial research investment over many years. Institutional acceptance of the new tests and of the requisite new risk-assessment approaches—the focus of Chapter 6—also requires careful planning. Neither can occur overnight.
Ultimately, the time required to conduct the research needed to support large-scale incorporation of the new mechanistic assays into a test strategy that can adequately and rapidly address large numbers of agents depends on the institutional will to commit resources to support the changes. The committee believes that with a concerted research effort over the next 10 years, high-throughput test batteries could be developed that would substantially improve the ability to identify toxicity hazards arising from a number of mechanisms of action. Those results in themselves would be a considerable advance. The time for full realization of the new test strategy, with its mix of in vitro and in vivo test batteries that can rapidly and inexpensively assess large numbers of substances with adequate coverage of possible end points, could be 20 or more years.
This chapter starts by discussing basic research that will provide the foundation for assay development. It then outlines a research strategy and milestones. It concludes by discussing the scientific infrastructure that will support the basic and applied research required to develop the high-throughput and targeted testing strategy envisioned by the committee.
SCOPE OF SCIENTIFIC KNOWLEDGE, METHODS, AND ASSAY DEVELOPMENT
This section outlines the scientific inquiry required to develop the efficient and effective testing strategy envisioned by the committee. Several basic-research questions need to be addressed to develop the knowledge base from which toxicity-pathway assays and supporting testing technologies can be designed. The discussion here is intended to provide a broad overview, not a detailed research agenda. The committee recognizes the challenges and effort involved in addressing some of these research questions.
Knowledge critical for the development of high-throughput assays is emerging from biologic, medical, and pharmaceutical
research. Further complementary, focused research will be needed to address fully the key questions that, when answered, will support toxicity-pathway assay development. Those questions are outlined in Box 5-1 and elaborated below.
Toxicity-pathway identification. The key pathways that, when sufficiently perturbed, will result in toxicity will be identified primarily from completed, current, and future studies in the basic biology of cell-signaling motifs. Identification will involve the discovery of the protein components of toxicity pathways and how the pathways are altered by environmental agents. Many pathways are under investigation with respect to the basic biology of cellular processes. For example, the National Institutes of Health (NIH) has a major program under way to develop high-throughput screening (HTS) assays based on important biologic responses in in vitro systems. HTS has the potential to identify chemical probes of genes, pathways, and cell functions that may ultimately lead to characterization of the relationship between chemical structure and biologic activity (Inglese et al. 2006). Determining the number and nature of toxicity pathways involved in human disease and impairment is an essential component of the committee’s vision for toxicity testing.
Multiple pathways. Adverse biologic change can occur from simultaneous perturbations of multiple toxicity pathways. Environmental agents typically affect more than one toxicity pathway. Although the committee envisions the design of a suite of toxicity tests that will provide broad coverage of biologic perturbations in all key toxicity pathways, biologic perturbations in different pathways may lead to synergistic interactions with important implications for human health. For some adverse health effects, an understanding of the interplay of multiple pathways involved may be important. For others, the research need will be to identify the pathway affected at the lowest dose of the environmental agent.
Key Research Questions in Developing Knowledge to Support Pathway Testing
Toxicity-Pathway Identification—What are the key pathways whose perturbations result in toxicity?
Multiple Pathways—What alteration in response can be expected from simultaneous perturbations of multiple toxicity pathways?
Adversity—What adverse effects are linked to specific toxicity-pathway perturbations? What patterns and magnitudes of perturbations are predictive of adverse health outcomes?
Life Stages—How can the perturbations of toxicity pathways associated with developmental timing or aging be best captured to enable the advancement of high-throughput assays?
Effects of Exposure Duration—How are biologic responses affected by exposures of different duration?
Low-Dose Response—What is the effect on a toxicity pathway of adding small amounts of toxicants in light of pre-existing endogenous and exogenous human exposures?
Human Variability—How do people differ in their expression of toxicity-pathway constituents and in their predisposition to disease and impairment?
Adversity. An understanding of possible diseases or functional losses that may result from specific toxicity-pathway perturbations will support the use of pathway perturbations for decision-making. Current risk assessments rely on toxicity tests that demonstrate apical adverse health effects, such as disease or functional deficits, that are at various distances downstream of the toxicity-pathway perturbations. In the committee’s vision, the assessment of potential human health impact will be based on perturbations in toxicity pathways. For example, activation of estrogenic action to abnormal levels during pregnancy is associated with undescended testes and, in later life, testicular cancer. Research will be needed to understand the patterns and magnitudes of the perturbations that will lead to adverse effects. As part of the
research, biomarkers of effect that can be monitored in humans and studied in whole animals will be useful.
Life stages. An understanding of how pathways associated with developmental timing or aging can be adversely perturbed and lead to toxicity will be needed to develop high-throughput assays that can capture and adequately cover developmental and senescing life stages. Many biologic functions require coordination and integration of a wide array of cellular signals that interact through broad networks that contribute to biologic function at different life stages. That complexity of pathway interaction holds for reproductive and developmental functions, which are governed by parallel and sequential signaling networks during critical periods of biologic development. Because of the complexity of such pathways, the challenge will be to identify all important pathways that affect such functions to ensure adequate protection against risks to the fetus and infant. That research will involve elucidating temporal changes in key toxicity pathways that might occur during development and the time-dependent effects of exposure on these pathways.
Effects of exposure duration. In the whole organism, both the dose of a toxicant and the response to it depend on the duration of exposure. Thus, conventional toxicity testing places considerable emphasis on characterizing risks associated with exposures of different duration, from a few days to the test animal’s lifetime. The ultimate goal in the new paradigm is to evaluate conditions under which human cells are likely to respond and to ensure that these conditions do not occur in exposures of human populations. Research will be needed to understand how the dose-response relationships for perturbations might change with the duration of exposure and to understand pathway activation under acute, subchronic, and chronic exposure conditions. The research will involve investigating the differential responses of cells of various ages and backgrounds to a toxic compound and
possible differences in responses of cells between people of different ages.
Low-dose response. The assessment of the potential for an adverse health effect from a small environmental exposure involves an understanding of how the small exposure adds to preexisting exposures that affect the same toxicity pathways and disease processes. For the more common human diseases and impairments, a myriad of exposures from food, pharmaceuticals, the environment, and endogenous processes have the potential to perturb underlying toxicity pathways. Understanding how a specific environmental exposure contributes, with the other exposures, to modulate a toxicity pathway is critical for the understanding of low-dose response. Because the toxicity tests used in the committee’s long-range vision are based largely on cellular assays involving sensitive biomarkers of alterations in biologic function, it will be possible to study the potential for adverse human health effects at doses lower than is possible with conventional whole-animal tests. Given the cost-effectiveness of the computational methods and in vitro tests that form the core of the toxicity testing, it will be efficient to evaluate effects at multiple doses and so build a basis of detailed dose-response research.
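The additivity-to-background idea described above can be sketched quantitatively. The following illustration is a minimal, hypothetical model: it assumes a simple Hill-type activation function and arbitrary parameter values, and is not drawn from any particular assay in the report.

```python
# Sketch: how a small added exposure shifts a toxicity-pathway response
# that already sits on a nonzero background of endogenous and exogenous
# exposure. The Hill model and all parameter values are hypothetical,
# chosen only for illustration.

def pathway_activation(dose, background, k_half=10.0, hill=2.0):
    """Fractional pathway activation for the total (background + added) dose."""
    total = background + dose
    return total**hill / (k_half**hill + total**hill)

background = 5.0   # pre-existing endogenous + exogenous exposure (arbitrary units)
added = 0.5        # small incremental environmental exposure

baseline = pathway_activation(0.0, background)
with_added = pathway_activation(added, background)

print(f"baseline activation:  {baseline:.4f}")
print(f"with added exposure:  {with_added:.4f}")
print(f"incremental response: {with_added - baseline:.4f}")
```

Because the background already places the system on the rising part of the curve, the incremental response to a small added dose is approximately proportional to that dose, even though the Hill function itself is nonlinear; this is one common argument for why low-dose responses can appear linear when they add to background processes.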
Human variability. People differ in their expression of toxicity-pathway constituents and consequently in their predisposition to disease and impairment. An understanding of differences among people in the level of responsiveness of particular toxicity pathways is needed to interpret the importance of small environmental exposures. The comprehensive mapping of toxicity pathways provides an unprecedented opportunity to identify gene loci and other determinants of human sensitivity to environmental exposures. That research will support the development of biomarkers of exposure, effect, and susceptibility for surveillance in the human population, and these discoveries in turn will support an assessment of host susceptibility for use in extrapolating results from the in vitro assays to the general population and susceptible
groups. The enhanced ability to characterize interindividual differences in sensitivity to environmental exposures will provide a firmer scientific basis for the establishment of human exposure guidelines that can protect susceptible subpopulations.
Research on most, or all, of the subjects above is under way in the United States and internationally. It is conducted in academe, industry, and government institutions and is funded by foundations and the federal government, mainly to understand the basis of human disease and its treatment. Private firms, such as pharmaceutical and biotechnology companies, conduct such research for product development. However, efforts directed specifically toward developing toxicity-testing systems are small.
Test and Analytic Methods Development
The research described above will provide the foundation for the development of toxicity tests and comprehensive testing approaches. The categories of toxicity tests and methods to be developed are outlined below, and the primary questions to be answered in their development are presented in Box 5-2.
Methods to predict metabolism. A key issue to address at an early phase will be development of methods to ensure adequate testing for metabolites in high-throughput assays. Understanding the range of metabolic products and the variation in metabolism among humans and being able to simulate human metabolism as needed in test systems are critical for developing valid toxicity-pathway assays. Without such methods, targeted in vivo assays will be needed to evaluate metabolism.
Chemical-characterization tools. In addition to metabolism, further development of tools to support chemical characterization
Main Questions in Developing Tests and Methods
Methods to Predict Metabolism—How can adequate testing for metabolites in the high-throughput assays be ensured?
Chemical-Characterization Tools—What computational tools can best predict chemical properties, metabolites, xenobiotic-cellular and molecular interactions, and biologic activity?
Assays to Uncover Cell Circuitry—What methods will best facilitate the discovery of the circuitry associated with toxicity pathways?
Assays for Large-Scale Application—Which assays best capture the elucidated pathways and best reflect in vivo conditions? What designs will ensure adequate testing of volatile compounds?
Suite of Assays—What mix of pathway-based high- and medium-throughput assays and targeted tests will provide adequate coverage? What targeted tests should be developed to complement the toxicity-pathway assays? What are the appropriate positive and negative controls that should be used to validate the assay suite?
Human-Surveillance Strategy—What surveillance is needed to interpret the results of pathway tests in light of variable human susceptibility and background exposures?
Mathematical Models for Data Interpretation and Extrapolation—What procedures should be used to evaluate whether humans are at risk from environmental exposures?
Test-Strategy Uncertainty—How can the overall uncertainty in the testing strategy be best evaluated?
will be important. The tools will include computational and structure-activity relationship (SAR) methods to predict chemical properties, potential initial interactions of a chemical and its metabolites with cellular molecules, and biologic activity. A National Research Council report (NRC 2000) indicated that early cellular interactions are important in understanding potential toxicity and include receptor-ligand interactions, covalent binding with DNA and other endogenous molecules, peroxidation of lipids and proteins, interference with sulfhydryl groups, DNA methylation, and
inhibition of protein function. Good predictive methods for chemical characterization will reduce the need for targeted testing and enhance the efficiency of the testing.
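The kind of SAR-based screen described above can be illustrated with a toy example. The sketch below flags chemicals whose SMILES strings contain substructure patterns associated with reactivity; real tools use proper substructure matching (for example, SMARTS queries in a cheminformatics library), and the alert list here is entirely hypothetical.

```python
# Toy structural-alert screen of the kind used in SAR-based chemical
# characterization: flag chemicals whose SMILES strings contain
# substring patterns associated with reactive groups. Real screens use
# genuine substructure matching; plain substring tests and this alert
# list are only a hypothetical illustration.

STRUCTURAL_ALERTS = {
    "N(=O)=O": "aromatic/aliphatic nitro group",
    "C(=O)Cl": "acyl chloride (acylating agent)",
    "C1OC1":   "epoxide (DNA-reactive electrophile)",
}

def screen(smiles):
    """Return the alert descriptions triggered by a SMILES string."""
    return [desc for pattern, desc in STRUCTURAL_ALERTS.items()
            if pattern in smiles]

print(screen("c1ccccc1N(=O)=O"))   # nitrobenzene triggers the nitro-group alert
print(screen("CCO"))               # ethanol triggers no alerts
```

A screen of this kind would serve only as a first-pass filter; chemicals carrying alerts would be prioritized for the targeted testing that the text describes.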
Assays to uncover cell circuitry. Development of methods to facilitate the discovery of the circuitry associated with toxicity pathways will involve functional genomic techniques for integrating and interpreting various data types and for translating dose-response relationships from simple to complex biologic systems, for example, from the pathway to the tissue level. It will most likely require improved methods in bioinformatics, systems biology, and computational toxicology. Advances in overexpression with complementary DNA (cDNA) and in gene knockdown with small interfering RNAs (siRNAs) are likely to allow improved pathway mapping and will also lead to studies with cells or cell lines that are more readily transfectable.
Assays for large-scale application. Several substantive issues will need to be considered in developing assays for routine application in a testing strategy. First, as pathways are identified, medium- and high-throughput assays that adequately evaluate pathways and human biology will be developed, including new, preferably human, cell-based cultures for assessment of perturbations. Second, the assay designs that best capture the elucidated pathways and can be applied for rapid large-scale testing of chemicals will need to be identified. Third, an important design criterion for assays will be that they are adequately reflective of the in vivo cellular environment. For any given assay, that will involve an understanding of the elements of the human cellular environment that must be simulated and of culture conditions that affect response. Fourth, the molecular evolution of cell lines during passage in culture and related interlaboratory differences that can result will have to be controlled for. Fifth, approaches for the testing of volatile compounds will require early attention in the development of high-throughput assays; this has been a challenge for in vitro test systems in general. Sixth, assay sensitivity
(the probability that the assay identifies the phenomenon that it is designed to identify) and assay specificity (the probability that the assay does not identify a phenomenon as occurring when it does not) will be important considerations in assay design. Individual assays and test batteries should have the capability to predict accurately the effects that they are designed to measure without undue numbers of false positives and false negatives. And seventh, it will be important to achieve flexibility to expand or contract the suites of assays as more detailed biologic understanding of health and disease states emerges from basic research studies.
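The sensitivity and specificity definitions given above reduce to simple ratios over a validation run against reference chemicals of known activity. A minimal illustration, with hypothetical counts:

```python
# Minimal illustration of the sensitivity and specificity definitions in
# the text, computed from the counts obtained by running an assay against
# reference chemicals of known activity. All counts are hypothetical.

def sensitivity(true_pos, false_neg):
    """P(assay positive | phenomenon truly present)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """P(assay negative | phenomenon truly absent)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation run: 80 known actives, 120 known inactives
tp, fn = 72, 8     # actives correctly / incorrectly classified
tn, fp = 108, 12   # inactives correctly / incorrectly classified

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")   # 0.90
```

In a battery, the false-positive and false-negative rates of the individual assays compound, so acceptable per-assay rates depend on how many assays are combined and on how their results are aggregated.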
Suite of assays. An important criterion for the development of a suite of assays for assessing the potential for a substance to cause a particular type of disease or group of toxicities will be adequate coverage of causative mechanisms, affected cell types, and susceptible individuals. Ensuring the right mix of pathway-based high-throughput assays and targeted tests will involve research. For diseases for which toxicity pathways are not fully understood, targeted in vivo or other tests may be included to ensure adequate coverage.
Human-surveillance strategy. Human data on the fundamental biologic events involved in the activation of toxicity pathways will aid the interpretation of the results of high-throughput assays. They will provide the basis of understanding of determinants of human susceptibilities related to a toxicity pathway and of background exposures to compounds affecting the pathway. Research will be needed to assess how population-based studies can best be designed and conducted to complement high-throughput testing and provide the information necessary for data interpretation.
Mathematical models for data interpretation and extrapolation. Procedures for evaluating the impact of human exposure concentrations will involve pharmacokinetic and other modeling methods to relate cell media concentrations to human tissue doses and biomonitoring data and to account for exposure patterns and interindividual variabilities. To facilitate interpretation of high-
throughput assay results, models of toxicity pathways (see Chapter 3) and other techniques will be needed to address differences among people in their levels of activation of particular response pathways. Although it is not a key aspect of the current vision, in the distant future research may enable the development of biologically based dose-response models of apical responses for risk prediction.
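The pharmacokinetic extrapolation from cell-media concentrations to human doses can be sketched in its simplest form. The example below is a deliberately simplified, steady-state, one-compartment calculation with hypothetical parameter values; the models the text envisions would additionally account for plasma protein binding, route-specific absorption, exposure patterns, and interindividual variability.

```python
# Deliberately simplified reverse-dosimetry sketch: estimate the daily
# intake that would hold a person's steady-state blood concentration at
# the concentration found active in an in vitro assay. One compartment,
# first-order clearance; all parameter values are hypothetical.

def oral_equivalent_dose(active_conc_uM, mol_weight, clearance_L_per_h_per_kg):
    """Daily dose (mg/kg/day) giving a steady-state blood concentration
    equal to the in vitro active concentration (in micromolar)."""
    conc_mg_per_L = active_conc_uM * mol_weight / 1000.0     # uM -> mg/L
    # At steady state, intake rate equals elimination rate: dose = CL * Css
    return conc_mg_per_L * clearance_L_per_h_per_kg * 24.0   # mg/kg/day

# Hypothetical chemical: assay active at 3 uM, MW 250 g/mol, CL 0.2 L/h/kg
dose = oral_equivalent_dose(3.0, 250.0, 0.2)
print(f"oral-equivalent dose ~ {dose:.2f} mg/kg/day")        # ~3.60
```

Comparing such an oral-equivalent dose with biomonitoring-based estimates of actual population intake is one way the in vitro results could be placed in a human exposure context.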
Test-strategy uncertainty. Methods to evaluate the overall uncertainty in a possible testing strategy will assist the validation and evolution of the new methods. Formal methods could be developed that use systematic approaches to evaluate uncertainty in predicting from the test battery results the doses that should be without biologic effect in human populations. These uncertainty evaluations can be used in the construction and selection of testing strategies.
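One systematic approach to the uncertainty evaluation described above is Monte Carlo propagation: sample each uncertain quantity in the extrapolation from an assumed distribution, combine the samples, and report a protective lower percentile of the resulting no-effect-dose estimates. The distributions and parameters below are hypothetical placeholders, not values from any actual test battery.

```python
# Sketch of a systematic uncertainty evaluation: propagate uncertainty in
# assay-derived quantities through the extrapolation to a human no-effect
# dose, then report a protective lower percentile. The lognormal
# distributions and their parameters are hypothetical.
import math
import random

random.seed(1)

def sampled_no_effect_dose():
    # In vitro point of departure, uncertain across assay replicates
    pod = random.lognormvariate(math.log(10.0), 0.4)
    # In vitro-to-in vivo extrapolation factor (uncertain)
    ivive = random.lognormvariate(math.log(0.5), 0.6)
    # Human-variability factor covering susceptible individuals (uncertain)
    human_var = random.lognormvariate(math.log(3.0), 0.3)
    return pod * ivive / human_var

samples = sorted(sampled_no_effect_dose() for _ in range(10_000))
median = samples[len(samples) // 2]
lower_5th = samples[len(samples) // 20]
print(f"median no-effect-dose estimate: {median:.2f}")
print(f"5th percentile (protective):    {lower_5th:.2f}")
```

The spread between the median and the lower percentile is itself informative: a wide gap flags a step in the extrapolation whose refinement would most improve the testing strategy.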
Whether the testing strategy will detect and predict harmful exposures will depend on whether the major toxicity pathways are addressed by the high-throughput assays or covered by the targeted in vivo and other tests. To ensure that the test system is adequate, the committee envisions a multipronged approach that includes the following components:
A continuing research and evaluation program to develop, improve, and assess the testing program.
Adequate validation of the assays, including examination of false-negative and false-positive rates, by applying the assays to sufficient numbers of chemicals of known toxicity.
A robust program of biomonitoring, human health surveillance, and molecular epidemiology to assess exposures and early indicators of toxicity, to aid in interpretation of high-throughput assay results, and to monitor exposures to ensure that toxic ones are not missed.
Aspects of those endeavors are discussed in the following sections.
STRATEGY FOR KNOWLEDGE AND ASSAY DEVELOPMENT AND VALIDATION
The research strategy to develop the computational tools, suites of in vitro assays, and complementary targeted tests envisioned by the committee will likely involve contributions on multiple fronts, including the following:
Basic biologic research to obtain the requisite knowledge of toxicity pathways and the potential human health impacts when the pathways are perturbed.
Science and technology milestones that ensure timely achievement of assays and tool development for the new paradigm.
Phased basic and applied research to demonstrate success in the transition to the testing emphasis on toxicity pathways.
The basic-research effort will be directed at discovering and mapping toxicity pathways that are the early targets of perturbation by environmental agents and at understanding how agents cause the perturbations. That will be followed by research focused on the design of assays that can be used to determine, first, whether an agent has the potential to perturb the pathway and, if so, the levels and durations of exposure required. The scientific inquiry will involve research at multiple levels of biologic organization, that is, understanding the nature of toxicity pathways at the molecular and cellular levels and how toxicity-pathway alterations may translate to disease processes in tissues, organs, and the whole organism. Some of the tools and technologies that enable this research are described in Chapter 4.
In each broad field of toxicity testing, such as neurotoxicology and reproductive and developmental toxicity, systematic approaches to assay development, assay validation, and generalized acceptance of the assays will be organized and pursued. As the research questions presented in the previous section are answered, milestones would be achieved in an orderly manner. Some important milestones to move from pathway research through assay development to validated test strategies are presented in broad brush strokes in Box 5-3. The committee recognizes that the implementation of its recommendations would entail extensive planning and expert deliberation; through those processes, the important milestones would be subdivided, elaborated, reshaped, and perhaps even replaced.
The research would progress in sequential phases, whose timelines would overlap. The committee finds that four phases would evolve as follows:
Phase I: Toxicity-pathway elucidation. A focused research effort is pursued first to understand the toxicity pathways for a select group of health effects (that is, apical end points) or molecular mechanisms. Early in this first phase, a data-storage, -access, and -management system would be established to enable broad use of the data being generated to facilitate the understanding of the toxicity pathways and research and knowledge development in later phases. A third element of this phase would involve developing standard practices for research methods and reporting of results so that they are understandable and accessible to a broad audience of researchers and to facilitate consistency and validity in the research methods used. Research in this phase would also focus on developing tools for predicting metabolism, characterizing chemicals, and planning a strategy for human surveillance and biomonitoring of exposure, susceptibility, and effect markers associated with the toxicity-pathway perturbations.
Some Science and Technology Milestones in Developing Toxicity-Pathway Tests As the Cornerstone of Future Toxicity-Testing Strategies
Develop rapid methods and systems to enable in vitro dosing with chemical stressors (including important metabolites and volatile compounds).
Create and adapt human, human-gene-transfected rodent, and other cell lines and systems, with culture medium conditions, to have an adequate array of in vitro human cell and tissue surrogates.
Adapt and develop technologies to enable the full elucidation of the critical toxicity pathways underlying the diseases and mechanisms selected for pilot-project study.
Develop toxicity-pathway assays that fully explore the possible effects of exogenous chemical exposure on the diseases and mechanisms selected for a pilot-project study, thereby demonstrating proof of concept.
Establish efficient approaches for validating suites of high-throughput assays.
Develop the infrastructure for data management, assay standardization, and reporting to enable broad data-sharing across academic, government, industry, and nongovernment-organization sectors and institutions.
Phase II: Assay development and validation. High- and medium-throughput assays would be developed for the toxicity pathways, and for the points of chemical perturbation within them, that have been organized for assay development. During this phase, biologic markers of exposure, susceptibility, and effect would be developed for use in surveillance and biomonitoring of human populations in which these toxicity pathways might be activated; the markers would then be tested, and the biomonitoring and surveillance of human populations would begin.
Some of the key science and technology development activities for the phases are listed in Figure 5-1, and some of the critical aspects are described below. All phases would include research on toxicity pathways. Progression through the phases would involve exploring the research questions outlined in Box 5-1.
Research to Understand Toxicity Pathways
Phase I research would develop pathway knowledge from which assays for health effects would emerge. Systems-biology approaches—including molecular profiling microarrays, pathway mining, and other high-resolution techniques—would reveal key molecular interactions. Mechanistic understanding provides the basis for identifying the key molecular “triggers” or mechanisms of interactions that can alter biologic processes and ultimately cause toxicity after an environmental exposure. Those nodal triggers or interactions would be modeled in vitro and computationally to provide a suite of appropriate assays for detecting toxicity-pathway perturbations and the requisite tools for describing dose-response relationships.
Early efforts would explore possible toxicity pathways for health effects for which there is fairly advanced knowledge of mechanisms of toxicity, molecular signaling, and molecular interactions. As a case study, the following sketches how knowledge development might begin for toxic responses associated with estrogenic signaling alterations caused by agonists and antagonists of estrogen function.
Even our current appreciation of the number of potential toxicity pathways highlights the breadth of responses that might be evaluated in various high-throughput assays. Consideration of adverse responses at the level of the intact organism that might be associated with altered signaling through estrogen-receptor-mediated responses illustrates some of the challenges. Xenobiotic-caused alteration in estrogen signaling can occur or be measured at a number of points in the various processes that affect estrogen actions, including steroidogenesis, hormone transport and elimination, receptor binding and alteration in numbers of receptors, and changes in nuclear translocation. Those pathways may also be evaluated at different levels of organization—ligand binding, receptor translocation, transcriptional activation, and integrated cellular responses. Some of the processes are outlined here.
Estrogen steroidogenesis. Upstream alterations in steroidogenesis pathways or other independently regulated pathways that affect endocrine signaling would be explored. Knowledge development would focus on understanding of enzymatic function for key steroidogenesis pathways and the interactions of the pathways with each other and on understanding of how key elements of the pathways might be altered, including alterations of precursors, products, and metabolites when pathway dysregulation occurs. The research might involve quantitative assessment of key enzyme functions in in vitro and in vivo systems, analytic techniques to measure various metabolites, and modeling to understand the target and key steps that undergo estrogen-related dysregulation. Other assays would develop SAR information on compounds already associated with altered steroidogenesis in other situations.
Estrogen-receptor interactions. Much is known about the molecular interactions between xenobiotics and estrogen receptors (ERs), for example: direct xenobiotic interaction with ERs, including differential interaction with specific ER subtypes, such as ER-α and ER-β; xenobiotic interactions with discrete receptor domains that give rise to different biologic consequences, such as interactions with the ligand-binding domain that could cause conformational changes that activate or inhibit signaling; and direct xenobiotic interactions with other components of the ER complex, including accessory proteins, coactivators, and other coregulatory elements. Most responses associated with altered estrogen signaling would be more easily evaluated in assays that evaluate a larger-scale function, such as receptor activation of estrogen-mediated transcription of reporter genes or estrogen-mediated cell responses (for example, proliferation of estrogen-sensitive cells in vitro).
Processes that lead to estrogenic transgenerational epigenetic effects. Assay development to address estrogen-induced transgenerational epigenetic effects would involve understanding how early-life exposures to estrogenic compounds permanently alter transcriptional control of genes, understanding how such early-life exposures might be priming events for later-life alterations in reproductive competence or the development of cancer, and understanding how such exposures may produce transgenerational effects. Specific approaches in this research might include genomewide methods to analyze the patterns of DNA methylation with and without estrogenic exposure, quantification of histone modifications, measurements of microRNAs, and the dissection and mechanistic understanding of hormonal inputs to the epigenetic regulatory phenomena.
Those are just a few examples of the kinds of research on estrogenic compounds that would support assay development. The approaches include relatively small-scale research efforts for processes that are fairly well understood (such as direct ligand-receptor interactions) and larger endeavors for the yet-to-be-explained (such as the epigenetic and transgenerational effects of early-life estrogenic-compound exposure). A holistic understanding of estrogenic and other pathways and signaling in humans would be derived incrementally by building on studies in a wide variety of species and tissues. New information from basic studies in biology is likely to lead to improved assays for testing specific toxicity pathways.
The identified estrogenic pathways and signaling processes, once understood, would serve as the substrate for further pathway mining to highlight the critical events that could be tested experimentally in assay systems, that is, events that are obligatory for downstream, apical responses and occur at the lowest exposure of a biologic system to an environmental agent. With studies on the organization of response circuitry controlling the toxicity-pathway responses, a dose-response model would be based on key, nodal points in the circuits that control perturbations rather than on the overall detail of all steps in the signaling process.
Assessing Validity of Pathway Knowledge and Linkage to Adversity at the Organism Level
The next step in pathway elucidation would be the assessment of the validity of the pathway knowledge, which would proceed in two steps and involve the broader scientific community.
First, the validity would be tested by artificially modulating the pathways to establish that predicted downstream molecular consequences are consistent and measurable. The perturbations could take place, for example, with the use of standard reference compounds, such as 17β-estradiol, or discrete molecular probes, such as genetically modified test systems, knockout models, or other interventions with siRNA or small-molecule inhibitors of key enzymes or other cellular factors.
Second, the consequences of pathway disruption for the organism—the linkage of molecular events to downstream established biologic effects considered to be adverse or human disease—would be assessed. In the case of perturbations of estrogen signaling, this assessment may include linkage with results from short-term in vivo assays, such as an increase in uterine weight in rats in the uterotrophic assay. The link between the toxicity pathways and adverse effects at the level of the whole organism would be assessed in a variety of in vivo and in vitro experiments.
Development of Data-Storage, Data-Access, and Data-Management Systems
At a very early stage in Phase I, data-storage, -access, and -management systems should be developed and standardized. As the
altered-estrogen-signaling case study indicates, the acquisition of the knowledge to develop high-throughput testing assays would involve the discovery of toxicity pathways and networks from vast amounts of data from studies of biologic circuitry and interactions of environmental agents with the circuitry. Organization of that knowledge would require data analysis and exploration by interdisciplinary teams of scientists. Understanding the relationships of pathways to adverse end points would also involve large-volume data analysis, as would the design of test batteries and their validation. Those efforts could be stymied without easy and wide public access to databases of results from a broad array of research studies: high-throughput assays, quantitative-SAR model development, protein and DNA microarrays, pharmacokinetic and metabolomic experiments, in vivo apical tests, and human biomonitoring, clinical, and population-based studies. Central repositories for -omics data are under development and exist to a small extent for some in vivo toxicity data. The scale of data storage and access envisioned by the committee is much larger.
The data should be available, regardless of whether they were generated by industry, academe, federal institutions, or foundations. However, the data-management system must also be able to accommodate confidential data but allow for data-sharing of confidential components of the database among parties that agree to the terms of confidentiality. The data-management system would also provide procedures and guidelines for adequate quality control. Central storage efforts would need to be coordinated and standardized as appropriate to ensure usefulness.
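As an illustration of the record-level design such a data-management system might adopt, the following sketch stores each assay result with a submitting laboratory and an access flag, so that confidential records are shared only with parties that have an agreement in place. All field names, example values, and the access rule are invented for this example; they are not drawn from any existing repository or standard.

```python
from dataclasses import dataclass

# Hypothetical minimal record for a centralized toxicity-assay repository.
# Field names (assay_id, cas_number, lab_id, access) are illustrative
# assumptions, not part of any existing data standard.
@dataclass
class AssayResult:
    assay_id: str           # identifier of the assay protocol
    cas_number: str         # chemical tested (CAS registry number)
    concentration_um: float # test concentration, micromolar
    response: float         # normalized assay read-out
    lab_id: str             # submitting laboratory, for QC traceability
    access: str = "public"  # "public" or "confidential"

def visible_to(records, party_agreements):
    """Return the records a party may see: all public data plus
    confidential data from labs covered by a confidentiality agreement."""
    return [r for r in records
            if r.access == "public" or r.lab_id in party_agreements]

records = [
    AssayResult("ER-TA-01", "50-28-2", 0.01, 0.92, "lab_A"),
    AssayResult("ER-TA-01", "80-05-7", 10.0, 0.35, "lab_B",
                access="confidential"),
]
print(len(visible_to(records, set())))      # public record only
print(len(visible_to(records, {"lab_B"})))  # both records
```

A real system would add quality-control metadata and versioned assay protocols; the point here is only that confidentiality partitioning can coexist with a common, standardized record format.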
Standardization of Research Assays and Results
With the development of data-management systems, processes for standardizing platforms would have to be developed. Currently, there is little standardization of microarrays, although
such efforts are moving more quickly with the Minimum Information About a Microarray Experiment (MIAME) formats now in use (Brazma et al. 2001). Too much standardization can stifle innovation, so approaches to identifying and using the appropriate level of standardization would be needed. Bioinformatics should proceed jointly with the development of assay-platform technology. Data-management systems would have to evolve flexibly to accommodate new data forms and assay platforms.
Assay Development and Validation
After the Phase I validity assessment, pathways would be selected for assay development. The focus would be on critical toxicity pathways that lead reliably to adverse effects for the organism and that are not secondary consequences of other biologic perturbations. The first section of this chapter outlined some of the technical issues that would require research to support assay development.
The case-study example of altered estrogen signaling above indicates how assays may follow from toxicity-pathway identification. Understanding the direct gene-regulation consequences of modulated ER-mediated transcriptional activation would lead to specific assays for quantitative assessment of transcription (RNA), translation (protein), metabolite markers, and altered function. Rapid assays to evaluate function on the scale of receptor activation of estrogen-mediated transcription of reporter genes or even estrogen-mediated cell responses, such as cell proliferation of estrogen-sensitive cells in vitro, could be developed to assess altered estrogen signaling.
Also important for assessing the potential for perturbations in estrogen signaling would be reliable assays for detecting estrogen-receptor interactions rapidly. Specific assays that might be developed include ligand-receptor binding assays and more sophisticated computational structural models of ligand interactions with receptor and receptor-complex conformational changes. Further sets of assays would be needed to address the wide variety of toxicity pathways by which estrogenic compounds can operate. In this phase, biomarkers of effect, susceptibility, and exposure would be developed for use in human biomonitoring and surveillance.
Demonstrating that a test is reliable and relevant for a particular purpose is a prerequisite for its routine use for regulatory acceptance. But establishing the validity of any new toxicity assay can be a formidable process—expensive, time-consuming, and logistically and technically challenging. Development of efficient approaches for validating the new mechanistically based assays would add to the challenge. How can the assays come into use within a reasonable time and be sufficiently validated to be used with confidence? That question is discussed by considering first the relevant existing guidance on validation and then the challenges faced in validating the new tests. Finally, some general suggestions are made regarding validation of the new tests. In making its suggestions, the committee acknowledges the considerable work going on in institutions in the United States and Europe to improve validation methods.
Existing Validation Guidance
Guidelines on the validation of new and revised methods for regulatory acceptance have been developed by both regulatory agencies and consortia (ICCVAM/NICEATM 2003; OECD 2005). Such guidelines focus on multifactorial aspects of a test, which cover the following elements:
Definition of test rationale, test components, and assay conduct and the provision of details on the test protocol.
Consideration of the relationship of the test-method end points to the biologic effect of interest.
Characterization of reproducibility within and among laboratories, transferability among laboratories, sources of variability, test limits, and other factors related to the reliability of test measurements (sometimes referred to as internal validity).
Demonstrated biologic performance of the test with reference chemicals, comparison of the performance with that of the tests it is to replace, and description of test limitations (sometimes referred to as external validity).
Availability, peer review, and good-laboratory-practices status of the data supporting the validation of the test method.
Independent peer review of the methods and results of the test and publication in the peer-reviewed literature.
Criteria for regulatory acceptance of new test methods have also been published (ICCVAM/NICEATM 2003). They cover some of the subjects noted above and include criteria related to robustness (insensitivity to minor changes in protocol), time and cost effectiveness, capability of being harmonized and accepted by agencies and international groups, and capability of generating useful information for risk assessment.
Validation of a new test method typically is a prerequisite for regulatory acceptance but is no guarantee of acceptance. It establishes the performance characteristics of a test method for a particular purpose. Different regulatory agencies may decide that they have no need for a test intended for a given purpose, or they may set their criteria of acceptable performance higher or lower than other agencies. To minimize problems associated with acceptance, the Organisation for Economic Co-operation and Development (OECD 2005) recommends that validation and peer-review processes take place before a test is considered for acceptance as an OECD test guideline. OECD recognizes, however, that factors
beyond the technical performance of an assay may be viewed differently by different regulatory authorities.
Challenges in Validating Mechanistically Based Assays
Validation of the mechanistically based tests envisioned by the committee may be especially challenging for several reasons. First, the tests in the new paradigm that are based on nonapical findings depart from current practice used by regulatory agencies in setting health advisories and guidelines based on apical outcomes. Relevant policy and legal issues are discussed at length in Chapter 6 and will not be repeated here except to note that scientific acceptance of a test and its relationship to disease is a critical component of establishment of the validity of the test for regulatory purposes.
Second, the new -omics and related technologies will need to be standardized and refined before specific applications can be validated for regulatory purposes (Corvi et al. 2006). Such preliminary work could be seen as an elaborate extension of the routine step of test-method optimization or prevalidation leading to validation of conventional in vivo or in vitro assays. The committee also notes above that some degree of standardization will be necessary early to promote understanding and use of assay findings by researchers for knowledge development.
Third, because -omics and related technologies are evolving rapidly, the decision to halt optimization of a particular application and begin a formal validation study will be somewhat subjective. Validation and regulatory acceptance of a specific test do not preclude incorporating later technologic advances that would enhance its performance. If it is warranted, the effects of such modifications on performance can be evaluated through an expedited validation that avoids the burdens of a second full-blown validation.
Fourth, the committee envisions that a suite of new tests typically will be needed to replace an individual in vivo test, given that apical findings can be triggered by multiple mechanisms. Consequently, although it is current practice to validate a single test against the corresponding conventional test and then to look for one-to-one correspondence, the new paradigm would routinely entail validation of test batteries and would use multivariate comparisons.
Fifth, existing validation guidelines focus on concordance between the results of the new and the existing assays. In practice, that often means comparing results from cell-based in vitro assays with in vivo data from animals. One of the challenges of validating the medium- and high-throughput assays in the new vision—with its emphasis on human-derived cells, cell lines, and cellular components—will be to identify standards of comparison for assessing their relevance and predictiveness while aiming for a transformative paradigm shift that emphasizes human biology, mechanisms of toxicity, and initial, critical perturbations of toxicity pathways.
Sixth, it is anticipated that virtually all xenobiotics will perturb signaling pathways to some degree, so a key challenge will be to determine when a perturbation leads to downstream toxicity and when it does not. Thus, specificity may be a bigger challenge than sensitivity.
Assay Validation under New Toxicity-Testing Paradigm
Validation should not be viewed as an inflexible process that proceeds sequentially through a fixed series of steps and is then judged according to unvarying criteria. For example, because validation assesses fitness for purpose, such exercises should be judged with the specific intended purpose in mind. A test’s intended purpose may vary from use as a preliminary screening
tool to use as the definitive test. Similarly, a new test may be intended to model one or a few toxicity mechanisms for a given apical end point but not the full array of mechanisms. Given that the new paradigm would emerge gradually, it would be important to consider validating incremental gains, while recognizing their current strengths and weaknesses.
Consequently, applying a one-size-fits-all approach to validation is not conducive to the rapid incorporation of emerging science or technology into regulatory decision-making. A more flexible approach to assay validation would facilitate the evolution of testing toward a more mechanistic understanding of toxicity end points; the form the validation should take is a point of discussion and deliberation (Balls et al. 2006; Corvi et al. 2006). For nonregulatory uses of assays, such as preliminary data-gathering and exploration of mechanisms, some general guidance on assay performance appears warranted at a minimum. For assays to be used routinely, more rigorous standards of performance and relevance would have to be established.
Returning to the case study on estrogen signaling, the validation sequence involves the development of specific assays that track the key molecular triggers linked to human estrogenic effects. This component focuses first on confirming that the assay components recapitulate the key molecular interactions described above and then on the traditional assessment of assay performance in terms of reproducibility and relevance.
Assessing intralaboratory and interlaboratory reproducibility is more straightforward than assessing relevance, which is sometimes labeled accuracy. To assess relevance, assays would be formally linked to organism-level adverse health effects. For example, they would provide the basis of evaluating the level of molecular change that potentially corresponds to an adverse effect. In addition, reference compounds would be used to determine the assays’ positive and negative predictive value. Ideally, substances known to cause and substances known not to cause the effect in
humans would be used as the reference agents for positive and negative predictivity. In the absence of adequate numbers of xenobiotics known to be positive and negative in humans, animal data may have to be used in validation. For the assays based on human cell lines, that could be problematic, and some creativity and flexibility in the validation process would be desirable. For example, rodent-based cell assays comparable with the human assay could be used to establish relevance and support the use of the human cell-based assay.
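The predictive-value calculation described here is straightforward once a set of reference compounds has been scored. The sketch below computes positive and negative predictive value from a hypothetical validation run; all counts are invented for illustration and do not describe any actual assay.

```python
# Illustrative computation of predictive values for a hypothetical assay
# evaluated against reference compounds of known positive or negative
# activity. The counts below are invented for the example.
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 confusion table."""
    ppv = tp / (tp + fp)  # fraction of positive calls that are truly positive
    npv = tn / (tn + fn)  # fraction of negative calls that are truly negative
    return ppv, npv

# Hypothetical validation run: 40 known estrogenic and 60 known inactive
# reference compounds screened in the assay.
tp, fn = 36, 4  # of the 40 true positives, 36 detected
tn, fp = 54, 6  # of the 60 true negatives, 54 correctly negative
ppv, npv = predictive_values(tp, fp, tn, fn)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.86, NPV = 0.93
```

In practice the difficulty lies not in the arithmetic but, as noted above, in assembling enough compounds whose activity in humans is known to anchor the table.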
Assay Relevance and Validity Trial
Once assays are developed and formally validated, they would become available for use. The committee suggests three distinct strategies that could aid in the assessment of test validity and relevance and could further the development of improved assays.
First, research entities, such as the National Toxicology Program (NTP), should further develop and run the experimental high-throughput assays, some before they are fully validated, on chemicals that have already been extensively tested with standard or other toxicity tests. The NTP has, for example, initiated mechanistic high-throughput assays on at least 500 chemicals that have already been tested using NTP cancer and reproductive and developmental toxicity studies; and, in collaboration with the NIH Molecular Library Initiative, further developed and applied cell-based screening assays that can be automated (NTP 2006). The Environmental Protection Agency (EPA) National Center for Computational Toxicology (NCCT) also has an initiative to screen numerous pesticides and some industrial chemicals in high-throughput tests. Those processes would be essential for validating the new assays and for learning more about which
health effects can be predicted from specific perturbations of toxicity pathways.
Second, new validated assays should be conducted in parallel with existing toxicity tests for chemicals, such as pesticides and pharmaceuticals, that will be undergoing or have recently undergone toxicity testing under regulatory programs. This research testing, which would be conducted by research entities, would help to foster the evolution of the assays into cell-based test batteries to eventually replace current tests. The testing would also help to gauge the positive and negative predictive values of the various assays and thereby help to avoid (or at least begin to quantify) the risks associated with missing important toxicities with the new assays or with incorporating a new assay that detects meaningless physiologic alterations that are not useful for predicting human risk.
Third, as the new assays are developed further and validated, they should be deployed as screens for evaluation of chemicals that would not currently undergo toxicity testing, such as existing high-production-volume chemicals that have not been tested or have been evaluated only with the screening information dataset, or new chemicals that are not currently subject to test requirements. Used as screens for chemicals that would otherwise not be tested or be subject only to little testing, the assays could begin to help to set priorities for testing and could also help to guide the focus of any testing that may be required. Eventually they could provide the basis of an improved framework for addressing chemicals for which testing is limited or not done at all. This is illustrated in Figure 5-2.
Resources will be required to implement the three approaches: testing of chemicals with large and robust datasets of apical tests, parallel research testing of chemicals subject to existing regulatory testing requirements, and application of high-throughput screens to chemicals that are currently not tested. In making those suggestions, the committee is not recommending expanding test requirements for pesticides or pharmaceuticals. Rather, it notes that the tests developed will be a national resource of wide benefit and worthy of funding by federal research programs. Voluntary testing by industry using validated new assays should also be encouraged. The three approaches are anticipated to pay off substantially in the longer term as scientists, regulators, and stakeholders develop enough familiarity and comfort with the new assays that they begin to replace current apical end-point
tests and as mechanistic indicators are increasingly used in environmental decision-making.
In addition to the high-throughput testing by NTP and EPA of chemicals with robust datasets described above, the committee notes the increasing use of mechanistic assays, primarily for further evaluation of chemicals that have demonstrated toxicity in standard apical assays. The mechanistic studies are done to evaluate further a tailored subset of toxicity pathways, such as those involving the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and thyroid and sex hormones. Some companies are also using high-throughput assays to guide internal decision-making in new chemical development, but their results typically are not publicly available.
A recent example of how the high-throughput assays could play out in the near term is the risk assessment of perchlorate. The data on perchlorate include standard subchronic- and chronic-toxicity tests and developmental-neurotoxicity tests, but risk assessments and regulatory decisions have been based on perturbation of iodide-uptake inhibition—the known toxicity pathway through which perchlorate has its effects (EPA 2006; NRC 2006). If a new chemical were found to inhibit iodide uptake, standard toxicity tests would not be necessary to demonstrate the predictable effects on thyroid hormone and neurodevelopment. Regulatory decisions could be based on the dose-response relationship for iodide-uptake inhibition. The new data on perchlorate-susceptible subpopulations (for example, those with low iodide) emerging from biomonitoring would also be considered (see Blount et al. 2006). Such a chemical would need to undergo a full battery of toxicity-pathway testing to ascertain that no other important pathways that might have effects at lower doses were disrupted.
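The dose-response reasoning for a pathway such as iodide-uptake inhibition can be illustrated with a simple Hill model, a common empirical choice for receptor- or transporter-mediated inhibition. The IC50 and Hill slope below are arbitrary placeholders, not measured perchlorate parameters.

```python
# Sketch of a dose-response model for a toxicity pathway such as
# iodide-uptake inhibition. A Hill function is one common empirical form;
# the IC50 and slope here are invented placeholders for illustration.
def fraction_inhibited(dose, ic50, hill=1.0):
    """Fractional inhibition of iodide uptake at a given dose."""
    return dose**hill / (ic50**hill + dose**hill)

ic50 = 2.0  # hypothetical, in arbitrary concentration units
for dose in (0.1, 1.0, 2.0, 10.0):
    print(f"dose {dose:5.1f} -> inhibition "
          f"{fraction_inhibited(dose, ic50):.2f}")
```

A regulatory decision of the kind described in the text would then key on the dose corresponding to a perturbation judged too small to propagate to thyroid-hormone disruption, rather than on apical end points.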
In the long run, using upstream indicators of toxicity from high-throughput assays based on toxicity pathways can be more sensitive and hence more protective of public health than using apical-end-point observations from assays in small numbers of
live rodents. However, while the new assays are under development, there will be a long period of uncertainty during which the false-positive and false-negative rates of the testing battery will remain unclear, and the ability of the battery to adequately predict effects in susceptible subpopulations or during susceptible life stages will also be unclear. During the phase-in period and afterward, there will be a need to pay close attention to whether important toxicities are being missed or are being exaggerated by the toxicity-pathway screening battery. The concern about missing important toxic end points is one of the main reasons for the committee’s recommendation for a long phase-in period during which the new assays are run in parallel with existing assays and tested on chemicals on which there are already large robust datasets of apical findings. Parallel testing will allow identification of toxicities that might be missed if the new assays were used alone and will compel the development of assays to address these gaps.
Many additional issues would need to be considered during the interim phase of assay development. For example, technical factors, such as cell-culture conditions and selective pressures that drive molecular evolution of cell lines over time and across laboratories, could create problems that could be addressed only with experience and careful review of assay results. Parallel use of new assays and current tests would probably continue for some time before the adoption of the new assays as first-tier screens or as definitive tests of toxicity.
Assembly and Validation of Test Batteries
Once toxicity pathways are elucidated and translated into high-throughput assays for a broad field of toxicity testing, such as neurotoxicology, a progressively more comprehensive suite of validated medium- to high-throughput tests would become available to cover the field. Single assays would not be comprehensive
or predictive in isolation but would be assembled into suites with targeted tests that would cover the field. The suite, or "panel," of assays and their scoring would need to be evaluated, possibly through computational analysis of multivariate end points. Turning again to the estrogen-signaling case study, known estrogen modulators should register as positive in one or more assays. Confidence in the suite of assays can come from the knowledge that all known mechanisms of estrogenic-signaling alteration are modeled.
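One simple way to score such a panel, given that different assays cover different mechanisms, is to call a chemical positive when any single assay exceeds its threshold. The sketch below illustrates that rule; the assay names and thresholds are invented for the example, and a real panel would likely use a more sophisticated multivariate rule.

```python
# Hypothetical scoring rule for a suite ("panel") of estrogen-signaling
# assays covering distinct mechanisms (receptor binding, transcriptional
# activation, cell proliferation). Names and thresholds are invented.
PANEL_THRESHOLDS = {
    "er_binding": 0.5,              # normalized binding activity
    "reporter_transcription": 2.0,  # fold induction over control
    "cell_proliferation": 1.5,      # fold change vs. vehicle control
}

def panel_call(results):
    """Return the overall call and per-assay flags for one chemical:
    positive overall if any assay meets or exceeds its threshold."""
    flags = {a: results.get(a, 0.0) >= t for a, t in PANEL_THRESHOLDS.items()}
    return any(flags.values()), flags

# A chemical active only in the reporter-gene assay is still called positive.
positive, flags = panel_call({"er_binding": 0.1,
                              "reporter_transcription": 3.2,
                              "cell_proliferation": 1.1})
print(positive, flags)
```

The "any assay positive" rule maximizes mechanistic coverage at the price of specificity, which is exactly the trade-off the uncertainty evaluation discussed next would have to quantify.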
The development and assessment of batteries and the overall testing strategy would be facilitated by a formal uncertainty evaluation. For the different risk contexts and decisions to be made (Chapter 3), the preferred test batteries may differ in sensitivity (in this context, the probability that the battery identifies as harmful a dose that is harmful) and specificity (the probability that the battery identifies as not harmful a dose that is not harmful). In screening, a false-negative finding of no harm at a given dose can be far more costly than a false-positive finding of harm (see, for example, Lave and Omenn 1986). The ability to characterize the specificity and sensitivity of the test battery would aid the consideration of the cost effectiveness and value of the information to be obtained from the test battery (Lave and Omenn 1986; Lave et al. 1988) and ultimately help to identify preferred test strategies.
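The cost asymmetry between false negatives and false positives can be made concrete with a small calculation. In this invented example, two hypothetical batteries trade sensitivity against specificity; when a missed toxicity is costed at 100 times a false alarm, the more sensitive battery has the lower expected cost per chemical screened.

```python
# Sketch of the value-of-information reasoning in the text: in screening,
# a false negative (missed harm) is typically far costlier than a false
# positive (unnecessary follow-up testing). All numbers are invented.
def expected_error_cost(sensitivity, specificity, prevalence,
                        cost_fn, cost_fp):
    """Expected error cost per chemical screened, given battery error
    rates and the prevalence of truly harmful chemicals."""
    fn_rate = prevalence * (1.0 - sensitivity)          # missed harms
    fp_rate = (1.0 - prevalence) * (1.0 - specificity)  # false alarms
    return fn_rate * cost_fn + fp_rate * cost_fp

# Battery A is more sensitive; battery B is more specific.
cost_a = expected_error_cost(0.95, 0.80, prevalence=0.1,
                             cost_fn=100.0, cost_fp=1.0)
cost_b = expected_error_cost(0.80, 0.95, prevalence=0.1,
                             cost_fn=100.0, cost_fp=1.0)
print(f"battery A: {cost_a:.2f}, battery B: {cost_b:.2f}")
```

Reversing the cost ratio would reverse the preference, which is why the text emphasizes that the preferred battery depends on the risk context and decision at hand.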
Although considerable effort would be directed at the construction of high-throughput batteries, targeted tests would probably also be needed in routine testing strategies to address particular risk contexts (for example, registration of a pesticide for food uses). Still, the end-point-focused targeted assays should by no means remain static. Instead, they should evolve to incorporate new refinements. For example, the rapid developments in imaging technologies have offered scientists important new tools to enhance the collection of information from animal bioassays. Promising new assays that use nonmammalian models, such as
Caenorhabditis elegans, are in development. Combined mammalian assays that incorporate a broader array of sensitive end points in a more efficient manner have been developed. The committee assumes that development of those approaches will continue, and it encourages development and validation of them in targeted testing. As newer targeted-testing approaches become available, older apical approaches should be retired.
Intermediate Products of Assay-Development Research
One important benefit of the research described is that it could add public-health protection and refinement to current regulatory testing. For example, in some risk contexts, particularly widespread human exposure to existing chemicals, the dose-response data from toxicity-pathway tests could help to refine quantitative relationships between adverse effects identified in the apical tests and perturbations in toxicity pathways and improve the evaluation of perturbations at the low end of the dose-response curve. The results of the toxicity-pathway tests could provide data to aid in interpreting the results of apical tests on a given substance and may guide the selection of further follow-up tests or epidemiologic surveillance. The mechanistic assays would also permit the extrapolation of toxicity findings on a chemical under study to other chemicals that act through the same mechanism. Additional benefits and research products anticipated for use in the near term include the following:
A battery of inexpensive medium- and high-throughput screening assays that could be incorporated into tiered-testing schemes to identify the most appropriate tests or to provide preliminary results for screening risk assessments. With experience, the assays would support the phase-out of apical end-point tests.
Early cell-based replacements for some in vivo tests, such as those for acute toxicity.
Work to develop consensus approaches for DNA-reactivity and mutagenicity assays and strategies for using mechanistic studies in cancer risk assessment.
On-line libraries of results of medium- and high-throughput screens for use in toxicity prediction and improving SAR models. For classes of chemicals well studied in apical end-point tests, the comparison of results from high-throughput studies with those from whole-animal studies could provide the basis of extrapolating toxicity to untested chemicals in the class.
Elucidation of the mechanisms of toxicity of chemicals well studied in high-dose apical end-point tests. Research to achieve the vision must include the study of perturbations of toxicity pathways of well-studied chemicals, many of which have widespread human exposure. Such research would bring about better understanding of the mechanisms of toxicity of the chemicals and improve risk assessment. Chemicals with known adverse effects and mechanisms well elucidated with respect to toxicity pathways would be good candidates to serve as positive controls in the high-throughput assays. Such studies would help to distinguish between exposures that result in impaired function and disease and exposures that result in adaptation and normal biologic function (see Figure 2-2).
Indicators of toxicity-pathway activation in the human population. This knowledge could be used to understand the extent to which a single chemical might contribute to disease processes and would be critical for realistic dose-response modeling and extrapolation.
Refined analytic tools for assessing the pharmacokinetics of environmental agents in humans exposed at low concentrations. Such evaluations could be used directly in risk assessments based on apical end-point tests and could aid in design and interpretation of in vitro screens.
Improvements in targeted human disease surveillance and exposure biomonitoring.
BUILDING A TRANSFORMATIVE RESEARCH PROGRAM
Instituting Focused Research
A long-term, large-scale concerted effort is needed to bring the new toxicity-testing paradigm to fruition. A critical element is the conduct of transformative research to provide the scientific basis of creating the new testing tools and to understand the implications of test results and how they may be applied in risk assessments used in environmental decision-making.
What type of institutional structure would be most appropriate for conducting and managing the research effort? It is beyond the committee's charge and expertise to make specific recommendations either to change or to create government institutions or to alter their funding decisions. The committee will simply sketch its thoughts on an appropriate institutional structure for implementing the vision. Other approaches may also be appropriate.
The committee notes that an institutional structure should be selected with the following considerations in mind:
The realization of the vision will entail considerable research over many years and require substantial funding—hundreds of millions of dollars.
Much of the research will be interdisciplinary and, consequently, to be most effective, should not be dispersed among discipline-specific laboratories.
The research will need high-level coordination to tackle the challenges presented in the vision efficiently.
The research should be informed by the needs of the regulatory agencies that would adapt and use the emerging testing procedures, but the research program should be insulated from the short-term orientation and varied mandates of the agencies.
Interdisciplinarity, Adaptability, and Timeline
The need for an institutional structure that encourages and coordinates the necessarily multidisciplinary research cannot be overstated, and a spirit of interdisciplinarity should infuse the research program. The effort would need to draw on a variety of technologies and a number of disciplines, including basic biology, bioinformatics, biostatistics, chemistry, computational biology, developmental biology, engineering, epidemiology, genetics, pathology, structural biology, and toxicology. Good communication and problem-solving across disciplines are essential, as is leadership adept at fostering interdisciplinary efforts. The effort will have to be monitored continually, with the necessary cross-interactions engineered, managed, and maintained.
The testing paradigm would be progressively elaborated over many years or decades as experience and successes accumulate. It should continue to evolve with scientific advances. Its evolution is likely to entail midcourse changes in the direction of research as breakthroughs in technology and science open more promising leads. Neither this committee nor any other constituted committee will be able to foresee the full suite of possibilities or potential limitations of new approaches that might arise with increasing biologic knowledge. The research strategy outlined above provides a preview of the future and suggests general steps needed to arrive at a new toxicity-testing paradigm. Some of the suggested steps would need to be reconsidered as time passes and experience is developed with new cell-based assays and interpretive tools, but no global change in the vision, which the committee regards as robust, is expected.
The transition from existing tests to the new tests would require active management, involvement of the regulatory agencies, and coherent long-range planning that invests in the creation of new knowledge while refining current testing and, correspondingly, stimulating changes in risk-assessment procedures and guidelines. Over time, the research expertise and infrastructure involved in testing regimes could be transformed in important ways as the need for animal testing decreases and pathway-related testing increases.
The committee envisions that the new knowledge and technology generated from the proposed research program will be translated into noticeable changes in toxicity-testing practices within 10 years. Within 20 years, testing approaches will more closely reflect the proposed vision than current approaches. That projection assumes adequate and sustained funding. As in the Human Genome Project, progress is expected to be nonlinear, with the pace increasing as technologic and scientific breakthroughs are applied to the effort.
Cross-Institution and Sector Linkages
The research to describe cellular-response networks and toxicity pathways and to develop the complementary human biomonitoring and surveillance strategy would be part of larger ongoing efforts in medicine and biotechnology. Funding of that research is substantial in medical schools and other academic institutions, some U.S. federal and European agencies, and the pharmaceutical, medical, and biotechnology industries. Links among different elements in the research community involved in relevant research will be needed to capitalize on the new knowledge, technologies, and analytic tools as they develop. Mechanisms for ensuring sustained communication and collaboration, such as data-sharing, will also be needed.
Some form of participation by industry and public-interest groups should be ensured. Firms have a long-term interest in the new paradigm, and most stand to gain from more efficient testing requirements. Public-health and environmental interest groups, as well as those promoting alternatives to animal testing, should also be engaged.
A large-scale, long-term research program is needed to elucidate the cellular-response networks and individual toxicity pathways within them. Given the scientific challenges and the knowledge development required, substantial funding will be needed. The committee envisions a research and test-development program similar in scale to the National Toxicology Program (NTP) or the Institute for Systems Biology in Seattle, Washington.
The success of the project will depend on attracting the best thinkers to the task, and the endeavor would compete with related research programs in medicine, industry, and government for these researchers. Attracting the best researchers in turn would depend on an adequately funded and managed venture that appears well placed to succeed.
The committee concludes that an appropriate institutional structure for the proposed vision is a research institute that fosters multidisciplinary research intramurally and extramurally. A strong intramural research program is essential. The effort cannot succeed merely by creating a virtual institution to link and integrate organizations that are performing relevant research and dispersing funding among relevant research projects. A mission-oriented, intramural program with core multidisciplinary programs to answer the critical research questions can foster the kind of cross-discipline activity essential for the success of the initiative. There would be far less chance of success within a reasonable period if the research were dispersed among different locations and organizations without a core integrating and organizing institute. A collocated, strong intramural research initiative will enable the communication and problem-solving across disciplines required for the research and assay development.
Similarly, a strong, well-coordinated, targeted extramural program will leverage the expertise that already exists within academe, pharmaceutical companies, the biotechnology sector, and elsewhere and foster research that complements the intramural program. Through its intramural and highly targeted extramural activities, the envisioned research institute would provide the nexus through which the new testing tools would be conceived, developed, validated, and incorporated into coherent testing schemes.
The committee envisions the research institute as funded and coordinated primarily by the federal government, given the scale of the necessary funding, the multiyear nature of the project, and the links to government regulatory agencies. That does not mean that there will be no role for other stakeholders. Biotechnology companies, for example, could cofund specific projects. Academic researchers could conduct research with the program’s extramural funds. Moreover, researchers in industry and academe will continue making important progress in fields related to the proposed vision independently of the proposed projects.
The key institutional question is where to house the government research institute that carries out the intramural program of core multidisciplinary research and manages the extramural program of research. Should it be an existing entity, such as the National Institute of Environmental Health Sciences (NIEHS), or a new entity devoted exclusively to the proposed vision? The committee notes that the recognized need for research and institutional structures that transcend disciplinary boundaries to address critical biomedical research questions has spawned systems-biology institutes and centers at biomedical firms and several leading universities in the country. However, the committee found few examples in the government sector. The Department of Energy (DOE) Genomics: GTL Program seeks to engineer systems for energy production, site remediation, and carbon sequestration based on systems-biology research on microorganisms. In its review of this DOE program, NRC (2006) found collocated, vertically integrated research to be essential to its success.
If one were to place the proposed research program into an existing government entity, a possible choice would be the NTP, a multiagency entity administered and housed in NIEHS. The NTP has several features that suggest it as a possible institutional home for the research program envisioned here, including its mandate to develop innovative testing approaches, its multiagency character, the similarities between its Vision and Roadmap for the Future and what is envisioned here, and its expertise in validating new tests through the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods and its sister entity, the Interagency Coordinating Committee on the Validation of Alternative Methods, and in -omics testing at its Center for Toxicogenomics. It is conceivable that the NTP could absorb the research mandate outlined here if its efforts were dramatically scaled up to accommodate the focused program envisioned. If it were placed in the NTP, structures would have to be put in place to ensure that the day-to-day technical focus on short-term problems of high-volume chemical testing would not impede progress in evolving testing strategies. As the new test batteries and strategies are developed and validated, they would be moved out of the research arm and made available for routine application.
The committee considered housing the proposed research institute in a regulatory agency and notes that this could be problematic. The science and technology budgets of regulatory agencies have been under considerable stress and appear unlikely to sustain such an effort. Although EPA’s NCCT has initiated important work in this field, the scale of the endeavor envisioned by the committee is substantially larger and could not be sufficiently supported if recent trends in congressional budgeting for EPA continue. For example, EPA’s science and technology research budget has been suboptimal and decreasing in real dollars for a number of years (EPA 2006, 2007).
The research portfolio entailed by the committee’s vision will also require active management to maintain relevance and the scientific focus needed for knowledge development. Although sufficient input from regulatory agencies is needed, the institute should be insulated from the short-term orientation of regulatory-agency programs that depend on the results of toxicologic testing.
In the end, the committee noted that wherever the institute is housed, it should be structured along the lines of the NTP, with intramural and focused extramural components and interagency input but with its own focused mission and funding stream.
Scientific Surprises and the Need for Midcourse Corrections
Research often brings surprises, and today’s predictions concerning the promise of particular lines of research are probably either pessimistic or optimistic in some details. For example, the committee’s vision of toxicity testing stands on the presumption that a relatively small number of pathways can provide sufficiently broad coverage to allow a moderately sized set of high- and medium-throughput assays to be developed for the scientific community to use with confidence and that any important gaps in coverage can be addressed with a relatively small set of targeted assays. That presumption may be found to be incorrect. Furthermore, the establishment of links between perturbations and apical end points may prove especially challenging for some end points. Thus, as the research proceeds and learning takes place, adjustments in the vision and the research focus can be anticipated.
In addition to the program oversight noted above, the research program should be assessed every 3-5 years by well-recognized scientific experts independent of vested interests in the public and private sectors. The assessment would weigh practical progress, the promise of methods on the research horizon, and the place of the research in the context of other research, and it would recommend midcourse corrections.
In the traditional approach to toxicity testing, the whole animal provides for the integration and evaluation of many toxicity pathways. Yet each animal study is time-consuming and expensive and results in the use of many animals. In addition, many animal studies need to be done to evaluate different end points, life stages, and exposure durations. The new approach may require individual assays for hundreds of relevant toxicity pathways. Despite that apparent complexity, emerging methods allow testing of many pathways extremely rapidly and efficiently (for example, in microarray or multiwell-plate formats). If positive signals from the assays can be used with confidence to guide risk management, the new approach will ultimately prove more efficient than the traditional one.
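The efficiency argument above is essentially combinatorial: in vivo studies multiply across end points, life stages, and durations, whereas plate-based assays multiply across pathways and concentrations but run in parallel at low cost per well. The sketch below makes that arithmetic concrete; every number in it (chemical counts, animals per study, plate size, and so on) is a hypothetical figure chosen only for illustration, not a value from the committee's report.

```python
# Illustrative throughput arithmetic for traditional vs. pathway-based testing.
# All numeric inputs are assumed values for the sake of the example.

def animal_study_burden(chemicals, end_points, life_stages, durations,
                        animals_per_study=50):
    """Studies and animals needed if each end point / life stage /
    duration combination requires its own in vivo study."""
    studies = chemicals * end_points * life_stages * durations
    return studies, studies * animals_per_study

def in_vitro_assay_runs(chemicals, pathways, concentrations,
                        wells_per_plate=384):
    """Assay wells and plates needed to test every chemical against
    every toxicity pathway over a concentration series."""
    wells = chemicals * pathways * concentrations
    plates = -(-wells // wells_per_plate)  # ceiling division
    return wells, plates

if __name__ == "__main__":
    studies, animals = animal_study_burden(chemicals=100, end_points=5,
                                           life_stages=3, durations=2)
    wells, plates = in_vitro_assay_runs(chemicals=100, pathways=200,
                                        concentrations=8)
    print(f"{studies} in vivo studies, {animals} animals")
    print(f"{wells} assay wells, {plates} 384-well plates")
```

Under these assumed figures, 100 chemicals would demand 3,000 animal studies but only a few hundred plates of in vitro assays, which is the scale difference the committee's efficiency claim rests on.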
It is clear, however, that much development and refinement will be needed before a new and efficient system could be in place. For some kinds of toxicity, such as developmental toxicity and neurotoxicity, the identification of replacement toxicity-pathway assays might be particularly challenging, and some degree of targeted testing might continue to be necessary. In addition, the validation process may uncover unexpected and challenging technical problems that will require targeted testing. Finally, the parallel interim process may discover that some categories of chemicals or of toxicity cannot yet be evaluated with toxicity-pathway testing. Nonetheless, the committee envisions the steady evolution of toxicity testing from apical end-point testing to a system based largely on toxicity-pathway batteries in a manner mindful of information needs and of the capacity of the test system to provide information.
In the long term, the committee expects toxicity pathways to become sufficiently well understood and calibrated for batteries of high-throughput assays to provide a substantial fraction of the toxicity-testing data needed for environmental decision-making. Exposure monitoring, human surveillance for early perturbations of toxicity-response pathways, and epidemiologic studies should provide an additional layer of assurance that early indications of adverse effects would be detected if they occurred. The research conducted to realize the committee’s vision would support a series of substantial improvements in toxicity testing in the relatively near term.
REFERENCES

Balls, M., P. Amcoff, S. Bremer, S. Casati, S. Coecke, R. Clothier, R. Combes, R. Corvi, R. Curren, C. Eskes, J. Fentem, L. Gribaldo, M. Halder, T. Hartung, S. Hoffmann, L. Schectman, L. Scott, H. Spielmann, W. Stokes, R. Tice, D. Wagner, and V. Zuang. 2006. The principles of weight of evidence validation of test methods and testing strategies. The report and recommendations of ECVAM workshop 58. Altern. Lab. Anim. 34(6):603-620.
Blount, B.C., J.L. Pirkle, J.D. Osterloh, L. Valentin-Blasini, and K.L. Caldwell. 2006. Urinary perchlorate and thyroid hormone levels in adolescent and adult men and women living in the United States. Environ. Health Perspect. 114(12):1865-1871.
Brazma, A., P. Hingamp, J. Quackenbush, G. Sherlock, P. Spellman, C. Stoeckert, J. Aach, W. Ansorge, C.A. Ball, H.C. Causton, T. Gaasterland, P. Glenisson, F.C. Holstege, I.F. Kim, V. Markowitz, C. Matese, H. Parkinson, A. Robinson, U. Sarkans, S. Schulze-Kremer, J. Stewart, R. Taylor, J. Vilo, and M. Vingron. 2001. Minimum information about a microarray experiment (MIAME)-toward standards for microarray data. Nat. Genet. 29(4):365-371.
Corvi, R., H.J. Ahr, S. Albertini, D.H. Blakey, L. Clerici, S. Coecke, G.R. Douglas, L. Gribaldo, J.P. Groten, B. Haase, K. Hamernik, T. Hartung, T. Inoue, I. Indans, D. Maurici, G. Orphanides, D. Rembges, S.A. Sansone, J.R. Snape, E. Toda, W. Tong, J.H. van Delft, B. Weis, and L.M. Schechtman. 2006. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use. Environ Health Perspect. 114(3):420-429.
EPA (U.S. Environmental Protection Agency). 2006. Science and Research Budgets for the U.S. Environmental Protection Agency for Fiscal Year 2007; An Advisory Report by the Science Advisory Board. EPA-SAB-ADV-06-003. U.S. Environmental Protection Agency, Washington DC. March 30, 2006 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/36a1ca3f683ae57a85256ce9006a32d0/0EDAAECA1096A5B0852571450072E33E/$File/sab-adv-06-003.pdf [accessed April 4, 2007].
EPA (U.S. Environmental Protection Agency). 2007. Comments on EPA’s Strategic Research Directions and Research Budget for FY 2008, An Advisory Report of the U.S. Environmental Protection Agency Science Advisory Board. EPA-SAB-ADV-07-004. U.S. Environmental Protection Agency, Washington DC. March 13, 2007 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/997517EFA5FC48798525729F0073B4D4/$File/sab-07-004.pdf [accessed April 7, 2007].
ICCVAM (Interagency Coordinating Committee on the Validation of Alternative Methods) and NICEATM (National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods). 2003. ICCVAM Guidelines for the Nomination and Submission of New, Revised, and Alternative Test Methods. NIH Publication No. 03-4508. National Institute of Environmental Health Sciences, National Institutes of Health.
Inglese, J., D.S. Auld, A. Jadhav, R.L. Johnson, A. Simeonov, A. Yasgar, W. Zheng, and C.P. Austin. 2006. Quantitative high-throughput screening: A titration-based approach that efficiently identifies biological activities in large chemical libraries. Proc. Natl. Acad. Sci. U.S.A. 103(31):11473-11478.
Lave, L.B., and G.S. Omenn. 1986. Cost-effectiveness of short-term tests for carcinogenicity. Nature 324(6092):29-34.
Lave, L.B., F.K. Ennever, H.S. Rosenkranz, and G.S. Omenn. 1988. Information value of the rodent bioassay. Nature 336(6200):631-633.
NRC (National Research Council). 2000. Scientific Frontiers in Developmental Toxicology and Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 2006. Review of the Department of Energy’s Genomics: GTL Program. Washington, DC: The National Academies Press.
NTP (National Toxicology Program). 2006. Current Directions and Evolving Strategies. National Toxicology Program, National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, NC [online]. Available: http://ntp.niehs.nih.gov/files/NTP_CurrDir2006.pdf [accessed April 4, 2007].
OECD (Organisation for Economic Co-operation and Development). 2005. Guidance Document on the Validation and International Acceptance of New or Updated Test Methods for Hazard Assessment. OECD Series on Testing and Assessment No. 34. ENV/JM/Mono(2005)14. Organisation for Economic Co-operation and Development, Paris [online]. Available: http://appli1.oecd.org/olis/2005doc.nsf/linkto/env-jm-mono(2005)14 [accessed April 4, 2007].