Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment
A comprehensive hazard assessment of a chemical substance generally requires a variety of in vitro and in vivo toxicologic assays as well as evaluations of physical properties.
The selection of individual screening tests depends greatly on the setting and on specific regulatory requirements. For example, the current practice of the U.S. Environmental Protection Agency (EPA) under the Toxic Substances Control Act (TSCA), in the absence of more extensive preexisting data, is to screen new chemicals based solely on physicochemical data using quantitative structure-activity relationship (QSAR) models. In this setting, chemical tests may be limited to properties such as boiling point, octanol-water partition coefficient, vapor pressure, and solubility. If environmental fate and transport of substances are not primary concerns, short-term in vivo rodent assays may be used, such as a 28-day feeding study, which examines histopathology in most critical target organs. More comprehensive screening programs have adopted batteries of tests that provide information on different types of toxicity but remain insufficient to fully assess chemical risks. As one example, the Organisation for Economic Co-operation and Development (OECD) has developed the Screening Information Data Set (SIDS), which consists of the 21 data elements shown in Table 5-1. Each toxicity test involves administering a measured amount of a compound to whole organisms or to cells in culture and then measuring indicators of toxic outcomes.
Compared with more extensive tests, screening tests tend to use fewer, higher doses of the compound being studied, fewer test subjects, a shorter observation period, and less extensive evaluation of toxic outcomes. To reduce the use of mammals in laboratory testing, there is a strong impetus to develop and validate screening tests that use cultured cells or lower-order animals, such as worms.
The incorporation of toxicogenomics into screening tests involves measuring gene, protein, or metabolite changes in response to specific doses of an administered test compound at specific time points, with or without the parallel measurement of more traditional markers of toxicity. The critical question about new toxicogenomic techniques is whether they can improve hazard screening by making tests faster, more comprehensive, less reliant on higher-order animals, and more predictive and accurate without being prohibitively expensive.
For a screening test to be useful, it must be capable of detecting the property or state being tested when it truly exists. This is the definition of the "sensitivity" of a screening test. In many cases, screening tests are designed to be highly sensitive, sometimes at the expense of "specificity," the ability of the test to return a negative result when the property or state of concern does not exist. Put another way, hazard screening tests often accept a higher rate of false-positive results in order to avoid missing true hazards through false-negative results.
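These definitions, and the trade-off between them, can be made concrete with a short sketch. The code below computes sensitivity and specificity from confusion-matrix counts and shows how moving the decision threshold on a continuous assay readout shifts the balance between false positives and false negatives. The fold-change values are hypothetical illustrations, not data from any study discussed here.

```python
# Sensitivity and specificity of a hazard screen, illustrated with
# hypothetical continuous assay readouts (e.g., fold-change in a marker gene).

def sensitivity(tp, fn):
    """True-positive rate: fraction of true hazards the screen detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of non-hazards the screen correctly clears."""
    return tn / (tn + fp)

def screen(scores, threshold):
    """Flag a compound as positive when its readout meets the threshold."""
    return [s >= threshold for s in scores]

# Hypothetical readouts for compounds whose true hazard status is known.
hazardous = [2.1, 3.4, 1.2, 4.0, 2.8]
non_hazardous = [0.4, 1.1, 0.9, 1.5, 0.6]

for threshold in (1.0, 2.0):
    flags_haz = screen(hazardous, threshold)      # positives here are correct
    flags_non = screen(non_hazardous, threshold)  # positives here are false alarms
    tp, fn = sum(flags_haz), len(flags_haz) - sum(flags_haz)
    fp, tn = sum(flags_non), len(flags_non) - sum(flags_non)
    print(f"threshold {threshold}: "
          f"sensitivity {sensitivity(tp, fn):.2f}, "
          f"specificity {specificity(tn, fp):.2f}")
```

With these illustrative numbers, the lower threshold (1.0) catches every hazardous compound (sensitivity 1.00) but also flags two safe compounds (specificity 0.60), while the higher threshold (2.0) reverses the trade: specificity rises to 1.00 and sensitivity falls to 0.80. A screen tuned to avoid missed hazards would choose the lower threshold and accept the extra false positives.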
When the data generated by screening tests are continuous, as is the case with gene and protein expression and metabolite assays, the selection of thresholds for positive and negative results plays a dominant role in determining the