Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment
While toxicogenomics remains an area of active research, a number of universal principles have emerged. First and foremost is the value of broad sampling of biologic variation (Churchill 2002; Simon et al. 2002; Dobbin and Simon 2005). Many early experiments used far too few samples to draw firm conclusions, possibly because of the cost of individual microarrays. As the cost of microarrays and other toxicogenomic technologies has declined, experiments have begun to include sampling protocols that provide better estimates of biologic and systematic variation within the data. Still, high costs remain an obstacle to large, population-based studies. It would be desirable to introduce power calculations into the design of toxicogenomic experiments (Simon et al. 2002). However, uncertainties about the variability inherent in the assays and in the study populations, as well as interdependencies among the genes and their levels of expression, limit the utility of power calculations.
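To make the role of power calculations concrete, the sketch below estimates a per-group sample size for detecting a given fold change in a single gene, using a standard two-sample normal approximation with a Bonferroni-adjusted significance threshold to account for testing thousands of genes simultaneously. This is an illustrative simplification, not a method from the report: the function name, default values, and the choice of Bonferroni correction are assumptions, and, as the text notes, unknown assay variability and gene-gene correlations limit how far such a calculation can be trusted in practice.

```python
from math import ceil
from statistics import NormalDist

def samples_per_group(delta, sigma, n_genes=10_000, alpha=0.05, power=0.9):
    """Illustrative per-group sample size for a two-group comparison.

    delta   -- smallest log2 fold change worth detecting
    sigma   -- assumed per-gene standard deviation of log2 expression
    n_genes -- number of genes tested (drives the multiplicity correction)

    Uses the classic two-sample z-approximation
        n = 2 * ((z_{1-a/2} + z_{power}) * sigma / delta)**2
    with a Bonferroni-corrected significance level a = alpha / n_genes.
    """
    nd = NormalDist()
    a = alpha / n_genes                  # crude multiple-testing correction
    z_alpha = nd.inv_cdf(1 - a / 2)      # two-sided critical value
    z_power = nd.inv_cdf(power)          # quantile for the desired power
    n = 2 * ((z_alpha + z_power) * sigma / delta) ** 2
    return ceil(n)

# Example: a twofold change (delta = 1 on the log2 scale) with sigma = 0.5
# already requires on the order of 15-20 animals per group once the
# threshold is corrected for 10,000 tests -- far more than many early
# microarray studies used.
print(samples_per_group(delta=1.0, sigma=0.5))
```

Varying the inputs shows the qualitative point of the passage: halving the detectable effect size roughly quadruples the required sample size, and stricter multiplicity correction (more genes tested) pushes it higher still.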
FIGURE 3-1 Overview of the workflow in a toxicogenomic experiment. Regardless of the goal of the analysis, all such experiments share some common elements. However, the underlying experimental hypothesis, reflected in the ultimate goal of the analysis, should dictate the details of every step in the process, starting from the experimental design and extending beyond what is presented here to the methods used for validation.