PART IV Uncertainties: Integration with Risk Assessment and Resources

Dealing with Uncertainty in Pharmacokinetic Models Using SIMUSOLV

Gary E. Blau and W. Brock Neely

INTRODUCTION

In the course of investigating the behavior of a chemical, the toxicologist normally doses a mammalian species and follows the movement and distribution of the chemical over a period of time. Having obtained this type of information, it is desirable to have a physiological explanation of the observed results. The usual method of arriving at such an explanation lies in characterizing the concentration-time data by the terms of a mathematical model. If the model is based on known mechanisms, it is possible to predict the behavior of the chemical in other species. The ultimate purpose of these types of studies is to use the results to assess the impact on other species, including man, to help quantify various risk assessment scenarios.

The building of such models is an iterative process requiring a sophisticated mathematical analysis. This paper will discuss the art of model building and demonstrate how the recently developed software package SIMUSOLV (trademark of the Dow Chemical Company) can help the toxicologist analyze pharmacokinetic data in a relatively easy fashion.

MODEL BUILDING

The first step in the building process is to define the problem to be solved. This can range from simply estimating the bioavailability of a chemical for a specific species to the elucidation of the fate and distribution in a variety of mammals, including man.

Once the problem has been stated, the next step is to postulate several physiologically meaningful mechanisms to describe the set of data that has been collected. The third step is to use the data to discriminate between candidate models. To help make this choice the maximum likelihood principle will be introduced. Bayesian methods can also be used (Reilly and Blau, 1974). The utility of this latter approach, however, can best be demonstrated for parameter estimation rather than model discrimination. Once the best model has been identified, classical statistical procedures can be used to measure its adequacy and to help select additional models.

Quite frequently the existing data set does not contain sufficient information to discriminate rival models. In this case additional experiments must be designed and carried out. The type of new information required is determined by using the models to predict the behavior of the system under a variety of different conditions and to find that point where the predicted behavior is the most different. For example, suppose the problem is to determine whether a chemical follows a monophasic or a biphasic clearance. The key data points to discriminate these models would be at the latest time points that are still detectable by the analytical method being used. In this case the answer of where to look for additional information is obvious, but normally this is not true.

It is a big mistake to try to build models only after the data have been collected. Model postulation and analysis should be part of the experimental process. SIMUSOLV has been developed to bring this sophisticated approach to the laboratory so that the experimentalist can quickly examine the data as soon as they are collected. Such instant feedback provides a greater breadth of knowledge with which to plan future experiments.

After the best model has been identified and validated by a residual analysis and/or goodness-of-fit test, the next step is to determine the uncertainty in the parameter estimates. SIMUSOLV produces linear and nonlinear confidence regions for the parameters. In a similar fashion to the model discrimination discussed above, there may be insufficient data to produce acceptable confidence regions. If this happens, additional experiments must be designed to minimize the region of uncertainty of the parameters (Reilly and Blau, 1974). This entire description of the model-building scenario is summarized in Figure 1.

Having gone through this extensive model-building process, the final task is the obvious one of using the model to solve the problem under consideration. Since model building is based on data, a word of caution is necessary. Data have errors associated with them. This uncertainty is an inherent part of both the model selected and the parameters estimated. Any use of the model must reflect this uncertainty in terms of confidence limits in the values predicted by the model, as well as confidence regions around the estimated parameters.

FIGURE 1 Sequential model-building procedure.

Finally, a model is developed to answer a specific question. There is always a great tendency to use the model for something other than its intended purpose. This is fraught with many dangers and should be avoided if at all possible. In the remainder of this paper, reduction to practice of these philosophical concepts will be accomplished with SIMUSOLV.

UNCERTAINTIES AND ERROR ANALYSIS IN MODEL BUILDING

To illustrate the concept of model building, consider the case of two models (M1 and M2) characterized by the following equations:

For M1: y_i = f1(k1, k2, x_i) + e_i
For M2: y_i = f2(k1, k2, k3, x_i) + e_i        (1)

Each model is a function of two or more parameters, i.e., rate constants for clearance, metabolism, absorption, etc., and a single independent variable (x_i), usually time. In both cases y_i is the dependent variable, such as concentration in the blood, and e_i is the experimental error in the ith measurement.

The first problem in uncertainty is to examine the two models and apply a set of criteria (to be discussed below) to make a decision as to which one is best. In other words, find the most suitable of the models described in Equation 1, the one which best describes the data. This is done by comparing how well the models fit the data. The comparison is done when the models are examined at their individual best. The actual procedure is accomplished by measuring the probability of generating the observed concentration-time data by each model for a given set of parameters. The method of maximum likelihood accepts it as obvious that the values of the parameters k1, k2 for M1 and k1, k2, k3 for M2 which maximize this probability are the best. These parameters are called maximum likelihood estimators, denoted k1*, k2* for M1 and k1*, k2*, k3* for M2. The function which calculates the probabilities for any set of parameters is called the likelihood function, denoted L1(k1, k2) for M1 and L2(k1, k2, k3) for M2.

To distinguish one model from another, the ratio of likelihood functions is evaluated by using the maximum likelihood estimators. A ratio of 10 is ordinarily taken as showing a difference in plausibility, whereas 100 denotes a strong preference for one model over the other (Reilly and Blau, 1974). For example, if L1(k1*, k2*)/L2(k1*, k2*, k3*) for Models 1 and 2 were 100, there would be a strong preference for M1 being better at explaining the data than M2. The above heuristics assume that the number of parameters in each model is the same. If the number is different, then the likelihood ratios must be somewhat higher than 100 to discriminate between them.
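
The likelihood-ratio comparison can be sketched with general-purpose tools as well. The fragment below is only an illustration, not the SIMUSOLV procedure: it assumes independent, normally distributed errors with constant variance, uses hypothetical monoexponential and biexponential clearance models, and invents a small data set simply to show how the maximized likelihoods of two candidates would be compared.

```python
# Minimal sketch of likelihood-ratio model discrimination (not the SIMUSOLV code).
# Assumes independent, normally distributed errors with constant variance; the two
# candidate models and the data are hypothetical.
import numpy as np
from scipy.optimize import minimize

t = np.array([0.5, 1, 2, 4, 6, 8, 10])                       # sampling times (h), hypothetical
y = np.array([0.95, 0.80, 0.55, 0.28, 0.16, 0.10, 0.07])     # blood concentrations, hypothetical

def model1(k, t):                                  # M1: monoexponential clearance
    c0, k1 = k
    return c0 * np.exp(-k1 * t)

def model2(k, t):                                  # M2: biexponential clearance
    a, k1, b, k2 = k
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

def max_log_likelihood(model, k0):
    # For Gaussian errors the maximized log-likelihood depends only on the
    # residual sum of squares, so sigma^2 can be profiled out.
    def ssr(k):
        r = y - model(k, t)
        return np.sum(r * r)
    fit = minimize(ssr, k0, method="Nelder-Mead")
    n = len(y)
    sigma2 = fit.fun / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0), fit.x

L1, k1_hat = max_log_likelihood(model1, [1.0, 0.3])
L2, k2_hat = max_log_likelihood(model2, [0.8, 0.5, 0.2, 0.05])
ratio = np.exp(L1 - L2)     # L1(k1*)/L2(k2*); ~10 suggests, ~100 strongly favors M1
print(k1_hat, k2_hat, ratio)
```

With real data, the resulting ratio would be read against the 10 and 100 heuristics above, bearing in mind that M2 carries an extra parameter.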

The above illustration can be generalized to M models as follows. For M candidate models of the form:

y_i = fn(kn, t_i) + e_i     (n = 1, 2, . . ., M),        (2)

where kn is a vector of parameters for the nth model selected. For each of the M models, SIMUSOLV can be used to calculate the maximum likelihood estimates kn*. From these estimates, likelihood ratios can be determined for all pairs of models. The heuristics can be applied as described above in the two-by-two comparison test, and the most suitable model can be selected. An example of this technique is described in the section "Applications."

The second problem in uncertainty is obtaining the best estimate of the parameters for the model selected. When the method of maximum likelihood is used, this task is embedded within the discrimination problem as described above. In other words, by comparing the models at their optimum, the best set of values is automatically generated. The determination of these maximum likelihood estimates is a nontrivial task, except for the special case in which the models in Equation 1 are linear. Unfortunately, this situation does not occur with phenomenologically based models; consequently, it is necessary to apply estimation procedures (Bard, 1974). SIMUSOLV uses both direct and indirect algorithms for parameter estimation.

In the direct method, different values of the parameters are selected, and the likelihood function is estimated for each. The set which gives the lowest value of the likelihood function is deleted, and a new point is selected according to a procedure called the flexible polygon method (Bard, 1974). This process of adding and deleting points is continued until the maximum value of the likelihood function is obtained.

Indirect methods not only use values of the likelihood function but also its curvature to direct the search for the maximum value. In SIMUSOLV, the generalized reduced gradient method is used (Lasdon et al., 1978). From an initial guess of the maximum, succeeding estimates are selected along a direction of improved function values. Determination of this direction comes from a knowledge of the gradient, which can be determined numerically or analytically. The former is usually preferred because the user does not need to supply any information. Numerical approximations to the gradient, however, frequently lead to less than adequate directions. This problem has been overcome in SIMUSOLV by solving both the sensitivity and the model equations to yield the gradients directly. This method, called the direct decoupled method, has reduced the computational time over the conventional reduced gradient method by an order of magnitude (Dunker, 1984).
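
Outside SIMUSOLV, the same two styles of search are available in most numerical libraries. The sketch below is purely illustrative: it minimizes a hypothetical negative log-likelihood surface with a simplex (flexible-polygon-type) search and with a curvature-using method; it does not reproduce the generalized reduced gradient or direct decoupled algorithms described above.

```python
# Illustrative only: maximize a likelihood by minimizing its negative with (a) a
# direct simplex search and (b) a gradient-based method. The quadratic surface
# used here is hypothetical; in practice it would come from the model and the data.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(k):
    k1, k2 = k
    # hypothetical surface with its maximum likelihood at k1 = 0.8, k2 = 0.1
    return 100 * (k1 - 0.8) ** 2 + 400 * (k2 - 0.1) ** 2

direct = minimize(neg_log_likelihood, x0=[0.3, 0.5], method="Nelder-Mead")  # direct (simplex) search
gradient = minimize(neg_log_likelihood, x0=[0.3, 0.5], method="BFGS")       # curvature-using search
print(direct.x, gradient.x)
```

The direct search needs only function values, while the gradient-based search also uses slope information, mirroring the distinction drawn above.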

In addition, the state-of-the-art numerical integration package LSODE (Livermore Solver for Ordinary Differential Equations) is being used to solve all the differential equations associated with the model (Hindmarsh, 1982). Despite the sophistication of the techniques included in SIMUSOLV, every effort has been made to make these capabilities invisible to the user. However, the user must be aware that the true maximum likelihood estimates require this level of technology to ensure efficient use of the computer resources and achievement of the proper solution.

Once point estimates are obtained by maximizing the likelihood function, their uncertainty must be quantified. SIMUSOLV uses several methods to handle this quantification.

The first method, called linearization, represents the region of uncertainty as an ellipse in parameter space (e.g., two-dimensional contour plots for the two parameters k1 and k2 in Model 1). Here, the likelihood function is linearized by a Taylor's series expansion about the maximum likelihood estimates. These confidence regions are drawn based on a statistical F distribution (Draper and Smith, 1981). A 95% confidence region means that if the experiment were repeated 100 times, 95 times the maximum likelihood estimates would fall within this region. The linearization method is attractive in that the confidence regions are easily generated by the computer (Reilly, 1976). Unfortunately, linearizing the function distorts the shape of the confidence regions. Rather than ellipses, confidence regions may take on different shapes dependent on the information content of the experimental data.

The second method available in SIMUSOLV to determine these confidence regions, called the nonlinear method, does not linearize the likelihood function but uses the familiar F statistic to determine the region. In Problem 2 in the section "Applications," it will be shown just how significant these departures from linearity can be in the shape of the regions. The determination of the confidence regions requires a large amount of computational effort because the likelihood function must be evaluated for all points in parameter space. It is also an approximate method because the F statistic, which is being used, is valid only for linear models.
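
For readers who want to see the mechanics of the linearization method, the fragment below sketches how an approximate joint confidence region might be computed for a two-parameter model. Everything in it is an assumption for illustration: the model form, the data, and the parameter estimates are hypothetical, the error variance is taken as constant, and the F-based cutoff follows the standard linear-regression recipe rather than SIMUSOLV's implementation.

```python
# Sketch of a linearized (approximate) joint confidence region for two parameters.
# Assumes a least-squares fit of a hypothetical model C(t) = C0*exp(-k*t) with
# constant error variance; not the SIMUSOLV procedure.
import numpy as np
from scipy.stats import f as f_dist

t = np.array([0.5, 1, 2, 4, 6, 8])
y = np.array([0.90, 0.78, 0.60, 0.35, 0.21, 0.12])   # hypothetical observations
c0_hat, k_hat = 1.02, 0.35                           # assumed maximum likelihood (least-squares) estimates

def model(c0, k):
    return c0 * np.exp(-k * t)

# Jacobian of the model at the estimates (analytical here for clarity)
J = np.column_stack([np.exp(-k_hat * t), -c0_hat * t * np.exp(-k_hat * t)])
resid = y - model(c0_hat, k_hat)
n, p = len(y), 2
s2 = resid @ resid / (n - p)                         # estimate of the error variance
cov = s2 * np.linalg.inv(J.T @ J)                    # linearized covariance of (C0, k)

# Linearized 95% joint region:
# (theta - theta_hat)' cov^-1 (theta - theta_hat) <= p * F(0.95; p, n - p)
crit = p * f_dist.ppf(0.95, p, n - p)
print(cov, crit)
```

The matrix cov defines the ellipse; the nonlinear method would instead contour the likelihood (or residual sum of squares) itself against the same F cutoff.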

The third method in SIMUSOLV for representing the confidence regions, called the exact method, requires no assumptions. It is based on a subjective interpretation of statistics called Bayesian methods (Reilly and Blau, 1974). By this method the degree of belief in an event is quantified, versus the familiar objective or frequency-of-occurrence interpretation. The Bayesian approach applied to the quantification of parameter uncertainty is expressed by the following theorem:

Pr(k|y) ∝ Pr(k) Pr(y|k),

where Pr(k|y) is the posterior probability that the parameter k is the true value given the set of data y, Pr(k) is the prior probability that the parameter k is the true value before the data were collected, and Pr(y|k) is the likelihood of generating the data y for the set of parameters k. Although the above likelihood is different in interpretation, it gives the same values as the likelihood function discussed earlier. To apply the method, it is necessary to multiply the believed value of the parameters before the experiment is conducted by the value of the likelihood function calculated after the experiments are performed. These values must be normalized so that the total values of the posterior probabilities add to 1.0. If no prior information is available for the parameters before the experiment is conducted, a uniform, noninformative prior distribution is assumed by SIMUSOLV (Bard, 1974). By using this posterior probability, SIMUSOLV calculates regions of uncertainty by using frequency-of-occurrence arguments independent of any assumptions about the form of the likelihood function. Once again, an example later in the text will illustrate the effect of the assumptions inherent in the last two methods.

SIMUSOLV allows the user to construct confidence regions by any of these methods. The computational burden associated with the nonlinear and exact methods is prohibitive for large problems (where the number of parameters is greater than four). They must be used, however, if there is any suspicion about the information content of the data. The Bayesian method should be used exclusively if prior information on the parameters is available.
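
When only one or two parameters are involved, the exact (Bayesian) calculation can be sketched as a brute-force grid evaluation. The fragment below assumes a uniform prior, a Gaussian likelihood with a known measurement error, and a hypothetical one-parameter clearance model; it illustrates the normalization of the posterior, not the SIMUSOLV algorithm.

```python
# Sketch of the Bayesian ("exact") approach on a one-parameter grid.
# Uniform (noninformative) prior, Gaussian likelihood with known sigma; all values hypothetical.
import numpy as np

t = np.array([1, 2, 4, 6, 8])
y = np.array([0.70, 0.50, 0.26, 0.13, 0.07])   # hypothetical data
sigma = 0.02                                   # assumed known measurement standard deviation

k_grid = np.linspace(0.05, 1.0, 500)           # grid over the clearance constant k
prior = np.ones_like(k_grid)                   # uniform prior Pr(k)

def log_likelihood(k):
    resid = y - np.exp(-k * t)                 # model C(t) = exp(-k*t), with C0 = 1 assumed
    return -0.5 * np.sum((resid / sigma) ** 2)

post = prior * np.exp([log_likelihood(k) for k in k_grid])
post /= np.trapz(post, k_grid)                 # normalize so the posterior integrates to 1

# A 95% credible interval from the cumulative posterior
cdf = np.cumsum(post) * (k_grid[1] - k_grid[0])
lo, hi = k_grid[np.searchsorted(cdf, 0.025)], k_grid[np.searchsorted(cdf, 0.975)]
print(lo, hi)
```

With more than a few parameters the grid grows combinatorially, which is the computational burden noted above.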

Experimental Error

Before concluding this section, the effect of experimental error on model development must be discussed. Because the method of maximum likelihood is being used, it is necessary to accommodate the structure of the experimental error in the model-building process. To simplify the analysis, it is frequently assumed that the errors in different experiments are (1) independent of one another, (2) uncorrelated, (3) of constant variance, and (4) normally distributed. If these assumptions are not reasonably valid, the consequences for the statistical analysis may be serious. To illustrate, suppose that blood plasma concentrations are measured by an analytical procedure in which the error in the measurement is a constant fraction of the quantity being measured. Let the model expressing the blood plasma concentrations as a function of rate parameters be as follows:

C_i = f(k1, k2, . . ., t_i) + e_i,        (3)

where the kj's are the individual rate constants, and the error e_i is a constant fraction of the amount being measured. The variance of the error in the observations will not be constant if the blood level extends over any considerable range. On the other hand, if the assumption is made that the variance of the error is constant, the consequences both for parameter estimation and model building could be equally serious. For example, during a long clearance phase from the body, the model will fail to give proper weight to any tailing in the curve from secondary processes. This will be demonstrated more clearly in the next section ("Applications") when the data in Problem 1 are analyzed.

In the past, much effort has been expended in getting models into convenient form for plotting on ordinary or special graph paper. A biological example is the use of Lineweaver-Burk plots for determining kinetic constants in enzyme systems. In most cases this convenience is achieved at the cost of distorting the error structure. When curves are fitted by eye or by simple linear least squares, constant variance is almost invariably assumed. For example, one method of plotting blood clearance data is by using logarithmic graph paper. This assumes that the elimination from the blood follows a first-order process and can be expressed as:

dC/dt = -kC,        (4)

where C is the blood concentration, and k is the elimination rate constant, which on integration yields the following nonlinear equation:

C = C0 exp(-kt)·e,        (5)

where e is the error in measurement and is a multiplicative function of the concentration. By making the log transform of Equation 5, a linear equation results. In this case the error becomes an additive function of the log of the concentration, as shown in Equation 6:

ln C = -kt + ln C0 + ln e.        (6)

Plotting the logarithm of the concentration C versus time yields a straight line. The straight line, however, is achieved by assuming that the scatter in the data is a constant fraction of the quantity being measured when, in fact, it might be that the scatter in the data is constant. If it is the latter, then the width of the error bars as the concentration becomes small will make the interpretation of a straight line from Equation 5 misleading.

It is generally much better to write the model directly in the form of Equation 1 and use SIMUSOLV, which can accommodate the error structure. It is more the rule than the exception that the error, while unpredictable, depends to some extent on the magnitude of the quantity being measured. That is, the absolute value of the error usually tends to be large when larger quantities are measured. In the SIMUSOLV program the error is described by the following model (Reilly et al., 1977):

Variance(y_i) = Variance(e_i) = σ²[f(k, x_i)]^γ,        (7)

where σ and γ are statistical parameters of the error model. The latter is called the heteroscedasticity parameter. Usually its value will be between 0, in which case the error variance is a constant, and 2, when the variance is proportional to the quantity being measured. Equation 7 is used to weight the measurements to allow for changes in error variance. Although the maximum likelihood approach is restrictive in requiring explicit assumptions concerning the form of the experimental error, any unknown parameters such as σ and γ appearing in the error model are estimated along with the pharmacokinetic parameters of the model.
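
One way the error model of Equation 7 might be folded into a likelihood is sketched below. The model, data, and starting values are hypothetical, and the code simply estimates the heteroscedasticity parameter γ alongside the kinetic parameters; it is not the SIMUSOLV error routine.

```python
# Sketch of a log-likelihood using the error model of Equation 7:
# Var(y_i) = sigma^2 * [f(k, x_i)]^gamma. The model and the data are hypothetical.
import numpy as np
from scipy.optimize import minimize

t = np.array([0.5, 1, 2, 4, 6, 8, 10])
y = np.array([0.92, 0.76, 0.52, 0.26, 0.14, 0.08, 0.05])   # hypothetical concentrations

def neg_log_likelihood(params):
    c0, k, log_sigma2, gamma = params
    f = np.maximum(c0 * np.exp(-k * t), 1e-12)   # model prediction f(k, x_i), kept positive
    var = np.exp(log_sigma2) * f ** gamma        # Equation 7; gamma = 0 gives constant variance
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - f) ** 2 / var)

fit = minimize(neg_log_likelihood, x0=[1.0, 0.3, np.log(1e-3), 1.0], method="Nelder-Mead")
c0_hat, k_hat, _, gamma_hat = fit.x
print(c0_hat, k_hat, gamma_hat)
```

Setting γ = 0 recovers the constant-variance assumption, while values near 2 give the small concentrations much more weight in the fit.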

APPLICATIONS

The SIMUSOLV computer program was used to analyze the three simulated problems presented below. This is a package that has been developed at the Dow Chemical Co. to aid the nonmathematician in building and solving models containing systems of algebraic and ordinary differential equations. After the program is written, SIMUSOLV allows the investigator to match experimental data with different models. Based on the statistical discussion in the previous section, guidance is provided in deciding which model is "best." If the information content of the experimental data is inadequate to make such decisions, SIMUSOLV will help the user design additional experiments to make the final model selection. Finally, SIMUSOLV contains graphical routines to display simulated results.

Problem 1

Assume that an animal has been given a 14C-labeled chemical by means of a single oral dose. At periodic time intervals blood samples are collected and analyzed for total radioactivity. The data in Table 1 are the result of such a hypothetical experiment.

TABLE 1 Simulated Sequential Blood Concentration-Time Data Following an Oral Dose of 45 mg

Time (h)    Blood (mg/liter)
0.5         0.10
1.0         0.11
1.5         0.082
2.0         0.057
3.0         0.035
4.0         0.024
6.0         0.0091
8.0         0.0050
10.0        0.0036
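
As a rough preview of what the analysis below does more carefully, the Table 1 values can be fitted with any general-purpose curve fitter. The sketch assumes a hypothetical one-compartment model with first-order absorption and elimination, C(t) = A[exp(-ke·t) - exp(-ka·t)], and uses unweighted least squares, so it ignores the error-structure issues discussed above; it is not the paper's answer to Problem 1.

```python
# Rough least-squares fit of a one-compartment oral-absorption model to the Table 1 data.
# The model form and starting values are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0])                     # h
c = np.array([0.10, 0.11, 0.082, 0.057, 0.035, 0.024, 0.0091, 0.0050, 0.0036])   # mg/liter

def one_compartment(t, A, ka, ke):
    # Bateman-type curve: first-order absorption (ka) and elimination (ke)
    return A * (np.exp(-ke * t) - np.exp(-ka * t))

popt, pcov = curve_fit(one_compartment, t, c, p0=[0.2, 2.0, 0.5], maxfev=10000)
A, ka, ke = popt
print(A, ka, ke, np.sqrt(np.diag(pcov)))   # point estimates and crude standard errors
```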

TABLE 2 An Example of Physiological Data Needed for a Generic Model: Male 300-g Sprague-Dawley Rat

A. Anatomical Compartment Dimensions

Compartment      Mass (g)   Capillary Volume (ml)   Extraction Ratio (ml/g)   Elimination Rate (liter/h)
Lung             1.27       0.126                   0.86                      0.0
Kidney           2.10       0.013                   17.4                      0.85
Liver            11.71      0.090                   0.31                      0.025
Gut              4.60       0.016                   0.36                      0.0
Spleen           0.54       0.0046                  0.36                      0.0
Testes           2.73       0.0022                  0.31                      0.0
Muscle           150.0      0.194                   0.18                      0.0
Carcass          111.46     0.516                   0.33                      0.0
Venous blood     0.0a       12.90                   1.0                       0.0
Arterial blood   0.0a       4.30                    1.0                       0.0

B. Blood Flow Rates Among Anatomical Compartments

Source            Destination      Flow Rate (ml/h)
Kidney            Venous blood     1,241
Liver             Venous blood     1,522
Gut               Liver            947
Spleen            Liver            237
Testes            Venous blood     61
Carcass           Venous blood     1,795
Arterial blood    Kidney           1,241
Arterial blood    Liver            338
Arterial blood    Gut              947
Arterial blood    Spleen           237
Arterial blood    Testes           61
Arterial blood    Carcass          1,795
Arterial blood    Muscle           846
Muscle            Venous blood     846
Lung              Venous blood     175
Arterial blood    Lung             175
Venous blood      Lung             5,640
Lung              Arterial blood   5,640

aMost compartments contain both a tissue (mass) and blood (capillary volume) compartment. However, the blood compartments are unusual in that they contain no tissue.
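
The kind of flow-limited mass balance that Table 2 parameterizes can be written in a few lines. The fragment below is a schematic excerpt covering only the liver and the venous blood pool, with an assumed tissue:blood partition coefficient and an assumed constant arterial concentration; it is not the authors' generic rat model.

```python
# Schematic, flow-limited mass balance for one tissue (liver) and venous blood,
# using a few values from Table 2. The partition coefficient P_liv and the dosing
# are assumptions for illustration; this is not the complete generic rat model.
import numpy as np
from scipy.integrate import solve_ivp

Q_liv = 1522.0 / 1000.0   # total blood flow leaving the liver (liter/h), from Table 2
V_liv = 11.71 / 1000.0    # liver volume (liter), approximating 1 g ~ 1 ml
V_ven = 12.90 / 1000.0    # venous blood volume (liter), from Table 2
CL_liv = 0.025            # hepatic elimination clearance (liter/h), from Table 2
P_liv = 2.0               # liver:blood partition coefficient (assumed)
C_art = 1.0               # arterial concentration held constant here (mg/liter, assumed)

def rhs(t, y):
    c_liv, c_ven = y                          # tissue and venous blood concentrations (mg/liter)
    c_out = c_liv / P_liv                     # venous concentration leaving the liver
    dliv = (Q_liv * (C_art - c_out) - CL_liv * c_out) / V_liv
    dven = Q_liv * (c_out - c_ven) / V_ven    # venous pool fed by the liver only in this excerpt
    return [dliv, dven]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=0.01)
print(sol.y[:, -1])                           # concentrations after 5 h
```

Extending the sketch to the full set of compartments is mostly a matter of repeating the same balance with the flows listed in part B of the table.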

live metabolites has been completed (Keller et al., in press). In Figure 6 are shown the chemical reactions involving the production of the putative ultimate carcinogen of benzo(a)pyrene, benzo(a)pyrene diol epoxide (BPDE), and its reaction with nuclear DNA to form covalent adducts. BPDE is conjugated with glutathione (GSH) by glutathione S-transferase to form the unreactive diol conjugate. This pathway effectively removes BPDE from reaction with DNA. The glutathione S-transferase pathway is the rate-limiting step in the detoxification of most polyaromatic hydrocarbon carcinogens. In lung tissue, alternative pathways are limited, making the lung highly dependent on the functioning of the glutathione pathway. Even so, the lung has limited stores of glutathione.

FIGURE 6 The overall scheme of benzo(a)pyrene chemical reactions involving the metabolism of benzo(a)pyrene to benzo(a)pyrene diol epoxide, and the reaction of benzo(a)pyrene diol epoxide with glutathione and nuclear DNA to form covalent adducts. The detoxification of benzo(a)pyrene diol epoxide by glutathione is inhibited by glutathione S-sulfonate, which is formed by a reaction between sulfite and glutathione disulfide and by added BCNU.

Exposure of cells or the lung to sulfur dioxide or bisulfite results in the formation of glutathione S-sulfonate (GSSO3H) from the naturally occurring glutathione disulfide (GSSG). GSSO3H is a competitive inhibitor of glutathione S-transferase (Leung et al., 1985, in press). Exposure of cells to sulfite results in the competitive inhibition of this major detoxification pathway of benzo(a)pyrene [B(a)P].

GSSO3H and GSSG are reduced by the same enzyme, glutathione reductase (Leung et al., 1985). These two substrates compete with each other for reduction by glutathione reductase.

The Km and Vmax values for the two substrates differ significantly.

Production of GSSG also depends on the availability of reducing equivalents within the cell. Reduced nicotinamide adenine dinucleotide phosphate (NADPH) is the principal source of reducing equivalents, but NADP+ may be increased by the flow of reducing equivalents through other pathways. NADPH is also consumed by other pathways within the cell. At present, it is only possible to estimate the oxidation of GSH to GSSG in general. Experimental values for GSH and GSSG serve as the basis for choosing the initial values in the cell.

The reactions described are interdependent, and some compete with each other. Thus, the overall fate of B(a)P through metabolism, detoxification, and reaction with DNA is not easily predictable from a qualitative or intuitive analysis. The rates of these reactions can be described mathematically, including the efflux of GSSG and GSH from cells, by the equations shown in Table 3. The form of the equations suitable for simulation using SCoP is shown in Table 4. The ease of conversion from conventional mathematical form to that needed by the program is a major strength. The names of variables can be made more logical through the use of words descriptive of the variable. The integrated form of the Michaelis-Menten equation is used to describe the enzymatic reactions.

TABLE 3 Equations Describing Benzo(a)pyrene, Glutathione, and Sulfite Metabolism and DNA Adduct Formation

d[GSH]/dt = k3[GSSG][SO3] - k1[GSH] + 2Vgssg + Vgsso3h - Vex + k5[SO3](1/[GSH]) - Vegsh
d[GSSG]/dt = k1[GSH]/2 - k3[GSSG][SO3] - Vgssg - Vegssg + peroxidase
d[GSSO3H]/dt = k3[GSSG][SO3] - Vgsso3h - k20[GSSO3H]
d[EPOXIDE]/dt = k7[DIOL] - Vex - k2[EPOXIDE][DNA]
d[ADDUCT]/dt = k2[EPOXIDE][DNA] - k4[ADDUCT]
Vgssg = Vmgssg[GSSG] / (Kmgssg + [GSSG])
Vgsso3h = Vmgsso3h[GSSO3H] / (Kmgsso3h + [GSSO3H])
Vex = Vmex[EX] / (Kmex(1 + [GSSO3H]/Ki) + [EX])
Vegsh = k10 exp(-k8 t) + k11 exp(-k9 t)
Vegssg = k14 exp(-k12 t) + k15 exp(-k13 t)

TABLE 4 Modifications of Mathematical Equations Describing SO2-Benzo(a)pyrene Interaction in a Form Usable by the Simulation Language SCoPa

D gsh = k3 * gssg * so3 - k1 * gsh + 2 * V_gssg + V_gsso3h - V_ex + k5 * so3 * (1/gsh) - V_egsh;
D gssg = k1 * gsh / 2 - k3 * gssg * so3 - V_gssg - V_egssg + peroxidase;
D gsso3h = k3 * gssg * so3 - V_gsso3h - k20 * gsso3h;
D ex = k7 * diol - V_ex - k2 * ex * dna;
D adduct = k2 * ex * dna - k4 * adduct;
V_gssg = Vmgssg * gssg / (Kmgssg + gssg);
V_gsso3h = Vmgsso3h * gsso3h / (Kmgsso3h + gsso3h);
V_ex = Vmex * ex / (Kmex * (1 + gsso3h / Ki) + ex);
V_egsh = k10 * exp(-(k8 * Time)) + k11 * exp(-(k9 * Time));
V_egssg = k14 * exp(-(k12 * Time)) + k15 * exp(-(k13 * Time));

aSee Table 3 for the conventional mathematical representation.

To increase the precision of measurement of the components of the GSH pathway, the amounts of GSSG and GSSO3H were increased during the experiment by prior treatment of the cells with the glutathione reductase inhibitor BCNU. By measuring the Km, Ki, and Vmax for all of the enzymatic reactions, the simulation can be solved for the condition with or without BCNU inhibition. In other words, the abnormal physiological state of inhibition by BCNU can be used to predict the normal physiological results occurring in the absence of BCNU. Actual experimental values were measured in the absence of BCNU to confirm the validity of the assumptions made for BCNU inhibition.

Differences in chemical reactivity, such as between syn and anti isomers of BPDE with DNA and glutathione S-transferase substrates, can be accounted for also. Experimental values for these rates can be used in the compiled form of the PK model with SCoP and can be modified interactively without having to recompile the model. Interaction between the modeler and the model is made much more rapid by this technique.

An example of the time course formation of the syn and anti BPDE adducts with DNA over the experiment is shown in Figure 7. The effects of sulfite treatment are shown by the increased B(a)P binding to DNA throughout the course of the experiment. The results agree well with the experimental values found by Leung et al. (in press). Increased B(a)P-DNA adduct formation in the presence of sulfite or sulfur dioxide is mostly due to the inhibition of the glutathione S-transferase pathway by the sulfite metabolite GSSO3H.
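
For readers without access to SCoP, the Table 4 statements translate almost line for line into any general-purpose ODE solver. The sketch below re-expresses several of them (the efflux terms are omitted for brevity); every rate constant, Michaelis constant, and initial amount shown is an arbitrary placeholder, not a value measured by the authors.

```python
# Placeholder re-expression of several Table 4 SCoP statements for a general ODE solver.
# All rate constants, Km/Vmax values, and initial amounts below are arbitrary
# illustrations, not the measured values used by the authors.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(k1=0.01, k2=0.05, k3=0.02, k4=0.001, k7=0.1, k20=0.01,
         Vmgssg=1.0, Kmgssg=0.1, Vmgsso3h=0.5, Kmgsso3h=0.1,
         Vmex=0.2, Kmex=0.05, Ki=0.05,
         so3=0.5, diol=0.01, dna=1.0, peroxidase=0.005)

def rhs(t, y):
    gsh, gssg, gsso3h, ex, adduct = y
    V_gssg = p["Vmgssg"] * gssg / (p["Kmgssg"] + gssg)                 # reductase acting on GSSG
    V_gsso3h = p["Vmgsso3h"] * gsso3h / (p["Kmgsso3h"] + gsso3h)       # reductase acting on GSSO3H
    V_ex = p["Vmex"] * ex / (p["Kmex"] * (1 + gsso3h / p["Ki"]) + ex)  # GST, competitively inhibited
    d_gsh = p["k3"] * gssg * p["so3"] - p["k1"] * gsh + 2 * V_gssg + V_gsso3h - V_ex
    d_gssg = p["k1"] * gsh / 2 - p["k3"] * gssg * p["so3"] - V_gssg + p["peroxidase"]
    d_gsso3h = p["k3"] * gssg * p["so3"] - V_gsso3h - p["k20"] * gsso3h
    d_ex = p["k7"] * p["diol"] - V_ex - p["k2"] * ex * p["dna"]
    d_adduct = p["k2"] * ex * p["dna"] - p["k4"] * adduct
    return [d_gsh, d_gssg, d_gsso3h, d_ex, d_adduct]

sol = solve_ivp(rhs, (0, 60), [5.0, 0.5, 0.0, 0.0, 0.0], method="LSODA")  # minutes
print(sol.y[-1, -1])   # DNA adducts at 60 min
```

Switching between the BCNU-inhibited and uninhibited conditions would then amount to changing the relevant Km, Ki, and Vmax entries.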

FIGURE 7 Formation of the anti (a) and syn (b) BPDE adducts with DNA.

Using this model and a simple calculation of the intracellular sulfite concentrations that are likely from exposure to the sulfur dioxide concentrations used in three studies of the cocarcinogenicity of sulfur dioxide with polyaromatic hydrocarbons (Laskin et al., 1976; Pauluhn et al., 1985; Pott and Stober, 1983), we can predict that a marked increase in DNA adducts occurred in the study done by Pauluhn et al. (1985), in which there was a significant increase in the number of lung tumors and a shortened time to tumor appearance. Only very small increases of 3-4% in DNA adducts were predicted to have occurred in the other two studies. The changes in numbers and time to tumor for these studies were marginal and of questionable statistical significance (Figure 8).

FIGURE 8 Predicted increase in amounts of syn-BPDE adduct over control from studies by Laskin et al. (1976), Pott and Stober (1983), and Pauluhn et al. (1985).

Our model is still only a crude approximation of the actual flux of B(a)P through the lung cell and the effects of sulfite or sulfur dioxide on these reactions. The relationship between increased DNA binding and tumor rates is unknown at present. The rate of removal of adducts under these conditions has not been measured. The model overestimates the adduct formation as a consequence.

SHARING RESOURCES

As stated at the outset of this paper, the simulation of toxicological events, either pharmacokinetics or pharmacodynamics, is but a special case of the more general methodology of simulation of physical, chemical, and biological events. The same mathematical approaches are applicable to all of these fields, as are the same computational resources. Fortunately for toxicologists, much of the developmental work has been accomplished in other fields, particularly engineering, and is now available for application in toxicology.

The sharing of resources, especially computational resources, provides greater power to pharmacokinetics and pharmacodynamics than could be justified by the field alone. The NBSR provides a major computation facility specifically designed for biological simulation (see Table 5).

Training in simulation is available from the NBSR and other NIH simulation resources at two levels. Introductory classes are designed for researchers with little or no background in computer usage or programming. The NBSR presents introductory classes lasting four and a half days several times a year at locations where there is a group of 8 to 15 students and a suitable cluster of microcomputers for teaching. The NBSR introductory class is built around the SCoP software package, and students can take a copy of the software home with them to continue their work after the class is over. Researchers with no prior programming experience can be developing their own simulation programs by the end of the class. Advanced simulation courses are also available or in preparation by the NBSR.

TABLE 5 Computation Facilities at the National Biomedical Simulation Resource

VAX 11/750
Array Processors (MAP-6430 and Mini-MAP)
Special Parallel Coprocessors for Simulation (AD-10 and AD-100)

These courses are for the researcher with some previous experience and cover such topics as the use of special purpose processors, a wider range of simulation languages, and advanced numerical techniques for equation solving.

THE TOXIN CONCEPT

To facilitate the exchange of information in the toxicology community, the Toxicology Information Network (TOXIN) has been established within the NBSR. The goals of TOXIN are to improve communication and understanding among toxicologists about mathematical modeling, to provide access to PK and risk assessment models, to develop consensus on PK and risk assessment models through collaborative use of the same model on the same computer, and to provide access to special data bases needed for modeling. A schematic diagram of the relationship between users and TOXIN is shown in Figure 9.

Users in the United States can interconnect with TOXIN through the commercial data communications network TYMNET. Users enter TYMNET through a local telephone number using regular telephone lines. Microcomputers are encouraged as smart terminals for use with TOXIN, but any terminal can be used. Certain terminals have limited graphics, which may prevent the use of some programs. Depending upon the needs of the user, either a TOXIN account or a regular NBSR collaborators account provides access to the facilities of the NBSR, including the specialized computing facilities. Electronic mail for exchange of information, models, and data files is available.

PK models contributed to TOXIN are placed in common computer files available to every TOXIN user. The models can then be used to run simulations with the same computer code on the same computer by different investigators. Problems, improvements, enhancements, and comments on the models can be made by users and sent to the contributor of the model or to the general community of TOXIN users through the electronic mail system. In this manner a consensus can be built about the critical aspects of PK models. The selection of physiological parameters for PK models is a particularly critical area in which comparisons of different values may prove especially useful. As better physiological values are amassed, they can be stored in data tables in a form useful for modeling. Alternative models can be compared and potentially integrated to form second- and third-generation models.

As more PK models are developed and made available to TOXIN users, an archive of PK and pharmacodynamic models can be accumulated. Historical reference will then be possible. At present such an archive does not exist, and original computer codes are inaccessible to investigators beyond the originator.

FIGURE 9 Schematic diagram of the relationship between users and TOXIN.

If PK models are to have a major impact on regulatory decision making, the establishment of an archive and the requisite data for risk estimation is essential for the future review and improvement of regulations.

Selection among different PK models is now difficult to make because direct comparisons are generally not possible. Storage of multiple PK models in a form that can be easily used by investigators will provide a means of systematic testing and further validation of PK modeling approaches. Estimation of errors within different PK models can be made more easily, and the methodology of estimating error in PK models (as discussed by Gary E. Blau and W. Brock Neely, this volume) is made more feasible.

TOXIN also provides access to commercial programs that are expensive and difficult to justify for a single laboratory. At present, ACSL, CSSL, SIMNON, ADSIM, SCoP, and SCoPFIT are available as simulation languages on the NBSR and are accessible through TOXIN. Should user demand warrant other commercial software, the languages supported by the NBSR can be expanded.

FUTURE TRENDS

The rapid development of simulation languages should make simulations of all kinds much more accessible to nonprogrammers. These languages have gone through several revisions and are improving rapidly. Reductions in costs are likely as the programs become more widely used. Versions for microcomputers will provide even greater access. Even those programs not supported by microcomputers should become more available through the use of shared resources and the telecommunications network.

Shared resources provide the opportunity to accumulate libraries of PK models that can be used by multiple users. Consensus can be built and improvements in shared PK models can be accelerated through comparison of predictions for a variety of compounds. Development of special data bases for physiological parameters can be achieved. Much of our knowledge of organ blood volumes, flow rates, and gross anatomy is gained from tracer techniques. Pharmacokinetic studies provide similar data, and the concordance of several experimenters' data increases our confidence in these basic physiological values. Errors are likely to be detected when reference values fail to predict experimental results. Data gaps will be identified and filled as physiologically based PK models become more widely used.

Computational devices are evolving rapidly. The drastically reduced costs of data storage and the ease of transmission of masses of data by electronic means will result in more complex models useful for experimental design and, with hope, long-range planning. Fidelity with human experience should increase as PK models are integrated with pharmacodynamic models, making risk assessment more reliable. Distributed data storage will also improve data utilization and decrease repetition of experiments, lowering costs and animal usage and increasing generalization among classes of chemicals. Digital analog devices and their high-level simulation languages should speed computation to real-time rates. Digital equipment reduces the uncertainties and difficulties of conventional wired analog or hybrid computers.

CONCLUSIONS

In summary, there are no major equipment limitations for most PK models in the United States. The specialized computer facilities such as the NBSR at Duke University and other components of the NIH simulation resources provide more than ample power for computation.

Most PK models can be solved on microcomputers using commercial or public domain software. The rapid development of simulation software provides an opportunity for toxicologists to use the software firsthand to simulate their experiments, to assist in the design of experiments, and to extrapolate from animal experiments to humans.

ACKNOWLEDGMENTS

This work was supported in part by grants ES01859, ES02916, CA14236, and RP-940-5 from the National Institutes of Health and by a grant from the Electric Power Research Institute. We wish to thank Drs. K. H. Leung and D. A. Keller for sharing their data and models on sulfur dioxide and benzo(a)pyrene metabolism and Drs. R. J. Francovitch and C. R. Shoaf for sharing their data and model on nickel metabolism. Messrs. M. I. Tayyeb and J. Sandy and Ms. K. Wilkinson provided excellent technical assistance.

REFERENCES

Francovitch, R. J., C. R. Shoaf, M. I. Tayyeb, and D. B. Menzel. 1986. Modeling nickel dosimetry from kinetic measurements in rats. The Toxicologist 6:528.
Keller, D. A., K. H. Leung, and D. B. Menzel. In press. Glutathione, sulfite, and benzo(a)pyrene interactions: A mathematical model. The Toxicologist.
Laskin, S., M. Kuschner, A. Sellakumar, and G. V. Katz. 1976. Combined carcinogen-irritant animal inhalation studies. Pp. 190-213 in Air Pollution and the Lung, E. F. Aharonson, A. Ben-David, and M. A. Klingberg, eds. New York: John Wiley & Sons.
Leung, K. H., G. B. Post, and D. B. Menzel. 1985. Glutathione S-sulfonate, a sulfur dioxide metabolite, as a competitive inhibitor of glutathione S-transferase, and its reduction by glutathione reductase. Toxicol. Appl. Pharmacol. 77:388-394.
Leung, K. H., D. A. Keller, and D. B. Menzel. In press. Glutathione S-sulfonate inhibition of glutathione conjugation with benzo(a)pyrene epoxides. Toxicol. Appl. Pharmacol.
Miller, F. J., D. B. Menzel, and D. L. Coffin. 1978. Similarity between man and laboratory animals in regional pulmonary deposition of ozone. Environ. Res. 17:84-101.
Miller, F. J., J. H. Overton, Jr., R. H. Jaskot, and D. B. Menzel. 1985. A model of the regional uptake of gaseous pollutants in the lung. I. The sensitivity of the uptake of ozone in the human lung to lower respiratory tract secretions and exercise. Toxicol. Appl. Pharmacol. 79:11-27.
Pauluhn, J., J. Thyssen, J. Althoff, G. Kimmerle, and U. Mohr. 1985. Long-term inhalation study with benzo(a)pyrene and SO2 in Syrian golden hamsters. Exp. Pathol. 28:31.
Pott, F., and W. Stober. 1983. Carcinogenicity of airborne combustion products observed in subcutaneous tissue and lungs of laboratory rodents. Environ. Health Perspect. 47:293-303.