5
Interpretation and Modeling of
Toxicity-Test Results
INTRODUCTION
Nearly all the problems inherent in the toxicity assessment of single agents
from the selection and measurement of health end points through the specifica-
tion of dose or exposure and the details of experimental design to the prediction
of risk are exacerbated in the toxicity assessment of complex chemical mix-
tures. Each of these problems is amplified by the possibility of interactions or
other unpredicted behavior among the mixture components.
The methods of biostatistics and quantitative modeling play important roles
in toxicology. The statistical/mathematical methods may be applied in four
ways. First, they provide some guidance in developing an overall strategy to
test the toxicity of mixtures economically. Second, they offer directions for
experimental design so that maximal information can be extracted from the
results of studies that cannot address directly every possible combination,
dose, and regimen of exposures. Third, they offer a framework for interpreting
data that often present unusual features and questions. Fourth, they offer a
choice of approaches for estimating the consequences of biologic assumptions
about interactions with an eye to developing more realistic models.
Quantitative models have been applied to assess the joint toxicity or effect of
chemicals since the seminal work of Bliss (1939), Plackett and Hewlett (1948),
and Finney (1952). This work has not focused on the unintentional, usually
low-level exposure to chemicals in the environment, but was undertaken to
understand whether the joint application of two pesticides or the joint admin-
istration of two or more drugs led to outcomes that were greater than the sum
of the outcomes predicted on the basis of individual applications of the
substances.
COMPLEX MIXTURES
The environment exposes us to thousands of chemicals, some appearing
intentionally (such as food additives) and some appearing as adventitious con-
taminants. Even if humans were exposed to no more than 100 potentially toxic
agents, the possibility of unusual or unexpected combined effects is sizable.
The matrix of only single-dose combinations of two of these agents at a time
would contain 4,950 cells. If the probability that the presence of one agent
influences the toxicity of another were as low as 0.01, there would still be 50
combinations of two agents that would interact in some unpredicted way.
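The arithmetic behind these figures can be reproduced directly; the number of two-agent combinations is the binomial coefficient C(100, 2):

```python
from math import comb

n_agents = 100
pairs = comb(n_agents, 2)            # distinct two-agent combinations
p_interact = 0.01                    # assumed chance one agent modifies another's toxicity
expected_interactions = p_interact * pairs

print(pairs)                   # 4950
print(expected_interactions)   # 49.5, i.e., roughly 50 interacting pairs
```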
In Chapter 1, "interaction" was referred to as deviation from dose-additive
behavior expected on the basis of dose-response curves obtained with individ-
ual agents. To a biostatistician, the definition includes information on the un-
derlying dose-response model and on units of measure. For ordinary linear
models, "interaction" refers to a departure from response additivity. Different
measures of response can lead to different conclusions concerning departure
from additivity. For nonlinear models, such as log-logistic, log-linear, log-
probit, and multistage models, "interaction" can be defined as a departure
from additivity for a transformation of the response variable. For example,
with multiplicative models, there is additivity for the logarithm of response.
Thus, when the word "interaction" is used, one must make certain of the units
in which toxicity or dose is expressed, as well as the assumptions about the
nature of joint action that was predicted by the model used. Kodell and Pounds
(in press) reviewed the use of the nomenclature and noted that different scien-
tific disciplines define these concepts differently; moreover, even within a dis-
cipline, there may be disagreement about the definition of specific concepts.
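The scale dependence described above can be made concrete with a small numerical sketch (the fold-change responses are hypothetical): two agents whose joint effect is multiplicative look synergistic on the raw response scale but show no interaction at all after a log transformation.

```python
import math

# Hypothetical fold-change responses over an untreated baseline of 1.0.
r_x, r_y = 2.0, 3.0          # responses to each agent alone
r_joint = r_x * r_y          # assume a multiplicative joint action: 6.0

# Raw scale: additivity of excess responses predicts 1 + (r_x - 1) + (r_y - 1).
additive_prediction = 1.0 + (r_x - 1.0) + (r_y - 1.0)   # 4.0 < 6.0: apparent synergy

# Log scale: the same joint response is exactly additive -- no "interaction."
log_prediction = math.log(r_x) + math.log(r_y)
print(math.isclose(math.log(r_joint), log_prediction))  # True
```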
Models, the major topic of this chapter, constitute attempts to quantitatively
describe the important, measurable aspects of the complex biology or toxicol-
ogy of chemical mixtures. To date, all models have been imperfect, yet they
have been helpful in estimating the hazard associated with exposure to a com-
plex mixture. Most toxicology experiments obtain fairly limited data, because
resources are constrained. Models are a means of characterizing and summa-
rizing experimental results, often relating them to underlying biology. If this is
done well, it becomes possible to extend limited experimental results beyond
the dose range from which they were developed. Models furnish a way to test
the consistency of experimental results with biologic assumptions or hypothe-
ses. Tests for the existence of interaction are based on model assumptions.
Estimation of responses to low-level exposures introduces a difficult prob-
lem in assessing the toxicity of mixtures, as well as of single agents. Experi-
ments are usually conducted at exposures far greater than those occurring envi-
ronmentally. The logical problem in extending these results lies in the fact that
the presence (or absence) of interaction at high doses does not guarantee its
presence (or absence) at other doses. Hence, experimental results alone are
likely to be inadequate for assessing the toxicity of a mixture or the presence
of substantial interaction at doses comparable with human exposure. Models
have been constructed to estimate responses that are not experimentally
observable.
Models supply a framework for guiding experimentation or for experimental
designs that might be quite different from those traditionally used. For exam-
ple, where there is sufficient confidence in a model for regulatory use, it might
not be necessary to conduct experiments at joint doses; reliance on experimen-
tal results with separate components might be sufficient.
Models contribute to the estimation of the toxicity of complex mixtures. One
important facet of a complex-mixture research strategy is the development and
improvement of models that relate toxic response to exposure. The develop-
ment of models requires greater attempts to understand and describe biologic
mechanisms, so that better mathematical models can be developed.
WHAT TO TEST? WHOLE MIXTURES VERSUS SEPARATE
COMPONENTS OF MIXTURES
Approaches to estimating the toxicity of mixtures can emphasize the testing
of the mixtures themselves or the testing of their individual components, even-
tually building up to the mixture (see Chapter 3). When separate components
are tested, the toxicity of a mixture is estimated by testing more and more
complex combinations of the components or by applying models to estimate
the toxicity of the combination of constituents. When whole mixtures are
tested, the components of the mixture and their relative quantities might be
unknown, and the mixture can at times be divided into other mixtures that can
be examined for their contribution to overall toxicity.
The choice of approach depends on the information available, the complex-
ity of the mixture, and the goals of the assessment. If the objective of testing is
the toxicity assessment of the emissions from a dump site with a small number
of known constituents, the approach emphasizing knowledge about the constit-
uents, individually and jointly, might be useful in estimating the potential tox-
icity of the dump site at defined exposures. This approach could also be used to
estimate the toxicity of various combinations of a limited number of the constit-
uents. If the objective is the toxicity assessment of a very complex mixture,
such as gasoline, the constituents might be too numerous (or even unknown) to
permit testing combinations and all components. Sometimes, a combination of
the two approaches is optimal. For example, if the objective is the identifica-
tion of key toxic components or fractions in a mixture, various fractions of the
mixture could be tested to identify the most toxic ones. Separate components of
those fractions, in turn, could be tested in combination, to identify interactions
among them.
This chapter emphasizes relatively simple mixtures (usually of only two
materials) and the use of models to predict responses to their joint exposure,
given information about their constituents. This emphasis eases the development
of concepts, models, and experimental designs. Many of the results can
be extended to more complex mixtures whose components or major fractions
can be defined. The models developed to date are largely for the estimation of
cancer risks associated with lifetime exposure.
This chapter is organized in four sections. The first discusses a number of
mathematical dose-response models, how they can be used to evaluate the
experimental strategies presented in Chapter 3, and how to apply such strate-
gies. The second section addresses the importance of knowledge of pharmaco-
kinetics in mixture toxicity assessment. The third section provides experimen-
tal design criteria and guidelines that arise from the previous considerations
regarding dose-response models. Finally, the fourth section offers recommen-
dations for research that could generate better methods of dose-response mod-
eling and interpretation. Appendixes providing the mathematical bases for the
assertions made are included at the end of the report.
IMPLICATIONS FOR STRATEGIES
The most difficult issue posed by an assessment of a complex mixture's
toxicity is the possibility of unexpected interactions among the components of
the mixture. Complex-mixture toxicology is a young science and depends
heavily on existing tools and concepts. This chapter, while discussing direc-
tions for development, emphasizes currently available tools and provides some
guidance for their use.
Evaluation of Mixture Components
The specific experimental strategy selected depends, in part, on how much is
known about the mechanisms leading to a particular biologic response. Carcin-
ogenesis models provide an example of the utility and limitations of models.
Appendix E illustrates the use of these models for mixtures.
Expectation of Additivity at Low (Response) Doses
The multistage model is one of the most widely used models to predict the
incidence of cancer at low exposures from incidence data associated with
higher doses. It describes the dose-response relationship over a full range of
doses and thus permits response extrapolation at untested doses. At sufficiently
low doses, the excess response or risk associated with exposure to a mixture of
two components will be equal to the sum of the excess responses (risks) associ-
ated with each of the components; that is, low-dose additivity will hold.
"Low" dose as used in Appendix E is defined as a dose associated with an
excess risk of less than 0.1-1% and a small relative risk of cancer in the
exposed group (e.g., less than 1.01).* The known human carcinogens with
environmental exposures at "high" doses for the general population include
cigarette smoke, ultraviolet light, and radon. Some workplace exposures might
also lead to high risk for some populations.

TABLE 5-1 Incremental Probability of Cancer Due to Exposure to
Carcinogenic Agents X and Y

                 Cancer Probability at Agent X Dose
Agent Y Dose     0                A (low)           B (high)
0                0                1 × 10⁻⁵          9.0494 × 10⁻²
A' (low)         1 × 10⁻⁵         2.0045 × 10⁻⁵     9.0892 × 10⁻²
B' (high)        9.0494 × 10⁻²    9.0892 × 10⁻²     9.8403 × 10⁻¹
Appendix E gives an example of a mixture with a much greater than additive
effect in a specific multistage model; in fact, the example assumes one of the
largest synergistic effects that could be practically estimated with a two-dose
bioassay design with 50 animals in each of four treatment groups (control for
each agent; control agent X, treatment agent Y; control agent Y, treatment
agent X; treatment agents X and Y). Table 5-1 summarizes the results of the
example. At dose A of agent X and no exposure to agent Y, the probability of
cancer is 1 × 10⁻⁵; a similar probability exists for dose A' of agent Y and no
exposure to agent X. Given simultaneous exposure to agents X and Y at doses
A and A', the risk is 2.0045 × 10⁻⁵, compared with an expectation of 2.000 ×
10⁻⁵ if the risk were additive. If exposure to one of the agents, say X, increases
to dose B, which would give a response of 9.0494 × 10⁻², and exposure
to the other agent remains at dose A', the risk is 9.0892 × 10⁻², compared
with the additivity assumption estimate of 9.04941 × 10⁻². Only when both
doses are relatively "high," at B and B', both giving an individual response of
9.0494 × 10⁻², does the additivity assumption fail. The estimate from joint
exposure is 9.8403 × 10⁻¹, compared with an additivity-only estimate of
1.81784 × 10⁻¹.
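The pattern in Table 5-1 can be checked by summing the individual excess risks (a minimal sketch; the probabilities are copied from the table, and the background risk is zero in this example):

```python
# Excess cancer probabilities copied from Table 5-1 (background risk is zero).
p_A = 1e-5         # agent X alone at low dose A (same value for Y at A')
p_B = 9.0494e-2    # agent X alone at high dose B (same value for Y at B')

def additive(p1, p2):
    """Joint excess risk predicted by simple (response) additivity."""
    return p1 + p2

low_low = additive(p_A, p_A)      # 2.000e-5; the model's joint value is 2.0045e-5 -- close
high_high = additive(p_B, p_B)    # about 1.8e-1; the model's joint value is 9.8403e-1 -- additivity fails
print(low_low, high_high)
```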
Several assumptions are inherent in these results. Appendix E formally
states the assumptions, and several of them are examined here. First, the doses
considered are assumed to be those at the target organ or directly proportional
to those at the target organ. Second, the relative size of the background tumor
rate influences the accuracy of the additivity result. For the multistage model,
the computed additivity result depends on the assumption that the relative risks
of each agent are small (e.g., less than 1.01). The additivity property can also
be shown to hold for mixtures of three or more components, as well as for other
dose-response models, such as the Moolgavkar-Knudson, linear-logistic, and
multiplicative models. The difficulty in invoking this result, however, can be
the ignorance about how low doses must be before additivity is achieved.

*The multistage model considered in the appendix also assumes that the components affect single
and different transition rates between stages in the model.
One important result of the development presented in Appendix E is that, for
a very wide range of models, if the model can be specified, the toxicity of the
mixture can be estimated through knowledge of the toxicity of the mixture
components. This may be true even where component-wise additivity does
not hold.
Appendix E presents a multistage model for a two-material mixture. Estima-
tion of the values of parameters for this conceptually simple model can require
considerable data, including toxicity data on combinations of the two mixture
components at several dose combinations. For more complex mixtures, esti-
mation of values of model parameters might not be practical. The problem can
be simplified if another model, such as a logistic model, is used to describe the
dose-response surface.
Models cannot be fully validated, so their utility depends on the confidence
that one places in them, which in turn is related to the reasonableness of their
results and their concordance with biologic knowledge and experimental
results. Even in the case of carcinogenesis, for which there has been much
work on models, some assumptions might not be met for specific materials.
For example, the usual application of the multistage model ignores potential
actions of a promoter. If a chemical were identified as a likely promoter, an
alternative approach explicitly treating promoters would be required. Some
specifications of the multistage or Moolgavkar-Knudson model might be ap-
plied to address this issue.
Appendix F reviews the use of models for developmental toxicology. Given
the multiplicity of end points and mechanisms, no single model is likely to be
appropriate. Further research is needed before any of these models is likely to
be accepted.
Appendix D develops and illustrates how new models might be developed.
The set of dose-response models applied to biologic phenomena is small and
consists of the linear, multiplicative, linear-logistic, multistage, Weibull, and
probit models and a few others. These have been used to describe many bio-
logic phenomena. It has been suggested that using a set (or perhaps all) of them
to provide risk estimates might be appropriate for situations where no single
model truly reflects the biology. The set (or some subset) of models could be
applied simultaneously to define a range or limits of the toxic risk for a mixture
at a given dose. One concern with this approach, however, is that all the models
used might be inappropriate. The resulting range might then not bracket
the true risk. A further concern is that one or more of the models in a set might
be so poor in describing the biology that they could yield results far from
the true risks and thus make the range of results so broad as to be essentially
meaningless.

The toxicity of individual components and combinations can be estimated
experimentally through the use of generalized linear models. Appendix G has a
description of this approach and an application of some of the fire-toxicology
data described in the Appendix C case studies. These methods are relatively
easy to apply, require few assumptions, and use currently available computer
software. The principal drawback is that many data points are required, so
experiments must incorporate far more doses than is usually done or can usu-
ally be done. In addition, it can be dangerous to extrapolate outside the range of
observed doses with these methods, because interactions between components
might be dose-dependent in some way that does not clearly manifest itself in
the observed data.
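As one way to make the generalized-linear-model idea concrete (a sketch, not the specific Appendix G analysis; all doses and coefficients below are hypothetical), a logistic model with a dose-by-dose product term can be fitted by iteratively reweighted least squares, and a nonzero product-term coefficient signals departure from additivity on the logit scale:

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=30):
    """Fit a logistic GLM by Newton/IRLS; y holds response proportions in (0, 1)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = p * (1.0 - p)                                   # IRLS weights
        beta = beta + np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - p))
    return beta

# Hypothetical 3 x 3 dose grid for agents X and Y.
dx, dy = np.meshgrid([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
dx, dy = dx.ravel(), dy.ravel()
X = np.column_stack([np.ones_like(dx), dx, dy, dx * dy])    # intercept, main effects, product term

true_beta = np.array([-3.0, 0.8, 0.5, 0.4])                 # 0.4 is the logit-scale interaction
y = 1.0 / (1.0 + np.exp(-(X @ true_beta)))                  # noise-free response proportions

beta_hat = fit_logistic_irls(X, y)
print(np.round(beta_hat, 3))    # recovers [-3.0, 0.8, 0.5, 0.4]
```

With noise-free proportions the fit recovers the generating coefficients exactly; with real bioassay counts the estimates carry sampling error, which is why these methods need many dose groups.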
Evaluation of Composite Mixture
A mixture can be treated as a single unknown substance and its toxicity
evaluated accordingly. Most of the problems associated with this approach
resemble those which arise in the toxicology of single substances and are ad-
dressed by similar models. The major questions are as follows:
· How big a departure is needed from specific mixtures before toxicity
changes?
· What happens when a component is added or removed?
The models and statistical methods available to address these issues were
developed for single substances; although the concepts are the same, a routine
application of these models to complex mixtures could cause misleading inter-
pretations of results. For example, misinterpretation could arise if the relative
composition of the mixture changed as a result of chemical interactions or as a
result of variations in differential absorption, distribution, metabolism, or
elimination with dose or some other factor independent of the mixture. Current
pharmacokinetic models attempt to describe the fate of a single substance and
ignore possible complexities introduced by a mixture. Such complexities could
distort the predicted concentration and composition of a mixture at the target
sites where toxic effects are initiated. Pharmacokinetic assumptions about
compositional changes in the mixture would also have to apply to all the animal
species involved in any extrapolation. If the composition of the mixture
changed between administration and delivery to the target organ in the rat,
similar changes would have to occur in humans if an extrapolation were to be
valid. In addition, assumptions need to be made about the uniformity of re-
sponses of individuals within a species.
The dose-extrapolation issue is also complicated by mixtures. Models are
used to help to predict toxicity of low ambient exposures from observed toxic-
ity of high experimental exposures. According to the most widely used carcin-
ogenesis models, additivity of effects is a reasonable assumption if all the
exposures to all the toxic components of a mixture are small. For high doses, as
noted earlier, substantial deviations from additivity can be observed. If a mix-
ture were tested at a high dose, the estimated toxicity would include the contri-
bution of interactions; that is, the estimated toxicity of the mixture would be the
sum of the toxicities of the individual components plus the interaction terms
(which could be negative if there were antagonism or inactivation of some
components). Extrapolation of toxicity to "low" doses with models developed
for a single toxic agent would reflect the presence of the interaction terms,
although those terms would not have been explicitly measured.
For a dose-extrapolation model to be satisfactory, when the doses of the
components are sufficiently small so that the incidence of the toxicity of con-
cern associated with each is below about 0.1-1% and the relative risks are
small (e.g., less than 1.01), the interaction terms should approach zero. If
explicit allowance for interactions is not made in the extrapolation model, the
toxicity of the mixture at low doses could be incorrectly estimated. If an inter-
action term were positive, the extrapolated risk of the mixture could be over-
estimated at low doses; if it were negative, the risk could be underestimated.
With the strategy in which toxicity information is collected only for the
mixture itself, no information on the toxicity of individual components would
be collected and it would be impossible to account for interaction explicitly in
the analysis indicated by the dose-extrapolation model. Practical consider-
ations, such as ignorance about the components or the existence of too many
components, might require the assessment of the composite as though it were a
single substance. In such a situation, concern about bias will be greatest when
the toxicity due to interaction is relatively large, compared with the toxicity of
the individual constituents.
A compromise for this situation would be to divide the mixture into individ-
ual components or fractions. It might be possible to define fractions so as to
minimize or maximize interactions within each fraction. The goal involved for
the specific test will drive the fractionation carried out.
COMPARATIVE EVALUATION
This approach is based on the premise that studies of the toxicity of one
mixture will provide information about the toxicity of a related mixture. A
prototype for examining a body of such data is the set of cross-mixture extrapo-
lation methods applied by Harris (1983a,b) and based on the methods of Du-
Mouchel and Harris (1983). Recent work on these methods has also been done
by Verducci (1985) and Laird and Louis (1986).
Harris illustrated the approach to estimate the lung-cancer risks of diesel
emissions, given estimates of these risks for two similar substances, coke-oven
emissions and roofing-tar emissions. He does this by assuming that the results
of laboratory bioassays of all three substances can be used to estimate the
relative carcinogenic potencies of diesel emissions and coke-oven or roofing-tar
emissions. These potency estimates are then applied to the lung-cancer risk
estimates of the latter two substances, to allow an estimate of lung-cancer risks
associated with diesel emissions. Fundamental to this approach is the assump-
tion that the relative carcinogenic potencies of diesel emissions are the same for
humans and for the test systems used in the analysis. Harris acknowledged that
that assumption is, at best, an approximation.
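The arithmetic of the cross-mixture step itself is simple; the numbers below are purely hypothetical and are not Harris's estimates:

```python
# Hypothetical inputs: a human lung-cancer unit risk for a reference mixture
# (e.g., coke-oven emissions) and a relative potency of the target mixture
# (e.g., diesel emissions) estimated from laboratory bioassays of both.
reference_human_risk = 6e-4    # excess risk per unit exposure (hypothetical)
relative_potency = 0.25        # target / reference potency in bioassays (hypothetical)

# Key assumption: the bioassay potency ratio carries over to humans.
target_human_risk = relative_potency * reference_human_risk
print(target_human_risk)       # 1.5e-4
```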
The reasonableness of the approximation is unknown. The Harris example
yielded similar upper confidence limits for excess human lung-cancer risks.
Considerable additional experience with this approach is needed for its value to
be established. The advantage of the approach is that it makes use of existing
data, including those from short-term studies. However, the uncertainty in the
approach makes it, at best, useful for screening, in which it might indicate
whether a given mixture is considerably more or less toxic than a similar mix-
ture on which more information is available.
All the above methods could be included in a general strategy defined as a
"global approach" to assess the toxicity of a mixture. The strategy confines
itself to a class of mixtures, such as gasoline, and provides a way to estimate the
toxicity of mixtures in the class, taking into account the differences in composi-
tion of the mixtures. The strategy also avoids the need to assess the toxicity of
every mixture in the class.
The definition of the strategy will vary with the class of mixtures and the
extent of information available. In general, however, the global perspective
advocates testing extreme formulations of a mixture, as well as the more com-
monly used formulations. Hence, information would be available on the great-
est possible variation in the mixture formulations under study. In addition, the
approach recommends the testing of relatively homogeneous fractions or
components of the mixture individually or in combination with a few other
substances.
Satisfying that approach requires new data on different mixture formulations
and on mixture components and their combination. Application of some exist-
ing statistical theory in experimental design can reduce the data to be collected.
Fractional factorial designs are particularly useful. They permit estimation of
the individual effects of each factor, and they provide information on the most
important interactions among factors.
The use of fractional factorial designs depends on the assumption that no
high-order interactions are of consequence. Looking only for low-order inter-
actions and main effects permits designs that place a few animals in each of a
number of previously defined dose groups. For example, suppose we wish to
study simultaneously the effects of 15 constituents, each at a high concentra-
tion and a low concentration. There are 2¹⁵ possible combinations of the 15
materials, each of which can be at one of two levels. There are, however,
fractional factorial designs that involve only 32 (2⁵) possible combinations and
permit estimation of all the effects of the 15 individual constituents and interac-
tions among all pairs of any specified subset of six of the constituents (John,
1971). However, even 32 lifetime full-scale animal bioassays might be eco-
nomically impossible. A way of incorporating short-term tests needs to be
developed (see Chapter 3). Alternatively, models, such as the multistage
model, might be applied where appropriate and where additivity of response
could be assumed in place of assessment of some combination of factors.
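A design of the kind just described can be sketched by crossing 5 "basic" two-level factors (2⁵ = 32 runs) and defining the remaining 10 factor columns as products of basic columns. The generators below are an arbitrary illustrative choice, not the specific design of John (1971):

```python
from itertools import combinations, product

def fractional_factorial_15_in_32():
    """32-run two-level design for 15 factors: 5 basic columns plus 10 generated ones."""
    runs = [list(signs) for signs in product((-1, 1), repeat=5)]   # full 2^5 factorial
    # Define the 10 extra factor columns as three-way products of basic columns.
    # Real designs choose generators to maximize resolution; this choice is illustrative.
    generators = list(combinations(range(5), 3))                   # C(5, 3) = 10 subsets
    return [run + [run[i] * run[j] * run[k] for (i, j, k) in generators]
            for run in runs]

design = fractional_factorial_15_in_32()
print(len(design), len(design[0]))   # 32 runs, 15 two-level factors
```

Because each generated column is a product of distinct basic columns over a full factorial, every one of the 15 columns is balanced (equal numbers of high and low settings) and distinct, which is what allows main effects to be estimated from only 32 runs.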
The comprehensiveness and flexibility of the global approach make it an
appealing framework for addressing complex-mixture toxicity. The specific
methods within the approach for assessing the collected and available infor-
mation require better definition. The best judgments about the promise of
this approach, however, will arise from its application to specific mixture
problems.
PHARMACOKINETIC MODELS
Pharmacokinetic models can be used to describe the absorption, distribu-
tion, metabolism, binding, and elimination of a substance that enters the body.
They thereby provide useful information on the formation and distribution of
the reactive metabolites responsible for the induction of toxic effects in individ-
ual tissues. These models can be used to describe the relationship between the
dose administered in toxicity tests and the dose delivered to the target tissue.
Pharmacokinetic models are mathematical/physiologic constructs that en-
visage a biologic system made up of a smaller or larger number of relevant
physiologic compartments. The interactions among these compartments are
often described by a set of differential equations (Gibaldi and Perrier, 1975)
and may be developed from more general information based on the anatomy
and physiology of the test animal, binding and metabolism in specific organs,
and the solubility in various organs and tissues of the chemical under test (Ramsey
and Andersen, 1984; Andersen et al., 1987).
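A minimal sketch of such a compartmental description (a one-compartment model with first-order absorption and elimination, integrated by simple Euler steps; all parameter values are hypothetical):

```python
# One-compartment pharmacokinetic model with first-order absorption (ka) and
# elimination (ke): dG/dt = -ka*G and dC/dt = ka*G/V - ke*C.
# All parameter values are hypothetical.
ka, ke, V = 1.0, 0.2, 10.0        # absorption /h, elimination /h, volume of distribution L
dose = 100.0                      # mg in the gut compartment at t = 0

def simulate(t_end=24.0, dt=0.001):
    g, c, peak = dose, 0.0, 0.0   # gut amount (mg), plasma concentration (mg/L)
    for _ in range(int(t_end / dt)):
        g_new = g + (-ka * g) * dt
        c_new = c + (ka * g / V - ke * c) * dt
        g, c = g_new, c_new
        peak = max(peak, c)
    return c, peak

final_c, peak_c = simulate()
print(round(peak_c, 2), round(final_c, 3))   # peak and 24-h plasma concentrations
```

Physiologically based models replace this single well-stirred compartment with organ-level compartments (liver, fat, target tissue) linked by blood flows, but the differential-equation machinery is the same.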
One of the potential applications of pharmacokinetics to risk assessment
involves the use of pharmacokinetic models to determine the effective dose of a
compound of interest that reaches the target tissue. Knowing the dose delivered
to the target tissue should then lead to more accurate estimates of risk (Hoel et
al., 1983; Whittemore et al., 1986).
The internal tissue dose for routes of exposure other than the ones used in the
experimental situation may be developed from pharmacokinetic models when-
ever measurements of the physiologic and biochemical constants associated
with the several possible routes are available (Chen and Blancato, 1987). Ex-
trapolation between species is facilitated when physiologic models can be used
to predict the delivered dose in the species of interest (Andersen et al., 1987).
When not all the relevant model characteristics can be measured directly in
humans, however, estimates of values of the parameters of the equations apply-
ing to humans must be obtained by scaling the corresponding animal values.
The application of pharmacokinetic models to complex mixtures is more
involved than their application to single chemicals. The simplest situation with
a mixture would occur if there were no saturation or capacity restrictions on any
of the activation, deactivation, or tissue-binding processes and no chemical
interactions among mixture components. Pharmacokinetic studies could be
carried out on the components of the mixture, and the concentration of a partic-
ular substance or metabolite in each organ could be taken to be the sum of the
tissue doses contributed by each of the components of the mixture. One way to
determine whether this simple situation occurs is to conduct pharmacokinetic
studies on the distribution, metabolism, and clearance of each of the compo-
nents separately and then for the mixture as a whole. The extent to which the
mixture results can be predicted by combining the results for the individual
components indicates the extent of interaction among the components.
If the pharmacokinetic behavior of the mixture differs from that predicted on
the basis of results obtained for the individual components, one possibility is
that the same components and metabolites occur in the mixture study and in the
component studies, but in different proportions. That would suggest competi-
tion among the components for activation, deactivation, and elimination
through the same pathways. Another possibility is that products formed from
the mixture are different from those formed from the components. That could
suggest chemical interactions among the components, rather than competition
with common pathways. In either case, the pharmacokinetic behavior of the
mixture could not be predicted solely from the pharmacokinetic behavior of the
components. The components would need to be considered jointly for model-
ing of the saturation of the resources for which they compete and modeling of
the joint reaction products.
Whether the components of a complex mixture will interact is often un-
known, and the interaction problem usually has to be approached empirically.
In general, a completely empirical approach to identifying and measuring in-
teractions is impossible. However, some interactions can be assumed to occur
only among a few components; fractional factorial designs may be used to test
for interactions with a relatively small number of test combinations.
In developing pharmacokinetic models for toxicity assessment of mixtures,
consideration should be given to the following questions:
· Which constituents (or transformation products derived therefrom) are
known to be associated with toxicity and therefore need to be modeled?
· Can pharmacokinetic data on individual components be combined to pre-
dict the pharmacokinetic behavior of the mixture, or do some components alter
the pharmacokinetic behavior of others?

Much of the standard work on response-surface designs originated in industry. The responses of interest are usually clearly specified, a few properties of
the mixture are considered, and the mixture ingredients are in some sense
uniform. Examples include experiments with gasoline-blending and mixtures
of food ingredients. One could argue that flour, sugar, and shortening are not
chemically similar and that baking a cake does not involve interactions within a
complex biologic system. A more relevant example is the blending of three
pesticides to produce maximal toxicity in an insect species (Cornell, 1981);
this example is interesting, because the physical, rather than chemical, proper-
ties of the constituents (one powder and two liquids) were important in map-
ping the response surface.
Two designs traditionally used in surface analysis are the simplex-lattice and
simplex-centroid designs (Cornell, 1981). Only the relative proportions of the ingredients are assumed to affect the response, so the design space for a mixture of q components is a simplex of q − 1 dimensions. If xi denotes the proportion of the ith component included in a particular treatment combination (i = 1, . . ., q), the simplex consisting of all points (x1, . . ., xq) with 0 ≤ xi ≤ 1 and x1 + . . . + xq = 1 represents all possible treatment combinations
available for use in the experimental design. The simplex-lattice designs place
all points on the boundary of the simplex, where at least one value of xi is zero.
In addition to such boundary points, the simplex-centroid designs include a
point inside the boundary at the center of the simplex where all components are
present in equal proportions (i.e., xi = 1/q for i = 1, . . ., q). These designs are
appropriate for minimizing the variances of the estimated regression coefficients in a polynomial response-surface model.
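The two designs are easy to enumerate. The minimal sketch below (for a hypothetical three-component mixture; the functions are illustrative, not taken from Cornell) generates a {q, m} simplex-lattice and a simplex-centroid design:

```python
from itertools import combinations, product
from fractions import Fraction

def simplex_lattice(q, m):
    """All q-tuples of proportions i/m (i = 0..m) that sum to 1."""
    pts = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            pts.append(tuple(Fraction(i, m) for i in combo))
    return pts

def simplex_centroid(q):
    """Centroids of every nonempty subset of the q components:
    pure blends, 1/2-1/2 binaries, ..., up to the overall centroid 1/q."""
    pts = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            pt = [Fraction(0)] * q
            for i in subset:
                pt[i] = Fraction(1, r)
            pts.append(tuple(pt))
    return pts

lattice = simplex_lattice(3, 2)   # {3,2} lattice
centroid = simplex_centroid(3)    # 2^3 - 1 subsets
print(len(lattice), len(centroid))  # → 6 7
```

The {3,2} lattice consists entirely of boundary blends (some coordinate is zero), whereas the centroid design adds the interior point (1/3, 1/3, 1/3), matching the distinction drawn in the text.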
Response-surface designs have also been developed for studying specific
regions within the design simplex. For example, a neighborhood around a
particular mixture of current interest might be explored with central composite
or rotatable designs (Carter et al., 1983). Alternatively, there might be upper or
lower limits on the relative concentrations of some components. Multiple con-
straints on mixture components can lead to complex polygonal subregions in-
side the simplex, and design points are typically placed on the vertices, edges,
and centroids of higher-dimensional faces of such subregions.
In considering the use of response-surface analysis in toxicologic applica-
tions, one must resolve several technical problems noted previously. In addi-
tion, the adequacy of statistical models for complex mixtures in toxicology is
questionable. In this context, the question of interaction or synergism appears
to be much more complex; in fact, there seems to be no general agreement as to
models for representing the various alternatives, or even as to terminology
(Kodell, 1986).
As a first approach, the natural additive model that is linear in the compo-
nents may be considered for toxicologic applications. This model is both dose-
additive and response-additive. Furthermore, the relative potency of two constituents is simply expressed as the ratio of the corresponding regression
coefficients. Departures from additivity would indicate interaction among the
components and may be represented by quadratic or cubic polynomial models.
However, the assumption of a linear response function might not be reasonable
in many situations.
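For the additive model, the relative-potency calculation reduces to a least-squares fit. In the sketch below the response values are fabricated so that the second constituent is twice as potent as the first:

```python
import numpy as np

# Hypothetical continuous responses y at doses (d1, d2) of two constituents.
X = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [0, 2], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 2.0, 2.0, 4.0, 3.0])

# Fit the dose-additive linear model y = b0 + b1*d1 + b2*d2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2 = coef

# Relative potency is the ratio of the regression coefficients.
relative_potency = b2 / b1
print(round(relative_potency, 3))  # → 2.0
```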
Nonlinear models for continuous responses have been proposed by Ashford
(1981) and Ashford and Cobby (1974), whereas Hewlett and Plackett (1959),
Plackett and Hewlett (1967), and Christensen and Chen (1985) have discussed
the quantal-response case. The optimal placement of design points seems to
depend on the degree of nonlinearity of the response surface. Finney (1971)
addressed the design question in the case of probit analysis and concluded that
the optimal design points are intermediate between the median and extremes of
the tolerance distribution. That conclusion suggests that simplex designs that
put most of the design points on the boundary might not be appropriate for
quantal-response models. Further investigation is needed.
DESIGNS FOR PREDICTING LOW-DOSE RISKS
Optimal designs for low-dose extrapolation with single chemicals require
more information at low doses, because it is desirable to have more informa-
tion at or near the doses about which we wish to make inferences and because
the biologic responses at low doses might be different from those at high doses.
This implies that more doses are needed at the low end of the dose-response
curve, and greater weight will be placed on these doses, with respect to the
number of animals to be assigned there.
Optimal designs for low-dose extrapolation with studies involving a single
substance have been the subject of several investigations (Portier and Hoel,
1983; Krewski et al., 1984, 1986). If P(d) denotes the probability of a toxic
response occurring at dose d, the excess risk over background can be given by
H(d) = P(d) − P(0). The experimental designs developed in these investiga-
tions are constructed to minimize the variance of the estimator of the excess
risk at some fixed dose within the low-dose region based on a particular dose-
response model.
Optimal designs for single chemicals have been developed for the probit,
logit, Weibull, gamma multihit, and multistage models. These designs gener-
ally involve as many doses as there are parameters in the model, including a
control group receiving zero exposure and a high-dose group capable of induc-
ing a relatively high response rate. The intermediate doses tend to be assigned
more animals than the control and high-dose groups.
Optimal designs for binary mixtures could be similarly developed to mini-
mize the variance of the estimated excess risk H(d1, d2) at doses d1 and d2 of chemicals C1 and C2. Although these designs remain to be developed, we are able to provide some preliminary results here for particular special cases. Specifically, we consider the four-parameter logistic model in Equation E-36 of
Appendix E. This model is used not because of any particular biologic appeal,
but rather because it involves only four parameters (the minimum required to
represent the background response rate, the effects of the two chemicals given
separately, and their interaction). Rather than extrapolate directly from high to
low doses, we use the model-based linear extrapolation procedure proposed by
Van Ryzin (1980) for the case of single chemicals, as described in Appendix E.
With binary mixtures, this involves extrapolation along a straight line joining
the point H(d1, d2) = π* and the origin, where (d1, d2) represents any combination of doses leading to an excess risk of π* based on the fitted four-parameter logistic model.
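Because Equation E-36 is not reproduced in this chapter, the sketch below assumes one common four-parameter logistic form (a background term, two slope terms, and an interaction term), with hypothetical parameter values. It locates an equal-dose point giving excess risk π* and then extrapolates linearly through the origin, in the spirit of the Van Ryzin procedure described above:

```python
import math

def P(d1, d2, a=-4.0, b1=1.5, b2=1.0, c=0.5):
    """Four-parameter logistic response (hypothetical parameters):
    background a, main effects b1 and b2, interaction c."""
    eta = a + b1 * d1 + b2 * d2 + c * d1 * d2
    return 1.0 / (1.0 + math.exp(-eta))

def excess_risk(d):
    """H(d, d) = P(d, d) - P(0, 0) along the equal-dose ray d1 = d2."""
    return P(d, d) - P(0.0, 0.0)

def dose_for_excess(target, lo=0.0, hi=10.0, tol=1e-10):
    """Bisection for the equal dose d* with H(d*, d*) = target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_risk(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pi_star = 0.01
d_star = dose_for_excess(pi_star)
slope = pi_star / d_star        # straight line through the origin
low_dose_risk = slope * 0.001   # extrapolated excess risk at a low dose
print(d_star, low_dose_risk)
```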
Because only four parameters are involved, the optimal design will involve
only four treatment combinations. For purposes of illustration, we take these to
be given by (d1, d2) = (0,0), (1,0), (0,1), and (1,1), as in the 2 by 2 factorial
designs discussed previously. This being the case, it remains only to determine
the optimal allocation to these four groups.
The optimal allocations to these four cells with π* = 0.01 are shown in Table 5-3 for selected values of tumor-response probabilities pij, with C1 and C2 assumed to act independently in the logistic model. (Here, we have selected d1 = d2 to simplify the presentation, although this need not be the case.) These results indicate that, as the spontaneous-response rate p00 decreases, the fraction of subjects c00 assigned to the control group increases. The most interesting property of these designs is the relatively low weight c11 assigned to the (1,1) cell in which C1 and C2 are administered jointly. The fact that this design places most weight on the cells in which C1 and C2 are administered separately might simply reflect the near additivity of the excess risks at the point (d1, d2).
To explore this point further, optimal designs for extrapolating from an excess risk of π* = 0.10 were also calculated (Table 5-3). In this case, greater weight is assigned to the (1,1) cell, because additivity of the excess risks for C1 and C2 no longer holds at this point.

TABLE 5-3 Optimal Experimental Designs Based on a Logistic Model for Low-Dose Extrapolation

Response Probabilities        Doses      Optimal Allocation
p00    p01 = p10    p11       d1 = d2    c00     c01 = c10    c11

π* = 0.01
0.01   0.2          0.86      0.105      0.84    0.07         0.03
0.05   0.2          0.54      0.059      0.41    0.28         0.03
0.10   0.2          0.36      0.060      0.29    0.34         0.03

π* = 0.10
0.01   0.2          0.86      0.340      0.60    0.10         0.21
0.05   0.2          0.54      0.305      0.25    0.27         0.21
0.10   0.2          0.36      0.322      0.19    0.31         0.19
DESIGNS FOR INITIATION-PROMOTION STUDIES
Considerable evidence suggests that chemical carcinogenesis sometimes in-
volves an initial irreversible change in target cells (initiation) that is followed
by a partially reversible developmental phase (promotion), which leads to tu-
mor expression. Initiation is thought to involve direct interaction between the
proximate carcinogen and cellular DNA, whereas promotion is considered to
be nongenotoxic. Early evidence of this mechanism was provided by
Berenblum and Shubik (1947a,b, 1949), who applied a single noncarcinogenic
dose of a polycyclic hydrocarbon to a mouse's skin and then administered
croton oil repeatedly until tumors appeared. Similar results have since been
obtained in other tissues, including rodent liver (Pitot and Sirica, 1980).
A biologically motivated mathematical model of carcinogenesis that incor-
porates the concepts of initiation, promotion, and progression is provided by
the two-stage birth-death-mutation model developed by Moolgavkar, Venzon,
and Knudson (Moolgavkar and Venzon, 1979; Moolgavkar and Knudson,
1981). The model assumes three possible fates for normal stem cells: death,
division into normal progeny, and mutation resulting in one normal daughter
cell and an intermediate or initiated cell. The population of initiated cells can
similarly either divide, die, or undergo a second mutation to produce a fully
transformed malignant tumor cell. In the context of this model, an initiator is a
substance that increases the rate at which the first mutation occurs, whereas a
promoter increases the pool of initiated cells available for malignant transfor-
mation. The term "progressor" has been used to describe a substance that
increases the rate of the second mutation associated with the transformation of
an initiated cell to a cancerous cell.
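The cell fates described above can be mimicked with a small Monte Carlo sketch. All rates here are hypothetical, and births and deaths of initiated cells are drawn independently, which simplifies the underlying branching process:

```python
import numpy as np

def two_stage_tumor_prob(mu1, birth, death, mu2, n_normal=10**6,
                         steps=100, trials=400, seed=1):
    """Monte Carlo estimate of the probability that at least one malignant
    cell arises in the two-stage birth-death-mutation model.  All rates
    are per cell per time step; the normal-cell pool is held fixed."""
    rng = np.random.default_rng(seed)
    tumors = 0
    for _ in range(trials):
        initiated = 0
        malignant = False
        for _ in range(steps):
            initiated += rng.binomial(n_normal, mu1)   # first mutations
            if initiated and rng.binomial(initiated, mu2) > 0:
                malignant = True                       # second mutation
                break
            if initiated:
                net = (rng.binomial(initiated, birth)
                       - rng.binomial(initiated, death))
                initiated = max(initiated + net, 0)
        tumors += malignant
    return tumors / trials

# An initiator would raise mu1; a promoter raises the net growth
# (birth minus death) of the initiated-cell pool.  Hypothetical rates:
baseline = two_stage_tumor_prob(mu1=1e-6, birth=0.10, death=0.10, mu2=1e-4)
promoted = two_stage_tumor_prob(mu1=1e-6, birth=0.12, death=0.08, mu2=1e-4)
print(baseline, promoted)
```

Raising the net growth rate of initiated cells expands the pool available for the second mutation, so the "promoted" tumor probability exceeds the baseline, consistent with the definition of a promoter given above.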
In examining a particular initiator-promoter pair, several treatment combi-
nations might be required (Gart et al., 1986). To confirm a postulated initia-
tion-promotion system, it might be necessary to include additional groups sub-
jected to initiation but not promotion and to promotion but not initiation.
Lifetime administration of initiator and promoter can also be considered to
exclude the possibility that either one alone is acting as a complete carcinogen.
Other experimental protocols have involved the application of a second initi-
ator after the administration of the original initiator-promoter pair, to study
effects on the rate of occurrence of the second mutation (Scherer et al., 1984).
This so-called IPI protocol can result in a marked increase in the production
of neoplastic lesions, compared with the more conventional IP protocol, in which the second initiator is absent. In terms of experimental design, a
combination of IP and IPI exposure regimens may be required for differentiation of effects on the two mutation rates involved in the two-stage birth-death-mutation model.
PRACTICAL IMPLICATIONS FOR EXPERIMENTAL DESIGN
If the identification of a carcinogen in a long-term animal experiment is
followed by another long-term animal experiment to develop a firm dose-re-
sponse curve, at least 5 or 6 years will elapse from the start of the study to the
development of the dose-response curve. Because such a long time might im-
ply inadequate protection of the public health, this sequence has rarely (if ever)
been followed. In its place, attempts have been made to use the data developed
in the screening experiments as a basis for developing a dose-response relation-
ship. It appears reasonable to develop omnibus experimental designs that can
be used for both screening and predicting low-dose dose-response relation-
ships.
Such hybrid designs will necessarily involve departures from optimal de-
signs for carcinogen identification or dose-response curve development. In an
attempt to identify a nearly optimal design that will perform reasonably well
for purposes of both screening and low-dose extrapolation, the committee
evaluated the efficiency of different four-point designs. Because of the low
weight assigned to (1,1) cells in the optimal extrapolation designs, these de-
signs are relatively inefficient for screening studies. Conversely, the screening
designs performed better when used for extrapolation. A balanced design (i.e.,
equal numbers of animals per treatment group) appears to provide fairly good
efficiency for both screening and extrapolation and might provide a reasonable
solution to the problem of finding a suitable four-point hybrid design.
Although useful as a benchmark for evaluating other designs, the use of
four-point optimal designs in experiments with two substances can be criti-
cized on several counts. First, with only four treatment groups, no degrees of
freedom are left over to assess the goodness of fit of a four-parameter model,
such as the logistic model. Second, the loss of even a single dose group (for example, because of intercurrent mortality or because the MTD is exceeded) will result in a design with too few points for useful analysis. Third, additional
treatment combinations involving joint exposure to various doses of the two
substances of interest will be required, to yield some idea of the shape of the
underlying response surface without relying on a particular parametric model.
Additional doses can be introduced into the design in a systematic way. For
example, two materials can be given in all combinations of some maximal dose
(D) of each material, half the maximal dose (D/2), one-fourth the maximal
dose (D/4), and controls (0), yielding a total of 16 treatment combinations for
testing two materials jointly. This rectangular design for testing two materials
jointly will consume the resources necessary to test four materials one at a time, although the number of animals at a dose can be reduced or fractional factorial
designs could be considered.
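The 16-combination layout described above is simply the Cartesian product of two four-point dose ladders:

```python
from itertools import product

D = 1.0                          # maximal dose (arbitrary units)
doses = [0.0, D / 4, D / 2, D]   # controls, D/4, D/2, D
design = list(product(doses, repeat=2))
print(len(design))               # → 16
```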
Because toxicity problems are likely to develop at combinations of high
doses, the rectangular designs might become wasteful and fail to yield any
usable response information at combinations of doses d1 and d2 in which both d1 and d2 are high. Exploration of other designs based on attempts to develop equitoxicity for combinations of exposures is thus certainly in order.
Other possibilities to be considered include test schemes for combinations in
which the dose of one substance is fixed and the dose of another substance
varies. This is an appealing procedure for materials to which the range of
human exposure is rather narrow, so that exposures outside this range are rare
enough to be uninteresting in a practical sense.
When the number of chemicals in combination is over two, fractional facto-
rial designs should be considered. Such designs are thought useful when the
effects of several factors are to be studied jointly. They permit estimation of the
individual effects of each factor and provide information about interaction
among factors, on the basis of results of tests with relatively few distinct labo-
ratory mixtures or blends. The efficiencies of such designs hinge on the ab-
sence of high-order interactions, but first screenings that look for only low-
order interactions and main effects might be well served by a design that places
only relatively few animals at each of a specified number of points.
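As a concrete sketch of such a design, a half-fraction of a 2^4 factorial (defining relation D = ABC) tests four substances at two levels in 8 rather than 16 blends; main effects remain estimable because the coded main-effect columns stay mutually orthogonal, each main effect being aliased only with a three-factor interaction:

```python
from itertools import product

# Levels coded -1 (absent/low) and +1 (present/high) for substances A, B, C.
runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                # defining relation: D = ABC
    runs.append((a, b, c, d))

print(len(runs))                 # → 8 of the 16 possible combinations

# The four main-effect columns are mutually orthogonal, so each main
# effect can be estimated from these 8 runs.
for i in range(4):
    for j in range(i + 1, 4):
        assert sum(r[i] * r[j] for r in runs) == 0
```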
FUTURE DIRECTIONS
This chapter provides some ideas for assessing the toxicity of complex mixtures, but many of these ideas need further development and testing, and considerable progress needs to be made before definitive approaches can be recommended. This section highlights some of the research needs.
CANCER MODELS
The models developed to predict cancer risk associated with exposure to
multiple agents are based on a general theory of the biologic mechanisms of
carcinogenesis. These models predict that, even if extensive synergism takes
place at exposures that lead to a directly observable carcinogenic response, one
could still expect that at much lower exposures, response would not be distin-
guishable from additivity. This conclusion is based here primarily on a theoret-
ical argument (see Chapter 2 for related human data). If the idea of low-dose
additivity is to be given general credence, it is necessary to obtain evidence
demonstrating that newly derived mathematical models can predict the joint
effect of agents.
Some evidence that is amenable to new experimentation could be generated relatively quickly and inexpensively. For example, joint-exposure experimentation might be focused on mutagenicity, cell proliferation, and DNA-adduct formation. The general approach would be to predict the joint response to
agents on the basis of results obtained with single agents. The predictions
would be verified by comparison with observed experimental results.
The key in applying a model to a particular chemical mixture is to postulate a
specific mechanism of action for each component of the mixture and to use
whatever information is available on the components to estimate values of the
model parameters. Validation of this approach will require application of the
model to available data sets and comparison of the risk predictions obtained
with parameter estimates from short-term bioassays or other indirect measures
of response with the results of chronic bioassays and ultimately epidemiologic
results. For most mixtures, much of the information necessary for truly critical
analysis of the postulated mechanisms of carcinogenesis is missing. However,
validation of a model with available information is crucial and will be useful
for directing future experimentation with mixtures.
Currently postulated mechanisms of carcinogenesis include the effects of
chemicals on several biologic phenomena, such as DNA replication, cell pro-
liferation, cellular toxicity, DNA-adduct formation, and mutagenicity. Although it has been difficult to incorporate this knowledge into risk assessments
for single materials, it is worth attempting to examine the effects of chemical
mixtures of unknown carcinogenicity on these phenomena. Establishing dose-
response curves can provide some insight into whether a mixture is carcino-
genic, the mechanism of such carcinogenicity, and how the mixture can be
expected to act at low doses.
To predict whether interactions will occur and how a mixture might behave
in environmental exposures, short-term bioassays (such as cell transformation, mutation, and proliferation) might help to determine the potential mechanisms of action of the mixture or its constituents and the shape of the dose-response curve (see Chapter 3). In the absence of chronic carcinogenesis
bioassay data, mathematical models can be derived to describe the dose-re-
sponse relationships observed in short-term bioassays and used to predict the
carcinogenesis dose-response relationship (Thorslund et al., 1987). Research
should be directed to the use of short-term bioassays to establish possible
relationships.
DEVELOPMENTAL EFFECTS AND OTHER NONCANCER END POINTS
The research approach here is similar to that proposed by Murphy (1978):
Postulate underlying biologic mechanisms, infer statistical models, test (to the
extent possible) the reasonableness of the models with experimental and per-
haps even epidemiologic data, and repeat the process as necessary and if possible. Until there is greater understanding of the biologic mechanisms underlying these end points, the development of suitable models will be hampered. In
the interim, perhaps several candidate models could be examined for consis-
tency with available data.
Developmental-effects experiments require fewer resources than cancer bio-
assays, and perhaps more thought should be given to using these experiments
to identify potential interactive effects. The design of such experiments, however, needs development, particularly with respect to exposure to mixtures and their components.
STATISTICAL APPROACHES FOR SITUATIONS WITH NO PREFERRED
DOSE-RESPONSE MODEL
In Appendix D several models have been applied to develop the toxicity
estimates of a simple mixture. Research efforts include defining the set of
appropriate models to apply in a bounding exercise. Several additional data
sets could be examined in conjunction with this approach to help to determine
its reasonableness and to identify a subset of models that appear to perform
best. This approach might have implications for experimental design; these
need to be investigated.
EMPIRICAL MODELING OF THE TOXICITY OF MIXTURES
Generalized linear models are largely new to biologists and toxicologists,
and there is a need to acquaint them with these models. More examples of their
applicability should be sought with existing data sources.
Experimental design and generation of data for generalized linear models
need more attention. Some candidate methods were discussed earlier; they
need to be explored further.
As mixtures become more complex, potential independent variables in gen-
eralized linear models become more numerous. Research is needed to help to
select the appropriate variables for these models.
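As a small illustration of the class of models in question, a quantal-response logistic regression (a binomial generalized linear model) can be fitted by iteratively reweighted least squares; the dose-response data below are fabricated:

```python
import numpy as np

def fit_logistic_irls(X, y, n, iters=25):
    """Binomial GLM with logit link fitted by iteratively reweighted
    least squares.  X: design matrix, y: responders, n: group sizes."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = n * p * (1.0 - p)                         # IRLS weights
        z = eta + (y - n * p) / np.maximum(w, 1e-12)  # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Fabricated quantal data: intercept plus doses of two mixture components.
X = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 2]],
             dtype=float)
y = np.array([2.0, 10.0, 6.0, 20.0, 45.0])    # responders
n = np.array([50.0, 50.0, 50.0, 50.0, 50.0])  # animals per group
beta = fit_logistic_irls(X, y, n)
print(beta)    # fitted intercept and dose coefficients
```

With more mixture components the design matrix simply gains columns, which is where the variable-selection problem noted above arises.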
Models that are based to a limited extent on biologic theory, such as the
Ashford-Cobby model, might be useful, but they are awkward to apply (see
Appendix G). Software is needed to facilitate the use of these models. The
software and the models need to be extended beyond two-substance mixtures.
PHARMACOKINETICS
The role of pharmacokinetics in assessing the toxicity of single agents is
increasingly recognized, but pharmacologic models that allow for interactions
between agents are lacking and need to be developed. They could account, for
example, for responses to mixtures of carcinogens with different mechanisms
of action, such as initiators and promoters.

Methods also need to be developed to incorporate pharmacokinetic studies
into chronic bioassays. Examination of drug studies might help in the design of
such methods. Fractional factorial designs, often applied to chemical mix-
tures, should sometimes be incorporated into pharmacokinetic studies.
Pharmacokinetics might have a major role in testing the toxicity of mixtures,
but the types of pharmacokinetic information that would be most useful
for studying mixture toxicity are not yet well defined. Such information must
be reconciled with the feasibility of estimating values of pharmacokinetic
parameters.
DESIGN OF EXPERIMENTS
This chapter has emphasized binary mixtures. More work is needed to ex-
tend some of the results with binary mixtures to more complex mixtures.
Because of toxicity problems that might arise with joint high-dose expo-
sures, perhaps such combinations should be eliminated or de-emphasized in
experimental designs involving complex mixtures. Ways to deal with this
problem by formulating alternative designs could be usefully explored.
The current needs in response-surface exploration include techniques to re-
duce the number of dose combinations thought necessary to characterize a
surface; ways to identify peaks and cliffs, which are expressions of nonadditive
behavior among materials; and fractional factorial designs, which have been
used with some success in agricultural research. Short-term stepwise or se-
quential testing techniques are clearly needed. Such techniques need to be interactive: that is, the results of experimentation should drive the next experimental steps and ultimately lead to the creation of new models. Some tech-
niques have been applied to nonbiologic problems; their use with toxicology
data should be explored.
STRATEGY DEFINITION
Several strategies for testing complex mixtures are suggested elsewhere in
this report. Formal mathematical modeling has only partially encompassed
these proposed strategies, and additional development has been left for further
research. These strategies include the following:
· A multistep (tier) approach and a matrix approach of the type indicated
here for two-material comparisons, including a process involving critical-
element variability, that is, varying one factor while another is held constant.
· Bioassay-driven fractionation of a complex mixture in which fractions of
a mixture are evaluated in a standard bioassay to determine which components
or combinations of components are responsible for the carcinogenicity (or
other toxicity) of the mixture. Substantial work will be necessary to develop
search and testing strategies that will minimize the effort required to identify
the toxic fractions (and their constituents).

· Analysis of the complex mixture to identify (known) toxic components
and testing of various components (in combination) in an attempt to reproduce
the toxicity of the parent mixture. No effective modeling or combining strate-
gies have been developed to minimize the work necessary to "rebuild" a com-
plex toxic material.
The last process mentioned should be capable of distinguishing noninterac-
tion of components (expected joint-action behavior) from interaction of com-
ponents (both synergism and antagonism) and, in the case of interaction,
provide leads to the discovery of the mechanisms of interaction.
REFERENCES
Andersen, M. E., H. J. Clewell III, M. L. Gargas, F. A. Smith, and R. H. Reitz. 1987. Physiologically
based pharmacokinetics and the risk assessment process for methylene chloride. Toxicol. Appl.
Pharmacol. 87:185-205.
Ashford, J. R. 1981. General models for the joint action of mixtures and drugs. Biometrics 37:457-
474.
Ashford, J. R., and J. M. Cobby. 1974. A system of models for the action of drugs applied singly or
jointly to biological organisms. Biometrics 30:11-31.
Berenblum, I., and P. Shubik. 1947a. A new, quantitative approach to the study of the stages of
chemical carcinogenesis in the mouse's skin. Br. J. Cancer 1:383-391.
Berenblum, I., and P. Shubik. 1947b. The role of croton oil applications, associated with a single
painting of a carcinogen, in tumour induction of the mouse's skin. Br. J. Cancer 1:379-382.
Berenblum, I., and P. Shubik. 1949. The persistence of latent tumour cells induced in the mouse's skin
by a single application of 9:10-dimethyl-1:2-benzanthracene. Br. J. Cancer 3:384-386.
Bliss, C. I. 1939. The toxicity of poisons applied jointly. Ann. Appl. Biol. 26:585-615.
Carter, W. H., G. L. Wampler, and D. M. Stablein. 1983. Experimental design, pp. 108-129. In
Regression Analysis of Survival Data in Cancer Chemotherapy. Marcel Dekker, New York.
Chen, C. W., and J. N. Blancato. 1987. Role of pharmacokinetic modeling in risk assessment: Per-
chloroethylene (PCE) as an example, pp. 367-388. In Pharmacokinetics and Risk Assessment. Vol.
8. Drinking Water and Health. National Academy Press, Washington, D.C.
Christensen, E. R., and C.-Y. Chen. 1985. A general noninteractive multiple toxicity model including
probit, logit, and Weibull transformations. Biometrics 41:711-725.
Cornell, J. A. 1981. Experiments with Mixtures: Designs, Models, and the Analysis of Mixture Data.
John Wiley & Sons, New York. (305 pp.)
Dumouchel, W. H., and J. E. Harris. 1983. Bayes methods for combining the results of cancer studies
in humans and other species. J. Am. Stat. Assoc. 78:293-315.
Finney, D. J. 1952. Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve. 2nd ed.
Cambridge University Press, Cambridge.
Finney, D. J. 1971. Probit Analysis. 3rd ed. Cambridge University Press, New York. (333 pp.)
Gart, J. J., D. Krewski, P. N. Lee, R. E. Tarone, and J. Wahrendorf. 1986. Statistical Methods in
Cancer Research, Vol. III: The Design and Analysis of Long-Term Animal Experiments. IARC
Scientific Publications No. 79. International Agency for Research on Cancer, Lyon, France.
(213 pp.)
Gibaldi, M., and D. Perrier. 1975. Pharmacokinetics. Marcel Dekker, New York.
Harris, J. E. 1983a. Diesel emissions and lung cancer. Risk Anal. 3(2):83-100.
Harris, J. E. 1983b. Diesel emissions and lung cancer revisited. Risk Anal. 3(2):139-146.
Hewlett, P. S., and R. L. Plackett. 1959. A unified theory for quantal responses to mixtures of drugs:
Non-interactive action. Biometrics 15:591-610.

Hoel, D. G., N. L. Kaplan, and M. W. Anderson. 1983. Implication of nonlinear kinetics on risk
estimation in carcinogenesis. Science 219:1032-1037.
John, P. W. 1971. Statistical Design and Analysis of Experiments. Macmillan, New York. (356 pp.)
Kodell, R. L. 1986. Modeling the Joint Action of Toxicants: Basic Concepts and Approaches. Pre-
sented at the ASA/EPA Conference on Interpretation of Environmental Data: The Current Assess-
ment of Combined Toxicant Effects, Washington, D.C., May 5-6, 1986. (21 pp.)
Kodell, R. L., and J. G. Pounds. In press. Assessing the toxicity of mixtures of chemicals. In
D. Krewski and C. Franklin (eds.). Statistical Methods in Toxicological Research. Gordon and
Breach, New York.
Krewski, D., J. Kovar, and M. Bickis. 1984. Optimal experimental designs for low dose extrapolation.
II. The case of nonzero background, pp. 167-191. In Y. P. Chaubey and T. D. Dwivedi (eds.).
Topics in Applied Statistics. Proceedings of the Statistics '81 Canada Conference held at Concordia
University, Montreal, Quebec, April 29-May 1, 1981.
Krewski, D., M. Bickis, J. Kovar, and D. L. Arnold. 1986. Optimal experimental designs for low dose
extrapolation. I. The case of zero background. Utilitas Math. 29:245-262.
Laird, N. M., and T. A. Louis. 1986. Combining data from different sources: Empirical Bayes confi-
dence intervals. Paper presented at the XIIIth International Biometric Conference, Seattle, Washing-
ton, July 29-August 1, 1986.
Moolgavkar, S. H., and A. G. Knudson, Jr. 1981. Mutation and cancer: A model for human carcino-
genesis. J. Natl. Cancer Inst. 66:1037-1052.
Moolgavkar, S. H., and D. J. Venzon. 1979. Two-event models for carcinogenesis: Incidence curves
for childhood and adult tumors. Math. Biosci. 47:55-77.
Murphy, E. A. 1978. Epidemiological strategies and genetic factors. Int. J. Epidemiol. 7:7-14.
National Toxicology Program, Board of Scientific Counselors. 1984. Report of the NTP Ad Hoc Panel
on Chemical Carcinogenesis Testing and Evaluation. U.S. Government Printing Office, Washing-
ton, D.C. (280 pp.)
Piepel, G. F., and J. A. Cornell. 1985. Models for mixture experiments when the response depends on
the total amount. Technometrics 27:219-227.
Pitot, H. C., and A. E. Sirica. 1980. The stages of initiation and promotion in hepatocarcinogenesis.
Biochim. Biophys. Acta 605:191-215.
Plackett, R. L., and P. S. Hewlett. 1948. Statistical aspects of the independent joint action of poisons,
particularly insecticides. I. The toxicity of a mixture of poisons. Ann. Appl. Biol. 35:347-358.
Plackett, R. L., and P. S. Hewlett. 1967. A comparison of two approaches to the construction of models
for quantal responses to mixtures of drugs. Biometrics 23:27-44.
Portier, C., and D. Hoel. 1983. Optimal design of the chronic animal bioassay. J. Toxicol. Environ.
Health 12:1-19.
Ramsey, J. C., and M. E. Anderson. 1984. A physiologically based description of the inhalation
pharmacokinetics of styrene in rats and humans. Toxicol. Appl. Pharmacol. 73:159-175.
Scherer, E., A. W. Feringa, and P. Emmelot. 1984. Initiation-promotion-initiation. Induction of neo-
plastic foci within islands of precancerous liver cells in the rat. In M. Borsonyi, K. Lapis, N. E. Day,
and H. Yamasaki (eds.). Models, Mechanisms and Etiology of Tumour Promotion. IARC Scientific
Publications No. 56. International Agency for Research on Cancer, Lyon, France.
Thorslund, T. W., C. C. Brown, and G. Charnley. 1987. Biologically motivated cancer risk models.
Risk Anal. 7:109-119.
Van Ryzin, J. 1980. Quantitative risk assessment. J. Occup. Med. 22:321-326.
Verducci, J. S. 1985. Task 61: Chains of Extrapolation. EPA Contract No. 68-01-6721. Report to EPA Office of Pesticides and Toxic Substances. Battelle Columbus Division, Washington Operations,
Washington, D.C. (38 pp.)
Wahrendorf, J., R. Zentgraf, and C. C. Brown. 1981. Optimal designs for the analysis of interactive
effects of two carcinogens or other toxicants. Biometrics 37:45-54.
Whittemore, A. S., S. C. Grosser, and A. Silvers. 1986. Pharmacokinetics in low dose extrapolation
using animal cancer data. Fundam. Appl. Toxicol. 7:183-190.