Introduction And Background
It is always difficult to identify the true level of risk in an endeavor like health risk assessment, which combines measurement, modeling, and inference or educated guesswork. Uncertainty analysis, the subject of Chapter 9, enables one to come to grips with how far away from the desired answer one's best estimate of an unknown quantity might be. Before we can complete an assessment of the uncertainty in an answer, however, we must recognize that many of our questions in risk assessment have more than one useful answer. Variability (typically across space, in time, or among individuals) complicates the search for the desired value of many important risk-assessment quantities.
Chapter 11 and Appendix I-3 discuss the issue of how to aggregate uncertainties and interindividual differences in each of the components of risk assessment. This chapter describes the sources of variability and appropriate ways to characterize these interindividual differences in quantities related to predicted risk.
Variability is a very well-known "fact of life" in many fields of science, but its sources, effects, and ramifications are not yet routinely appreciated in environmental health risk assessment and management. Accordingly, the first section of this chapter steps back to deal with the general phenomenon (using some examples relevant to risk assessment, but not exclusively); the remainder of the chapter focuses on variability in quantities that directly influence calculations of individual and population risk.
When an important quantity is both uncertain and variable, opportunities are created to fundamentally misunderstand or misestimate the behavior of the quantity.
To draw an analogy, the exact distance between the earth and the moon is both difficult to measure precisely (at least it was until the very recent past) and changeable, because the moon's orbit is elliptical, rather than circular. Thus, as seen in Figure 10-1, uncertainty and variability can complement or confound each other. When only scattered measurements of the earth-moon distance were available, the variation among them might have led astronomers to conclude that their measurements were faulty (i.e., ascribing to uncertainty what was actually caused by variability) or that the moon's orbit was random (i.e., not allowing for uncertainty to shed light on seemingly unexplainable differences that are in fact variable and predictable). The most basic flaw of all would be to simply misestimate the true distance (the third diagram in Figure 10-1) by assuming that a few observations were sufficient (after correcting for measurement error, if applicable). This is probably the pitfall that is most relevant for health risk assessment: treating a highly variable quantity as if it were invariant or only uncertain, thereby yielding an estimate that is incorrect for some of the population (or some of the time, or over some locations), or even one that is also an inaccurate estimate of the average over the entire population.
In the risk-assessment paradigm, there are many sources of variability. Certainly, the regulation of air pollutants has long recognized that chemicals differ from each other in their physical and toxic properties and that sources differ from each other in their emission rates and characteristics; such variability is built into virtually any sensible question of risk assessment or control. However, even if we focus on a single substance emanating from a single stationary source, variability pervades each stage from emission to health or ecologic end point:
Each of these variabilities is in turn often composed of several underlying variable phenomena. For example, the natural variability in human weight is due to the interaction of genetic, nutritional, and other environmental factors.
According to the central limit theorem, variability that arises from independent factors that act multiplicatively will generally lead to an approximately lognormal distribution across the population or spatial/temporal dimension (as is commonly observed when concentrations of air pollutants are plotted).
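A minimal simulation can make this concrete. The sketch below (the factor distributions are hypothetical, chosen only for illustration) multiplies several independent positive factors and checks that the logarithm of the product is approximately normal, i.e., that the product is approximately lognormal:

```python
# Sketch: independent positive factors acting multiplicatively yield an
# approximately lognormal product (the central limit theorem applied to
# the logarithm of the product). Factor shapes are hypothetical.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n = 100_000

# Four independent positive factors (e.g., genetic, nutritional, and
# other environmental influences; distributions chosen for illustration).
factors = [rng.uniform(0.5, 2.0, n),
           rng.gamma(4.0, 0.25, n),
           rng.uniform(0.8, 1.25, n),
           rng.gamma(9.0, 1.0 / 9.0, n)]
product = np.prod(factors, axis=0)

# If the product is approximately lognormal, its logarithm should be
# approximately normal, with skewness near zero.
print(f"skewness of product:     {skew(product):.2f}")          # strongly right-skewed
print(f"skewness of log-product: {skew(np.log(product)):.2f}")  # much nearer zero
```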
When there is more than one desired answer to a scientific question where the search for truth is the end in itself, only two responses are ultimately satisfactory: gather more data or rephrase the question. For example, the question "How far away is the moon from the earth?" cannot be answered both simply and correctly. Either enough data must be obtained to give an answer of the form "The distance ranges between 221,460 and 252,710 miles" or "The moon's orbit is approximately elliptical; the earth-moon distance varies from about 221,460 miles at perigee to about 252,710 miles at apogee, corresponding to an orbital eccentricity of roughly 0.066," or the question must be reduced to one with a single right answer (e.g., "How far away is the moon from the earth at its perigee?").
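For the reader who wishes to check the arithmetic, the eccentricity follows directly from the perigee and apogee distances quoted above:

```python
# Check: orbital eccentricity from perigee and apogee distances (miles).
r_peri, r_apo = 221_460, 252_710
e = (r_apo - r_peri) / (r_apo + r_peri)
print(f"eccentricity = {e:.3f}")   # about 0.066
```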
When the question is not purely scientific, but is intended to support a social decision, the decision-maker has a few more options, although each course of action will have repercussions that might foreclose other courses. Briefly, variability in the substance of a regulatory or science-policy question can be dealt with in four basic ways: (1) ignore the variability and hope that it is small; (2) disaggregate the question into subregions or subpopulations within which the variability is small; (3) reduce the distribution to an average value; or (4) reduce the distribution to a conservative "tail" value.
The crucial point to bear in mind about all four of those strategies for dealing with variability is that unless someone measures, estimates, or at least roughly models the extent and nature of the variability, any strategy will be precarious. It stands to reason that strategy 1 ("hope for the best") hinges on the assumption that the variability is small, an assumption whose verification requires at least some attention to variability. Similarly, strategy 2 requires the definition of subregions or subpopulations in each of which the variability is small, so care must be taken to avoid the same conundrum that applies to strategy 1. (It is difficult to be sure that you can ignore variability until you think about the possible consequences of ignoring it.) Less obviously, one still needs to be somewhat confident that one has a handle on the variability in order to reduce the distribution to either an average (strategy 3) or a "tail" value (strategy 4). We know that 70 kg is an average adult body weight (and that virtually no adults are above or below 70 kg by more than a factor of 3), because weight is directly observable and because we know the mechanism by which people grow and the biologic limits of either extreme. Armed with our senses and this knowledge, we might need only a few observations to pin down roughly the minimum, the average, and the maximum. But what about a variable like "the rate at which human liver cells metabolize ethylene dibromide into its glutathione conjugate"? Here a few direct measurements or a few extrapolations from animals may not be adequate, because in the absence of any firm notion of the spread of this distribution within the human population (or the mechanisms by which the spread occurs), we cannot know how reliably our estimate of the average value reflects the true average, nor how well the observed minimum and maximum mirror the true extremes.
The distribution for an important variable such as metabolic rate should thus explicitly be considered in the risk assessment, and the reliability of the overall risk estimate should reflect knowledge about both the uncertainty and the variability in this characteristic. The importance of a more accurate risk estimate may motivate additional measurements of this variable, so that its distribution can be better defined with the additional data.
This chapter concentrates on how EPA treats variability in emissions, exposures, and dose-response relationships, to identify which of the four strategies it typically uses and to assess how adequately it has considered each choice and its consequences. The goals of this chapter are three: (1) to indicate how EPA can increase its sophistication in defining variability and handling its effects; (2) to provide information as to how to improve risk communication, so that Congress and the public understand at least which variabilities are and which are not accounted for, and how EPA's handling of variability affects the "conservatism" (or lack thereof) inherent in its risk numbers; and (3) to recommend specific research whose results could lead to useful changes in risk-assessment procedures.
In recent years, EPA has begun to increase its attention to variability. Moreover, the lack of attention in the past was due in part to a series of choices to erect conservative default options (strategy 4 above) instead of dealing with variability explicitly. In theory at least, the question "How do you determine the extreme of a distribution without knowing the whole distribution?" can be answered by setting a highly conservative default and placing the burden of proof on those who wish to relax the default by showing that the extreme is unrealistic even as a "worst case." For example, the concept of the MEI (someone who breathes pollutants from the source for 70 years, 24 hours per day, at a specified location near a plant boundary) has been criticized as unrealistic, but most agree that, as a conservative short-cut for the entire population distribution of "number of hours spent at a given location during a lifetime," it might be a reasonable place to start.
EPA has also tackled interindividual variability squarely in the Exposure Factors Handbook (EPA, 1989c), which provides various percentiles (e.g., 5th, 25th, 50th, 75th, 95th) of the observed variability distributions for some components of exposure assessment, such as breathing rates, water ingestion, and consumption of particular foodstuffs. This document has not yet become a standard reference for many of EPA's offices, however. In addition, as we will discuss below, EPA has not dealt adequately with several other major sources of variability. As a result, EPA's methods of managing variability in risk assessment rely on an ill-characterized mix of some questionable distributions, some verified and unverified point values intended to be "averages," some verified and unverified point values intended to be "worst cases," and some "missing defaults," that is, hidden assumptions that ignore important sources of variability.
Moreover, several trends in risk assessment and risk management are now increasing the urgency of a broad and well-considered strategy to deal with variability. The three most important of these trends are the following:
Variability in human response to pollutants emitted from a particular source or set of sources can arise from differences in characteristics of exposure, uptake, and personal dose-response relationships (susceptibility). Exposure variability in turn depends on variability in all the factors that affect exposure, including emissions, atmospheric processes (transport and transformation), personal activity, and the pollutant concentration in the microenvironments where the exposures occur. Information on those variabilities is not routinely included in EPA's exposure assessments, probably because it has been difficult to specify the distributions that describe the variations.
Human exposure results from the contact of a person with a substance at some nonzero concentration. Thus, it is tied to personal activities that determine a person's location (e.g., outdoors vs. indoors, standing downwind of an industrial facility vs. riding in a car, in the kitchen vs. on a porch); the person's level of activity and breathing rate influence the uptake of airborne pollutants. Exposure is also tied to emission rates and atmospheric processes that affect pollutant concentrations in the microenvironment where the person is exposed. Such processes include infiltration of outside air indoors, atmospheric advection (i.e., transport by the prevailing wind), diffusion (i.e., transport by atmospheric turbulence), chemical and physical transformation, deposition, and re-entrainment; variability in each process tends to increase the overall variability in exposure. The variabilities in emissions, atmospheric processes, characteristics of the microenvironment, and personal activity are not necessarily independent of each other; for example, personal activities and pollutant concentrations at a specific location might change in response to outdoor temperature; they might also differ between weekends and weekdays because the level of industrial activity changes.
There are basically four categories of emission variability that may need separate assessment methods, depending on the circumstances:
The last category is addressed in a separate section of the Clean Air Act and is not discussed in this report.
At least two major factors influence variability in emissions as it affects exposure assessment. First, a given source typically does not emit at a constant rate. It is subject to such things as load changes, upsets, fuel changes, process modifications, and environmental influences. Some sources are, by their nature, intermittent or cyclical. A second factor is that two similar sources (e.g., facilities in the same source category) can emit at different rates because of differences in such things as age, maintenance, or production details.
The automobile is an excellent example of both causes. Consider a single, well-characterized car with an effective control system. When it is started, the catalyst has not warmed up, and emissions can be high. Almost half the total automobile emissions in, say, Los Angeles can occur during the cold-start period. After the catalyst reaches its appropriate temperature range, it is extremely effective (>90%) at removing organic substances, such as benzene and formaldehyde, during most of the driving period. However, hard accelerations can overwhelm the system's capabilities and lead to high emissions. Those variations can lead to spatial and temporal distributions of emissions in a city (e.g., high emissions in areas with a large number of cold starts, particularly in the morning). The composition of the emissions, including the toxic content, differs between cold-start and driving periods. Emissions also differ between cars, often dramatically. Because of differences in control equipment, total emissions can vary, and the balance of emissions among cycles (e.g., cold-start vs. evaporative emissions) can vary between cars. A final notable contribution to emission variability in automobiles is the presence of super-emitters, whose control systems have failed and which may emit organic substances at a rate 10 times that of a comparable vehicle that is operating properly.
Thus, an exposure analysis based on source-category average emissions will miss the variability in sources within that category. And, exposure analyses that do not account for temporal changes in emissions from a particular source will miss an important factor, especially to the extent that emissions are linked to meteorologic conditions. In many cases, it is difficult or impossible to know a priori how emissions will vary, particularly because of upsets in processes that could lead to high exposures over short periods.
Atmospheric Process Variability
Meteorologic conditions greatly influence the dispersion, transformation, and deposition of pollutants. For example, ozone concentrations are highest during summer afternoons, whereas carbon monoxide and benzene concentrations peak in the morning (because of the combination of large emissions and little dilution) and during the winter. Formaldehyde can peak in the afternoon during the summer (because of photochemical production) and in the morning in the winter (because of rush-hour emissions and little dilution). Concentrations of primary (i.e., emitted) pollutants, such as benzene and carbon monoxide, are higher in the winter in urban areas, whereas those of many secondary pollutants (i.e., those resulting from atmospheric transformations of primary pollutants), such as ozone, are higher in the summer. Meteorologic conditions may also play a role in regional variations. Some areas experience long periods of stagnant air, which lead to very high concentrations of both primary and secondary pollutants. An extreme example is the London smog that led to high death rates before the mid-1950s. Wind velocity and mixing height also influence pollutant concentrations. (Mixing height is the height to which pollutants are rapidly mixed due to atmospheric turbulence; in effect, it is one dimension of the atmospheric volume in which pollutants are diluted.) They are usually correlated; the prevailing winds and velocities in the winter, when the mixing height is low, can be very different from those in the summer.
Some quantitative information is available about the impact of meteorologic variability on pollutant concentrations. Concentrations measured at one location over some period tend to follow a lognormal distribution, with significant fluctuations about the medians, often by a factor of more than 10 (e.g., Seinfeld, 1986). The extreme concentrations are usually related to time and season. The relative magnitudes and frequencies of such fluctuations in concentration increase as distance from the source decreases. Pollutant transport over complex terrain (e.g., presence of hills or tall buildings), which is generally difficult to model, can further increase relative differences in extreme concentrations about the medians. Two examples of the influence of complex terrain are Donora, Pennsylvania (in a river valley), and the Meuse Valley in Belgium. In those areas, as in London, periods of extremely high pollutant concentrations led to periods of increased deaths. Estimates of concentration over flat terrain cannot capture such effects.
Empirical data on concentration variability are sparse, except for a few pollutants, notably the criteria pollutants (including carbon monoxide, ozone, sulfur dioxide, and particulate matter). Some information on variations in formaldehyde and benzene concentrations is also available. One interesting study that considered air-pollutant exposure during commuting in the Los Angeles area was conducted by the South Coast Air Quality Management District (SCAQMD, 1989). The authors looked at exposure dependence on seasonal, vehicular-age, and freeway-use variations. They found that drivers of older vehicles had greater exposure to benzene and that exposure to benzene, formaldehyde, ethylene, and chromium was greater in the winter, although exposure to ethylene dichloride was greater in the summer. They did not report the variability in exposure between similar vehicles or distributions of the exposures (e.g., probability density functions).
Microenvironmental and Personal-Activity Variability
Microenvironmental variability, particularly when compounded with differences in personal activity, can contribute to substantial variability in individual exposure. For example, the lifetime-exposed 70-year-old has been faulted as an extreme case, but it is instructive to consider this hypothetical person in the distribution of personal activity traits. Although it is unlikely, this 70-year lifetime exposure activity pattern is one end of the spectrum in the variability of personal activity and time spent in a specific microenvironment.
Concentrations in various microenvironments vary considerably and depend on a variety of factors, such as species, building type, ventilation system, locality of other sources, and street canyon width and depth. Both the Los Angeles study (SCAQMD, 1989) and a New Jersey study (Weisel et al., 1992) revealed that exposure can be increased during commuting, particularly if the automobile itself is defective. The primary sources of many air pollutants are indoors, so their highest concentrations are found there. Those concentrations can be 10-1,000 times the outdoor concentrations (or even greater). However, the difference between outdoor and indoor concentrations of pollutants is not nearly so great when the indoor location is ventilated. Concentrations of compounds that do not react rapidly with or settle on surfaces, such as carbon monoxide and many organic compounds, might not decrease significantly when ventilated indoors. If there are additional sources of these compounds indoors, their concentrations might, in fact, increase. Concentrations of more reactive compounds, such as ozone, can decrease by a factor of 2 or more, depending on ventilation rate and the ventilation system used (Nazaroff and Cass, 1986). Particles can also be advected indoors (Nazaroff et al., 1990). One concern is that the ventilation of outdoor pollutants indoors can increase the formation of other pollutants (Nazaroff and Cass, 1986; Weschler et al., 1992). The lifetime-exposed person sitting on the porch outside his home may be at one extreme for exposure to emissions from an outdoor stationary source, but may be at the other extreme for net air-pollutant exposure; such a person may have effectively avoided "hot" microenvironments in both the home and the automobile.
Increased personal activity leads to a larger uptake and can add a factor of about 2 or more to the variability. The activity-related component of variability depends on both the microenvironmental variability (e.g., outdoors vs. indoors) and personal characteristics (e.g., children vs. adults).
Variability In Human Susceptibility
Person-to-person differences in behavior, genetic makeup, and life history together confer on individual people unique susceptibilities to carcinogenesis (Harris, 1991). Such interindividual differences can be inherited or acquired. For example, inherited differences in susceptibility to physical or chemical carcinogens have been observed, including a substantially increased risk of sunlight-induced skin cancer in people with xeroderma pigmentosum, of bladder cancer in dyestuff workers whose genetic makeup results in the "poor acetylator" phenotype, and of bronchogenic carcinoma in tobacco smokers who have an "extensive debrisoquine hydroxylator" phenotype (the latter two are described further in Appendix H). Similarly, among different inbred and outbred strains of laboratory animals (and within particular outbred strains) exposed to carcinogenic initiators or tumor promoters, there may be a factor-of-40 variation in tumor response (Boutwell, 1964; Drinkwater and Bennett, 1991; Walker et al., 1992). Acquired differences that can significantly affect an individual's susceptibility to carcinogenesis include the presence of concurrent viral or other infectious diseases, nutritional factors such as alcohol and fiber intake, and temporal factors such as stress and aging.
Appendix H describes three classes of factors that can affect susceptibility: (1) those which are rare in the human population but which confer very large increases in susceptibility upon those affected; (2) those which are very common but only marginally increase susceptibility; and (3) those which may be neither rare nor of marginal importance to those affected. The Appendix provides particular detail on five of the determinants that fall into this third group. This material in Appendix H represents both a compilation of the existing literature and some new syntheses of recent studies; we commend this important information to the reader's attention.
Taken together, the evidence regarding the individual mediators of susceptibility described in Appendix H supports the plausibility of a continuous distribution of susceptibility in the human population. Some of the individual determinants of susceptibility, such as concentrations of activating enzymes or of proteins that might become oncogenic, may themselves exist in continuous gradations across the human population. Even factors that have long been thought to be dichotomous are now being revealed as more complicated; e.g., the recent finding that a substantial fraction of the population is heterozygous for ataxia-telangiectasia and has a susceptibility midway between that of ataxia-telangiectasia homozygotes and that of "normal" people (Swift et al., 1991). Most important, the combination of a large number of genetic, environmental, and lifestyle influences, even if each were bimodally distributed, would likely generate an essentially continuous overall susceptibility distribution. As Reif (1981) has noted, "we would expect to find in [the outbred human population] what would be the equivalent result of outbreeding different strains of inbred mice: a spectrum of different genetic predispositions for any particular type of tumor."
A working definition of the breadth of the distribution of "interindividual variability in overall susceptibility to carcinogenesis" is as follows: If we identified persons of high susceptibility (say, we knew them to represent the 99th percentile of the population distribution) and low susceptibility (say, the 1st percentile), we could estimate the risks that each would face if subjected to the same exposure to a carcinogen. If the estimated risk to the first type of person were 10^-2 and the estimated risk to the second type of person were 10^-6, we could say that "human susceptibility to this chemical varies by at least a factor of 10,000."
There are two distinct but complementary approaches to estimating the form and breadth of the distribution of interindividual variability in overall susceptibility to carcinogenesis. The biologic approach is a "bottom-up" method that uses empirical data on the distribution of particular factors that mediate susceptibility to model the overall distribution. In the major quantitative biologic analysis of the possible extent of human variations in susceptibility to carcinogenesis, Hattis et al. (1986) reviewed 61 studies that contained individual human data on six characteristics that are probably involved causally in the carcinogenic process. The six were the half-life of particular biologically active substances in blood, metabolic activation of drugs (in vivo) and putative carcinogens (in vitro), enzymatic detoxification, DNA-adduct formation, the rate of DNA repair (as measured by the rate of unscheduled DNA synthesis induced by UV light), and the induction of sister-chromatid exchanges after exposure of lymphocytes to x-rays. They estimated the overall variability in each factor by fitting a lognormal distribution to the data and then propagated the variabilities by using Monte Carlo simulation and assuming that the factors interacted multiplicatively and were statistically independent. Their major conclusion was that the logarithmic standard deviation of the susceptibility distribution lies between 0.9 and 2.7 (90% confidence interval). That is, the difference in susceptibility between the most sensitive 1% of the population and the least sensitive 1% might be as small as a factor of 36 (if the logarithmic standard deviation was 0.9) or as large as a factor of 50,000 (if the logarithmic standard deviation was 2.7).
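The arithmetic behind that range can be illustrated with a short sketch in the same spirit. The per-factor spreads below are placeholders, not the values Hattis et al. fitted, and the sketch assumes, as the quoted results imply, that the comparison is between individuals roughly two (natural-log) standard deviations above and below the median:

```python
# Sketch of a Hattis et al. (1986)-style propagation: multiply
# independent lognormal susceptibility factors and examine the spread.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative per-factor log-standard deviations (placeholders, not
# the fitted values from the 61 reviewed studies).
factor_sigmas = [0.4, 0.5, 0.3, 0.6, 0.45, 0.35]
overall = np.ones(n)
for s in factor_sigmas:
    overall *= rng.lognormal(mean=0.0, sigma=s, size=n)

print(f"log-sd of combined susceptibility: {np.log(overall).std():.2f}")

# Ratio between individuals 2 log-sd above and 2 log-sd below the median:
for sigma in (0.9, 2.7):
    print(f"sigma = {sigma}: spread factor = {np.exp(4 * sigma):,.0f}")
# exp(4*0.9) is about 37 and exp(4*2.7) about 49,000, matching the
# reported factor-of-36 to factor-of-50,000 range.
```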
The alternative approach is inferential or "top-down," and combines epidemiologic data with a demographic technique known as heterogeneity dynamics. Heterogeneity dynamics is an analytic method for describing the changing characteristics of a heterogeneous population as its members age. The power of the heterogeneity-dynamics approach to explain initially puzzling aspects of demographic data, as well as to challenge simplistic explanations of population behavior, stems from its emphasis on the divergence between forces that affect individuals and forces that affect populations (Vaupel and Yashin, 1983). The most fundamental concept of heterogeneity dynamics is that individuals change at rates different from those of the cohorts they belong to, because the passage of time affects the composition of the cohort as it affects the life prospects of each member. In a markedly heterogeneous population, the overall death rate can decline with age, even though every individual faces an ever-increasing risk of death, simply because the population as a whole grows increasingly more "resistant" to death as the more susceptible members are preferentially removed. Specifically with regard to cancer, heterogeneity dynamics can examine the progressive divergence of observed human age-incidence functions (for many tumor types) away from the function that is believed to apply to an individual's risk as a function of age, namely, the power function of age formalized in the 1950s by Armitage and Doll (which posits that risk increases proportionally with age raised to an integral exponent, probably 4, 5, or 6). In contrast with groups of inbred laboratory animals, which do exhibit age-incidence functions that generally obey the Armitage-Doll model, in humans the age-incidence curves for many tumor types begin to level off and plateau at higher ages.
Many of the pioneering studies that used heterogeneity dynamics to infer the amount of variation in human susceptibility to cancer used cross-sectional data, which might have been confounded by secular changes in exposures to carcinogenic stimuli (Sutherland and Bailar, 1984; Manton et al., 1986). One investigation that built on the previous body of work was that of Finkel (1987), who assembled longitudinal data on cancer mortality, including the age at death and cause of death of all males and females born in 1890, for both the United States and Norway. That study separately examined deaths due to lung cancer and colorectal cancer and tried to infer the amount of population heterogeneity that could have caused the observed age-mortality relationships to diverge from the Armitage-Doll (age^N) function that should apply to the population if all humans are of equal sensitivity. The study concluded that as a first approximation, the amount of variability (for either sex, either disease, and either country) could be roughly modeled by a lognormal distribution with a logarithmic standard deviation on the order of 2.0 (i.e., general agreement with the results of Hattis et al., 1986). That is, about 5% of the population might be about 25 times more susceptible than the average person (and a corresponding 5% about 25 times less susceptible); about 2.5% might be 50 times more (or less) susceptible than the average, and about 1% might be at least 100 times more (or less) susceptible.
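Those multiples follow directly from a lognormal model with a natural-log standard deviation of 2.0; a brief check of the quoted figures:

```python
# Check: susceptibility multiples implied by a lognormal distribution
# with logarithmic (natural-log) standard deviation 2.0.
import numpy as np
from scipy.stats import norm

sigma = 2.0
for tail in (0.05, 0.025, 0.01):
    z = norm.ppf(1 - tail)           # standard-normal quantile
    multiple = np.exp(z * sigma)     # ratio to the median individual
    print(f"top {tail:.1%}: about {multiple:,.0f} times the median")
# Prints roughly 27, 50, and 105 -- consistent with the "about 25",
# "about 50", and "at least 100" figures quoted in the text.
```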
A later analysis (Finkel, in press) showed that such a conclusion, if borne out, would have important implications not only for assessing risks to individuals, but for estimating population risk in practice. In a highly heterogeneous population, quantitative uncertainties about epidemiological inferences drawn from relatively small subpopulations (thousands or fewer), as well as the frequent application of animal-based risk estimates to similarly "small" subpopulations, will be increased by the possibility that the average susceptibility of small groups varies significantly from group to group.
The issue of susceptibility is an important one for acute toxicants as well as carcinogens. The NRC Committee on Evaluation of the Safety of Fishery Products addressed this issue in depth in their report entitled Seafood Safety (NRC, 1991b). Guidelines for the assessment of acute toxic effects in humans have recently been published by the NRC Committee on Toxicology (NRC, 1993d).
This section records the results of the committee's analysis of EPA's practice on variability.
Exposure Variability and the Maximally Exposed Individual
One of the contentious defaults that has been used in past air-pollutant exposure and risk assessments has been the maximally exposed individual (MEI), who was assumed to be the person at greatest risk and whose risk was calculated by assuming that the person resided outdoors at the plant boundary, continuously for 70 years. This is a worst-case scenario (for exposure to the particular source only) and does not account for a number of obvious factors (e.g., the person spends time indoors, going to work, etc.) and other likely events (e.g., changing residence) that would decrease exposure to the emissions from the specific source. This default also does not account for other, possibly countervailing factors involved in exposure variability discussed above. Suggestions to remedy this shortcoming have included decreasing the point estimate for residence time at the location to account for population mobility, and use of personal-activity models (see Chapters 3 and 6).
EPA's most recent exposure-assessment guidelines (EPA, 1992a) no longer use the MEI, instead coining the terms "high-end exposure estimates" (HEEE) and "theoretical upper-bounding exposure" (TUBE) (see Chapter 3). According to the new exposure guidelines, a high-end risk "means risks above the 90th percentile of the population distribution, but not higher than the individual in the population who has the highest risk." The EPA Science Advisory Board had recommended that exposures or risks above the 99.9th percentile be regarded as "bounding estimates" (i.e., use of the 99.9th percentile as the HEEE) for large populations (assuming that unbounded distributions such as the lognormal are used as inputs for calculating the exposure or risk distribution). For smaller populations, the guidelines state that the choice of percentile should be based on the objective of the analysis. However, neither the HEEE nor the TUBE is explicitly related to the expected MEI.
The new exposure guidelines suggest four methods for arriving at an estimator of the HEEE. These are, in descending order of sophistication:
The first two methods are much preferable to the last two and should be used whenever possible. Indeed, EPA should place a priority on collecting enough data (either case-specific or generic) that the latter two methods will not be needed in estimating variability in exposure. The distribution of exposures, developed from measurements or modeling results or both, should be used to estimate population exposure, as an input in calculating population risk. It can also be used to estimate the exposure of the maximally exposed person. For example, the most likely value of the exposure to the most exposed person is generally the 100[(N-1)/N]th percentile of the cumulative probability distribution characterizing interindividual variability in exposures, where N is the number of persons used to construct the exposure distribution. This percentile rank is a particularly convenient estimator to use because it is independent of the shape of the exposure distribution (see Appendix I-3). Other estimators of the exposure of the highest, or jth highest for some j < N, person exposed are available (see Appendix I-3). The committee recommends that EPA explicitly and consistently use an estimator such as 100[(N-1)/N], because it, and not a vague estimate "somewhere above the 90th percentile," is responsive to the language in CAAA-90 calling for the calculation of risk to "the individual most exposed to emissions. …"
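A sketch of how the recommended estimator might be applied in practice, assuming an exposure-variability distribution has already been fitted (the lognormal parameters here are hypothetical):

```python
# Sketch: estimate the exposure of the most exposed person among N
# using the 100[(N-1)/N]th percentile of the fitted variability
# distribution. The percentile rank itself is shape-independent;
# converting it to a concentration requires the fitted distribution.
import numpy as np
from scipy.stats import lognorm

gm, gsd = 1.0, 3.0            # hypothetical geometric mean (ug/m^3) and geometric SD
dist = lognorm(s=np.log(gsd), scale=gm)

for N in (1_000, 100_000, 1_000_000):
    p = (N - 1) / N           # percentile rank for the single most exposed person
    print(f"N = {N:>9,}: most-exposed estimate ~ {dist.ppf(p):.1f} ug/m^3")
```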
In recent times, EPA has begun incorporating into its exposure assessments distributions based on the national average of years of residence in a home, as a replacement for its 70-year (i.e., average-lifetime) exposure assumption. Proposals have been made for a similar "departure from default" for the time an individual spends at a residence each day, as a replacement for the 24-hours-per-day assumption. However, such analyses assume that individuals move to a location of zero exposure when they change residences during their lifetime or leave the home each day. But people moving from one place to another, whether changing the location of their residence or moving from home to office, can vary greatly in their exposure to any one pollutant, from relatively high exposures to none. Furthermore, some exposures to different pollutants may be considered interchangeable: moving from one place to another may yield exposures to different pollutants which, being interchangeable in their effects, can be taken as an aggregate, single "exposure." This assumption of interchangeability may or may not be realistic; however, because people moving from place to place can be seen as being exposed over time to a mixture of pollutants, some of them simultaneously and others at separate times, a simplistic analysis of residence times is not appropriate. The real problem is, in effect, a more complex one of how to aggregate exposure to mixtures as well as multiple exposures of varying intensity to a single pollutant.
Thus, a simple distribution of residence times may not adequately account for the risks of movement from one region to another, especially for persons in hazardous occupations, such as agricultural workers exposed to pesticides, or persons of low socioeconomic status who change residences. Further, some subpopulations that might be more likely to reside in a high-exposure region might also be less mobile (e.g., owing to socioeconomic conditions). For these reasons, the default residency assumption for the calculation of the maximally exposed individual should remain at the mean of the current U.S. life expectancy, in the absence of supporting evidence otherwise. Such evidence could include population surveys of the affected area that demonstrate mobility outside regions of residence with similar exposures to similar pollutants. Personal activity (e.g., daily and seasonal activities) should be included.
If in a given case EPA determines that it must use the third method (combining various different "maximum," "near-maximum," and average values for inputs to the exposure equation) to arrive at the HEEE, the committee offers another caution: EPA has not demonstrated that these combinations of point estimates do in fact yield an output that reliably falls at the desired location within the overall distribution of exposure variability (that is, in the "conservative" portion of the distribution, but not above the confines of the entire distribution). Accordingly, EPA should validate (through generic simulation analyses and specific monitoring efforts) that its point-estimation methods do reasonably and reliably approximate what would be achieved via the more sophisticated direct-measurement or Monte Carlo methods (that is, a point estimate at approximately the 100[(N-1)/N]th percentile of the distribution). The fourth method, it should go without saying, is highly arbitrary and should not be used unless the bounding estimate can be shown to be "ultraconservative" and the concept of "backing off'' is better defined by EPA.
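Such a validation could be done generically along the following lines: simulate the full exposure distribution by Monte Carlo, form the point-estimate combination, and see what percentile it actually lands on. The input distributions below are hypothetical placeholders, not EPA's:

```python
# Sketch: check where a "maximum x average x average" point-estimate
# combination falls within a full Monte Carlo exposure distribution.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

conc   = rng.lognormal(np.log(1.0), 0.8, n)   # concentration, ug/m^3 (hypothetical)
intake = rng.normal(20.0, 4.0, n).clip(5.0)   # breathing rate, m^3/day (hypothetical)
frac   = rng.beta(4.0, 2.0, n)                # fraction of time exposed (hypothetical)
exposure = conc * intake * frac               # full variability distribution

# Point-estimate combination: "maximum" concentration (99th percentile)
# combined with average intake and average time fraction.
point = np.percentile(conc, 99) * intake.mean() * frac.mean()

pctile = (exposure < point).mean() * 100
print(f"point estimate lands at the {pctile:.1f}th percentile")
# If this falls short of the intended target (e.g., ~100[(N-1)/N]),
# the point-estimation recipe is less conservative than advertised.
```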
Human beings vary substantially in their inherent susceptibility to carcinogenesis, both in general and in response to any specific stimulus or biologic mechanism. No point estimate of the carcinogenic potency of a substance will apply to all individuals in the human population. Variability affects each step in the carcinogenesis process (e.g., carcinogen uptake and metabolism, DNA damage, DNA repair and misrepair, cell proliferation, tumor progression, and metastasis). Moreover, the variability arises from many independent risk factors, some inborn and some environmental. On the basis of substantial theory and some observational evidence, it appears that some of the individual determinants of susceptibility are distributed bimodally (or perhaps trimodally) in the human population; in such cases, a class of hypersusceptible people (e.g., those with germ-line mutations in tumor-suppressor genes) might be at tens, hundreds, or thousands of times greater risk than the rest of the population. Other determinants seem to be distributed more or less continuously and unimodally, with either narrow or broad variances (e.g., the kinetics or activities of enzymes that activate or detoxify particular pollutants).
To the extent that those issues have been considered at all with respect to carcinogenesis, EPA and the research community have thought almost exclusively in terms of the bimodal type of variation, with a normal majority and a hypersusceptible minority (ILSI, 1992). That model might be appropriate for noncarcinogenic effects (e.g., normal versus asthmatic response to SO2), but it ignores a major class of variability vis-à-vis cancer (the continuous, "silent" variety), and it fails to capture even some bimodal cases in which hypersusceptibility might be the rule, rather than the exception (e.g., the poor-acetylator phenotype).
The magnitude and extent of human variability due to particular acquired or inherited cancer-susceptibility factors should be determined through molecular epidemiologic and other studies sponsored by EPA, the National Institutes of Health, and other federal agencies. Two priorities for such research should be
Results of the research should be used to adjust and refine estimates of risks to individuals (identified, identifiable, or unidentifiable) and estimates of expected incidence in the general population.
The population distribution of interindividual variation in cancer susceptibility cannot now be estimated with much confidence. Preliminary studies of this question, both biologic (Hattis et al., 1986) and epidemiologic (Finkel, 1987), have concluded that the variation might be described as approximately lognormal, with about 10% of the population being different by a factor of 25-50 (either more or less susceptible) from the median individual (i.e., the logarithmic standard deviation of the distribution is approximately 2.0). While the estimated standard deviation of a susceptibility distribution suggested by these studies is uncertain, in light of the biochemical and epidemiological data reviewed earlier in this chapter it is currently not scientifically plausible that the U.S. population is strictly homogeneous in susceptibility to cancer induction by cancer-causing chemicals. EPA's guidelines are silent regarding person-to-person variations in susceptibility, thereby treating all humans as identical, despite substantial evidence and theory to the contrary. This is an important "missing default" in the guidelines. EPA does assume (although its language is not very clear in this regard) that the median human has susceptibility similar to that of the particular sex-strain combination of rodent that responds most sensitively of those tested in bioassays, or susceptibility identical with that of the particular persons observed in epidemiologic studies. These latter assumptions are reasonable as a starting point (Allen et al., 1988), but of course they could err substantially in either direction for a specific carcinogen or for carcinogens as a whole.
The missing default (variations in susceptibility among humans) and questionable default (average susceptibility of humans) are related in a straightforward manner. Any error of overestimation in rodent-to-human scaling (or in epidemiologic analysis) will tend to counteract the underestimation errors that must otherwise be introduced into some individual risk estimates by EPA's current practice of not distinguishing among different degrees of human susceptibility. Conversely, any error of underestimation in interspecies scaling will exacerbate the underestimation of individual risks for every person of above-average susceptibility. Therefore, EPA should increase its efforts to validate or improve the default assumption that the median human has similar susceptibility to that of the rodent strain used to compute potency, and should attempt to assess the plausible range of uncertainty surrounding the existing assumption. For further information, see the discussion in Chapter 11.
It can be argued, in addition, that EPA has a responsibility, insofar as it is practicable, to protect persons regardless of their individual susceptibility to carcinogenesis (we use protect here not in the absolute, zero-risk sense, but in the sense of ensuring that excess individual risk is within acceptable levels or below a de minimis level). It is unclear from the language in CAAA-90 Section 112(f)(2) whether the "individual most exposed to emissions" is intended to mean the person at highest risk when both exposure and susceptibility are taken into account, but this interpretation is both plausible and consistent with the fact that a major determinant of susceptibility is the degree of metabolism of inhaled or ingested pollutants and the resulting exposure of somatic and germ cells to carcinogenic compounds (i.e., two people of different susceptibilities will likely be "exposed" to a different extent even if they breathe or ingest identical ambient concentrations). Moreover, EPA has a record of attempting to protect people with a combination of high exposure and high sensitivity, as seen in the National Ambient Air Quality Standards (NAAQS) program for criteria air pollutants (e.g., SO2, NOx, ozone, etc.).
Therefore, EPA should adopt an explicit default assumption for susceptibility before it begins to implement those decisions called for in the Clean Air Act Amendments of 1990 that require the calculation of risks to individuals. EPA could choose to incorporate into its cancer risk estimates for individual risk (not for population risk) a "default susceptibility factor" greater than the implicit factor of 1 that results from treating all humans as identical. EPA should explicitly choose a default factor greater than 1 if it interprets the statutory language to apply to individuals with both high exposure and above-average susceptibility. EPA could explicitly choose a default factor of 1 for this purpose, if it interprets the statutory language to apply to the person who is average (in terms of susceptibility) but has high exposure. Or, preferably, EPA could develop a "default distribution" of susceptibility, and then generate the joint distribution of exposure and cancer potency (in light of susceptibility), to find the upper 95th or 99th percentile of risk for use in a risk assessment. The distribution is the more desirable way of dealing with this problem, because it takes explicit account of the joint probability (which may be large or small) of a highly exposed individual who is also highly susceptible.
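A minimal sketch of this "default distribution" approach, assuming lognormal exposure and susceptibility distributions (all parameter values are hypothetical):

```python
# Sketch: joint Monte Carlo over exposure and susceptibility to find
# upper percentiles of individual risk. All parameters hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

potency_median = 1e-6                          # median risk per unit relative exposure
exposure = rng.lognormal(0.0, 1.0, n)          # relative exposure variability
suscept  = rng.lognormal(0.0, 2.0, n)          # relative susceptibility variability
risk = np.minimum(potency_median * exposure * suscept, 1.0)

for q in (50, 95, 99):
    print(f"{q}th percentile of individual risk: {np.percentile(risk, q):.2e}")
# The upper percentiles automatically reflect the joint probability of
# being both highly exposed and highly susceptible.
```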
Many of the currently known individual determinants of susceptibility vary by factors of hundreds or thousands at the cellular level; however, many of these risk factors (see Appendix I-2) tend to confer excess risks of approximately a factor of 10 on predisposed people, compared with "normal" ones. Although the total effect of the many such factors may cause susceptibility to vary upwards by more than a factor of 10, some members of the committee suggest that a default factor of 10 might be a reasonable starting point, if EPA wished to apply the statutory risk criteria (see Chapter 2) to the more susceptible members of the human population. Conversely, other members of the committee do not consider an explicit factor of 10 to be justified at this time. A 10-fold adjustment might yield a reasonable best estimate of the high end of the susceptibility distribution for some pollutants when only a single predisposing factor divides the population into normal and hypersusceptible people.
If any susceptibility factor greater than 1 is applied, the short-term practical effect will be to increase all risk assessments for individual risk by the same factor, except for chemical-specific risk estimates where there is evidence that the variation in human susceptibility is larger or smaller for that chemical than for other substances. Such a general adjustment of either the default factor or default distribution might become appropriate when more information becomes available about the nature and extent of interindividual variations in susceptibility.
Individual risk assessments may depart from the new default when it can be shown either that humans are systematically either more or less sensitive than rodents to a particular chemical or that interindividual variation is markedly either more or less broad for this chemical than for the typical chemical. Therefore, in the spirit of our recommendations in Chapter 6 and Appendixes N-1 and N-2, the committee encourages EPA both to rethink the new default in general and to depart from it in specific cases when appropriately justified by general principles the agency should articulate.
Although it is known that there are susceptibility differences among people due to such factors as age, sex, race, and ethnicity, the nature and magnitude of these differences are not well known or understood; therefore, it is critical that additional research be pursued. As knowledge increases, science may be able to describe differences in the population at risk and recognize these differences with some type of default or distribution, although caution will be necessary to ensure that broad correlations between susceptibility and age, sex, etc., are not interpreted as deterministic predictions, valid for all individuals, or used in areas outside of risk assessment without proper respect for autonomy, privacy, and other social values.
In addition to adopting a default assumption for the effect of variations in susceptibility on individual risk, EPA should consider whether these variations might affect calculations of population risk as well. Estimates of population risk (i.e., the number of cases of disease or the number of deaths that might occur as a result of some exposure) are generally based on estimates of the average individual risk, which are then multiplied by the number of exposed persons to obtain a population risk estimate. The fact that individuals have unique susceptibilities should thus be irrelevant to calculating population risk, except if ignoring these variations biases the estimate of average risk. Some observers have pointed out a logical reason why EPA's current procedures might misestimate average risk. Even assuming that allometric or other interspecies scaling procedures correctly map the risk to test animals onto the "risk to the average human" (an assumption we encourage EPA to explore, validate, or refine), it is not clear which "average" is correctly estimated: the median (i.e., the risk to a person who has susceptibility at the 50th percentile of the population distribution) or the expected value (i.e., the average individual risk, taking into account all of the risks in the population and their frequency or likelihood of occurrence).
If person-to-person variation in susceptibility is small or symmetrically distributed (as in a normal distribution), the median and the average (or mean) are likely to be equivalent, or so similar that this distinction is of no practical importance. However, if variation is large and asymmetrically distributed (as in a lognormal distribution with logarithmic standard deviation on the order of 2.0 or higher; see the earlier example), the mean may exceed the median by roughly an order of magnitude or more.
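For the lognormal case the ratio is exactly exp(sigma^2/2); a one-line check for the log-sd of 2.0 discussed above:

```python
# Check: for a lognormal distribution, mean/median = exp(sigma^2 / 2).
import numpy as np
sigma = 2.0
print(f"mean/median = {np.exp(sigma**2 / 2):.1f}")   # about 7.4; larger for higher sigma
```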
The committee encourages EPA to explore whether extrapolations made from animal bioassay data (or from epidemiological studies) at high exposures are likely to be appropriate for the median or for the average human, and to explore what response is warranted for the estimation and communication of population risk if the median and average are believed to differ significantly. As an initial position, EPA might assume that animal tests and epidemiological studies in fact lead to risk estimates for the median of the exposed group. This position would be based on the logic that at high exposures and hence high risks (that is, on the order of 10^-2 for most epidemiologic studies, and 10^-1 for bioassays), the effect of any variations in susceptibility within the test population would be truncated or attenuated. In such cases, any test animal or human subject whose susceptibility was X-fold higher than the median would face risks (far) less than X-fold higher than the median risk, because in no case can risk exceed 1.0 (certainty), and thus the effect of these individuals on the population average would not be in proportion to their susceptibilities. On the other hand, when extrapolating to ambient exposures where the median risk is closer to 10^-6, the full divergence between median and average in the general population would presumably manifest itself.
If, therefore, current procedures correctly estimate the median risk, then estimates of population risk would have to be increased by a factor corresponding to the ratio of the average to the median.
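A small simulation can illustrate the truncation argument, assuming (hypothetically) a lognormal susceptibility distribution with logarithmic standard deviation 2.0:

```python
# Sketch: how truncation at risk = 1.0 attenuates the mean/median gap
# at high (bioassay-like) risks but not at low (ambient) risks.
import numpy as np

rng = np.random.default_rng(4)
suscept = rng.lognormal(0.0, 2.0, 1_000_000)  # hypothetical, log-sd 2.0

for median_risk in (1e-1, 1e-2, 1e-6):
    risk = np.minimum(median_risk * suscept, 1.0)   # risk cannot exceed certainty
    ratio = risk.mean() / np.median(risk)
    print(f"median risk {median_risk:.0e}: mean/median = {ratio:.1f}")
# At 1e-1 the ratio is strongly attenuated by the cap at 1.0; at 1e-6
# it approaches the untruncated lognormal value exp(sigma^2/2), ~7.4.
```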
Other Changes in Risk-Assessment Methods
Even when the alternative to the default model hinges on a qualitative, rather than a quantitative, distinction, such as the possible irrelevance to humans of the alpha-2u-globulin mechanism involved in the initiation of some male rat kidney tumors, the new model must be checked against the possibility that some humans are qualitatively different from the norm. Any alternative assumption might be flawed if it turns out to be biologically inappropriate for some fraction of the human population. Finally, although epidemiology is a powerful tool that can be used as a "reality check" on the validity of potency estimates derived from animal data, there must be a sufficient amount of human data for this purpose. The sample size needed for a study to have a given power level increases under the assumption that humans are not of identical susceptibility.
When EPA proposes to adopt an alternative risk-assessment assumption (such as use of a PBPK model, use of a cell-kinetics model, or the determination that a given animal response is "not relevant to humans"), it should consider human interindividual variability in estimating the model parameters or verifying the assumption of "irrelevance." If the data are not available that would enable EPA to take account of human variability, EPA should be free to make any reasonable inferences about its extent and impact (rather than having to collect or await such data), but should encourage other interested parties to collect and provide the necessary data. In general, EPA should ensure that a similar level of variability analysis is applied to both the default and the alternative risk assessment, so that it can compare estimates of equal conservatism from each procedure.
EPA often does not adequately communicate to its own decision-makers, to Congress, or to the public the variabilities that are and are not accounted for in any risk assessment and the implications for the conservatism and representativeness of the resulting risk numbers. Each of EPA's reports of a risk assessment should state its particular assumptions about human behavior and biology and what these do and do not account for. For example, a poor risk characterization for a hazardous air pollutant might say "The risk number R is a plausible upper bound." A better characterization would say, "The risk number R applies to a person of reasonably high-end behavior living at the fenceline 8 hours a day for 35 years." EPA should, whenever possible, go further and state, for example, "The person we are modeling is assumed to be of average susceptibility, but eats F grams per day of food grown in his backyard; the latter assumption is quite conservative, compared with the average."
Risk-communication and risk-management decisions are more difficult when, as is usually the case, there are both uncertainty and variability in key risk-assessment inputs. It is important, whenever possible, to separate the two phenomena conceptually, perhaps by presenting multiple analyses. For its full (as opposed to screening-level) risk assessments, EPA should acknowledge that all its risk numbers are made up of three components: the estimated risk itself (X), the level of confidence (Y) that the risk is no higher than X, and the percent of the population (Z) that X is intended to apply to in a variable population. EPA should use its present practice of saying that "the plausible upper-bound risk is X" only when it believes that Y and Z are both close to 100%. Otherwise, it should use statements like, "We are Y% certain that the risk is no more than X to Z% of the population," or use an equivalent pictorial representation (see Figure 10-2).
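One way such a three-part statement could be generated, sketched here under hypothetical distributions, is a two-dimensional Monte Carlo analysis with an outer loop over uncertain quantities and an inner loop over interindividual variability:

```python
# Sketch: a two-dimensional Monte Carlo separating uncertainty (outer
# loop) from variability (inner loop). Distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n_unc, n_var = 1_000, 10_000
Z = 95                         # percent of the population covered
Y = 90                         # desired confidence level, percent

risk_at_Z = np.empty(n_unc)
for i in range(n_unc):
    potency = rng.lognormal(np.log(1e-6), 0.7)     # uncertain potency (hypothetical)
    exposure = rng.lognormal(0.0, 1.0, n_var)      # variable relative exposure
    risk_at_Z[i] = np.percentile(potency * exposure, Z)

X = np.percentile(risk_at_Z, Y)
print(f"We are {Y}% certain that the risk is no more than {X:.1e} "
      f"to {Z}% of the population.")
```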
As an alternative or supplement to estimating the value of Z, EPA can and should try to present multiple scenarios to explain variability. For example, EPA could present one risk number (or preferably, an uncertainty distribution; see Chapter 9) that explicitly applies to a "person selected at random from the population," one that applies to a person of reasonably high susceptibility but "average" behavior (mobility, breathing rate, food consumption, etc.), and one that applies to a person whose susceptibility and behavioral variables are both in the "reasonably high" portion of their distributions.
Identifiability And Risk Assessment
Not all of the suggestions presented here, especially those regarding variation in susceptibility, will apply in every regulatory situation. The committee notes that in the past, whenever persons of high risk or susceptibility have been identified, society has tended to feel a far greater responsibility to inform and protect them. For such identifiable variability, the recommendations in this section are particularly salient. However, interindividual variability might be important even when the specific people with high and low values of the relevant characteristic cannot currently be identified.8 Regardless of whether the variability is now identifiable (e.g., consumption rates of a given foodstuff), difficult to identify (e.g., presence of a mutant allele of a tumor-suppressor gene), or unidentifiable (e.g., a person's net susceptibility to carcinogenesis), the committee agrees that it is important to think about its potential magnitude and extent, to make it possible to assess whether existing procedures to estimate average risks and population incidence are biased or needlessly imprecise.
In contrast with issues involving average risk and incidence, however, some members of the committee consider the distribution of individual susceptibilities, and the uncertainty as to where each person falls in that distribution, to be irrelevant if the variation is and will remain unidentifiable. For example, some argue that people should be indifferent between a situation wherein their risk is determined to be precisely 10⁻⁵ and one wherein they have a 1% chance of being highly susceptible (with risk = 10⁻³) and a 99% chance of being immune, with no way to know which applies to whom. In both cases, the expected value of individual risk is 10⁻⁵, and it can be argued that the distribution of risks is the same, in that without the prospect of identifiability no one actually faces a risk of 10⁻³, but just an equal chance of facing such a risk (Nichols and Zeckhauser, 1986).
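For concreteness, the expected-value arithmetic behind this comparison can be written out (this simply restates the numbers in the example above):

```latex
E[\text{risk}] \;=\; 0.01 \times 10^{-3} \;+\; 0.99 \times 0 \;=\; 10^{-5}
```

which equals the precisely determined risk of 10⁻⁵ in the first situation; the disagreement described below is over whether the two situations should nonetheless be evaluated differently.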
Some of the members also argue that as we learn more about individual susceptibility, we will eventually reach a point where we will know that some individuals are at extremely high risk (i.e., carried to its extreme, an average individual risk of 10⁻⁶ may really represent cases where one person in each million is guaranteed to develop cancer while everyone else is immune). As we approach this point, they contend, society will have to face up to the fact that in order to guarantee that everyone in the population faces "acceptable" low levels of risk, we would have to reduce emissions to an impossibly low extent.
Other committee members reject or deem irrelevant the notion that risk is ultimately either zero or 1; they believe that the information about unidentifiable variability must be reported, because it affects both an individual's assessment of how foreboding or tolerable a risky situation is and society's assessment of how just or unjust the distribution of risks is. To bolster their contentions, these members cite literature about the limitations of expected-utility theory, which takes the view, contradicted by actual survey data, that the distribution of risky outcomes about their mean values should not affect the individual's evaluation of the situation (Shrader-Frechette, 1985; Machina, 1990), and empirical findings that the skewness of lotteries over risky outcomes matters to people even when the mean and variance are kept constant (Lopes, 1984). They also argue that EPA should maintain consistency in how it handles exposure variability, which it reports even when the precise persons at each exposure level cannot be identified; i.e., EPA reports the variation in air concentration and the maximal concentration from a source even when (as is usually the case) it cannot predict exactly where the maximum will occur. If susceptibility is in large part related to person-to-person differences in the amount of carcinogenic material that a person's cells are exposed to via metabolism, then it is essentially another form of exposure variability, and the parallel with ambient (outside-the-body) exposure is close. Finally, they claim that having agreed that issues of pure uncertainty are important, EPA (and the committee) must be consistent and regard unidentifiable variability as relevant (see Appendix I-3). Our recommendations in Chapter 9 reflect our view that uncertainty is important because individuals and decision-makers do regard values other than the mean as highly relevant. If susceptibility is unidentifiable, then to the individual it represents a source of uncertainty about his or her individual risk, and many members of the committee believe it must be communicated just as uncertainty should be.
Social-science research aimed at clarifying the extent to which people care about unidentifiable variability in risk, the costs of accounting for it in risk management, and the extent to which people want government to take such
variation and costs into account in making regulatory decisions and in setting priorities might be helpful in resolving these issues.
Findings And Recommendations
The committee's findings and recommendations are briefly summarized below.
Historically, EPA has defined the maximally exposed individual (MEI) as the worst-case scenario: a continuous 70-year exposure to the maximal estimated long-term average concentration of a hazardous air pollutant. Departing from this practice, EPA has recently published methods for calculating bounding and "reasonably high-end" estimates of the highest actual or possible exposures using a real or default distribution of exposure within a population. The new exposure guidelines do not explicitly define a point on this distribution corresponding to the highest expected exposure level of an individual.
EPA has recently begun incorporating into its exposure distributions assumptions based on the national average number of years spent living in one residence, as a replacement for its assumption of exposure over a 70-year lifetime. Similar departures from defaults have been proposed for the time an individual spends at a residence each day, as a replacement for the 24-hours-per-day assumption. Such analyses, however, assume that individuals move to a location of zero exposure whenever they change residences or leave home each day. In fact, people who move from one place to another, whether by changing residence or by commuting between home and office, may encounter exposures to the same pollutant that range from relatively high to none. Moreover, some exposures to different pollutants may be effectively interchangeable: moving from place to place may yield exposures to different pollutants that, if interchangeable in their effects, can be aggregated into a single "exposure." Whether or not that assumption of interchangeability is realistic, people who move from place to place are exposed over time to a mixture of pollutants, some simultaneously and others at separate times. The real problem is thus a more complex one of how to aggregate exposures to mixtures, as well as multiple exposures of varying intensity to a single pollutant; a simplistic analysis based on a simple distribution of residence times is not appropriate.
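The single-pollutant half of this point can be illustrated with a small simulation comparing the "zero exposure after moving" assumption with an alternative in which each new residence carries a freshly drawn exposure level. The distributional forms, the 9-year mean residence time, and all other parameter values below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, lifetime_yr = 10_000, 70

def lifetime_avg_dose(redraw_after_move: bool) -> np.ndarray:
    """Simulate each person's lifetime average exposure concentration."""
    doses = np.zeros(n_people)
    for i in range(n_people):
        years_left, total = lifetime_yr, 0.0
        conc = rng.lognormal(mean=0.0, sigma=1.0)  # first residence (invented)
        while years_left > 0:
            stay = min(rng.exponential(9.0), years_left)  # invented 9-yr mean stay
            total += conc * stay
            years_left -= stay
            # Either exposure drops to zero after a move (the simplistic
            # default) or a fresh concentration is drawn for the next residence.
            conc = rng.lognormal(mean=0.0, sigma=1.0) if redraw_after_move else 0.0
        doses[i] = total / lifetime_yr
    return doses

for label, redraw in [("zero after moving", False), ("redraw each move", True)]:
    d = lifetime_avg_dose(redraw)
    print(f"{label}: mean {d.mean():.2f}, 95th pct {np.percentile(d, 95):.2f}")
```

In this caricature, the zero-exposure assumption sharply understates lifetime average exposure whenever the destinations are not in fact exposure-free.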
EPA has dealt little with the issue of human variability in susceptibility; the limited efforts to date have focused exclusively on variability relative to noncarcinogenic effects (e.g., normal versus asthmatic response to SO2). The appropriate response to variability for noncancer end points (i.e., identify the characteristics of "normal" and "hypersusceptible" individuals, and then decide whether or not to protect both groups) might not be appropriate for carcinogenesis, in which variability might well be continuous and unimodal, rather than either-or.
EPA does not account for person-to-person variations in susceptibility to cancer; it thereby treats all humans as identical in this respect in its risk calculations.
EPA does not adequately communicate to its own decision-makers, to Congress, or to the public the variabilities that are and are not accounted for in any risk assessment and the implications for the conservatism and representativeness of the resulting risk numbers.
1. Specialists in different fields often use the term "variability" to refer to a dispersion of possible or actual values associated with a particular quantity, often with reference to the random variability associated with any estimate of an unknown (i.e., uncertain) quantity. This report, unless stated otherwise, uses the terms interindividual variability, variability, and interindividual heterogeneity interchangeably to refer to individual-to-individual differences in quantities associated with predicted risk, such as in measures of or parameters used to model ambient concentration, uptake or exposure per unit ambient concentration, biologically effective dose per unit exposure, and increased risk per unit effective dose.
2. This assumes that risk is linear in long-term average dose, which is one of the bases of the classical models of carcinogenesis (e.g., the linearized multistage (LMS) dose-response model using administered dose). However, when one moves to more sophisticated models of the exposure-dose (i.e., PBPK) and dose-response (i.e., biologically motivated or cell-kinetics) relationships, shorter averaging times become important, even though the health end point may manifest itself over the long term. For example, the cancer risk from a chemical that is both metabolically activated and detoxified in vivo may not be a function of total exposure, but only of those periods of exposure during which detoxification pathways cannot keep pace with activating ones. In such cases, data on average long-term concentrations (and interindividual variability therein) may completely miss the only toxicologically relevant exposure periods.
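A caricature of this point can be sketched as follows. The threshold form of "detoxification capacity" below is a crude stand-in for a real PBPK description of saturable kinetics, and all numbers are invented; the sketch shows only that two exposure profiles with identical long-term averages can deliver very different effective doses:

```python
import numpy as np

# Caricature of saturable detoxification: activation outpaces detoxification
# only when concentration exceeds a capacity threshold, so only the excess
# above the threshold is treated as toxicologically effective.
DETOX_CAPACITY = 2.0  # illustrative units

def effective_dose(conc_series: np.ndarray) -> float:
    """Sum the excess of concentration over detoxification capacity."""
    return float(np.sum(np.maximum(conc_series - DETOX_CAPACITY, 0.0)))

hours = 1000
steady = np.full(hours, 1.0)   # constant exposure at 1.0 units
episodic = np.zeros(hours)     # same long-term average, delivered in spikes
episodic[::10] = 10.0          # 1 hour in 10 at ten times the steady level

print("long-term averages:", steady.mean(), episodic.mean())  # both 1.0
print("effective doses:   ", effective_dose(steady),           # 0.0
      effective_dose(episodic))                                # 800.0
```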
3. As discussed above, in many cases variability that exists over a short averaging time may grow less and less important as the averaging time increases. For example, if adults breathe on average 20 m3 of air per day, then over any random 1-minute period in a group of 1,000 adults, some (those involved in heavy exertion) would probably be breathing much more than the average value of 0.014 m3/min, and others (those asleep) much less. Over the course of a year, however, the variation around the average value of 7,300 m3/yr would be much smaller, as periods of heavy exercise, sleep, and average activity "average out." On the other hand, some varying human characteristics do not substantially converge over longer averaging periods. For example, the daily variation in the amount of apple juice people drink probably mirrors the monthly and yearly variation as well: those individuals who drink no apple juice on a random day are probably those who rarely or never drink it, while those at the other "tail" of the distribution (drinking perhaps three glasses per day) probably tend to repeat this pattern day after day. In other words, the distribution of "glasses drunk per year" probably extends all the way from zero to 365 × 3 = 1,095, rather than varying narrowly around the midpoint of this range.
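Both halves of this footnote can be illustrated numerically. In the sketch below, the within-person spread of minute-to-minute breathing rates, the 50% prevalence of apple-juice drinkers, and the Poisson form for daily glasses are all invented for illustration; the breathing calculation simply applies the fact that the mean of n independent draws has a standard deviation sqrt(n) times smaller than a single draw:

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, minutes_per_year = 1000, 525_600

# Breathing: minute-by-minute rates vary widely around 0.014 m3/min,
# but the yearly average converges (assuming, crudely, independent minutes).
minute_mean, minute_sd = 0.014, 0.010        # invented within-person spread
yearly_sd_of_mean = minute_sd / np.sqrt(minutes_per_year)
print(f"minute-level CV: {minute_sd/minute_mean:.0%}, "
      f"yearly-average CV: {yearly_sd_of_mean/minute_mean:.4%}")

# Apple juice: a stable trait (drinker or not) does not average out over time.
is_drinker = rng.random(n_people) < 0.5      # invented 50% prevalence
glasses_per_day = np.where(is_drinker, rng.poisson(2.0, n_people), 0)
glasses_per_year = glasses_per_day * 365
print("yearly glasses, 5th-95th percentile:",
      np.percentile(glasses_per_year, [5, 95]))
```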
4. Similarly, the two persons might face equal cancer risks at exposures that differed 10,000-fold. An alternative definition, which would be more applicable to threshold effects, would define the difference in susceptibility as the ratio of doses needed to produce the same effect in two different individuals.
5. The logarithmic standard deviation is equivalent to the standard deviation of the normal distribution corresponding to the particular lognormal distribution. If one takes the antilog of the logarithmic standard deviation, one obtains the "geometric standard deviation" (GSD), which has a more intuitively appealing definition: N standard deviations away from the median corresponds to multiplying or dividing the median by the GSD raised to the power N.
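In formulas (a restatement of the definitions above, writing σ_ln for the logarithmic standard deviation):

```latex
\mathrm{GSD} = e^{\sigma_{\ln}}, \qquad
x_{N} = \mathrm{median} \times \mathrm{GSD}^{\,N}
```

where x_N is the value N standard deviations above (for positive N) or below (for negative N) the median. For example, with a median of 1 and a GSD of 2, the value 2 standard deviations above the median is 1 × 2² = 4.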
6. Moreover, existing studies of overall variation in susceptibility suggest that a factor of 10 probably subsumes one or perhaps 1.5 standard deviations above the median for the normal human population. That is, assuming (as EPA does via its explicit default) that the median human and the rodent strain used to estimate potency are of similar susceptibility, an additional factor of 10 would equate the rodent response to approximately the 85th or 90th percentile of human response. That would be a protective, but not a highly conservative, safety factor, inasmuch as perhaps 10 percent or more of the population would be (much) more susceptible than this new reference point.
Inclusion of a default factor of 10 could bring cancer risk assessment partway into line with the prevailing practice in noncancer risk assessment, wherein one of the factors of 10 that are often applied is meant to account for person-to-person variation in sensitivity.
However, if EPA decides to use a factor of 10, it should emphasize that this is a default procedure that tries to account for some of the interindividual variation in dose-response relationships, but that in specific cases it may be too high or too low to provide the optimal degree of "protection" (or to reduce risks to "acceptable" levels) for persons of truly unusual susceptibility. Nor does it ensure that risk estimates (even in combination with exposure estimates that might actually correspond to a maximally exposed or reasonably high-end person) are predictive or conservative for the actual "maximally-at-risk" person. Indeed, some persons of extremely high susceptibility might, as a consequence of that susceptibility, not face high exposures. It might also be the case that some risk factors for carcinogenesis predispose those affected to other diseases from which it might be impossible to protect them.
7. For example, suppose the median income in a country was $10,000, but 5 percent of the population earned 25 times less or more than the median and an additional 1 percent earned 100 times less or more. Then the average income would be (0.05)($400) + (0.05)($250,000) + (0.01)($100) + (0.01)($1,000,000) + (0.88)($10,000) = $31,321, or more than three times the median income.
8. "Currently" is an important qualifier given the rapid increases in our understanding of the molecular mechanisms of carcinogenesis. During the next several decades, science will doubtless become more adept at identifying individuals with greater susceptibility than average, and perhaps even pinpoint specific substances to which such individuals are particularly susceptible.