6
Models

INTRODUCTION

Mathematical models use systems of equations, based on a conceptual framework, to describe interactions among components of physical, chemical, or biological systems. The conceptual component of a model consists of the assumptions and approximations that reduce a complex problem to a simplified, more manageable one. Models are used because they are an efficient way to examine the cause-effect relationships among components (or variables) in a system.

The bases of mathematical models are the fundamental physical and chemical laws, such as the laws of conservation of mass, energy, and momentum. Modelers must choose the level of detail in which components of a system are described. Clearly, an extremely rigorous model that includes every phenomenon in microscopic detail would be so complex that it would take a long time to develop and might be impossible to use. A compromise is always required between a rigorous description and getting an answer that is meaningful for a specific application with limited resources. This compromise involves making many simplifying assumptions, which should be carefully considered and listed. They impose limitations on the model that should always be kept in mind when evaluating the model's results.

Models are useful tools for quantifying the relationship between air-pollutant exposure and important variables, as well as for estimating exposures in situations where measurements are unavailable. Exposure models may obviate extensive environmental or personal measurement programs by providing estimates of population exposures that are based on small numbers of representative measurements. The challenge is to develop appropriate models that allow for extrapolation from relatively few exposure measurements to a much larger population (Sexton and Ryan, 1988).

A practical approach to assessing exposure through modeling requires decisions as to how precise and accurate the assessments need to be. The ultimate focus is on the biological effects of exposure, so decisions on accuracy and precision require some quantitative knowledge of the biological effects. Limitations on resources require the exposure analyst to choose the most economical methods to answer the question, "How accurately must the exposure or exposure-potential estimate be to provide the needed information for risk estimation, risk management, or epidemiology?" For risk-related problems, the analyst seeks a magnitude of exposure that defines the threshold of "significant risk." In some cases, the threshold has already been set with the establishment of an exposure limit (e.g., by ACGIH, OSHA, or EPA). In other cases, the threshold needs to be ascribed on the basis of available information on possible health effects of the contaminant of interest or a structural analogue. The judgment of those assigning limits should be driven by the quality of the data.

For risk assessment and management, health-effects data bases with a high degree of uncertainty should result in concomitantly high levels of attributed risk per unit of exposure, that is, a low exposure limit as a prudent safeguard against underestimating the health-effects potential of the agent. Thus, extremely meager information on contaminants and biological effects will result in low exposure limits until the data base can be improved to justify a higher limit. For epidemiological studies, the modeler must understand the study design sufficiently to recognize the trade-offs between levels of uncertainty in exposure estimates and the ultimate risk evaluation, which also depends on the level of uncertainty in the health-effects data.

Given an exposure limit, the analyst needs to determine whether any particular exposure scenario constitutes a significant fraction of that limit. However, the analyst needs only to use models with "enough" sophistication to do the job with the least cost. Simple models can be used first to explore an exposure scenario, because they require relatively few data and are thus less expensive to implement than the more sophisticated techniques. Simple models generally yield biased estimates of exposure. It is recommended that only models known to be conservative be used in screening calculations so that any bias that exists is protective of the exposed individual. Consider a contaminant with a vapor pressure of 0.1 torr, a molecular weight of 100, and a daily exposure limit of 8,000 (mg/m3)·hr (an 8-hour time-weighted average of 1,000 mg/m3). A simple model that assumes complete saturation of the air with this compound will render an estimated exposure of 4,300 (mg/m3)·hr, or about 50% of the exposure limit. Assuming further that this compound is not present in particulate form (which would increase the amount of contaminant inhaled) allows one to estimate a lack of significant risk vis-a-vis the exposure limit. The true exposure will most likely be below this very conservative estimate, but greater quality of assessment is not needed, because this is a worst-case scenario.
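The worst-case screening estimate in this example can be reproduced with a few lines of arithmetic. The following Python sketch assumes an air temperature of 25 degrees C and ideal-gas behavior (assumptions made here for illustration; the text specifies only the vapor pressure, molecular weight, and exposure limit):

    # Worst-case "saturated air" screening estimate for the example in the text.
    # Assumed here: 25 degrees C (298 K) and ideal-gas behavior.
    R = 8.314               # gas constant, J/(mol*K)
    T = 298.15              # temperature, K (assumed)
    p_vap = 0.1 * 133.322   # vapor pressure: 0.1 torr expressed in Pa
    mw = 100.0              # molecular weight, g/mol

    # Ideal-gas saturation concentration: (p/RT) mol/m3, converted to mg/m3.
    c_sat = p_vap / (R * T) * mw * 1000.0    # about 540 mg/m3

    exposure_8hr = c_sat * 8.0               # (mg/m3)*hr, about 4,300
    limit = 8000.0                           # daily exposure limit, (mg/m3)*hr
    print(f"saturation concentration: {c_sat:.0f} mg/m3")
    print(f"8-hr worst-case exposure: {exposure_8hr:.0f} (mg/m3)*hr, "
          f"{100 * exposure_8hr / limit:.0f}% of the limit")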

Exposure models can be used to identify major exposure parameters (e.g., sources, emission rates) and to assist epidemiological studies and risk assessments. Although the input required for exposure models depends upon the nature of the model, all exposure models require information on who is exposed, to what contaminant, for how long, and under what circumstances (Davis and Gusman, 1982). Many models also require information on the sources, transport, transformation, and fate of the contaminants of interest. Models generally rely on assumptions and approximations to describe quantitatively cause-effect relationships that are otherwise difficult to determine. In this way models are used to estimate exposures when it is impractical or impossible to measure exposures of an individual or population to a contaminant. Despite the simplifications inherent in models, they provide insights and information about the relationships between exposure and the independent variables that determine exposure.

Models discussed in this chapter are classified into two broad categories: those that predict exposure (in units of concentration multiplied by time) and those that predict concentration (in units of mass per volume). Although concentration models are not truly exposure models, their output can be used to estimate exposures when combined with information on human time-activity patterns (see Figure 6.1). Since exposure occurs when humans are in contact with contaminant(s), exposure models generally combine information on the concentrations in microenvironments with information on activity patterns. The output of such models is a prediction or description of exposure for individuals or populations.

Exposure models can be used to estimate individual exposures or the distribution of individual exposures in a population. Activity patterns and microenvironmental contaminant concentrations, the inputs to exposure prediction models, can be measured or modeled. The microenvironmental concentrations and the activity pattern can vary from individual to individual, and from time period to time period. Three types of models have been developed to estimate population exposures: (a) simulation models such as SHAPE (Ott, 1981, 1984) and NEM (Johnson, 1984; Johnson and Paul, 1984), (b) the convolution model by Duan (1981, 1982, 1985, 1989), and (c) the variance-components model by Duan (1989).
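The core bookkeeping of such a model, multiplying the concentration in each microenvironment by the time spent there and summing, can be sketched in a few lines of Python. The concentrations and the time budget below are illustrative assumptions, not data from this report:

    # Hypothetical microenvironment concentrations (ug/m3) and a 24-hour time budget (hours).
    concentrations = {"home": 25.0, "office": 15.0, "in transit": 60.0, "outdoors": 40.0}
    hours = {"home": 14.0, "office": 8.0, "in transit": 1.0, "outdoors": 1.0}

    # Integrated exposure has units of concentration multiplied by time: (ug/m3)*hr.
    integrated_exposure = sum(concentrations[m] * hours[m] for m in concentrations)

    # Dividing by the total time gives the 24-hour time-weighted average concentration.
    twa = integrated_exposure / sum(hours.values())

    print(f"integrated exposure: {integrated_exposure:.0f} (ug/m3)*hr")
    print(f"24-hr time-weighted average: {twa:.1f} ug/m3")

Population simulation models such as SHAPE and NEM, in essence, repeat this calculation for many simulated individuals, drawing activity patterns and microenvironmental concentrations from measured or modeled distributions.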

[FIGURE 6.1 Schematic diagram of models used in exposure assessment: models based on principles of physics and chemistry and models based on statistical relationships provide modeled or measured concentrations in indoor and outdoor microenvironments, which are combined with time-activity pattern information to model exposure.]

As shown in Figure 6.1, concentration models are separated into several types: models based on the principles of physics and chemistry, and models that statistically relate measurements of concentrations to independent variables thought to be direct determinants of concentration (e.g., the gas emission rate from a cooking range) or indirect indicators (e.g., the presence of a gas range). There are also many hybrids of these two basic approaches to modeling contaminant concentrations.

Concentration models based on physical principles quantitatively estimate emission, source dispersion, deposition in the environment (indoor or outdoor), and transport to the receptor for a given contaminant. The transfer of a contaminant from one medium to another can also be modeled in this way. If a contaminant undergoes chemical reaction in the environment, then models based on chemical reaction kinetics principles are used to predict the outdoor concentrations of the secondary contaminants (products of reaction). Ozone and sulfuric acid aerosols are examples of secondary contaminants formed by chemical reactions of primary contaminants as they are dispersed and transported in the outdoor atmosphere. Models to describe and predict their concentrations and, ultimately, human exposures must, therefore, incorporate the rates and products of the chemical reactions.

The development of faster, larger, and less costly computers has greatly enhanced our ability to model complex phenomena like the turbulent flow of air in the outdoor and indoor environments. One approach to modeling the dispersion of contaminants from sources is to approximate the random motion of individual air parcels.

However, random motion requires total independence of one time interval from another, and this requirement is not met for diffusion in the atmospheric boundary layer. Instead, a correlation will exist between one time interval and the next. This autocorrelation can be modeled approximately, and the motion of a large number of individual parcels can be calculated.

IMPORTANT MODEL CHARACTERISTICS

Limited information is available regarding the accuracy of most contaminant concentration models, and less is known about exposure models, because most models have not been adequately validated. Model users should understand that model outputs have uncertainties, not just those arising from the uncertainties in the input data, and that actual exposure lies somewhere in the range of that uncertainty. The results of models should be presented with their estimated uncertainties. To the extent possible, the description of the model results should distinguish between input and model uncertainty. A major objective for improving models should be to reduce uncertainty due to the model itself so that the estimated exposure is closer to the real exposure and the uncertainties are primarily associated with the uncertainties in the input data.

Concentration and exposure models do not always include sufficient documentation (fundamental equations, assumptions, whether parameters were lumped, etc.) to enable new users to identify and adjust critical model parameters to fit new applications and/or to compare their problems with previous applications. The inclusion in a model of particular complex terrain, specific contaminant source locations, unique source types, or other unusual features of a particular air shed may result in a model of high specificity; portions of such specific models may be applied to other air sheds only if the models are well documented. For example, a model developed for the Los Angeles urban atmosphere could not be used to estimate contaminant concentrations in Denver's atmosphere unless the model takes account of the change in air density from sea level to Denver's 5,000-foot elevation, along with other geographical differences.

Although of limited general use, sophisticated models are valuable research tools and provide useful information on concentrations or exposures. With greater computational power becoming increasingly available, these models could be more widely applied in the future. It is important that users fully understand the models they apply, because improper use of a complicated model increases the likelihood of obtaining misleading results.

Computer models need to be transferable from one computer system to another so that the validity of a model can be checked by others and the model can be applied to other problems. Source codes for models (e.g., computer-language code) in general should be provided in a form complete enough that programmers need not resort to any functions or subroutines other than those commonly available in the compiler for the model's language. In addition, as expert systems are developed to assist the application of models, attention must be paid to ensuring that these systems can be operated by new users.

CONCENTRATION MODELS

Models are used extensively to estimate outdoor contaminant concentrations at specific sites. These models use physical, chemical, and statistical methods to address the contaminant source release, dispersion, reaction, and deposition. Models are also used to estimate indoor contaminant concentrations; most of these applications have occurred in occupational/industrial settings, and they generally focus on estimating the contaminant concentration in a worker's breathing zone. The following discussion reviews outdoor concentration models (e.g., emission, dispersion, atmospheric chemistry) and indoor concentration models (industrial and nonindustrial), including a review of deposition and mixing within and between rooms. Variability is discussed for both types. The section concludes with a discussion of recent advances in outdoor and indoor concentration models.

Outdoor Models: Contaminant Source Emissions

Emission models based on the properties of the chemicals, design parameters of the emission sources, the physics of mixtures, and the ambient weather conditions can provide an alternative to source monitoring (Owens et al., 1964; MacKay and Matsugu, 1973; Reinhardt, 1977; Tung et al., 1985). The type and structure of a model depend on the source and type of contaminant releases; some sources are continuously replenished and can be considered to be at steady state, while other releases change in temperature or concentration. Hanna and Drivas (1987) describe in detail various models available for dynamic and steady-state sources.

Accurate estimation of emissions from point, area, and volume sources is necessary for accurate quantification of downwind ambient concentrations.

Quantification of point sources such as stack discharges from manufacturing units can be accomplished by a number of methods, including monitoring of the sources directly and standard chemical-engineering design procedures based on material and heat balances. For example, boiler emissions can be defined by knowledge of the composition of the fuel burned and the ash produced by the fuel combustion. Estimating releases from other processing equipment may require knowledge of the reaction kinetics, the vapor-liquid behavior of the reaction mixtures, and the operating temperatures and pressures.

Emissions from nonpoint sources are more difficult to monitor. A number of attempts have been made over the past decade to develop monitoring techniques for vapor and particulate emissions from pits, ponds, and lagoons (Harrison and Hughes, 1976, 1981; GCA, 1982; Thibodeaux et al., 1982) and fugitive emissions from chemical process equipment (EPA, 1988c). The Chemical Manufacturers Association (CMA, 1987, 1989) and the EPA (1988c) have published extensive data and models for the quantification of fugitive emissions from chemical process equipment. EPA and the American Petroleum Institute have published models for quantifying the emissions from large storage tanks (EPA, 1985a). Emissions are estimated for working losses (filling and draining the vessels) and breathing losses (losses caused by the diurnal temperature change). The EPA estimation procedure is frequently updated for use by federal and state regulators and the manufacturing organizations in permit negotiations and in the development of state implementation plans for compliance with federal regulations.

The development of empirical models for emission-rate estimation has focused mainly on issues related to fugitive emissions. The rate of fugitive emissions at any process point (valve, pump, etc.) is assumed to characterize all similar process points or similar equipment items. Although this assumption is known to be incorrect, data are insufficient to provide better emission predictions. High emission-rate predictions are obtained with these models, and thus the subsequent exposure predictions may be overly conservative.

Models for sudden releases of hazardous materials are generally based on fundamental principles of physics. The mass and heat balances (Bird et al., 1960) used by the modelers have been solved with either a dynamic solution or a steady-state solution of the system of equations that describes these episodes. For spills on land, a model was developed for quantification of liquefied natural gas releases (Shaw and Briscoe, 1978). For spills on land or water, a model was developed for characterizing the emissions of chemicals in the workplace (Wu and Schroy, 1979). These and related models are discussed by Hanna and Drivas (1987).

Models are also used to calculate emissions of carbon monoxide, NOx, and organics from motor vehicles. Seitz (1989) contrasts the methods used by the state of California with those used by the federal government for transportation and emission analysis.

Validation

To ensure that their concentration estimates are appropriate, it is necessary to validate emission models with data from operating systems. The type of validation depends on the type of model and the ability of monitoring protocols to quantify actual emissions accurately. For fugitive emissions, the rate of losses to the environment can be measured directly by enclosing individual sources to quantify the emission rate. The accuracy of the emission-rate measurement depends on the size and type of equipment, operating conditions, and the chemical and physical properties of the chemicals being handled. For example, the petroleum refining industry commonly involves high-temperature processing of chemicals in large equipment, but the chemical industry commonly uses ambient temperatures and small equipment and has substantially lower emission rates.

Losses from large open ponds and pits are more difficult to quantify and have caused difficulty in the validation of emission models. The evaporation of water from large lakes, monitored for many years by the U.S. Weather Service, provides the best validation data base. Spill tests with chemicals such as ammonia and liquefied natural gas offer another data base for validation and calibration of emission models. Validation of models for aerated basins, tanks, and lagoons can use standard data from the chemical-engineering transport literature when no reactions or other removal mechanisms are involved. When a biological oxidation-reduction process provides a competitive removal mechanism, the validation of emission models is much more difficult. Kinetic information is needed for biological degradation as an event separate from losses due to volatilization. Much of the literature of biological reaction kinetics combines volatilization and degradation losses and attributes the total loss to kinetic reactions. This practice makes the resulting data bases difficult to apply to specific sources.

Contaminant Dispersion

Models using annual average emission rates that were either measured or estimated have been available since the early 1930s (Sutton, 1932) for simulating the dispersion of emissions from point sources. However, it was only in the late 1960s and the early 1970s that there was substantial development of computer programs for air dispersion of contaminants. For example, EPA has supported the continuing development of a variety of Gaussian plume models in its Users Network for Applied Modeling in Air Pollution (UNAMAP) series of programs.

The basic concept of Gaussian plume models is that the turbulent dispersion of contaminants in the air has a random character of large-scale eddy motion that is analogous to the Brownian motion of molecules. From this analogy, a differential equation based on Fick's law is obtained, and the solutions are Gaussian functions. For atmospheric dispersion, motion in the direction of the wind (advection) is modeled with the average wind speed. Horizontal and vertical dispersion perpendicular to the prevailing wind direction are modeled as Gaussian functions whose standard deviations are functions of atmospheric stability and distance from the source (Hanna et al., 1982). To incorporate some of the source characteristics that affect dispersion, buoyant plume rise was included in the dispersion models (Briggs, 1969, 1971).

In 1978, EPA designated certain dispersion-model computer codes as "approved models" for developing state implementation plans to achieve compliance with National Ambient Air Quality Standards (NAAQS) (EPA, 1978). With EPA's endorsement of these models, they have become the principal tools in plans for controlling contaminant sources. In developing control strategies for contaminants regulated by the NAAQS, EPA developed models that combined source emission rates with atmospheric dispersion to predict the concentrations of the contaminants at a receptor site and to test the effectiveness of control strategies. Prediction of the concentration of ozone, a contaminant regulated by the NAAQS, requires modeling of the photochemical transformation of its precursors, i.e., volatile organic compounds and NOx, as well as their transport.

Dispersion modeling also can be done statistically. The air can be considered as a number of parcels or particles, which move in a random fashion (Taylor, 1921). The path of a single parcel can be described by a statistical function. If the parcel is assumed to have independent motion at any step during transport, it can be modeled as a "random walk," in analogy to the Brownian motion of molecules. That concept was extensively developed in the 1950s, but the methods became so complicated by the need for empirical factors that they were replaced with the simpler Gaussian plume methods (Hanna et al., 1982). In recent years, stochastic modeling of atmospheric dispersion has increased in popularity, because it is relatively simple, it can be applied to complicated problems, and it has been made more practical by improvements in computer capability and costs. Probabilistic models can easily incorporate physical phenomena, such as buoyancy, droplet evaporation, polydispersity of released particles, and dry deposition.
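To make the Gaussian plume formulation described above concrete, the following Python sketch evaluates the standard steady-state plume equation with ground reflection. The dispersion coefficients are approximate Briggs-type power-law fits for a single (neutral) stability class, and all numerical inputs are illustrative assumptions rather than values from this chapter:

    import numpy as np

    def dispersion_coefficients(x):
        # Approximate power-law fits for neutral (class D) rural conditions,
        # assumed here for illustration only.
        sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
        sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
        return sigma_y, sigma_z

    def plume_concentration(q, u, h, x, y=0.0, z=0.0):
        # Steady-state Gaussian plume with an image source for ground reflection.
        # q: emission rate (g/s); u: mean wind speed (m/s); h: effective release height (m);
        # x, y, z: downwind, crosswind, and vertical receptor coordinates (m). Returns g/m3.
        sy, sz = dispersion_coefficients(x)
        lateral = np.exp(-y**2 / (2.0 * sy**2))
        vertical = (np.exp(-(z - h)**2 / (2.0 * sz**2)) +
                    np.exp(-(z + h)**2 / (2.0 * sz**2)))
        return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

    # Example (assumed values): ground-level centerline concentration 2 km downwind
    # of a 100-g/s release from a 50-m stack in a 5-m/s wind.
    c = plume_concentration(q=100.0, u=5.0, h=50.0, x=2000.0)
    print(f"{c * 1e6:.0f} ug/m3")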

Stochastic modeling is typically implemented as a numerical Monte Carlo model. Boughton et al. (1987) describe a Monte Carlo simulation of atmospheric dispersion in which parcel displacement or velocity is treated as a continuous-time Markov process. They restrict the model to crosswind-integrated point sources and assume that dispersion in the mean wind direction is negligible. That reduces the analysis to one dimension. Liljegren (1989) has extended the model to incorporate horizontal and vertical dispersion perpendicular to the mean wind direction. The results of the latter model agree well with published concentration data (William E. Dunn, University of Illinois, Urbana, personal communication, 1988). It appears that three-dimensional stochastic models will offer considerable predictive improvement (including predictions of concentration change with time) over conventional Gaussian plume models.

Most of the studies to calibrate and validate plume dispersion models have involved the release of inert tracer gases from near the ground in nonbuoyant plumes, conditions very different from real stack plumes. In general, the studies have not covered a sufficient distance downwind to test the models beyond a few kilometers, so the results might not be reliable. Tracer programs and in-plume aircraft flights do not provide sufficient data to permit evaluation of the models' ability to predict short-term peak concentrations. Long-term average values have been estimated with data from sparse networks of continuous monitors, but their spatial resolution might be too low for estimation of the impacts of peak concentrations. Thus, validation is still inadequate.

With support of the Electric Power Research Institute, a major study to validate plume models was mounted in the early 1980s. The first study was of a large coal-fired power plant situated in relatively simple terrain, to minimize topographical uncertainties. The study compared three Gaussian plume models and three stochastic models with ground-level concentrations obtained with both routine and intensive measurement programs (Bowne and Londergan, 1983). The results indicated serious deficiencies in the particular dispersion models tested; they do not address complicating effects (such as complex terrain, surface roughness, atmospheric chemistry, and large sources of heat that cause localized climatic change) and therefore are of uncertain validity.

Little is known about how a plume is affected by the objects it passes over. For instance, a large manufacturing plant may emit much heat that creates localized climate changes that directly affect the plume. In what is called the heat-island effect, large masses of hot air rise and change the local climate. This can change weather patterns over large cities.

The behavior of buoyant, neutrally buoyant, and dense clouds in regions of complex terrain constitutes a problem for the dispersion modeler. The buoyant and neutral-buoyancy plume models developed to date provide little encouragement that the problems can be solved to permit reasonable predictions of exposure. Little research has been done on the behavior of dense clouds.
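Returning to the stochastic parcel approach described above, the following Python sketch is a minimal one-dimensional autocorrelated random walk (a discrete Markov, or Langevin-type, formulation) for the vertical dispersion of parcels from an elevated source. It is not the Boughton et al. (1987) or Liljegren (1989) model; the turbulence parameters, release height, and wind speed are assumptions chosen only to illustrate the technique:

    import numpy as np

    rng = np.random.default_rng(0)

    u = 5.0           # mean wind speed, m/s (assumed)
    sigma_w = 0.5     # std. dev. of vertical velocity fluctuations, m/s (assumed)
    t_lagr = 50.0     # Lagrangian integral time scale, s (assumed)
    height = 20.0     # release height, m (assumed)
    dt = 1.0          # time step, s
    n_steps = 600     # about 3 km of travel at 5 m/s
    n_parcels = 20000

    a = np.exp(-dt / t_lagr)            # one-step velocity autocorrelation
    b = sigma_w * np.sqrt(1.0 - a * a)  # random forcing amplitude

    z = np.full(n_parcels, height)
    w = rng.normal(0.0, sigma_w, n_parcels)

    for _ in range(n_steps):
        w = a * w + b * rng.standard_normal(n_parcels)  # autocorrelated (Markov) velocities
        z += w * dt
        below = z < 0.0
        z[below] = -z[below]    # reflect parcels at the ground
        w[below] = -w[below]

    # The crosswind-integrated ground-level concentration is proportional to the
    # fraction of parcels found in a thin layer near the surface.
    print(f"downwind distance: ~{u * n_steps * dt / 1000:.0f} km")
    print(f"fraction of parcels below 2 m: {np.mean(z < 2.0):.3f}")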

The dense and neutral-buoyancy models use mixing factors to represent the surface under a plume. For example, the factors used for rural terrain are equivalent to flat, low-friction surfaces, which cause a minimum of plume turbulence. For urban terrain, the impacts of homes, businesses, and factories have been quantified by calibration experiments. Rural factors are usually used to ensure that results do not underestimate contaminant concentrations. However, surface roughness and the interaction of a plume with a building can have substantial effects. If the plume is spread sideways by such an interaction, the results might well be catastrophic for a plant poorly designed for the community setting.

Atmospheric Chemistry

It is now possible to describe in detail many of the individual reactions occurring in photochemical smog (Niki et al., 1972; Demerjian et al., 1974; Seinfeld, 1988). Use of explicit and detailed mechanisms in air-shed or long-range transport models, however, is not always practical, and detailed information on the rate constants of the precursors, intermediates, and products is not complete. The limitations on the understanding and quantification of the complex chemical reactions can severely limit the accuracy of the output prediction. In addition, the computer time required for the integration of the rate equations associated with the hundreds of individual compounds involved is prohibitive with current computer systems.

For urban air-shed models, condensed (or "lumped") chemical mechanisms are generally used (Finlayson-Pitts and Pitts, 1986; Seinfeld, 1988); i.e., reactions or chemical species are grouped, and an overall rate constant is used for each group (Falls and Seinfeld, 1978; Whitten et al., 1980; McRae et al., 1982). This approach can affect the spatial and temporal accuracy and precision of a model. In addition, the lumping process limits the fundamental understanding of the specific pathways, and interesting chemistry may be hidden in the process. To estimate ozone concentrations with a model, for example, it is necessary to estimate the concentrations of reactive intermediates. The resulting concentrations of these other substances reflect many of the simplifying assumptions and may lead to erroneous results, even if the specific concentration sought (i.e., ozone) is accurately predicted.

Ozone models have been critically reviewed by Seinfeld (1988). Improved ozone models incorporate wind fields, chemical reaction mechanisms, turbulent dispersion, and removal processes.

The newer, more sophisticated models are "gridded": the area of interest is divided into two- or three-dimensional zones in which photochemical reactions take place and wind and turbulence transport chemical constituents from one grid zone to another. The chemistry of the inorganic compounds involving NOx, O3, and HOx is included. However, because of the large number of possible constituents and the enormous number of possible chemical reactions among them, reduced or lumped mechanisms are used, as mentioned above. The compounds may be combined into chemically similar classes, such as alkanes, alkenes, carbonyls, and aromatics, or carbon-atom groups may be lumped according to bond type (single bond, double bond, aromatic bond, etc.), structure, and reactivity of subgroups. In either case, the chemistry is simplified to provide results with reasonable amounts of computational resources. The simplification is analogous to that necessary to model the dispersion of the chemically reacting mixture.

Models describing the reactions of SO2 to form sulfuric acid aerosol involve many fewer chemical reactions than do photochemical smog models. The models must incorporate all important phases (gas, aqueous, and those on solid surfaces), and the reactions proceed in several phases (Rodhe et al., 1981; Seigneur et al., 1984). Although the rates and mechanisms of gas-phase reactions of SO2 are fairly well understood, there are large uncertainties in aqueous and solid-surface reaction rates (Scire and Venkatram, 1985). Furthermore, wet and dry deposition processes must be incorporated into the models, because such processes are significant for long-range transport (Lee and Shannon, 1985).

Production and transport of the components of acid deposition are predicted by the Regional Acid Deposition Model (RADM), developed at the National Center for Atmospheric Research for EPA (Chang et al., 1987). The model combines many of the chemical mechanisms of the ozone models with liquid-phase reactions (Stockwell et al., 1986). It includes long-range transport, deposition formation and related cloud processes, and chemistry. Initial validation studies have suggested good agreement between the model and actual deposition chemistry. However, only limited studies have been made, and substantial additional testing and validation studies are planned before RADM becomes the principal tool for acid-deposition-control planning in the United States.

The modeling of acidic particle (e.g., sulfate) formation and transport is in its rudimentary stages. Previously, most particulate sulfate models dealt with the transformation of SO2 to the SO4= ion but did not follow the transformations to the ammonium salts, partly because of lack of information on the location, emission rates, and transport of ammonia and partly because of lack of information on the concentrations of particulate NH4NO3 and gaseous and particulate HNO3. Those deficiencies lead to large uncertainties in predicting when acidic particles will persist or will be neutralized.

The issues of neutralization and nitrate concentration need to be resolved to facilitate the prediction of the conditions conducive to acidic-particle exposures and the types of locations where human exposures occur. Graedel and coworkers have considered dynamic processes by using a model of the aerosol consisting of a solid core surrounded by an aqueous solution and an organic film (Graedel and Weschler, 1981; Graedel et al., 1983). This model predicts substantial inhibition of mass transfer at the gas-liquid interface and a potential for retarding liquid-phase oxidation in the atmosphere. Such a model may also be useful in explaining the dynamics of neutralization of acidic aerosols.

Certain organic compounds, termed semivolatile, are distributed between the vapor and particle phases in the atmosphere (Cautreels and Van Cauwenberghe, 1978; Yamasaki et al., 1982; Bidleman, 1988; Coutant et al., 1988; Ligocki and Pankow, 1989). Since the deposition properties of vapors and particles differ, this partitioning of semivolatile organic compounds between the two phases can have substantial effects both on the dose of these compounds to the lungs and on their atmospheric lifetimes. Efforts to develop models for this partitioning have increased over the last decade. Junge (1977) was the first to develop an equation, based on the BET isotherm, to estimate vapor-particle partitioning as a function of aerosol surface area and the saturation vapor pressure of the semivolatile compound. Yamasaki et al. (1982) used a linear Langmuir isotherm to explain the vapor-particle partitioning of polycyclic aromatic hydrocarbons in outdoor air as a function of temperature and aerosol mass. The equivalence of these two approaches has been shown by both Bidleman and Foreman (1987) and Pankow (1987). Pankow (1987) has extended these modeling efforts to incorporate some of the fundamental properties of the semivolatiles, including molecular weight and a characteristic molecular vibration time. Although the models based on linear adsorption isotherms have been successful in explaining the vapor-particle partitioning of many semivolatile organics, the models do not yet address the problem of the presence of multiple semivolatile compounds or the dynamic aspects of vapor-particle partitioning. In addition, more refined experimental data are needed to test these models fully, particularly those that address dynamic processes.
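The adsorption-based partitioning expression attributed to Junge (1977), as later formalized by Pankow (1987), is commonly written as phi = c*theta/(p_L + c*theta), where phi is the particle-bound fraction, theta is the aerosol surface area per unit volume of air, p_L is the (subcooled) liquid vapor pressure of the compound, and c is an empirical constant of roughly 17.2 Pa-cm. The Python sketch below simply evaluates that expression; the constant and the example inputs are literature-typical values assumed here for illustration, not numbers taken from this chapter:

    def particle_bound_fraction(p_l_pa, theta_cm2_per_cm3, c_pa_cm=17.2):
        # Junge-Pankow adsorption model (assumed form; see lead-in text).
        # p_l_pa: subcooled-liquid vapor pressure, Pa.
        # theta_cm2_per_cm3: aerosol surface area per unit volume of air, cm2/cm3.
        # c_pa_cm: empirical constant, Pa*cm (assumed value).
        return (c_pa_cm * theta_cm2_per_cm3) / (p_l_pa + c_pa_cm * theta_cm2_per_cm3)

    # Assumed example: an urban aerosol surface area of about 1.1e-5 cm2/cm3 and a
    # semivolatile compound with a subcooled-liquid vapor pressure of 1e-4 Pa.
    phi = particle_bound_fraction(p_l_pa=1.0e-4, theta_cm2_per_cm3=1.1e-5)
    print(f"estimated particle-bound fraction: {phi:.2f}")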

Receptor Models

Receptor models use data on contaminants at a specific site to identify the sources of the contaminants. They are not predictive but can be used to validate predictive dispersion models, as in the Portland Aerosol Characterization Study (Cooper and Watson, 1979, 1980; Core et al., 1982). Receptor models use several methods, which have been described in detail by Hopke (1985).

In general, receptor modeling uses measured constituents of ambient samples as tracers to infer the contributions of different sources to the ambient air on the basis of a mass balance and expected differences in the properties of particles emitted from different sources (Miller et al., 1972). For example, assume that the airborne lead measured at a site is the sum of lead from several sources of different types, such as automobiles (auto), incinerators (incin), and nonferrous metal smelters (smelt):

    Pb_T = Pb_auto + Pb_incin + Pb_smelt,    (Eq. 6.1)

where Pb_T is the total airborne lead concentration (ng/m3), Pb_auto is the amount of lead contributed by automobiles, etc. However, the automobile particles contain other elements besides lead, so that

    Pb_auto = a_Pb,auto * f_auto,    (Eq. 6.2)

where a_Pb,auto is the concentration of lead in automobile particles (ng/mg), and f_auto is the concentration of automobile particles in the air (mg/m3). If this analysis is expanded to a series of elements, then the airborne concentration of a particulate element, x_i, is given by

    x_i = Σ (k = 1 to p) a_ik f_k,    (Eq. 6.3)

where a_ik is the concentration of the ith element in particles emitted by the kth source and f_k is the contribution to the airborne particulate mass concentration from the kth source. The summation is over all p sources in the air shed. Thus, if a suite of elements has been measured, a series of simultaneous equations is available to solve for the contributions of the various source types to the airborne particulate concentrations.
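With measured ambient concentrations x_i and known (or assumed) source composition profiles a_ik, equation 6.3 becomes an overdetermined linear system that can be solved for the source contributions f_k, for example by least squares. The Python sketch below does this for a small, entirely hypothetical data set; the element profiles and ambient values are invented for illustration:

    import numpy as np

    sources = ["auto", "incinerator", "smelter"]

    # a_ik: ng of element i per ug of particles emitted by source k (hypothetical values).
    # Rows: Pb, Br, Fe, Mn.
    A = np.array([
        [100.0, 20.0, 60.0],   # Pb
        [ 40.0,  5.0,  1.0],   # Br
        [  5.0, 10.0, 30.0],   # Fe
        [  0.5,  2.0,  8.0],   # Mn
    ])

    # x_i: measured ambient concentrations, ng/m3 (hypothetical values).
    x = np.array([885.0, 300.0, 128.0, 26.0])

    # Least-squares solution of x = A f for the source contributions f_k (ug/m3).
    f, residual, rank, _ = np.linalg.lstsq(A, x, rcond=None)
    for name, value in zip(sources, f):
        print(f"{name:12s} {value:5.1f} ug/m3")

In practice, chemical mass-balance solutions usually weight each element by its measurement uncertainty and constrain the contributions to be non-negative (for example, with a non-negative least-squares routine such as scipy.optimize.nnls); those refinements are omitted here for brevity.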

There are two main approaches to receptor modeling: one applies the principle of mass balance, and the other applies multivariate statistics. The mass-balance approach to obtaining a data set for receptor modeling is to determine and measure a number of chemical constituents, such as trace elements, in a number of samples collected from source emission streams and the ambient environment. The mass-balance approach can be used to account for all independent sources of the measured constituents in each sample. The methods require that samples be obtained at locations of interest (receptor sites) and be analyzed for properties that are characteristic of various sources (Hopke, 1985; Daisey and Kneip, 1980).

If no a priori knowledge of the number and nature of the sources is available, multivariate methods involving eigenvector analysis can be used. The mass-balance equation must be extended to a series of samples that have been analyzed and in which the various sources contribute different amounts to the airborne particle mass loadings. Methods such as target transformation factor analysis (Hopke et al., 1988) or absolute principal components analysis (Thurston and Spengler, 1985) can be used to obtain the elemental composition profiles associated with each source and their associated mass contributions.

The U.S. NAAQS for total suspended particles (TSP) created the need to identify particle sources so that control strategies could be designed and implemented. The initial efforts used dispersion models; the resulting strategies to control point sources substantially reduced TSP levels. However, as the amount of needed additional control became smaller, it became more difficult to identify the sources with continuing problems. That difficulty was due in part to the general failure to address fugitive and other nonducted emissions in dispersion models. Receptor models have been useful in identifying such sources and estimating their contributions and in deciding what strategies to use to meet new standards for particulate matter (e.g., the PM-10 standard). In the guidance documents regarding the new PM-10 standard (EPA, 1987), receptor models are explicitly approved for use in the planning process along with the traditional dispersion models.

Sexton and Hayward (1987) recently suggested that receptor models could be useful for apportioning the contributions of indoor sources to contaminant concentrations, although identification of characteristics that distinguish among the sources of indoor contaminants would be required. Daisey and Gundel (1989) have reported that carbon and nitrogen thermograms might provide a rapid and inexpensive method for distinguishing sources of indoor particulate matter with receptor models. Generally, indoor source emissions have not been sufficiently well characterized for this application of receptor models.

INDOOR CONTAMINANT CONCENTRATIONS

Industrial Environments

The first work on quantifying indoor-air contamination was done by industrial hygienists in the early 1900s and focused primarily on measuring hazardous substances by direct sampling and measurement (Gerhardsson, 1988) at different locations and sources in the workplace. Exposure was then modeled by time-weighting the concentrations measured at various locations. This approach was, in essence, microenvironmental modeling. Recent attempts have been made to use this approach in a more sophisticated way, referred to as job exposure profiling (Corn and Esmen, 1979), to describe workers' exposure by identifying their presence and residence in known or estimated concentration fields. Recent work with this concept is discussed later in this chapter.

The time-weighted average model has been widely used in industrial settings. This is a statistical modeling approach that generally uses measurements of concentrations in each microenvironment, or a source-oriented mass-balance model, to predict concentrations for each microenvironment. The average concentration for a given period (typically 24 hours) is calculated as the time-weighted average of the concentrations in the microenvironments.

In the industrial workplace, routine and accidental exposures to hazardous materials are of special concern. For routine releases, standards are specified as a maximum concentration of an air contaminant that should not be exceeded over a specified period, frequently the 8-hour workday. A time-weighted average concentration has long been used as both a conceptual and a mathematical model for routine exposures. Direct measurement of individual worker exposures has become more common in recent years.

The historical emphasis of industrial-hygiene evaluation of workroom air quality by direct measurement of concentration has led to relatively little attention to the study and measurement of the mechanisms of contaminant generation and loss. Current interest in this area is high, and important work is under way. Attempts have been made to estimate airborne contaminants in workroom air by using the physicochemical properties of the substance combined with information on site variables, such as ambient air temperature, temperature of the substance, production rates, surface area, and ventilation. One of the most notable past developments was the application of the box model to predict the dilution ventilation required to control worker exposure to open solvent baths.

The concentration of a contaminant in a defined "box" of workroom air is determined on the basis of an assumption of perfect (or at least good) mixing of contaminant in air and first-order kinetics in the buildup and loss of pollutant in the box. This approach is the basis of a number of estimation techniques (Mutchler, 1973; Wadden and Scheff, 1983).

The box model, which has been applied to industrial and nonindustrial environments (indoors and outdoors), involves equation 6.4 for the contaminant concentration based on a mass balance:

    V dC/dt = G + C_i Q_i - C Q_i - E C Q_r - K C,    (Eq. 6.4)

where V is the volume of the box, t is the time, C is the concentration in the box at any given time, C_i is the concentration of contaminant in the inlet air from outside the box, Q_i is the volume flow rate of intake air into the box, G is the rate of generation of pollutant within the box, Q_r is the volume flow rate of recirculated air, E is the contaminant removal rate for any recirculated air (e.g., the efficiency of the air-cleaning device in the recirculating airstream), and K is the removal rate by mechanisms other than ventilation and filtration, such as deposition to surfaces and chemical reactions. The equation can be modified to incorporate partial mixing through the use of a mixing factor, m. The mixing factor is the fraction of ventilation air that is completely mixed with the box air and is multiplied by the second, third, and fourth terms on the right side of equation 6.4 (Ishizu, 1980). When this equation is applied to indoor industrial environments, the outdoor air is assumed to be contaminant-free. In many cases the mixing factor is assumed to be unity. Measurements in indoor environments have shown that this second supposition is not always valid even in relatively small rooms (Drivas et al., 1972; Ishizu, 1980), and the empirical mixing factor should be retained in the model. In most industrial settings, air cleaning is not used and Q_r is zero. For outdoor-air models, the recirculation air flow is also set to zero and the mixing factor is set to 1.

Jayjock (1988) has proposed a model that combines physical and statistical modeling techniques. Its purpose is to predict concentrations around localized sources in large industrial rooms. This model uses a stochastic relationship for the displacement of diffusing elements to determine the size of the box of uniform-concentration air within the workspace. It then uses this input in a first-principle mixed-box model to estimate the airborne concentration. This model attempts to determine the practical size of the box (i.e., the volume V) around the source needed to render meaningful predictions of concentration under this assumption. The determination is done by sizing the box to contain most of the molecules that leave the source in a time frame consistent with their removal via ventilation purging.
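A minimal numerical sketch of equation 6.4 for a single well-mixed compartment is given below. It uses the closed-form solution for constant inputs, with the mixing factor m applied to the ventilation and recirculation terms as described above; all numerical values are assumed for illustration:

    import numpy as np

    def box_concentration(t_hr, v, g, c_in, q_in, q_r=0.0, e=0.0, k=0.0, m=1.0, c0=0.0):
        # Closed-form solution of Eq. 6.4 for constant inputs.
        # v: box volume (m3); g: generation rate (mg/hr); c_in: inlet concentration (mg/m3);
        # q_in: intake air flow (m3/hr); q_r, e: recirculated flow and air-cleaner efficiency;
        # k: removal by other mechanisms (m3/hr); m: mixing factor; c0: initial concentration.
        loss = m * (q_in + e * q_r) + k       # total effective removal flow, m3/hr
        c_ss = (g + m * c_in * q_in) / loss   # steady-state concentration, mg/m3
        return c_ss + (c0 - c_ss) * np.exp(-loss * t_hr / v)

    # Assumed example: a 50-m3 room with 150 m3/hr of contaminant-free intake air
    # (3 air changes/hr), a 500-mg/hr source, and no recirculation or other removal.
    times = np.array([0.25, 0.5, 1.0, 2.0, 8.0])   # hours
    print(box_concentration(times, v=50.0, g=500.0, c_in=0.0, q_in=150.0))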

Lavenda (1985) described the manner in which diffusing elements containing contaminants will travel outward from a point source. One can calculate a time-dependent concentration gradient for the instantaneous batch release of a finite amount of contaminant. Finding a distance from the source that will contain a majority of the released material in an interval when ventilation purging of these releases is well under way (e.g., the time for one air change) allows the sizing of the affected volume. This volume is then used in the box model to predict the concentration in the affected volume.

A fundamentally different approach is presented in a ventilation-driven dispersion model in which airborne concentration decreases monotonically with distance from any point source (Roach, 1981). The approach describes a concentration gradient from a contaminant source to a receptor (the point of potential human contact). Although diffusion models have been widely used in describing ambient concentrations from source emissions, this approach was not used in the indoor environment until Roach presented a simple indoor diffusion model. To illustrate its importance in describing the variability of concentrations within a room, consider an industrial room 30 meters square and 4 meters high with a point source of gas at its center. If the room is ventilated at 3 mixing changes per hour, the air flow for the 3,600-m3 room is calculated to be 10,800 m3/hr. Alternatively, consider an imaginary box (a 2-meter cube) within this room and surrounding the source. If the ventilation is consistent throughout the room, then this box is also ventilated at 3 mixing changes per hour, resulting in a flow of 24 m3/hr. Thus the purging air flow proximate to the source is only about 0.2% of the total in the room. Since dilution-ventilation purging is proportional to the volume considered (the cube of its linear dimension), its effect near point sources in typical industrial rooms can be considered small. Because molecular and turbulent diffusion combine to yield a diffusion coefficient that is independent of volume, the diffusion model describes the concentration gradient near the source. For distances from the source at which convection is significant, Roach derived an equation that combines ventilation and diffusion. All these models presuppose that diffusion is constant in space and time and assume nondirectional or random air flow in the room.

Steady-state source models, based on equation 6.4, assume that the input (generation) rates and the output (control) rates are constant and that enough time has passed to yield a steady state (dC/dt = 0). That holds for continuous processes but may not be valid for intermittent or "batch" jobs that start with C = 0 at t = 0 and end well before steady state can be achieved. For example, any volume with 1 mixing air change per hour will reach 90% of equilibrium in 2.3 hours. At 10 mixing air changes per hour, this time is 0.23 hour. Since the concentration is increasing during this entire period, the time-average concentration will be significantly lower than the concentration at equilibrium.
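The 90%-of-equilibrium times quoted above follow directly from the exponential approach of a well-mixed volume to steady state; the short Python sketch below reproduces them:

    import math

    def time_to_fraction(fraction, air_changes_per_hr):
        # Hours for a well-mixed volume, starting at C = 0 with a constant source,
        # to reach a given fraction of its steady-state concentration.
        return -math.log(1.0 - fraction) / air_changes_per_hr

    for ach in (1.0, 10.0):
        print(f"{ach:4.0f} air changes/hr -> 90% of steady state in "
              f"{time_to_fraction(0.90, ach):.2f} hr")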

Exposure resulting from indoor operations that begin and end in a period of minutes to a few hours should therefore be evaluated by explicitly determining the time-weighted average exposure potential during concentration buildup and falloff. Jayjock (1988) has discussed this specific topic.

Application of the box model described above frequently assumes complete mixing and no gradient of exposure. Less-than-ideal mixing of ventilation air is handled by the use of the mixing factor (m). Those assumptions appear to be justified if the diffusion and dispersion of airborne contaminants from sources in the room are indeed large relative to the size of the room. The modeler uses the room volume (V) and ventilation rate (Q_i) in equation 6.4 to estimate contaminant concentrations in the workroom air. The assumptions are not valid in small rooms with stagnant air or a very high flow rate of air, or in large industrial rooms with moderate air-flow rates, in which the majority of airborne contaminant is contained within a relatively small space compared with the room volume.

Knowledge of dispersion from diffusion is important to our understanding of mixing and concentration gradients. Molecular diffusion of gases in air is a poor dispersion mechanism. For instance, the molecular diffusion coefficient for ethanol vapor in air is about 8 x 10^-4 m2/min. By comparison, Franke and Wadden (1985) measured eddy diffusion coefficients between 1 and 12 m2/min in a large (120 x 160 x 16 ft) room with air exchanged 0.3 times per hour. Air movement in rooms from temperature gradients, ventilation, or the movement of objects in the room causes eddy diffusion that is about 1,000 to 10,000 times greater than molecular diffusion. Small rooms with high levels of eddy diffusion meet the assumption of the mixing model. Using a mixing model in a large room, or in a small room with a low diffusion rate, may dramatically underestimate contaminant concentrations and exposures near sources in the room.

The above models assume omnidirectional diffusion of contaminant outward from the source. That presupposition has not been tested in real workrooms or residences, and might be true only for long averaging times but not for short periods.

Most predictions of contaminant movement assume that the contaminants are gaseous. Particles and gases behave quite differently, however. Particles do not lend themselves to physicochemical predictions of their generation rate into air, and, once airborne, they act in a manner consistent with their aerodynamic diameter and not necessarily like the rest of the air column. Attempts have been made to model particulate matter in workroom air (Cooper and Horowitz, 1986), but more theoretical and experimental work is needed before a generally useful model is available.
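To put the molecular and eddy diffusion coefficients quoted above in perspective, a rough spreading time over a distance L can be estimated from the scaling t ~ L^2/(2D). This scaling, and the mid-range eddy value used, are illustrative assumptions added here rather than relations given in the text:

    # Order-of-magnitude time for a contaminant to spread a distance L by diffusion alone.
    L = 5.0              # m, a typical distance across a room (assumed)
    d_molecular = 8e-4   # m2/min, ethanol vapor in air (from the text)
    d_eddy = 6.0         # m2/min, mid-range of the eddy values cited in the text

    t_molecular = L**2 / (2.0 * d_molecular)   # minutes
    t_eddy = L**2 / (2.0 * d_eddy)             # minutes
    print(f"molecular diffusion: ~{t_molecular:,.0f} min (roughly {t_molecular / 1440:.0f} days)")
    print(f"eddy diffusion:      ~{t_eddy:.1f} min")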

Finally, some contaminants, termed semivolatile contaminants, can occur in both gaseous and particulate phases. Changes in temperature and particulate surface area can shift the phase distribution of such substances and influence the characteristics of exposure to them (Yamasaki et al., 1982; Pankow, 1987; Bidleman, 1988; Coutant et al., 1988).

Nonindustrial Environments

Over the last decade, research in air quality in nonindustrial indoor environments has dramatically changed the understanding of human exposures to many airborne contaminants. The Harvard Six Cities Studies showed that exposures to respirable particulate matter and to NO2 were, on average, higher in homes than outdoors (Dockery and Spengler, 1981a,b; Dockery et al., 1981). Similarly, measurements of volatile organic compounds (Molhave and Moller, 1979; Hollowell and Miksch, 1981; Wallace et al., 1982) and of radon (Sachs et al., 1982; Nero et al., 1983) showed that concentrations were generally higher indoors than outdoors. These findings and the growing recognition that humans typically spend 80-90% of their time indoors (Szalai, 1972; Chapin, 1974; Sexton et al., 1984) have increased the attention to indoor-air exposures.

Residential exposure differs from industrial exposure in a number of critical ways. For example, the indoor exposed population includes members who are very young, very old, or infirm. The potential indoor exposure duration in residences is 168 hours per week for a lifetime, compared with a typical industrial exposure of 40 hours per week for a working career. The concentrations of contaminants and ventilation rates are often much lower in residences than in industrial environments. Some contaminants enter residences with outside air, which then becomes the source of indoor-air contaminants. But some contaminants are commonly much more highly concentrated indoors than outdoors because of indoor sources.

The fundamental approach to modeling indoor-air concentrations is a mass-balance or box-model analysis for each room or area to be modeled. In many buildings, such as stores and houses in which interior doors are left open, the air is fairly well mixed, and the single-compartment mass-balance equation can be usefully applied to the total volume of the building. The single-compartment mass-balance model (Eq. 6.4), which describes the average concentration in an enclosed space as a function of source emission rates, infiltration of outdoor air, and losses by processes other than exfiltration, has been the most commonly used source-oriented model for indoor-air modeling (Turk, 1963; Drivas et al., 1972; Shair and Heitner, 1974).
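
Equation 6.4 itself appears earlier in the chapter and is not reproduced here. The Python sketch below integrates a generic single-compartment mass balance of the kind just described, with infiltration of outdoor air, an indoor source, and a lumped first-order removal term; all parameter values are hypothetical and chosen only for illustration.

    import numpy as np

    def single_compartment(c0, c_out, a, p, s_vol, k, t_hr, steps=10_000):
        """Integrate dC/dt = p*a*C_out + s_vol - (a + k)*C by simple Euler stepping.

        c0     initial indoor concentration (mg/m3)
        c_out  outdoor concentration (mg/m3), assumed constant here
        a      air-exchange rate (1/hr)
        p      penetration factor for outdoor air (dimensionless)
        s_vol  indoor source strength per unit volume, S/V (mg/m3/hr)
        k      first-order removal rate by processes other than exfiltration (1/hr)
        """
        dt = t_hr / steps
        c = c0
        for _ in range(steps):
            c += dt * (p * a * c_out + s_vol - (a + k) * c)
        return c

    # Hypothetical example: a fairly tight house with a small indoor source.
    print(single_compartment(c0=0.0, c_out=0.02, a=0.5, p=0.8,
                             s_vol=0.05, k=0.2, t_hr=8.0))

At steady state this reduces to C = (p*a*C_out + S/V)/(a + k), the form typically used for long-term average indoor concentrations.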

Turk (1963) proposed the use of an empirical mixing factor, defined by Brief (1960) as the ratio of effective air changes to theoretical air changes. Drivas et al. (1972) determined that the value of the mixing factor ranged from 0.3 to 0.7 for small rooms without fans. This model is now used extensively for indoor-air modeling (Ishizu, 1980; Wadden and Scheff, 1983). In many instances, the mixing factor is assumed to be unity, although it should be otherwise (Esmen, 1978; Ishizu, 1980).

Models more complex than the single-compartment model are needed for multistory buildings or buildings with basements if the basement is a major entry path for a contaminant (e.g., radon). The simplest multicompartment models developed for indoor air consist of a combination of single-compartment models in which some or all of the air exhausted from one room or zone becomes the inlet air to another (Rodgers, 1980; Sandberg, 1981; Ozkaynak et al., 1982; Wadden and Scheff, 1983; Ryan et al., 1988). These simple multicompartment models constitute mass-balance approaches, but the total quantities of contaminants for the combined rooms or zones must be accounted for. Equations have been written for more complex multicompartment models in which there is air flow between multiple compartments or zones. Such models, however, require more input data and much more computation time, which is approximately proportional to the cube of the number of zones modeled.

Infiltration and exfiltration of air are key components in modeling contaminant concentrations in indoor air. Infiltration of outdoor air can dilute indoor contaminants, as well as carry outdoor pollutants into indoor spaces. Infiltration is driven largely by pressure differences between inside and outside air. These pressure differences are caused by wind, temperature differences, and mechanical ventilation. Many buildings do not have specific provisions for mechanical ventilation, i.e., they have no air ducts for the transport of air into or out of the building. Ventilation of such buildings occurs through infiltration and exfiltration of air through cracks, windows, doors, and other openings.

The first infiltration model, an empirical model, was developed some 40 years ago by Dick and Thomas (1951). Little work was done on infiltration models until the energy crisis of the 1970s gave impetus to the development of such models to support energy conservation efforts. The models developed in the early 1970s were generally empirical models in which infiltration was expressed as a function of temperature difference and wind speed; multivariate statistical methods were used to fit coefficients and exponents from experimental data (Ross and Grimsrud, 1978). Sherman (1980) derived the first model of infiltration based on first principles of physics.
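
Returning to the simple multicompartment models described above, the sketch below links two single-compartment balances in series: the exhaust air from zone 1 becomes the inlet air for zone 2, which then exhausts outdoors. The flows, volumes, and source terms are hypothetical, outdoor air is assumed clean, and removal processes other than ventilation are ignored.

    import numpy as np

    def two_zone_step(c, dt, q, v, s):
        """One Euler step for two zones in series: clean outdoor air enters
        zone 1, a flow q (m3/hr) carries air from zone 1 into zone 2, and
        zone 2 exhausts outdoors.  c = [C1, C2] (mg/m3), v = [V1, V2] (m3),
        s = [S1, S2] source rates (mg/hr)."""
        c1, c2 = c
        dc1 = (s[0] - q * c1) / v[0]           # zone 1: indoor source, outflow to zone 2
        dc2 = (s[1] + q * c1 - q * c2) / v[1]  # zone 2: inflow from zone 1, exhaust outdoors
        return np.array([c1 + dt * dc1, c2 + dt * dc2])

    c = np.zeros(2)
    for _ in range(20_000):                    # 20 hours at dt = 0.001 hr
        c = two_zone_step(c, dt=0.001, q=100.0, v=[150.0, 300.0], s=[50.0, 0.0])
    print(c)  # at steady state both zones approach S1/q = 0.5 mg/m3

Even this two-zone case shows why a downstream room with no source of its own can approach the concentration of the source room; real multizone models add interzonal flows in both directions and, as noted above, grow rapidly in data and computation requirements.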

Sherman and Grimsrud (1980a,b) then obtained values for the model parameters, e.g., leakage area and height of the structure, inside-outside temperature difference, wind speed, terrain class of the structure, and local shielding. In a validation study, the Lawrence Berkeley Laboratory single-compartment model (Sherman and Grimsrud, 1980a,b) and the National Research Council of Canada multicell model (Liddament and Allen, 1983) performed best among the models tested.

For many models of indoor-air contaminants, it is important to include the decay rates of the contaminants in the indoor space. The one-compartment model was applied in analyzing indoor ozone decay (Sabersky et al., 1973; Shair and Heitner, 1974), particulate matter in tobacco smoke (Hoegg, 1972), and CO2 from respiration (Kusuda, 1976). Sabersky et al. (1973) determined heterogeneous decay rates for O3 on various indoor materials (decay occurs through interaction with surfaces). Using a single-compartment mass-balance model, they calculated the indoor concentration of O3 over time as a function of the decay losses and concluded that indoor O3 decreases rapidly once infiltration of outdoor air is decreased, i.e., when windows and doors are closed. Unfortunately, decay rates have been determined for very few contaminants. Moreover, even if the decay rate of a specific material is known, it cannot be used effectively in modeling unless the surface area of the material and the fluid dynamics of the chamber are also known.

Jacobi (1972) provided the basic form of the model that incorporates the formation of the decay products of radon (222Rn), the radioactive decay of one progeny to the next, attachment to existing airborne particles, and deposition (plateout) of the particle-attached and unattached activity on macroscopic surfaces in a room. Improvements have been made in assigning values to parameters (Porstendorfer et al., 1978; Knutson et al., 1983; Vanmarcke et al., 1985), but there are still difficulties related to the degree to which aerosol dynamics have been simplified and to the failure to incorporate fluid dynamics into models.

Empirical models have been used to try to identify the major contributors to indoor-contaminant concentrations or exposures to contaminants. Spengler et al. (1981) used stepwise regression of personal exposure measurements of respirable particles against outdoor and indoor concentrations and such indicator variables as smoke exposure, employment status, and time spent at home and work. They found that indoor concentrations in homes explained almost half the variance in personal exposures and that outdoor concentrations had little predictive value.
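
Returning to the ozone example above, the effect of a surface-decay rate is easiest to see in the steady-state form of the single-compartment model: with no indoor source, the indoor/outdoor ratio is p*a/(a + k), so the indoor level falls as the air-exchange rate a drops relative to the decay rate k. The values in the Python sketch below are hypothetical illustrations, not measured decay rates.

    def indoor_outdoor_ratio(a, k, p=1.0):
        """Steady-state indoor/outdoor ratio for a single-compartment model with
        no indoor source: penetration p, air-exchange rate a (1/hr), and
        first-order surface-decay rate k (1/hr)."""
        return p * a / (a + k)

    # Hypothetical ozone-like decay rate of 3/hr; closing up the house
    # (lower air exchange) sharply reduces the indoor fraction.
    for a in (2.0, 1.0, 0.25):
        print(f"a = {a:4.2f}/hr -> indoor/outdoor ratio = {indoor_outdoor_ratio(a, k=3.0):.2f}")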

Dockery and Spengler (1981b) developed empirical models for respirable particles and sulfates in indoor air; they combined a basic physical model (the simple mass-balance or box model) with variables that are indicators of suspected sources. Their model regressed indoor concentrations against time-paired outdoor concentrations for each house to fit a slope and intercept. The slope, identified from the physical model, is the penetration factor for outdoor air. The intercept is the product of house volume and the ratio of the average particle (or sulfate) emission rate for all the sources in a given house to the average ventilation rate. The values of the slopes and intercepts for all houses are then regressed against indicator variables appropriate to each. For example, the slopes can be regressed against a binary variable, A, related to whether a house is fully air conditioned. The intercept of that regression is interpreted to be the average penetration factor for respirable particles, and the coefficient of A is interpreted as the effect of full air conditioning in reducing penetration.

Leaderer et al. (1987) developed an empirical model for indoor NO2 as a function of types of NO2 sources (such as unvented kerosene heaters, gas appliances, and smoking), source use, and physical attributes of the residences (e.g., house volume, air-tightness, and fan use). More than 60% of the variation in indoor NO2 could be accounted for by source type and use. Variations in infiltration and removal rates, which were not well characterized, were suggested as the major sources of unexplained variation in NO2 concentrations.

Ryan et al. (1986) have developed a class of indoor-outdoor models for respirable particles, NO2, and other contaminants. The models sum the exposures in the occupied microenvironments to predict indoor concentrations. Efforts are under way to extend indoor-outdoor models into simulation models and to model distributions of population exposures to NO2 (Ryan et al., 1988).
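
A minimal sketch of the first stage of the Dockery and Spengler approach described above: regress indoor concentrations against time-paired outdoor concentrations for one house, so that the fitted slope estimates the penetration factor and the intercept reflects indoor sources. The data below are synthetic, generated only to show the mechanics.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic paired measurements for one hypothetical house.
    outdoor = rng.uniform(5.0, 40.0, size=50)      # ug/m3
    true_slope, true_intercept = 0.6, 8.0          # assumed penetration factor and indoor-source term
    indoor = true_intercept + true_slope * outdoor + rng.normal(0.0, 2.0, size=50)

    # Ordinary least-squares fit: indoor = intercept + slope * outdoor.
    slope, intercept = np.polyfit(outdoor, indoor, 1)
    print(f"fitted penetration factor (slope) = {slope:.2f}")
    print(f"fitted indoor-source term (intercept) = {intercept:.1f} ug/m3")

In the second stage, the per-house slopes and intercepts would themselves be regressed against indicator variables such as air conditioning, as described above.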

Variability in Emission Rates

Variability in contaminant emission rates is important, although often ignored, in modeling of both indoor and outdoor air. Most concentration models use measured, rather than modeled, emission rates. The measurements are generally limited to a few examples of a given source type and a narrow range of operating or environmental conditions. Emission rates can vary substantially from one source to another, however, because of design, manufacturing, or construction differences. Hubble et al. (1982), for example, summarized published emission factors for CO and particulate matter for wood-burning stoves. Emission rates varied by a factor of more than 30, because of variations in type of wood, size of pieces, burning rate, and draft conditions. Traynor et al. (1988) compiled published data on the emission rates of CO, NO2, and respirable particles from indoor unvented combustion appliances. Emission rates, in some instances, vary by a factor of 100. To model concentrations effectively, it is essential to have knowledge of the variability of emission rates and to incorporate it into the model. Traynor et al. (1988) used distributions of measured emission rates, rather than a single emission rate, to model indoor air.

For retrospective exposure modeling for epidemiological studies or prospective modeling for risk assessment over a lifetime, it is generally assumed that emission sources and processes do not change over long periods. That is not generally true, but it is difficult to model exposure without that assumption.

Mixing Within and Between Rooms

Mixing of air within and between rooms varies spatially and temporally. Therefore, the mass-balance model might not characterize the concentration of an airborne contaminant accurately or adequately unless mixing is taken fully into account, so single-compartment and multicompartment mass-balance models generally incorporate a mixing factor. The mixing factor is either determined empirically or assumed to be unity. Empirical values measured with a sulfur hexafluoride tracer gas range from 0.3 to 0.6 in small rooms without fans (Drivas et al., 1972). Ishizu (1980) found that particle concentrations from sidestream tobacco smoke in rooms with high ventilation rates (9-45 air changes per hour) were underestimated by about 50% if the mixing factor was assumed to be unity, i.e., actual mixing factors ranged from about 0.3 to 0.6.

Measurements made in the absence of occupants are misleading. Movement and body heat (whether of humans or animals) tend to increase mixing. In fact, the very presence of a person would increase air mixing in an environmental chamber. The effects of human occupation on air mixing have not been systematically investigated, but measurements made in the absence of occupants might lead to underestimates of mixing and overestimates of concentrations in exposure modeling.

Very low and very high ventilation rates can cause large deviations of the mixing factor from unity. At moderate ventilation rates, however, the mixing factor is closer to unity. Girman and Hodgson (1986), for example, measured exposures to methylene chloride from paint strippers in an environmental chamber with moderate ventilation (0.5 and 3 air changes per hour). Average concentrations measured in the breathing zone were only about 20% higher than those measured in the chamber at large.

Short of fluid-dynamical modeling or experimental measurement (Fisk et al., 1985, 1988), there is no simple means to predict the mixing factor or ventilation efficiency for a given room or zone.
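
The practical effect of the mixing factor on a predicted steady-state concentration is easy to see: in the usual formulation the effective dilution air flow is m*Q, so the predicted concentration scales as 1/m. The small Python illustration below uses hypothetical numbers only.

    def steady_state_concentration(g, q, m=1.0):
        """Steady-state concentration (mg/m3) for generation rate g (mg/hr),
        ventilation flow q (m3/hr), and empirical mixing factor m, using the
        common approximation that the effective dilution flow is m*q."""
        return g / (m * q)

    g, q = 500.0, 250.0   # hypothetical source strength and ventilation flow
    for m in (1.0, 0.6, 0.3):
        print(f"m = {m:.1f}: predicted C = {steady_state_concentration(g, q, m):.1f} mg/m3")

Assuming m = 1 when the true value is closer to 0.3 thus underestimates the concentration by roughly a factor of 3, which is the origin of the factor-of-2-to-3 variation attributed to air mixing in the following paragraph.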

The experimental data suggest that the variation in indoor concentrations due to variations in air mixing is a factor of about 2 or 3. Although that is substantially less than the variation in source emissions, some research is clearly needed to identify situations in which a more accurate determination of the mixing factor is needed.

Deposition

Deposition onto surfaces can account for losses of gaseous and particulate contaminants. On striking a surface, a molecule can bounce off, be adsorbed, or be absorbed. An adsorbed or absorbed molecule can subsequently desorb or react on the surface to form another species, which, in turn, can either remain on the surface or desorb into the air. The net effect and importance of deposition processes on subsequent indoor airborne concentrations depend on the relative magnitude of the deposition sink compared with other indoor sink and source terms. Reactive gas-phase species and airborne particles, especially larger particles, are the contaminants most likely to be influenced by deposition processes.

In equation 6.4, the term CQd represents the loss rate of a contaminant from the box by means other than ventilation. With enough time and reactivity, chemical transformations can account for substantial loss. However, many indoor pollutants do not have long residence times in the air and are not highly reactive. Little experimental work has been done on this subject in modeling of exposure to indoor air. Deposition of radon decay products has been modeled by Jacobi (1972) and by Porstendorfer et al. (1978). Deposition velocities of the major water-soluble salts associated with fine and coarse particles have been measured in some buildings (Sinclair et al., 1985, 1988) and incorporated in a mass-balance model that can calculate steady-state concentrations in similar buildings (Sinclair et al., 1985; Weschler and Shields, 1988). Nazaroff and Cass (1986, 1989) have described the deposition of reactive nitrogenous species and modeled the loss rate of particles and highly reactive gases to indoor surfaces for homogeneous turbulence, laminar forced convection, and laminar natural convection. Deposition velocities were found to vary by a factor of 10,000 over the range of pollutant diffusivities and particle sizes encountered.
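
Deposition is often folded into the mass balance as a first-order loss rate obtained from a deposition velocity and the room's surface-to-volume ratio, k = v_d * (A/V). The sketch below uses hypothetical values simply to show how strongly the resulting loss rate can compete with air exchange.

    def deposition_loss_rate(v_d_m_per_hr, area_m2, volume_m3):
        """First-order loss rate (1/hr) equivalent to a deposition velocity
        v_d acting over surface area A in a room of volume V: k = v_d * A / V."""
        return v_d_m_per_hr * area_m2 / volume_m3

    area, volume = 100.0, 75.0        # hypothetical room surface area (m2) and volume (m3)
    for v_d in (0.001, 0.01, 0.1):    # hypothetical deposition velocities, m/hr
        k = deposition_loss_rate(v_d, area, volume)
        print(f"v_d = {v_d:5.3f} m/hr -> k = {k:.3f}/hr "
              f"(= {k/0.5:.0%} of a 0.5/hr air-exchange rate)")

Because indoor surface-to-volume ratios are large, even modest deposition velocities can remove a meaningful fraction of a reactive gas or coarse particle, which is the point made in the next paragraph.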

Indoor environments have larger surface-to-volume ratios than outdoor environments, so surface deposition and reactions are likely to be more important in indoor environments. Much less is known about chemical reactions in indoor air than in outdoor air; therefore, although the widely used mass-balance model for indoor air explicitly incorporates a term for removal processes other than exfiltration to outdoor air, removal rates have generally not been measured and so cannot be included in the model. For chemically unreactive contaminants, such as CO, the removal rate is commonly assumed to be zero.

Spicer et al. (1986, 1987) and Nishimura et al. (1986) reported that some materials found in indoor environments chemically reduce NO2 to NO. Pitts and coworkers (1985a) experimentally demonstrated the production of nitrous acid (HONO) from NO2 in an indoor environment. Nazaroff and Cass (1986) have presented a general model for predicting the concentrations of chemically reactive compounds in indoor air that accounts for the effects of ventilation, filtration, heterogeneous removal, direct emission, and photolytic and thermal chemical reactions. The discrepancy between the calculated formation of HONO from homogeneous reactions in their model and that measured by Pitts et al. (1985a) suggested to them that heterogeneous reactions are important in HONO formation in indoor air. That hypothesis was supported by results of Gundel and Daisey (1988): high rates of conversion of NO2 to HONO were observed for heterogeneous reactions on polyurethane foam and wool carpet. Chemical transformations should be incorporated into models where appropriate.

It is still difficult to evaluate the importance of indoor chemistry. Rates of contaminant removal and generation by indoor surface reactions will be important only if they are at least as large as rates of infiltration and ventilation.

Air Cleaning

Air-cleaning devices are used to remove particles from the intake air of buildings. Although a term for the efficiency of particle removal can be easily incorporated into the mass-balance model (Eq. 6.4), its actual value is generally not known or easily determined for a given building. Filters used in commercial-building air ducts are sometimes installed (improperly) in such a way as to leave a gap between a filter and a duct. Moreover, the efficiency of a filter can vary widely with time and with particle size. Efficiency of particle removal often increases with filter loading.

Some air cleaners incorporate activated carbon and other catalysts or reactants to remove gases and volatile organic compounds. Only limited information is available on the efficiencies of these devices (Wadden and Scheff, 1983). A problem with these devices is that adsorbent beds become saturated and lose their collection efficiency with usage, sometimes quite rapidly (Daisey and Hodgson, 1989). More work is needed to incorporate air-cleaning systems into indoor-air exposure models.
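
One common way to represent an air cleaner in the single-compartment mass balance is as an extra removal flow equal to the device's airflow times its single-pass removal efficiency, so that at steady state C = S/(Q + eff*Q_cleaner). The numbers below are hypothetical and the sketch deliberately ignores efficiency changes with loading, particle size, and bypass leakage, all of which matter in practice as noted above.

    def steady_state_with_cleaner(source, q_vent, q_cleaner, efficiency):
        """Steady-state concentration (mg/m3) with ventilation flow q_vent (m3/hr)
        plus a recirculating air cleaner moving q_cleaner (m3/hr) of room air
        through a filter with the given single-pass removal efficiency."""
        return source / (q_vent + efficiency * q_cleaner)

    source, q_vent = 20.0, 100.0      # hypothetical source (mg/hr) and ventilation (m3/hr)
    for eff in (0.0, 0.3, 0.9):
        c = steady_state_with_cleaner(source, q_vent, q_cleaner=200.0, efficiency=eff)
        print(f"cleaner efficiency {eff:.0%}: C = {c:.3f} mg/m3")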

Recent Advances

Most advances in indoor-air modeling have come from increasing the sophistication and complexity of the models. One effort to improve the data base on airborne-contaminant concentrations and on emission and ventilation variables is the cooperative EPA-Southwest research project (EPA, 1986b) on workplace exposure estimation. That project plans to examine exposures of groups of operators in the chemical industry. It will characterize the concentrations of airborne contaminants, contaminant generation rates from principal sources, local exhaust systems, and rates of dilution ventilation.

Several models treat air flows in multicompartment buildings, but they rely on mainframe computers and are not generally user-friendly. Feustel and Sherman (1989) recently developed a simplified multizone infiltration model that can be run on a calculator for determining air-flow distribution in a complex building. Information on buildings, categorized on the basis of their leakage ratios and lumped physical parameters (e.g., volume and resistance to flow, combined and expressed as a constant), is used to calculate overall infiltration-exfiltration rates. There was good agreement between the results from the simplified multizone infiltration model and a standard model for an eight-story building. A more detailed multizone infiltration model has been developed as part of COMIS (Conjunction of Multizone Infiltration Specialists), a year-long international workshop held at the Lawrence Berkeley Laboratory (Feustel et al., 1989). The objective of COMIS was to develop a user-friendly program for multizone infiltration, taking into account crack flow, air-conditioning systems, single-sided ventilation, and transport through large openings. The model is modular and can be expanded to incorporate new knowledge. It will be validated with several sets of multitracer gas measurements.

Tracer gases are commonly used to measure building ventilation. The measurements have generally used single tracer gases, but there have now been efforts to use multiple tracer gases to measure air flows in multiple zones in buildings (Sherman, 1989). The flows determined from the tracer measurements and the relevant continuity or mass-balance equations are associated with large uncertainties. Sherman (1989) reported a method for using exogenous information on physical constraints to reduce the uncertainties in ventilation measurements based on tracer gases.

The entry of radon from soil into houses has been shown to be dominated by pressure-driven flow of soil gases (Nero and Nazaroff, 1984; Nazaroff and Doyle, 1985; Nazaroff et al., 1987). This pressure difference between soil and house interior is due to indoor-outdoor temperature differences, wind, unbalanced mechanical ventilation, and operation of combustion devices that draw indoor air for combustion and vent products outdoors.
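
Relating to the tracer-gas measurements mentioned above, the simplest single-tracer technique releases a pulse of tracer, lets it mix, and infers the air-exchange rate from the exponential decay of its concentration. The Python sketch below fits that decay rate from synthetic measurements; the tracer data are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic tracer-decay measurements: C(t) = C0 * exp(-a*t) plus noise.
    true_a, c0 = 0.45, 10.0                  # hypothetical air-exchange rate (1/hr) and initial ppm
    t = np.linspace(0.0, 4.0, 25)            # hours
    c = c0 * np.exp(-true_a * t) * rng.normal(1.0, 0.02, size=t.size)

    # The air-exchange rate is the (negative) slope of ln(C) versus t.
    slope, _ = np.polyfit(t, np.log(c), 1)
    print(f"estimated air-exchange rate = {-slope:.2f} per hour")

Multitracer methods extend the same mass-balance logic to interzonal flows, at the cost of the larger uncertainties noted above.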

The structure of the house and the characteristics of the soil, such as permeability, are important factors influencing pressure-driven flows of soil gases. Efforts to model radon entry from soil and the consequent indoor radon concentrations are only beginning. Mowris and Fisk (1988) have developed an analytical (closed-form) model of soil-gas flow based on its analogy to heat transfer. The model was used to evaluate the impact of exhaust ventilation on indoor radon concentrations in two houses. It underpredicted radon concentrations by 23% and 13% for two different periods in one house and overpredicted by 22% in a second house, but the authors noted that the comparison with measured concentrations was encouraging. Loureiro (1987) has developed a theoretical model to predict indoor radon concentrations. It simulates rates of generation and decay of radon in soil, its transport through the soil due to diffusion and convection induced by a pressure disturbance at a crack in the basement, and its entrance into the house through the crack. Two computer programs were developed to calculate the pressure distribution in the soil and the resulting velocity distribution of the soil gas and to solve the radon mass-transport equation, calculate radon entry rates, and calculate the indoor radon concentration. Indoor radon concentrations were found to be directly, although not linearly, related to the indoor-outdoor pressure difference.

Domestic water contaminated with gases, such as radon and volatile organic compounds (VOCs), is a source of exposure that has only recently been recognized as important. Dissolved gases in contaminated water are released indoors during such residential uses as showering and dish-washing (Andelman, 1985; Gesell and Prichard, 1975; McKone, 1987; Jo et al., in press a). McKone has developed a mass-transfer model to estimate human exposures to VOCs due to their transfer from tap water to indoor air. It estimates the release of VOCs from water and uses a three-compartment model to simulate the 24-hour concentration profile in the shower, the bathroom, and the rest of the house. A preliminary data base on household characteristics and time-activity patterns has been used to calculate a range of concentrations and human exposures to seven VOCs. Nazaroff et al. (1987) used a single-compartment mass-balance model with a long averaging time to calculate the distribution of indoor-air radon concentrations in U.S. homes attributable to tap water.
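
A heavily simplified sketch of the water-to-air pathway just described: the indoor emission rate is taken as the water-use rate times the contaminant concentration in the water times a transfer (volatilization) efficiency, and that emission feeds an ordinary single-compartment box model. The transfer efficiency, water-use rate, and all other numbers are hypothetical placeholders, not values from McKone's model.

    def emission_from_water(c_water_mg_per_L, water_use_L_per_hr, transfer_efficiency):
        """Indoor emission rate (mg/hr) from a volatile contaminant in tap water."""
        return c_water_mg_per_L * water_use_L_per_hr * transfer_efficiency

    def steady_state_indoor(source_mg_per_hr, volume_m3, air_changes_per_hr):
        """Steady-state well-mixed indoor concentration (mg/m3)."""
        return source_mg_per_hr / (volume_m3 * air_changes_per_hr)

    # Hypothetical shower: 500 L/hr of water at 0.1 mg/L, 60% volatilized,
    # released into a 6-m3 shower stall with 10 air changes per hour.
    s = emission_from_water(0.1, 500.0, 0.6)
    print(f"emission rate = {s:.0f} mg/hr; "
          f"shower-stall concentration ~ {steady_state_indoor(s, 6.0, 10.0):.2f} mg/m3")

A multicompartment version of the same bookkeeping, combined with time-activity data on how long people spend in each space, is essentially what the mass-transfer exposure models described above do.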

In another recent advance in modeling indoor concentrations of contaminants in homes, Traynor et al. (1988) developed a single-compartment mass-balance model for combustion emissions, specifically CO, NO2, and respirable particles. Input data for the model include distributions of housing-stock characteristics (e.g., volumes and air-exchange rates), use of combustion appliances and sources (e.g., cigarettes), distributions of source emission rates, and source use. The model uses deterministic and Monte Carlo simulation techniques to generate distributions of average weekly concentrations of CO, NO2, and respirable particles for four regions of the country. The modeled distributions have generally compared well with available field measurements. The model can also be used to rank indoor pollutant sources, identify high-risk populations, identify key factors for attempts at control and mitigation, and estimate exposures for epidemiological studies.

Nazaroff and Cass (1986) recently developed the first model for chemically reactive pollutants in indoor air. It combines the multibox ventilation model of Shair and Heitner (1974) with a modified version of the Falls and Seinfeld photochemical kinetic model (Falls and Seinfeld, 1978; Russell et al., 1985). The model accounts for the effects of ventilation, filtration, heterogeneous removal of gaseous pollutants, direct emissions, and homogeneous gas-phase reactions and predicts concentrations of such chemically reactive contaminants as HNO2, HNO3, NO3, and N2O5. Nazaroff and Cass (1986) tested the model in a museum gallery; predicted and measured concentrations of several pollutants were in reasonably good agreement. They also compared their modeled steady-state ratio of HNO2 to NO2 due to homogeneous gas-phase reactions with that measured by Pitts et al. (1985a) in an indoor environment; the experimental ratio was about 35 times the modeled ratio. Heterogeneous reactions appear to play an important role in indoor production of HNO2, and models for indoor atmospheric chemistry probably will eventually have to incorporate heterogeneous chemical reactions. However, very little is known about such reactions today.

EXPOSURE-ASSESSMENT MODELS

Current exposure models are based on relatively general assumptions about the distribution of contaminant concentrations in microenvironments, the activity patterns that determine how much time people spend in each microenvironment, and the representativeness of a sample to the population that might be exposed to a contaminant.

Individual Exposures

In a model of individual exposure, contaminant concentrations in each microenvironment are measured or modeled and time-activity patterns are used to estimate the time spent in each microenvironment. (Exposure is the product of time and contaminant concentration.) An individual's overall exposure can be separated into the sum of products of concentration and time in each microenvironment; this is termed a microenvironment decomposition (Duan, 1981).
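
The microenvironment decomposition just described is simple to implement: total exposure is the sum over microenvironments of concentration times occupancy time, E = sum of C_i * t_i. The concentrations and times in the Python sketch below are invented for illustration only.

    # Hypothetical one-day (24-hour) time-activity pattern and CO concentrations.
    microenvironments = {
        # name: (hours spent, CO concentration in ppm)
        "home":       (16.0, 2.0),
        "office":     (6.0, 1.5),
        "in transit": (1.5, 9.0),
        "outdoors":   (0.5, 3.0),
    }

    exposure = sum(hours * conc for hours, conc in microenvironments.values())
    total_hours = sum(hours for hours, _ in microenvironments.values())
    print(f"integrated exposure = {exposure:.1f} ppm-hr over {total_hours:.0f} hr")
    print(f"time-weighted average concentration = {exposure / total_hours:.2f} ppm")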

Microenvironment decomposition can be extended to other summary exposure measures, such as peak concentrations. If we are interested in total exposure, the microenvironmental decomposition is assumed to include all possible locales and activities. Duan (1981, 1985) developed a criterion for stratifying microenvironments to improve the precision of estimated average exposures and applied it to identify the important microenvironments for CO exposures.

Some models for predicting exposures make assumptions regarding the independence between contaminant concentrations and the time spent and activity in a microenvironment. Such assumptions should be validated for specific applications. Duan (1985) has suggested that there is no correlation between CO concentrations and time, on the basis of data from the Washington, D.C., CO study (Akland et al., 1985). However, there will be problems in the existing models if correlations between occupancy periods and concentrations exist for other contaminants, because the independent variables, time and concentration, would not be truly independent. If the correlation is very high, the predictions based on the models might not be valid because of an inappropriate assumption of independence. The committee is unaware of any empirical data quantifying the extent of problems caused by such correlations. It is likely that for contaminants such as particles, the presence of a person might change the particle concentration of a previously unoccupied microenvironment. Further study of the problems such correlation would produce is needed.

Stock et al. (1985) used personal-activity profiles and household characteristics to partition the locations into seven broad microenvironments: three indoor, two outdoor, and two transportation modes. From measured concentrations of the criteria pollutant gases (ozone, NO2, SO2, CO), aeroallergens, aldehydes, TSP, and inhalable particles and the time spent in each partition, exposure estimates were calculated. The results will ultimately be combined with epidemiological data to determine the health effects of exposure to specific pollutants in a community environment.

More or less sophisticated versions of partitioning are used in the workplace, where they are referred to as job exposure profiling (JEP). JEP sometimes consists of grouping and compiling work tasks with durations of exposure at breathing-zone concentrations (Austin and Phillips, 1983). The product of such analysis is a prediction of the exposure of any employee involved in the tasks covered by the JEP. Hansen and Whitehead (1988) recently monitored the activities and breathing-zone concentrations of printing-press operators and modeled time-weighted average exposures as a function of location and the number of times a "hazardous task" was performed.

Population Exposures

Modeling exposure of populations requires the combining of microenvironment concentrations with individual activity patterns and extrapolation of the results to a population. Data on human activity patterns have been combined with measured outdoor concentrations in the NAAQS exposure model (NEM) to estimate exposures to CO (Biller et al., 1981; Johnson and Paul, 1983). The NEM was modified to include indoor exposures by incorporation of the indoor-air quality model (IAQM) (Hayes and Lundberg, 1985). The IAQM, based on the interactive solution of a one-compartment mass-balance model, incorporates three basic indoor microenvironments: home, office or school, and transportation vehicle. It has been used to estimate distributions of ozone exposures (Hayes and Lundberg, 1985) and to evaluate strategies for mitigating indoor exposures to selected pollutants in five situations, e.g., CO exposure from a gas boiler in a school (Eisinger and Austin, 1987).

As mentioned in the introduction to this chapter, three types of models have been developed to estimate population exposures: (a) simulation models, such as SHAPE; (b) the convolution model; and (c) the variance-component model.

The simulation of human air pollution exposure model (SHAPE) (Ott, 1981) is a computer model that generates synthetic exposure profiles for a hypothetical sample of human subjects; the profiles can be summed by compartment or integrated to estimate the distribution of exposure to a contaminant of interest. The bulk of the model estimates the exposure profile of contaminants attributable to local sources; the contribution of remote sources is assumed to be the same as the background. The total exposure is therefore estimated as the sum of the exposure due to local sources and the ambient background.

For each individual in the hypothetical sample, the model generates a profile of activities and contaminant concentrations attributable to local sources over a given period, say, 24 hours. Activity profiles are generated or accepted as input. At the beginning of the profile, the model generates an initial microenvironment and duration of exposure according to a probability distribution. At the end of that duration, the model uses transition probabilities to simulate later periods and other microenvironments. The procedure is repeated until the end of a selected long period. For each time unit, say, 1 minute, in a given microenvironment, the model generates a contaminant concentration according to a microenvironment-specific probability distribution: each microenvironment has a specific probability distribution for each contaminant concentration. Such models obviously require validation with measured exposure data for a subset of microenvironments and patterns.
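
A minimal sketch in the spirit of the simulation approach just described, not a reproduction of SHAPE itself: a two-state Markov chain moves a hypothetical person between microenvironments minute by minute, and each minute's concentration is drawn from a microenvironment-specific lognormal distribution. All transition probabilities and distribution parameters are invented; a real application would estimate them from activity-diary and monitoring data.

    import numpy as np

    rng = np.random.default_rng(2)

    envs = ["home", "in transit"]
    transition = {"home": [0.995, 0.005],            # per-minute probabilities of staying or moving
                  "in transit": [0.05, 0.95]}
    lognormal_params = {"home": (np.log(1.5), 0.5),  # (mu, sigma) of ln(ppm), hypothetical
                        "in transit": (np.log(8.0), 0.6)}

    def simulate_person(minutes=24 * 60):
        env, exposure = "home", 0.0
        for _ in range(minutes):
            mu, sigma = lognormal_params[env]
            exposure += rng.lognormal(mu, sigma)         # ppm-minutes accrued this minute
            env = rng.choice(envs, p=transition[env])    # move (or not) for the next minute
        return exposure / minutes                        # 24-hr average ppm

    averages = np.array([simulate_person() for _ in range(200)])
    print(f"population mean = {averages.mean():.2f} ppm; "
          f"95th percentile = {np.percentile(averages, 95):.2f} ppm")

Summing or averaging many such synthetic profiles gives an estimated population exposure distribution, which is what validation studies then compare against personal-monitoring data.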

Duan (1981, 1985, 1989) developed the convolution model for integrated exposures. It calculates distributions of exposure from distributions of concentrations observed in defined microenvironments and the distribution of time spent in those microenvironments.

The variance-component model (Duan, 1989) assumes that short-term contaminant concentrations can be decomposed into components that vary in time and those that do not. SHAPE deals mainly with the time-varying component; the convolution model deals mainly with the time-invariant exposure. The two components can be summed or multiplied to yield an estimated concentration value. It is necessary to determine the distributions of the two concentration components. If continuous personal-monitoring data are available, it is possible to estimate the distributions of the two components directly. If integrated personal-monitoring data are available, the methods described by Duan (1989) can be applied.

Once the concentration distributions are available, exposure distributions can be estimated with a computer simulation similar to SHAPE. Instead of generating a contaminant concentration for each time unit independently, as in SHAPE, a time-invariant concentration and a time-varying concentration are generated for each unit and combined to determine 1-minute concentrations. The remainder of the simulation is identical to that in SHAPE.

All three types of models (SHAPE, convolution, and variance-component) need to make assumptions about independence. The critical difference among the three types is in those assumptions. SHAPE assumes that the short-term pollutant concentrations (e.g., 1-minute averages) within the same microenvironment are stochastically independent and independent of activity patterns. It follows that the microenvironmental concentration is not correlated with activity time in that microenvironment. Furthermore, the variance of the concentration decreases in inverse proportion to activity time. For longer activities in the same microenvironment, the concentration is averaged over more time units. Similar assumptions were made in an earlier version of NEM; a more recent version of NEM incorporates serial correlation in the 1-minute averages (Johnson et al., 1990).

The convolution model assumes that microenvironmental concentrations are statistically independent of activity pattern. That implies that they are not correlated with activity time and that the variance of the concentration stays constant, irrespective of time. That needs to be validated. Switzer (Stanford University) noted in a private communication with Duan in 1982 that the forms of the variance functions used in both models might be unrealistic and that some compromise between the two might be desirable.

With either the additive or multiplicative form of the variance-component model, the time-invariant components are assumed to be stochastically independent of the time-varying components.
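
A tiny sketch of the additive variance-component idea just described: each simulated 1-minute concentration is the sum of a component drawn once for the person or microenvironment (time-invariant) and an independent minute-to-minute fluctuation (time-varying). The distribution parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    minutes = 8 * 60                              # one 8-hour period

    # Time-invariant component: drawn once per person/microenvironment.
    baseline = rng.normal(loc=3.0, scale=1.0)     # ppm
    # Time-varying component: independent draw for every minute.
    fluctuation = rng.normal(loc=0.0, scale=0.8, size=minutes)

    concentration = np.clip(baseline + fluctuation, 0.0, None)  # additive form, floored at zero
    print(f"8-hr average = {concentration.mean():.2f} ppm "
          f"(the time-invariant part, {baseline:.2f} ppm, dominates the long-term average)")

Averaging over longer periods shrinks only the time-varying part, which is why the time-invariant component tends to dominate integrated exposures, as noted in the Washington, D.C., CO analysis below.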

It is further assumed that for different time units, the time-varying components are independent from one interval to the next. Alternatively, it can be assumed that the time-varying components have an autocorrelation structure.

Duan (1985) examined data from EPA's Washington, D.C., CO study and found that concentrations and occupancy intervals were unrelated. Ott et al. (1988) used data from EPA's CO study in Denver to examine the validity of SHAPE, comparing exposure distributions of CO estimated with SHAPE and with the direct approach (personal monitoring). They found the estimated average exposures to be similar and the estimated exposure distributions to be different at the extremes of the distributions. That result might be due to failure to account for autocorrelation and the time-invariant component. Duan (1989) examined several statistical parameters for microenvironments in data from the Washington, D.C., CO study and found the time-invariant component to be dominant.

Temporal Aspects

One cause of inaccuracy in exposure modeling is failure to obtain measurement data on an appropriate time scale. Outdoor air is often sampled in the summer, and concentrations for an entire year are then estimated on the basis of a single season. But sampling and analysis programs must cover enough time for concentrations to be reasonably estimated for a full year, if they are to serve as reliable inputs to exposure models. Very few sampling studies have extended over a long enough period to reveal seasonal and year-to-year variations.

An example of good sampling design was that of the Portland Aerosol Characterization Study (Cooper and Watson, 1979). The researchers attempted to learn the representative composition of airborne particulate matter and its sources without having to sample every day and analyze every sample. They stratified the year into eight defined meteorological regimes and took samples when conditions and time of year were appropriate. Although many samples were taken, only enough were analyzed to yield useful average values for each regime. The regime averages were then combined in proportion to their probability of occurrence during the year. Representative annual concentration averages were obtained at a reasonable level of effort for both sampling and analysis. However, because of the variability of occupancy times, different averaging times may be appropriate for estimating average exposures than for estimating average concentrations.
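
The stratified design just described amounts to a probability-weighted average: the annual mean is the sum over regimes of the regime mean times the fraction of the year that regime occurs. The Python snippet below uses three invented regimes (the Portland study used eight) only to illustrate the arithmetic.

    # (regime name, fraction of year, regime mean concentration in ug/m3) - invented values
    regimes = [
        ("stagnant winter", 0.15, 85.0),
        ("ventilated spring/fall", 0.55, 40.0),
        ("clean marine summer", 0.30, 20.0),
    ]

    annual_mean = sum(frac * mean for _, frac, mean in regimes)
    assert abs(sum(frac for _, frac, _ in regimes) - 1.0) < 1e-9  # fractions must cover the year
    print(f"regime-weighted annual mean = {annual_mean:.1f} ug/m3")

For exposure rather than concentration, the weights would instead reflect how much time people spend exposed under each regime, which is the point about averaging times made above.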

Many estimates of annual average concentrations of indoor radon are based on measurements taken over periods of a few days under conditions that are quite unrepresentative of those existing in a house over a whole year. The estimates so derived can easily differ from true annual averages by a factor of 2 or more because, for example, the conditions that give rise to indoor radon change from season to season (Nero et al., 1986).

Modeling of very long exposures, as is required in assessing risk associated with exposure to carcinogens, presents several major difficulties. The typical practice is to measure or model the concentration of a contaminant at one time and determine lifetime exposure by multiplying that concentration by a long period, e.g., the lifetime of a person. However, both exposures and activity patterns change substantially over a lifetime. Industrial processes also change over time. Sources are introduced (such as wood-burning stoves), and sources are eliminated or modified (as when catalytic converters are added to motor vehicles). Large facilities typically have a design life of 30 years, so considerable uncertainty can be anticipated in a typical calculation of 70-year lifetime exposure.

Time-activity patterns and locations of people also vary substantially over long periods. In the United States, people change their place of residence frequently and rarely live in the same place over a lifetime. For agents such as radon, such mobility can have a substantial impact on exposure and thus on the use of exposure estimates in an epidemiological study.

A person's activity patterns shift from childhood through early adulthood and middle age to old age. There have been some efforts to address differences in exposure associated with aging, but this aspect of variability in exposure over long periods has generally not been addressed in exposure modeling.

The modeling of short-duration peak exposures is also attended by temporal problems. Typical steady-state airborne-concentration models are not able to provide estimates for periods shorter than 1 hour and have difficulty in modeling time-varying concentrations, which can lead to high short-term exposures. If an exposure model is to estimate the effects of peak exposures on sensitive populations, the concentration model must provide reliable estimates on biologically relevant time scales. Some important developments in stochastic models that might be able to provide such estimates have not yet been incorporated into exposure-estimation procedures.

SUMMARY

Models are useful tools for quantifying the relationship between air-pollutant exposure and important variables, as well as for estimating exposures in situations where measurements are unavailable. Models may obviate extensive environmental or personal measurement programs by providing estimates of population exposures that are based on small numbers of representative measurements.

They can be used to identify major exposure parameters and to assist epidemiological studies and risk assessments. Models generally rely on assumptions and approximations to describe quantitatively cause-and-effect relationships that are otherwise difficult to determine. Despite the simplifications inherent in models, they provide insights and information about the relationships between exposure and the independent variables that determine exposure.

Models discussed in this chapter are classified into two broad categories: those that predict exposure (in units of concentration multiplied by time) and those that predict concentration (in units of mass per volume). Although concentration models are not truly exposure models, their output can be used to estimate exposures when combined with information on human activity patterns. Exposure models can be used to estimate individual exposures or the distribution of individual exposures in a population. Activity patterns and microenvironmental contaminant concentrations, the inputs to exposure-prediction models, can be measured or modeled.

Concentration models are separated into several types within two categories: models based on the principles of physics and chemistry and models that statistically relate measurements of concentrations to independent variables thought to be direct determinants of concentration. Many hybrids of these two basic approaches to modeling contaminant concentrations also exist.

Concentration Models

These models are used extensively to estimate outdoor contaminant concentrations at specific sites. They use physical, chemical, and statistical methods to address contaminant source release, dispersion, reaction, and deposition. Models are also used to estimate indoor contaminant concentrations; most of these applications have occurred in occupational or industrial settings. They generally focus on estimating the contaminant concentration in a worker's breathing zone.

Over the past decade, research in air quality in nonindustrial indoor environments has dramatically changed the understanding of human exposures to many airborne contaminants. Many critical factors involved in residential exposures differ from those in industrial exposures. For example, the indoor exposed population includes members who are very young, very old, or infirm. The potential indoor exposure duration in residences is much longer than that of a typical working career. The concentrations of contaminants and ventilation rates are often much lower in residences than in industrial

environments. Most advances in indoor-air modeling have come from increasing the sophistication and complexity of the models.

Outdoor

New developments in stochastic dispersion models offer improvements in the prediction of the average and time-varying concentrations to which individuals are exposed. Receptor models can be used to cross-validate dispersion models. They also can be used to identify sources of exposure.

In many cases, the data describing the source characteristics are not available on the time scale at which the model predictions are needed. Such mismatches between the time scale of the measurements and the time scale of the models preclude adequate model development, validation, and application to new biologically relevant exposure situations. Because of the changing nature of sources and source emissions with changes in production and control technology and in economic conditions, it is necessary to measure periodically the amounts and chemical characteristics of sources of airborne contaminants.

Improvements in photochemical models now permit far more accurate predictions of the spatial and temporal variability of ozone and some other atmospheric constituents than were previously possible. However, it is still not possible to incorporate the complete, explicit mechanisms into air-shed or long-range transport models.

Indoor

Current models used to predict worker exposures to airborne toxicants are relatively simple, undeveloped, and unvalidated. This deficiency has caused practitioners to use the models, intended as estimation techniques, as though they were conservative screening techniques.

Little work has been done to model very short-term exposures (peak exposures) and gradients relative to dispersion, deposition, and ventilation in indoor environments. The sources of indoor-air pollution need to be characterized. Measuring and modeling the temporal patterns of source strength as a function of readily identifiable or measurable source characteristics are critical steps in that process. In addition, more work is needed to model the relationship of indoor-air quality to the composition of the ambient atmosphere. Furthermore, the chemistry of the indoor atmosphere remains to be investigated.

The variability of concentrations in indoor industrial air over short time

frames needs to be measured for emergency situations. The validation of models used to predict concentrations is linked to appropriate sampling time frames and to methods with adequate sensitivity to specific chemical species.

Indoor-Air Chemistry

Indoor-air chemistry needs substantial research, including surface reactions on various materials, sorption, deposition, and the rates of these processes relative to ventilation or other loss mechanisms.

Exposure Models

Current exposure models are based on relatively general assumptions about the distribution of contaminant concentrations in microenvironments, the activity patterns that determine how much time people spend in each microenvironment, and the representativeness of a sample to the population that might be exposed to a contaminant. In a model of individual exposure, contaminant concentrations in each microenvironment are measured or modeled, and time-activity patterns are used to estimate the time spent in each microenvironment. Modeling exposure of populations requires the combining of microenvironmental concentrations with individual activity patterns and extrapolation of the results to a population.

Models for predicting exposures of populations have been developed recently. They have not, however, been adequately validated. Limited validation studies of the SHAPE exposure model, for example, have shown that the average values are well predicted but that there are substantial discrepancies in the tails of the distribution. Further development and validation of the models are warranted. One cause of inaccuracy in exposure modeling is failure to obtain measurement data on an appropriate time scale. Sampling and analysis must cover sufficient time for concentrations to be reasonably estimated for a full year, if they are to serve as reliable inputs to exposure models.

Source Models

Source-emission models are available to predict mass emission rates for a variety of dynamic and steady-state emission problems. The available emission models allow the estimation of downwind exposure for continuous or catastrophic releases of pure compounds or binary mixtures. These models

have not been validated. Dense-cloud dispersion models are available to estimate downwind exposure for heavier-than-air vapor releases; they also have not been validated.

Emission-rate estimation protocols are available for defining losses from chemical-processing equipment. Emission modeling, coupled with dispersion modeling and time-activity estimates, allows workplace-population exposures to be estimated before construction of new production facilities.

Validation

Further validation studies are needed for virtually all existing models, including concentration-prediction and exposure models. In particular, immediate efforts are needed to validate the NEM model and to modify it to reflect more accurately the actual situations that can result in high population exposures. Valid emission-rate models are needed to provide precise estimates for multicomponent mixtures. Validated dispersion models are needed to predict downwind concentrations in complex terrain and to provide accurate exposure estimates for down- and up-gradient terrain conditions. The same data set cannot be used both to refine and to validate a model; new, independent data are required to validate any refined model. All assumptions used in developing a model should be documented explicitly. Care should be taken by investigators in any field-monitoring program to integrate their measurements with the needs of the modeling community, so that the requisite model input data are obtained and the measurement results can be used to test, refine, or validate appropriate models.

Measurements are needed of the concentrations of airborne pollutants in workplaces and homes, along with the critical independent variables, such as source-emission-rate distributions and the indoor general ventilation fields. Concentration gradients within physically defined microenvironments also need to be measured accurately. When planning measurement campaigns, consideration should be given to sampling strategies that would permit the extrapolation of the results to biological time frames other than those of the measurement program.
