The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
6

Models

INTRODUCTION

Mathematical models use systems of equations, based on a conceptual framework, to describe interactions among components of physical, chemical, or biological systems. The conceptual component of a model consists of the assumptions and approximations that reduce a complex problem to a simplified, more manageable one. Models are used because they are an efficient way to examine the cause-effect relationships among components (or variables) in a system.

The bases of mathematical models are the fundamental physical and chemical laws, such as the laws of conservation of mass, energy, and momentum. Modelers must choose the level of detail in which components of a system are described. Clearly, an extremely rigorous model that includes every phenomenon in microscopic detail would be so complex that it would take a long time to develop and might be impossible to use. A compromise is always required between a rigorous description and getting an answer that is meaningful for a specific application with limited resources. This compromise involves making many simplifying assumptions, which should be carefully considered and listed. They impose limitations on the model that should always be kept in mind when evaluating the model's results.

Models are useful tools for quantifying the relationship between air-pollutant exposure and important variables, as well as for estimating exposures in situations where measurements are unavailable. Exposure models may obviate extensive environmental or personal measurement programs by providing estimates of population exposures that are based on small numbers of representative measurements. The challenge is to develop appropriate models that allow for extrapolation from relatively few exposure measurements to a much larger population (Sexton and Ryan, 1988).

A practical approach to assessing exposure through modeling requires

decisions as to how precise and accurate the assessments need to be. The ultimate focus is on the biological effects of exposure, so decisions on accuracy and precision require some quantitative knowledge of the biological effects. Limitations on resources require the exposure analyst to choose the most economical methods to answer the question, "How accurately must the exposure or exposure-potential estimate be to provide the needed information for risk estimation, risk management, or epidemiology?" For risk-related problems, the analyst seeks a magnitude of exposure that defines the threshold of "significant risk." In some cases, the threshold has already been set with the establishment of an exposure limit (e.g., by ACGIH, OSHA, or EPA). In other cases, the threshold needs to be ascribed on the basis of available information on possible health effects of the contaminant of interest or a structural analogue. The judgment of those assigning limits should be driven by the quality of the data.

For risk assessment and management, health-effects data bases with a high degree of uncertainty should result in concomitantly high levels of attributed risk per unit exposure, that is, a low exposure limit as a prudent safeguard against underestimating the health-effects potential of the agent. Thus, extremely meager information on contaminants and biological effects will result in low exposure limits until the data base can be improved to justify a higher limit. For epidemiological studies, the modeler must understand the study design sufficiently to recognize the trade-offs between levels of uncertainty in exposure estimates and the ultimate risk evaluation that also depends on the level of uncertainty in the health-effects data.

Given an exposure limit, the analyst needs to determine whether any particular exposure scenario constitutes a significant fraction of that limit.
However, the analyst needs only to use models with "enough" sophistication to do the job at the least cost. Simple models can be used first to explore an exposure scenario, because they require relatively few data and are thus less expensive to implement than the more sophisticated techniques. Simple models generally yield biased estimates of exposure. It is recommended that only models known to be conservative be used in screening calculations, so that any bias that exists is protective of the exposed individual. Consider a contaminant with a vapor pressure of 0.1 torr, a molecular weight of 100, and a daily exposure limit of 8,000 (mg/m3)·hr (an 8-hour time-weighted average of 1,000 mg/m3). A simple model that assumes complete saturation of the air with this compound will render an estimated exposure of 4,300 (mg/m3)·hr, or about 50% of the exposure limit. Assuming further that this compound is not present in particulate form (which would increase the amount of contaminant inhaled) allows one to estimate a lack of significant risk vis-à-vis the exposure limit. The true exposure will most likely be below this very conservative estimate, but greater quality of assessment is not needed, because this is a worst-case scenario.

Exposure models can be used to identify major exposure parameters (e.g., sources, emission rates, etc.) and to assist epidemiological studies and risk assessments. Although the input required for exposure models depends upon the nature of the model, all exposure models require information on who is exposed, to what contaminant, for how long, and under what circumstances (Davis and Gusman, 1982). Many models also require information on the sources, transport, transformation, and fate of the contaminants of interest. Models generally rely on assumptions and approximations to quantitatively describe cause-effect relationships that are otherwise difficult to determine. In this way, models are used to estimate exposures when it is impractical or impossible to measure exposures of an individual or population to a contaminant. Despite the simplifications inherent in models, they provide insights and information about the relationships between exposure and the independent variables that determine exposure.

Models discussed in this chapter are classified into two broad categories: those which predict exposure (in units of concentration multiplied by time) and those which predict concentration (in units of mass per volume). Although concentration models are not truly exposure models, their output can be used to estimate exposures when combined with information on human time-activity patterns (see Figure 6.1). Since exposure occurs when humans are in contact with contaminant(s), exposure models generally combine information on the concentrations in microenvironments with information on activity patterns. The output of such models is a prediction or description of exposure for individuals or populations.
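The saturation screening calculation above can be sketched as follows; the ideal-gas saturation concentration reproduces the roughly 4,300 (mg/m3)·hr figure. This is a sketch, with the 25 °C temperature and the 8-hour exposure duration as assumptions not stated in the text:

```python
# Worst-case screening estimate: air fully saturated with the contaminant's vapor.
# Assumptions (not given in the text): ideal-gas behavior at 25 degrees C (298 K).

R = 0.08206          # L·atm/(mol·K), ideal-gas constant
T = 298.0            # K, assumed temperature
P_torr = 0.1         # torr, vapor pressure of the contaminant
MW = 100.0           # g/mol, molecular weight
limit = 8000.0       # (mg/m3)·hr, daily exposure limit

# Saturation concentration: C = (P/760 atm) / (R*T) mol/L, converted to mg/m3.
mol_per_L = (P_torr / 760.0) / (R * T)
C_sat = mol_per_L * MW * 1e6      # g/mol -> mg (x1000) and per L -> per m3 (x1000)

exposure = C_sat * 8.0            # 8-hour worst-case exposure, (mg/m3)·hr
fraction = exposure / limit

print(f"saturation concentration = {C_sat:.0f} mg/m3")
print(f"worst-case exposure = {exposure:.0f} (mg/m3)·hr, {100*fraction:.0f}% of limit")
```

The computed value, about 538 mg/m3 over 8 hours, is roughly 4,300 (mg/m3)·hr, matching the figure in the text.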
Exposure models can be used to estimate individual exposures or the distribution of individual exposures in a population. Activity patterns and microenvironmental contaminant concentrations, the inputs to exposure prediction models, can be measured or modeled. The microenvironmental concentrations and the activity pattern can vary from individual to individual and from time period to time period. Three types of models have been developed to estimate population exposures: (a) simulation models such as SHAPE (Ott, 1981, 1984) and NEM (Johnson, 1984; Johnson and Paul, 1984), (b) the convolution model by Duan (1981, 1982, 1985, 1989), and (c) the variance-components model by Duan (1989).

As shown in Figure 6.1, concentration models are separated into several types: models based on the principles of physics and chemistry, and models that statistically relate measurements of concentrations to independent variables thought to be direct determinants of concentration (e.g., gas emission

rate from a cooking range) or indirect indicators (e.g., the presence of a gas range). There are also many hybrids of these two basic approaches to model contaminant concentrations.

[FIGURE 6.1 Schematic diagram of models used in exposure assessment. Models based on principles of physics and chemistry, and models based on statistical relationships, yield modeled or measured concentrations in microenvironments (indoor and outdoor); combined with time-activity pattern information, these yield modeled exposure.]

Concentration models based on physical principles quantitatively estimate emission source dispersion, deposition in the environment (indoor or outdoor), and transport to the receptor for a given contaminant. The transfer of a contaminant from one medium to another can also be modeled in this way. If a contaminant undergoes chemical reaction in the environment, then models based on chemical reaction kinetics principles are used to predict the outdoor concentrations of the secondary contaminants (products of reaction). Ozone and sulfuric acid aerosols are examples of secondary contaminants formed by chemical reactions of primary contaminants as they are dispersed and transported in the outdoor atmosphere. Models to describe and predict their concentrations and, ultimately, human exposures must, therefore, incorporate the rates and products of the chemical reactions.

The development of faster, larger, and less costly computers has greatly enhanced our ability to model complex phenomena like the turbulent flow of air in the outdoor and indoor environments. An approach to modeling the dispersion of contaminants from sources is to approximate the random motion of individual air parcels. However, random motion requires total independence of one time interval from another, and this requirement is not met for diffusion in the atmospheric boundary layers. Instead, a correlation will exist between one time interval and the next. This autocorrelation can be modeled approximately, and the motion of a large number of individual parcels can be calculated.

IMPORTANT MODEL CHARACTERISTICS

Limited information is available regarding the accuracy of most contaminant concentration models, and less is known about exposure models, because most models have not been adequately validated. Model users should understand that model outputs have uncertainties, not just those arising from the uncertainties in the input data, and that actual exposure lies somewhere in the range of that uncertainty. The results of models should be presented with their estimated uncertainties. To the extent possible, the description of the model results should distinguish between input and model uncertainty. A major objective for improving models should be to reduce uncertainty due to the model itself, so that the estimated exposure is closer to the real exposure and the uncertainties are primarily associated with the uncertainties in the input data.

Concentration and exposure models do not always include sufficient documentation (fundamental equations, assumptions, whether parameters were lumped, etc.) to enable new users to identify and adjust critical model parameters to fit new applications and/or to compare their problems with previous applications. The inclusion in a model of particular complex terrain, of specific contaminant source locations, unique source types, or other unusual features of a particular air shed may result in a model of high specificity; portions of such specific models may be applied to other air sheds only if the models are well documented.
For example, a model developed for the Los Angeles urban atmosphere could not be used to estimate contaminant concentrations in Denver's atmosphere unless the model takes account of the change in air density from sea level to Denver's 5,000-foot elevation, along with other geographical differences.

Although of limited use, sophisticated models are valuable research tools and provide valuable information on concentrations or exposures. With greater computational power becoming increasingly available, these models could be more widely applied in the future. It is important that users fully understand the models they apply, because improper use of a complicated model increases the likelihood of obtaining misleading results.

Computer models need to be transferable from one computer system to another, so that the validity of the model can be checked by others and the

model can be applied to other problems. Source codes for models (e.g., computer-language code) in general should be provided in a form complete enough that programmers need not resort to any functions or subroutines other than those commonly available in the compiler for the model's language. In addition, as expert systems are developed to assist the application of models, attention must be paid to ensuring that these systems can be operated by new users.

CONCENTRATION MODELS

Models are used extensively to estimate outdoor contaminant concentrations at specific sites. These models use physical, chemical, and statistical methods to address the contaminant source release, dispersion, reaction, and deposition. Models are also used to estimate indoor contaminant concentrations; most of these applications have occurred in occupational/industrial settings. They generally focus on estimating the contaminant concentration in a worker's breathing zone. The following discussion reviews outdoor concentration models (e.g., emission, dispersion, atmospheric chemistry) and indoor concentration models (industrial and nonindustrial), including a review of deposition and mixing within and between rooms. Variability is discussed for both types. The section concludes with a discussion of recent advances in outdoor and indoor concentration models.

Outdoor Models: Contaminant Source Emissions

Emission models based on the properties of the chemicals, design parameters of the emission sources, the physics of mixtures, and the ambient weather conditions can provide an alternative to source monitoring (Owens et al., 1964; MacKay and Matsugu, 1973; Reinhardt, 1977; Tung et al., 1985). The type and structure of a model depend on the source and type of contaminant releases; some sources are continuously replenished and can be considered to be at steady state, while other releases change in temperature or concentration.
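An emission model of this kind can be sketched for steady-state evaporation from a liquid pool, using a gas-phase mass-transfer formulation in the spirit of the pool-evaporation literature cited above. This is only a sketch: the mass-transfer coefficient, pool area, temperature, and chemical properties are all illustrative assumptions, not values from the text.

```python
# Steady-state, mass-transfer-limited evaporation from a liquid pool:
#   E = k_m * A * c_surface * MW
# where c_surface is the ideal-gas vapor concentration at the liquid surface.
# All parameter values below are assumed for illustration.

R = 8.314            # J/(mol·K), ideal-gas constant
T = 298.0            # K, assumed ambient temperature
k_m = 0.005          # m/s, assumed gas-phase mass-transfer coefficient
A = 10.0             # m2, assumed pool area
P_vap = 13.3         # Pa, vapor pressure (about 0.1 torr)
MW = 0.100           # kg/mol, molecular weight

# Vapor concentration at the surface (ideal gas): c = P/(R*T), in mol/m3.
c_surface = P_vap / (R * T)

# Emission rate, assuming a negligible far-field vapor concentration.
E = k_m * A * c_surface * MW      # kg/s

print(f"emission rate = {E * 1000 * 3600:.1f} g/hr")
```

More elaborate treatments (e.g., wind-speed-dependent mass-transfer coefficients, heat balances for evaporative cooling) follow the same structure with additional terms.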
Hanna and Drivas (1987) describe in detail various models available for dynamic and steady-state sources.

Accurate estimation of emissions from point, area, and volume sources is necessary for accurate quantification of downwind ambient concentrations. Quantification of point sources such as stack discharges from manufacturing units can be accomplished by a number of methods, including monitoring of the sources directly and standard chemical engineering design procedures

based on material and heat balances. For example, boiler emissions can be defined by knowledge of the composition of the fuel burned and the ash produced by the fuel combustion. Estimating releases from other processing equipment may require knowledge of the reaction kinetics, the vapor-liquid behavior of the reaction mixtures, and the operating temperatures and pressures.

Emissions from nonpoint sources are more difficult to monitor. A number of attempts have been made over the past decade to develop monitoring techniques for vapor and particulate emissions from pits, ponds, and lagoons (Harrison and Hughes, 1976, 1981; GCA, 1982; Thibodeaux et al., 1982) and fugitive emissions from chemical process equipment (EPA, 1988c). The Chemical Manufacturers' Association (CMA, 1987, 1989) and the EPA (1988c) have published extensive data and models for the quantification of fugitive emissions from chemical process equipment. EPA and the American Petroleum Institute have published models for quantifying the emissions from large storage tanks (EPA, 1985a). Emissions are estimated for working losses (filling and draining the vessels) and breathing losses (losses caused by the diurnal temperature change). The EPA estimation procedure is frequently updated for use by federal and state regulators and the manufacturing organizations in permit negotiations and development of state implementation plans for compliance with federal regulations.

The development of empirical models for emission-rate estimation has focused mainly on issues related to fugitive emissions. The rate of fugitive emissions at any process point (valve, pump, etc.) is assumed to characterize all similar process points or similar equipment items. Although this assumption is known to be incorrect, data are insufficient to provide better emission predictions. High emission-rate predictions are obtained with these models, and thus the subsequent exposure predictions may be overly conservative.
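The material-balance approach for boiler emissions mentioned above can be sketched for sulfur dioxide: all sulfur in the fuel is assumed to leave as SO2 except the fraction retained in the ash. The fuel feed rate, sulfur content, and ash-retention fraction below are illustrative assumptions, not data from the text:

```python
# SO2 emission rate from fuel composition via a sulfur material balance.
# All input values are assumed for illustration.

fuel_rate = 10000.0      # kg fuel burned per hour (assumed)
sulfur_fraction = 0.02   # kg S per kg fuel (2% sulfur coal, assumed)
ash_retention = 0.05     # fraction of fuel sulfur retained in the ash (assumed)

MW_SO2, MW_S = 64.06, 32.06   # molecular weights, g/mol

# Sulfur leaving as gas, then converted stoichiometrically to SO2 mass.
s_emitted = fuel_rate * sulfur_fraction * (1.0 - ash_retention)   # kg S/hr
so2_rate = s_emitted * (MW_SO2 / MW_S)                            # kg SO2/hr

print(f"SO2 emission rate = {so2_rate:.0f} kg/hr")
```

The same balance structure applies to other conserved species (e.g., ash, trace metals) once their partitioning between flue gas and solid residue is specified.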
Models for sudden releases of hazardous materials are generally based on fundamental principles of physics. The mass and heat balances (Bird et al., 1960) used by the modelers have used either a dynamic solution or a steady-state solution of the system of equations that describe these episodes. For spills on land, a model was developed for quantification of liquefied natural gas releases (Straw and Briscoe, 1978). For spills on land or water, a model was developed for characterizing the emissions of chemicals in the workplace (Wu and Schroy, 1979). These and related models are discussed by Hanna and Drivas (1987).

Models are used to calculate emissions of carbon monoxide, NOx, and organics from motor vehicles. Seitz (1989) contrasts the methods used by the state of California with those used by the federal government for transportation and emission analysis.

Validation

To ensure that their concentration estimates are appropriate, it is necessary to validate emission models with data from operating systems. The type of validation depends on the type of model and the ability of monitoring protocols to quantify actual emissions accurately. For fugitive emissions, the rate of losses to the environment can be measured directly by enclosing individual sources to quantify the emission rate. The accuracy of the emission-rate measurement depends on the size and type of equipment, the operating conditions, and the chemical and physical properties of the chemicals being handled. For example, the petroleum refining industry commonly involves high-temperature processing of chemicals in large equipment, but the chemical industry commonly uses ambient temperatures and small equipment and has substantially lower emission rates.

Losses from large open ponds and pits are more difficult to quantify and have caused difficulty in validation of emission models. The evaporation of water from large lakes, monitored for many years by the U.S. Weather Service, provides the best validation data base. Spill tests with chemicals such as ammonia and liquefied natural gas offer another data base for validation and calibration of emission models. Validation of models for aerated basins, tanks, and lagoons can use standard data from the chemical engineering transport literature when no reactions or other removal mechanisms are involved. When a biological oxidation-reduction process provides a competitive removal mechanism, the validation of emission models is much more difficult. Kinetic information is needed for biological degradation as an event separate from losses due to volatilization. Much of the literature on biological reaction kinetics combines volatilization and degradation losses and attributes the total loss to kinetic reactions.
This procedure makes the resulting data bases difficult to apply to specific sources.

Contaminant Dispersion

Models using annual average emission rates that were either measured or estimated have been available since the early 1930s (Sutton, 1932) for simulating the dispersion of emissions from point sources. However, it was only in the late 1960s and the early 1970s that there was substantial development of computer programs for air dispersion of contaminants. For example, EPA has supported the continuing development of a variety of Gaussian plume models in its Users Network for Applied Modeling in Air Pollution (UNAMAP) series of programs.

The basic concept of Gaussian plume models is that the turbulent dispersion of contaminants in the air has a random character of large-scale eddy motion that is analogous to the Brownian motion of molecules. From this analogy, a differential equation based on Fick's law is obtained, and the solutions are Gaussian functions. For atmospheric dispersion, motion in the direction of the wind (advection) is modeled as the average wind speed. Horizontal and vertical dispersion perpendicular to the prevailing wind direction are modeled as Gaussian functions, with the standard deviations functions of atmospheric stability and distance from the source (Hanna et al., 1982). To incorporate some of the source characteristics that affect dispersion, buoyant plume rise was included in the dispersion models (Briggs, 1969, 1971).

In 1978, EPA designated certain dispersion model computer codes as "approved models" for developing state implementation plans to achieve compliance with National Ambient Air Quality Standards (NAAQS) (EPA, 1978). With EPA's endorsement of these models, they have become the principal tools in plans for controlling contaminant sources. In developing control strategies for contaminants regulated by the NAAQS, EPA developed models that combined source emission rates with atmospheric dispersion to predict the concentrations of the contaminants at a receptor site and to test the effectiveness of control strategies. Prediction of the concentration of ozone, a contaminant regulated by the NAAQS, requires modeling of the photochemical transformation of its precursors, i.e., volatile organic compounds and NOx, as well as their transport.

Dispersion modeling also can be done statistically. The air can be considered as a number of parcels, or particles, which move in a random fashion (Taylor, 1921). The path of a single parcel can be described by a statistical function.
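The Gaussian plume concept described above can be sketched directly. The formula below is the standard steady-state form with ground reflection; the emission rate, wind speed, stack height, and dispersion parameters in the usage example are assumed illustrative values, not a vetted stability-class parameterization:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m3).

    Q: emission rate (g/s); u: mean wind speed (m/s); y: crosswind and
    z: vertical receptor coordinates (m); H: effective stack height (m);
    sigma_y, sigma_z: dispersion parameters (m) evaluated at the receptor's
    downwind distance. Includes reflection from the ground surface.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration downwind of a 50-m stack.
# The sigma values are assumed, roughly neutral-stability magnitudes at ~1 km.
C = gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=80.0, sigma_z=50.0)
print(f"ground-level concentration = {C * 1e6:.0f} ug/m3")
```

In practice, sigma_y and sigma_z are taken from empirical curves as functions of atmospheric stability class and downwind distance, which is where most of the model's empirical content resides.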
If the parcel is assumed to have independent motion at any step during transport, it can be modeled as a "random walk," in analogy to the Brownian motion of molecules. That concept was extensively developed in the 1950s, but the methods became so complicated by the need for empirical factors that they were replaced with the simpler Gaussian plume methods (Hanna et al., 1982).

In recent years, stochastic modeling of atmospheric dispersion has increased in popularity, because it is relatively simple, it can be applied to complicated problems, and it has been made more practical by improvements in computer capability and costs. Probabilistic models can easily incorporate physical phenomena, such as buoyancy, droplet evaporation, polydispersity of released particles, and dry deposition.

Stochastic modeling is typically implemented as a numerical Monte Carlo model. Boughton et al. (1987) describe a Monte Carlo simulation of atmospheric dispersion in which parcel displacement or velocity is treated as a

continuous-time Markov process. They restrict the model to crosswind-integrated point sources and assume that dispersion in the mean wind direction is negligible. That reduces the analysis to one dimension. Liljegren (1989) has extended the model to incorporate horizontal and vertical dispersion perpendicular to the mean wind direction. The results of the latter model agree well with published concentration data (William E. Dunn, University of Illinois, Urbana, personal communication, 1988). It appears that three-dimensional stochastic models will offer considerable predictive improvement (including predictions of concentration change with time) over conventional Gaussian plume models.

Most of the studies to calibrate and validate plume dispersion models have involved the release of inert tracer gases from near the ground in nonbuoyant plumes, conditions very different from real stack plumes. In general, the studies have not covered a sufficient distance downwind to test the models beyond a few kilometers, so the results might not be reliable. Tracer programs and in-plume aircraft flights do not provide sufficient data to permit evaluation of the models' ability to predict short-term peak concentrations. Long-term average values have been estimated with data from sparse networks of continuous monitors, but their spatial resolution might be too low for estimation of impacts of peak concentrations. Thus, validation is still inadequate.

With support of the Electric Power Research Institute, a major study to validate plume models was mounted in the early 1980s. The first study was of a large coal-fired power plant situated in relatively simple terrain, to minimize topographical uncertainties. The study compared three Gaussian plume models and three stochastic models with ground-level concentrations obtained with both routine and intensive measurement programs (Bowne and Londergan, 1983).
The results indicated serious deficiencies in the particular dispersion models tested; they do not address complicating effects, such as complex terrain, surface roughness, atmospheric chemistry, and large sources of heat that cause localized climatic change, and therefore are of uncertain validity.

Little is known about how a plume is affected by the objects it passes over. For instance, a large manufacturing plant may emit much heat that creates localized climate changes that directly affect the plume. In what is called the heat-island effect, large masses of hot air rise and change the local climate. This can change weather patterns over large cities.

The behavior of buoyant, neutral-buoyancy, and dense clouds in regions of complex terrain constitutes a problem for the dispersion modeler. The buoyant and neutral-buoyancy plume models developed to date provide little encouragement that the problems can be solved to permit reasonable predictions of exposure. Little research has been done on the behavior of dense clouds.

The dense and neutral-buoyancy models use mixing factors to represent the surface under a plume. For example, the factors used for rural terrain are equivalent to flat, low-friction surfaces, which cause a minimum of plume turbulence. For urban terrain, the impacts of homes, businesses, and factories have been quantified by calibration experiments. Rural factors are usually used to ensure that results do not underestimate contaminant concentrations. However, surface roughness and the interaction of a plume with a building can have substantial effects. If the plume is spread sideways by such an interaction, the results might well be catastrophic for a plant poorly designed for the community setting.

Atmospheric Chemistry

It is now possible to describe in detail many of the individual reactions occurring in photochemical smog (Niki et al., 1972; Demerjian et al., 1974; Seinfeld, 1988). Use of explicit and detailed mechanisms in air-shed or long-range transport models, however, is not always practical, and detailed information on the rate constants of the precursors, intermediates, and products is not complete. The limitations on the understanding and quantitation of the complex chemical reactions can severely limit the accuracy of the output prediction. In addition, the computer time required for the integration of the rate equations associated with the hundreds of individual compounds involved is prohibitive using current computer systems.

For urban air-shed models, condensed or "lumped" chemical mechanisms are generally used (Finlayson-Pitts and Pitts, 1986; Seinfeld, 1988); i.e., reactions or chemical species are grouped, and an overall rate constant is used for each group (Falls and Seinfeld, 1978; Whitten et al., 1980; McRae et al., 1982). This approach can affect the spatial and temporal accuracy and precision of a model.
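A minimal illustration of mechanism-based concentration modeling is the photostationary-state relation among NO, NO2, and O3, one of the few pieces of smog chemistry simple enough to evaluate in closed form. The rate constant and photolysis frequency below are typical literature magnitudes, and the NO and NO2 concentrations are assumed values, used here only for illustration:

```python
# Photostationary state for the NO/NO2/O3 cycle:
#   NO2 + hv -> NO + O(3P);  O(3P) + O2 -> O3;  O3 + NO -> NO2 + O2
# Setting O3 production equal to O3 loss gives
#   [O3] = j_NO2 * [NO2] / (k * [NO])
# All input values below are assumed, order-of-magnitude figures.

j_NO2 = 8.0e-3   # 1/s, NO2 photolysis frequency (clear midday sky, assumed)
k = 4.4e-4       # 1/(ppb·s), O3 + NO rate constant at ~298 K (typical magnitude)
NO2 = 40.0       # ppb, assumed
NO = 10.0        # ppb, assumed

O3 = j_NO2 * NO2 / (k * NO)
print(f"photostationary-state O3 = {O3:.0f} ppb")
```

Lumped urban air-shed mechanisms embed dozens of such coupled rate expressions in a stiff ODE system; the point here is only the structure of a rate-balance calculation, not a usable smog model.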
In addition, the lumping process limits the fundamental understanding of the specific pathways, and interesting chemistry may be hidden by the lumping process. To estimate ozone concentrations with a model, for example, it is necessary to estimate the concentrations of reactive intermediates. The resulting concentrations of these other substances reflect many of the simplifying assumptions and may lead to erroneous results, even if the specific concentration sought (i.e., ozone) is accurately predicted.

Ozone models have been critically reviewed by Seinfeld (1988). Improved ozone models incorporate wind fields, chemical reaction mechanisms, turbulent dispersion, and removal processes. The newer, more sophisticated mod-

house and the characteristics of the soil, such as permeability, are important factors influencing pressure-driven flows of soil gases. Efforts to model radon entry from soil and the consequent indoor radon concentrations are only beginning. Mowris and Fisk (1988) have developed an analytical (closed-form) model of soil-gas flow based on its analogy to heat transfer. The model was used to evaluate the impact of exhaust ventilation on indoor radon concentrations in two houses. It underpredicted radon concentrations by 23% and 13% for two different periods in one house and overpredicted by 22% in a second house, but the authors noted that the comparison with measured concentrations was encouraging. Loureiro (1987) has developed a theoretical model to predict indoor radon concentrations. It simulates rates of generation and decay of radon in soil; its transport through the soil due to diffusion and convection induced by a pressure disturbance at a crack in the basement; and its entrance into the house through the crack. Two computer programs were developed: one to calculate the pressure distribution in the soil and the resulting velocity distribution of the soil gas, and one to solve the radon mass-transport equation, calculate radon entry rates, and calculate the indoor radon concentration. Indoor radon concentrations were found to be directly, although not linearly, related to the indoor-outdoor pressure difference.

Domestic water contaminated with gases, such as radon and volatile organic compounds (VOCs), is a source of exposure that has only recently been recognized as important. Dissolved gases in contaminated water are released indoors during such residential uses as showering and dish-washing (Andelman, 1985; Gesell and Prichard, 1975; McKone, 1987; Jo et al., in press). McKone has developed a mass-transfer model to estimate human exposures to VOCs due to their transfer from tap water to indoor air.
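Indoor concentration models of this kind rest on compartment mass balances. A minimal single-compartment sketch is given below; the house volume, air-exchange rate, removal rate, and source strength are all assumed values, and real models (including the multicompartment ones described here) add compartments and source/sink terms to the same structure:

```python
import math

# Single-compartment indoor mass balance:
#   V dC/dt = S + lam*V*C_out - lam*V*C - k*V*C
# S: indoor source strength; lam: air-exchange rate; k: first-order removal.
# All parameter values are assumed for illustration.

V = 300.0        # m3, house volume (assumed)
lam = 0.5        # 1/hr, air-exchange rate (assumed)
k = 0.2          # 1/hr, indoor removal/deposition rate (assumed)
S = 1000.0       # mg/hr, indoor source strength (assumed)
C_out = 0.0      # mg/m3, outdoor concentration (assumed clean)
C0 = 0.0         # mg/m3, initial indoor concentration

# Steady state and exact transient solution of this linear ODE.
C_ss = (S / V + lam * C_out) / (lam + k)

def C(t_hr):
    """Indoor concentration (mg/m3) at time t_hr after the source turns on."""
    return C_ss + (C0 - C_ss) * math.exp(-(lam + k) * t_hr)

print(f"steady-state concentration = {C_ss:.2f} mg/m3")
print(f"concentration after 2 hr   = {C(2.0):.2f} mg/m3")
```

Monte Carlo variants, like the Traynor et al. model discussed below, draw V, lam, and S from housing-stock distributions and run this balance repeatedly to build concentration distributions.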
McKone's model estimates the release of VOCs from water and uses a three-compartment model to simulate the 24-hour concentration profile in the shower, the bathroom, and the rest of the house. A preliminary data base on household characteristics and time-activity patterns has been used to calculate a range of concentrations and human exposures to seven VOCs. Nazaroff et al. (1987) used a single-compartment mass-balance model with a long averaging time to calculate the distribution of indoor-air radon in U.S. homes from tap water.

In another recent advance in modeling indoor concentrations of contaminants in homes, Traynor et al. (1988) developed a single-compartment mass-balance model for combustion emissions, specifically CO, NO2, and respirable particles. Input data for the model include distributions of housing-stock characteristics (e.g., volumes and air-exchange rates), use of combustion appliances and sources (e.g., cigarettes), distribution of source emission rates, and source use. The model uses deterministic and Monte Carlo simulation techniques to generate distributions of average weekly concentrations of CO, NO2,

and respirable particles for four regions of the country. The modeled distributions have generally compared well with available field measurements. The model can also be used to rank indoor pollutant sources, identify high-risk populations, identify key factors for attempts at control and mitigation, and estimate exposures for epidemiological studies.

Nazaroff and Cass (1986) recently developed the first model for chemically reactive pollutants in indoor air. It combines the multibox ventilation model of Shair and Heitner (1974) with a modified version of the Falls and Seinfeld photochemical kinetic model (Falls and Seinfeld, 1978; Russell et al., 1985). The model accounts for the effects of ventilation, filtration, heterogeneous removal of gaseous pollutants, direct emissions, and homogeneous gas-phase reactions and predicts concentrations of such chemically reactive contaminants as HNO2, HNO3, NO3, and N2O5. Nazaroff and Cass (1986) tested the model in a museum gallery; predicted and measured concentrations of several pollutants were in reasonably good agreement. They also compared their modeled steady-state ratio of HNO2 to NO2 due to homogeneous gas-phase reactions with that measured by Pitts et al. (1985a) in an indoor environment; the experimental ratio was about 35 times the modeled ratio. Heterogeneous reactions appear to play an important role in indoor production of HNO2, and models for indoor atmospheric chemistry probably will eventually have to incorporate heterogeneous chemical reactions. However, very little is known about such reactions today.

EXPOSURE-ASSESSMENT MODELS

Current exposure models are based on relatively general assumptions about the distribution of contaminant concentrations in microenvironments, the activity patterns that determine how much time people spend in each microenvironment, and the representativeness of a sample to the population that might be exposed to a contaminant.
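The single-compartment mass balance that underlies several of the indoor concentration models described above (e.g., the Nazaroff et al. and Traynor et al. models) can be sketched as follows. This is a minimal illustration; the function name and parameter values are assumptions for the example, not values from those studies.

```python
import math

def indoor_concentration(source_rate, volume, air_exchange, c_outdoor=0.0,
                         c0=0.0, t_hours=24.0):
    """Single-compartment mass balance: V dC/dt = S + Q*C_out - Q*C,
    with ventilation flow Q = air_exchange * volume.

    source_rate  -- indoor emission rate (mass/hour)
    volume       -- house or room volume (m^3)
    air_exchange -- air-exchange rate (1/hour)
    Returns the steady-state concentration and the concentration after
    t_hours, starting from initial concentration c0.
    """
    c_ss = c_outdoor + source_rate / (air_exchange * volume)  # steady state
    # Transient approach to steady state from the initial concentration.
    c_t = c_ss + (c0 - c_ss) * math.exp(-air_exchange * t_hours)
    return c_ss, c_t

# Illustrative case: a 1 mg/h source in a 300 m^3 house at 0.5 air
# changes per hour, with clean outdoor air.
c_ss, c_t = indoor_concentration(1.0, 300.0, 0.5)
```

The closed-form solution shows why the air-exchange rate appears twice in such models: it sets both the steady-state level and how quickly that level is approached.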
Individual Exposures

In a model of individual exposure, contaminant concentrations in each microenvironment are measured or modeled and time-activity patterns are used to estimate the time spent in each microenvironment. (Exposure is the product of time and contaminant concentration.) An individual's overall exposure can be separated into the sum of products of concentration and time in

each microenvironment; this is termed a microenvironment decomposition (Duan, 1981).

Microenvironment decomposition can be extended to other summary exposure measures, such as peak concentrations. If we are interested in total exposure, microenvironmental decomposition is assumed to include all possible locales and activities. Duan (1981, 1985) developed a criterion for stratifying microenvironments to improve the precision of estimated average exposures and applied it to identify the important microenvironments for CO exposures.

Some models for predicting exposures make assumptions regarding the independence between contaminant concentrations and time spent and activity in a microenvironment. Such assumptions should be validated for specific applications. Duan (1985) has suggested that there is no correlation between CO concentrations and time on the basis of data from the Washington, D.C., CO study (Akland et al., 1985). However, there will be problems in the existing models if correlations between occupancy periods and concentrations exist for other contaminants, because the independent variables, time and concentration, would not be truly independent. If the correlation is very high, predictions based on the models might not be valid because of an inappropriate assumption of independence. The committee is unaware of any empirical data quantifying the extent of problems caused by such correlations. It is likely that for contaminants such as particles, the presence of a person might change the particle concentration of a previously unoccupied microenvironment. Further study of the problems such correlation would produce is needed.

Stock et al. (1985) used personal-activity profiles and household characteristics to partition locations into seven broad microenvironments: three indoor, two outdoor, and two transportation modes.
From measured concentrations of the criteria pollutant gases (ozone, NO2, SO2, CO), aeroallergens, aldehydes, TSP, and inhalable particles and the time in each partition, exposure estimates were calculated. The results will ultimately be combined with epidemiological data to determine the health effects of exposure to specific pollutants in a community environment.

More or less sophisticated versions of partitioning are used in the workplace, where they are referred to as job exposure profiling (JEP). JEP sometimes consists of grouping and compiling work tasks with durations of exposure at breathing-zone concentrations (Austin and Phillips, 1983). The product of such analysis is a prediction of exposure of any employee involved in the tasks covered by the JEP. Hansen and Whitehead (1988) recently monitored the activities and breathing-zone concentrations of printing-press operators and modeled time-weighted average exposures as a function of location and the number of times a "hazardous task" was performed.
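The microenvironment decomposition and the time-weighted averages used in job exposure profiling share the same arithmetic, sketched below. The microenvironment names, concentrations, and durations are invented for illustration only.

```python
def integrated_exposure(profile):
    """Sum of concentration x time over microenvironments
    (e.g., ug/m^3 * h). profile is a list of
    (microenvironment, concentration, hours) tuples."""
    return sum(conc * hours for _, conc, hours in profile)

def time_weighted_average(profile):
    """Integrated exposure divided by total time: the TWA concentration."""
    total_time = sum(hours for _, _, hours in profile)
    return integrated_exposure(profile) / total_time

# Hypothetical one-day activity profile.
day = [("home", 20.0, 14.0), ("office", 35.0, 8.0), ("commute", 80.0, 2.0)]
e = integrated_exposure(day)       # 20*14 + 35*8 + 80*2 = 720 ug/m^3 * h
twa = time_weighted_average(day)   # 720 / 24 = 30 ug/m^3
```

Note how the brief, high-concentration commute contributes as much to the integrated exposure as eight hours in the office, which is why stratifying microenvironments matters for precision.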

Population Exposures

Modeling exposure of populations requires the combining of microenvironment concentrations with individual activity patterns and extrapolation of the results to a population. Data on human activity patterns have been combined with measured outdoor concentrations in the NAAQS exposure model (NEM) to estimate exposures to CO (Biller et al., 1981; Johnson and Paul, 1983). The NEM was modified to include indoor exposures by incorporation of the indoor-air quality model (IAQM) (Hayes and Lundberg, 1985). The IAQM, based on the interactive solution of a one-compartment mass-balance model, incorporates three basic indoor microenvironments: home, office or school, and transportation vehicle. It has been used to estimate distributions of ozone exposures (Hayes and Lundberg, 1985) and to evaluate strategies for mitigating indoor exposures to selected pollutants in five situations, e.g., CO exposure from a gas boiler in a school (Eisinger and Austin, 1987).

As mentioned in the introduction to this chapter, three types of models have been developed to estimate population exposures: (a) simulation models such as SHAPE, (b) the convolution model, and (c) the variance-component model. The simulation of human air pollution exposure model (SHAPE) (Ott, 1981) is a computer model that generates synthetic exposure profiles for a hypothetical sample of human subjects; the profiles can be summed into compartments or integrated exposures to estimate the distribution of a contaminant of interest. The bulk of the model estimates the exposure profile of contaminants attributable to local sources; the contribution of remote sources is assumed to be the same as the background. The total exposure is therefore estimated as the sum of exposure due to local sources and the ambient background.
For each individual in the hypothetical sample, the model generates a profile of activities and contaminant concentrations attributable to local sources over a given period, say, 24 hours. Activity profiles are generated or accepted as input. At the beginning of the profile, the model generates an initial microenvironment and duration of exposure according to a probability distribution. At the end of that duration, the model uses transition probabilities to simulate later periods and other microenvironments. The procedure is repeated until the end of a selected long period. For each time unit, say, 1 minute, in a given microenvironment, the model generates a contaminant concentration according to a microenvironment-specific probability distribution: each microenvironment has a specific probability distribution for each contaminant concentration. Such models obviously require validation with measured exposure data for a subset of microenvironments and patterns.

Duan (1981, 1985, 1989) developed the convolution model for integrated

exposures. It calculates distributions of exposure from distributions of concentrations observed in defined microenvironments and the distribution of time spent in those microenvironments.

The variance-component model (Duan, 1989) assumes that short-term contaminant concentrations can be decomposed into components that vary in time and those that do not. SHAPE deals mainly with the time-varying component; the convolution model deals mainly with the time-invariant exposure. The two components can be summed or multiplied to yield an estimated concentration value. It is necessary to determine the distributions of the two concentration components. If continuous personal-monitoring data are available, it is possible to estimate the distributions of the two components directly. If integrated personal-monitoring data are available, the methods described by Duan (1989) can be applied. Once the concentration distributions are available, exposure distributions can be estimated with a computer simulation similar to SHAPE. Instead of generating a contaminant concentration for each time unit independently, as in SHAPE, a time-invariant concentration and a time-varying concentration are generated for each unit and combined to determine 1-minute concentrations. The remainder of the simulation is identical to that in SHAPE.

All three types of models (SHAPE, convolution, and variance-component) need to make assumptions about independence. The critical difference among the three types is in those assumptions. SHAPE assumes that the short-term pollutant concentrations (e.g., 1-minute averages) within the same microenvironment are stochastically independent and independent of activity patterns. It follows that the microenvironmental concentration is not correlated with activity time in that microenvironment. Furthermore, the variance of the averaged concentration decreases in inverse proportion to activity time.
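A SHAPE-style simulation of the kind described above, together with a quick check of the inverse-time variance property, can be sketched in a few lines. The microenvironments, transition probabilities, and lognormal parameters below are illustrative assumptions, not values from the model.

```python
import random
import statistics

# Hypothetical per-minute transition probabilities between
# microenvironments, and lognormal concentration parameters
# (mu, sigma of the log-concentration) for each one.
TRANSITIONS = {"home":    {"home": 0.95, "transit": 0.03, "other": 0.02},
               "transit": {"home": 0.40, "transit": 0.50, "other": 0.10},
               "other":   {"home": 0.05, "transit": 0.05, "other": 0.90}}
CONC_PARAMS = {"home": (0.5, 0.4), "transit": (1.8, 0.6), "other": (1.0, 0.5)}

def simulate_exposure(minutes=1440, start="home", seed=0):
    """Generate a synthetic 24-hour profile, drawing one 1-minute
    concentration per time unit, and return the integrated exposure
    (concentration-minutes) attributable to local sources."""
    rng = random.Random(seed)
    env, total = start, 0.0
    for _ in range(minutes):
        mu, sigma = CONC_PARAMS[env]
        total += rng.lognormvariate(mu, sigma)   # 1-minute concentration
        # Draw the next microenvironment from the transition probabilities.
        envs, probs = zip(*TRANSITIONS[env].items())
        env = rng.choices(envs, weights=probs)[0]
    return total

# Under the independence assumption, the variance of the time-averaged
# concentration within one microenvironment falls in inverse proportion
# to occupancy time: quadrupling the time roughly quarters the variance.
rng = random.Random(1)
def avg_conc(n):
    return statistics.fmean(rng.lognormvariate(0.5, 0.4) for _ in range(n))
v10 = statistics.variance(avg_conc(10) for _ in range(4000))
v40 = statistics.variance(avg_conc(40) for _ in range(4000))
ratio = v10 / v40   # close to 4
```

Replacing the independent per-minute draws with draws that combine a profile-level constant and a per-minute deviation turns this same loop into the variance-component simulation described above.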
For longer activities in the same microenvironment, the concentration is averaged over more time units. Similar assumptions were made in an earlier version of NEM; a more recent version of NEM incorporates serial correlation in the 1-minute averages (Johnson et al., 1990).

The convolution model assumes that microenvironmental concentrations are statistically independent of activity pattern. That implies that they are not correlated with activity time and that the variance of the concentration stays constant, irrespective of time. That needs to be validated. Switzer (Stanford University) noted in a private communication with Duan in 1982 that the forms of the variance functions used in both models might be unrealistic and that some compromise between the two might be desirable.

With either the additive or multiplicative form of the variance-component model, the time-invariant components are assumed to be stochastically independent of the time-varying components. It is further assumed that for different time units, the time-varying components are independent from one interval to the next. Alternatively, it can be assumed that the time-varying components have an autocorrelation structure. Duan (1985) examined data from EPA's Washington, D.C., CO study and found that concentrations and time intervals were unrelated. Ott et al. (1988) used data from EPA's CO study in Denver to examine the validity of SHAPE, comparing exposure distributions of CO estimated with SHAPE and with the direct approach (personal monitoring). They found the estimated average exposures to be similar but the estimated exposure distributions to differ at the extremes of the distributions. That result might be due to failure to account for autocorrelation and the time-invariant component. Duan (1989) examined several statistical parameters for microenvironments in data from the Washington, D.C., CO study and found the time-invariant component to be dominant.

Temporal Aspects

One cause of inaccuracy in exposure modeling is failure to obtain measurement data on an appropriate time scale. Outdoor air is often sampled in the summer, and concentrations for an entire year are then estimated on the basis of a single season. But sampling and analysis programs must cover enough time for concentrations to be reasonably estimated for a full year, if they are to serve as reliable inputs to exposure models. Very few sampling studies have extended over a long enough period to reveal seasonal and year-to-year variations. An example of good sampling design was that of the Portland Aerosol Characterization Study (Cooper and Watson, 1979). The researchers attempted to learn the representative composition of airborne particulate matter and its sources without having to sample every day and analyze every sample. They stratified the year into eight defined meteorological regimes and took samples when conditions and time of year were appropriate.
Although many samples were taken, only enough were analyzed to yield useful average values for each regime. The regime averages were then combined in proportion to their probability of occurrence during the year. Representative annual concentration averages were obtained at a reasonable level of effort for both sampling and analysis. However, because of the variability of occupancy times, different averaging times may be appropriate for estimating average exposures than for estimating average concentrations.

Many estimates of annual average concentrations of indoor radon are based on measurements taken over periods of a few days under conditions that are

quite unrepresentative of those existing in a house over a whole year. The estimates so derived can easily differ from true annual averages by a factor of 2 or more, because, for example, the conditions that give rise to indoor radon change from season to season (Nero et al., 1986).

Modeling of very long exposures, as is required in assessing risk associated with exposure to carcinogens, presents several major difficulties. The typical practice is to measure or model the concentration of a contaminant at one time and determine lifetime exposure by multiplying that concentration by a long period, e.g., the lifetime of a person. However, both exposures and activity patterns change substantially over a lifetime. Industrial processes also change over time. Sources (such as wood-burning stoves) are introduced, and sources (such as catalytic converters in motor vehicles) are eliminated or modified. Large facilities typically have a design life of 30 years, so considerable uncertainty can be anticipated in a typical calculation of 70-year lifetime exposure.

Time-activity patterns and locations of people also vary substantially over long periods. In the United States, people change their place of residence frequently and rarely live in the same place over a lifetime. For agents such as radon, such mobility can have a substantial impact on exposure and thus on the use of exposure estimates in an epidemiological study. A person's activity patterns shift from childhood through early adulthood and middle age to old age. There have been some efforts to address differences in exposure associated with aging, but this aspect of variability in exposure over long periods has generally not been addressed in exposure modeling.

The modeling of short-duration peak exposures is also attended by temporal problems.
Typical steady-state airborne-concentration models are not able to provide estimates for periods shorter than 1 hour and have difficulty in modeling time-varying concentrations, which can lead to high short-term exposures. If an exposure model is to estimate the effects of peak exposures on sensitive populations, the concentration model must provide reliable estimates on biologically relevant time scales. Some important developments in stochastic models that might be able to provide such estimates have not yet been incorporated into exposure-estimation procedures.

SUMMARY

Models are useful tools for quantifying the relationship between air-pollutant exposure and important variables, as well as for estimating exposures in situations where measurements are unavailable. Models may obviate extensive environmental or personal measurement programs by providing estimates of

population exposures that are based on small numbers of representative measurements. They can be used to identify major exposure parameters and to assist epidemiological studies and risk assessments. Models generally rely on assumptions and approximations to describe quantitatively cause-and-effect relationships that are otherwise difficult to determine. Despite the simplifications inherent in models, they provide insights and information about the relationships between exposure and the independent variables that determine exposure.

Models discussed in this chapter are classified into two broad categories: those that predict exposure (in units of concentration multiplied by time) and those that predict concentration (in units of mass per volume). Although concentration models are not truly exposure models, their output can be used to estimate exposures when combined with information on human activity patterns. Exposure models can be used to estimate individual exposures or the distribution of individual exposures in a population. Activity patterns and microenvironmental contaminant concentrations, the inputs to exposure-prediction models, can be measured or modeled.

Concentration models are separated into several types within two categories: models based on the principles of physics and chemistry and models that statistically relate measurements of concentrations to independent variables thought to be direct determinants of concentration. Many hybrids of these two basic approaches to modeling contaminant concentrations also exist.

Concentration Models

These models are used extensively to estimate outdoor contaminant concentrations at specific sites. They use physical, chemical, and statistical methods to address contaminant source release, dispersion, reaction, and deposition. Models are also used to estimate indoor contaminant concentrations; most of these applications have occurred in occupational or industrial settings.
They generally focus on the contaminant concentration in a worker's breathing zone.

Over the past decade, research in air quality in nonindustrial indoor environments has dramatically changed the understanding of human exposures to many airborne contaminants. Many critical factors involved in residential exposures differ from those in industrial exposures. For example, the indoor exposed population includes members who are very young, very old, or infirm. The potential indoor exposure duration in residences is much longer than a typical working career. The concentrations of contaminants and the ventilation rates are often much lower in residences than in industrial environments. Most advances in indoor-air modeling have come from increasing the sophistication and complexity of the models.

Outdoor

New developments in stochastic dispersion models offer improvements in the prediction of the average and time-varying concentrations to which individuals are exposed. Receptor models can be used to cross-validate dispersion models. They also can be used to identify sources of exposure.

In many cases, the data describing source characteristics are not available on the time scale at which the model predictions are needed. Such mismatches between the time scale of the measurements and the time scale of the models preclude adequate model development, validation, and application to new biologically relevant exposure situations. Because of the changing nature of sources and source emissions with changes in production and control technology and in economic conditions, it is necessary to measure periodically the amounts and chemical characteristics of sources of airborne contaminants.

Improvements in photochemical models now permit far more accurate predictions of the spatial and temporal variability of ozone and some other atmospheric constituents than were previously possible. However, it is still not possible to incorporate the complete, explicit mechanisms into air-shed or long-range transport models.

Indoor

Current models used to predict worker exposures to airborne toxicants are relatively simple, undeveloped, and unvalidated. This deficiency has caused practitioners to use models not as estimation techniques but as though they were conservative screening techniques. Little work has been done to model very short-term exposures (peak exposures) and gradients relative to dispersion, deposition, and ventilation in indoor environments. The sources of indoor-air pollution need to be characterized.
Measuring and modeling the temporal patterns of source strength as a function of readily identifiable or measurable source characteristics are critical steps in that process. In addition, more work is needed to model the relationship of indoor-air quality to the composition of the ambient atmosphere. Furthermore, the chemistry of the indoor atmosphere remains to be investigated.

The variability of concentrations in indoor industrial air over short time

frames needs to be measured for emergency situations. The validation of models to predict concentrations is linked to appropriate sampling time frames and methods with adequate sensitivity to specific chemical species.

Indoor-Air Chemistry

Indoor-air chemistry needs substantial research, including surface reactions on various materials, sorption, deposition, and the rates of these processes relative to ventilation or other loss mechanisms.

Exposure Models

Current exposure models are based on relatively general assumptions about the distribution of contaminant concentrations in microenvironments, the activity patterns that determine how much time people spend in each microenvironment, and the representativeness of a sample to the population that might be exposed to a contaminant. In a model of individual exposure, contaminant concentrations in each microenvironment are measured or modeled, and time-activity patterns are used to estimate the time spent in each microenvironment. Modeling exposure of populations requires the combining of microenvironmental concentrations with individual activity patterns and extrapolation of the results to a population.

Models for predicting exposures to populations have been developed recently. They have not, however, been adequately validated. Limited validation studies of the SHAPE exposure model, for example, have shown that average values are well predicted but that there are substantial discrepancies in the tails of the distribution. Further development and validation of the models are warranted. One cause of inaccuracy in exposure modeling is failure to obtain measurement data on an appropriate time scale. Sampling and analysis must cover sufficient time for concentrations to be reasonably estimated for a full year, if they are to serve as reliable inputs to exposure models.

Source Models

Source emission models are available to predict mass emission rates for a variety of dynamic and steady-state emission problems.
The available emission models allow the estimation of downwind exposure for continuous or catastrophic releases of pure compounds or binary mixtures. These models

have not been validated. Dense-cloud dispersion models are available to estimate downwind exposure for heavier-than-air vapor releases; they also have not been validated. Emission-rate estimation protocols are available for defining losses from chemical-processing equipment. Emission modeling coupled with dispersion modeling and time-activity estimates allows estimation of workplace-population exposures before construction of new production facilities.

Validation

Further validation studies are needed for virtually all existing models, including concentration-prediction and exposure models. In particular, immediate efforts are needed to validate the NEM model and to modify it to reflect more accurately the actual situations that can result in high population exposures. Valid emission-rate models are needed to provide precise estimates for multicomponent mixtures. Validated dispersion models are needed to predict downwind concentrations in complex terrain and so provide accurate exposure estimates for down- and up-gradient terrain conditions. The same data set cannot be used to refine and validate a model; new, independent data are required to validate any refined model. All assumptions used in developing a model should be documented explicitly. Care should be taken by investigators in any field-monitoring program to integrate their measurements with the modeling community's needs, so that the requisite model input data are obtained and the measurement results can be used to test, refine, or validate appropriate models.

Measurements are needed of the concentrations of airborne pollutants in workplaces and homes, along with the critical independent variables, such as source-emission-rate distributions and indoor general ventilation fields. Concentration gradients within physically defined microenvironments also need to be measured accurately.
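The flat-terrain baseline that validated dispersion models for complex terrain would extend is the standard Gaussian plume, which couples a source emission rate to a downwind concentration. The sketch below is generic; the emission rate, wind speed, and dispersion parameters are illustrative values rather than any particular stability-class scheme.

```python
import math

def gaussian_plume(q, u, sigma_y, sigma_z, y=0.0, z=0.0, h=0.0):
    """Ground-reflected Gaussian plume concentration (mass/m^3) at a
    receptor: emission rate q (mass/s), wind speed u (m/s), lateral and
    vertical dispersion parameters sigma_y, sigma_z (m), crosswind
    offset y (m), receptor height z (m), effective release height h (m)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # ground reflection
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level concentration for a 1 g/s ground-level release
# (sigma_y and sigma_z would normally come from downwind distance and
# atmospheric stability).
c = gaussian_plume(q=1.0, u=3.0, sigma_y=50.0, sigma_z=25.0)
```

The concentration scales linearly with the emission rate and inversely with wind speed, which is why uncertainty in the source term propagates directly into the exposure estimate.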
When planning measurement campaigns, consideration should be given to sampling strategies that would permit the extrapolation of the results to biological time frames other than those of the measurement program.