GAS-PHASE CHEMICAL MEASUREMENTS
Certain trace gases are important in airborne particle formation, light scattering, and light absorption in the troposphere. The key gases are sulfur dioxide (SO2), nitric oxide (NO), nitrogen dioxide (NO2), ozone (O3), ammonia (NH3), hydrogen peroxide (H2O2), and the non-methane hydrocarbons (see Appendix A). Knowledge of their concentration distributions is essential for applying modeling methods and is necessary for developing mechanistic models. There are generally accepted methods for measuring those gases in the atmosphere; the methods are considered briefly in this appendix.
Gaseous Sulfur Dioxide
Sulfur dioxide is the precursor to sulfuric acid (H2SO4), ammonium bisulfate (NH4HSO4), and ammonium sulfate ((NH4)2SO4). In most regions, sulfates (SO42-) are the single most important class of particles responsible for visibility degradation. Methods for measuring SO2 that have proven successful in the field are flame photometric instruments (e.g., Maroulis et al., 1980; McGaughey and Gangwal, 1980; Garber et al., 1983; Thornton et al., 1986); fluorescence analyzers (Schwarz et al., 1974); pulsed fluorescence analyzers (Maffiolo et al., 1982); single photon laser-induced fluorescence (Bradshaw et al., 1982); mass spectrometric analysis (Driedger et al., 1987); filter pack systems (e.g., Quinn
and Bates, 1989) coupled with colorimetric methods or ion chromatography; permeation sampling (McDermott et al., 1979); diffusion scrubber (Lindgren and Dasgupta, 1989); diffusion denuder (Possanzini et al., 1983; Keuken et al., 1988; Vossler et al., 1988); thermodenuder systems (e.g., Slanina et al., 1987); and chemiluminescence (e.g., Zhang et al., 1985).
When SO2 is present at very low concentrations (< 0.1 ppb), a sample can be accumulated by cryo-trapping an airstream over a period of several minutes; the trapped SO2 in the sample is then separated by gas chromatography and detected by one of the methods named above. Commercial instrumentation is available, but the instruments often must be modified to improve their sensitivity in relatively clean air.
Oxides of Nitrogen
The oxides NO and NO2, collectively referred to as NOx, are important in the generation of nitric acid (HNO3) in the troposphere. HNO3 can react with various bases, such as NH3, to form light-scattering airborne particles, such as NH4NO3. NO is commonly analyzed through the chemiluminescence of excited NO2 molecules formed in the NO-O3 reaction (Ridley and Howlett, 1974; Ridley et al., 1988). Commercial instruments are available for NO, but they must be modified to measure NO at the low levels (less than 0.1 ppb) encountered in relatively clean air (Dickerson et al., 1984).
The other common oxide of nitrogen, NO2, is also important in airborne particle formation. NO2 is a strong absorber of visible and ultraviolet light and can thereby contribute to haze. However, because of its high reactivity and relatively short lifetime, NO2 does not normally contribute significantly to haze in remote areas; it is a problem only in areas close to sources.
NO2 can be analyzed in air by several instrumental methods: photolysis and chemiluminescence (Kley and McFarland, 1980); tunable diode laser spectrometers (Reid et al., 1978a,b; Schiff et al., 1987); and chemiluminescence from the NO2 luminol reaction (Schiff et al., 1986, 1987). A field intercomparison of the three methods has shown their utility and their limitations (Fehsenfeld et al., 1990). In some commercial instruments, NO2 is estimated by first using NO-O3 chemiluminescence to measure NO in an ambient air sample; a sample of ambient air
is then subjected to a reduction cycle (with heated molybdenum oxide catalyst) that reduces NO2 to NO. The concentration of NO2 in the air is estimated by taking the difference between the two measurements. However, that procedure is subject to errors because the reduction cycle will generate NO from nitrogen compounds other than NO2 (e.g., peroxyacetyl nitrate, PAN). The NO2 luminol method is subject to small interferences from PAN and O3.
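The difference procedure amounts to a simple subtraction with a converter-efficiency correction. The sketch below illustrates that arithmetic; the concentrations and efficiency are hypothetical values chosen for illustration, not measurements from any study cited here.

```python
# Illustrative arithmetic for the chemiluminescence difference method.
# All concentrations (ppb) and the converter efficiency are hypothetical.

def no2_by_difference(nox_mode_ppb, no_mode_ppb, converter_efficiency=1.0):
    """Estimate NO2 as the difference between the converter (NOx) channel
    and the direct NO channel, correcting for converter efficiency."""
    return (nox_mode_ppb - no_mode_ppb) / converter_efficiency

no = 0.8    # NO measured directly by NO-O3 chemiluminescence
nox = 2.3   # signal after the heated molybdenum oxide converter
no2 = no2_by_difference(nox, no, converter_efficiency=0.95)
print(f"estimated NO2 = {no2:.2f} ppb")
# Because interferents such as PAN are also reduced to NO, this
# estimate is an upper bound on the true NO2 concentration.
```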
Ozone
Ozone is a critical gas because it is responsible for initiating much of the chemistry that leads to the generation of secondary airborne particles. O3 can be measured easily by several available techniques, including chemiluminescence from the reaction with either ethylene (Nederbragt et al., 1965; Warren and Babcock, 1970) or nitric oxide (Fontijn et al., 1970) and absorption spectroscopy in the 253.7 nm region (Bowman and Horak, 1972; de Vera et al., 1974). Commercial instruments for each of those methods are available; the sensitivity of some of the instruments is sufficient for use in remote ''background'' areas.
Ammonia
Ammonia gas is the only major basic gas-phase substance in the troposphere; as such it is important in the generation of sulfate and nitrate particles through the neutralization of sulfuric and nitric acid. NH3 can be measured by a variety of techniques: photo-fragmentation and laser-induced fluorescence (van Dijk et al., 1989); tungstic oxide (WO3) denuder sampling followed by chemiluminescence detection (Braman et al., 1982, 1986; LeBel et al., 1985); molybdenum oxide annular denuder sampling followed by chemiluminescence detection (Langford et al., 1989); citric acid coated denuder sampling followed by liquid extraction and ion chromatographic analysis (R.B. Norton, pers. comm., Aeronomy Lab, NOAA, Boulder, Colo., 1991, modified from Ferm, 1979); and filter pack sampling followed by liquid extraction and colorimetric (Solorzano, 1969; Harwood and Kuhn, 1970) or ion chromatographic analysis.
Hydrogen Peroxide
Hydrogen peroxide gas is important in the generation of sulfuric acid through its oxidation of dissolved SO2 (present as HSO3-) in fog and cloud water. The subsequent evaporation of fog or cloud droplets results in the formation of particles of sulfuric acid or its salts. H2O2 can be measured by several techniques: scrubbing coupled with fluorometric detection (Lazrus et al., 1985, 1986); diffusion scrubbing with fluorescence detection (Dasgupta et al., 1986; Hwang and Dasgupta, 1986); tunable diode laser absorption spectrometry (Slemr et al., 1986); and glass impinger collection followed by luminol detection (Kok et al., 1978). Those techniques have been recently compared (Kleindienst et al., 1988); the results provide guidance about the suitability of each for specific environmental conditions.
Gas-Phase Organic Carbon Compounds
Volatile organic compounds (VOCs) are present in great variety in the atmosphere. Both natural and anthropogenic emissions are important sources of VOCs. Many of them play a key role in atmospheric chemistry because their oxidation products react with the oxides of nitrogen to generate ozone; indeed, in many polluted regions, high concentrations of ozone are attributable largely to those reactions (NRC, 1991b). The hydrocarbons usually are analyzed with gas chromatographic separation and flame ionization detection of the individual components (Greenberg and Zimmerman, 1984; Westberg and Lamb, 1985; Christian and Reilly, 1986; Zimmerman et al., 1988b; Rasmussen and Khalil, 1988). Mass spectrometers to identify components following chromatography have proved to be useful (Penkett, 1982).
It is often difficult to use conventional techniques to analyze air samples for gaseous substances in remote areas such as national parks and wilderness areas because of the complexity of the equipment required and the lack of electrical power. Portable battery-operated systems have
been designed to detect NO2, NO, O3, and SO2. Alternatively, certain simple filter systems can be used readily in remote areas, or the air can be collected in stainless steel, electropolished canisters and returned to the laboratory, where more complex but more accurate methods can be used for analysis. In the latter case, care must be taken to avoid loss of trace gases during storage in the canisters (Greenberg and Zimmerman, 1984).
Many of the commercial instruments designed to monitor impurities at elevated concentrations in stack gases are not sensitive enough to monitor the same impurities at the low concentrations usually encountered in the atmosphere. The analysis of atmospheric trace gases requires highly specialized equipment that satisfies rather specific and demanding requirements:
The instrument should minimize interferences from other components of the atmosphere.
The sensitivity of the instrument should be sufficient to track a given trace gas at the very low levels commonly present in the atmosphere.
The stability of the instrument should be good enough to allow many hours of unattended operation.
The brief review above of instrumentation for the analysis of particle precursor gases indicates the types of equipment available today that satisfy these stringent requirements. Further guidance on the choice of instrumentation to satisfy particular needs can be gained from the references cited in this section.
MEASUREMENTS OF ATMOSPHERIC PARTICLES
Airborne particle measurements are critical in any visibility program for several reasons:
Fine particles (i.e., smaller than 2.5 µm in diameter) are responsible for most visibility impairment. The measurement of fine-particle
mass concentrations can provide a first-order means of tracking trends in visibility impairment and therefore can be used as a basis for regulatory action.
The chemical composition of the aerosol yields information about the amount of extinction due to each pollutant (e.g., sulfates, organic carbon, and soil dust).
When used with receptor models, the chemical composition of the aerosol can provide clues about its origin. Apportioning extinction to various groups of sources is an essential step in targeting emission controls.
For those reasons, airborne particle measurements are a major component of any field study of visibility impairment.
Particle Size Distribution
Single-particle electro-optical counters and electrical mobility analyzers are the most commonly used instruments for measuring size distributions of atmospheric particles. An extensive review of instrumentation is given in ACGIH (1989).
Single-Particle Optical Counters
Single-particle optical counters (OPCs) measure the amount of light scattered by individual particles as they flow through a volume illuminated by a laser or white light source. The minimal detection limit for OPCs varies with instrument design but typically is in the range of 0.1-0.5 µm. The response of a given OPC to a given particle depends on the characteristics of the light source (wavelength distribution and polarization) and on the geometry of optics used to collect the scattered light. For homogeneous spherical particles of known refractive index, typically there is good agreement between measured and predicted OPC response and particle size.
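In practice, an OPC assigns a size to each pulse by inverting a calibration curve measured with spheres of known refractive index. A minimal sketch of that lookup follows; the calibration points are invented for illustration, and the monotonicity assumption built into the lookup is exactly what breaks down for atmospheric particles of unknown refractive index.

```python
# Sketch of OPC size assignment from a monotonic calibration curve.
# Calibration pairs (diameter in um -> pulse height, arbitrary units)
# are hypothetical; a real curve is measured with calibration spheres
# and is valid only for optically similar particles.

CAL = [(0.1, 1.0), (0.3, 12.0), (0.5, 55.0), (1.0, 230.0)]

def signal_to_diameter(signal):
    """Linearly interpolate diameter from the scattered-light signal,
    assuming the response is monotonic over the calibrated range."""
    if signal <= CAL[0][1]:
        return CAL[0][0]
    for (d0, s0), (d1, s1) in zip(CAL, CAL[1:]):
        if s0 <= signal <= s1:
            frac = (signal - s0) / (s1 - s0)
            return d0 + frac * (d1 - d0)
    return CAL[-1][0]   # saturate at the top of the calibrated range

print(signal_to_diameter(33.5))   # diameter (um) between two cal points
```

For particles in the 0.5-1.0 µm range the true Mie response can be non-monotonic, so a single signal may correspond to more than one size; no lookup of this kind can resolve that ambiguity.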
There are several major difficulties in using OPCs to measure size distributions of airborne particles. If a large fraction of the measured
particles are nonspherical, generally it is not possible to measure size distribution accurately. Also, the refractive index of sampled atmospheric particles generally is unknown. To relate measurements to size distributions, a refractive index must be assumed. Indeed, measurements have shown that atmospheric particles of a given size can contain an external mixture of particles having two or more refractive indexes (Hering and McMurry, 1991; Covert et al., 1990). Even when particles are chemically alike, particles of different sizes can produce an identical OPC response (e.g., Szymanski and Liu, 1986). That phenomenon is especially likely for particles in the optically sensitive 0.5–1.0 µm diameter range. Finally, heating within the instrument can drive off volatile components, especially water; that will lead to a measured size distribution that is smaller than that in the ambient air and, consequently, to an erroneous estimate of the effect of the particles on visibility (Biswas et al., 1987).
These difficulties limit the use of OPCs for measuring airborne particle size distributions. Nevertheless, because the OPC provides real-time information and can operate in the optically sensitive 0.1–1.0 µm size range, it is one of the most convenient instruments available for measuring physical size distributions.
Electrical Mobility Analyzers
The electrical mobility of a particle in an electric field depends on the particle charge, size, and shape. The size and electrical mobility of spherical particles with a known charge are uniquely related. Therefore, electrical mobility measurements can be used to determine the diameters of such particles.
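The size-mobility relation for a charged sphere is the standard Stokes-Millikan expression, Z = neCc/(3πµd), with Cc the Cunningham slip correction. The sketch below evaluates it with typical sea-level constants; this is a generic textbook formula, not code from any instrument cited here.

```python
import math

# Electrical mobility of a charged sphere (Stokes-Millikan expression).
# Constants are typical sea-level values for air.
E = 1.602e-19     # elementary charge, C
MU = 1.81e-5      # dynamic viscosity of air, Pa*s
MFP = 67.3e-9     # mean free path of air, m

def slip_correction(d_m):
    """Cunningham slip correction with Allen-Raabe coefficients."""
    kn = 2.0 * MFP / d_m
    return 1.0 + kn * (1.142 + 0.558 * math.exp(-0.999 / kn))

def electrical_mobility(d_m, n_charges=1):
    """Mobility Z = n*e*Cc / (3*pi*mu*d), in m^2 V^-1 s^-1."""
    return n_charges * E * slip_correction(d_m) / (3.0 * math.pi * MU * d_m)

# Mobility falls steeply with size, which is why mobility analysis
# loses size resolution for particles much larger than a few tenths
# of a micrometer.
for d in (0.01e-6, 0.1e-6, 0.5e-6):
    print(f"d = {d * 1e6:.2f} um   Z = {electrical_mobility(d):.3e}")
```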
There are two types of electrical mobility analyzers. The electrical aerosol analyzer (EAA) (Whitby and Clark, 1966; Liu and Pui, 1975) measures the electrical mobility distribution of airborne particles that have been charged by exposure to a unipolar cloud of small ions. The unipolar charger is used to put the maximal possible charge on the particles. One limitation of the EAA is that particles of a given size emerging from the charger contain a distribution of charges rather than a unique charge. Therefore, there is no unique relation between particle size and electrical mobility. For particles smaller than about 0.1–0.3
µm, the distribution of charges does not present a major problem because the range of charges on particles of that size is narrow. However, for larger particles, the charge distribution broadens and the size sensitivity of electrical mobility decreases. It is difficult, therefore, to obtain accurate information on the size distribution of particles larger than about 0.3 µm. Because OPCs perform best for particles larger than 0.3 µm and EAAs perform best for smaller particles, both instruments are often used together to measure overall particle size distributions.
The differential mobility analyzer (DMA), also referred to as the electrostatic classifier, can be used to determine atmospheric particle size distributions from measurements of electrical mobility distributions (Liu and Pui, 1975; Knutson and Whitby, 1975; Fissan et al., 1983; Hoppel et al., 1990). Two major differences between the DMA and the EAA are that (1) particles entering the DMA are typically charged with a bipolar ion charger, in which particles are exposed to an ion cloud containing both positive and negative ions; and (2) the DMA detects particles in a narrow mobility window rather than particles with mobilities below some minimal value, as is done with the EAA. The net effect of the differences is that size distributions of particles smaller than about 0.5 µm can be measured far more precisely with the DMA than the EAA. Recent improvements in the design of such instruments permit accurate size distribution measurements of particles as small as 0.002 µm in diameter (Winklmayr, 1987; Reischl, 1991). Although the DMA has received little application in measuring atmospheric particles, it undoubtedly will be used increasingly.
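The mobility window a cylindrical DMA selects is set by its flows, geometry, and voltage through the Knutson-Whitby relation, Z* = Q_sh ln(r2/r1)/(2πVL). The sketch below evaluates it; the column dimensions and flow are illustrative values roughly typical of a long-column instrument, not the specifications of any instrument cited here.

```python
import math

# Centroid mobility selected by a cylindrical DMA (Knutson and Whitby,
# 1975): Z* = Q_sh * ln(r2/r1) / (2*pi*V*L).
def dma_centroid_mobility(q_sheath_m3s, r_outer_m, r_inner_m, voltage_v,
                          length_m):
    return q_sheath_m3s * math.log(r_outer_m / r_inner_m) / (
        2.0 * math.pi * voltage_v * length_m)

q_sh = 10.0 / 60000.0   # 10 L/min sheath flow, converted to m^3/s
z_star = dma_centroid_mobility(q_sh, 0.0196, 0.00937, 3000.0, 0.444)
print(f"selected mobility = {z_star:.3e} m^2 V^-1 s^-1")
# Scanning the voltage scans the selected mobility window; a size
# distribution is built up from counts at a sequence of voltages.
```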
Particle Mass and Chemical Size Distributions
Filters are often used for obtaining bulk aerosol samples. A wide variety of filter sampling materials is available (e.g., Teflon, nylon, polypropylene, polystyrene, cellulose fiber, glass fiber, and polycarbonate). Filter samples are useful for obtaining a measure of gross particle concentrations and composition. Such measurements are important for flux data, but they do not provide the information about chemical speciation as a function of size that is required for many applications. Also, filter samples are prone to artifacts generated by reactions of the sampled substances with the sample matrix or among the atmospheric constituents themselves. That subject is discussed more fully later in this appendix.
Cascade impactors are the most commonly used instruments for measuring the size-resolved chemical composition of airborne particles. A cascade impactor consists of a series of impactor stages. The top stage removes the largest particles, and subsequent stages remove successively smaller particles. Impactors classify particles according to their aerodynamic diameter, which depends on particle shape and density. Although most impactors have a lower size cut (i.e., minimum particle size that can be collected by inertial impaction) of about 0.3–0.5 µm in aerodynamic diameter, impactors with size cuts as small as 0.05 µm have been developed (Hering et al., 1979; Berner et al., 1979; Hering and Marple, 1986; Cahill et al., 1987). Because most atmospheric particle mass consists of particles larger than 0.05 µm, such impactors can collect virtually all the particulate mass.
Impactors are highly versatile devices and cover a wide range of particle sizes and sampling rates. With appropriate substrate materials, impactor deposits can be analyzed for a large variety of aerosol components. Analytical techniques that have been applied to impactor deposits include ion chromatography, flash volatilization, elemental and organic carbon analysis, x-ray fluorescence, and proton-induced x-ray emission.
Although impactors and filter systems are used widely in visibility studies, problems with their performance must be thoroughly understood to avoid compromising the acquired data.
There are several common problems with the use of cascade impactors for atmospheric particles. The most troubling is particle bounce, in which particles bounce off a collection stage and are collected downstream (Dzubay et al., 1976; Wesolowski et al., 1977; Reischl and John, 1978; Lawson, 1980; Wang and John, 1987). Particle bounce leads to overestimates of concentrations in the small-size ranges. Bounce can be reduced or eliminated by coating the impaction substrates with a sticky substance (Turner and Hering, 1987; Wang and John, 1988), although greases can interfere with subsequent chemical analysis.
Particle Shape and Density
Particle density must be known to compute the actual diameter from the aerodynamic diameter. However, atmospheric particles are not always spherical, their densities are not always known, and particles of a given size might have more than one density. Thus, computed diameters are often based on assumptions that can lead to considerable error.
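The density sensitivity is easy to quantify. For a smooth sphere, and neglecting slip-correction differences, the aerodynamic and geometric diameters are related by d_a = d_g(ρ_p/ρ_0)^1/2 with ρ_0 = 1 g/cm3. The sketch below shows how much the inferred physical size of a fixed aerodynamic cut shifts with the assumed density; the densities are illustrative round numbers.

```python
import math

# Convert aerodynamic to geometric diameter for a smooth sphere
# (dynamic shape factor of 1), neglecting slip-correction differences:
#   d_a = d_g * sqrt(rho_p / rho_0), with rho_0 = 1 g/cm^3.
def geometric_from_aerodynamic(d_a_um, density_g_cm3):
    return d_a_um / math.sqrt(density_g_cm3 / 1.0)

# The same 1.0-um aerodynamic cut corresponds to different physical
# sizes depending on the assumed density (illustrative values):
for rho in (1.0, 1.77, 2.6):   # water, ammonium sulfate, typical dust
    d_g = geometric_from_aerodynamic(1.0, rho)
    print(f"rho = {rho} g/cm^3: d_g = {d_g:.2f} um")
```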
Chemical Changes in the Sample
Chemical changes can occur during sampling with filters or cascade impactors and can lead to measurement errors. The artifacts include evaporation of semivolatile components during sampling, adsorption or absorption of gases onto sampling substrates, and changes in particle composition during sampling due to reactions among particles or with the gas stream.
Evaporative losses occur if the equilibrium partial pressure at the surface of the particulate deposit exceeds the gas-phase partial pressure above the deposit. Evaporation can result from the pressure drop through the sampler and from changes in temperature or aerosol composition during sampling. Theory indicates that evaporative loss is likely to be high when the ratio of the gas-phase concentration to the particulate concentration at the sampler inlet is large and that the extent of evaporation depends on whether the semivolatile compound is adsorbed, absorbed, or condensed on the particles (Zhang, 1990). Substances that can volatilize during sampling include HNO3 from acidified NO3- particles, NH3 from basic NH4+ particles, and certain organics.
Evaporative losses of particulate nitrates have been investigated in both laboratory and field experiments with impactors and filters. The laboratory studies (Wang and John, 1988) involved parallel sampling of ammonium nitrate particles with a Berner impactor (size cuts ranging from 0.08 to 16 µm) and a Teflon filter. Both samplers were followed by nylon filters to collect evaporated nitric acid. Losses from the impactor were 3–7% at 35°C and 18% relative humidity, and losses from the filter were 81–95% under the same conditions. The result that evaporative losses from the filter exceeded those from the impactor is consistent with theoretical predictions (Zhang and McMurry, 1987). The atmospheric measurements (Wall et al., 1988) involved sampling Los Angeles airborne particles with a Berner impactor located downstream of a nitric acid denuder, operated in parallel with a dichotomous sampler with nylon filters. The dichotomous sampler is known to sample particulate nitrate quantitatively (John et al., 1988). Losses from the impactor were found to be less than 10%. There have been no similar experiments in pristine regions, where low particulate nitrate loadings might lead to relatively large evaporative-loss errors in impactors.
Because evaporative losses of nitrates from filters tend to be large, measurement methods for particulate nitrate must include collection and measurement of nitric acid that evaporates from particles during sampling. One common sampling method involves using a nitric acid denuder to remove gas-phase nitric acid upstream of the particulate filter. The particulate filter is then followed by an adsorber to collect nitric acid that evaporates from the particles during sampling. Discussions of sampling methods for nitrates and other inorganics are given by Lewin and Klockow (1982), Possanzini et al. (1983), Eatough et al. (1985), Ferm and Sjodin (1985), Mulawa and Cadle (1985), Knapp et al. (1986), Keuken et al. (1988), and Vossler et al. (1988).
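The bookkeeping behind the denuder method is a simple sum: because the denuder removes gas-phase HNO3 before the filter, any nitrate found on the backup adsorber is attributed to evaporation from the particle deposit. The numbers below are hypothetical, chosen only to show the arithmetic.

```python
# Denuder-difference bookkeeping for particulate nitrate.  The denuder
# strips gas-phase HNO3 upstream, so nitrate on the nylon backup filter
# is attributed to evaporation from particles during sampling.
# Loadings are hypothetical (ug/m^3 as NO3-).
teflon_filter_no3 = 1.8   # nitrate remaining on the particle filter
nylon_backup_no3 = 0.6    # HNO3 evaporated from the deposit

total_particulate_no3 = teflon_filter_no3 + nylon_backup_no3
loss_fraction = nylon_backup_no3 / total_particulate_no3
print(f"total particulate NO3- = {total_particulate_no3:.1f} ug/m^3")
print(f"fraction lost by evaporation = {loss_fraction:.0%}")
# Reporting only the Teflon filter loading would understate the
# particulate nitrate by this loss fraction.
```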
The importance of evaporative sampling losses is not as well understood for organics as for nitrates. There is evidence that trace organic compounds, such as polycyclic aromatic hydrocarbons (PAH) and polychlorinated biphenyls (PCB), can evaporate from aerosol deposits during sampling (Commins and Lawther, 1957; De Wiest and Rondia, 1976; Katz and Chan, 1980; Koenig et al., 1980; Peters and Seifert, 1980; Galasyn et al., 1984; Marry et al., 1984; Coutant et al., 1988). Eatough et al. (1990) used a sampling system that included a denuder, filter, and sorbent bed for measuring organic particles. Their sampling scheme permitted determination of positive sampling artifacts (associated with vapor adsorption on filter media) and negative artifacts (associated with evaporative losses of vapors). They concluded that both positive and negative artifacts occurred but that negative artifacts tend to dominate. Negative artifacts as large as 2.6 µgC/m3 were collected at the Grand Canyon. Their results require confirmation by other measurement methods; however, they suggest that current methods for measuring atmospheric organic carbon may be seriously flawed.
Many investigators have reported that the adsorption of organic vapors on quartz filters can lead to a substantial positive sampling artifact for particulate organic carbon. For example, Cadle et al. (1983) found that when two quartz fiber filters were used in series, the amount of carbon collected on the second filter was at least 15% of the amount collected on the first filter. Because particles could not penetrate the first filter, the carbon on the backup filter was attributed to vapor adsorption. McDow and Huntzicker (1990) found that quartz backup filters collected more organic carbon when they followed Teflon prefilters than when they followed quartz prefilters, presumably because quartz prefilters are more effective than Teflon prefilters at removing adsorbing vapors, thereby reducing the vapor exposure of the afterfilter.
McMurry and Zhang (1989) used a cascade impactor with a cut point of about 0.1 µm to collect size-resolved atmospheric particle samples for chemical analysis; particles smaller than 0.1 µm were collected on a quartz afterfilter. Samples were analyzed for organic and elemental carbon, for various elements (by x-ray fluorescence), and for nitrate and sulfate ions. Measurements were made in Los Angeles and at pristine sites such as the Grand Canyon and Glen Canyon. Except for organic carbon, only a small fraction of the particle species were found on the afterfilter. However, a large portion (40–70%) of the collected organic carbon was found on the quartz afterfilter; the amount was higher at remote sites where particulate organic carbon concentrations were lower. McMurry and Zhang argued that much of the carbon on the afterfilter was due to the adsorption of carbon-containing gases. That suggests that gas adsorption on quartz filters can lead to major uncertainty in determining the particulate organic carbon concentrations in samples collected in visibility monitoring networks, where concentrations often are low but can be a large fraction of the fine particle mass.
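A common first-order treatment of the positive adsorption artifact is to subtract the carbon found on a backup filter, which sees the vapor but essentially no particles, from the front-filter loading. The sketch below shows that correction with hypothetical loadings; it is approximate, since the backup filter's vapor exposure differs from the front filter's, as the prefilter comparisons above indicate.

```python
# First-order backup-filter correction for the adsorption artifact.
# Loadings are hypothetical (ugC/m^3).
front_filter_oc = 2.0
backup_filter_oc = 0.9    # attributed to adsorbed organic vapor

corrected_particulate_oc = front_filter_oc - backup_filter_oc
artifact_fraction = backup_filter_oc / front_filter_oc
print(f"corrected particulate OC = {corrected_particulate_oc:.1f} ugC/m^3")
print(f"apparent artifact = {artifact_fraction:.0%} of the front filter")
# At remote sites, where particulate OC is low, the artifact can be a
# large fraction of the front-filter loading, so the correction (and
# its uncertainty) dominates the reported concentration.
```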
Chemical reactions that involve the deposited particles can also lead to sampling artifacts. For example, Klockow et al. (1979) found that sulfuric acid particles could not be extracted accurately from certain filter materials because of chemical reactions of the acid with the filter substrate. Brorström et al. (1983) found that particle-associated PAH was degraded when air containing NO2 and O3 was passed through the filter; the rate of PAH degradation increased with increasing acidity of the aerosol deposits.
An overview of aerosol sampling and sampling inlets has been given by Vincent (1989). Because of the strong size dependence of radiation-particle interactions, it is important to use samplers with well-characterized sampling inlets for visibility research and monitoring. In recent years, considerable effort has been made in developing inlets with a well-defined sampling efficiency for coarse particles. There have been several reasons for the effort. First, the size-dependent sampling efficiency of the hi-vol sampler, the standard reference method for measuring ambient total suspended particulate concentrations, was found to depend on wind speed and direction (McFarland et al., 1979). Therefore, the characteristics of the sampled particles were not well defined. There also has been an interest in developing inlets that collect only particles of the size range that can be inhaled by humans and, hence, might harm human health.
Recent work has shown that particle deposition in aircraft sampling inlets may be a much more severe problem than was previously believed. Huebert et al. (1990) found that 50–90% of sampled atmospheric particles deposited within aircraft inlets of various designs. Deposition losses occurred both for coarse particles and for fine-particle non-sea-salt sulfates. The poor sampling efficiency is not understood; theories predict high efficiency, especially for fine particles. Based on those observations, Huebert and co-workers concluded that much of the aircraft data for atmospheric particle concentrations probably underestimate the actual ambient concentrations by factors of 2–10. However, other intercomparisons (e.g., Daum et al., 1987; Baumgardner et al., 1992) have shown relatively good agreement between aircraft and ground measurements. A workshop was recently convened to investigate sampling characteristics for airborne particles (Baumgardner et al., 1992). It was concluded that ''at present, there is insufficient airborne experimental data to validate any existing inlet models.'' Because of the importance of airborne sampling in visibility research, the committee supports the recommendation from this workshop that more research on inlet sampling efficiencies be conducted, especially for submicron particle sampling with aircraft.
Electron Microscopic Analyses of Single Particles
Filter and impactor samples can be analyzed to determine the average composition of all collected particles. Such analyses, however, provide no information about particle-to-particle variations in composition. Recent work has shown that atmospheric particles are externally mixed to some extent; that is, particles in a given size range comprise distinct types of materials (Covert and Heintzenberg, 1984; Harrison, 1985; McMurry and Stolzenburg, 1989; Covert et al., 1990; Hering and McMurry, 1991). To understand the optical properties of the atmosphere containing those particles, detailed knowledge is needed of the composition of the individual particles and the relative concentration of each in the ensemble (White, 1986). Recent advances in electron beam technology make single-particle imaging and analysis a useful technique for obtaining such information.
Aden and Buseck (1983) reported on data-analysis procedures that provide quantitative (± 10%) elemental analyses by energy dispersive spectrometry for particles as small as 0.1 µm. With light-element detectors, elements as light as carbon can be analyzed. Post and Buseck (1984) used these methods to analyze 8,000 particles from the urban Phoenix atmosphere and found that most fell into the categories of minerals, sulfur (presumably sulfate), and lead from automobiles. Their microscope was not equipped with a light-element detector, so carbon-containing particles were not identified. Schwoeble et al. (1988) reported on a computer-controlled scanning electron microscope (SEM) that automatically locates particles on a substrate and digitally records the particle image and elemental composition. Automated microscopy facilitates the analysis of a statistically significant number of particles at a modest cost.
There are some problems with electron microscopy. Volatile substances such as water, nitrates, and some organics might evaporate when irradiated by the electron beam, and not all important elements can be analyzed. The environmental SEM (Danilatos and Postle, 1982; Danilatos, 1988) eliminates some of those problems. The electron gun and optics operate under high vacuum, but the samples are exposed to a positive gas pressure (greater than 5 torr). The gas phase above the sample can include water vapor, so water is not necessarily lost during pump down, as occurs in most other electron microscopes. The pres-
ence of the gas also eliminates the need to coat the sample with a conducting film. Recent developments suggest that electron microscopy shows promise for providing information on particle properties that is not provided by most common analysis methods.
Organic and Elemental Carbon Analysis
The total carbon content of filter or impactor samples can be measured accurately with a variety of techniques. For visibility measurements, however, it is important to distinguish between organic and "elemental" or "black" carbon. The different chemical forms tend to originate from different sources and have different optical effects.
A variety of analytical schemes have been developed to distinguish between organic and elemental carbon. In all the methods, the samples are exposed to a carrier gas (typically oxygen, air, or helium) in an oven where the particulate carbon is converted to a gas, which is measured by an appropriate detector. Particulate organic carbon tends to be released at lower oven temperatures than does elemental carbon, and that temperature dependence is used to distinguish between them. With these techniques, the pyrolysis of organic carbon within the furnace can lead to overestimates of the ratio of elemental to organic carbon. Some techniques use optical absorption to correct for pyrolysis.
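The accounting behind a thermal split can be sketched as simple bookkeeping over the carbon evolved at each temperature step. The step labels and loadings below are invented for illustration; the point is how the pyrolysis correction moves char from the elemental to the organic column.

```python
# Schematic bookkeeping for a thermal carbon split with a pyrolysis
# correction.  Carbon evolved at the lower-temperature steps, plus the
# char identified by the optical correction, is assigned to OC; the
# remainder is EC.  All loadings (ugC) are invented for illustration.
carbon_steps = [
    ("OC, 250 C", 3.1),
    ("OC, 550 C", 2.4),
    ("pyrolysis correction", 0.7),  # char formed from OC, evolved late
    ("EC, 700 C", 1.5),
]

oc = sum(ugc for label, ugc in carbon_steps if not label.startswith("EC"))
ec = sum(ugc for label, ugc in carbon_steps if label.startswith("EC"))
print(f"OC = {oc:.1f} ugC, EC = {ec:.1f} ugC, EC/OC = {ec / oc:.2f}")
# Without the pyrolysis correction the char would be counted as EC,
# inflating the EC/OC ratio; differing corrections are one source of
# disagreement among laboratories.
```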
Interlaboratory comparisons (Groblicki et al., 1983; Countess, 1990) showed a wide variation in ratios of organic to elemental carbon for identical samples. Samples included atmospheric aerosols, diesel and unleaded gasoline exhaust, soot, and organic particles collected from a smog chamber. Laboratories participating in those studies included most of those involved in the analysis of samples from visibility monitoring networks. The inconsistency of results among laboratories suggests that organic and elemental carbon analysis is a major source of uncertainty in aerosol composition for samples collected in visibility monitoring networks.
Water Content of Airborne Particles
Water often constitutes a substantial portion of atmospheric particles. For example, water accounts for 63% of the mass of ammonium sulfate
droplets at 85% relative humidity and 20°C. Because water causes substantial changes in the size of hygroscopic particles, it plays a central role in determining the visibility effect of airborne particles. However, because water is so volatile, the water content of airborne particles is difficult to measure.
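The water mass fraction cited above can be estimated from a particle's diameter growth factor if volume additivity is assumed. The following sketch uses an assumed growth factor of about 1.6 for ammonium sulfate at 85% relative humidity and illustrative densities; none of these numbers comes from the text.

```python
# Estimate the water mass fraction of a hygroscopic droplet from its
# diameter growth factor g = D_wet / D_dry, assuming volume additivity.
# The growth factor (~1.6 for ammonium sulfate at 85% RH) and the
# densities are illustrative assumed values, not taken from the text.

def water_mass_fraction(growth_factor, rho_dry=1.77, rho_water=1.0):
    """Return the mass fraction of water in the wet droplet.

    Densities are in g/cm3; rho_dry ~ 1.77 for dry ammonium sulfate.
    """
    dry_volume_fraction = 1.0 / growth_factor**3      # V_dry / V_wet
    water_volume_fraction = 1.0 - dry_volume_fraction
    dry_mass = dry_volume_fraction * rho_dry          # per unit wet volume
    water_mass = water_volume_fraction * rho_water
    return water_mass / (water_mass + dry_mass)

print(round(water_mass_fraction(1.6), 2))  # ~0.64, close to the 63% cited
```

Under these assumptions, a growth factor of 1.6 reproduces a water mass fraction close to the 63% value quoted above.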
Several methods have been used to measure aerosol water content. Hänel (1976) and co-workers used impactors to collect particles in one or two size cuts with geometric diameters larger than 0.3–0.5 µm. An electronic microbalance was then used to measure the mass for relative humidities ranging from 0 to 95%; densities were determined with a specially designed gas pycnometer. The mass and density measurements were used to determine the average dependence of size on relative humidity. Measurements of airborne particles collected in urban, desert, and mountainous regions showed that water uptake by these diverse particles was highly variable.
Ho et al. (1974) used microwave resonance to measure the water content of samples collected on glass fiber filters. Measurements were made in Southern California during Fall 1972. The particulate water content was found to increase from 10% of the airborne particle mass at 50% relative humidity to 40% at 70% relative humidity.
McMurry and Stolzenburg (1989) used a tandem differential mobility analyzer (TDMA) to measure the sensitivity of particle size to relative humidity for particles in Los Angeles. With this technique, atmospheric particles of a known size are segregated using a differential mobility analyzer (DMA). The final size after humidification or dehumidification is determined to within 1–2% using a second DMA. McMurry and Stolzenburg found that particles in the 0.05–0.5 µm range often contained both hygroscopic and nonhygroscopic fractions. The hygroscopic-to-nonhygroscopic ratio varied substantially from day to day. For particles in the hygroscopic fraction, particle diameters increased by factors ranging from 1.12 ± 0.05 at 0.05 µm to 1.46 ± 0.02 at 0.5 µm as relative humidities were increased from 50% to 90%.
The approaches described above for measuring the chemical composition of airborne particles all require the collection of samples over ex-
tended sampling periods. Useful information often can be obtained with more refined temporal resolution. For example, measurements that can be taken continuously (e.g., light extinction or scattering and concentrations of various gases, including SO2) often show large variations during the sampling periods needed for collecting particulate samples. Concentrations of particles almost certainly undergo similar large variations. Continuous measurements of particulate chemical concentrations could provide important information about atmospheric transport and transformations and about the effects of specific point sources on visibility at a receptor site.
Continuous measurements of particulate composition would have provided useful information in the studies reported by White et al. (1990) and Miller et al. (1990). They used hourly information on the concentrations of methylchloroform and perchloroethylene to document the effect of urban plumes on receptor sites in the southwestern United States. They found that concentrations of those gases at desert receptor sites followed a diurnal pattern that mimicked industrial emission patterns in distant urban areas. By coupling those data with wind trajectory analyses, it was possible to identify likely source regions. If continuous real-time particulate data had also been available, it would have been possible to obtain valuable information about long-range transport of particulate species and about the effects of such species on visibility impairment at the remote receptor sites.
Some work has been done to develop continuous or nearly continuous detectors for some particulate substances, such as sulfur (Garber et al., 1983), or for elements analyzed by techniques such as PIXE (proton-induced x-ray emission) (e.g., Cahill et al., 1987). However, such instruments typically are not used for routine monitoring because of cost, complexity, limitations in sensitivity, and the need for specialized analysis facilities. More attention should be given to developing techniques for continuous measurements of particulate composition.
The appearance of a distant object viewed through the atmosphere is affected by several factors: the amount and color of light emitted by the object (initial radiance); the transmittance of that light from the object to
the observer; and the scattering of ambient light into the sight path by the atmosphere (path radiance) (Duntley et al., 1957; Malm, 1979; Richards, 1988). Because the initial radiance, transmittance, and path radiance are sensitive to the wavelength of the light, one must know how those factors depend on wavelength before visibility can be characterized fully. Middleton (1952) provides a comprehensive discussion of atmospheric visibility and its measurement.
Transmittance is determined by the average extinction coefficient between the observer and the object. The extinction coefficient is the sum of the scattering and absorption coefficients for both gases and particles. Although transmittance only partly characterizes the visibility along a particular sight path, it is an inherent atmospheric aerosol air-quality factor and is unaffected by time of day, viewing angle, or characteristics of the natural illumination.
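The relation between transmittance and the average extinction coefficient is the Beer-Lambert law. A minimal sketch, with units chosen for illustration:

```python
import math

def transmittance(b_ext_per_km, path_km):
    """Beer-Lambert law: fraction of light surviving a sight path with
    average extinction coefficient b_ext (km^-1) over path_km kilometers."""
    return math.exp(-b_ext_per_km * path_km)

# e.g., b_ext = 0.01 km^-1 (10 Mm^-1, near-pristine air) over 100 km:
print(round(transmittance(0.01, 100.0), 3))  # ~0.368
```

The example numbers are illustrative; the point is that transmittance depends only on the product of the path-averaged extinction coefficient and the path length.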
Path radiance is affected by variables that are not specifically related to air quality. For example, under identical air-quality conditions, the path radiance along a given sight path will be greater if viewed toward the sun than if viewed away from the sun, because particles tend to scatter light more strongly in the forward direction. Ground reflectance and the extent of cloud cover also affect path radiance. For example, the path radiance decreases in the presence of clouds because less skylight is available for scattering into the sight path.
Methods for measuring atmospheric optical properties can be classified as either point or sight-path techniques. Point measurement techniques determine the contributions of gases and particles to scattering and absorption at the measurement location. That information can be used to determine the local extinction coefficient (and therefore the local atmospheric transmittance). Because the composition of the atmosphere may vary between an observer and a distant object, the local transmittance may not characterize accurately the average transmittance along a given sight path. Sight-path methods include direct measurements of transmittance over a long path, measurements of radiance along a given sight path, and photography. Photographic and radiance measurements are influenced by factors such as skylight and ground reflectance and, therefore, usually do not provide information on atmospheric optical properties. In the remainder of this section, the most commonly used point and sight-path methods are briefly reviewed and discussed.
Both gases and particles contribute to optical extinction. Evaluating the contributions of gases to light scattering and absorption is relatively straightforward. All gases scatter light. The scattering of light by atmospheric gases is dominated by the most abundant kinds (i.e., N2, O2, and CO2), and, thus, light scattering by gases can be determined largely by the total air pressure. Because the relationship between air scattering and total air pressure is well known (Penndorf, 1957), the contribution of light scattering by air to bext can be determined easily.
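Because air scattering is proportional to the gas number density, it can be scaled with pressure and temperature via the ideal gas law. The reference value of roughly 12 Mm-1 (green light, sea level, 0°C) used below is an assumed round number for illustration, not a value taken from Penndorf (1957).

```python
# Scale Rayleigh (air) scattering with gas number density via the ideal
# gas law. The reference coefficient of ~12 Mm^-1 (green light, sea
# level, 0 deg C) is an assumed round number, not from the cited source.

B_REF = 12.0       # Mm^-1 at the assumed reference conditions
P_REF = 101325.0   # Pa
T_REF = 273.15     # K

def rayleigh_scattering(pressure_pa, temp_k):
    """Air scattering coefficient (Mm^-1), proportional to number density."""
    return B_REF * (pressure_pa / P_REF) * (T_REF / temp_k)

# At a hypothetical mountain site (80 kPa, 290 K), air scattering is lower:
print(round(rayleigh_scattering(80000.0, 290.0), 1))
```

This is why air scattering at high-altitude sites is noticeably smaller than at sea level.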
NO2 is the only pollutant gas in the atmosphere that absorbs visible light and is present in sufficient concentrations to affect visibility. The specific absorption of NO2 is well established, so extinction can be determined from concentration measurements.
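A sketch of converting an NO2 concentration into an absorption coefficient. The efficiency of roughly 0.33 Mm-1 per ppb at mid-visible wavelengths is an assumed literature-style value, not given in the text.

```python
# Absorption by NO2 from its mixing ratio, using an assumed efficiency
# of ~0.33 Mm^-1 per ppb near 550 nm (illustrative, not from the text).

NO2_ABS_EFFICIENCY = 0.33  # Mm^-1 per ppb (assumed value)

def no2_absorption(no2_ppb):
    """Light absorption by NO2 (Mm^-1) from its mixing ratio in ppb."""
    return NO2_ABS_EFFICIENCY * no2_ppb

print(round(no2_absorption(30.0), 1))  # ~9.9 Mm^-1 at a polluted-urban level
```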
The remainder of this section focuses on light scattering and absorption by particles. These processes are much more complex for particles than for gases.
Scattering-Coefficient Measurements: Integrating Nephelometry
The integrating nephelometer measures the total amount of light scattered by an aerosol sample. The most commonly used instrument of this type (see Charlson et al., 1974) draws a sample into an enclosed, dark chamber where it is illuminated. A detector measures light scattered at angles ranging from near forward to near backward. To determine the contribution of gases and electronic noise to the scattering signal, the instrument's light-scattering response to filtered air is measured periodically. The contribution of particles to scattering is then determined by taking the difference. When equipped with a photon-counting detector, the integrating nephelometer can measure particle light-scattering coefficients of less than 0.1 Mm-1, a value equal to about 1% of the light-scattering coefficient of particle-free air at normal atmospheric pressure.
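The difference measurement described above amounts to a simple subtraction of the periodic filtered-air (zero) reading from the total signal. A minimal sketch with hypothetical readings:

```python
def particle_scattering(total_signal, clean_air_signal):
    """Particle scattering coefficient (Mm^-1): the total scattering
    signal minus the filtered-air reading, which contains Rayleigh
    scattering by gases plus wall scattering and electronic noise."""
    return total_signal - clean_air_signal

# e.g., a total reading of 42 Mm^-1 against a 13 Mm^-1 clean-air baseline:
print(particle_scattering(42.0, 13.0))  # 29.0 Mm^-1 due to particles
```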
Because of its potential for high accuracy, portability, and moderate cost, the nephelometer has been used widely for measurements of light-scattering coefficients. There are, however, several sources of measurement error with this instrument. First, the contribution of coarse particles (particle diameter greater than about 5 µm) to scattering is underestimated because they tend to deposit at the inlet. Second, the optics do not permit measurement of light scattered between 0 and 7 degrees, yet coarse particles scatter strongly in the forward direction, so their contribution is further underestimated (Ensor and Waggoner, 1970; Hasan and Lewis, 1983). Another limitation of the nephelometer is that heating within the instrument can reduce the aerosol relative humidity during measurement. The light-scattering coefficient of atmospheric particles is sensitive to relative humidity, especially for values exceeding 60–70%, because the amount of water absorbed by particles (and therefore particle size) depends on humidity (Charlson et al., 1978).
Some of those problems can be dealt with relatively easily with available instruments. For example, humidity effects can be minimized by heating the inlet air so that the relative humidity is reduced to a low or constant value. Improved designs also will minimize or eliminate many of the problems. A nephelometer is being developed that uses solar power to operate at remote locations (J. Persha, pers. comm., Optec, Inc., 1990). Because of their many advantages, it is believed that nephelometers will play a major role in visibility programs.
Absorption Coefficient Measurements
Elemental carbon is apparently the dominant light absorbing particle component (e.g., Rosen et al., 1982; Adams et al., 1990). Particle absorption coefficients can be determined either by using in situ techniques, such as photoacoustic spectroscopy (Adams, 1988), or by measuring the extent to which light is absorbed by particles that have been collected on a filter (e.g., Lin et al., 1973; Clarke, 1982; Hänel, 1987). Salient features of these techniques are outlined below.
Photoacoustic spectroscopy measures the absorption coefficients of suspended particles in real time (Adams, 1988). The air stream, from which NO2 has been removed, is drawn into an acoustic cell where it is illuminated by light that is modulated at the resonant frequency of the
cell. Light energy absorbed by the particles heats the carrier gas, which expands and then contracts according to the modulation frequency of the light. The associated pressure variation is a sound wave whose intensity can be measured with a microphone. Photoacoustic spectroscopy appears to be the best available technique for measuring particle absorption coefficients, but it requires skilled personnel and complex equipment and, consequently, is not suitable for routine regulatory monitoring.
Filter techniques are the most common methods used for measuring particle absorption coefficients. Because light transmittance through filters is affected by scattering and absorption, the effects of scattering, including multiple scattering, must be accounted for. If light interacts with more than one particle as it passes through the filter, the apparent absorption coefficient will exceed the correct value. Also, filter techniques are problematic because the optical properties of deposited particles may be different from those of airborne particles, especially if the particles undergo chemical reactions on the filter, either through exposure to gases passing through the filter or through contact with other particles on the surface of the filter.
Lin et al. (1973) developed the integrating plate technique for measuring absorption coefficients of particle deposits on filters. With this method, an opal glass plate is located between the filter and the optical detector. Because the opal glass is a diffuse reflector, light scattered by particles in the forward direction is detected with the same efficiency as light that enters the glass directly. If backward scattering is small in comparison with absorption, changes in filter transmittance before and after particle collection can be attributed to particle absorption. Lin et al. concluded that neither backward scattering nor multiple scattering contributed significantly to errors in their measurements. Clarke (1982) modified the technique to reduce some of the errors.
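Under the stated assumption that the change in filter transmittance is due entirely to absorption, the absorption coefficient follows from the filter's exposed area and the volume of air sampled. A sketch with hypothetical readings:

```python
import math

def absorption_coefficient(t_before, t_after, filter_area_cm2, air_volume_m3):
    """Particle absorption coefficient (Mm^-1) from the drop in filter
    transmittance, assuming all attenuation is due to absorption:
    b_abs = (A / V) * ln(T_before / T_after)."""
    area_m2 = filter_area_cm2 * 1e-4
    b_abs_per_m = (area_m2 / air_volume_m3) * math.log(t_before / t_after)
    return b_abs_per_m * 1e6  # m^-1 -> Mm^-1

# e.g., transmittance drops from 1.00 to 0.80 on a 10-cm2 deposit spot
# after sampling 10 m3 of air:
print(round(absorption_coefficient(1.00, 0.80, 10.0, 10.0), 2))
```

Since multiple scattering within the filter inflates the apparent attenuation, values computed this way are upper bounds unless a scattering correction is applied.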
Hänel (1987) argued that multiple scattering and backward scattering led to significant measurement errors of absorption coefficients by previous investigators using filter techniques. He developed an approach for measuring absorption coefficients that permitted accounting for forward, backward, and multiple scattering from collected particles. Reported
values for absorption coefficients with his approach are somewhat smaller than values determined with other techniques.
Foot and Kilsby (1989) compared particle absorption coefficients measured with a filter technique and with a photoacoustic technique. Using laboratory particles with known properties, they found that the two methods agreed to within ±15%. However, as Japar (1990) pointed out, the uncertainties are likely to be greater in filter measurements of atmospheric particles that scatter strongly.
The earliest and simplest method of measuring light absorption by particles on filters is the coefficient of haze (COH) technique (Hemeon et al., 1953). The aethalometer described by Hansen et al. (1984) is an updated and more sensitive absorption measurement technique that operates on a similar principle. These techniques measure the light attenuation caused by an aerosol sample on a filter; no integrating plate is used to correct for light scattering. Wolff et al. (1983) found a good correlation between COH and concentrations of elemental carbon, and Campbell et al. (1989) found good correlations between COH and absorption measurements made with an integrating plate and an integrating sphere. Because these instruments can provide continuous, near real-time data, they are used often for monitoring. Quantification is achieved by calibrating against a more accurate absorption measurement standard.
Instruments that are used for sight-path measurements include teleradiometers (telephotometers), scanning densitometers, and transmissometers. Although their strengths and limitations have been known for many years, only during the past 10–15 years have they come into widespread use for visibility research. Teleradiometers measure the radiance (light intensity per solid angle) along a given sight path. The most common approach to teleradiometric measurements involves measuring the radiances from a dark target and from the background sky adjacent to the target. The apparent contrast between the target and sky is obtained from the radiances and can be used to determine the visual range under ideal measurement conditions.
Atmospheric conditions that must be met to determine visual range from contrast measurements are well known (see Allard and Tombach,
1981; Richards, 1988). In particular, the initial contrast of the target against the background sky must be known, and the sky radiance at the target and at the measurement point must be equal. Factors that lead to spatial variations in sky radiance include nonuniform lighting conditions associated with clouds or variations in ground reflectance, as well as variations in extinction along the sight path. Allard and Tombach (1981) have described "nonstandard" viewing conditions that should be recorded when telephotometric contrast measurements are used to infer visual range. During routine contrast measurements made with telephotometers in the Western Regional Air Quality Study, standard observing conditions were uncommon enough to raise concerns about the representativeness of the data (White and Macias, 1987b). Therefore, although telephotometers can be used to obtain accurate measurements of sky-target contrast, the measurement conditions necessary for the use of contrast data are seldom met.
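Under the ideal conditions described (a black target, uniform sky radiance, and uniform extinction along the path), the standard Koschmieder relations connect apparent contrast, extinction, and visual range. A minimal sketch; the target distance and contrast values are hypothetical:

```python
import math

def extinction_from_contrast(apparent_contrast, distance_km,
                             inherent_contrast=-1.0):
    """Average extinction coefficient (km^-1) from the apparent sky-target
    contrast of a target at a known distance: C = C0 * exp(-b * x).
    inherent_contrast = -1 assumes an ideal black target."""
    return -math.log(apparent_contrast / inherent_contrast) / distance_km

def visual_range_km(b_ext_per_km, threshold=0.02):
    """Koschmieder visual range for a 2% contrast detection threshold:
    V = ln(1/threshold) / b_ext (~3.912 / b_ext for threshold 0.02)."""
    return math.log(1.0 / threshold) / b_ext_per_km

b = extinction_from_contrast(-0.37, 10.0)  # hypothetical target 10 km away
print(round(visual_range_km(b), 1))
```

The 2% threshold is the conventional choice; other thresholds simply rescale the result.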
There are measurement schemes for which teleradiometers can be used to measure atmospheric optical properties unambiguously, even under nonstandard viewing conditions (Richards, 1988). Typically, more than one teleradiometer is needed, and absolute calibrations or artificial targets might be required. Such measurement methods have been used in field research but have not been implemented in routine monitoring networks.
Photographic slides commonly are used to monitor visibility. A scanning densitometer is used to measure the contrast between an image of a dark target and background skylight (Johnson et al., 1984). The technique involves measuring light transmission through the slide by focusing on a small area (typically 25–100 µm2) of the target and of the skylight and using those data in conjunction with known properties of the film to determine the sky-target contrast. A considerable library of slides has been acquired by the visibility monitoring networks of the National Park Service and the U.S. Forest Service. This technique provides a low-cost approach for routine visibility monitoring.
The use of slides for visibility measurements has limitations similar to those encountered with teleradiometers. Even though the apparent target-sky contrast can be measured, the information can be used only to infer atmospheric optical properties under ideal viewing conditions. Because ideal conditions are seldom encountered, slide data have limited utility for visibility monitoring. Nonetheless, slides do provide a useful
record of visibility; studies have quantified the relationship between slide-based assessments of visibility and other types of measurements (see further discussion in this appendix).
Transmissometers measure the transmittance of light from an artificial light source over a measured path to a detector. Path lengths of about 15 km typically are required to obtain accurate data in pristine air. The average extinction coefficient along the path can be determined directly from the transmittance measurements. Because the measured light propagates through the open atmosphere, transmissometers measure the optical effects of the unperturbed airborne particles. That ensures that accurate measurements can be made at high relative humidities, when the particle water content is large, and that measurements include the effects of both fine and coarse particles. Because transmittance is a property of the aerosol along the sight path, transmissometric data are not affected by path radiance.
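Inverting the measured transmittance gives the path-averaged extinction coefficient directly. A sketch; the 15-km path length matches the value quoted above, while the transmittance reading is hypothetical:

```python
import math

def extinction_from_transmittance(measured_t, path_km):
    """Path-averaged extinction coefficient (Mm^-1) retrieved from a
    transmissometer reading over a known path length:
    b_ext = -ln(T) / L."""
    b_per_km = -math.log(measured_t) / path_km
    return b_per_km * 1000.0  # km^-1 -> Mm^-1

# Over a 15-km path, a measured transmittance of 0.70 implies:
print(round(extinction_from_transmittance(0.70, 15.0), 1))
```

Note that this retrieval yields total extinction, including the Rayleigh contribution of the air itself, which must be subtracted to isolate the particle contribution.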
An essential difficulty of any long-path in situ method is the absence of any definitive field calibration. Because the contents of the sample volume cannot be controlled, a medium of known optical characteristics (e.g., a gas of known Rayleigh scattering) cannot be administered, as is routinely done with the nephelometer and other enclosed instruments. The optical elements can be tested at close range, but that introduces potentially large geometric and alignment errors and in any case produces a calibration point at zero extinction, well below the minimal (Rayleigh) value encountered in the atmosphere. In particular, the potentially substantial effects of turbulence on the beam ("blooming") are invisible to such tests, although various schemes for minimizing such errors have been proposed (e.g., Malm et al., 1987; Richards, 1988). Problems such as environmental degradation of optical surfaces, drifting source intensity, and misalignment can introduce significant hidden scaling errors in the field, even though a unit tests out before installation. Because transmissometers are only now passing from selective research to widespread operational use, few data are available on their overall accuracy and reliability.
Several other problems are encountered with transmissometers. Care must be taken to filter out data that are acquired when clouds, rain, and fog obscure the sight path. In practice, that is often difficult to do. Transmissometers are also much more expensive than other monitoring instruments. Despite the limitations, solar-powered transmissometers are
routinely used in remote locations, and they have been the instrument of choice in the NPS/IMPROVE monitoring network.
Remote Sensing Techniques
Satellites can be used to estimate the total column loading of particles in the atmosphere on the basis of the upwelling irradiances from the Earth's surface. These techniques suffer from a number of problems. First, algorithms must be used to convert the irradiances to equivalent aerosol concentrations; because of the complex nature of the interaction of radiation with particles, such algorithms are in an early stage of development, and measurements are consequently subject to considerable error. Another operational problem is that it is difficult to distinguish between clear (cloud-free) regions and regions of broken (or subvisible) cloud; contamination with even a small amount of cloud can lead to highly erroneous data. Also, to make accurate estimates of particle irradiances, the albedo of the underlying surface must be measured accurately; however, the surface albedo can vary greatly with time and location. Finally, satellites can provide only an estimate of the total column loading of particles; for visibility studies, one needs to know the concentration in the atmospheric boundary layer close to the surface. Thus, although satellites can be useful for characterizing the large-scale distribution of haze events, they cannot yet be used in quantitative visibility studies.
Point measurements and sight-path measurements are two distinct alternatives for visibility measurement. Point measurements can be used to determine the extinction coefficient, an inherent air-quality factor, and they can be used in conjunction with particulate and gaseous substance measurements to determine the contributions of different substances to extinction. The average atmospheric extinction coefficient over a given sight path can be determined with a transmissometer. However, extinction does not completely characterize visibility. For example, from extinction data alone, one cannot estimate the clarity with which detailed
features can be resolved in a particular landscape scene or correct for changes in the perceived color of the scene.
Radiances measured directly with teleradiometers or indirectly with scanning densitometers on photographic slides can be used to provide more direct information on what is seen by an observer. Information of that kind, however, provides information on atmospheric optical properties only under restricted measurement conditions that are seldom encountered in practice. Data from these instruments can provide valuable insights into air-quality levels necessary to achieve a desired visual quality for a particular view, but they are of limited use for routine compliance monitoring, because measurements are strongly affected by factors unrelated to air quality.
On balance, point-measurement techniques appear to be preferable to sight-path methods for routine monitoring of atmospheric visibility. Point measurements permit measurement of both scattering and absorption coefficients, thereby providing important information on the contributions of each to extinction. Comparisons between optical properties and chemical composition measured at the same point can be used to infer the contributions of various substances to extinction, and point measurements are less prone to the complications of clouds, fog, or precipitation that can occur along sight paths.
The most important point-measurement instrument is the integrating nephelometer, which measures the light-scattering coefficient. Instruments of that kind can be designed to be sensitive to the range of scattering coefficients found in both pristine and polluted air, are easy to install and calibrate, and can be reasonably priced. Furthermore, there are absolute and unambiguous calibration methods for integrating nephelometers, which is not the case for long-path methods. Major drawbacks of integrating nephelometers include the unavailability of a commercial, electronically up-to-date instrument, the tendency of the instrument to heat sampled airborne particles, thereby underestimating scattering coefficients on high-humidity days, and inaccurate measurements of coarse-particle scattering due to particle losses at the instrument inlet.
We recommend that high-sensitivity integrating nephelometry be used for routine haze monitoring. To do that a modern, commercial integrating nephelometer must be developed. The instrument should be sensitive to the range of scattering coefficients encountered in the atmosphere, should be self-powered, and should be designed to minimize
errors associated with sample heating and coarse-particle sampling losses.
TRACERS IN THE ENVIRONMENT
A "tracer" element or compound is a substance with unique characteristics, allowing its positive identification at very low concentrations. Tracers can be added in small amounts to a known substance of larger amounts whose subsequent physicochemical behavior needs to be understood. The goal is to add a detectable marker whose presence is so insignificant that the basic behavior of the larger system is unaffected even when the marker mimics the reaction pathways of the known species. In everyday life, the laundry mark on a shirt serves later to identify the origin of the shirt without significantly changing its appearance; the brand on a cow serves a similar purpose. In scientific application, the ideal tracer is one that has the same chemical form and properties as the material to be traced and that is distributed physically in the same manner.
In many laboratory situations, the tracer of choice is often a radioactive isotope of the same element whose stable isotopes are of interest—e.g., 14CO2 with radioactive 14C as a tracer for ordinary CO2, which consists chiefly of 12CO2 (99%) plus a small percentage of 13CO2. If CO2 might be coming simultaneously from two power stations, the addition of 14CO2 to one of them allows evaluation of the contributions of the two power stations to the CO2 observed at a downwind site through measurement of its 14C content. Once the three isotopic versions of CO2 are thoroughly mixed, all three behave similarly (although not in precisely the same way because of the slight differences in mass of the carbon atom), and measurement of the 14C content of a sample provides information about the other two forms of CO2. In some situations, the substance of concern carries its tracer with it as a natural component, as with atmospheric CO2, which contains a small fraction (about 1 part in 10^12) of 14CO2 formed by the natural process of cosmic ray bombardment of the atmosphere. When atmospheric CO2 is incorporated into a growing plant by photosynthesis, the presence of the trace of 14CO2 can be quantitatively demonstrated through the radioactivity of the 14C in the plant.
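The two-power-station evaluation described above is a linear mixing calculation: if the tagged stack's CO2 carries a known 14C/12C ratio and the untagged stack's ratio is known, the measured downwind ratio fixes the fractional contributions. A sketch with hypothetical ratios (the numbers are not from the text):

```python
def source_fraction(r_observed, r_tagged, r_untagged):
    """Fraction of downwind CO2 contributed by the tagged source, from
    linear mixing of isotopic ratios:
    r_observed = f * r_tagged + (1 - f) * r_untagged."""
    return (r_observed - r_untagged) / (r_tagged - r_untagged)

# Hypothetical 14C/12C ratios (arbitrary units): tagged stack 5.0,
# untagged stack 1.0, downwind sample 2.0:
print(source_fraction(2.0, 5.0, 1.0))  # 0.25, i.e. 25% from the tagged stack
```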
The ideal conditions for using a tracer material may not be attainable (e.g., tracers for sulfur within coal), and near-substitutes must be found. For example, artificial radioisotopes are frequently the tracer of choice in controlled laboratory situations, but their release into the open environment usually is not acceptable because of the lack of control over the subsequent pathways of the radioactivity. During coal combustion, the sulfur atoms in coal are converted into SO2, which is later changed chemically into sulfuric acid (H2SO4). Frequently, when H2SO4 is found in the atmosphere (e.g., as a component in acid rain), questions arise as to the source of the sulfur, because H2SO4 from many sources becomes essentially indistinguishable once mixed. Under those conditions, a tracer that has accompanied the sulfur from a particular source can be essential in determining whether the H2SO4 in question has come from that source. The available isotopic tracers for sulfur are the radioactive isotopes (especially 35S, with a half-life of about 90 days) and sulfur isotopic mixtures that have been enriched in stable 36S. (The most abundant stable isotope is 32S.) Neither of those is acceptable for tracing the path of fossil-fuel sulfur in the atmosphere; the amounts of 35S needed are large enough that release of radioactivity in that quantity into the environment would not be approved, and the sensitivity of detection for enriched 36S is sufficiently low to make such experiments impractical. Even if those factors were acceptable, sulfur atoms within coal exist in a chemical environment very hard to simulate, so one could not be assured that the conversion to SO2 would proceed at the same rate for the coal-bound sulfur and its isotopic tracer.
When no acceptable isotopic tracers are available, other similar tracers can be identified. The element selenium is just below sulfur in the periodic table of the elements, which suggests that the chemical behavior of selenium would be similar to that of sulfur. However, selenium and sulfur differ significantly in their chemical behavior under many atmospheric conditions, so their subsequent pathways might be different, thereby undermining the purpose of the tracer.
Another tracer procedure for following the SO2 formed during coal combustion is the addition of another gaseous component intended to mix with SO2 and to follow its physical path, even if it is not capable of undergoing a chemical change analogous to that of SO2 changing to H2SO4. Two such gaseous tracer species are an isotopically labeled form of methane, fully deuterated CD4, which can be readily separated
from the naturally occurring CH4 by mass spectrometry, and several perfluorocarbons, compounds not found naturally. The compound CD4 is extremely rare in nature, where the ratio of D to H is about 1:6,500, making the ratio of CD4 to CH4 about 1:10^15. When such a gaseous tracer is introduced into the stack of a power plant and mixed with the SO2 from the burned coal, one can anticipate that the gaseous tracer and SO2 will be carried by winds to the same locations. If the tracer has arrived at a particular location, then it is likely that the SO2 arrived at the same time—unless it was previously converted to H2SO4. In that case, the H2SO4 is likely to arrive coincidentally with the tracer, unless the H2SO4 has interacted further and has been chemically or physically removed from the atmosphere. Thus, when CD4 is added to a known power-plant stack and some of it is measured downwind, the inference is strong that the SO2 emitted from the same stack on the same day also arrived at the site as either SO2 or H2SO4.
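The natural background ratio quoted above follows from simple probability: if each hydrogen site in a methane molecule is deuterium with probability 1/6,500, a fully deuterated molecule occurs with probability (1/6,500)^4. A quick check:

```python
# Natural abundance of fully deuterated methane, from the D:H ratio of
# about 1:6,500 quoted in the text, assuming independent site occupancy.

D_TO_H = 1.0 / 6500.0

# Probability that all four hydrogen positions in CH4 carry deuterium:
cd4_to_ch4 = D_TO_H ** 4
print(f"CD4:CH4 ~ 1:{1.0 / cd4_to_ch4:.1e}")  # roughly 1:2e15, i.e. ~1:10^15
```

This confirms the order-of-magnitude figure of about 1:10^15, which is why even small CD4 releases are readily distinguishable above background.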
The effect of haze on visibility is an important issue in local and regional air-quality management. In the past, visibility was primarily the concern of aviation and military operations where the most important aspect of visibility was visual range—that is, the greatest distance at which an object could be discerned against the background sky. In that context, visual range has been quantified and related to light extinction, and it is used extensively to define visibility at airports.
In contrast, much of the present concern about visibility is related to the aesthetic damage from air pollution—that is, the impact on the perceived form, texture, and color of scenic features (Trijonis et al., 1990). In that regard, visibility degradation is distinct from nonvisual effects of air quality, such as health effects or economic effects. A judgment of visibility is an aesthetic judgment; depending on the judgment of an observer, atmospheric conditions can degrade the aesthetic quality of a scene (for example, when plumes or urban haze obscure a mountain backdrop) or enhance it (as when haze adds interest to a landscape) (Stewart et al., 1983).
The broader conception of visibility serves as the basis for regulation of stationary sources in the Clean Air Act. However, the definition of
visibility impairment in the Clean Air Act is vague and qualitative at best. It is difficult to obtain quantitative cause-and-effect relationships when the effects are not clearly defined.
The ability to make quantitative connections between optical properties of the atmosphere and human judgments of visibility is still in the developmental stage because of the complexity of the physical and psychological phenomena. To quantify visibility impairment, an index must be developed that can incorporate the complexity of those phenomena; the index also must be understandable and useful to the general public and policy makers as well as to scientific researchers. Because impairment is based largely on human judgments of the visual environment, the human element must be incorporated in the development of such an index. In addition, the index must be based on properties of the physical environment that can be readily measured and monitored to enable enforcement of air-quality standards.
Human Response to Visual Air Quality
The human response to visual air quality (VAQ) can be measured by a variety of techniques: (1) judgments of VAQ made in the field by experienced observers; (2) judgments of VAQ made from photographs by experienced observers; (3) judgments of VAQ made in the field by random passersby; (4) judgments of visual range made at airports by trained observers; and (5) judgments of selected perceptual cues that are components of the overall judgment of VAQ.
Field Judgments by Experienced Observers
Field judgments of overall VAQ by experienced observers provide the most direct measurement of VAQ. The relationship between experienced judgments and those obtained by other techniques can be used to assess the suitability of the latter as indicators of VAQ (Middleton et al., 1985).
The use of experienced observers to rate VAQ has been tested under
a wide variety of field conditions (Mumpower et al., 1981; Middleton et al., 1983b; Stewart et al., 1983; Middleton et al., 1984). Stewart et al. (1983) summarized those studies; they concluded that field observations made by experienced observers can be used as a basis for studying VAQ and for monitoring trends of VAQ in an urban area. Similar studies have been conducted for pristine areas (Malm et al., 1980, 1981; Latimer et al., 1981).
Judgments of Photographs
Judgments of VAQ from photographs were found to be highly correlated with judgments made in the field when the photographs were taken (Stewart et al., 1984). Although VAQ tends to be judged slightly worse in photographs than in the field, the relative differences in VAQ estimates for different scenes are about the same whether the estimates are based on photographs or made in the field.
As with field observations, the judgments of photographs of the same scenes by many observers can be averaged to decrease the variance in the responses. Tests show that 50–100 photographs can be judged in one sitting. Because photographic assessments of VAQ can readily accommodate a large number of observers, VAQ estimated from photographs can be more reliable than that provided by field observations, where the number of observers must be restricted because of time limitations and the expense of traveling from site to site. Photographic judgments also can be used as an alternative to field judgments for assessing the relationship of other measurement techniques to VAQ. One example is the recent visibility standards development process for Colorado, which used judgments of photographs as the technique for determining acceptable levels of VAQ (Ely et al., 1991).
Field Judgments by Passersby
Another alternative to field judgments by experienced observers is to conduct on-site interviews of passersby to obtain their judgment of the VAQ at the time and location of the interview. Stewart et al. (1983) found that, when the judgments of a number of passersby were aver-
aged, the resulting average was highly correlated with judgments of trained observers. However, the judgments of passersby were more variable, so that 17 passersby would have to be interviewed to equal the statistical reliability of results obtained from three trained observers. Given limitations of reliability and cost and the restrictions on usable sites, on-site interviews do not appear to be a viable alternative to using experienced field observers or slide-based judgments.
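The 17-versus-3 comparison reflects how averaging trades observer count against judgment variability: the standard error of an n-observer mean shrinks as 1/sqrt(n), so matching a reference panel requires scaling n by the squared ratio of the standard deviations. A sketch, with the variance ratio inferred from the reported figures rather than measured:

```python
import math

# Matching the statistical reliability (standard error of the mean) of a
# small panel of trained observers with a larger pool of more-variable
# judges. SEM ~ sigma / sqrt(n), so equal SEMs require
# n = n_reference * (sigma_ratio ** 2).

def required_n(n_reference: int, sigma_ratio: float) -> int:
    """Observers needed to match the SEM of n_reference reference observers,
    when each observer's judgments are sigma_ratio times as variable."""
    return math.ceil(n_reference * sigma_ratio ** 2)

# A standard deviation ~2.4 times larger for passersby reproduces the
# reported comparison with 3 trained observers:
print(required_n(3, 2.38))  # -> 17
```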
Judgments of Visual Range at Airports

Airport visual range is routinely estimated by trained meteorological observers. (Visual range is defined as the maximal distance at which a large black object can be perceived on the horizon.) Even though viewing conditions were well defined and the viewers well trained, airport visual range was only weakly correlated with judgments of overall VAQ made by experienced observers in a study by Middleton et al. (1984). That can be explained, in part, by the fact that visual range is only one of the many factors that constitute VAQ. Indeed, airport observers are specifically trained to focus solely on visual range and to carefully exclude other visibility-related factors that might affect their judgment of visual range. Another deficiency of airport visual range data is that airports usually are located on the outskirts of cities; the public's perception of VAQ is based on observations during everyday life, which is largely spent in the cities themselves. In spite of the weak relationship between airport visibility and field judgments of VAQ, airport data have one positive feature that all other VAQ techniques lack: a long historical record. Airports have been recording visibility for decades. Such records are valuable for assessing long-term visibility trends and have been used extensively for that purpose (Trijonis et al., 1990).
Judgments of Perceptual Cues

Human judgments of visual air quality are a composite of judgments of perceptual cues associated with scene characteristics, such as the color of the air, the clarity of objects at a distance, and the existence of borders between clear and discolored air (Stewart et al., 1983). Several
visual indexes that are related to individual cues have been proposed as surrogates for the overall judgment of VAQ. Those indexes, which have been summarized in Trijonis et al. (1990), include prevailing visibility, apparent contrast, equivalent contrast, average landscape contrast, modulation depth, blue-red ratio, color difference delta E, and ''just noticeable difference'' (JND) (Malm et al., 1980, 1981; Latimer et al., 1981; Malm and Pitchford, 1989; Hill, 1990). Prevailing visibility and apparent contrast are based on observations of a specific target or element in a scene. Apparent contrast is computed from measurements of target and background radiances. Equivalent contrast, average landscape contrast, and modulation depth account for the spatial structure throughout a scene and require the measurement of radiance at many positions. Measurements of color differences involve adaptations of colorimeters optimized for color matching in the natural environment (Henry and Matamala, 1990). JND is defined as the change in the input stimulus (e.g., contrast) required for an observer to perceive that change 70% of the time.
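Of the indexes above, apparent contrast has the most direct computation from field measurements. A minimal sketch of the standard definition, C = (L_target - L_background)/L_background, using illustrative radiance values:

```python
# Apparent contrast of a scenic target against the horizon sky, computed
# from target and background radiances. The values below are illustrative,
# not taken from any of the cited studies.

def apparent_contrast(l_target: float, l_background: float) -> float:
    """Negative for targets darker than the sky (e.g., a distant ridge)."""
    return (l_target - l_background) / l_background

# A dark mountain (radiance 30, arbitrary units) seen through bright haze
# (horizon-sky radiance 100):
print(f"{apparent_contrast(30.0, 100.0):+.2f}")  # -> -0.70
```

As haze increases, scattered light raises the target's apparent radiance toward that of the sky, driving the contrast toward zero; the JND index then asks how large a contrast change an observer can reliably detect.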
The usefulness of the different indexes depends on how closely an index captures the overall judgment of VAQ in a particular setting and at a particular time. In some cases, an index might be a major component of the overall judgment; in other cases, a small one. Additional field testing of the indexes is needed to determine their usefulness.
Human judgments of VAQ and perceptual cues are being used to help establish standards (Ely et al., 1991) and to document public assessment of VAQ in conjunction with measurements of physical, chemical, and optical properties (e.g., Malm et al., 1981; Middleton et al., 1984). Routine measurements of light scattering and extinction, for example, are used to monitor changes in visibility, and the public response to those changes is assessed periodically through special programs. Measurements of human response are used to set the thresholds for acceptable visibility.