Protecting Visibility in National Parks and Wilderness Areas (1993)

Chapter: 5 Source Identification and Apportionment Methods

Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.

5
Source Identification and Apportionment Methods

Two questions invariably present themselves to those who must devise ways to protect or improve visibility: "Which sources cause the visibility problem under study?" and "How large is each significant source's contribution of visibility-reducing particles and gases?" The first question, of source identification, must be answered to reach even a qualitative understanding of the problem. If emission controls are to be applied efficiently to the major sources, then one needs a quantitative understanding of each source's contribution to the visibility problem. The quantitative assignment of a fraction of an entire visibility problem to one or more sources is called source apportionment.

It usually is impractical to conduct a source apportionment study by experimenting on all the major air pollution sources in a large region—that would require an expensive control program just to observe its effects. Instead, analytical methods and computer-based predictive models have been developed to quantify the connection between pollutant emissions and changes in visibility. There are several major classes of methods and models. Speciated rollback models are relatively simple, spatially averaged models that take changes in pollutant concentrations to be directly proportional to changes in regional emissions of these pollutants or their precursors. Receptor-oriented methods and models infer source contributions by characterizing atmospheric aerosol samples, often using chemical elements or compounds in those samples as tracers for the presence of material from particular kinds of sources. Mechanistic computer-based models conceptually follow pollutant emissions from source to receptor, simulating as faithfully as possible the pollutants' atmospheric transport, dispersion, chemical conversion, and deposition. Mechanistic models are source oriented; they take emissions as given and ambient concentrations as quantities to be estimated. Because these models require pollutant concentrations only as initial and boundary conditions for a simulation, they can be used to predict the effects of sources before they are built.

The members of the committee do not aim to give advice on how to choose a single best source apportionment technique for analyzing a given visibility problem. Instead, the committee offers guidance on how to view the air quality modeling process. Air quality models are best viewed as providing a framework within which information about the basics of the problem can be effectively organized. This basic information includes data on the air pollutant emission sources, observations on meteorological conditions, data on the ambient air pollutants that govern visibility, and information on emission control possibilities. The quality of the outcome of the modeling process usually depends as much on the quality of the data used as inputs to the model as on the modeling method chosen, if not more, thus placing a premium on the accuracy with which the basic facts of the problem are known. The objective of the analyst is to capture the scientific relationships between emissions and air quality such that important questions about the effect of emission controls or about the siting of new sources can be answered. Depending on the decisions to be made, there may be either a strict or a more relaxed requirement for technical accuracy or detail. Federal regulatory programs are permitted to make regulatory decisions in the face of continuing scientific uncertainty. Within many likely regulatory structures, attribution of contributions due to individual sources may be unnecessary—attribution to classes of like sources or upwind geographic regions would suffice. In those cases where approximate answers are satisfactory, there are many possible ways to approach answering questions about the relationship between emissions, air quality, and visibility.

When approaching the analysis of the causes of a particular visibility problem, the best strategy is generally to use a nested progression of techniques from simple screening through more complex methods. Simple methods can be used to screen the available data and find an approximate solution. Next, more complex methods can be applied to determine source contributions with greater resolution. Advanced methods are appropriate when a problem is scientifically complex or when control costs are high enough that more detailed or more highly resolved information is warranted. In general, the simpler methods use subsets of the data required by the more complex methods; this nesting of data requirements yields a natural progression of techniques. Receptor-oriented methods, for example, form a progressive series, where each additional measured variable contributes new information. When it is necessary to collect data to support a more complex method, simpler methods often can be applied inexpensively using these same data. Even when the simpler methods fail to produce sufficiently specific findings, the information they offer can be valuable because it is easy to grasp and to communicate to policy makers.

Visibility impairment takes two general forms, widespread haze and plume blight, and source apportionment models for them must account for markedly different physical, chemical, and meteorologic processes. The committee evaluated models applicable to both kinds of impairment but focused on source apportionment models for multiple sources that contribute to widespread haze, because regional haze is the main cause of reduced visibility in Class I areas. A later part of this chapter addresses single-source and plume blight models, prefaced by a description of the differences between widespread haze and plume blight.

First, we provide criteria for evaluating the relative merits of source identification and apportionment methods in the context of a national program to protect visibility. We then evaluate various methods, roughly in order of increasing resources required for their application: simple source identification methods; speciated rollback models; receptor models, including chemical mass balance models and regression analysis; models for transport only and for transport with linear chemistry (these are simplified mechanistic models that are either receptor or source oriented); advanced mechanistic models; and hybrid models. These methods are described in Appendix C, which should be read before the critique of modeling methods that follows, because it defines certain uncommon modeling methods (e.g., the speciated rollback model) and contains important text related to air quality models based on regression analysis. We generally describe models that predict source effects on atmospheric pollutant concentrations only, and not on visibility itself; these are known as air-quality models. It is understood that once pollutant concentrations are apportioned among sources, the source contributions to light extinction can be calculated by the optical models discussed in Chapter 4. We then discuss the selection of apportionment methods to assess single source siting problems and air-quality problems other than visibility.

CRITERIA FOR EVALUATING SOURCE IDENTIFICATION AND APPORTIONMENT METHODS

A national visibility protection program could employ many alternative modeling methods. Source apportionment studies are generally best conducted through the successive use of simple screening models followed by more precise methods. At each stage of this process, one must decide whether further analysis and investigation by more complex methods is warranted. How can one judge the merits of an investment in more sophisticated analysis? Will a particular source apportionment approach yield results of acceptable accuracy? Is that approach consistent with resource constraints and legal requirements?

This section sets forth criteria for use in comparing alternative methods of source apportionment. Some criteria might seem to reveal some methods as either adequate or inadequate, but the committee's intention is to provide standards for comparing methods across the board. Few, if any, source apportionment methods can be rated highly in all respects, and it can be expected that regulatory decisions will be based on imperfect models. Some of the desirable properties of source apportionment methods (technical validity and simplicity, for example) can in fact conflict with one another. Nevertheless, the following criteria should help analysts make more informed decisions about the suitability of a given method for application to a particular visibility problem.

Technical Adequacy

The first set of criteria concerns the technical adequacy of source apportionment methods.

Validity

The methods for modeling air quality and visibility should have sound theoretical bases.

Air-quality models can be based on solving the atmospheric diffusion equation, which provides a mechanistic description of the atmospheric physics and chemistry of pollutant transport, transformation, and removal. Simplifying assumptions and approximations usually are made to speed the solving of these equations. In a particular model formulation, these assumptions should be made to capture the essence of the problem at hand rather than to oversimplify the problem to the extent that there is little assurance that source-receptor relationships are represented correctly. The same criteria apply to mechanistic models for predicting the optical properties of the atmosphere described by Mie theory. For example, the derivation of the calculation scheme must be understood, and the effects of any simplifying assumptions should be small enough that reasonably accurate results can be obtained.
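For orientation, the atmospheric diffusion equation referred to above is commonly written in the following standard form (the notation here is generic, not drawn from this report):

```latex
% Atmospheric diffusion equation for species i (standard form):
%   c_i : concentration,  u : wind field,  K : eddy diffusivity tensor,
%   R_i : chemical production and loss,  S_i : emissions
\frac{\partial c_i}{\partial t}
  + \nabla \cdot (\mathbf{u}\, c_i)
  = \nabla \cdot (\mathbf{K}\, \nabla c_i)
  + R_i(c_1, \ldots, c_n)
  + S_i
```

Simplifying assumptions, such as steady winds, flat terrain, or the neglect of chemistry, reduce this equation to familiar Gaussian plume formulas; the criterion above asks whether such simplifications still capture the source-receptor relationships of interest.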

Empirical models that relate emissions to air quality and air quality to visibility parameters also can be judged for their theoretical foundations. Some empirical models are derived directly from the differential equations that explain the physical phenomena of interest, and they have a well-understood theoretical basis. Other empirical models rest on material balances, that is, on the requirement that the whole equal the sum of its parts. Finally, some empirical models are purely phenomenological, with little structural relationship to physical processes in the atmosphere. Source apportionment models should be examined for valid theoretical bases, and models that are not developed carefully and in the light of first principles should not be used.

Compatibility of Source and Optics Models

It should be possible to link the model for source contributions to pollutant concentrations to a model for pollutant effects on visibility. The assessment of source contributions to visibility impairment generally requires two types of calculations. First, source contributions to ambient concentrations of pollutants are computed; next, the effects of those pollutants on visibility are determined. The results of the ambient pollutant calculation should satisfy the input data requirements of the visibility model. Not all air-quality and visibility models are compatible; for example, a conventional rollback air-quality model probably will not provide the particle size distribution data needed to perform a Mie theory light-scattering calculation.
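As an illustration of this two-step linkage, a compatible pair of calculations can be sketched with a simplified linear optical model in which each species is assigned a fixed mass-scattering efficiency rather than a full Mie calculation; the species list and efficiency values below are illustrative assumptions, not values endorsed in this report:

```python
# Sketch: linking apportioned species concentrations (ug/m^3) to light
# extinction via fixed mass-scattering efficiencies (m^2/g). The species
# and efficiency values are illustrative assumptions; a Mie calculation
# would instead require the full particle size distribution.

EFFICIENCY = {             # m^2 per gram of aerosol species (assumed)
    "sulfate": 3.0,
    "nitrate": 3.0,
    "organic_carbon": 4.0,
    "elemental_carbon": 10.0,
    "fine_soil": 1.0,
}
RAYLEIGH = 0.01            # km^-1, scattering by particle-free air

def extinction(conc_ug_m3):
    """Total extinction coefficient (km^-1) from species concentrations."""
    # 1 ug/m^3 * 1 m^2/g = 1e-6 m^-1 = 1e-3 km^-1
    b = RAYLEIGH
    for species, c in conc_ug_m3.items():
        b += EFFICIENCY[species] * c * 1e-3
    return b

sample = {"sulfate": 2.0, "nitrate": 0.5, "organic_carbon": 1.0,
          "elemental_carbon": 0.2, "fine_soil": 1.5}
b_ext = extinction(sample)                # 0.025 km^-1 for this sample
visual_range_km = 3.912 / b_ext           # Koschmieder relation
```

A rollback model that outputs only these bulk species concentrations would be compatible with this simplified optical model, though not with a size-resolved Mie calculation.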

Input Data Requirements

The data required for application of a particular approach to source apportionment should be understood and obtainable in a practical sense. The input data requirements of the various methods for apportionment differ tremendously. A rollback air-quality model might require only tens to a few hundred observations on emissions source strength and air quality. A photochemically explicit model for secondary-particle formation, however, easily can require millions of pieces of spatially and temporally resolved emissions and meteorologic data, along with size-resolved and chemically resolved aerosol data to check the model's calculations. It should not come as a surprise that the more theoretically elegant techniques place the greatest demands on field experimental data.

Evaluation of Model Performance

The performance of the candidate source apportionment model should have been adequately evaluated under realistic field conditions.

Confidence in model performance builds up over time as a result of its successful applications. New and untested systems require thorough testing and evaluation before they can be recommended for use in a national visibility program.

Source Separation

The source apportionment method should distinguish the sources that contribute to a particular visibility problem with the level of accuracy required by the regulatory framework within which the model must operate.

Some source apportionment methods (receptor models) can attribute visibility impairment with considerable accuracy to generic source types (sources of sulfur oxides, for example) but cannot distinguish among different sources of the same type (they often cannot tell which power plants are contributing to the problem). Other modeling methods (such as speciated rollback models) could predict the effect of individual sources in a region on air quality at each receptor (the prediction would be that the atmospheric concentration increment is proportional to the fraction of the emissions contributed by that source to the air basin), but that prediction might not be accurate. The source separation achieved by a particular method should serve as a basis for an effective regulatory program, given that the amount of source separation needed depends on the legal framework within which that program must operate.

Temporal Variability

The source apportionment method should account for the temporal character of the visibility problem. Many models directly calculate pollutant concentrations over averaging times that range from a day to as long as a month or a year. However, reduction in visual range is instantaneous, and often it is impossible to explain short-term reductions in visibility from data on long-term average pollutant loadings. A model's averaging time can limit its usefulness in visibility analysis.
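The averaging-time point can be made concrete with a small numerical sketch (the numbers are invented for illustration, and the Koschmieder relation between extinction and visual range is assumed):

```python
# Illustration of the averaging-time problem: a long-term mean extinction
# coefficient can hide short-term episodes. All numbers are invented.
from statistics import mean

daily_bext = [0.02] * 27 + [0.20, 0.25, 0.30]   # km^-1; 3 episode days
monthly_mean = mean(daily_bext)                  # 0.043 km^-1

def visual_range_km(b_ext):
    return 3.912 / b_ext          # Koschmieder relation

vr_from_mean = visual_range_km(monthly_mean)     # about 91 km: looks moderate
vr_worst_day = visual_range_km(max(daily_bext))  # about 13 km: severe episode
```

The monthly mean suggests a visual range near 91 km, while the worst episode days are near 13 km; a model limited to monthly averages could not resolve those episodes.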

Geographic Context

The source apportionment approach should be suited to the geography of the visibility problem; the spatial characteristics of an air-quality model should be matched to the spatial character of the problem at hand. If the terrain of interest is complex (for example, the Grand Canyon), then models that assume flat topography might not capture the location of plumes that travel between the observer and the features of the elevated terrain. In the case of grid-based air-quality models, the spatial scale of the grid defines the smallest area for which air quality can be examined. If the grid system is too coarse, the essence of a source-receptor relationship can be lost.

Source Configurations

The source apportionment method should be suited to the physical arrangement of the sources. Some air-quality models, such as rollback models, assume that the locations of emissions sources will not change. Some receptor-oriented models can apportion emissions from existing sources but cannot readily predict emissions from new ones. Some mechanistic models are better than others at predicting the effects of changes in the elevation of emissions.

Error Analysis and Biases

The method's error characteristics should be known. No technique can be expected to be completely accurate in its attribution of an environmental effect to a particular source. The limits of scientific knowledge about the atmospheric dispersion of air pollution and the workings of chance in atmospheric processes prevent absolute certainty. Obviously, the greater a technique's expected error, the less useful it will be in a regulatory program. It is best to conduct a systematic analysis of the error bounds that surround the predictions made by a candidate method. It should be known whether the errors affect all source contribution estimates equally or whether biases are likely to distort the relative importance of different sources.

Attribution techniques are often skewed in their error characteristics; a given technique, for instance, could be known to under- rather than overpredict the contribution of a source to an effect. A technique's error characteristics could restrict its use to a specific type of regulatory program. For instance, a technique that systematically overpredicts could be useful in a technology-based program that requires only a conservative screening model; the same technique might not be useful in a program that attempts to base control requirements on a more precise estimate of a source's effects. Similarly, a technique that systematically underpredicts source contributions would be of limited use in a program, such as that prescribed by the Clean Air Act, which takes a preventive approach to environmental problems.

Availability

A source apportionment method should be fully developed, available in the public domain, and ready for regulatory application; otherwise, further research and development should take place before it can be recommended for use in a national visibility protection program.

Some promising source apportionment methods (such as the models for atmospheric formation of size-distributed secondary particles, linked to Mie theory light-scattering calculations) are now being developed but are not ready for widespread use.

Administrative Feasibility

Technical merit alone does not determine the suitability of a source apportionment method. If a particular approach is to be the basis for a national program of visibility protection, it should be structured to fit the administrative requirements of a regulatory program.

Resources

The resources required to apply a particular source apportionment system should be clearly understood. Before a source apportionment method is selected, it should be known how many people, how much time, and how much money are required to start and maintain an assessment of source contributions to visibility impairment in Class I areas. Otherwise, it is unlikely that a regulatory or research program would be established with the amount of support needed to do the work correctly.

Regulatory Compatibility

The source apportionment method should be compatible with the various regulatory frameworks that have been or could be imposed on the national visibility problem.

If a national program were based on the principle of non-deterioration of existing air quality, there might be little need to determine the precise causes of current visibility impairment. A system of source registration and emissions offsets might suffice to meet regulatory needs. Alternatively, a regulatory program might specify national visibility standards and require remedial action to improve visibility in particular Class I areas by a specified amount. In that case, a source apportionment system would have to be able to apportion existing visibility impairment among contributing sources and to forecast whether a changed distribution of emissions would lead to compliance with standards.

Multijurisdictional Implementation

Where several government agencies have jurisdiction over different parts of a regional visibility problem, the source apportionment method should be suitable for use on a common basis by all parties. Responsibilities for visibility protection in Class I areas are now divided among the National Park Service, the Forest Service, the Fish and Wildlife Service, the Environmental Protection Agency, and state agencies. Some simple source apportionment systems, such as plume blight models, might be applicable for use by each of these agencies and could be used nationwide by different agencies acting independently. On the other hand, regional haze analyses that extend over several states and incorporate several Class I areas within a single analytical framework would need large amounts of data and might require a more unified approach to visibility regulation than has been taken to date.

Communication

The source apportionment approach should facilitate open communication among policy makers. One can envision two models of equal technical accuracy, one based on readily understood material balance assumptions, the other consisting of a mathematical simulation that policy makers must accept on faith. The more easily understood model could be preferred. Within the framework of an easily understood model, policy makers could conduct a rapid (if informal) analysis of the effects of alternative policies; rapid analysis and discourse might be impossible with a less understandable model. If policy judgments must be made by officials who do not have technical expertise, then the ease of communicating results to policy makers will be an important consideration in model selection.

Economic Efficiency

The source apportionment method should support an economic analysis directed at finding the least expensive solution to a visibility problem. In addition to being able to identify the source contributions to a regional visibility problem, the method should be capable of being matched to an analysis of the least expensive way to meet a particular visibility improvement goal. Some source apportionment methods, particularly linear methods, are readily linked to cost optimization calculations. Nonlinear chemical models can be difficult to use within a system that requires economic optimization.
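To illustrate the linkage, a linear source-contribution model with a single visibility target reduces to a continuous least-cost allocation that can be solved by ranking sources on cost-effectiveness; all source names and figures below are hypothetical:

```python
# Sketch: least-cost control under a linear source-apportionment model.
# Each source contributes benefit * r to the extinction reduction when a
# fraction r of its emissions is controlled, at cost cost * r. With one
# linear constraint and box bounds, controlling sources in order of
# cost-effectiveness is optimal. All names and figures are hypothetical.

sources = {                 # cost ($M) and benefit (km^-1) at full control
    "plant_A": {"cost": 120.0, "benefit": 0.010},
    "plant_B": {"cost": 80.0,  "benefit": 0.004},
    "smelter": {"cost": 200.0, "benefit": 0.012},
}
target = 0.015              # required total extinction reduction, km^-1

def least_cost_controls(sources, target):
    """Fractional control per source, cheapest $/benefit first."""
    plan, remaining = {}, target
    ranked = sorted(sources,
                    key=lambda s: sources[s]["cost"] / sources[s]["benefit"])
    for name in ranked:
        b = sources[name]["benefit"]
        frac = min(1.0, remaining / b) if remaining > 0 else 0.0
        plan[name] = frac
        remaining -= frac * b
    return plan

plan = least_cost_controls(sources, target)
total_cost = sum(sources[s]["cost"] * plan[s] for s in sources)
```

Here the cheapest plan fully controls the most cost-effective source and makes up the remainder from the next-ranked one; with nonlinear chemistry, source benefits would depend on one another and this simple ranking would no longer apply.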

Flexibility

Source apportionment methods that can be adapted readily to new scientific findings or to the changing nature of a particular visibility problem are preferable to less flexible methods. Conditions outside the range of past experience will probably arise in the future. Some source apportionment systems could be more adaptable than others to new circumstances and new scientific understanding.

Balance

There should be a balance between the resource requirements and the accuracy of a source apportionment system. A source apportionment method might require elaborate field experiments to supply data for simplified calculation schemes whose inherent inaccuracy does not warrant such great expense. One also can envision elaborate calculation schemes whose sophistication exceeds the quality of the available input data. The effort and cost of data collection should strike a reasonable balance with those required for data analysis.

CRITIQUE OF SOURCE IDENTIFICATION AND APPORTIONMENT METHODS

This section presents the merits and deficiencies of the most widely used source identification and apportionment methods, using the criteria just set forth. Each method has advantages. Some are potentially accurate but have not been thoroughly evaluated or require costly, elaborate data collection. Others are well understood, fast, and inexpensive, but do not accurately address visibility impairment caused by secondary particles.

Two or more methods are often used to obtain a reasonably complete picture of a given visibility problem. The rapid lower order analyses are most effective in the early stages of a source apportionment study because they yield approximate solutions that can guide further studies. The more complex methods then can resolve more difficult technical issues if a more accurate analysis of the visibility problem is needed. The following discussion should therefore be viewed as a guide for successively unraveling various aspects of a visibility problem rather than for selecting a best single method of analysis.

Source Identification Methods

In some cases, sources of visibility impairment in Class I areas can be identified directly by simple empirical methods. Source identification is most often possible for plumes (rather than for regional haze) caused by individual nearby sources. Identification methods include visual or photographic methods, emission inventories, and simple forms of receptor modeling.

Visual and Photographic Systems

The most direct and convincing means of identifying a source that impairs visibility is to visually track a plume from its source. Photographic evidence, in the form of videotape or slides, is essential for verifying these observations. Still photographs can be used to estimate or document visual range and variations in visibility over time. Photography is relatively inexpensive for documenting periods of reduced visibility and concurrent weather conditions (fog and layering of plumes, for example). Slides can be interpreted as described in Chapter 4.

Videotape images generally have poorer resolution than do still photographs, but they can illustrate transport and changes in visibility. Videotape can help researchers to identify wind flows associated with low-visibility episodes and can aid in identifying periods when in-cloud chemistry can affect sulfate formation. Given the relatively low cost of time-lapse film or videotape, this technique should be considered for studies of the dynamics of visibility.

In many areas of the West, fire lookout personnel provide valuable information on visibility in Class I areas, on the sources that can impair visibility, and on the frequency and duration of impairment. Although fire lookouts often photograph Class I area visibility conditions each day for state and federal land management agencies, few agencies require written records of the observations of the sources of visibility impairment. In Oregon, observations and photographs taken by fire lookouts were used to develop prima facie evidence that agricultural burning was severely impairing visibility within the Eagle Cap and Central Oregon Cascade wilderness areas (Oregon Department of Environmental Quality, 1990). Regulations now are being developed to control these sources.

Emission Inventories and Source Activity

Visual and photographic information can be supplemented by emission inventories and source activity data. For example, a fire lookout's observations and photographs often can be confirmed by independent records of source activities (acres of vegetation burned daily, locations and dates of wildfires, and operating schedules of industrial facilities). However, the quality of emission inventories must be assessed carefully, because such inventories often are incomplete, and significant point or area sources can be excluded.

Simple Tracer Applications

When a source emits chemically distinct particles that are suspected to be impairing visibility, it could be possible to obtain evidence of a significant contribution from that source by applying simple receptor modeling methods. For example, at kraft process pulp and paper mills, recovery furnaces can emit large quantities of fine-particle hydrated sodium sulfate. If the emissions from such a furnace are suspected of impairing visibility in a nearby Class I area and if sodium sulfate is a major constituent of the fine-particle mass measured at a receptor site at the area's boundary, then a reasonable argument can be made that the furnace is a significant source of visibility impairment.
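The arithmetic of such a tracer argument can be sketched as follows (all concentrations and the source profile fraction are hypothetical, chosen for illustration):

```python
# Sketch of the simple tracer argument: if a source's particles contain a
# known fraction of a distinctive species, the ambient concentration of
# that species scales to an estimate of the source's contribution to
# fine-particle mass. All numbers are hypothetical illustrations.

ambient_fine_mass = 12.0   # ug/m^3 total fine mass at the area boundary
ambient_na2so4 = 3.0       # ug/m^3 of hydrated sodium sulfate measured
profile_fraction = 0.60    # Na2SO4 fraction of the furnace's particle
                           # emissions (assumed known from source tests)

# If the furnace is the only regional Na2SO4 source, its fine-mass
# contribution is the ambient tracer scaled by the source profile:
furnace_contribution = ambient_na2so4 / profile_fraction   # 5.0 ug/m^3
share = furnace_contribution / ambient_fine_mass           # fraction of mass
```

If other regional sources also emit sodium sulfate, this estimate becomes an upper bound on the furnace's contribution rather than an apportionment.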

Evaluation of Source Identification Methods

Because source identification techniques are intended only to determine the sources of visibility impairment, and not to apportion that impairment quantitatively, it is inappropriate to apply all of the criteria listed earlier to them. It is worth noting, however, that simple qualitative tools are easy for most regulatory agencies to use. Although the information they provide is mostly qualitative, it could be sufficient in some cases to support regulatory action. For example, the direct evidence gained through photographs of plumes impairing visibility in a Class I area can be compelling. Any or all of these methods should be considered as basic elements of a visibility-monitoring program.

Because the methods are relatively simple, their costs generally fall within the resource limits of regulatory agencies. These methods are best suited to air-quality management programs that prohibit nuisances or visible emissions, and they can be used in multijurisdictional programs. Because these methods document visible plumes only, they cannot be used alone for complex source apportionment, nor can they predict the impairment that would be associated with new sources. They might be unable to discriminate the effects of several sources.

Speciated Rollback Models

As defined in Appendix C, speciated rollback models are simple, spatially averaged, conservation-of-mass models disaggregated according to the major chemical components of aerosols (Trijonis et al., 1975, 1988). These models are based on the simultaneous use of several separate rollback calculations, one for each of the chemically distinct major contributors to a regional visibility problem. The assumption is made that the concentration above background of each chemically distinct airborne particle type (e.g., sulfates, nitrates, organic carbon particles) is proportional to total regional emissions of a particular primary particle species or gaseous aerosol precursor. Speciated rollback modeling is a relatively simple method of apportioning visibility-impairing pollutants among sources in a region.
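The proportionality at the heart of the method can be sketched in a few lines; the concentrations, background value, and emission totals below are illustrative placeholders, not figures from this report.

```python
# Speciated rollback for one aerosol component: the above-background
# concentration scales in proportion to regional emissions of the
# controlling primary species or gaseous precursor.
def rollback_concentration(c_current, c_background, e_current, e_future):
    return c_background + (c_current - c_background) * (e_future / e_current)

# Hypothetical sulfate example (concentrations in ug/m^3, emissions in
# tons/yr): a 40% cut in regional SO2 emissions reduces only the
# portion of ambient sulfate that lies above background.
sulfate_future = rollback_concentration(
    c_current=4.0, c_background=0.5, e_current=100_000, e_future=60_000
)
# 0.5 + (4.0 - 0.5) * 0.6 = 2.6 ug/m^3
```

Applied to each major aerosol component in turn, each with its own controlling precursor, calculations like this one make up the full speciated rollback model.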

Technical Adequacy

Two assumptions must be met for a speciated rollback model to be valid. First, the relative spatial distribution of emissions for each species (or species precursor) must remain the same when the quantity emitted changes. Second, the concentration of each pollutant must be linearly related to that of a controlling precursor; atmospheric transformation and deposition rates must be independent of pollutant concentration. Two other assumptions, generally less problematic, are inherent in the model. It is assumed that the weather patterns will be the same as those represented in the air-quality data base used to construct the model, and it is assumed that the temporal distribution of emissions stays fixed when emission quantities are changed.

Deviations from the assumption of spatially homogeneous emissions changes are likely to occur in areas where air quality is most critical at a single receptor point, where a primary air pollutant is emitted from a source for which stack height is important, where there are local emissions of coarse dust, or where two or more important source types differ radically in spatial distribution and only one of these sources is controlled. Deviations from this assumption are less important if spatially averaged air quality is the important predicted quantity, or if the air-quality problem is widespread, as is the case for regional haze. If, for a particular chemical component, one dominant source with a fixed spatial distribution is being controlled, then the assumption of spatial similarity of emissions before and after control usually is met.

The assumption that secondary-particle formation is linear in the emissions of a dominant precursor is justified in some cases but not in others. One factor that affects the rate of secondary-particle generation is the rate of OH radical formation in the atmosphere. The OH radical oxidizes organic compounds, forms HO2 radicals, and promotes the conversion of NO to NO2, the formation of nitric acid (HNO3) from NO2, a fraction of the sulfuric acid (H2SO4) formation from SO2, and O3 build-up. HO2 radicals are the major source of hydrogen peroxide (H2O2), which is the major cause of sulfuric acid production [and of NH4HSO4 and (NH4)2SO4 formation] through the cloud-phase oxidation of SO2 (and HSO3-). Ozone generation in the atmosphere is highly nonlinear and controls many other nonlinear processes involved in secondary-particle formation.

In some cases (near large sources or in coherent plumes) the use of linear rollback models to apportion secondary particles could lead to poor control strategies. For example, much sulfate production probably occurs in clouds as a result of oxidation by H2O2. Only a finite amount of H2O2 is formed in a given air parcel, depending on the concentrations of NOx, H2O, and O3, on the relative humidity, and on the amount of sunlight. If, for a particular region, the concentration of SO2 is twice that of H2O2 ([SO2] = 2 x [H2O2]), then only half of the SO2 can be oxidized locally. Under such H2O2-limited conditions, if SO2 emissions were cut by 20%, sulfate concentrations would not be reduced by this amount. Where H2O2 concentrations exceed SO2 concentrations, however, sulfate formation would decline linearly with SO2 emissions, as predicted by rollback calculations. Additional calculations and field investigations are needed to determine whether the linear chemical assumptions inherent in a rollback calculation are met in a particular geographical area.
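The H2O2-limited arithmetic described above can be made concrete with a toy calculation; the one-to-one stoichiometry and the single-parcel framing are deliberate simplifications for illustration.

```python
# In-cloud oxidation of SO2 by H2O2 consumes the reactants roughly
# one-to-one, so the scarcer of the two limits sulfate production.
def aqueous_sulfate_formed(so2, h2o2):
    return min(so2, h2o2)

h2o2 = 1.0            # oxidant available in the parcel (arbitrary molar units)
so2 = 2.0 * h2o2      # the [SO2] = 2 x [H2O2] case from the text

base = aqueous_sulfate_formed(so2, h2o2)        # oxidant-limited: 1.0
cut = aqueous_sulfate_formed(0.8 * so2, h2o2)   # after a 20% SO2 cut: still 1.0
# Under H2O2-limited conditions the 20% emission cut buys no local
# sulfate reduction, whereas linear rollback would predict a 20% drop.

lean = aqueous_sulfate_formed(0.5, h2o2)             # SO2-limited regime: 0.5
lean_cut = aqueous_sulfate_formed(0.8 * 0.5, h2o2)   # here the response is linear: 0.4
```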

The recent NAPAP integrated assessment modeling study (South, 1990) suggests that linear rollback works well for sulfates, at least when they are spatially averaged. Using the Regional Acid Deposition Model (RADM), a mechanistic model, the NAPAP study concluded that a 36% reduction in SO2 emissions in the eastern United States would produce a 37% reduction in sulfates averaged over the East. Evidently, any nonlinearities in sulfate chemistry were compensated for by spatial distribution effects. Power plants, the controlled source category, are somewhat more efficient at producing sulfate from given SO2 emissions than are other sources (R. Dennis, pers. comm., EPA, Research Triangle Park, N.C., 1990).

Optics models based on pollutant extinction (scattering and absorption) efficiencies—extinction per unit mass concentration, obtained from Mie theory, regression analysis, or the scientific literature—would tend to fit most naturally with the speciated rollback model. The use of optics models that invoke extinction efficiency values would preserve a structural consistency in that the approach would be organized by aerosol composition from emission inventory to air-quality model to optics model.
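The extinction-efficiency approach can be sketched as a simple budget; the species list, the extinction efficiencies (m^2/g), and the concentrations below are hypothetical stand-ins for values a real study would take from Mie theory, regression analysis, or the literature.

```python
# Light-extinction budget: b_ext = sum over species of (dry extinction
# efficiency x mass concentration), plus Rayleigh scattering by air.
RAYLEIGH = 0.01  # km^-1, approximate clean-air scattering near sea level

efficiencies = {   # m^2/g, illustrative mid-range values
    "sulfate": 3.0, "nitrate": 3.0, "organic_carbon": 4.0,
    "elemental_carbon": 10.0, "fine_soil": 1.0,
}
concentrations = {  # ug/m^3, hypothetical ambient sample
    "sulfate": 4.0, "nitrate": 1.0, "organic_carbon": 2.0,
    "elemental_carbon": 0.5, "fine_soil": 1.5,
}

# Units: 1 (m^2/g) x 1 (ug/m^3) = 1e-6 m^-1 = 1e-3 km^-1.
b_ext = RAYLEIGH + sum(
    efficiencies[s] * concentrations[s] * 1e-3 for s in efficiencies
)
visual_range_km = 3.912 / b_ext  # Koschmieder relation, 2% contrast threshold
```

Because each species contributes its own additive term, the same bookkeeping by aerosol component runs unbroken from the emissions inventory through the rollback model to the light-extinction budget.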

One advantage of speciated rollback models is that their input data requirements usually can be met based on data gathered by most air-pollution-control agencies. The required input data are speciated concentrations of ambient pollutants within the region of interest, speciated data on background pollutant concentrations, and regional emission inventories for each pollutant or controlling precursor. The predicted source separation can be as detailed as that of the emissions inventory.

Rollback models are among the few models whose predictions have been compared with monitored air quality before and after reductions in emissions. Rollback models have been used with mixed success to evaluate the effects of emissions changes (NRC, 1975, 1986; Trijonis, 1979, 1982a; Husar et al., 1979, 1981; Eldred et al., 1981; Husar, 1986, 1990; Sisler and Malm, 1990). Sometimes, poor results from a rollback model can be ascribed to a failure to meet the assumptions of fixed spatial emission patterns and linear relationships between secondary aerosols and precursors. Other failures could result from an insufficient test period (so that meteorological conditions were not equivalent before and after the emissions change), variations in external factors such as background concentrations, or an emission reduction so small that expected changes are statistically insignificant. When the model's assumptions hold to within a reasonable approximation, conservation-of-mass arguments show that the model is valid.

The speciated rollback model can be applied to any temporal concentration specification, such as annual median, annual average, worst 10th-percentile, worst daily average, or worst one-hour average. The speciated rollback model also can be applied to any region that meets the constraints on the spatial distribution of emissions changes.

Because the model neglects the spatial distribution of emissions, there is a potential for biases if some sources produce lesser air-quality effects because of their locations (for example, sources located downwind of a receptor area). Unless the model is modified to include a transfer coefficient for such sources, it will not correctly credit these sources with the effects of their emissions.

The speciated rollback methodology is fairly simple and does not require a significant model development effort. Some basic research is needed to improve the approach—scientists need a better understanding of organic aerosols and their origins—but such research could be required for any modeling approach.

The speciated rollback model can be analyzed with error propagation techniques and "judgmental" uncertainties to provide error bounds and to reveal the uncertainties most in need of resolution (Trijonis et al., 1988). One advantage of the speciated rollback model is that its errors tend to be confined to subelements of the model. For example, errors in estimating organic or elemental carbon particle emissions have only a secondary, smaller influence on the relative importance of sulfates. Uncertainty in the primary organic aerosol emissions inventory is essentially independent of the uncertainty in the SO2 emissions inventory.
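A minimal sketch of that component-by-component error propagation follows; it assumes independent, normally distributed fractional uncertainties, and every contribution and error value is hypothetical.

```python
import math

# Extinction contributions (km^-1) and independent fractional
# uncertainties for three aerosol components (hypothetical values).
components = {
    "sulfate":        (0.012, 0.15),
    "organic_carbon": (0.008, 0.40),  # organics are the least certain
    "fine_soil":      (0.002, 0.20),
}

total = sum(b for b, _ in components.values())

# Independent errors combine in quadrature, so a large uncertainty in
# one component (e.g., organics) barely perturbs the others' shares.
sigma = math.sqrt(sum((b * f) ** 2 for b, f in components.values()))
```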

Administrative Feasibility

Perhaps the greatest advantage of the speciated rollback approach is the elimination of many administrative problems associated with more complex source apportionment techniques. The model is a low-cost approach that does not require highly trained personnel, and it is computationally simple. It requires three basic sets of information: a characterization of the types of particles that impair visibility, a characterization of background pollutant concentrations, and a speciated emissions inventory that delineates the relevant source categories. These three pieces of information are prerequisites for any adequate source apportionment and control study. Because the required data are so basic, the administrative resources devoted to source apportionment are kept within reason. Only a small expansion in federal, state, and local training programs and resources is needed to implement the approach, especially if national guidelines are provided for obtaining appropriate air-quality and emissions data according to specified formats. In many cases, nearly complete data sets already exist.

Not only would minimal resources be required to implement the speciated rollback approach, but also fewer resources might be required to defend any conclusions drawn from its use. Arguments inevitably arise between government regulatory agencies and regulated industries over modeling procedures. These arguments have more fuel when there are many modeling assumptions to be contested. The more numerous the parameters in a particular model, the more likely it is that modeling results obtained by regulated industries will differ from those obtained by control agencies. A simple modeling approach does obviate many arguments and discrepancies. If this modeling approach is selected principally to simplify the defense of the conclusions, then the legal and regulatory framework should specify the simpler approach as the basis for control plans. Any policy decision that suppresses debate over modeling assumptions should be made clearly and openly.

The speciated rollback model is compatible with nearly all regulatory approaches. It is not compatible, however, with a new-source regulation program, such as the current PSD program, in an area where there are not enough existing sources to allow an empirical determination of the baseline relationship between emissions and air quality. Even where there are existing sources, the speciated rollback approach might be inadequate if a new source significantly alters the spatial distribution of emissions.

The simplicity of the speciated rollback model lessens multi-jurisdictional problems because it is easy to maintain consistency among various agencies. It would help to have national guidelines for the identification of controlling precursor species, the collection of aerosol composition data, the compilation of speciated emission inventories, and the estimation of background concentrations.

The simplicity of the speciated rollback model keeps the whole modeling process within an understandable framework. This is important because a fundamental principle of decision making in the face of uncertainty is that the process should be open. The critical technical assumptions made by scientists within the context of a rollback model are clear and understandable, not buried in computational details. It is important for decision makers to grasp the major assumptions and understand how the results might change if alternative assumptions were made. The model allows government officials to judge air-quality modeling results in conjunction with all other aspects of the decision-making process, such as costs, benefits, social values, political tradeoffs, and legal issues.

Economic Efficiency

Economically efficient control strategies can be identified with economic programming models—usually based on linear techniques, but sometimes on nonlinear programming techniques—that determine least-cost strategies for achieving air-quality goals. The speciated rollback model is certainly compatible with these least-cost economic models. In fact, nearly all published least-cost analyses of air-quality problems have used rollback or modified rollback as the approach to modeling air quality, perhaps because the model's complexity, uncertainty, and flexibility are complementary to those of economic models.
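A minimal sketch of such a least-cost calculation under rollback linearity follows. With a single air-quality constraint, ranking sources by control cost per unit of concentration reduction and exhausting the cheapest first reproduces the linear-programming optimum; all costs, transfer coefficients, and caps are hypothetical.

```python
def least_cost_strategy(sources, target):
    """Choose emission cuts meeting a concentration-reduction target at
    minimum cost. Each source is a tuple of (cost per kiloton removed,
    ug/m^3 reduction per kiloton removed, kilotons available). With one
    linear constraint, the greedy cost-effectiveness ordering is optimal.
    """
    total_cost, achieved = 0.0, 0.0
    for cost, impact, cap in sorted(sources, key=lambda s: s[0] / s[1]):
        if achieved >= target:
            break
        cut = min(cap, (target - achieved) / impact)
        total_cost += cost * cut
        achieved += impact * cut
    return total_cost, achieved

# Hypothetical sources: ($M per kt removed, ug/m^3 per kt, kt available)
sources = [(5.0, 0.02, 40.0), (3.0, 0.01, 60.0), (8.0, 0.05, 20.0)]
cost, achieved = least_cost_strategy(sources, target=1.5)
# The most cost-effective source (8.0/0.05 = $160M per ug/m^3) is used
# fully; the remainder comes from the next cheapest per unit of benefit.
```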

The speciated rollback model might not be optimal for identifying least-cost strategies for attaining air-quality goals at a single receptor site when those strategies involve a detailed change in the spatial distribution of emissions (in cases when it is obviously most efficient to control sources at specific locations). However, focusing on least-cost strategies for attaining air-quality goals at an individual receptor site neglects the benefits associated with improved air quality at other receptors. The speciated rollback model is more appropriate for analyzing air quality averaged over many receptor sites or for considering a widespread problem such as regional haze.

Flexibility

The speciated rollback model has several advantages in flexibility. Its modest data requirements—just an emissions inventory and speciated air-quality data—make the speciated rollback model easily adaptable to many geographical areas. For areas where such data have been assembled, it can predict changes for new and existing sources. As the information available changes (new emissions data, new air-quality data, revised estimates of background concentrations, or "nonlinearities" in the atmospheric chemistry), the model can be altered easily. The simplicity of the model makes a view of the whole process possible. The importance of new information is readily apparent, and the model does not fall prey to errors that can be hidden in the complicated data sets or algorithms of more complex models.

In other ways, the speciated rollback model is inflexible. It is not well suited for extrapolation to entirely new circumstances, such as the addition of a source in a region where there are no existing emissions, or—more generally—for large changes in the spatial pattern of emissions. Although the model can be modified to incorporate new information about nonlinearities in chemical transformation rates, it does not do this precisely or accurately. Finally, model results cannot be extrapolated to meteorologic conditions other than those on which the model is based.

Balance

As noted in the discussion of economic efficiency, the speciated rollback model appears to be in balance with the degree of sophistication and the uncertainties associated with economic optimization models. When linked with an optics model based on extinction (scattering and absorption) efficiencies, the speciated rollback model also is in balance with the optics model in terms of complexity and data requirements. This entire system of models is computationally simple and has reasonable requirements for emission information, aerometric data, emission-to-air-quality modeling, and optics calculations. The main complexity—keeping track of each pollutant within the emissions inventories, air-quality data bases, and light extinction budgets—is consistent throughout the analysis.

Summary

The speciated rollback model has several strengths: It is straightforward, the data needed to support its use are widely available or easily constructed, and its use is compatible with the personnel resources and budgets of regulatory agencies. The model and its assumptions are easily communicated to decision makers, and it is readily linked to economic decision-making tools. It makes use of data about the chemical composition of atmospheric aerosols to limit the range of error in its predictions. For example, because predicted source-receptor relationships must match the observed chemical composition of the aerosols and the emissions sources, an error in predicting the effect of fugitive dust controls on fine-particle concentrations will affect only the predictions for airborne dust and not those for the other aerosol components.

The important limitations of the speciated rollback model follow from its assumption that emissions changes within major categories of airborne particles and their precursors are spatially homogeneous, and from the assumption that atmospheric chemical and removal processes are linear in emissions. Furthermore, the model is not suited for extrapolation to source configurations, weather conditions, or atmospheric chemical conditions outside the range of the historical data sets upon which such an empirical model must be built. Care must be taken not to use this model far outside the stated range of conditions to which it applies.

Chemical Mass Balance Receptor Models

Receptor-oriented models, which are discussed in detail in Appendix C, infer source contributions to visibility impairment at a particular site based on atmospheric aerosol samples from that site. Chemical elements and compounds in ambient samples are often used as tracers for the presence of emissions from particular source types (e.g., knowledge of the lead content of the particles emitted from automobiles burning leaded gasoline historically has been used to compute the amount of motor vehicle exhaust aerosol present in an ambient aerosol sample that contains lead). The committee evaluated two widely used receptor modeling methods. The chemical mass balance (CMB) model is discussed first; regression analysis models are discussed in the next section. Alternative receptor-oriented modeling methods, including models based on factor analysis, are discussed briefly in Appendix C.
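The linear-combination idea behind receptor modeling can be sketched as a least-squares fit; the two source profiles and the synthetic ambient sample below are fabricated for illustration, and real applications weight each chemical species by its measurement uncertainty.

```python
import numpy as np

# Chemical mass balance: an ambient sample's composition is modeled as
# a linear combination of known source emission profiles, and the
# fitted coefficients are the mass contributions of each source.
# Rows = chemical species; columns = source types (fabricated values).
profiles = np.array([
    [0.20, 0.01],   # Si mass fraction in (soil dust, vehicle exhaust)
    [0.05, 0.02],   # Fe
    [0.001, 0.15],  # Pb (historical leaded-gasoline marker)
    [0.02, 0.005],  # K
])

true_contributions = np.array([10.0, 2.0])   # ug/m^3 from each source
ambient = profiles @ true_contributions      # synthetic ambient sample

fitted, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
# fitted recovers approximately [10.0, 2.0]; with real data the fit is
# uncertainty-weighted and checked for physically meaningless negatives.
```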

Technical Adequacy

The chemical mass balance model is widely used within the regulatory community to identify and quantify the sources of particles that are emitted directly to the atmosphere (primary particles). Although several researchers have used the CMB model to apportion secondary particles in aerosols, the uncertainties associated with these applications need to be better explained before this use will be widely accepted by the scientific community. The following discussion is only about the current ability of the model to apportion primary particles.

The theoretical basis of the CMB model has been carefully examined in validation studies, and the assumptions implicit in application of the CMB model have been identified. The effects of deviations from these assumptions are largely understood. Although the CMB model has been used to apportion light extinction by applying additional assumptions about relative humidity and the physical and optical characteristics of particles, these assumptions, which are needed to link the CMB model to atmospheric optics, require full validation before they are likely to be accepted for regulatory use.

Until recently, a major disadvantage of the CMB model has been the scarcity of the required input data about the chemical composition of emissions. With completion of several large projects to develop profiles of emissions sources (Core, 1989a; Houck et al., 1989) and with updates to the Environmental Protection Agency (EPA) source library, profiles are now available for many source types. Although source profile libraries have grown from just a few profiles in the recent past to hundreds today, there might not yet be enough replicate tests of similar sources to establish the variability of the source profiles or to assess the likely effect of variability in the emissions from the same source over time. Source profiles for a particular industry can change because of changes in technology or raw materials, and new sources will be built. Therefore the process of measuring source profiles must continue. Data on the chemical composition of fine particulate mass at receptors in or near Class I areas are now being routinely collected through the Interagency Monitoring of Protected Visual Environments program, the Northeast States for Coordinated Air Use Management project, and several state visibility monitoring programs, most of which have structured their sampling and analytical protocols specifically for receptor modeling. As can be seen from Table 5-1, some important marker element concentrations often fall below quantification limits. Even with these limitations, a great deal of data on aerosol chemistry is becoming available for CMB model applications, although some regions of the country are not yet included in ambient monitoring programs.

Several investigators have completed CMB model validation studies (see Appendix C) that focus on apportionment of primary particles only. Further validation studies of hybrid models designed to apportion secondary particles are needed before such models can be widely used with confidence. Validation studies also are needed for use of the CMB model to apportion light extinction among contributing sources.

Table 5-1  Percentage of samples with detectable trace element concentrations in regional fine-particle measurement programs

(Columns 1-4: Grand Canyon National Park; columns 5-6: Shenandoah Valley National Park; columns 7-8: Northeast U.S.)

Year          1979–86   1985      1987      1988–89   1982–86   1988–89   1980        1989
Season        all       fall      winter    all       all       all       summer      all
Reference(a)  1         2         3         4         2         4         5           6
Samplers(b)   SFU       SCIS      IMPR      IMPR      SFU       IMPR      DICH        IMPR
Time (h)      72        12        12        24        72        24        24          24
Analysis      PIXE      XRF       PIXE      PIXE      PIXE      PIXE      INAA (XRF)  PIXE

Element
Na            44        *         *         50        58        59        100         83
Mg            34        *         *         14        14        12        *           21
Al            73        100       *         93        88        69        97          60
Si            93        100       97        100       98        99        (97)        98
P             7         100       *         0         8         0         *           0
Cl            4         3         *         3         0         0         100         0
K             92        100       87        100       100       100       97          100
Ca            91        100       100       100       91        100       97          100
Sc            *         *         *         *         *         *         91          *
Ti            50        98        33        97        58        91        *           89
V             12        96        *         17        24        16        97          66
Cr            9         84        *         12        9         26        91          19
Mn            23        99        13        53        48        58        94          60
Fe            91        100       99        100       100       100       94          100
Co            *         *         *         *         *         *         97          *
Ni            9         68        49        7         10        16        *           80
Cu            35        83        43        90        47        91        73          93
Zn            43        98        93        95        96        100       97          100
Ga            *         30        *         *         *         *         *           *
As            *         22        50        20        *         27        91          33
Se            *         76        34        25        *         88        97          55
Br            23        98        62        96        45        97        (100)       99
Rb            *         4         3         32        *         8         *           11
Sr            *         35        7         35        *         7         *           6
Zr            *         1         12        13        *         2         *           21
Pd            *         12        *         *         *         *         *           *
Cd            *         9         *         *         *         *         73          *
Sb            *         37        *         *         *         *         97          *
I             *         *         *         *         *         *         64          *
Cs            *         *         *         *         *         *         64          *
La            *         16        *         *         *         *         82          *
Ce            *         *         *         *         *         *         97          *
Sm            *         *         *         *         *         *         76          *
W             *         *         *         *         *         *         58          *
Pb            37        88        29        88        83        95        (97)        99
Th            *         *         *         *         *         *         52          *

* = not reported.

a (1) NPS network, described by Eldred et al. (1987). Detection frequencies calculated by B. Schichtel, Washington University, from data communicated by T. A. Cahill, University of California at Davis (1990). (2) SCENES 1985 Summer Intensive at Grand Canyon, Bryce Canyon, and Glen Canyon. Network described by Mueller et al. (1986); detection frequencies calculated by D. Pankratz, AeroVironment Inc., and communicated by C. E. McDade, ENSR Inc. (1990). (3) WHITEX network (NPS, 1989, table 3C.8). (4) IMPROVE network, described by Eldred et al. (1990). Detection frequencies communicated by J. Sisler, Colorado State University (1991). (5) Research study, Tuncel et al. (1985). (6) NESCAUM network, described by Poirot et al. (1990). Detection frequencies communicated by R. Poirot (1990).

b SFU: stacked filter unit; SCIS: size-classifying isokinetic sequential aerosol sampler; IMPR: IMPROVE sampler; DICH: dichotomous sampler.

The CMB model typically can provide useful source separation for five or six source types. Separation of the effects of two sources is difficult if the source emission profiles are similar. For example, smoke from prescribed burning of forest, from wildfires, and from residential wood combustion cannot be differentiated by the model based on a bulk accounting of the concentrations of chemical elements. The sources must be determined on the basis of additional information, such as emission inventories or plume transport calculations. Source contribution estimates from CMB modeling must often be used along with other techniques, such as dispersion modeling, to provide enough detail for the development of a control strategy.

Model response to a full range of temporal variability is possible if ambient sampling times are short enough. One major advantage of the CMB method is that actual samples collected during episodes of air pollution or impaired visibility can be used to identify source contributions, and the apportionment can be conducted for periods as brief as a few hours, thereby allowing the analyst to examine temporal variability in estimated source contributions.

The chemical mass balance model works well even when it is used for areas with varied topography, whereas mechanistic models for aerosol transport can be difficult to apply in complex terrain. The CMB model uses chemical element tracers rather than meteorologic flow fields to determine source-receptor relationships. However, because the CMB model is founded on analysis of measured historical conditions, it cannot be used to predict the effects of different stack heights or new sources.

Systematic error analysis procedures have been developed for the CMB model, and the results have been published in the model validation studies (Appendix C). The CMB model is fully developed and readily available for regulatory use in the analysis of the source origin of primary aerosol particles. Similar models for the apportionment of secondary particles and light extinction, however, are yet to be developed. Particular attention should be given to systematic model validation, documentation of assumptions, and analysis of model sensitivity to deviations from underlying assumptions. Since many regional haze problems are thought to be dominated by secondary airborne particle formation, this is a serious limitation.

Administrative Feasibility

The resources required to apply the CMB model are well known from experience gained during the past few years of regulatory application. They are compatible with the personnel, skills, and budgets of many regulatory agencies. Expansion of CMB models to apportion secondary particles could be constrained by resource limitations if tracer injection were a required element of the modeling protocol.

Because the CMB model is already commonly applied to controlling particulate matter, it could easily be used within an analogous program to improve visibility. CMB modeling is already a formal part of EPA guidelines for the development of regulations, so further extension of its applications primarily should involve training, software development, and updates to the source profile library.

Multijurisdictional implementation of the CMB model already has occurred, following the release of the computer software by EPA and the Desert Research Institute. Training programs and software updates will be needed as its applications expand. Because the CMB model is based on the simple idea that ambient samples represent a linear combination of source contributions and that the ratios of chemical elements can be used as tracers for those sources, the model is easily understood by decision makers, and its outputs are easily manipulated during discussions of alternative control policies.

Economic Efficiency

Given the CMB model's assumption of a linear emissions-to-air-quality relationship, it is straightforward to link the model to linear programming calculations designed to find the least expensive combination of emission controls that meets an air-quality goal (Harley et al., 1989). The ease with which the CMB model can be used to find cost-effective solutions to regional air-quality problems is a further advantage of the method.
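The logic of least-cost control selection can be sketched with a brute-force search over a few on/off control options, a simplified stand-in for the linear programming formulation; the sources, costs, and ambient reductions below are hypothetical:

```python
# Least-cost control selection sketch. With a linear source-receptor
# relationship, each control option removes a known amount of ambient
# particulate mass at a known cost, so the cheapest combination that
# meets a target can be found by searching the options. All sources,
# costs, and transfer coefficients below are hypothetical.
from itertools import product

# name: (annual cost in $M, ambient ug/m3 removed at the receptor)
options = {
    "smelter_scrubber": (12.0, 3.0),
    "coal_plant_FGD":   (20.0, 4.5),
    "dust_paving":      (3.0, 1.0),
}

def least_cost(required_reduction):
    """Enumerate all on/off combinations of controls and return the
    cheapest one achieving the required ambient reduction (ug/m3)."""
    best = None
    for picks in product([0, 1], repeat=len(options)):
        names = [n for n, p in zip(options, picks) if p]
        cost = sum(options[n][0] for n in names)
        reduction = sum(options[n][1] for n in names)
        if reduction >= required_reduction and (best is None or cost < best[0]):
            best = (cost, names)
    return best

cost, chosen = least_cost(4.0)
print(cost, sorted(chosen))  # smelter scrubber + dust paving at $15M
```

A real application would use a linear programming solver over continuous control levels rather than exhaustive enumeration, but the linear source-receptor assumption is what makes either formulation tractable.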

Flexibility

The CMB model cannot respond flexibly to cases that involve new sources or secondary-particle production. Its use is limited to apportionment of primary emissions from existing sources, but within that constraint, it has found success over a wide range of applications.

Balance

The resources required to apply the CMB model and the usefulness of the method are balanced attractively when the model is used to apportion primary emissions based on ambient and source aerosol chemical composition data. However, if artificial tracer injection were to become a required element of a secondary-particle-modeling protocol, the cost of applying the model would in many cases exceed the budgets available.

Summary

The CMB model has many attractive features. It is readily available, and there is an extensive history of model validation and error propagation studies. It is accepted for primary-particle source apportionment by the regulatory and scientific communities. Its use is compatible with control agency budgets and personnel resources. It is easily used within a broader program designed to identify the least expensive set of emission controls needed to improve primary particulate air quality.

The model has several important limitations, however. It is not fully developed for partitioning secondary particles among contributing sources. This can be a significant problem because regional haze often consists largely of secondary sulfate particles. Connections made between CMB air-quality models and atmospheric optical models require further testing. In addition, experience shows that CMB analysis typically can separate no more than five or six source types from one another, so CMB model application is limited in airsheds where there are many contributing source types. Given these advantages and limitations, the CMB model is likely to find its greatest value in visibility studies as part of a hybrid combination of several models used simultaneously, with the CMB model applied to the primary-particle portion of the problem (see the later discussion of hybrid models).

Models Based on Regression Analysis

Receptor models based on regression analysis use empirical relationships between source strengths and ambient concentrations to apportion airborne particulate matter among various kinds of sources. Tracer species that are emitted uniquely from particular source types are often used to derive these relationships. Appendix C describes regression analysis in detail. Here we evaluate its use as a tool in support of regulatory decisions.

Technical Adequacy

Regression analysis in general takes a theoretical model relating source emissions to ambient concentrations and determines parameter values by optimizing the fit to observational data. In principle, the regression model can incorporate data explicitly accounting for most major variables in the relationship between source emissions and ambient conditions, leaving only a few elementary parameters to be inferred. The data necessary for such a comprehensive model would include accurate indicators for the presence of emissions from all significant contributors, along with the ages of their emissions and one or more indexes of atmospheric reactivity.
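The core of a tracer-based regression model can be sketched in a few lines: total particulate mass is regressed on tracer concentrations over many sampling days, and the fitted coefficients scale each tracer into a source-category contribution. The tracer data and the "true" multipliers below are invented:

```python
# Tracer-regression sketch: regress total fine-particle mass on two
# tracer concentrations across several sampling days; the fitted
# coefficients convert tracer amounts into the contribution of each
# source category. All data and scaling factors are hypothetical.

def ols_two_predictors(x1, x2, y):
    """Ordinary least squares with no intercept: y ~ b1*x1 + b2*x2,
    solved from the normal equations."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * v for a, v in zip(x1, y))
    t2 = sum(b * v for b, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - s12 * t2) / det, (s11 * t2 - t1 * s12) / det

# Hypothetical daily observations: vanadium (oil tracer) and selenium
# (coal tracer) concentrations, ng/m3, over six sampling days.
v  = [2.0, 1.0, 3.0, 0.5, 2.5, 1.5]
se = [0.6, 1.2, 0.3, 1.5, 0.9, 0.8]
# Total mass built from "true" multipliers 50 (oil) and 400 (coal).
mass = [50 * a + 400 * b for a, b in zip(v, se)]

b_oil, b_coal = ols_two_predictors(v, se, mass)
print(round(b_oil, 2), round(b_coal, 2))  # recovers 50.0 and 400.0
```

The noise-free data make the recovery exact; with real observations the coefficients carry the statistical and interpretational uncertainties discussed below.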

Often, not all of the observations necessary for regression or other statistical procedures are available. Unlike CMB models, regression models cannot be used to evaluate an episode described by a single sampling event. How much different sources contribute to the ambient mix is typically unknown, and atmospheric reactivity often is undocumented. As a result, regression analyses often are based on rudimentary models in which important unmeasured variables are lumped into the coefficients being estimated. The interpretation of those estimates then is sensitive to untested assumptions about the statistical behavior of the implicit variables.

If these deficiencies have been overcome in a particular case and if the source contributions to concentrations of the atmospheric particles have been estimated reliably, then those results can be linked to atmospheric optical properties at the same level of detail as is possible for rollback and CMB models. Many extinction budget studies used to relate rollback and CMB model results to visual range are themselves based on a regression analysis of the relationships between visual range and the properties of airborne particles. At some point it could be possible to merge these regression-based air-quality and optical model calculations. Such an approach has yet to produce successful analyses routinely. Any failure to determine an empirical relationship between source emissions and visual range in a particular case could be the result of misspecification of the model rather than the absence of a cause-and-effect relationship.

The data bases available to support regression analysis are similar to those available to support CMB models. Tables 5-1 and 5-2 show that important elements in the chemical signatures for such important source categories as coal combustion have not been measured with adequate sensitivity in ambient air by many existing monitoring networks in Class I areas. This problem, which affects both regression and CMB models, can be remedied through redesign of future field experiments. More fundamentally, it is not clear that reliable endemic signatures even exist for some source types. For example, Figure 5-1 shows that copper smelters emit selenium and arsenic and thus may be difficult to distinguish from coal combustion in the Southwest by either regression models or CMB models. Selected sources can be inoculated with artificial tracers, but this technology is experimental and carries its own difficulties.

The apportionment of urban primary aerosol particles by regression analysis has been tested on simulated data and has survived various cross-checks with emissions data (Cass and McRae, 1983). Even in this restricted setting, however, regression analysis has not been subjected to the formal trials necessary to verify its performance in operational use by regulatory agencies. The apportionment of secondary aerosol particles has not yet been tested rigorously. The needs for major improvements in the characterization and parameterization of atmospheric chemical reactivity within the structure of regression-based air-quality models are similar to those previously discussed for CMB models. If regression models are to be applied to visibility-relevant studies designed to apportion the sources of secondary airborne particles, improvements in models must be accompanied by thorough and convincing field verification studies.

The source separation achievable with regression analysis depends on the availability of reliable signatures for all significant sources, which in turn depends on the chemical differentiation of the emissions. In common with CMB and other approaches based on the deconvolution of ambient composition, regression analysis is unimpeded by complex terrain or diverse source configurations. Unlike CMB analysis, the time resolution available from conventional regression models is limited by the fact that the regression relationship is averaged over all the observations used in its derivation.

FIGURE 5-1 Chemical signatures of five different copper smelters in southeastern Arizona. The signatures are presented as ratios, relative to copper, of seven elements measured in smelter plumes. The ratio of each element to copper is multiplied by a normalizing factor, as indicated, to facilitate intercomparison. Source: Small et al., 1981.

Standard procedures for determining the error bounds that surround coefficient estimates produced by regression analysis are well understood. These error estimates, however, depend on a close correspondence between the standard assumptions of regression models and the situation being modeled. If the model structure provides a poor approximation of reality, then the error estimates themselves could be wrong. The physical correctness of a regression model, as opposed to its descriptive performance, is difficult to evaluate, because the model is explicitly fit to the data under consideration. Statistical uncertainty and interpretational error may be estimated, but these estimates are sensitive to statistical assumptions and unobserved quantities. Appendix C demonstrates that the error introduced when a source is unrecognized or poorly characterized is proportional to the ambient correlation of that source's emissions with other emissions, a statistic that is by its nature unknown.

Although the bias is hard to quantify, regression models tend to overestimate the importance of sources that have reliable and easily measurable chemical signatures, whether endemic or injected. This follows from the fact that emissions from unrelated sources tend to correlate in the ambient air, reflecting the common influence of large-scale meteorological factors.

TABLE 5-2 Commonly Used Endemic Source Tracers (a)

Tracer substance | Source | Source characterization references | Apportionment use references
V, Ni | Oil combustion | Miller et al. (1972); Sheffield and Gordon (1986) | White and Roberts (1977); Kleinman et al. (1980); Cass and McRae (1983); Dzubay et al. (1988)
As, Se | Coal combustion | Tuncel et al. (1985); Sheffield and Gordon (1986) | Kowalczyck et al. (1978); Thurston and Spengler (1985); Dzubay et al. (1988); NPS (1989)
Nonsoil K, 14C/12C, CH3Cl | Wood combustion, other contemporary carbon | Dasch (1982); Edgerton et al. (1986) | Lewis et al. (1986); Wolff et al. (1981); Stevens et al. (1984); Lewis et al. (1988); Khalil et al. (1983)
Br, Pb | Motor vehicles | Huntzicker et al. (1975); Pierson and Brachaczek (1983) | Friedlander (1973); Kowalczyck et al. (1978); Kleinman et al. (1980); Cass and McRae (1983); Dzubay et al. (1988)
Al, Si, Mn, Fe | Soil-derived material | Miller et al. (1972); Cahill et al. (1981); Batterman et al. (1988) | Friedlander (1973); Kowalczyck et al. (1978); Kleinman et al. (1980); Duce et al. (1980); Cass and McRae (1983); Lewis et al. (1986); White and Macias (1990)

(a) This listing is illustrative rather than comprehensive. Compilations of chemical profiles for many source types are available in Core et al. (1984) and Hopke (1985). Chow and Watson (1989) give a comprehensive listing of other compilations.

Administrative Feasibility

The data base needed for regression analysis requires a considerable investment of resources in field measurements of source and ambient aerosol chemical composition. CMB models also face this problem. As explained earlier when discussing CMB models, that investment in data acquisition already is being made in some parts of the United States. Unlike CMB models, regression models require that these enhanced measurements be repeated over many samples in order to provide an adequate basis for statistical inference via regression analysis. Fortunately, many routine air monitoring networks make successive observations over long periods of time, such that regression analyses can be applied. However, if artificial tracers are required to provide source-specific signatures, they must be injected throughout the sampling period. Indications of the cost for a relatively simple study of a single point source are provided by the $2 million estimated to have been spent on the WHITEX study of the Navajo Generating Station (NPS, 1989) and the $2.5 million initially appropriated for a similar study of the Mohave Power Project.

Resource pressures can adversely affect the equity of an experimental design. Regression analysis tends to overestimate the importance of sources with reliable and easily measurable chemical signatures. If resources, in the form of injected tracers or enhanced sample analyses, can provide good chemical signatures for only a subset of the contributing sources, then the selection of these targets forces a bias in the experimental outcome.

The potential biases inherent in the use of regression models are somewhat offset by the openness of regulatory processes in which such models are used. Regression results can be straightforwardly and succinctly documented, so that they can be understood and replicated, or reworked and reinterpreted, by all interested parties.

Economic Efficiency

Like other models that generate linear source-receptor relationships, regression models yield source apportionment results that can be combined with linear programming economic optimization methods to identify the least expensive way to improve regional air quality.

Flexibility

Like any receptor-oriented method, regression analysis is a tool for studying the existing atmosphere and cannot be directly applied to unbuilt sources. On the other hand, fugitive and intermittent sources, which typically are difficult to model prospectively, pose no particular problems. An evolving understanding of emissions and atmospheric processes can be incorporated into the analysis through improvements in the regression model.

Balance

Provided that the necessary input data are available, the further data analysis required in order to apply a regression model to that data set generally is not so cumbersome as to distort the allocation of resources within a larger regulatory program.

Models for Transport Only and Transport with Linear Chemistry

Two other sets of models for source identification and apportionment are those that assess only the transport of pollutants, without regard to the chemical or physical processes that affect them, and those that assess transport coupled with a simple linear chemical transformation process. This family of models includes techniques ranging from air parcel trajectory analysis to models that predict pollutant concentrations by solving the integral or differential equations that govern atmospheric transport and dilution. Appendix C describes these models in detail.

Technical Adequacy

The value of backward trajectory analysis depends on whether the meteorologic data used to produce the analysis are representative and whether the interpolation schemes are accurate. It is scientifically reasonable to interpolate wind information in time and space, but the use of trajectories requires careful interpretation, because the technique has the potential for large uncertainties that can arise if there are errors in observation or interpolation, sparse observations, or a misunderstanding of the vertical structure of the air parcels being transported.

Trajectory analysis alone is insufficient to quantify source contributions, but it can show a qualitative relationship between sources and pollutant concentrations. Moreover, trajectory analysis and other wind flow analyses can help corroborate other source apportionment methods.

The calculation of atmospheric transport requires wind fields that accurately resolve variations in space and time. Wind fields are generated from interpolation of observed or simulated winds. Estimates of transport are therefore fundamentally limited by the spatial and temporal resolution of the input wind fields. Nationwide observation of winds nominally occurs in the United States once every 12 hours with a horizontal resolution of 400–500 km. This spacing in observations was established by the National Weather Service for use in forecasting and is not necessarily sufficient for estimating pollutant transport. To obtain wind data that are more densely packed in time and space, it is usually necessary to survey and accumulate wind observations taken by research groups, state and local agencies, and industrial companies.
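A back trajectory is computed by stepping an air parcel's position backward in time through the wind field. The sketch below substitutes a smooth hypothetical wind function for interpolated National Weather Service analyses and uses simple Euler time steps:

```python
# Backward-trajectory sketch: step an air parcel's position backward
# in time through a wind field, as a trajectory model does with
# interpolated observations. The wind field here is a smooth
# analytic stand-in for gridded wind analyses.

def wind(x_km, y_km):
    """Hypothetical horizontal wind (u, v) in km/hr: westerly flow
    with a gentle turning that depends on position."""
    u = 30.0 + 0.01 * y_km
    v = 5.0 - 0.01 * x_km
    return u, v

def back_trajectory(x0, y0, hours, dt=1.0):
    """Integrate backward from the receptor with Euler steps of
    dt hours; returns the list of visited positions."""
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(int(hours / dt)):
        u, v = wind(x, y)
        x -= u * dt   # backward in time: subtract the wind step
        y -= v * dt
        path.append((x, y))
    return path

# 72-hour back trajectory arriving at a receptor at the origin.
path = back_trajectory(0.0, 0.0, hours=72)
x_end, y_end = path[-1]
print(round(x_end), round(y_end))  # air arrived from far to the west
```

Operational models replace the analytic wind function with winds interpolated from observations in space and time, which is where the interpolation errors discussed below enter.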

Both measurement and interpolation introduce errors. For relatively quiescent wind conditions, Kahl and Samson (1986) showed that the median error for trajectories drawn from National Weather Service data in the eastern United States after 72 hours of simulation was about 350 km. For convective situations, Kahl and Samson (1988) showed errors of 500 km after 72 hours. Mean absolute errors in estimating wind velocity components (wind velocity can be divided into separate east-west and north-south components) were found to range from 3.3 to 6.5 m/s. Papers by Kuo et al. (1985), Haagensen et al. (1987), and Draxler (1987) report average trajectory uncertainties of 350 to 425 km after 72 hours. Errors of this magnitude obscure the effects of specific sources on specific receptors in the eastern United States, although the transport of air from large source areas to large receptor areas (for example, entire states) can be treated with reasonable reliability.

Trajectory techniques use regionally representative winds and hence are better for identifying sources hours upwind of the receptor than sources nearby. Small variations in wind measurements can cause trajectories to miss nearby sources, whereas compensating errors over time and the growth of the envelope of potential contributors diminish the sensitivity of trajectories at longer travel times.

Trajectory calculations require a priori assumptions about how vertical atmospheric layers act to transport the pollutant to the receptor. These assumptions can seldom be confirmed because vertical profiles of the transported species are rarely available. The two most common assumptions about the vertical distribution of pollutants are either that the material is well mixed through some predefined layer or that its movement can be represented by the movement of air at a particular level (where the level is defined in terms of height above the ground, a pressure surface, or an isentrope). Winds are averaged over the layer, if applicable, and then interpolated in space and time.

The assumption that pollutants are well mixed could be adequate in the warmer months in the eastern United States where visibility-reducing material is often mixed by convective eddies during episodes of regional haze. However, this assumption might not be appropriate for the complex terrain of the western United States or for plumes from single sources.

Trajectory analyses suffer from their simplicity: results are presented (and too often interpreted) as a single line of air transport from a source or to a receptor. In reality, transport should be represented by three-dimensional probability fields, the shape and magnitude of which are influenced by the exact elevation of the plume centerline, the amount of vertical mixing, and the vertical and horizontal gradients of wind velocity. Because wind velocity combines direction and speed, gradients in wind velocity can occur when the wind changes direction even when there is no change in speed.

The complete vertical profile of meteorologic factors that affect visibility-reducing gases and particles should be known in space and time. In the future, remote-sensing techniques could make such definition possible, but in the absence of this information it is necessary to estimate the uncertainty in transport paths resulting from the uncertainty in the vertical distribution of meteorologic parameters.

The uncertainty in trajectory paths is believed to increase with travel time and to increase faster in regions of complex topography. The uncertainty can be evaluated by analyzing the divergence of trajectories for different sublayers of the layer in which the pollutants are thought to be transported. Samson (1980) estimated that the uncertainty in trajectories drawn from National Weather Service data in the eastern United States grows linearly at a rate of 5.4 km/hr.
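Samson's linear growth rate implies a position uncertainty after three days of travel comparable to the 72-hour trajectory errors cited earlier, as a quick check shows:

```python
# Worked check of the linear uncertainty-growth estimate: at
# 5.4 km/hr (Samson, 1980), trajectory position uncertainty after
# 72 hours of travel is comparable to the 350-425 km errors reported
# in the studies cited in the text.

def trajectory_uncertainty(rate_km_per_hr, hours):
    """Position uncertainty under linear growth with travel time."""
    return rate_km_per_hr * hours

print(trajectory_uncertainty(5.4, 72))  # 388.8 km
```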

One danger in the use of ensemble trajectory analysis (ETA) is the misinterpretation of results, particularly for areas far from the receptor (in the case of backward trajectories). Consistent trajectories from a particular direction, for example, can indicate a contributing source area or can simply represent the air flow direction on days conducive to pollutant buildup. Also, it is impossible to identify the location or magnitude of the contributing sources along the path of highest likelihood. Finally, ETA techniques are sensitive to individual events for source regions at the fringe of the nominal travel time of the trajectories. That is, one or two trajectories associated with high arriving pollutant concentrations that traverse a source region through which few trajectories pass will produce apparently strong correlations. For this reason it is necessary to set criteria for a minimum number of trajectories; below that number, the results are questionable. Using estimated two-dimensional probability fields, Keeler and Samson (1989) disallowed ETA for those areas in which the probability of transport over the sampling period dropped below 1 × 10⁻⁸ km⁻².
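The ensemble bookkeeping behind such a screen can be sketched by tallying trajectory points per grid cell and masking cells visited too rarely to support inference; the grid size, toy trajectories, and count threshold below are all invented:

```python
# Ensemble trajectory analysis (ETA) bookkeeping sketch: count how
# often back-trajectory points fall in each grid cell, then mask
# cells crossed too rarely to be interpretable (in the spirit of the
# minimum-probability screen of Keeler and Samson, 1989). The
# trajectories, grid spacing, and threshold are hypothetical.
from collections import Counter

def residence_counts(trajectories, cell_km=100.0):
    """Tally trajectory points per (i, j) grid cell."""
    counts = Counter()
    for path in trajectories:
        for x, y in path:
            counts[(int(x // cell_km), int(y // cell_km))] += 1
    return counts

def credible_cells(counts, min_count=2):
    """Keep only cells crossed often enough to support inference."""
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Three toy back trajectories (km coordinates); the first two share
# an upwind corridor, the third is a one-off excursion.
trajs = [
    [(-50, 10), (-150, 20), (-250, 40)],
    [(-60, 30), (-160, 50), (-260, 70)],
    [(-40, -300), (-140, -320), (-240, -350)],
]
counts = residence_counts(trajs)
print(sorted(credible_cells(counts).items()))  # only the shared corridor survives
```

The single southern trajectory is screened out, illustrating how a minimum-count criterion protects against the one-or-two-trajectory artifacts described above.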

Individual trajectory accuracy can be improved by increasing the spatial or temporal resolution of observations or incorporating hydrodynamic models to make the interpolations more realistic.

Estimates of source-receptor relationships based on transport models that calculate pollutant concentrations rather than just air parcel trajectories have some advantages over trajectory analysis alone. The use of either Lagrangian particle trajectory models or Eulerian grid transport models allows modeled wind flow to vary with height. Because vertical wind velocity shear is an important component of horizontal dispersion, this should result in a more realistic evaluation of pollutant transport.

Lagrangian particle trajectory models could be adapted for use with optical models because they simulate the distribution of particles downwind of sources. This makes them useful for evaluating plume blight. Eulerian grid models and Lagrangian particle models for transport alone are about equally useful when applied to regional haze problems where individual plumes have merged to form a widespread region of low visibility.

Lagrangian transport models that include linear chemistry are relatively inexpensive tools for evaluating potential source effects. The approximation that sulfate formation rates are linear in SO2 concentrations is thought to be reasonable when oxidants (OH in gas-phase oxidation, H2O2 and O3 in aqueous-phase oxidation) are plentiful. Concentrations of these oxidants have been measured only sparingly in the remote Southwest, so one must confirm that these conditions hold before using linear chemistry approximations in regions where few measurements have been made.
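A minimal sketch of such a linear transformation follows an air parcel in which SO2 is oxidized to sulfate at a first-order rate; the rate constant and initial burden are hypothetical, and deposition is ignored:

```python
# Linear-chemistry sketch: along a Lagrangian parcel, SO2 is oxidized
# to sulfate at a first-order rate k, so SO2 decays exponentially and
# sulfur is conserved between the two species (deposition is ignored
# here). The rate constant and initial burden are hypothetical.
import math

def so2_to_sulfate(so2_0, k_per_hr, hours):
    """Return (SO2 remaining, sulfate-S formed) after `hours`,
    in the same sulfur units as so2_0."""
    so2 = so2_0 * math.exp(-k_per_hr * hours)
    return so2, so2_0 - so2

# One day of transport at an assumed conversion rate of 1% per hour.
so2, sulfate_s = so2_to_sulfate(so2_0=100.0, k_per_hr=0.01, hours=24)
print(round(so2, 1), round(sulfate_s, 1))  # about 78.7 and 21.3
```

The linearity is what makes the result scale directly with emissions; when oxidant supply is limiting, the rate constant itself depends on concentrations and this simple form breaks down.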

Administrative Feasibility

Most air pollution control agencies employ persons who have used simple Gaussian plume air-quality models. These personnel can easily learn to use simple trajectory models as well. Trajectory analyses based on interpolation of wind observations usually require the use of an engineering workstation, and large-scale problems can require the use of a mainframe computer. The analyses can be performed by a trained computer technician and can be applied to ETA by air-quality technicians. Conclusions drawn from individual trajectories should be reviewed by a trained meteorologist who can identify inconsistencies in estimated trajectory paths.

Advanced hydrodynamic modeling and the associated transport estimation require personnel who can initialize and apply the necessary algorithms. These models could be beyond the capabilities of some regulatory agencies. Such analyses yield useful estimates of the three-dimensional structure of plume travel and are best applied to individual episodes of unusually heavy pollution; they are necessarily more expensive than simple two-dimensional trajectory analyses.

Transport models that incorporate linear chemistry require considerably fewer resources than do the fully explicit coupled-transport and chemical models described later. Nonetheless, the choice of linear proportionality constants and the application of the codes require an understanding of the chemical and physical processes that govern the transport, transformation, and deposition of the pollutants. Linear transformation models unaccompanied by special measurements of all model parameters are of questionable value in the estimation of haze sources, because uncertainties in these parameters limit confidence in model simulations.

Transport modeling calculations can be used within each of the regulatory frameworks discussed earlier. Transport models that predict pollutant concentrations could be useful in predicting the effects of new sources. Trajectory models that show the paths of air parcels from source to receptor lend themselves to graphic displays and video animations that are easy to communicate to decision makers. Gaussian plume models are widely used by regulatory agencies, and some standardization has occurred that facilitates communication between different jurisdictions that use the same modeling tools. The more advanced trajectory and hydrodynamic modeling methods could require extensive model validation studies before they are understood well enough for regulatory use.

Economic Efficiency

The results of simple trajectory streaklines alone are not conducive to coupling with linear programming economic optimization methods. However, ensemble trajectory analysis that yields estimates of linear source-receptor relationships, as well as Eulerian or Lagrangian transport models that compute pollutant concentrations and can incorporate linear chemistry, can be combined with such economic optimization models (see Harley et al., 1989).

Flexibility

Trajectory modeling as described here is a receptor-oriented tool used to diagnose the likely sources of observed pollutant concentrations. As such it cannot be applied to unbuilt sources except to demonstrate the likelihood of transport from the new source to a receptor. Transport models that compute pollutant concentrations (with or without the addition of a linear transformation and deposition module) can be used to estimate the expected change in concentrations with a changing source configuration, assuming fixed meteorologic conditions and a linear response in transformation rates.

Balance

Because the personnel required to apply simple trajectory models and Gaussian plume pollutant concentration models are available to most air pollution control agencies, the use of these models in the context of a visibility protection program is not likely to create an imbalance between personnel needs and available staff. The more advanced trajectory and hydrodynamic modeling methods can require both a team of highly skilled specialists and a concurrent field experiment program to acquire data for model application and validation. Care must be taken when using the advanced transport models to match the needs for field experiments with the input data requirements of such models. The temptation to run improved theoretical models using inadequate input data should be resisted.

Mechanistic Models for Transport and Explicit Chemistry

Comprehensive mechanistic air-quality models are three-dimensional representations of physical and chemical atmospheric processes. They simulate the interactions of many chemicals and analyze processes on the time scale of air pollution episodes. The most detailed models now under development explicitly treat airborne particle mechanics in addition to gas-phase transport, removal, and chemical processes. These models are extensions of regional models originally developed to predict gaseous pollutant concentrations. Consequently, much of this critique is an assessment of the existing regional gas-phase models on which the aerosol models are based. The critique focuses on current approaches to aerosol modeling rather than on fully operational, three-dimensional Eulerian regional aerosol and visibility models, because such models are not yet available. See Appendix C for further discussion of mechanistic models.

Models that are based on physical and chemical atmospheric processes can be used to determine source-receptor relationships and to assess the merits of various pollution control strategies. Statistical relationships or other empirical models based primarily on aerosol and optical measurements might be inadequate for this task, especially when complex chemical processes such as secondary-particle formation are involved. Mechanistic modeling requires substantially more resources than do other source apportionment techniques since large mainframe computers usually are needed to manage the model and extensive, high-quality data are needed for model execution and evaluation. These factors, along with other issues associated with technical and administrative feasibility, economic efficiency, flexibility, and balance, need to be considered when determining the range of applicability of the mechanistic modeling approach in source apportionment.

Mechanistic models can be envisioned that provide definitive source apportionment for visibility impairment in complex airsheds with multiple sources, but these models are not yet available. The most serious barrier to their development is an inadequate understanding of the phenomena that determine atmospheric particle size distributions. Theoretical descriptions of aerosol processes that occur in controlled laboratory experiments are well advanced, but the relative importance of competing processes as they occur in the atmosphere is not fully understood. The atmospheric data sets needed for the development and testing of advanced aerosol models are not readily available. Such information can be obtained only through experimental studies of atmospheric aerosol phenomena.

Technical Adequacy

Mechanistic models are theoretically grounded in fundamental physical and chemical principles. Mathematic descriptions of transport, chemical reactions, removal mechanisms, and airborne particle dynamics are derived from basic principles. Models that explicitly include aerosol dynamics would provide the most detailed analysis of particle size and composition distribution. Because particle size distribution is an important determinant of visibility impairment, such models could be extremely useful. However, these models are hard to develop because of mathematic difficulties and difficulties associated with imperfect understanding of atmospheric aerosol physics and chemistry. Complete models for predicting particle size distributions and chemical composition are not available for regulatory use.

There are simpler models that can predict from first principles the total concentrations of some secondary species (sulfates and nitrates) as well as primary particles, but they cannot predict particle size distributions. Those models could provide detailed information about the source contributors to atmospheric pollution. Translation of their findings into source effects on visibility requires simplifying assumptions about particle size distributions, in particular that the secondary particles are in the fine-particle size regime (i.e., diameters less than 2.5 µm) with a size distribution like that observed during field experiments. Although the use of assumed size distributions is attractive for practical reasons, there are serious doubts about the accuracy of visibility model calculations based on typical size distributions rather than distributions specific to the time and place being studied. Larson and Cass (1989), for example, examine the use of size distribution data taken at Pasadena, California, to "shape" the submicron size distribution of bulk fine-particle sulfates, nitrates, and carbon concentration data taken elsewhere in Southern California prior to entering the data into Mie theory light-scattering calculations. Their results show that imposing typical size distributions on bulk chemical composition data (an analogue for the bulk chemical composition predictions of today's secondary-particle models) produces light extinction predictions that are in many cases far inferior to the results obtained if the size distribution and chemical composition data are taken concurrently. If models that predict bulk fine-particle chemical properties (but not size distributions) are to be used as part of a visibility study, and if the accuracy generally expected of mechanistic models is to be maintained, then the translation of pollutant concentration predictions into effects on visibility will probably need to be supported by size distribution measurements made at the time and place studied.
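
In practice, the translation step described here is often reduced to a linear reconstruction: each species' bulk mass concentration is multiplied by a mass extinction efficiency derived from an assumed size distribution. The sketch below illustrates that simplification; the species list, efficiency values, and sample concentrations are all hypothetical round numbers, not recommended constants.

```python
# Illustrative translation of bulk fine-particle concentrations into a
# light-extinction estimate. The dry mass scattering/absorption
# efficiencies (m^2/g) stand in for a full Mie calculation over an
# assumed size distribution; all values are hypothetical.

ASSUMED_EFFICIENCY = {          # m^2 per g of dry aerosol mass
    "sulfate": 3.0,
    "nitrate": 3.0,
    "organic_carbon": 4.0,
    "elemental_carbon": 10.0,   # dominated by light absorption
    "soil_dust": 1.0,
}

RAYLEIGH_SCATTERING = 0.01      # km^-1, particle-free air near sea level

def extinction_km(conc_ug_m3):
    """Total extinction coefficient (km^-1) from species mass
    concentrations in micrograms per cubic meter.
    1 ug/m^3 x 1 m^2/g = 1e-6 m^-1 = 1e-3 km^-1."""
    b_ext = RAYLEIGH_SCATTERING
    for species, conc in conc_ug_m3.items():
        b_ext += ASSUMED_EFFICIENCY[species] * conc * 1e-3
    return b_ext

sample = {"sulfate": 2.0, "nitrate": 1.0, "organic_carbon": 1.5,
          "elemental_carbon": 0.3, "soil_dust": 5.0}
print(round(extinction_km(sample), 4))
```

The assumed-versus-measured size distribution question raised by the Larson and Cass comparison is hidden entirely inside the efficiency constants: change the assumed distribution and every efficiency changes with it.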

Whether the model includes aerosol dynamics explicitly and calculates the composition-dependent particle size distribution or assumes that secondary species exist as fine particles with an empirically determined size distribution, the model output can readily be made compatible with optics models. The main outputs from the regional models under development are gas and particle concentrations. Particle size information either will be explicitly calculated or will be assumed to be described by representative size distributions. In addition, the particle and gas concentrations will be available over the gridded modeling domain for prescribed time periods with hourly resolution or better. This detail provides information that is sufficient for detailed optics model calculations for any time of the day and for any sight path in the modeling domain.

Input data requirements for mechanistic models with high spatial and temporal resolution include data about weather patterns, pollutant emissions, and gas and particle concentrations. Meteorologic data for input into the chemical transport model can be obtained most effectively from a combination of observed data and a dynamic model (Seaman and Stauffer, 1989). Emissions and ambient data for particles and precursor gases are harder to acquire than are meteorologic data. Although sulfur and NOx emissions and ambient concentrations are widely available for the United States, data on ammonia, organic particle precursors, and primary fine-particle species and size distributions are much less comprehensive or reliable (Placet et al., 1990).

Acquiring accurate input data for primary particle size distributions is technically difficult and expensive for regional models, and the lack of such information frequently limits the accuracy of model predictions. It is especially difficult to obtain accurate information on particles from fugitive sources, such as windblown dust and smoke from wild fires or controlled burns. In practice, size distributions of primary-particle emissions are often based on crude estimates. The accuracy of mechanistic visibility models can be improved significantly by ensuring that primary-particle emissions are carefully characterized.

Evaluation of complex mechanistic models is a multifaceted exercise. Individual processes can be evaluated independently using laboratory or well-defined field data. The entire model is best evaluated by using carefully designed, high-quality field observations of gas-phase precursors and particle size distributions along with the emissions and meteorologic information for the temporal and spatial scales of interest.

Unfortunately, much current work on mechanistic model development is not tied closely to field verification programs. Many atmospheric aerosol phenomena are poorly understood and cannot be accurately predicted with mechanistic models. Examples include the formation of fine particles by nucleation; the physical and chemical properties of organic aerosols, including their affinity for water and their distributions between the gas and particle phases; the effect of heterogeneous (in-cloud and non-cloud) chemical transformations on the size distribution of secondary sulfates and the degree to which various chemical species are internally and externally mixed; and the associated effect of mixing characteristics on particle dynamics. The effort required to understand such phenomena undoubtedly exceeds the effort required to develop the models.

Mechanistic visibility models can, in principle, achieve useful and precise source separation provided that detailed emissions inventories are available for the important pollutants. The models can be designed with enough flexibility to accommodate the maximum feasible information on sources. The high costs of acquiring model input data that provide precise source and chemical discrimination place important limits on the use of such models.

Comprehensive mechanistic models generally are used to study individual air pollution episodes of a few days' duration. For applications where information over longer periods is needed, such as annual or seasonal averages, the mechanistic models could be executed for the entire period of interest. The computer time required to perform adequate analysis for periods even as long as a season could be prohibitively expensive. If this is the case, model runs that represent carefully selected episodes during the period of interest could be executed and the results averaged to provide the desired long-term values. Procedures for selecting and weighting representative episodes need to be developed for visibility modeling.

Mechanistic models can respond to the full range of geographic settings likely to be encountered, subject to the availability of suitable input data. Mechanistic transport and chemistry models can operate on domains comparable to or smaller than the operational domains of the models that provide meteorologic input data for the mechanistic models. Meteorologic models that successfully operate over all of North America have been demonstrated. Consequently, mechanistic models have been and can continue to be successfully applied to whole regions of the United States. Successful application of these models to domains with complex terrain depends on the adequacy of the data provided by the meteorologic models. For example, detailed mechanistic modeling studies of visibility in the Grand Canyon would require meteorological data that describe air flow channeled by the canyon walls as well as air flow over the rim of the canyon. These sorts of detailed small-scale phenomena are not addressed explicitly in models with large (>20 km × 20 km) grids.

The three-dimensional structure of mechanistic models allows them to respond to the full range of source configurations. In particular, the effects of ground level versus elevated sources can be addressed. Generally, comprehensive mechanistic models can be designed to accommodate the details of plume rise and mixing of sources. In addition, grid sizes can be chosen to provide adequate resolution. This detail provides a means for evaluating a variety of sources.

Within the Eulerian framework, individual plumes generally are averaged over a grid cell. Therefore, Eulerian grid-based mechanistic models might not be suitable for studies of single-source plume-dominated problems. The influence of individual plumes can be examined through special treatment of subgrid-scale processes or, for the case of sulfate, by "tagging" the plume (Chang et al., 1990a).

Systematic error analysis in the context of mechanistic modeling must account for errors associated with mathematic approximations made to simplify computations, with the assumed size distributions and mass emissions rates of primary particles, and with parameterizations used to describe the effects of atmospheric chemical and physical processes on particle size distributions. Approaches to error analysis that have been proposed for gas-phase models can, in principle, be extended to evaluate visibility models. However, reliable estimates of uncertainty will require a more thorough understanding of atmospheric aerosol processes as well as comparisons between atmospheric observations and model simulations. It is not known how accurately atmospheric optical properties can be calculated directly from source emissions data using fully mechanistic models.

Mechanistic modeling is in principle a highly equitable source apportionment method, provided that emissions inventories are detailed enough. All emission sources are treated by the same set of computational rules, which are derived from fundamental physical principles.

The emissions from some source classes, such as fugitive dust sources, are difficult to specify in the context of the gridded emissions inventories used by mechanistic models. Any such errors in emissions input data will be reflected as a bias in the model results. Errors also will occur as a result of uncertainties about atmospheric aerosol processes. In principle, the limitations introduced by potential or known errors could be identified by sensitivity analysis.

Administrative Feasibility

The administrative feasibility of using mechanistic visibility models for regulatory purposes will be governed by the resources that these models require. Mechanistic models require substantially more computer and personnel resources than do other source apportionment methods.

Resources are needed to obtain data on emissions, meteorology, chemical concentration, and atmospheric optics for model initialization, execution, and evaluation. The overall study design and analysis require additional resources. These requirements cannot be quantified precisely, because they vary with the scope of the application, the availability of data, the treatment of particle dynamics, and the availability of knowledgeable personnel.

Full-scale mechanistic models must therefore be used judiciously. Because of resource limitations, it is advisable to consider developing a range of analytical tools, including the complete mechanistic model and other simpler model versions. The comprehensive model could establish the range of applicability of the simpler models; the less resource-intensive models could then be used to examine the effects on visibility of different policy options.

Mechanistic models can be used in all of the regulatory frameworks discussed earlier. They are particularly well-suited for predicting the effects of new or modified sources on visibility because the models are not based on existing patterns of emissions. These models also are valuable for determining source-receptor relationships when secondary-particle formation is important.

Mechanistic models are well suited for assessing a wide variety of emissions control strategies. Because emissions must be clearly described as input data to the model, it is generally straightforward to simulate the effect on emissions and air quality of specific source controls. Also, because the models can provide high spatial resolution over large domains, they can be used directly to evaluate multijurisdictional visibility problems.

Economic Efficiency

Mechanistic models can be used, in principle, to support economic analyses that seek the least expensive combination of regional air-quality controls. Because of the detailed source description built into these models, nearly any control policy can be simulated. However, because these models are highly nonlinear chemically, their use in the search for economically attractive combinations of emissions controls is a tedious and expensive process that involves systematic perturbation of the model inputs. Linear programming techniques used with simpler air-quality models generally cannot be used with the nonlinear mechanistic models.

Flexibility

Mechanistic modeling is particularly flexible in that new sources can be incorporated easily into the modeling structure. Furthermore, models can be structured to permit the incorporation of new information on the behavior of airborne particles. The information provided by these models can be used to calculate a range of visibility indexes, including aerosol optical properties such as light extinction, color, texture, and contrast.

Balance

Mechanistic models demand a balance between field data acquisition and computational analysis. Problems can arise because both of these activities require a high level of effort. To analyze one regional visibility problem, several years' effort by a team of experts could be required for field data acquisition, followed by several years' effort by a team of computational experts to digest those data through the modeling system. If either aspect of the analysis is cut short because of time or other resource constraints, the analysis probably will not work. If the proper balance between field data collection and modeling support cannot be assured, then agencies should take a less intensive approach that can be completed with the time and budget allowed. Technical input to the regulatory process often is needed in less time than that required for mechanistic model development and application. Any approach taken should ensure that the time and resource demands of mechanistic models do not distort individual regulatory proceedings. The construction and testing of such models could be pursued as a long-term goal, so that verified base case analyses of regional visibility problems could be developed before any particular emissions control policies had to be tested. These completed case studies would be available for use in the rapid evaluation of emergent policy issues.

Summary

The major advantage of mechanistic models is that they simulate chemical and physical processes in three dimensions and therefore are suitable for predicting the effects of new and modified sources. These models also can be used to evaluate a range of visibility indexes, including extinction, sight path, and scene-specific information. Because of this versatility, mechanistic models provide analyses that can be more readily linked to human judgments of visibility changes than do many simpler models. Coordination of model development with atmospheric studies could lead to experimental designs that answer important modeling questions and could also ensure that models are closely tied to observations.

The accuracy of current models is limited by incomplete understanding of atmospheric aerosol phenomena, especially particle size distribution. Consequently, mechanistic modeling for source apportionment of visibility impairment is still in its infancy. Models will need to evolve substantially to be valid over the range of conditions found in the atmosphere. It also is essential to verify model components and model simulations by comparison with field observations. The tendency to develop aerosol models in isolation from field measurements should be avoided.

Resource requirements will limit the direct use of complex mechanistic visibility models in some applications. The role of mechanistic models needs to be considered carefully for each application. One approach would be to use less advanced models to answer simple questions quickly and reserve those questions that require high accuracy for study by more complex mechanistic models. Such options need to be explored when determining the role of mechanistic models in a national program for visibility protection.

Hybrid Models

Hybrid models, which are described in Appendix C, are formed by combining two or more techniques. Models relating source emissions to ambient visibility are not rigid structures, but flexible assemblages of modules that are often interchangeable. The alternative modeling approaches discussed in Appendix C represent only an illustrative few of those that can be assembled from available modules. This section sketches some considerations involved in the construction of hybrid models, and illustrates some of the possibilities.

A source-receptor model for visibility must account for three factors that together determine the relationship: transport, transformation, and optical impact. These factors are listed in causal order: each factor affects, but is not significantly affected by, the factor after it. Modules for an aerosol's optical impact were described in Chapter 4, so we focus here on transport and transformation.

Transport governs the dilution of all aerosol species, and completely determines the source-receptor relationship for long-lived primary species. Windfield-driven ("source-oriented") transport modules estimate dilution from meteorological conditions, the bases for the estimates ranging from sophisticated hydrodynamic theory to simple empirical parameterizations. Mass balance ("receptor-oriented") models derive dilution directly from observed ambient concentrations of conserved substances.
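
The receptor-oriented idea can be made concrete with a toy chemical mass balance calculation: observed ambient species concentrations are expressed as a linear combination of source composition profiles, and the source contributions are recovered by least squares. The two sources, four species, profiles, and measurements below are invented for illustration only.

```python
import numpy as np

# Toy chemical mass balance (CMB) receptor calculation. Ambient
# species concentrations are modeled as profiles @ contributions,
# where each profile column gives the mass fraction of each species
# per unit mass contributed by that source. All numbers hypothetical.

species = ["Si", "Fe", "SO4", "EC"]
profiles = np.array([
    # soil dust  coal plume
    [0.30,       0.01],    # Si
    [0.05,       0.02],    # Fe
    [0.01,       0.40],    # SO4
    [0.01,       0.05],    # EC
])
ambient = np.array([3.1, 0.7, 4.1, 0.6])   # ug/m^3, hypothetical

# Ordinary least squares; operational CMB codes weight each species by
# its measurement uncertainty and test for collinear source profiles.
contrib, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
print(contrib)   # mass contributed by each source, ug/m^3
```

Note that the fit recovers dilution implicitly: the contributions are ambient mass concentrations, so no windfield information is required as long as the profile species are conserved between source and receptor.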

Transport also governs the atmospheric age of emissions, along with exposure to solar radiation, liquid water, and other factors that modulate gas-to-particle conversion. While the age of an air parcel can be estimated through chemical mass balance techniques in certain circumstances, that information is generally available only from windfield modules.

Atmospheric transformations eliminate significant fractions of some species, convert others from the gas to the particle phase, and distribute condensed material among particles of differing size and composition. Many of the conversion and loss mechanisms are approximately linear in emissions, in the sense that the fraction of emissions converted or lost in unit time is nearly independent of concentration. The conversion of SO2 to SO4= in clear air is often represented as an example of such a process.
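
An approximately linear conversion of this kind is first-order kinetics: a fixed fraction of the remaining gas reacts per unit time, independent of concentration. The sketch below treats clear-air SO2 oxidation that way, with a competing first-order removal path; the rate constants are illustrative values, not measured rates.

```python
import math

# First-order (linear) gas-to-particle conversion sketch: a constant
# fraction of the remaining SO2 is converted or removed per unit
# time. Rate constants below are hypothetical.

K_CONVERSION = 0.01   # fraction of SO2 converted to sulfate per hour
K_DEPOSITION = 0.005  # fraction of SO2 removed by deposition per hour

def so2_remaining(so2_0, hours):
    """SO2 left after first-order conversion plus removal."""
    return so2_0 * math.exp(-(K_CONVERSION + K_DEPOSITION) * hours)

def sulfate_formed(so2_0, hours):
    """SO2 converted to sulfate (expressed as SO2-equivalent mass):
    the conversion pathway's share of all SO2 lost."""
    lost = so2_0 - so2_remaining(so2_0, hours)
    return lost * K_CONVERSION / (K_CONVERSION + K_DEPOSITION)

print(so2_remaining(100.0, 24))   # SO2 surviving after one day
print(sulfate_formed(100.0, 24))
```

Because the equations are linear in the initial concentration, contributions from separate sources can simply be superposed, which is precisely what makes this class of process tractable outside a fully coupled model.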

In the most general case, transformations involve some irreversible nonlinear processes. The details of particle size distribution and morphology are likely to reflect such influences. The contents of an air parcel then depend on its dilution history, and transport and transformation are inextricably coupled. The ambient impact of multiple sources in this situation can be accurately captured only by a fully mechanistic Eulerian model.

Numerous self-contained modules for transport, chemical transformation, pollutant dry deposition, and distribution of pollutants between gas and particle phases are presently available as documented and portable computer software. Various source-receptor models can be assembled from these components, cafeteria fashion, and this is in fact the way that present state-of-the-art single-source models are constructed.

The atmospheric aerosol often consists of a mixture of chemically distinct substances emitted from dissimilar sources that have been mixed into the same air mass. Different contributors to this aerosol are often most easily tracked by different air-quality models. For example, contributions from fugitive soil or road dust sources are easily identified by chemical mass balance methods; transport models that use fluid mechanics have great difficulty representing fugitive dust sources correctly because accurate emissions inventories for these sources are hard to assemble. On the other hand, secondary-particle formation can be tracked by certain mechanistic models for atmospheric transport and chemical reaction, whereas chemical mass balance models that are fully satisfactory for secondary-particle source apportionment have not been developed. A combination of these two modeling approaches is potentially more accurate and efficient than either method would be if used separately. Alternatively, mechanistic modules can be incorporated into rollback and receptor models. An example of the latter would be the use of an equilibration module within a speciated rollback model to relate ammonium nitrate concentrations to NOx and ammonia sources.
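
A toy version of such an equilibration module might partition total ammonia and nitric acid between gas and particle phases by driving their gas-phase concentration product down to a dissociation constant K. Real modules compute K from temperature and relative humidity; in this sketch K and the totals are arbitrary inputs in consistent molar units.

```python
import math

# Toy ammonium nitrate equilibration: particle-phase NH4NO3 forms,
# consuming equal molar amounts of NH3 and HNO3, until the gas-phase
# product falls to the dissociation constant K. All inputs arbitrary.

def nh4no3_formed(total_nh3, total_hno3, K):
    """Moles of NH4NO3 formed per volume from total (gas + particle)
    NH3 and HNO3. Solves (total_nh3 - x)(total_hno3 - x) = K for the
    physical root 0 <= x <= min(total_nh3, total_hno3)."""
    if total_nh3 * total_hno3 <= K:
        return 0.0                      # too dilute: all gas phase
    b = total_nh3 + total_hno3
    disc = b * b - 4.0 * (total_nh3 * total_hno3 - K)
    return (b - math.sqrt(disc)) / 2.0  # smaller root is physical

print(nh4no3_formed(10.0, 6.0, 16.0))
```

Embedded in a speciated rollback model, a routine like this would let proportional changes in NOx and ammonia emissions be mapped nonlinearly onto ammonium nitrate concentrations instead of assuming direct proportionality.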

Because each candidate method likely to be considered as part of a hybrid has been critiqued separately, it is not necessary to repeat that discussion here. When evaluating the merits of a particular hybrid model, the user should list the elements of a complete description of the airborne particles (sulfates, nitrates, organic carbon particles, and soil dust, among others, as well as size distribution and other physical properties relevant to subsequent optical calculations). Then it should be determined which method is proposed as the basis for describing each subset of aerosol properties. A check should be made to ensure that all relevant aerosol properties are described by some portion of the model. The user can then refer to the critiques of individual methods to assess the strengths and weaknesses of each part of the model.

SINGLE-SOURCE MODELING PROBLEMS

The main thrust of the committee's advice has been directed at the description and evaluation of models relevant to the analysis of regional haze problems. We believe that the nation's most widespread and important visibility problems are in fact regional haze problems and that tools should be selected that address the real problem. However, as a practical matter, we realize that some important decisions will be made in the near future under present regulatory paradigms that focus on analysis of proposed new sources one source at a time. For that reason, we feel that it is useful to provide advice on model selection in those cases that focus attention on a single source.

First, we will set the stage for this discussion by detailing some of the important distinctions between the single-source and regional haze perspectives on a visibility problem. Then we will provide a brief critique of available models and modeling approaches relevant to single-source problems.

Widespread Haze and Plume Blight

There are two distinct ways that pollutants can impair visibility:

  • When an observer is inside an extensive polluted airmass, this airmass will degrade the appearance of the surrounding scenery. The airmass itself will appear to the observer as a diffuse haze obscuring distant objects.

  • When an observer is outside a well-defined plume of polluted air, the plume will appear as an identifiable object, defacing the sky or background scenery. The visual effect of such plumes is referred to as plume visibility or plume blight.

Widespread haze and plume blight are idealized categories that do not exhaust the range of possible forms of visibility impairment. They can be viewed as extreme cases between which there is a continuum of possibilities for visibility impairment. However, such an all-encompassing approach does not offer much promise for the development of operational tools. At a practical level, the concepts of haze and plume blight serve as adequate frameworks for most analyses.

The most pervasive and pressing threat to visibility in Class I areas is regional haze arising from the combined emissions of many sources. As discussed in the committee's interim report (NRC, 1990), there are also concerns in some near-pristine regions over the potential for widespread haze from large individual sources. The committee has therefore focused on models used to determine the sources of multi- and single-source haze rather than plume blight. Under current guidelines, however, the main tools for judging the effects of proposed new sources are plume blight models, which explicitly assume that the observer is located outside the plume (EPA, 1980a, 1988a). Federal land managers typically base their permitting decisions on the potential visibility of a proposed source's emissions to observers in relatively clean air, rather than the contribution of these emissions to widespread haze. Since plume blight models are used widely and can yield useful information about the visibility effects of some sources, the committee has evaluated these models as well. However, the committee does not support the predominant use of plume blight models to determine the effects of individual sources on visibility. As discussed elsewhere in the report (see Chapter 7), if significant progress is to be made in preventing and remedying visibility impairment in Class I areas, source contributions not just to plume blight but also to regional haze must be considered.

Widespread haze and plume blight typically occur on different spatial scales, are often attributable to different pollutants, and differ markedly in their sensitivity to atmospheric and observation conditions (see Chapter 4). Hazes affecting national parks and wilderness areas are usually meso- to synoptic-scale phenomena, with extents of 100 km and up. The plumes responsible for plume blight usually occur closer to a source (EPA, 1988a), although they may be seen at great distances through clean air. Haze consists primarily of particles that scatter and absorb light, whereas plume blight is often due largely to light absorption by nitrogen dioxide (NO2) gas (White et al., 1985). Haze can form under a variety of atmospheric conditions. Plume blight, in contrast, is sensitive to stability and other airmass characteristics (EPA, 1988a). Emissions that form a dramatic elevated plume in stable early-morning air can disappear from view by late morning, when they have been entrained and diluted by the deepening mixing layer. Even optically thick plumes can be difficult to see if the background air is itself hazy. Widespread haze and NO2-dominated plume blight are fairly insensitive to illumination and are moderately sensitive to sun-target-observer geometry. Plume blight due to light scattering by particles is extremely sensitive to both factors (White and Patterson, 1981; White et al., 1986). The passage of a cloud across the sun can instantly change the color of a particle plume from bright white to dark brown or cause it to disappear. Changes in viewing angle can have similar effects.

The differences between widespread haze and plume blight lead to differences in the models designed to determine their effects. Since plume blight occurs near sources, plume blight models treat advection and dispersion much more simply than regional models do. The EPA models rely on standard Gaussian plume parameterizations developed to predict ground-level effects (EPA, 1970). Some of these models have been described and evaluated by Latimer and Samuelsen (1978), Eltgroth and Hobbs (1979), Bergstrom et al. (1981), and White et al. (1985). Because plume blight occurs near sources and is often of concern in clean, dry air, plume blight models also treat atmospheric chemistry fairly simply. The EPA screening model (EPA, 1988a), for example, neglects the conversion of SO2 to sulfate. Even more sophisticated models (e.g., EPA, 1980a) incorporate only a rudimentary mechanism for the gas-phase oxidation of SO2 to form sulfate particles, along with minimal O3-NOx chemistry for the production and loss of NO2. Plume blight models do, however, treat primary particle emissions in considerable detail (White and Patterson, 1981; White et al., 1986).
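
The Gaussian plume formulation underlying these models can be sketched in a few lines. The dispersion coefficient here uses a single crude power-law fit rather than the stability-class-dependent curves of the operational EPA models; every parameter value below is illustrative, not a published constant.

```python
import math

def sigma_m(x_m):
    """Crude plume spread (m) vs. downwind distance; a hypothetical
    near-neutral power-law fit, not a published dispersion curve."""
    return 0.08 * x_m ** 0.9

def plume_concentration(q_g_s, u_m_s, x_m, y_m, z_m, h_m):
    """Concentration (g/m^3) at downwind distance x, crosswind offset
    y, and height z, for a point source of strength q (g/s) in wind
    u (m/s) with effective stack height h. Includes the image-source
    term for reflection off the ground."""
    sy = sz = sigma_m(x_m)
    crosswind = math.exp(-y_m**2 / (2.0 * sy**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2.0 * sz**2)) +
                math.exp(-(z_m + h_m)**2 / (2.0 * sz**2)))
    return q_g_s / (2.0 * math.pi * u_m_s * sy * sz) * crosswind * vertical

# Plume-centerline concentration 10 km downwind of a 100-g/s source
print(plume_concentration(100.0, 5.0, 10_000.0, 0.0, 150.0, 150.0))
```

A plume blight calculation would integrate concentrations like these along a sight path through the plume to obtain optical depth, rather than evaluating them at ground level as a ground-concentration model does.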

The visual effects of approximately uniform haze are adequately characterized for many purposes by a single number, the light scattering or total extinction coefficient. The effects of plume blight, however, depend on the scattering coefficient, absorption coefficient, and scattering phase function (giving the angular distribution of scattered light) of both plume and background, along with the width, distance, and elevation of the plume (Latimer and Samuelsen, 1978; White and Patterson, 1981; White et al., 1986). Plume blight models therefore require detailed calculations of radiative transfer.
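
To make these radiative quantities concrete, the contrast of an idealized, purely absorbing plume viewed against the horizon sky reduces to a simple function of the plume's optical depth. The sketch below uses that textbook simplification with made-up numbers; an actual plume blight model must also treat light scattered into the sight path, the phase function, and the background haze.

```python
import math

def plume_contrast(b_ext, width_m):
    """Contrast of an idealized purely absorbing plume seen against the
    horizon sky: C = exp(-tau) - 1, where tau is the optical depth along
    the sight path. C is negative (a dark plume); C = 0 means the plume
    is invisible. Light scattered into the beam is neglected, so this is
    a lower-bound sketch, not a radiative transfer calculation.

    b_ext   : plume extinction coefficient (1/m)
    width_m : path length through the plume (m)
    """
    tau = b_ext * width_m
    return math.exp(-tau) - 1.0

# A 500-m-wide plume with b_ext = 1e-4 per meter (tau = 0.05):
c = plume_contrast(1e-4, 500.0)   # a faint dark plume, C near -0.05
```

Even this simplification shows why plume width and extinction enter the models jointly: only their product, the optical depth, matters along a given sight path.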

Measurement strategies for characterizing widespread haze and plume blight also differ. The composition of a plume from a tall stack can be difficult to ascertain without airborne measurements (Richards et al., 1981). In-stack measurements can be unreliable indicators of plume particle characteristics, and ground-level measurements of the entrained plume can be dominated by background. Haze, on the other hand, is routinely sampled by fixed surface observatories (Chapter 4). Plume blight is by definition remotely sensed; it is most naturally documented by teleradiometer or camera (Seigneur et al., 1984). While haze can also be remotely sensed, it is more reliably documented by ambient light extinction measurements.

As noted above, current regulatory programs rely heavily on plume blight models for judging the visibility effects of proposed new sources. However, a proposed source's potential for plume blight does not necessarily correlate well with its potential to form widespread single-source haze or with its contribution to regional haze. The question of correlations has not received much study, but certain qualitative observations can be made. In support of a correlation, a source's potential for both plume blight and haze increases with its emission rate. Moreover, incremental increases in both plume blight and haze are most noticeable in otherwise clean air. However, haze is due predominantly to secondary particles, while plume blight is mostly due to NO2 and primary particles. Source potentials for forming haze and plume blight are thus not necessarily comparable across source types emitting different pollutants. Furthermore, haze and plume blight are enhanced by different atmospheric conditions, so their incidence frequencies may not be comparable. Finally, plume blight cannot occur where local topography restricts sightpaths, while haze can occur anywhere.

Critique of Single-Source Plume Blight Models

Models for the visual appearance of coherent plumes from single sources fall into two categories. The simpler plume blight models recommended for use by EPA are modified Gaussian plume dispersion models developed for time-averaged estimates of plume concentrations and visual appearance. More advanced single-source models incorporate explicit chemical reactions and aerosol processes within the dispersing plume. These will be referred to as reactive plume models. See Appendix C for more information on both kinds of models.

Technical Adequacy

The simple plume blight models and more complex reactive plume models both are derived from solutions to the atmospheric diffusion equation and have a well-documented theoretical basis. Both types of models have been adapted for use with atmospheric optical models. However, the simpler models have several limitations rooted in their representation of atmospheric mixing, their abbreviated chemistry, and their geometric assumption that the plume must be seen from the outside. One weakness of single-source models in general arises from the sensitivity of the horizontal integral of light extinction to vertical dispersion. Given the instantaneous nature of plume perception, an accurate depiction of many plumes requires capturing their often looping or meandering appearance. Gaussian plume (averaged) models have a symmetric geometry that cannot capture such instantaneous and irregular plume structure. Even for the more advanced models that can depict meandering plumes, data on the temporal and spatial patterns of turbulence at the elevation of plumes are difficult to obtain.

The simpler plume blight models are most appropriate for use in cases (often near the source) where light extinction within the plume is due to primary particles and NO2. In cases where light extinction is dominated by secondary particles formed within the plume, a reactive plume model that can track particle formation should be used. In cases where the observer is located within the plume (within a plume traveling down a canyon, for example), a reactive plume model should be used in which the observer can be inside the plume.

The input data required by single-source models are difficult to obtain. Particle size distributions and water content measured at the high temperatures inside stacks before the pollutants are released to the atmosphere can be quite different from the size distribution and water content of particles beyond the tip of the stack where the plume cools rapidly. One would like to use the composition of the plume a few tenths of a kilometer downwind as the initial condition for a simple plume model. Such measurements are difficult to make; they can require the use of aircraft for sampling. One alternative is to sample from the stack, using a dilution sampler that cools and dilutes the effluent prior to measurement, thereby mimicking the cooling that occurs in the early stages of plume insertion into the atmosphere.

The EPA-recommended plume blight models and similar models proposed by others have been evaluated against data collected for this purpose at a variety of large point sources (White et al., 1985, 1986). Plumes were viewed against the sky in all cases. In all models, the accuracy with which plumes' essentially instantaneous appearance could be predicted was limited by the statistical treatment of fluctuating dispersion characteristics. When observed rather than predicted dispersion parameters were used as inputs to the plume chemistry and optics modules, the EPA and Environmental Research and Technology models satisfactorily reproduced the observed appearance of NO2-dominated plumes. The EPA and other models have been much less successful in reproducing the appearance of plumes that have significant particle loadings.

Reactive plume models have undergone limited evaluation against field data, largely because data sets sophisticated enough to support these models are hard to obtain. Most data sets do not include size-resolved plume particle composition; background particle concentrations; or background concentrations of ammonia, hydrogen peroxide, or reactive hydrocarbons. The most comprehensive evaluation study to date was carried out by Hudischewskyj and Seigneur (1989).

Single-source plume models obviously do not have to separate the effects of many contributing sources. The temporal resolution of single-source models is usually a few minutes or longer; actual plumes can vary in appearance in just seconds. The simpler, Gaussian plume-based visibility models cause difficulty when applied in rough terrain, because they cannot represent a plume that impinges directly on elevated features. Even when the plume does not impinge directly on such features, it can be difficult to see a plume when there is elevated terrain in the background. Two versions of EPA's plume visibility model are in general circulation, PLUVUE and PLUVUE II. PLUVUE II has been known for some time to have coding errors that render it essentially unusable in situations where the plume is viewed against terrain rather than sky. PLUVUE has recently been found to have an error that can cause it, too, to generate faulty output for terrain backgrounds. Both models are now being corrected by EPA (D. Latimer, pers. comm., Latimer and Associates, Denver, Colo.).

Administrative Feasibility

The personnel resources required for the use of simple Gaussian plume blight models are readily available to most air-pollution control agencies. Indeed, most agencies already perform calculations using these models as part of the new-source review process. The more complex reactive plume models must be applied by PhD-level personnel.

The simple Gaussian models are compatible with current regulatory programs for the prevention of significant deterioration of air quality in relatively clean areas; indeed, they are about the only tools used routinely for visibility analysis by regulatory agencies. The reactive models are not yet widely used within the regulatory community, but efforts in that direction should be encouraged because these models could address a wider range of conditions than can the simple plume blight models. Simulated photographs that show how plumes would look when viewed against the background sky can help communicate the results of plume models (Williams et al., 1980).

Flexibility

The Gaussian single-source plume blight models that have been adopted for regulatory use are fairly inflexible. To the regulator, this could appear advantageous in that all model users will have to make similar simplifying assumptions to apply those models to a particular problem. The disadvantage is that the assumptions (that straight plumes are formed, dominated by NO2 and primary particles) often do not correspond to the situation being modeled. The reactive plume models, while more difficult to apply, are better able to represent actual conditions. They also provide a more flexible framework for incorporating advances in the scientific understanding of plume structure and aerosol processes.

Balance

The balance between data collection and data analysis is not likely to be distorted when using simple Gaussian plume blight models, as the data and personnel resources needed for their use are modest. Programs that involve reactive plume models must be carefully structured. Funds must not run short before the experiments needed to acquire input data for a reactive plume model are completed, or the models will have to be run using assumed rather than measured emissions and ambient data.

Bridging the Gap between Near-Source Models and the Regional Scale

If a proposed single new source is made large enough, it may have significant effects on regional haze. A succession of large sources, each analyzed and built one at a time, can also affect regional haze in the aggregate. For that reason, even in single-source siting decisions, it may be necessary to consider effects on visibility at spatial scales greater than those addressed by plume blight models or reactive plume models. The clear way to do this is to introduce the proposed new source or modified existing source into an appropriate multiple-source regional-scale model chosen from those described in Appendix C and in the regional haze section of this chapter. This requires that data bases on regional air quality and visibility and on the other sources already present in the region be maintained. An interagency working group on air quality modeling recently has been formed to assess the feasibility of multiple-source regional air quality modeling conducted by public agency staffs in support of single new source siting decisions (P. Hanrahan, pers. comm., Oregon Department of Environmental Quality, 1992). Their preliminary conclusion is that this is highly desirable and technically feasible. A multiple-source Lagrangian puff model with linear chemistry probably will be selected for use initially, followed by a search for a model with a more explicit description of atmospheric chemistry and aerosol processes. The model will be driven by the U.S. Environmental Protection Agency's national emissions data base, presumably as modified by any more exact data available locally. Factors that will determine the rate of progress toward an operational model at the regional scale in the West include coordination among the various agencies and access to somewhat larger computers than most such agencies now own.
A training program will be needed to spread throughout the affected agencies the operational skills required to support such models. The most important missing piece of this system, within a regulatory context, is the lack of clear criteria for judging how much of an increment to a regional haze problem from a new single source is "too much."
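
As a schematic of the kind of Lagrangian puff calculation with linear chemistry contemplated above, the fragment below advects a single puff with a uniform wind, grows its Gaussian radius, and converts SO2 to sulfate at a first-order rate. The wind, growth, and conversion parameters are illustrative assumptions only, not values from any agency model.

```python
import math

def advect_puff(x0, y0, mass_so2, hours, u=5.0, v=0.0,
                k_conv=0.01, sigma0=100.0, growth=0.5):
    """Advance one puff hour by hour: advect it with a (here uniform)
    wind, grow its Gaussian radius, and convert SO2 to sulfate with a
    first-order rate. Returns final position, SO2 and sulfate masses
    (g), and puff radius (m). All parameter values are illustrative.

    k_conv : first-order SO2 -> sulfate conversion rate (1/hour)
    growth : puff spread growth (m/s), a linear-in-time assumption
    """
    x, y = x0, y0
    so2, so4 = mass_so2, 0.0
    sigma = sigma0
    for _ in range(hours):
        x += u * 3600.0            # advection (m per hour)
        y += v * 3600.0
        sigma += growth * 3600.0   # simple linear puff growth
        converted = so2 * (1.0 - math.exp(-k_conv))
        so2 -= converted
        # Each gram of SO2 converted yields 1.5 g of sulfate
        # (molecular weight ratio 96/64, same moles of sulfur).
        so4 += converted * 1.5
    return x, y, so2, so4, sigma

# One day of transport for a puff carrying 1000 g of SO2:
x, y, so2, so4, sigma = advect_puff(0.0, 0.0, 1000.0, 24)
```

A multiple-source model of the type discussed would release and track many such puffs per source, with winds interpolated from meteorological fields rather than held constant.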

SELECTION OF MODELS TO ADDRESS OTHER AIR-QUALITY PROBLEMS

Visibility degradation is just one of many important air-quality problems, some of which have been or are being considered as objects of federal, state, or local legislation and regulation. Control measures that focus on one issue, such as ecosystem damage caused by acid rain or the health effects of fine particles, can alleviate other air-quality problems, such as visibility degradation in Class I areas. This complicates the task of the regulator, who might be accustomed to dealing with individual issues separately.

The source apportionment models discussed in this report can provide the technical basis for determining the effects of proposed controls on the attainment of several air-quality goals. In analyzing several problems simultaneously, one must choose a model that can be applied to all the problems at hand. In many cases complex mechanistic models will describe a broad range of physical and chemical processes associated with the major gaseous and particulate pollutants. As they analyze visibility impairment, these models can simultaneously determine the concentrations, fluxes, and effects of primary emissions as well as secondary oxidants, acids, and aerosols.

Simpler approaches could be applied to analyses of limited scope. For example, simpler rollback models can be used to make a preliminary examination of the effect of sulfur reduction mandated by the 1990 Clean Air Act Amendments on visibility in Class I areas in the eastern United States (Trijonis et al., 1990). Such an analysis can provide useful information about visibility and acid deposition, because sulfate is a dominant component of eastern haze, and sulfuric acid is a major component of acid rain in the East. If NOx or volatile organic compound controls are to be evaluated, then more complex models would be preferred because the interactions associated with these chemicals are more complex.

The possible visibility effects of PM10 controls in nonattainment areas also illustrate the complications that arise when multiple regulatory goals and multiple sources are taken into account. The question in this case is whether controls adopted for countywide PM10 abatement are adequate to produce acceptable visibility in nearby Class I areas. If a small PM10 nonattainment area is the only major source of airborne particles in the region, then there might be a simple way to assess the effects of local PM10 controls on broader scale visibility. However, if the local PM10 nonattainment area is only a small part of a large multiple-source region, then more complex modeling than that needed for a study of PM10 control will be required to sort out the various source contributions to the regional visibility problem.

An assessment of the potential benefits on visibility of existing and expected legislation and regulation aimed at alleviating other air-quality problems should be an integral part of the decision-making process. If visibility is considered in the selection of air-quality models that will be used to analyze those other problems, then the effects on visibility of policies directed at other air-quality problems are more likely to be taken into account.

SUMMARY

The committee evaluated several alternative methods that could be used alone or in combination to analyze the effects of individual emissions sources or source classes on atmospheric visibility. Empirical methods range from qualitative tools, such as photography, through models based on material balances, such as speciated rollback and chemical mass balance models, to techniques based on statistical inference. We also have described models that are derived from the basic equations that govern atmospheric transport and that sometimes include chemical reaction and aerosol processes.

Many of the more empirical models already have been developed almost to their fullest potential, and therefore a fairly clear picture of how these models could fit into a comprehensive source apportionment program is emerging. Photographic methods and other simple source identification systems provide an inexpensive way to qualitatively implicate single emissions sources that create visible plumes in or near Class I areas. Photographs, videotape, and film can provide sufficient evidence for regulatory action in simple cases, but they are not likely to be useful for source apportionment of widespread regional hazes.

Speciated rollback models linked to light extinction budget calculations represent perhaps the only complete system of analysis that can be used for regional haze source apportionment throughout the United States based on data available today. The feasibility of such analyses is demonstrated for several regional cases in Chapter 6: A speciated rollback model is used to develop a preliminary description of the likely origin of visibility impairment in the eastern, southwestern, and northwestern regions of the United States. The speciated rollback model is best suited to analyses of regional haze and is not suited to projecting the effects of changes in single members of a group of similar sources or the effects of proposed new sources in areas where the air is now clean. New sources are outside the realm of the past experience upon which rollback models are built.
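
The arithmetic of a speciated rollback model linked to a light extinction budget is simple enough to sketch directly. All species concentrations, extinction efficiencies, and emission ratios below are invented for illustration; they are not data for any actual region.

```python
def rollback_extinction(conc, background, efficiency, emission_ratio):
    """Speciated rollback linked to a light extinction budget.

    For each species, the controllable part of its concentration
    (current minus natural background) is scaled in proportion to the
    assumed change in regional emissions, then converted to extinction
    with a per-species efficiency (m^2/ug times ug/m^3 gives 1/m).
    All dictionaries are keyed by species name.
    """
    b_ext = 0.0
    for sp, c in conc.items():
        scaled = background[sp] + (c - background[sp]) * emission_ratio[sp]
        b_ext += efficiency[sp] * scaled
    return b_ext

# Illustrative case: halve sulfate-precursor emissions, hold others fixed.
conc = {"sulfate": 10.0, "organics": 4.0, "soil": 2.0}      # ug/m^3
background = {"sulfate": 0.5, "organics": 1.0, "soil": 1.0}  # ug/m^3
eff = {"sulfate": 3e-6, "organics": 4e-6, "soil": 1e-6}      # m^2/ug
ratios = {"sulfate": 0.5, "organics": 1.0, "soil": 1.0}
b_new = rollback_extinction(conc, background, eff, ratios)
```

The proportionality assumption embedded in the loop above is exactly what limits these models to perturbations of existing regional source mixes.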

Receptor-modeling methods are valid, well accepted, and widely available for source apportionment of primary particles. Regression analysis and chemical mass balance techniques each have appropriate uses; regression analysis requires less prior knowledge of emissions characteristics, but it is more susceptible to model specification errors and consequent biases. However, neither model is yet developed to the point of acceptance for the purpose of apportioning secondary particles (sulfates, for example) among sources. It is not known whether further research will lead to a fully successful receptor-oriented model for secondary-particle formation; certainly many attempts are being made to expand the applicable range of receptor models. Receptor models currently must be used in conjunction with other models for secondary-particle formation. This simultaneous use of more than one model is useful in many cases, and therefore one can expect chemical mass balance receptor models to be used as part of comprehensive systems of source apportionment (either as a primary tool for source apportionment or to check the consistency of findings obtained by other methods).
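
The chemical mass balance calculation referred to above amounts to fitting measured ambient species concentrations to a linear combination of source composition profiles. The sketch below solves the unweighted two-source case by ordinary least squares; actual CMB applications use many sources, measured profiles, and an effective-variance weighting. All profile numbers here are hypothetical.

```python
def cmb_two_source(f1, f2, c):
    """Chemical mass balance for two sources by unweighted ordinary
    least squares, via the 2x2 normal equations.

    f1, f2 : source profiles (mass fraction of each chemical species
             per unit mass emitted); c : ambient concentrations.
    All three are equal-length lists over the same species.
    Returns (s1, s2), the inferred source contributions.
    """
    a11 = sum(x * x for x in f1)
    a12 = sum(x * y for x, y in zip(f1, f2))
    a22 = sum(y * y for y in f2)
    b1 = sum(x * z for x, z in zip(f1, c))
    b2 = sum(y * z for y, z in zip(f2, c))
    det = a11 * a22 - a12 * a12
    s1 = (b1 * a22 - b2 * a12) / det
    s2 = (b2 * a11 - b1 * a12) / det
    return s1, s2

# Hypothetical profiles: a soil-like source rich in crustal elements,
# a smelter-like source rich in sulfur and trace metals.
soil = [0.30, 0.05, 0.001, 0.0]
smelter = [0.01, 0.02, 0.15, 0.05]
# Synthetic ambient sample built from 10 units soil + 4 units smelter:
ambient = [s * 10.0 + m * 4.0 for s, m in zip(soil, smelter)]
s_soil, s_smelter = cmb_two_source(soil, smelter, ambient)
```

Because the synthetic sample is an exact linear combination of the two profiles, the fit recovers the contributions (10, 4) exactly; real data, of course, include measurement error and unmodeled sources.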

Mechanistic source apportionment models for use in the analysis of regional haze problems are also in considerable demand. In principle, such models could contain a physically based cause-and-effect description of atmospheric processes that could provide detailed and accurate predictions. In practice, mechanistic models exist in pieces that have not been fully assembled and tested. Their completion and testing should be pursued as a high priority.

Some mechanistic models are available that present a partial picture of the effects of source emissions on pollutant concentrations. They can trace transport paths from source to receptor and in some cases can predict concentrations of secondary particles, such as sulfates and nitrates. These partial mechanistic models probably will find use in regional visibility analysis, but must be combined with other empirical methods (such as the imposition of measured particle size distributions) to form a complete source apportionment system. Whether these semiempirical, semimechanistic models can produce predictions that are more accurate than those of a fully empirical speciated rollback model is an interesting research question that should be investigated.

There is a clear need for a comprehensive and versatile single-source model for use in predicting the effects of large new sources. An advanced single-source model should not be restricted to Gaussian plume model formulations. Further research will be needed to create such a model, because current plume blight models do not include the atmospheric aerosol processes that lead to light scattering and absorption by particles. Efforts are underway by public agencies in the western states to develop the operational capability to evaluate the effect of proposed large single sources on regional scale visibility. The procedure is to maintain a baseline multiple source regional model into which data on a single new source can be inserted. This development effort should be encouraged.

In summary, we consider methods for source apportionment that are either available now or could be assembled from available components. Following our emphasis on a nested approach, in which models of increasing difficulty and accuracy are chosen, the most attractive systems are judged to be as follows:

Regional haze assessment:

  • Speciated rollback models;

  • Hybrid combinations of chemical mass balance receptor models with secondary-particle models;

  • Mechanistic transport and secondary-particle formation models used with measured particle size distribution data to facilitate light-scattering calculations.

Analysis of existing single sources close to the source:

  • Photographic and other source identification methods (in simple cases only);

  • Hybrid combinations of chemical mass balance or tracer techniques with secondary-particle formation models that include explicit transport calculations and an adequate treatment of background pollutants;

  • The most advanced reactive plume models available, hybridized with measured data on particle properties in such plumes and accompanied by an adequate treatment of background pollutants.

Analysis of new single sources close to the source:

  • The most advanced reactive plume models available, hybridized with measured data on particle properties in the plumes of similar sources and accompanied by an adequate treatment of background pollutants.

Analysis of single sources at the regional scale:

  • Insertion of the single source in question into an appropriately chosen multiple-source description of the regional haze problem.

The hybrid models mentioned above are available to the extent that the necessary pieces of the modeling systems exist. Any novel combination of existing models should be carefully evaluated.

For the reasons explained in the introduction to this chapter, most source apportionment studies would benefit from the use of several candidate models, and hence groups of models rather than single models are noted above. We emphasize that the skill and knowledge of the personnel executing a modeling study are often more important in determining the quality of the study than is the choice of the modeling method.

We recommend research to achieve several goals. First, fully developed mechanistic models for the chemical composition, size distribution, and optical properties of atmospheric particles and gases should be created and tested. Two types of mechanistic models are needed: an advanced reactive plume aerosol process model for analysis of single-source problems close to the source and a grid-based multiple-source regional model for analysis of regional haze. In pursuit of those objectives, a program of careful field experiments and data analysis must be designed and conducted to support the use of aerosol process models (for example, to collect data on the degree of uptake of water by airborne particles) and to better characterize emission sources (to measure the chemical composition and size distribution of primary particles at their source in addition to the gaseous precursor emissions). Finally, experimental programs must be designed and conducted to test the performance of completed models of all kinds against field observations on emissions and air quality.

×
Page 173
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 174
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 175
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 176
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 177
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 178
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 179
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 180
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 181
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 182
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 183
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 184
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 185
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 186
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 187
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 188
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 189
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 190
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 191
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 192
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 193
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 194
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 195
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 196
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 197
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 198
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 199
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 200
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 201
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 202
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 203
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 204
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 205
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 206
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 207
Suggested Citation:"5 Source Identification and Apportionment Methods." National Research Council. 1993. Protecting Visibility in National Parks and Wilderness Areas. Washington, DC: The National Academies Press. doi: 10.17226/2097.
×
Page 208
Scenic vistas in most U.S. parklands are diminished by haze that reduces contrast, washes out colors, and renders distant landscape features indistinct or invisible.

Protecting Visibility in National Parks and Wilderness Areas describes the current understanding of the nature and extent of haze in various regions of the United States. The book addresses the scientific and legal framework of efforts to protect and improve visibility, as well as methods for assessing the relative importance of anthropogenic emission sources that contribute to haze in national parks and for considering various alternative source control measures.

The volume provides guidance on how to make progress toward the national goal of correcting and preventing visibility impairment due to human activities affecting large national parks and wilderness areas.
