
B
Overview of Atmospheric Transport and Dispersion Modeling

Summary of a presentation by Steven Hanna, George Mason University/Harvard School of Public Health

An overview is given of the history and current status of atmospheric transport and dispersion models applied to C/B/N releases. The discussion includes the questions being asked of models, the history and types of models, links to meteorological inputs, evaluations with field data, uncertainties, and future systems and research needs.

Models are being applied in real time, in historical mode, and in planning mode to address the following types of concerns. In real time, for a known C/B/N release, what areas should be evacuated or what other precautions should be taken? Alternatively, for an unknown C/B/N release but with observed concentrations, what are the location and magnitude of the release(s)? For historical analysis, what was the dose for past C/B/N releases (e.g., Khamisiyah, Bhopal, World War I)? For planning analysis, what are the typical impacts of expected C/B/N release scenarios?

Experience shows that transport and dispersion research is driven by major events or step changes rather than by long-term planning. Examples of major events are the use of CB agents in World Wars I and II, the nuclear tests of the 1950s, the 1968 Clean Air Act and its 1990 amendments passed by the U.S. Congress, the discovery of acid lakes in the 1970s, the discovery of the ozone hole in the 1980s, the Bhopal chemical accident, the Chernobyl nuclear plant accident, the Gulf War, the Japanese subway chemical agent release, and the September 11, 2001, terrorist attacks.

BRIEF HISTORY OF TRANSPORT AND DISPERSION RESEARCH

The fundamental problem in any transport and dispersion exercise is that, no matter what model is used, the turbulence must somehow be parameterized. This has been a central theme of research over the past 80 years, beginning with Richardson and Taylor's fundamental studies. Transport and dispersion model research was funded by C/B/N concerns for several decades (e.g., the Pasquill and Calder studies in the 1940s, 1950s, and 1960s, and the Porton Down and Prairie Grass field experiments in the 1950s). There were extensive classified studies in the United States, since there was a C/B/N offensive program through the Vietnam War. Large field experiments were conducted in many types of geographic locations, such as urban areas (Fort Wayne) and coastal zones (Cape Canaveral and Vandenberg Air Force Base). At the Department of Energy national labs and NOAA, research was carried out in the 1950s and 1960s on models for nuclear releases, fallout, and source estimation.
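As a concrete statement of the turbulence closure problem noted above, the K-theory (eddy diffusivity) models discussed in the next section solve the standard advection-diffusion equation for the mean concentration C, in which all unresolved turbulent mixing is folded into the diffusivities:

$$
\frac{\partial C}{\partial t} + \bar{u}\,\frac{\partial C}{\partial x} =
\frac{\partial}{\partial y}\!\left(K_y\,\frac{\partial C}{\partial y}\right) +
\frac{\partial}{\partial z}\!\left(K_z\,\frac{\partial C}{\partial z}\right) + S
$$

Here $\bar{u}$ is the mean wind, $S$ is the source term, and $K_y$, $K_z$ are the eddy diffusivities. Choosing $K_y$ and $K_z$ (or the equivalent Gaussian-plume spreads $\sigma_y$, $\sigma_z$) for a given surface and stability is precisely the parameterization problem that has driven the field since Richardson and Taylor.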

Over the past 20-30 years, as a result of the Clean Air Act, the research emphasis switched to EPA pollutants (e.g., SO2) and concerns (e.g., industrial point sources, mobile sources, acid rain, regional ozone precursors, particles, and toxics). Many large EPA field experiments (e.g., the St. Louis Regional Air Pollution Study and the Complex Terrain Tracer Studies) took place, and model development efforts were conducted, leading, for example, to the Models-3 regional modeling system and the AERMOD short-range model. Many urban- to regional-scale field experiments have addressed the ozone issue and, more recently, fine particles and potentially toxic chemicals. The past five years have seen a switch back to DOD and DOE, with most of the new model development and the new field experiments being supported with C/B/N concerns in mind.

The types of transport and dispersion models have evolved over the past 50-60 years, beginning with the analytical models (Gaussian, similarity, K) or nomograms used through the 1960s. In the 1970s, the focus switched to computer solutions of Gaussian plumes or of three-dimensional grid models involving the eddy diffusivity, K. The 1980s saw the development of Lagrangian puff models and one-dimensional time-dependent slab models, as well as improvement of three-dimensional Eulerian models (but with few grid nodes). Gaussian models were adapted to account for Monin-Obukhov and convective similarity, and advances were made in large eddy simulations and concentration fluctuations. In the 1990s, there were great advances in three-dimensional Eulerian models linked with numerical weather prediction (NWP) models (e.g., the EPA's Models-3 system), and algorithms were improved in Gaussian-Lagrangian puff models. So far in the 2000s, we have seen an increase in studies with computational fluid dynamics (CFD) models, in linked emissions-meteorology-dispersion-exposure-risk systems, and in improved algorithms in Gaussian-plume models for building downwash and for concentration fluctuations.

There have always been strong links between meteorology and transport and dispersion models. Early models used a single meteorological monitor for input (e.g., an NWS airport site or an on-site tower). The 1970s and 1980s saw the addition of diagnostic meteorological models, which interpolate among several observing sites and add a mass conservation constraint (e.g., Lawrence Livermore National Laboratory [LLNL] MATTHEW, EPA CALMET). In the 1990s, methods were devised to accommodate NWP model outputs (although the grids were coarse and the NWP models could not be run in real time). The 2000s have seen improved grid resolution of NWP models and improved computer speed, which have allowed real-time linked NWP and dispersion models (e.g., RAMS or Eta with HYSPLIT, MM5 with CMAQ as part of Models-3, COAMPS with NARAC). Examples of current C/B/N models include HYSPLIT and CAMEO/ALOHA from NOAA, NARAC from DOE/LLNL, HPAC from the Defense Threat Reduction Agency (DTRA), VLSTRACK from the Navy, MIDAS-AT from the Marines, the Joint Effects Model (JEM), the CATS-JACK model being developed by many agencies, and CFD models being experimented with by many groups.
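To make the classical Gaussian-plume formula that underlies the analytical and short-range models above concrete, the following is a minimal sketch in Python. The linear power-law dispersion curves and their coefficients a and b are illustrative placeholders, not an actual Pasquill-Gifford or Monin-Obukhov parameterization, and the function name is ours, not that of any model listed above.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Steady-state Gaussian plume concentration (g/m^3).

    x, y, z : downwind, crosswind, vertical receptor coordinates (m)
    Q : continuous emission rate (g/s)
    u : mean wind speed at release height (m/s)
    H : effective release height (m)
    a, b : illustrative coefficients for simple linear dispersion
           curves sigma_y = a*x and sigma_z = b*x (a real model would
           use stability-dependent parameterizations instead).
    """
    sigma_y = a * x
    sigma_z = b * x
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # The vertical term adds an image source reflecting off the ground.
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: ground-level centerline concentration 500 m downwind of a
# 10 g/s release at 20 m height in a 5 m/s wind.
print(gaussian_plume(x=500.0, y=0.0, z=0.0, Q=10.0, u=5.0, H=20.0))
```

In this view, much of the model evolution described above, from stability classes through similarity theory, puff models, and CFD, amounts to progressively better ways of specifying sigma_y and sigma_z or of replacing the closed-form plume altogether.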
Emergency response models have been needed at all times. Examples include the Air Force's OBDG and AFTOX models from the 1960s and 1970s; the proprietary SAFER model system (including on-site meteorological instruments, dedicated computers, training, and automatic alarms) sold to hundreds of chemical plants in the 1980s; the DOE LLNL MATTHEW-ADPIC system, which was originally designed for nuclear facilities and recently has been transformed into the ADAPT-LODI component of NARAC for C/B/N releases; the NOAA CAMEO/ALOHA system, in wide use by fire departments and first responders to chemical accidents; DTRA's HPAC model and the Navy's VLSTRACK model for military applications; and NOAA's Eta-HYSPLIT model system for general purposes.

BRIEF HISTORY OF FIELD EXPERIMENTS AND MODEL EVALUATION

There has been a long history of evaluations of models against field observations. Prior to 1980, the most useful tracer experiment was the 1956 Prairie Grass study of short-range dispersion from continuous near-ground releases over flat terrain. Similar experiments took place over flat terrain, along with some urban field studies, such as the Fort Wayne study. All of these early studies were sponsored by DOD with C/B/N scenarios in mind. In the 1980s, EPA, DOE, and industrial groups such as EPRI sponsored several complex terrain field studies, some mesoscale to regional tracer experiments (e.g., CAPTEX and ANATEX), a few extensive tall-stack studies (Kincaid, Bull Run, Indianapolis), and regional acid rain field experiments. In the 1990s, EPA interest focused on regional ozone studies; a few DOD mesoscale tracer studies took place, such as DP26 and OLAD; and DTRA sponsored the Phase I study of ensembles of puffs. The past two years have seen an emphasis on DOD and DOE studies of releases in urban areas and obstacle arrays (e.g., MUST, Salt Lake City URBAN 2000, and the planned OKC-2003).

Evaluations of air quality models usually involve statistical methods such as the BOOT and ASTM software. It is found that a "good model" has a relative mean bias of about 20 or 30 percent and a scatter (normalized root-mean-square error) of about a factor of 2. Most air quality models predict the ensemble mean value and not the fluctuations. An exception is HPAC, which also predicts fluctuations using standard methods from the literature. Because of the relatively large uncertainty in model predictions, the question arises of how we should inform emergency responders and other decision makers of uncertainties and of the need to consider probabilistic predictions. The study of model sensitivity and uncertainty is an expanding research area, involving methods such as probabilistic Monte Carlo uncertainty analysis.

EXPECTATIONS OF FUTURE RESEARCH

Future systems are expected to involve real-time linked source emissions modules, meteorological modules, transport and dispersion modules, and exposure and risk modules. There is a need for efficiently communicating data and model predictions across large distances (e.g., from a modeling center to a battlefield or an emergency location). Much more work is anticipated on inverse modeling, or source-finding, where observations are used to triangulate to the location and magnitude of a release. The accelerated studies of CFD models should produce data sets for analysis and parameterization. Research needs also include better parameterizations of mean flow vectors and turbulence in the lowest 2 km for all time periods and surface types, improved methods of real-time modeling using limited inputs, development of criteria for the best expected agreement between models and observations, and optimization of methods to use new remote data systems. Simple illustrative sketches of the evaluation statistics, Monte Carlo uncertainty propagation, and inverse source estimation discussed above follow.
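The "good model" criteria quoted in the model evaluation discussion correspond to standard BOOT-style performance measures. Below is a minimal sketch, assuming paired observed and predicted concentrations; FB, NMSE, and FAC2 follow the standard definitions from the model evaluation literature, while the helper name and the synthetic data are invented for illustration.

```python
import numpy as np

def evaluation_stats(obs, pred):
    """BOOT-style statistics for paired observed/predicted values.

    Returns the fractional mean bias (FB), the normalized mean-square
    error (NMSE), and the fraction of predictions within a factor of 2
    of the observations (FAC2).
    """
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    ratio = pred / obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    return fb, nmse, fac2

# Synthetic data: a model that scatters around the observations.
rng = np.random.default_rng(0)
obs = rng.lognormal(mean=0.0, sigma=1.0, size=200)
pred = obs * rng.lognormal(mean=0.1, sigma=0.5, size=200)
fb, nmse, fac2 = evaluation_stats(obs, pred)
# Benchmarks quoted in the text: a relative mean bias of about 20-30
# percent (|FB| of roughly 0.2-0.3) and scatter of about a factor of 2.
print(f"FB={fb:.2f}  NMSE={nmse:.2f}  FAC2={fac2:.2f}")
```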
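The probabilistic Monte Carlo uncertainty analysis mentioned above can be illustrated in a few lines: sample the uncertain inputs from assumed distributions (the ranges below are invented for illustration), push each sample through a simple forward model, and report the spread of the predictions rather than a single deterministic value.

```python
import numpy as np

def centerline_conc(Q, u, x=500.0, H=20.0, a=0.08, b=0.06):
    """Ground-level centerline Gaussian-plume concentration, using the
    same simplified closure as the earlier sketch in this appendix."""
    sy, sz = a * x, b * x
    return (Q / (2.0 * np.pi * u * sy * sz)
            * 2.0 * np.exp(-H**2 / (2.0 * sz**2)))

# Monte Carlo propagation of input uncertainty: the emission rate and
# wind speed distributions below are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 10_000
Q = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # g/s
u = rng.uniform(2.0, 8.0, size=n)                         # m/s
c = centerline_conc(Q, u)
p5, p50, p95 = np.percentile(c, [5, 50, 95])
print(f"median={p50:.2e}  90% interval=[{p5:.2e}, {p95:.2e}] g/m^3")
```

A probabilistic summary of this kind (a median and an interval rather than a single number) is one way to convey model uncertainty to emergency responders and other decision makers.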
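Finally, a toy sketch of the inverse modeling, or source-finding, problem, assuming a handful of ground-level sensors and the same simplified plume closure as above. Because the predicted concentrations are linear in the emission rate Q, each candidate source location yields a best-fit Q by linear least squares, and the location itself can be found by grid search. Operational inverse systems use far more sophisticated Bayesian or adjoint methods; the sensor layout and noise model here are invented for illustration.

```python
import numpy as np

def unit_plume(dx, dy, a=0.08, b=0.06, u=5.0, H=20.0):
    """Ground-level concentration per unit emission rate at a receptor
    dx m downwind and dy m crosswind of a candidate source."""
    dx_safe = np.maximum(dx, 1.0)      # avoid the singularity at x = 0
    sy, sz = a * dx_safe, b * dx_safe
    c = (1.0 / (2.0 * np.pi * u * sy * sz)
         * np.exp(-dy**2 / (2.0 * sy**2))
         * 2.0 * np.exp(-H**2 / (2.0 * sz**2)))
    return np.where(dx > 0.0, c, 0.0)  # no impact upwind of the source

# Synthetic "observations": a 10 g/s release at (0, 0) sampled by a few
# downwind sensors, with multiplicative noise.
sensors = np.array([[300.0, 50.0], [500.0, -80.0],
                    [800.0, 0.0], [1200.0, 150.0]])
rng = np.random.default_rng(2)
obs = 10.0 * unit_plume(sensors[:, 0], sensors[:, 1])
obs *= rng.lognormal(0.0, 0.1, size=obs.size)

# Grid search over candidate source locations; for each candidate, the
# best-fit emission rate follows from linear least squares because the
# predicted field is linear in Q.
best = (np.inf, None, None)
for x0 in np.linspace(-400.0, 400.0, 41):
    for y0 in np.linspace(-200.0, 200.0, 21):
        f = unit_plume(sensors[:, 0] - x0, sensors[:, 1] - y0)
        if not np.any(f > 0.0):
            continue
        Q = np.dot(obs, f) / np.dot(f, f)
        err = np.sum((obs - Q * f) ** 2)
        if err < best[0]:
            best = (err, (x0, y0), Q)
print(f"estimated source: x0={best[1][0]:.0f} m, "
      f"y0={best[1][1]:.0f} m, Q={best[2]:.2f} g/s")
```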