Fallout and Tools for Calculating Effects of Release of Hazardous Materials
NUCLEAR BURSTS AND FALLOUT OVERVIEW
All nuclear bursts produce radioactive fission debris in the form of gases that eventually condense into aerosols. But a burst near or under the ground surface will also entrain dust ejected from the associated crater or kicked up by the blast wave. Some of this dust will scour the fission debris and become radioactive. After being lofted by the rising fireball, the dust begins to fall out of the cloud, resulting in a fallout pattern on the ground. The size and intensity of the fallout pattern are heavily influenced by the weapon yield and height or depth of burst and by the meteorological conditions both at the surface and at altitude. There are several critical parameters in determining the fallout pattern: the yield of the weapon, the fraction of the yield derived from fission (versus fusion), the height or depth of burst, the size distribution of the entrained dust, the base surge or material lofted by ground motion (for explosions below the surface), and profiles of atmospheric conditions. The water and ice loading depend critically on the amount of ambient moisture in the atmosphere.
There are four heights of burst (HOBs)/depths of burst (DOBs) of interest for this study of nuclear weapons: (1) a surface, or contact, burst; (2) a low-altitude (below the fallout-free limit) airburst; (3) an airburst at the fallout-free height of burst; and (4) a subsurface burst.
The cloud from a 500 ton surface burst could rise to a few kilometers, whereas that from a 1 megaton burst would stabilize in the stratosphere with the top around 20 kilometers. A 500 ton surface burst would loft about 500 tons of dust that would be contaminated by the fission debris, whereas a 1 megaton burst would loft 300,000 tons. The amount of dust contaminated and lofted falls off rapidly as the height of burst is increased, primarily because above about 7 m/kt^(1/3) there is no crater.
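The cratering cutoff just quoted follows the standard cube-root yield scaling. A minimal sketch (the function names are ours; the 7 m/kt^(1/3) threshold is the approximate value quoted above):

```python
def scaled_hob_m_per_cuberoot_kt(hob_m: float, yield_kt: float) -> float:
    """Scaled height of burst in m/kt^(1/3), using cube-root yield scaling."""
    return hob_m / yield_kt ** (1.0 / 3.0)

def forms_crater(hob_m: float, yield_kt: float, threshold: float = 7.0) -> bool:
    """Per the text, above about 7 m/kt^(1/3) essentially no crater forms."""
    return scaled_hob_m_per_cuberoot_kt(hob_m, yield_kt) < threshold

# A 1 Mt (1,000 kt) burst at 50 m: scaled HOB = 50/10 = 5 m/kt^(1/3) -> cratering
print(forms_crater(50.0, 1000.0))   # True
# The same burst at 100 m: scaled HOB = 10 m/kt^(1/3) -> no significant crater
print(forms_crater(100.0, 1000.0))  # False
```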
NOTE: The sections of this chapter addressing fallout are an amalgam of information from the following: The Effects of Nuclear Weapons, compiled and edited by Samuel Glasstone and Philip J. Dolan, 1977, U.S. Government Printing Office, Washington, D.C., §2.18–2.21, §2.23–2.31, §2.91–2.95, §2.99–2.100; A Study of Base Surge Phenomenology, Kaman Science Corporation, November 5, 1986; and The DTRA Nuclear Weapons Effects Technology Information (NWETI) Fallout Guide, 2004, by Joseph McGahan. They also contain material developed from discussions with members of the committee and presentations to the committee by David Bacon and Joseph McGahan, both of Science Applications International Corporation.
A burst at or above the fallout-free height of burst, as its name implies, produces aerosolized fission debris but no large particles, because the surface material does not mix with the remnants of the nuclear detonation. With high relative humidity, moisture in the cloud could form rain, ice, or snow that could scavenge the fission debris. Scavenging could also occur if the nuclear cloud encountered an ambient rain cloud. This height of burst increases with the yield of the weapon but is roughly 55 m/kt^0.4 (~870 meters for a 1 megaton burst). Even above this height of burst, there can still be fallout arising from a variety of mechanisms. The potential for long-term fallout on a global basis is well recognized from atmospheric tests, although dose-rate levels would be much lower than those from the localized fallout from surface or subsurface bursts, with a potential impact on latent cancer rates.
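As a quick check of this scaling (the coefficient and exponent are the approximate values quoted above):

```python
def fallout_free_hob_m(yield_kt: float) -> float:
    """Approximate fallout-free height of burst, ~55 m/kt^0.4 per the text."""
    return 55.0 * yield_kt ** 0.4

print(round(fallout_free_hob_m(1000.0)))  # ~872 m for 1 Mt, matching the ~870 m above
print(round(fallout_free_hob_m(1.0)))     # 55 m for 1 kt
```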
A subsurface burst introduces additional processes for mixing of fission debris and dust. A main “mushroom” cloud like that from a surface burst is formed if the subsurface burst is not too deep (Figure 5.1); a low-level base surge is formed as the shock wave breaks through the surface (Figure 5.2). Thus, the subsurface burst may create two radioactive clouds that do not merge. As the depth of burst increases, the amount of activity vented to the atmosphere is reduced until the burst is completely contained (about 70 meters for a 1 kiloton burst), provided that the emplacement hole through which the device has been lowered has been carefully stemmed, which of course would not be the case for a weapon used to attack a target.
The primary short-term hazard from fallout is the external dose received from gamma rays. In addition, an internal dose can be received through inhalation or ingestion of debris emitting beta and alpha radiation. Under most conditions, the external dose is the more dangerous, but the internal dose could have latent effects, specifically cancer.
The total gamma ray activity in a measured fallout pattern is usually stated in terms of exposure rate in roentgens per hour (R/h) at 1 hour after the burst as if that activity were uniformly spread over an area of 1 square kilometer. For a 1 kiloton surface burst, this value is on the order of 9,000 R/h per square kilometer. Of course, the area of the fallout pattern is much larger than 1 square kilometer. The area covered by the associated dose that would cause at least a 50 percent probability of fatality is roughly 2.6 square kilometers per kiloton, assuming that people are in the open and exposed for just the first day after the burst. Over time the radioactivity decays, and the fallout hazard diminishes. Some radionuclides, such as cesium-137, have long half-lives, but most of the hazard is due to short-lived radionuclides. Between 1 and 24 hours after the burst, for example, the total gamma ray activity decreases by a factor of about 60.
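This decay is often approximated by the Way-Wigner t^-1.2 rule for fission-product dose rate (a standard approximation, not a figure from this study); over the 1-to-24-hour interval it gives a factor of roughly 45, the same order as the factor of ~60 quoted above for total activity:

```python
def dose_rate_relative(t_hours: float) -> float:
    """Dose rate relative to the 1-hour reference, Way-Wigner t^-1.2 approximation."""
    return t_hours ** -1.2

factor_1h_to_24h = dose_rate_relative(1.0) / dose_rate_relative(24.0)
print(round(factor_1h_to_24h))  # 45 -- same order as the ~60 quoted for total activity
```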
The following sections focus on the problem of fallout associated with surface and subsurface bursts.
Nuclear clouds contain three classes of materials—radioactive materials, dust and pebbles, and water and ice. The first is by far the smallest contributor to the mass of the cloud, but it represents the greatest residual hazard and therefore is of prime importance. The radioactive materials consist of fission products from the weapon, as well as activation products produced by the interaction of the weapon’s energetic radiation with the surrounding and/or nearby materials. When this radioactive material becomes attached to larger particles, it can result in fallout.
The dynamic evolution of the nuclear cloud starts with the ejection and then lofting of surface material. The cloud rises, entraining ambient air and moisture, cooling in the process. This cooling causes vaporized materials to condense on nonvaporized residual material and causes these molten particles to coagulate into larger sizes. As the rising fireball and nuclear cloud continue to rise and entrain ambient
air, at some point the fireball becomes cool enough that moisture begins to condense, releasing latent heat into the cloud. This makes the cloud more buoyant, leading to an increase in the updraft velocity, which leads to additional condensation, which in turn causes the agglomeration of wet particles. The turbulence of the cloud mixes the material composing the cloud. In addition, the large-scale circulation of the atmosphere drives the long-range transport of the cloud.
The larger particles in the nuclear cloud cannot remain in suspension, and hence they settle to the ground. For dry particles, this settling is called dry deposition. The process by which small particles are captured by precipitation that falls to the ground is called precipitation scavenging. Both can result in the surface deposition of radioactivity—that is, fallout.
For a low airburst, the dust mass loading in the nuclear cloud at about 10 minutes after the burst, when the cloud is stabilized, is roughly 0.1 Mt of dust per Mt of yield; the cloud from a surface burst typically contains three times this dust mass. The cloud consists of particles ranging in size from the very small ones (submicron to micron sized) produced by condensation as the fireball cools, to larger, surface-material particles (10 to 500 micron sized) raised by the blast winds and the convectively driven afterwinds, to very large particles (up to millimeter sized) ejected from the crater. The radioactive cloud reaches a height of several miles before spreading out abruptly into a mushroom shape.
Dry Deposition and Precipitation Scavenging
Dry deposition and precipitation scavenging describe the processes resulting in the removal of radioactivity from the nuclear cloud and its deposition on the ground. Precipitation scavenging due to rainfall may produce “hot spots.”
As mentioned earlier, fallout is a gradual phenomenon extending over a period of time. In the 15 megaton Bravo test shot at Bikini Atoll in 1954, for example, about 10 hours elapsed before the contaminated particles began to fall at the extremities of the 7,000 square mile contaminated area. The sizes of these particles ranged from that of fine sand (i.e., approximately 100 micrometers in diameter or smaller) in the more distant portions of the fallout area, to pieces about the size of a marble (i.e., roughly 1 centimeter in diameter and even larger) close to the burst point.
Particles in this size range arrive on the ground within 1 day after the explosion and will not have traveled too far (e.g., up to a few hundred miles) from the region of the burst, depending on the wind. Thus, the fallout pattern from particles of visible size is established within about 24 hours after the burst. This is referred to as early fallout, sometimes also called local or close-in fallout. In addition, there is the deposition of very small particles, which descend very slowly over large areas of the ground surface. This is the delayed (or worldwide) fallout, to which residues from nuclear explosions of various types—air, high-altitude, surface, and shallow subsurface—may contribute. The size distribution of the fallout particles is critical to the determination of the level and geographical extent of the contamination due to fallout.
The details of the fallout are determined primarily by the radioactive materials inventory (which, in turn, is related to the device design and yield and the height or depth of burst), the amount of entrained mass, the injection height distribution, the particle size distribution, and the atmospheric transport—first locally and regionally, but ultimately including the global transport of residual radioactivity. Surface geology is critical to fallout. The prediction of fallout for shallow buried bursts is uncertain because, of the many underground tests performed by the United States, only three were at depths shallower than 20 scaled meters, and none was in rock. Therefore, confidence in predicting fallout from low-yield shallow and near-surface bursts in dry soil is higher than that for predicting fallout from such bursts in rock. Current targets of special interest are in mountainous terrain and urban environments. The yields of the tests at the Nevada Test Site were typically in the range of a few kilotons to a few tens of kilotons. While such yields were of less interest when strategic yields were large (hundreds to thousands of kilotons), they are the yields of interest for the current study.
The extent and nature of the fallout can vary widely. The actual situation is determined by a combination of circumstances associated with the energy spectrum, the yield and design of the weapon, the height of the explosion, the nature of the surface beneath the point of burst, and the meteorological conditions. In an airburst, for example, occurring at an appreciable distance above the ground surface so that no large amounts of surface materials are sucked into the cloud (i.e., at or above the fallout-free HOB), the contaminated particles become widely dispersed. The magnitude of the hazard from fallout is thus far less than if the explosion were a surface burst. Thus, at Hiroshima (height of burst, 510 meters; yield, about 12.5 kilotons) and Nagasaki (height of burst, 500 meters; yield, about 22 kilotons), injuries due to fallout were completely absent.
On the other hand, a nuclear explosion occurring at or near the ground surface can result in severe contamination by the radioactive fallout. From the 15 megaton Bravo shot, the fallout caused substantial contamination over an area of more than 7,000 square miles. The contaminated region was roughly cigar-shaped and extended more than 20 miles upwind and more than 350 miles downwind. The width in the crosswind direction was variable, the maximum being greater than 60 miles. (It should be noted that the upwind contribution arises from the spreading of the nuclear cloud that occurs when the cloud hits the tropopause. In this extreme case, this process created a roughly circular cloud extending some 20 to 30 miles in all directions, owing to the fact that the convective velocities overshadowed the ambient wind speed.)
Time Domain of Physical Effects
There are roughly three phases in the generation of a radioactive cloud, with corresponding time intervals in which the fallout phenomena can occur:
The early time (less than 20 seconds). In this first phase, ejecta and entrained sweep-up (dirt, vegetation, and rubble) mix with the fission debris in the fireball.
The rise and stabilization phase (~10 seconds to 10 minutes). In this second phase, the cloud rises and early fallout is produced. These effects are dependent on the local atmospheric profile, including temperature, relative humidity, and winds.
The late time (10 minutes to 2 days). Wind transport and diffusion, as well as particle size, are major factors for the precipitation scavenging that occurs during this third phase.
The meteorological conditions that determine the shape, extent, and location of the fallout pattern from a nuclear explosion are the height of the tropopause, atmospheric winds, and the occurrence of precipitation. For a given explosion energy yield, type of burst, and tropopause height, the fallout pattern is affected mainly by the directions and speeds of the winds over the fallout area, from the ground surface to the top of the stabilized cloud, which may be as high as 100,000 feet. Furthermore, variations in the winds—from the time of burst until the particles reach the ground perhaps several hours later—affect the fallout pattern following a nuclear explosion.
Shallow Underground Explosion Phenomena
When a nuclear weapon is exploded underground, a sphere of extremely hot, high-pressure gases, including vaporized weapon residues and rock, is formed. This effect is the equivalent of the fireball from an airburst or surface burst. The rapid expansion of the gas bubble initiates a ground shock wave that travels in all directions away from the burst point. When the upwardly directed shock (compression) wave reaches the surface, it is reflected back as a rarefaction (or tension) wave. If the tension exceeds the tensile strength of the surface material, the upper layers of the ground spall (i.e., split off into more-or-less horizontal layers), and these layers move upward.
When it is reflected back from the surface, the rarefaction wave travels into the ground toward the expanding gas sphere (or cavity) produced by the explosion. If the detonation is not at too great a depth, this wave may reach the top of the cavity while the cavity is still growing. The resistance of the ground to the upward growth of the cavity is thus decreased, and the cavity expands rapidly in the upward direction. The expanding gases and vapors can thus supply additional energy to the spalled layers so that their upward motion is sustained for a time or even increased.
As the ground surface moves upward, the resulting mound continues to increase in height, and cracks form through which the cavity gases vent to the atmosphere. The mound then disintegrates completely, and the rock fragments are thrown upward and outward. Subsequently, much of the ejected material collapses and falls back, partly into the newly formed crater and partly onto its surrounding “lip.” The material that immediately falls back into the crater is called the fallback, whereas that descending on the lip is called the ejecta. The size of the remaining (or “apparent”) crater depends on the energy yield of the detonation and on the nature of the excavated medium. In general, for equivalent conditions, the volume of the crater is roughly proportional to the yield of the explosion. The material that is not fallback or ejecta becomes part of the base surge.
Part of the energy released by the weapon in a shallow underground explosion appears as an air-blast wave. The fraction of the energy imparted to the air in the form of a blast depends primarily on the depth of burst for the given total energy yield. The greater the depth of burst, the smaller, in general, is the proportion of shock energy that escapes into the air. For a sufficiently deep explosion, there is, of course, no blast wave.
Generation of a Base Surge
The base surge begins to form as the crater growth stops and entrained material in the column rising up to the main cloud begins to fall and expand radially along the ground surface. After the initial period of rapid radial growth, the base surge assumes a toroidal shape with a central depression. Throughout
this period, additional material is propagated into the base surge from the smoke crown and central jet of the explosion (see Figure 5.2).
The base surge continues to grow vertically and radially along the ground surface until it reaches equilibrium with the surrounding air. Throughout the development of the base surge, its bulk density decreases as the surging flow expands and mixes with the surrounding air, while the entrained soil material settles to the ground surface. As the base surge reaches its stabilized geometry, it is relatively tenuous and remains airborne for a considerable period of time.
Experimental evidence indicates that the relative magnitude of the two stabilized clouds and the activity partition between these clouds is strongly dependent on the scaled depth of burst. The height of the stabilized main cloud depends on the initial energy transferred to the air and its mass loading. In general, however, the main cloud rises to an appreciable altitude, and it could present a serious radiological hazard at large distances from the burst point. The final distribution of the radioactive fallout from the main cloud depends on the features of the terrain and the local wind conditions at the time of detonation. Since the base surge cloud remains relatively close to the ground surface, the local winds are not expected to have a significant effect on the geometry of the base surge radioactivity. Therefore, a large fraction of the base surge cloud’s residual activity is expected to fall within a relatively short distance of ground zero. For depths of burst of 2 to 3 scaled meters, the fraction of activity in the base surge is typically less than a few percent of the total activity.
CALCULATING THE EFFECTS OF RELEASE OF HAZARDOUS MATERIALS
The calculation of the effects of the release of a chemical or biological agent or of nuclear radioactivity can be broken down into several stages: (1) The first stage is the determination of a source term—specifically, how much agent or radioactive material is released at some initial time at some location. This requires estimating what is in the target being attacked, which in many cases is uncertain. The source could be a nuclear explosion or the intentional or accidental release of a chemical or biological agent. (2) In the case of a nuclear explosion, the next stage is the estimation of deaths and serious injuries due to prompt (immediate) air-blast effects, thermal effects, and initial nuclear radiation. (3) The third stage addresses atmospheric transport and the resulting potential for adverse health effects. Radioactive material or chemical/biological agent released into the atmosphere is transported by winds until it either settles to the ground or is rained out. The transport also depends on the details of the terrain, and the material can be affected by temperature, radiation from the explosion, sunlight, atmospheric stability, and so on, during its transport. When the material eventually reaches the ground, it can be inhaled or absorbed through the skin of exposed individuals, and the total dose per individual can lead to adverse health effects (and death for large doses).
A number of computational tools have been assembled over the years to calculate the sequence of events listed above for nuclear explosions or other releases of radioactivity. Substantial efforts to validate the tools against experimental data have met with reasonably good success. With the heightened concern over chemical and biological agents during the past decade, many of the tools assembled for calculating the effects of nuclear radioactivity have been adapted to situations involving chemical and biological agents, and there has been an attempt to create a base of experimental data against which to validate the tools. Although this effort has met with some success, the results are much more fragmentary and uncertain than in the nuclear case.
In the following sections, the committee outlines the principal tools employed for the computer programs used to model the effects of the release of hazardous materials into the atmosphere and gives some assessment of the uncertainties involved at each stage of the calculation.
TOOLS TO ESTIMATE CASUALTIES AND OTHER EFFECTS
Two computer programs are in wide use to model the effects of the release of hazardous chemical, biological, radiological, and nuclear (CBRN) materials into the atmosphere and their collateral effects on civilian and military populations. The Hazard Prediction and Assessment Capability (HPAC) code was developed by the Defense Threat Reduction Agency (DTRA) and its predecessors to analyze nuclear, chemical, and biological releases for military studies and operational planning. Lawrence Livermore National Laboratory (LLNL) uses the NUKE code from Sandia National Laboratories to model prompt (immediate) effects from a nuclear explosion and the K-Division Defense Nuclear Agency Fallout Code (KDFOC) to analyze the spread of radioactivity. KDFOC was developed to model fallout from Plowshare tests, which involved nonweapon nuclear explosions designed to produce craters with minimal fallout. Parameters are adjusted to fit the available data from nuclear tests at the Nevada Test Site.
Operational Planning Tools of the Defense Threat Reduction Agency
The Department of Defense uses primarily the DTRA-developed HPAC for estimating effects resulting from a release of CBRN materials. HPAC predicts releases of hazardous material into the atmosphere, downwind transport, and the impact on civilian and military populations. HPAC is composed of a variety of software modules handling four principal functions: (1) generation of information on the source term such as type of hazardous agent, agent mass, release rate, and geometry; (2) weather prediction and data retrieval; (3) determination of source term transport and dispersion code based on the Second-Order Closure Integrated Puff (SCIPUFF) model; and (4) prediction/indication of human effects and casualties. HPAC was designed to be fast running and to be used on a laptop computer with no connection to outside resources.
Using a stochastic simulation process, HPAC and KDFOC predict the dispersal and effects of CBRN hazardous materials based on three general tool components: for hazard sources, weather and transport, and effects. The source term component estimates the initial cloud dimensions, particle size distribution, and radiation attachment. The transport and dispersion component uses historical, observed, or forecast weather data to estimate the dose rate. Finally, the effects component integrates the dose rate to create hazard plots, casualty tables, population distribution, and prompt-effect circles for nuclear incidents.
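The three-component structure can be sketched schematically. None of this is actual HPAC or KDFOC code: every function name, return value, and number below is an illustrative placeholder standing in for the real physics models.

```python
# Schematic sketch of the source -> transport -> effects pipeline described above.
# All values are made up for illustration; only the structure is meaningful.

def source_term(yield_kt: float, hob_m: float) -> dict:
    """Component 1: initial cloud description (placeholder values)."""
    return {"cloud_top_m": 1000.0 * yield_kt ** 0.25,
            "particle_sizes_um": [1.0, 10.0, 100.0],
            "activity": 9000.0 * yield_kt}  # R/h at H+1, per the text's 1-kt figure

def transport(source: dict, wind_m_s: float) -> dict:
    """Component 2: advect the cloud with weather data to get dose rates."""
    # Placeholder: activity simply diluted with downwind distance
    return {"dose_rate_at_10km": source["activity"] / 100.0}

def effects(dose_rates: dict, hours_exposed: float) -> float:
    """Component 3: integrate dose rate into a dose for hazard/casualty estimates."""
    return dose_rates["dose_rate_at_10km"] * hours_exposed

dose = effects(transport(source_term(1.0, 0.0), wind_m_s=5.0), hours_exposed=2.0)
print(dose)  # 180.0 in these made-up units
```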
Attachment 5.1 in this chapter provides descriptions of the following elements of the DTRA-developed codes and data:
Source term codes
Chemical and biological sources
Nuclear and radiological sources
Transport and dispersion codes
HPAC SCIPUFF model
Personnel, health, and environmental effects codes
Tools Used by Lawrence Livermore National Laboratory
The Lawrence Livermore National Laboratory has been engaged in modeling the effects of nuclear explosions since the 1960s. As with the HPAC code described above, many of the tools are under continuing development, motivated in part by advances in computer power that allow more complex calculations to be performed in a short time. In addition, the recent need to estimate the effects of a chemical or biological incident has introduced new phenomena that have to be included in the models.
The calculations requested for this study were performed with KDFOC4, the 1990s version of the LLNL fallout model. KDFOC was originally written to model fallout from Plowshare tests, which involved nuclear explosives designed to produce craters with minimum fallout. Consequently, the code’s emphasis has been on surface and buried explosives, and parameters have been adjusted to fit the data available from nuclear tests at the Nevada Test Site and the Pacific Proving Ground.
Following an explosion, KDFOC4 defines an effective nuclear mushroom cloud, characterized by the distribution of activity and particles as a function of particle size and altitude. The computer model accepts a description of the wind field and then slices the effective cloud into overlapping disks, each characterized by an initial altitude, a particle size, and an associated radioactivity. The horizontal motion of the disks is governed by the wind field, their vertical motion depends on the particle size, and each disk grows in size according to a scale-dependent diffusion. Each disk eventually falls to the ground at its settling velocity, and when all of the disks have reached the ground, the total radioactivity is found by adding the contributions of all of the disks.
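The disk scheme can be caricatured in a few lines. This is emphatically not the KDFOC4 implementation: it assumes a single uniform wind, ignores diffusion, disk growth, and terrain, and all numbers are illustrative.

```python
from dataclasses import dataclass

# Toy version of the disk-advection idea described above (not KDFOC4 itself).

@dataclass
class Disk:
    altitude_m: float   # initial altitude of the disk
    settle_m_s: float   # settling velocity (set by particle size)
    activity: float     # radioactivity carried by the disk (arbitrary units)

def ground_arrival(disk: Disk, wind_m_s: float):
    """Return (downwind distance, fall time) for a disk in a uniform wind."""
    t_fall = disk.altitude_m / disk.settle_m_s
    return wind_m_s * t_fall, t_fall

disks = [
    Disk(altitude_m=8000.0, settle_m_s=1.0,  activity=5.0),   # large particles
    Disk(altitude_m=8000.0, settle_m_s=0.1,  activity=3.0),   # medium particles
    Disk(altitude_m=8000.0, settle_m_s=0.01, activity=1.0),   # small particles
]

for d in disks:
    x, t = ground_arrival(d, wind_m_s=10.0)
    print(f"settles {x/1000.0:.0f} km downwind after {t/3600.0:.1f} h, activity {d.activity}")
```

Even this caricature reproduces the qualitative point made earlier: the large, fast-settling particles deposit close-in within hours, while the smallest particles stay aloft long enough to become delayed fallout.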
About 10 surface and belowground bursts at the Nevada Test Site produced well-characterized fallout patterns. Following a detailed study by DTRA (then the Defense Nuclear Agency), a factor-of-two agreement was found in comparing KDFOC predictions with the measured radiation contours. Another comparison with the 15 megaton Bravo test (part of Operation Castle) produced comparably good agreement with the model calculations.
KDFOC4 allows variation in the source term, the wind field, the degree to which the population is sheltered, and the terrain (although for this committee’s study the terrain was assumed to be flat). The nuclear device is described by a fission/fusion ratio with an equivalent fission yield based on a mix of Plowshare and laboratory experimental data. For a systems study, the wind field is described in 16 sectors of 22.5 degrees, and fatalities are computed separately for each wind direction; expected fatalities are then calculated by averaging over historical data on the wind field. The modeled population can be out in the open, partially sheltered, or completely inside.
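The 16-sector averaging amounts to a probability-weighted sum. In this sketch, the per-sector fatality counts and the uniform wind climatology are hypothetical stand-ins for the historical wind-field data:

```python
# Illustrative sketch of the 16-sector wind averaging described above.

NUM_SECTORS = 16          # sectors of 22.5 degrees each

def expected_fatalities(per_sector_fatalities, sector_probabilities):
    """Probability-weighted average over the 16 wind-direction sectors."""
    assert len(per_sector_fatalities) == NUM_SECTORS
    assert abs(sum(sector_probabilities) - 1.0) < 1e-9
    return sum(f * p for f, p in zip(per_sector_fatalities, sector_probabilities))

# Hypothetical per-sector results, with a uniform climatology (all sectors equally likely)
fatalities = [1000.0] * 8 + [2000.0] * 8
probs = [1.0 / NUM_SECTORS] * NUM_SECTORS
print(expected_fatalities(fatalities, probs))  # 1500.0
```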
Fatalities are computed as resulting from prompt effects and from local fallout. As with HPAC, the population data are taken from the LandScan 2002 database maintained by the Oak Ridge National Laboratory.1 Prompt effects due to the initial heat, blast, and radiation are computed with the NUKE code developed by Sandia National Laboratories. The acute-fallout biological effects are based on the model in Glasstone and Dolan’s The Effects of Nuclear Weapons, which associates a probability of lethality with a given dose of radiation.2 The LD50 (i.e., the level of radiation at which 50 percent of the population would die) for these calculations is set at 4.0 sieverts (400 rems). Acute fatalities are defined as those that occur within the first 60 days after the explosion; deaths from latent effects of fallout are those that occur subsequently. However, no attempt is made to account for global fallout or for low radiation doses (less than 1 millisievert [100 millirems]).
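The dose-lethality relation can be illustrated with a sigmoid curve pinned at the stated LD50 of 4.0 sieverts. The lognormal (probit-style) shape and its width are our assumptions for illustration, not the exact Glasstone and Dolan tabulation used in the code:

```python
import math

LD50_SV = 4.0   # from the text: 4.0 sieverts (400 rems)
SIGMA = 0.3     # assumed lognormal width; the actual fitted slope is not given here

def lethality_probability(dose_sv: float) -> float:
    """Illustrative lognormal dose-lethality curve pinned at LD50 = 4 Sv."""
    if dose_sv <= 0.0:
        return 0.0
    z = (math.log(dose_sv) - math.log(LD50_SV)) / SIGMA
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(lethality_probability(4.0))         # 0.5 by construction
print(lethality_probability(2.0) < 0.05)  # well below LD50 -> low lethality
```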
SOURCES OF UNCERTAINTY AFFECTING MODEL PREDICTIONS
It is very difficult to give an overall assessment of the accuracy of the models described in this chapter. In this section the committee summarizes the state of the art in each of the major modeling areas and then brings these together to evaluate the overall uncertainty.
The source term for nuclear explosions is a fit to the 10 or so surface-burst and buried-burst tests that were carried out at the Nevada Test Site in the 1960s and 1970s. It has also been tested against more limited data from the Pacific Proving Ground and from unintentional releases of radioactivity (e.g., from tests that vented) that then produced fallout some distance from the initial explosion. These tests were primarily integral experiments that measured radiation on the ground after the events. Most were done under benign meteorological conditions, with low winds and relatively flat terrain. At a workshop held by DTRA (then the Defense Nuclear Agency), it was determined that it was possible to model all of the data to within about a factor of two.3
The source term for chemical and biological agents released after a conventional or nuclear attack on a facility is derived from a much sparser set of data. DTRA has carried out an extensive testing program to attempt to simulate such events, but the testing suffers from three factors not present in the nuclear case. First, the lack of a “signature” for tracing particles makes it much more difficult to detect the ultimate fate of the chemical and biological agents released. Second, variations in the effects of an attack with conventional weapons depend so strongly on such variables as how close the explosion is to the containers; whether the agent is liquid, solid, or gas; how well the explosive penetrates the target area; and so on that it is virtually impossible to predict, for likely real-world situations, the amount of agent lofted to the atmosphere for transport out of the immediate area. Third, the ultimate effects of a biological plume can be influenced by the inherent stability of the particular agent targeted (spores, vegetative forms, virions, and so on) and probably by whether the material is wet or dry. Nonetheless, it is probably possible to bound the source term through use of the DTRA experimental database and models, although this committee has not seen the analysis and results needed to make those judgments. Consequently, this study takes the position that a parametric analysis with an assumed fraction of released or lofted agent would be the best way to model the range of possible outcomes, given the uncertainties in our knowledge of the event.
DTRA has not provided a quantitative evaluation of its model for predicting chemical/biological agent release as tested against the results of its experiments, but it has suggested that lofting of between 0.1 and 5 percent of the stored agent is a plausible number for release of “transportable” agent,4 and it proposed a bimodal distribution in particle size as a way of characterizing the releases.5
Transport and Dispersion
Once fallout or a chemical or biological agent (specified in the source term as the amount of particles of each size as a function of altitude) is released, winds carry the particles away from their initial location, they spread horizontally owing to normal diffusion, and they settle to the ground under the force of gravity. When all of these processes are modeled, the results combine to produce contours of radiation or agent concentration at distances from the initial source.
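For the gravitational-settling step, a common first approximation for the smaller particles is Stokes' law. The material properties below are typical assumed values, and Stokes' law overestimates settling speeds for particles much above about 100 micrometers, where the particle Reynolds number exceeds unity:

```python
# Stokes-law terminal settling velocity as a rough model of dry deposition.
# Density and viscosity values are typical assumptions, not figures from the text.

RHO_PARTICLE = 2600.0   # kg/m^3, typical soil/mineral particle density (assumed)
RHO_AIR = 1.2           # kg/m^3, air density near the surface
MU_AIR = 1.8e-5         # Pa*s, dynamic viscosity of air
G = 9.81                # m/s^2

def stokes_settling_velocity(diameter_m: float) -> float:
    """Terminal settling velocity (m/s) in the Stokes regime."""
    return (RHO_PARTICLE - RHO_AIR) * G * diameter_m ** 2 / (18.0 * MU_AIR)

for d_um in (1, 10, 100):
    v = stokes_settling_velocity(d_um * 1e-6)
    print(f"{d_um:>3} um particle: {v:.2e} m/s")
```

The strong (diameter squared) size dependence is what separates the early, close-in fallout of large particles from the slow worldwide deposition of submicron material.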
Quite sophisticated transport models are used, and these are likely to be fairly accurate when the meteorology is well defined, that is, when there are well-specified wind directions, small or moderate turbulence, no rain, and good knowledge of the topography. All of the computer models are planned to include provisions for these factors (e.g., rain), but there has been very little testing of how well the models work in complex topography or stormy conditions. It is probably most important to incorporate major topographic features such as mountains into the models, since these will always have a major effect on the results compared with what a model assuming flat terrain would predict in such regions.
The validation of models for particle transport is intimately connected to knowledge of the source term, and in most cases it is hard to disentangle the two factors. However, as described in the section above, the combined results produce about a factor-of-two agreement for the available nuclear test data. DTRA has also modeled the Russian release of anthrax at Sverdlovsk (and is able to produce reasonable agreement with the published results).6 Finally, over the years LLNL has used even more sophisticated models of atmospheric transport to characterize accidental releases of nuclear, chemical, or biological materials and has achieved some success in predicting global transport of such substances (e.g., from the accident at Chernobyl).
The existing state of the art in tracking and predicting atmospheric transport and dispersion of hazardous material was reviewed in another National Research Council study,7 whose recommendations should help guide future work in this area. Although the planning needs of the military and the emergency response community are somewhat different, the commonalities are also very large. These two groups should thus make special efforts to compare and utilize the results of one another’s research.
For modeling particle transport to predict the effects of hazardous materials dispersed in nuclear attacks on hardened targets or attacks with conventional weapons on chemical and biological agent facilities, it is probably more important to include additional factors such as topography in existing models than to build more sophisticated transport models. The exact distance at which hazardous material of a particular concentration is deposited may be estimated incorrectly, but unless the deposition is near the boundary of a populated area, that error will often not affect the predictions of total casualties (the casualties will just occur in different places).
The strength and moisture content of soil are known to have significant effects on the size of the crater produced by a nuclear explosion at a given depth of burst, but the specific relationship is uncertain. The relationship between the amount of energy trapped in a nuclear cloud and the crater size is not the same for all soil materials. Some materials are inherently stronger than others (granite as opposed to sand), but it is not clear that soil strength is more significant than the total energy required to move the weight of the soil and form a crater. Although it has not been proven, there is evidence that the most significant factor related to fallout is the efficiency of energy coupling. That is, for some soils, less energy is required to move a given mass of soil than for others. It is further believed that the key soil ingredient that improves this efficiency is moisture content. Water is a soil constituent that expands more per given energy input than do most other soil constituents.
A more complex aspect of prediction of the effects of fallout is the size distribution of soil particles in the nuclear cloud. Some of these particles may be picked up from the ground surface by the turbulent flow and not be radioactive. The extent of pickup will depend primarily on soil type, although the complete nature of the dependence is not known.
Weapon placement and environmental material around the burst also have an impact on amounts of fallout produced. Experiments are needed to quantify this impact. Weapon design and yield also affect the amount of fallout produced. In addition to using lower yields in earth-penetrator weapons (EPWs), reducing the amount of fission energy in the design of EPWs would also reduce fallout.
Another important source of uncertainty affecting the amount and impact of fallout is fractionation, a process whereby fission products become attached to the soil particles in the cloud stem and fireball. Both the base surge and the ejecta, including the continuum of radioactive material that breaks up into discrete particles bearing fission products, are strongly dependent on the surface geology in the area of the explosion.
Since the earliest days of the nuclear age, the nuclear radiation community has built up a detailed methodology for correlating radiation dose with the probability of death or serious injury. All of the nuclear test models capture this information. One source of uncertainty in the models concerns the determination of LD50 (the dose at which 50 percent of the population will die) and the long-term impacts of low-level radiation. The overall uncertainties in estimating casualties are in the tens of percent range and are smaller than most of the other sources of uncertainty in the models.
A second source of uncertainty is the amount and concentration of radiation present shortly after a nuclear explosion and its decay with time. To calculate this precisely requires knowledge of the details of a subsurface nuclear explosive (e.g., the amount of fission versus fusion) and how much activation occurs in different soil compositions. The models usually employ relatively simple algorithms for these effects and typically assume that the initial radioactivity decays with time raised to the negative 1.2 or negative 1.3 power. It is worth improving these prescriptions, but the errors are in the tens of percent range.
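The power-law decay used by the simpler models can be written directly. The function name and reference dose rate below are illustrative; only the exponent comes from the text.

```python
def dose_rate(rate_at_1h, t_hours, exponent=1.2):
    """Dose rate at t hours after burst, given the rate at t = 1 hour,
    assuming the simple t**-1.2 power-law decay described in the text."""
    return rate_at_1h * t_hours ** (-exponent)

# The familiar "7-10" rule of thumb falls out of the 1.2 exponent:
# a sevenfold increase in time cuts the dose rate by roughly a factor of 10.
ratio = dose_rate(100.0, 1.0) / dose_rate(100.0, 7.0)   # ~10.3
```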
The chemical and biological communities have begun to focus on the real-world potential lethality of various toxic chemical and biological agents, but the variation in lethality, route of exposure, agent lifetime, sensitivity to environmental factors, and releasability in virulent form is vastly wider than that for nuclear fallout. Current models incorporate these factors to some extent, but no sensitivity studies were presented to the committee that describe the ways in which each of these factors affects the lethality of various agents (nearly every one of these factors reduces the potency of an agent). It is strongly urged that such sensitivity studies be done and reviewed by appropriate scientific and medical experts and by emergency responders, who are equally important in resolving these questions.
Careless use of the models to predict casualties from the use of nuclear EPWs can lead to estimates that are off by more than a factor of 10, and by much more than that when chemical and biological agents are targeted (and usually on the high side in these cases). Intelligent use of models to predict the effects of nuclear events can reduce this uncertainty to a factor of 3 if the wind direction, percentage of a population likely to be sheltered, and other meteorological and topographic factors are known and taken into account. Although the number of casualties at any precise location may not be accurately modeled, there is much less uncertainty in estimates of total casualties for populated urban areas.
The effects of release of chemical and biological agents are much more uncertain. However, a rule of thumb might be to use the code as well as possible by including all relevant phenomena, and then bounding the problem by a factor of a few in either direction, recognizing that casualties might in some cases be much less if the weapon failed to work or impacted off target or if the agent were particularly susceptible to environmental degradation. Studies of the conditions of release (e.g., whether from attacks occurring at night versus during the day) can also help elucidate the sensitivity of the results to environmental and other factors.
SUGGESTIONS FOR IMPROVING TOOLS
Based on its examination of the codes discussed in this study, the committee believes that tools such as HPAC and KDFOC (and others) should be upgraded to support more precise estimates of the health and environmental consequences of attacks on hard and deeply buried targets. In particular, this upgrading should include the following:
Improve the characterization and validation of the nuclear source terms for a more detailed set of the geological and meteorological conditions characterizing the locations of hard and deeply buried targets;
Incorporate global effects, including environmental impacts (on crop production, fishing, and so on) and low-dose-rate effects on humans across the full demographic spectrum, from infants to the elderly, using results from Chernobyl as well as from the Nevada Test Site and Pacific Proving Grounds;
Address the needs of emergency responders;
Refine population databases to include demographics and to account for movement of people, migration, and evacuation;
Develop, integrate, and maintain three-dimensional urban and topographic databases;
Validate the transport models more thoroughly, including over a broader range of realistic environmental conditions (e.g., topography, microclimates, and so on); and
Substantially improve the chemical and biological models in the HPAC code and other tools as follows:
Document and expand the verification and validation of the source term for a wider variety of scenarios, and
Determine lethality and other parameters for each relevant chemical and biological agent from a consensus of the relevant technical communities and document them. In particular, describe the wide variability in human susceptibility to biological agents and the consequent uncertainty in predicting the effects of their dispersal as the result of an attack on facilities for the storage and/or production of agents.
The committee also suggests that it would be useful for DTRA to convene periodic workshops to compare the results of all widely used predictive tools against experimental data and sets of benchmark problems and to publish the results.
ATTACHMENT 5.1: ELEMENTS OF DEFENSE THREAT REDUCTION AGENCY CODES AND DATA
Source Term Codes
Mathematical representations of physical processes that produce a hazardous source are called source term models. Source terms define the characteristics of materials initially released in an explosion for use in modeling downwind transport and dispersion. Examples of HPAC source terms include explosions, sprays, and airborne releases (dumping) of bulk powders or liquids. Each of these means of agent dispersal can be described mathematically by equations derived from the laws of physics or by empirical correlations determined through experimental observations.
Chemical and Biological Sources
HPAC includes a complex methodology to calculate source terms for breached containers in a chemical/biological facility, and models are being developed for incidents at industrial facilities. The initial versions of HPAC, prior to HPAC 4.04, use a table lookup system for predicting releases from breached containers. The user states the type and amount of agent that is contained and estimates a general damage category (light, moderate, severe, total). HPAC associates a percentage of agent released with each damage category. For biological agents, 10 percent, 20 percent, 30 percent, and 40 percent of the agents are released under light, moderate, severe, and total damage, respectively. For chemical or industrial agents, the corresponding percentages of agents released are 15 percent, 30 percent, 45 percent, and 60 percent. Note that the committee views these large percentages of released agent as unrealistic and concurs with DTRA that 0.1 percent to 5 percent is plausible. For HPAC 4.04, the legacy table lookup system is no longer used. Instead, for a user-selected damage category, the estimated release is physics-based, using three modeling steps: damage state, container release, and expulsion/cloud. The overall process is to determine the damage to the facility (containers and building), to predict the container releases, and to calculate the source term parameters for the SCIPUFF atmospheric transport and dispersion model.
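The legacy table lookup described above amounts to a small two-way table. The release fractions below are those quoted in the text; the function and key names are illustrative, not HPAC's actual interface.

```python
# Fraction of stored agent released, by damage category
# (the pre-HPAC 4.04 legacy table lookup described in the text).
RELEASE_FRACTIONS = {
    "biological": {"light": 0.10, "moderate": 0.20, "severe": 0.30, "total": 0.40},
    "chemical":   {"light": 0.15, "moderate": 0.30, "severe": 0.45, "total": 0.60},
}

def released_mass(agent_class, damage_category, stored_kg):
    """Mass of agent released under the legacy table method."""
    return RELEASE_FRACTIONS[agent_class][damage_category] * stored_kg

# e.g., 1,000 kg of chemical agent under "moderate" damage -> 300 kg released,
# far above the 0.1 to 5 percent range the committee considers plausible.
```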
Liquid Chemicals Source
Bulk liquid dissemination is the dumping of some amount of bulk liquid agent from an aerial platform. The aerodynamic forces experienced by the bulk liquid in the air lead to the formation of initial droplets, followed by further evolution of the distribution of droplet size owing to liquid evaporation and aerodynamic breakup.
A pool of liquid agent on the ground may also be a source of airborne hazards. Liquid agent may be spilled from a container or munition and then evaporate, forming vapor clouds that are transported downwind.
Pool evaporation constitutes a continuous source of airborne hazardous material over a period of time. Parameters describing a pool evaporation source are the rate and duration of evaporation and the initial cloud sizes generated. The number and size of clouds or “puffs” produced reflect the dimensions of the evaporating pool. The duration of the release is dependent on the initial depth of the pool and the rate of evaporation.
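The pool relationships above reduce to simple arithmetic. The sketch below is a minimal illustration under the stated assumptions of a constant evaporation rate and fixed pool area; all numerical values are illustrative, not drawn from HPAC.

```python
def pool_source(depth_m, evap_rate_m_per_s, area_m2, liquid_density_kg_m3):
    """Duration (s) and mass emission rate (kg/s) of an evaporating pool,
    assuming a constant evaporation rate over a pool of fixed area."""
    duration = depth_m / evap_rate_m_per_s                      # depth / rate
    mass_rate = evap_rate_m_per_s * area_m2 * liquid_density_kg_m3
    return duration, mass_rate

# Illustrative: a 5-mm-deep, 10 m^2 pool evaporating at 1e-7 m/s.
duration_s, rate_kg_s = pool_source(0.005, 1e-7, 10.0, 1000.0)
# duration ~50,000 s (about 14 hours) at ~0.001 kg/s
```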
Nuclear and Radiological Sources
Radiological material, fission fragments, and nuclear radiation can result from several types of releases or events, including the following:
Explosive release of radiological materials to energetically and quickly disperse the material (e.g., use of a relatively small amount of explosive to scatter the material in the air or over a surface);
A nuclear reactor accident (or deliberate unauthorized action) during which reactor fuel and spent-fuel particles are disseminated into the atmosphere and/or water; and
Detonation of a nuclear weapon or device on or near any target.
The following adverse environments would be created from detonation of a nuclear weapon or improvised nuclear device:
Blast and shock,
Large thermal output,
Initial nuclear radiation (primarily gamma rays and neutrons), and
Residual nuclear radiation (fallout).
Some of these effects also can be present when an adversary deliberately disseminates radiological material.
Factors of concern when modeling dispersal of nuclear radiation include nuclear yield (if a detonation is involved), fallout/rainout, particle size, wind shear, geology, burst (or dispersal) height, type of ionizing radiation, prevailing winds and/or water currents, and migration, half-life, and human bioeffects.
HPAC uses two source term models for fallout: K-Division Defense Nuclear Agency Fallout Code (KDFOC) and New Fallout Code (NEWFALL). These models use the weapon parameters of yield, height or depth of burst, time, and fission fraction (fraction of nuclear energy produced by fission) to estimate the weapon output. KDFOC was jointly developed by LLNL and the Defense Nuclear Agency in the 1960s and is used for buried bursts. It is an empirical code based on tests at the Nevada Test Site that define an effective nuclear mushroom cloud, characterizing its initial altitude, particle size, and associated quantity of radioactivity. NEWFALL, used for shallow buried bursts, surface bursts, and airbursts, models particle size, agglomeration, and radiation distributions. NEWFALL is substantially based on the legacy Defense Land Fallout Interpretative Code (DELFIC) and on both British and Nevada Test Site (NTS) tests.
What Are the Major Limitations or Uncertainties?
Within the methodology, the level of uncertainty in the HPAC source term is a significant concern. Current HPAC capabilities for predicting nuclear effects were designed for use in Cold War and massive-strike scenarios. For those purposes, HPAC is more than accurate enough. Additionally, HPAC’s fallout particle size and radionuclide inventories match high-end estimations for most nuclear tests. All codes for fallout are sensitive to weapon yield, height of burst, and target environment. No code takes into account the uncertainty associated with weapon yield or HOB. The current codes also do not explicitly account for secondary activation from the target-area environment. KDFOC accounts for secondary activation in the base for buried bursts, but only within the limits of the test data available. NEWFALL does not account for any secondary activation for shallow buried bursts, surface bursts, and airbursts. Depending on the target environment, the current codes may underpredict the associated fallout by up to several orders of magnitude. Assuming that the target area is comparable in geology and environment to the Nevada Test Site, the model source term for fallout, independent of weather, is well within an order of magnitude of error. Work is under way to estimate the effects of a single weapon at much higher fidelity, but this feature will not be operational for several years.
Transport and Dispersion Codes
Atmospheric transport and dispersion modeling is one of the most essential functions of hazard assessment codes, largely because these processes play a major role in the evolution of particle and agent concentration profiles in the atmosphere. Most such models use the advection-diffusion equation, whose solution gives the concentration profile for a cloud as a function of time and location. HPAC uses SCIPUFF to estimate the dose-rate matrix. SCIPUFF is statistically based, with a three-dimensional wind field that includes shear owing to terrain interaction.
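In its generic textbook form (not the specific formulation of any particular code), the advection-diffusion equation for the concentration c of a passive material carried by a wind field u, with eddy diffusivity K and source term S, is:

```latex
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\,c)
  = \nabla \cdot \left(K\,\nabla c\right) + S
```

The first term on the left is the local time rate of change, the second is advection by the wind; the right-hand side combines turbulent diffusion and sources (or, with a negative sign, sinks such as deposition and decay).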
HPAC Second-Order Closure Integrated Puff Model
SCIPUFF is a Lagrangian model for the transport and atmospheric dispersion of material. The numerical technique employed to solve the dispersion model equations is the Gaussian puff method, in which a collection of three-dimensional puffs is used to represent an arbitrary time-dependent concentration field. The turbulent diffusion parameterization used in SCIPUFF is based on the second-order turbulence closure theories of Donaldson8 and Lewellen,9 providing a direct connection between measured velocity and the predicted dispersion rates.
The second-order closure model also provides the probabilistic feature of SCIPUFF through the prediction of the fluctuation in particle concentration. In addition to giving a mean value for the concentration, SCIPUFF provides a quantitative value for the random variation in concentration due to the stochastic nature of turbulent diffusion. This estimate of uncertainty is used to generate a probabilistic description of dispersion and gives a quantitative characterization of the reliability of the prediction. For many calculations of dispersion, the prediction is inherently uncertain owing to a lack of detailed knowledge of the wind field, and a probabilistic description is the only meaningful approach.
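The Gaussian puff representation can be sketched as follows. This is the generic textbook form, not SCIPUFF's actual implementation (which adds the second-order closure, shear, and variance prediction discussed above), and the names are illustrative.

```python
import math

def puff_concentration(x, y, z, center, sigmas, mass):
    """Concentration at (x, y, z) from one 3-D Gaussian puff of given total
    mass, centered at `center` with spreads `sigmas` = (sx, sy, sz)."""
    (cx, cy, cz), (sx, sy, sz) = center, sigmas
    norm = mass / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    arg = ((x - cx) ** 2 / sx ** 2 + (y - cy) ** 2 / sy ** 2
           + (z - cz) ** 2 / sz ** 2)
    return norm * math.exp(-0.5 * arg)

def field_concentration(x, y, z, puffs):
    """An arbitrary concentration field is represented as a sum of puffs,
    each carried, stretched, and split by the wind field as time advances."""
    return sum(puff_concentration(x, y, z, c, s, m) for (c, s, m) in puffs)
```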
What Are the Major Limitations or Uncertainties?
The inputs for wind and weather will involve significant uncertainties. HPAC uses weather files (or flow fields) generated by others—the Air Force Weather Service, the Navy Fleet Numerical Operations Center, applications of the DTRA Operational Multiscale Environment Model with Grid Adaptivity (OMEGA), the National Oceanic and Atmospheric Administration, and allied weather models—or it operates on simple input such as a single wind vector, uniform throughout the domain, or an interpolation between two or more atmospheric soundings. HPAC is dependent on the resolution, fidelity, and accuracy of the imported flow field. The HPAC methodology estimates the uncertainty in both the flow-field characterization and the transport and dispersion operations done in those fields. Because HPAC is most generally used for operational planning based on a weather forecast, the quality of that forecast is the major limitation and the driver of uncertainties in modeling atmospheric transport and dispersion.
Personnel, Health, and Environmental Effects Codes
In health effects codes, fatalities are calculated on the basis of the number of people surviving at 60 days after a nuclear event. Injuries are calculated on the basis of incidence of illness from episodes of nausea and vomiting or unusual fatigability and weakness. Casualties are the sum of fatalities and injuries.10 HPAC uses the Probability of Damage Calculator (PDCALC) to calculate the probability of fatalities and casualties, based on a population database, from prompt effects of a nuclear explosion. HPAC calculations of fallout employ the Radiation Induced Performance Distribution (RIPD), which provides human-effects estimations based on dose, dose rate, shielding, and time. Fatalities are calculated on the basis of a lognormal dose-response curve with a median lethal dose of 410 rads. Injury is calculated on the basis of early effects of exposure to radiation at a median effective dose of 210 rads (fatigability) and 240 rads (weakness). External dose is calculated on the assumption that the exposure rate decreases with time raised to the negative 1.3 power. RIPD estimates the response of military personnel in terms of physiological repair and/or recovery based on detailed models of relevant biological processes.
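A lognormal dose-response curve of the kind mentioned above can be written with the standard normal cumulative distribution function. The median doses come from the text, but the logarithmic standard deviation (shape parameter) below is a placeholder assumption, since the text does not give one.

```python
import math

def lognormal_response(dose_rads, median_rads, log_sigma=0.3):
    """Probability of effect at a given dose for a lognormal dose-response
    curve. log_sigma = 0.3 is an illustrative shape parameter, NOT a value
    taken from PDCALC or RIPD."""
    z = (math.log(dose_rads) - math.log(median_rads)) / log_sigma
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_at_median = lognormal_response(410.0, 410.0)   # exactly 0.5 at the LD50
p_low_dose  = lognormal_response(200.0, 410.0)   # well below 0.5
```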
Transport and dispersion models estimate casualties by drawing dosage isopleths on the calculated or predicted curves of the downwind plume. Each model uses different methods to calculate the concentration in the plume, but essentially all calculate dosage by integrating the concentration at a point over time. Only HPAC includes a subroutine that calculates casualties from a nuclear event. HPAC calculates or predicts chemical and biological agent dosages and isopleths based either on program defaults or on user input.
HPAC uses the SCIPUFF model to predict concentrations of agents or radioactive particles and fluctuations in concentration. These predictions of airborne concentration are integrated over time to obtain an integrated dose of surface fallout, which is used to estimate lethality in human populations. Deposited mass on the surface may also be used to estimate percutaneous effects of exposure to chemical agents.
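The time integration described above is a simple quadrature. The sketch below applies the trapezoidal rule to sampled concentration values; the names are illustrative, and real codes integrate the model's continuous concentration prediction rather than discrete samples.

```python
def dosage(times, concentrations):
    """Integrated dosage (concentration x time) from sampled values at a
    fixed point, using the trapezoidal rule."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (concentrations[i] + concentrations[i - 1]) * dt
    return total

# A constant concentration of 2 units held for 10 time units -> dosage of 20.
d = dosage([0.0, 5.0, 10.0], [2.0, 2.0, 2.0])
```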
This probabilistic prediction of concentration or dose level can be integrated over the surface area to give an estimate of population exposure when the data on dose are coupled with data on population density. The result is an estimate of the number of people exposed to radiation or agent at a given dose or concentration. Conversely, an estimate of the area covered by radiation or agent at a particular concentration is also available, depending on the output that an operator selects. Estimation of a hazardous area is based on a calculation of dispersion that includes the uncertainties in the data input for wind conditions. Prediction of a hazardous area is based on an estimate that indicates the probability of exceeding a threshold value for concentration of radiation or agent.
SCIPUFF not only provides a mean value concentration, but also predicts a quantitative value for the random fluctuation in that value owing to turbulent diffusion. The results are a quantitative characterization of the validity of the prediction.
What Are the Major Limitations or Uncertainties?
HPAC’s estimates of effects are for those deemed to be militarily significant. Hence, low-dose effects on humans, effects on the environment (crop production, fishing, and so on), and economic impacts are not addressed. Combined effects (e.g., increased susceptibility to disease after exposure to radiation) are not considered. Within the methodology, human-effects errors in HPAC estimates of casualties should be within a factor of two for military-age (17- to 24-year-old) male personnel who have no medical treatment or additional injuries. Effects on the full demographic range, from infants to the elderly and infirm, may vary significantly.
Several types of databases are utilized by these codes: population, agent fate, weather, and terrain.
HPAC uses LandScan 2002, supplied by the Oak Ridge National Laboratory. Based on 1 kilometer by 1 kilometer worldwide grid cells, it is the only known high-resolution worldwide population database. The methodology for building LandScan population cells involves determining the probability that someone lives in a cell and the number of people in the cell. The probability of a cell’s being inhabited is based on factors such as proximity to roads and water, land cover, elevation/slope, nighttime lights, and data from satellite imagery. Estimates of the number of people are based on the best census data from the U.S. Bureau of the Census and the International Programs Center. Data on land use and urban canopy are used for computations of ground-level atmospheric transport. Building inhabitant probability and building-type structural protection factors are derived to estimate probabilities of structural protection for each cell.
What are the major limitations or uncertainties? The LandScan population database does not account for events such as population movements (e.g., commuting), evacuation, or special events (e.g., the Olympics). The number of people in each cell can range over an order of magnitude when such factors are taken into account. Only detailed intelligence at a specific time can reduce this uncertainty. As with algorithms for predicting the effects of explosions, the population data contain no demographics.
After they are released into the air, biological agents degrade as a function of the levels of radiant solar energy, relative humidity, temperature, ozone, and hydroxyl radicals. Biological agents are particularly susceptible to inactivation by the ultraviolet energy in sunlight; even anthrax spores degrade rapidly in bright daylight. Although some chemical agents are environmentally sensitive, neither GB (Sarin) nor VX is degraded significantly by sunlight or by hydrolysis while suspended in the atmosphere. For environmentally sensitive agents, the duration of exposure to sunlight and to photochemically produced compounds and radicals is an important factor in their loss of viability or toxicity. Biological agents released by explosive destruction of the sites where they are stored will be subject to inactivation by the high temperatures generated in the explosion, and chemical agents will be degraded to the extent that they are affected by very high temperatures. Those agents that do not readily aerosolize will contaminate the immediate surrounding area, where they will ultimately be degraded by hydrolysis or another chemical reaction.
What are the major limitations or uncertainties? Within the methodology for HPAC, agent decay is a minor source of uncertainty within the first 180 days after a nuclear burst or accident, or release of a biological or chemical agent. Beyond that point, long-lived radionuclides dominate. Since sunlight has an ultraviolet (UV) spectrum that is very effective in killing spores, HPAC uses a decay constant of 6 hours for spores.
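Treating the 6-hour figure quoted above as an exponential e-folding time (an assumption; the text does not say whether the decay constant denotes an e-folding time or a half-life), the surviving fraction of spores in daylight would be:

```python
import math

def surviving_fraction(hours, decay_time_hours=6.0):
    """Fraction of spores still viable after `hours` of daylight exposure,
    assuming exponential decay with the stated time constant (an assumed
    interpretation of the 6-hour figure in the text)."""
    return math.exp(-hours / decay_time_hours)

f_6h  = surviving_fraction(6.0)    # ~0.37 after one time constant
f_24h = surviving_fraction(24.0)   # ~0.02 after a full day of sunlight
```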
HPAC incorporates many sources of weather data. With this variety comes the burden of determining the best possible data for a particular HPAC project. This selection depends on a number of factors, including the type of project (nuclear, radiological, chemical, or biological) and the timeline of the event (scenario planning, prestrike decision making, or poststrike analysis).
Different hazard scenarios require different types of meteorological data, which can come from observations or from forecast models. The meteorological data entered into HPAC may be categorized according to four types: (1) a single observation (fixed wind), (2) climatology (historical weather data), (3) surface and upper-air profile observations, and (4) weather model predictions in either gridded or profile format.
For a limited chemical release, a single, representative wind observation probably is sufficient to make a reasonable calculation of transport. Data for a fixed wind can be entered directly using a simple interface. Fixed wind is the simplest weather data type. HPAC assumes that a fixed wind is horizontally uniform. The fixed-wind option uses a theoretical vertical profile unless terrain is added. While a fixed wind is not the most accurate way of specifying weather conditions, the fixed-wind option does provide a quick calculation. This method can produce data to supplement climatology data when there are no available climatic data or when the goal is to display incident plumes driven by winds from varying directions.
Historical weather data created by the Air Force Combat Climatology Center from multiyear records of weather data are included on the HPAC CD-ROM. These historical weather data include the effects of both time of day (diurnal) and seasonal variations on the weather. The data sets provide quantitative meteorological input for long-range planning and incidents for which no other weather information is available.
HPAC includes both surface and upper-air historical weather data. Surface weather data are most useful for chemical incidents with durations of 4 hours or less. Upper-air weather data are more representative for larger domains and for longer-duration hazards such as those produced by the use of nuclear weapons or the release of biological agents, or for higher-altitude releases of materials (releases above 500 meters). Whenever historical weather data are selected as the weather input, HPAC defaults to using the upper-air historical weather data.
What are the major limitations or uncertainties? Several papers have recently appeared in the literature on the use of the ensemble method with transport and turbulent diffusion models. Many groups are currently carrying out sensitivity studies to determine how the ensemble approach can best be implemented to improve their forecasts. In this approach, N meteorological models with several alternate inputs or initialization options are run on the same scenario. At the moment, N typically ranges from 5 to 20. It is found that the average of N forecasts is more accurate than any single-model forecast. The forecasters try to make sure that the spread or uncertainty range for N forecasts is approximately the same as the spread in the climatological observations. This is an empirical approach, since there is no a priori reason that the spread of the ensemble should equal the observed spread. Use of the ensemble method is growing in the meteorological and air-quality modeling area, but it has not reached maturity. Research is needed to put the ensemble method on a better statistical and scientific basis, especially concerning the spread in or uncertainty of the results.
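Mechanically, the ensemble averaging described above is straightforward: run N models (or one model with N initializations), then report the mean forecast and the spread. The sketch below computes both for N forecasts of a single quantity; all values are illustrative.

```python
import math

def ensemble_stats(forecasts):
    """Mean and spread (population standard deviation) of N forecasts of a
    single scalar quantity, the basic outputs of the ensemble method."""
    n = len(forecasts)
    mean = sum(forecasts) / n
    spread = math.sqrt(sum((f - mean) ** 2 for f in forecasts) / n)
    return mean, spread

# Illustrative: five model forecasts of surface wind speed (m/s).
mean_wind, wind_spread = ensemble_stats([8.0, 9.5, 10.0, 11.0, 9.0])
```

The forecasters' calibration check described in the text then compares `wind_spread` against the spread of the climatological observations.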
HPAC uses digital topographic elevation data (DTED) Level 0 terrain-elevation data developed by the National Geospatial-Intelligence Agency (NGA). DTED Level 0 elevation data have post spacing of 30 arc seconds (nominally 1 kilometer). These data allow construction of a uniform matrix of terrain-elevation values that provide basic quantitative data for systems and applications that require terrain-elevation, slope, and/or surface roughness information. DTED Level 0 is derived from the higher-resolution NGA DTED Level 1 data. Level 1 data have post spacing of 3 arc seconds (nominally 100 meters). Higher-resolution data are available. Balancing calculational accuracy with run-time requirements drove the choice of DTED Level 0 as the default data.
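The nominal spacings quoted above follow directly from arc length on the Earth. The sketch converts post spacing in arc seconds to meters under a spherical-Earth approximation.

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean Earth radius; spherical approximation

def post_spacing_m(arc_seconds, latitude_deg=0.0):
    """Ground distance (m) subtended by `arc_seconds`. Along a parallel,
    east-west spacing shrinks by cos(latitude); pass the latitude for that.
    With the default latitude of 0 this gives the north-south spacing."""
    radians = math.radians(arc_seconds / 3600.0)
    return EARTH_RADIUS_M * radians * math.cos(math.radians(latitude_deg))

level0 = post_spacing_m(30.0)  # ~927 m: the "nominally 1 kilometer" of Level 0
level1 = post_spacing_m(3.0)   # ~93 m: the "nominally 100 meters" of Level 1
```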
Calculations of atmospheric transport and dispersion (ATD) require longer run times when higher-resolution terrain data are used. Some operational users require answers quickly and can decrease run-time demands by reducing the number of grid posts in the calculational domain, or by changing the domain, or both. For a particular domain size, several choices of resolution are available, ranging from the native resolution down to as few as 900 points over the calculational domain. Another option is to not use any terrain-elevation data (sometimes called a flat-earth approach) in the calculation.
HPAC defaults to different domain and corresponding terrain resolutions depending on the simulation scenario. A long-ranging nuclear fallout hazard scenario will cause HPAC to default to use of a large domain, whereas a chemical event that is driven by local winds has a smaller domain. The user can override all of the defaults but must be careful to balance mission requirements (e.g., calculational run time) with output accuracy.
As use of ATD tools has begun to address hazards in the urban environment, the concept of terrain has been extended to include urban terrain. Urban terrain data represent building locations and heights. In contrast to the worldwide DTED data, digitized data on urban locales, compatible for input into urban transport and dispersion models, are available for only a handful of cities. There is currently no federal point of contact responsible for developing these data worldwide; that is, there is no NGA equivalent for urban terrain. It should be added, although it is beyond the scope of this discussion, that some urban models do not use building-height data at all. These models extend the traditional (nonurban) ATD models by altering wind-flow assumptions and characterizing the urban environment as a surface roughness.
Terrain affects wind flow. In the presence of complex terrain, wind tends to flow around hills and to change speed. The height of a plume above the local surface changes as it passes over a hill or other elevation feature. Both of these effects alter the local wind direction as well as the downwind transport and turbulent diffusion processes, so ATD models must account for terrain. Numerical weather models (whose output is input to ATD models) factor terrain into their calculational boundary conditions, but the calculations are performed on a coarse scale. These models typically predict wind information at resolutions of tens to hundreds of kilometers; they can be run at several finer resolutions, but long run times and associated resources are required. Even real-time wind observations are separated in time and by significant intervening terrain.
HPAC uses the Stationary Wind Fit and Turbulence (SWIFT) mass-consistent wind model to extrapolate the coarser-resolution wind data (from forecasts or observations) to a finer resolution at which terrain features affect the wind flow. When terrain is not a factor in the local wind flow, or when run-time demands call for a quick-look assessment, HPAC uses another model, Mass-Consistent SCIPUFF (MC-SCIPUFF).
What are the major limitations or uncertainties? The data on terrain and elevation are available at several levels of spatial resolution, as discussed above. The DTED terrain elevation data are quality-controlled, and the precision of the data is usually not an issue. However, spatial resolution can be a large factor in determining the accuracy of the ATD calculation and downwind hazard.
The resolution of the terrain data used should be matched to the simulation scenario. Hazard predictions of continental-scale radiation fallout from large nuclear weapon incidents do not require highly resolved data on surface winds; the terrain-elevation data can be coarse, and in some locations a flat-earth approximation is sufficient. Although ATD models can be run with high-resolution terrain data, at some point the user obtains false precision at the expense of very long run times. A longer-running ATD code does not mean a more accurate ATD code. When hazards must be characterized near the source, as with chemical or biological releases, terrain is important.
The HPAC model can use very high spatial resolution terrain-elevation data. The accuracy of the prediction is a trade-off between mission requirements supporting an actionable decision and operational constraints associated with run-time demands.
See the subsection “Databases” in Attachment 5.1 for a description.
Samuel Glasstone and Philip J. Dolan (eds.). 1977. The Effects of Nuclear Weapons, U.S. Government Printing Office, Washington, D.C., §2.18–2.21, §2.23–2.31, §2.91–2.95, §2.99–2.100.
Todd Hann, Defense Threat Reduction Agency, August 13, 2004, personal communication.
Todd Hann, Defense Threat Reduction Agency, August 13, 2004, personal communication.
Todd Hann, Defense Threat Reduction Agency, August 13, 2004, personal communication.
Matthew Meselson, Jeanne Guillemin, Martin Hugh-Jones, Alexander Langmuir, Ilona Popova, Alexis Shelokov, and Olga Yampolskaya. 1994. "The Sverdlovsk Anthrax Outbreak of 1979," Science, Vol. 266, pp. 1202-1208.
National Research Council. 2003. Tracking and Predicting Atmospheric Dispersion of Hazardous Material Releases: Implications for Homeland Security, The National Academies Press, Washington, D.C.
C. du P. Donaldson. 1973. Atmospheric Turbulence and the Dispersal of Atmospheric Pollutants, AMS Workshop on Micrometeorology, D.A. Haugen (ed.), Science Press, Boston, Mass., pp. 313-390.
W.S. Lewellen. 1977. “Use of Invariant Modeling,” Handbook of Turbulence, W. Frost and T.H. Moulden (eds.), Plenum Press, pp. 237-280.
A soldier is considered injured if s/he is unable to perform.