The Potential Role of GPS/MET Observations in Operational Numerical Weather Prediction
Ronald McPherson, Eugenia Kalnay, Stephen Lord
National Centers for Environmental Prediction, National Weather Service
Operational numerical weather prediction (NWP) applies the laws of physics, which govern the behavior of the atmosphere, to the practical problem of weather prediction. In mathematical terms NWP is an initial value problem, in that the physical laws are used to calculate the temporal evolution of the physical state of the atmosphere from an estimate of the initial atmospheric conditions.
Determining this “initial state” of the atmosphere is one of the three central problems in operational NWP. It requires observations of wind, temperature, pressure and humidity through the depth of the atmosphere, plus observations of some characteristics of the earth's surface such as snow cover, wetness, vegetation, and sea-surface temperature. These observations are presently obtained by a mixture of observing techniques that have evolved in a largely unplanned manner over the last five or six decades. For forecast projections longer than three or four days the complete global atmosphere must be sampled, and this has led to a considerable emphasis on space-based remote sensing techniques. The second section of this essay describes briefly the current observing system.
A second requirement for determining the initial state of the atmosphere is a system for assimilating disparate observations from this mixture of observing systems into a coherent, dynamically consistent, digital description of the atmosphere. Originally concerned merely with spatial interpolation of radiosonde data to a grid of regularly spaced points, modern four-dimensional data assimilation (4DDA) systems now seek to blend observations of many quantities from observing systems with widely differing error characteristics, with a highly accurate background (or “first guess”) estimate of the state of the atmosphere. Importantly, modern 4DDA systems are capable of ingesting observed quantities such as radiances or radar backscatter cross-sections rather than converting these quantities to more familiar meteorological variables such as temperature, wind, etc. The third section of this paper discusses characteristics of 4DDA systems that are relevant to the use of GPS/MET data.
From time to time, new observing technologies appear, offering either new data (to fill gaps), or better data (more accurate), or cheaper data. Several such possibilities are now, or soon will be, available. Governments that operate the existing, composite observing system are under enormous financial pressure to reduce the costs of observing the atmosphere. Therefore, the U.S. has recently undertaken a systematic redesign of the North American Observing System (NAOS), with the intent of observing better at lower cost. Several new technologies will be considered in this redesign effort, which will last for several years. One of those new technologies, using radio occultation techniques in connection with the Global Positioning System, is the subject of this essay. The last section of this paper addresses the potential usefulness of atmospheric refractivity inferred from these techniques in operational numerical weather prediction.
Vertical profile observations of the mass field (i.e., temperature) are obtained from two principal sources: balloon-borne radiosondes flown twice daily from about 600 stations world-wide, and from passive radiometric measurements from satellite platforms. The former are quite accurate, with standard errors of 0.5-0.8 °C, have excellent vertical resolution, and have for many years been the backbone of the global observing system. On the other hand, radiosonde stations are mostly located on northern hemisphere continents, which provides very uneven spatial coverage, and are expensive to operate. Satellite temperature observations are less accurate, with standard errors of about 2 °C, and have poorer vertical resolution, but offer excellent spatial coverage. Current satellite systems are also extremely expensive.
Wind profiles are available from radiosondes, from ground-based radar wind profilers, Doppler weather-surveillance radars, and increasingly from wide-bodied jet aircraft on ascent and descent near airports. Single-level wind observations are obtained from aircraft, and by tracking cloud and moisture patterns in geostationary
satellite imagery. With the exception of the satellite-derived winds, these observations are accurate to within about 2 m/s; satellite-derived winds are accurate to about 4-5 m/s. Importantly, essentially no wind profile information is available over the world's oceans. This is the greatest single deficiency in the present global observing system for operational NWP.
Moisture profiles are available from radiosondes, and vertically-integrated moisture measurements are available from satellites. New technology may soon make moisture profiles available from aircraft on ascent and descent, but these profiles, like radiosonde humidity profiles, are again restricted to land areas. Thus, the second most serious deficiency in the current observing system is the absence of moisture profiles over the oceans and over land with sufficient horizontal, vertical, and temporal resolution. This is especially important, indeed crucial, for short-period precipitation forecasting.
Until October 1995, data assimilation systems in use at operational NWP centers around the world required that satellite measurements of atmospheric radiance be converted to temperature profiles before they could be ingested. This process, called a “retrieval”, introduces errors and uncertainties into the retrieved profile. These errors tend to be spatially correlated, which greatly reduces their utility in operational NWP.
The U.S. National Centers for Environmental Prediction (NCEP) introduced a new data assimilation system in October 1995 that is based on a three-dimensional variational technique for satellite radiances. This new formulation permits radiance measurements to be ingested directly into the data assimilation, thereby bypassing the retrieval process. Other operational centers are developing similar formulations.
In previous assimilation methods, as in the present one, the analysis was obtained by minimizing its distance to both the first guess (background field) and to the observations. However, if the observations (e.g., satellite radiances) were different from the model variables (e.g., temperature and moisture), the observations were first converted into model variables through a “retrieval process”. Since satellite observations are not sufficient to determine a unique atmospheric profile, this is an ill-posed problem which requires additional information such as a background field (e.g. climatology). The accuracy of the retrievals is compromised by these additional assumptions, the error characteristics are less clearly defined, and quality control is less effective.
Within the 3-D variational analysis, in which the observed radiances are compared with those that would be observed from a model atmosphere, we do not need to introduce any additional assumptions. It is a process similar to performing 3-D retrievals of all satellite data instead of the normal 1-D (column-wise) retrievals. Furthermore, it takes full advantage of the accurate model-generated first guess and all additional observations (e.g. radiosondes) simultaneously. The 3-D variational analysis with radiances produced improvements in five-day forecasts for the Northern Hemisphere equivalent to 40% of the total improvement of NCEP from 1984 to 1995. The improvement in Southern Hemisphere forecasts is even greater.
The framework established by the three-dimensional variational data assimilation system is applicable to many geophysical measurements relevant to the atmosphere. This requires the development of a forward model to go from model variables to the observed quantities (bending angles or the index of refraction in the case of the radio-occultation GPS data). In addition, we need to create the linear tangent model for the forward model (i.e., a perturbation model that indicates how much the observed variables will change if a small change is introduced in the model variables), and the adjoint of the linear tangent model, which transforms observed perturbations to model perturbations. The forward model and its linear tangent and adjoint should be accurate and, if possible, computationally efficient.
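As a schematic illustration (not NCEP code), the forward, linear tangent, and adjoint operators for a single-level refractivity observation might be sketched as follows, using the standard Smith-Weintraub refractivity formula; the scalar setting and all variable names are illustrative only:

```python
# Schematic forward model: model variables (T, e, P) at one level -> refractivity N.
# An operational forward model would map a whole model column to bending angle or
# refractivity along the ray path; this scalar version only shows the pattern.

def forward(T, e, P):
    """Refractivity N from temperature T (K), water vapor pressure e (hPa),
    and pressure P (hPa), via the standard Smith-Weintraub formula."""
    return 77.6 * P / T + 3.73e5 * e / T**2

def tangent_linear(T, e, P, dT, de, dP):
    """Linear tangent of `forward`: first-order change in N produced by
    small perturbations (dT, de, dP) about the state (T, e, P)."""
    dN_dT = -77.6 * P / T**2 - 2.0 * 3.73e5 * e / T**3
    dN_de = 3.73e5 / T**2
    dN_dP = 77.6 / T
    return dN_dT * dT + dN_de * de + dN_dP * dP

def adjoint(T, e, P, dN):
    """Adjoint (transpose) of the linear tangent: carries an observed-space
    perturbation dN back to model space as (dT*, de*, dP*)."""
    dN_dT = -77.6 * P / T**2 - 2.0 * 3.73e5 * e / T**3
    dN_de = 3.73e5 / T**2
    dN_dP = 77.6 / T
    return dN_dT * dN, dN_de * dN, dN_dP * dN

# Standard consistency check: <L dx, dy> must equal <dx, L* dy>.
T, e, P = 280.0, 10.0, 850.0
lhs = tangent_linear(T, e, P, 1.0, 0.5, 2.0) * 0.7
dTs, des, dPs = adjoint(T, e, P, 0.7)
rhs = 1.0 * dTs + 0.5 * des + 2.0 * dPs
assert abs(lhs - rhs) <= 1e-9 * abs(lhs)
```

The dot-product identity at the end is the customary test that an adjoint is coded consistently with its linear tangent.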
In the case of the radio-occultation technique using GPS, the measurements are actually of the signal delay due to the refraction of the atmosphere in the transmission of a radio signal from a GPS satellite to some point a known distance from the satellite. By geometric considerations, this delay can be transformed to a “bending angle”, which is related to the vertical profile of atmospheric refractivity. In turn, the refractivity at a given altitude is a function of temperature, humidity, and pressure. Pressure can be determined hydrostatically, so given some external knowledge of the moisture distribution one can calculate temperature as a function of pressure; or, given some external knowledge of temperature, the moisture distribution can be determined from refractivity measurements.
However, it is extremely important to note that in modern data assimilation systems, the refractivity may be used directly without decomposition into temperature and moisture distributions. A very suitable framework thus exists to use refractivity information from the radio occultation technique.
Similarly, rather than assimilating precipitable water vapor estimates from delays observed in surface receiving stations, it would be preferable to assimilate the observed time delays themselves. This would make maximum use
of the GPS data by improving upon an already accurate model first guess of temperature and moisture.
The most important deficiency of the present observing system is most probably not the accuracy of the mass field, but rather the absence of wind profiles over the oceans. GPS/MET data will influence that only indirectly in the extratropics, and not at all in the tropics.
It does appear possible that GPS/MET observations based on radio occultation techniques can improve the description of the distribution of moisture. NCEP modelers are eager to acquire the “forward model” to convert temperature and moisture profiles to refractivity from colleagues at NCAR, and to begin experimenting with the GPS/MET data in the operational data assimilation system.
There is considerable evidence that the mass distribution in the atmosphere is fairly well measured, and the recent advances in data assimilation noted above are making better use of that information. There is, therefore, limited room for GPS/MET observations to improve the current description of the atmospheric mass (temperature) field. GPS/MET observations may be highly precise, but experience clearly suggests that the addition of GPS/MET data is not likely to have a major impact on forecasting four or five days in advance.
However, if GPS/MET can provide as good a description of the temperature distribution as current systems, but at a significantly lower cost, then this would be an extremely valuable contribution to the North American Observing System.
Richard Anthes and William Schreiner
University Corporation for Atmospheric Research
Michael Exner, Douglas Hunt, Randolph Ware
University NAVSTAR Consortium
Ying-Hwa Kuo and Xiaolei Zou
National Center for Atmospheric Research
Russian Institute of Atmospheric Physics
On 3 April 1995, a Pegasus rocket carried aloft by an aircraft from Vandenberg Air Force Base launched a small satellite (MicroLab 1) into a circular orbit of about 750-km altitude and 70° inclination. The disk-shaped satellite, which circles Earth every 100 minutes, carried a laptop-sized Global Positioning System (GPS) receiver to demonstrate sensing of the terrestrial atmosphere by the radio occultation, or limb-sounding, technique. This proof-of-concept experiment is called GPS/Meteorology (GPS/MET). Since the 3 April launch, many thousands of atmospheric soundings of refractivity, temperature, pressure and water vapor have been retrieved. Some of the early results of the GPS/MET experiment are described by Ware et al. (1996) and Kursinski et al. (1996). This paper summarizes recent progress toward obtaining accurate atmospheric soundings of temperature and water vapor and the potential uses of GPS/MET data in atmospheric and climate research and weather prediction.
The radio occultation method used in GPS/MET was developed by scientists at the Jet Propulsion Laboratory (JPL) and used by scientists at Stanford University to measure the structure of planetary atmospheres (see detailed references in Ware et al. 1996). In the GPS limb sounding method (Fig. 1), atmospheric soundings are retrieved from observations obtained when the radio path between a GPS satellite and a GPS receiver in low-Earth orbit (LEO) traverses Earth's atmosphere. When the path of the GPS signal begins to transect the mesopause at about 85-km altitude, it is sufficiently
retarded by the atmosphere that a detectable delay in the dual-frequency carrier phase is observed by the LEO GPS receiver. As the radio waves are slowed by the atmosphere, they bend by a small but observable angle, which reaches a maximum value of between 0.02 and 0.03 radians near the Earth's surface (Fig. 2). Vertical profiles of atmospheric refractivity can be computed from the bending angle. The atmospheric refractivity, N, depends on pressure (P), temperature (T) and water vapor pressure (e) according to

N = 77.6 (P/T) + 3.73×10⁵ (e/T²),   (1)

with P and e in hPa and T in K.
Pressure is related hydrostatically to density, which is a function of temperature and water vapor. Thus refractivity is essentially a function of temperature and water vapor. In the general case, neither temperature nor water vapor can be determined from refractivity alone without independent knowledge of the other. However, in regions of the atmosphere in which the water vapor content is small and its contribution to refractivity negligible compared to that of temperature, accurate temperature profiles can be determined by assuming e=0 in (1). This approximation holds well in the upper troposphere, stratosphere, polar regions, and anywhere else where temperatures are lower than about 250 K.
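The step from bending angle to refractivity mentioned above is an Abel integral inversion. The following is a minimal numerical sketch; the exponential bending-angle profile, scale height, and integration grid are synthetic illustrations, not GPS/MET processing code:

```python
import math

def abel_invert(alpha, a1, t_max=2000.0, dt=1.0):
    """Log refractive index at impact parameter a1 (km) from a bending-angle
    function alpha(a) (radians) via the Abel inversion
        ln n(a1) = (1/pi) * integral_{a1}^{inf} alpha(a) / sqrt(a^2 - a1^2) da.
    The substitution a = sqrt(a1^2 + t^2) removes the integrable singularity
    at a = a1, leaving a plain trapezoid sum over t.
    """
    n = int(t_max / dt)
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        a = math.sqrt(a1 * a1 + t * t)
        f = alpha(a) / a        # alpha/sqrt(a^2 - a1^2) da  ->  (alpha/a) dt
        total += 0.5 * f * dt if i in (0, n) else f * dt
    return total / math.pi

# Synthetic bending-angle profile: exponential decay with a 7-km scale height,
# anchored at the ~0.02-radian near-surface value quoted above.
A_REF, H = 6371.0, 7.0          # km; illustrative values

def alpha(a):
    return 0.02 * math.exp(-(a - A_REF) / H)

ln_n = abel_invert(alpha, A_REF + 10.0)   # log refractive index 10 km up
```

For the exponential profile above, the integral has a closed form in terms of the modified Bessel function K₀, which provides a convenient accuracy check on the quadrature.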
In the general case, if either temperature or water vapor is known from independent measurements or analyses (such as the global analyses prepared daily by the operational weather centers of the world), the other variable can be obtained from the refractivity. Thus, if water vapor pressure is known independently, temperature can be computed from the positive root of the quadratic implied by (1),

T = [77.6 P + √((77.6 P)² + 4 × 3.73×10⁵ N e)] / (2N),   (2)

or, if temperature is known, water vapor pressure may be computed from

e = (N T² − 77.6 P T) / 3.73×10⁵.   (3)
It is very important to note that for several important applications of GPS/MET, it is not necessary, or perhaps even desirable, to try to separate the temperature and water vapor effects. For example, trends of globally or regionally averaged atmospheric refractivity would be a good measure of global or regional climate change (Yuan et al. 1993). For operational numerical weather prediction, it is possible to assimilate directly refractivity measurements into the model. The assimilation of refractivity causes the model fields of temperature, pressure and winds to adjust in a dynamically and thermodynamically consistent way (Zou et al., 1995; Kuo et al., 1996).
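When the separation into temperature and water vapor is nevertheless wanted, both directions follow algebraically from the refractivity relation (1). A sketch using the standard Smith-Weintraub constants (the sample state values are illustrative):

```python
import math

K1 = 77.6      # K hPa^-1; standard Smith-Weintraub constants
K2 = 3.73e5    # K^2 hPa^-1

def refractivity(T, e, P):
    """N from temperature T (K), vapor pressure e (hPa), pressure P (hPa)."""
    return K1 * P / T + K2 * e / T**2

def temperature_from_N(N, e, P):
    """Invert the refractivity relation for T given e: the positive root of
    the quadratic N*T**2 - K1*P*T - K2*e = 0 (with e = 0 this reduces to the
    dry case T = K1*P/N)."""
    return (K1 * P + math.sqrt((K1 * P)**2 + 4.0 * N * K2 * e)) / (2.0 * N)

def vapor_pressure_from_N(N, T, P):
    """Invert the refractivity relation for e given T."""
    return (N * T**2 - K1 * P * T) / K2

# Round trip: build N from a known (T, e, P), then recover each variable.
T, e, P = 285.0, 12.0, 900.0
N = refractivity(T, e, P)
assert abs(temperature_from_N(N, e, P) - T) < 1e-9
assert abs(vapor_pressure_from_N(N, T, P) - e) < 1e-9
```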
GPS/MET observations provide essentially global coverage with random spacing in the horizontal. A single GPS/MET satellite could theoretically produce approximately 500 soundings per day. Fig. 3 shows the soundings obtained on 21 October 1995 by the GPS/MET experiment; the number is significantly less than 500 because only setting occultations are obtained. With 12 (50) LEO satellites in orbit simultaneously, global atmospheric refractivity soundings at a horizontal resolution of approximately 400 km (200 km) can be expected every 12 hours.
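The quoted resolutions are consistent with a simple uniform-coverage estimate: divide Earth's surface area by the number of soundings collected in the window and take the square root. A back-of-envelope sketch (not an orbit simulation):

```python
import math

EARTH_SURFACE_KM2 = 5.1e8          # ~ 4*pi*R^2 for R = 6371 km
SOUNDINGS_PER_SAT_PER_DAY = 500    # theoretical single-satellite maximum

def mean_spacing_km(n_satellites, window_hours=12.0):
    """Mean horizontal spacing if the soundings collected in the window were
    spread uniformly over the globe (side of the square each one covers)."""
    count = n_satellites * SOUNDINGS_PER_SAT_PER_DAY * window_hours / 24.0
    return math.sqrt(EARTH_SURFACE_KM2 / count)

spacing_12 = mean_spacing_km(12)   # ~410 km, consistent with "approximately 400 km"
spacing_50 = mean_spacing_km(50)   # ~200 km
```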
Fig. 2 shows the retrieval of atmospheric bending angle and the temperature derived with the assumption that water vapor is zero in (2). We call this the “dry temperature.” Because water vapor makes a positive contribution to the temperature computed from (2), neglecting it produces “dry temperatures” with a significant cold bias in the lower troposphere. Fig. 2 indicates that the bending angle varies by more than three orders of magnitude from 60 km altitude to the surface. The temperature profile shows several interesting characteristics. The very sharp tropopause at around 12 km is characteristic of many GPS/MET temperature retrievals. It is in a region of the atmosphere where water vapor effects are negligible and the theoretical accuracy of the GPS/MET radio occultation method is highest (better than 1 K). Thus the high vertical resolution and accuracy of the temperature in this region suggest that GPS/MET observations will be very useful in upper-tropospheric and lower-stratospheric research, including monitoring of climate change. Models predict a strong atmospheric response (cooling) in this region due to the enhanced greenhouse effect, and GPS/MET observations should be very useful in detecting any global or regional trends in this sensitive part of the atmosphere.
The vertical wave structure in the temperature profile of Fig. 2 is present in most of the GPS/MET temperature retrievals. In the lower stratosphere the features are almost certainly real, and associated with gravity waves. In the upper stratosphere (above 40 km), both errors in the retrieval and real atmospheric variability likely contribute to the wave structure. It is very difficult to verify the waves shown in Fig. 2 because other remote sensing systems in the stratosphere, such as HALOE (Halogen Occultation Experiment) and MLS (Microwave Limb Sounder) have inherently much lower vertical resolution than the GPS/MET technique. However, we know from rocket soundings and the LIMS (Limb Infrared Monitor of the Stratosphere) experiment that wavelike features with characteristics similar to those shown in Fig. 2 are ubiquitous in the stratosphere. For example, Fetzer and Gille (1994) state “The LIMS data are characterized by high vertical resolution, and often contain small scale peak-to-peak amplitudes as large as 40 K. These signals have dominant vertical wavelengths of about 10 km and horizontal wavelengths of about 1000 km.”
Fig. 4 shows a comparison of a GPS/MET temperature retrieval with a retrieval from the MLS and the global analysis of temperature from the National Centers for Environmental Prediction (NCEP). Both the MLS and NCEP soundings have much coarser vertical resolution in the upper troposphere and stratosphere, so they do not show the sharp tropopause feature or the vertical waves that are observed in the GPS/MET sounding. However, the large-scale characteristics of the three soundings are similar, even in the upper part of the stratosphere (40-55 km).
We have compared many GPS/MET temperature retrievals with nearby radiosondes. Fig. 5 shows a typical example of a dry temperature retrieval, from 5 May 1995. The high vertical resolution and the accuracy of the GPS/MET temperature profile in the layer from about 5 km to 35 km are confirmed by the nearby radiosonde measurements.
The lower portion of the GPS/MET temperature sounding in Fig. 5, indicated by the dotted line beginning at about 6.5 km and extending toward higher temperatures down to about 3.5 km, is in error, and represents a typical behavior of the temperature retrievals in the lowest several kilometers of the atmosphere. Decreasing signal-to-noise ratio, increasing amounts of water vapor, and possibly strong low-level temperature gradients cause multipath effects and other errors. These errors usually appear as a sudden increase in the retrieved temperatures with decreasing altitude and are easily recognized as erroneous, a fact useful in quality control of the data.
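Because that error signature is so distinctive, a simple automated check can truncate a retrieved profile at the first unphysically sharp warming with decreasing height. A sketch; the threshold value and sample profile are illustrative:

```python
def truncate_suspect_levels(heights_km, temps_k, max_warming_k_per_km=25.0):
    """Scan a retrieved profile from the top down and drop every level at and
    below the first unphysically sharp temperature increase with decreasing
    height. heights_km must be ordered top-of-profile first (decreasing).
    Returns the trimmed (heights, temps) lists."""
    for i in range(1, len(heights_km)):
        dz = heights_km[i - 1] - heights_km[i]    # layer thickness, > 0
        warming = temps_k[i] - temps_k[i - 1]     # warming toward the ground
        # A normal tropospheric lapse (~6.5 K/km warming downward) passes;
        # only much sharper jumps trip the threshold.
        if dz > 0.0 and warming / dz > max_warming_k_per_km:
            return heights_km[:i], temps_k[:i]
    return heights_km, temps_k

# Synthetic profile with a spurious low-level warm jump below 6.5 km.
z = [10.0, 8.0, 6.5, 6.0, 5.5, 5.0]
t = [220.0, 235.0, 245.0, 262.0, 278.0, 290.0]
z_ok, t_ok = truncate_suspect_levels(z, t)   # keeps levels down to 6.5 km
```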
Fig. 6 shows a comparison of “dry” and “wet” GPS/MET temperature retrievals with the NCEP analyzed temperature for a complete set of 1138 soundings and for subsets of 1000 and 800 soundings. The “wet” temperature retrievals refer to temperatures computed from (2) with water vapor pressure obtained from the NCEP analysis. The sets of soundings are determined as follows: the set containing 1138 soundings represents the total number of soundings processed during the period 10-25 October 1995. The 1000- and 800-sounding sets represent the subsets of soundings that give the smallest mean and standard-deviation differences from the NCEP data. In other words, the set of 1000 soundings was obtained by eliminating the “worst” 138 retrieved soundings from the total set.
The profiles in Fig. 6 demonstrate the ensemble effect of the “warm bias” errors in the lower troposphere that were seen in the single example of Fig. 5. Elimination of the “worst” retrievals removes those soundings which develop the warm bias error at the highest altitudes. Thus the subset of the 800 “best” cases shows the ensemble warm bias beginning at a level around 5 km, while the total set shows the bias beginning around 9 km. It is noteworthy that there is no significant difference among the three sets above 10 km, indicating that the retrieved soundings are very robust and stable in this region.
Fig. 7 shows a comparison of 1000 “dry” and “wet” temperature retrievals with NCEP analyzed temperatures, categorized by polar, middle-latitude and tropical regions. The “good” soundings reach closest to the surface in the polar regions (about 2-5 km), while the “good” soundings in the tropics typically reach only to about 9 km, a reflection of the fact that the lower troposphere in the tropics contains much more moisture than in the middle latitudes or polar regions.
Fig. 8 quantifies the number of retrieved “good” soundings which reach specified altitudes, again grouped into polar, middle latitude and tropical regions. The results from two retrieval algorithms are shown, the original algorithm and an improved algorithm. In tropical regions, the number of “good” soundings starts decreasing rapidly at the 9 km level, while for the middle latitude and polar regions the levels at which the number of “good” soundings begin to decrease rapidly are approximately 8 and 5 km respectively. The increase in the number of low-level “good” soundings due to the improved algorithm is apparent; for example, the number reaching the 5-km level approximately doubles from 100 to 200 in the tropical regions. This indicates that with adjustments in the instrumentation, antenna, and other aspects of the hardware, together with further improvement in the software, a significant improvement in the capability of GPS/MET to successfully sound the lower part of the atmosphere is possible. As will be shown later, it is very important in numerical weather prediction to obtain accurate refractivity profiles as close to the surface as possible.
Fig. 9 illustrates the ability to use the measured refractivity to compute temperature given an independent estimate of water vapor using (2) or, alternatively, to compute water vapor pressure given an independent estimate of temperature from (3). In this example the independent estimates of temperature and water vapor are obtained from the NCEP analyses. We note that because the analysis and short-term global forecasts of temperatures are much more accurate than those of water vapor, it is likely that it will be more useful to derive water vapor from refractivity and an independent estimate of temperature than vice versa. It is also noteworthy that water vapor retrievals of the accuracy shown in Fig. 9 on a global basis would be extremely useful for research and operational purposes.
One of the greatest potential applications of GPS/MET data is in operational numerical weather prediction. Advantages of GPS/MET data for this purpose include global coverage in all weather (GPS/MET retrievals are not affected by clouds), high
accuracy and high vertical resolution. Because GPS/MET data will occur at different spatial locations and at different times over the Earth, the best way to use the data will be through four-dimensional variational data assimilation (4DVAR). The 4DVAR technique is described by Zou et al. (1995). In this section we present a brief summary of the potential impact of assimilating GPS/MET data in a numerical model (Kuo et al., 1996).
In the 4DVAR technique, simulated atmospheric refractivity data are assimilated into the model during a six-hour period using an iterative process. The objective of the process is to minimize a “cost function” of the form

J(xo) = (1/2) Σ [N(x) − No]ᵀ W [N(x) − No],   (4)

where the sum is taken over all observation locations and times in the assimilation window, W is a weighting matrix, x represents the model-predicted variables (in this case temperature, pressure and water vapor), N(x) is the model's value of refractivity, and No is the observed value. Starting from an initial guess field xo(0), the minimization iteratively finds a better initial condition xo(k) which satisfies
J(xo(k)) ≤ J(xo(k−1)),   (5)
where k is the iteration number. During the iteration process all variables in the model, including pressure and winds, adjust in response to the changing temperature and water vapor fields.
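The iteration in (5) can be illustrated with a toy single-observation, single-time analogue: adjust temperature and water vapor so that modeled refractivity matches an observation, with a background term keeping the problem well posed. The weights, step size, and values below are illustrative, not the MM5 4DVAR configuration:

```python
K1, K2 = 77.6, 3.73e5        # standard Smith-Weintraub constants
P_LEV = 850.0                # fixed pressure level (hPa), illustrative

def refractivity(T, e):
    return K1 * P_LEV / T + K2 * e / T**2

def cost_and_gradient(T, e, N_obs, T_b, e_b, sig_T=1.0, sig_e=0.5, sig_N=1.0):
    """J = background misfit + observation misfit, and its gradient in (T, e)."""
    misfit = refractivity(T, e) - N_obs
    J = 0.5 * ((T - T_b)**2 / sig_T**2
               + (e - e_b)**2 / sig_e**2
               + misfit**2 / sig_N**2)
    dN_dT = -K1 * P_LEV / T**2 - 2.0 * K2 * e / T**3
    dN_de = K2 / T**2
    gT = (T - T_b) / sig_T**2 + misfit * dN_dT / sig_N**2
    ge = (e - e_b) / sig_e**2 + misfit * dN_de / sig_N**2
    return J, gT, ge

# Synthetic "truth" and observation; a biased first guess doubles as background.
T_true, e_true = 280.0, 10.0
N_obs = refractivity(T_true, e_true)
T, e = 283.0, 8.0            # first guess xo(0)
T_b, e_b = T, e

# Steepest-descent iteration: each step satisfies J(xo(k)) <= J(xo(k-1)), as in (5).
costs = []
for k in range(200):
    J, gT, ge = cost_and_gradient(T, e, N_obs, T_b, e_b)
    costs.append(J)
    T -= 0.005 * gT
    e -= 0.005 * ge
```

A real 4DVAR additionally integrates the forecast model over the window and uses the adjoint of the model, not just of the observation operator, to compute the gradient with respect to the initial condition.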
The case selected for the 4DVAR study was one of intense cyclogenesis over the northwestern Atlantic Ocean on 4-5 January 1989. This storm was the most intense ever observed in this region, with an estimated central pressure of 936 mb at 0000 UTC 5 January. The storm started as a 996-mb low off Cape Hatteras, NC, embedded within a broad baroclinic zone with a moderate thermal gradient. During the following 24 hours, with the approach of an intense upper-level trough, the storm intensified rapidly over the warm Gulf Stream.
In order to simulate refractivity observations, we first conducted a control simulation with a version of the Penn State/NCAR mesoscale model version 5 (MM5). The horizontal resolution of this model was 90 km and there were 20 levels in the vertical. This simulation was initialized at 0000 UTC 3 January 1989 (defined as t = − 12 h) and was integrated for 60 hours. It covered the northern hemisphere with a mesh of 197x197x20. The initial conditions were obtained from conventional observations using the NCEP global analysis as the first guess. The lateral boundary conditions were obtained by linear interpolation of these analyses over 12-h intervals.
The control simulation reproduced the observed storm quite well (Kuo et al., 1996). Fig. 10 shows the control model's simulation of sea-level pressure and surface temperature for four time periods beginning with 0600 UTC 4 January to 1200 UTC 5 January, which are 30-h, 36-h, 42-h and 60-h
forecasts respectively. During this time period the model storm deepened from a central pressure of 987 mb to an intense storm with central pressure of 938 mb, which compared very well with the observed minimum pressure of 936 mb. Other features of the simulation were realistic as well, and thus we felt confident in extracting model data from the control simulation and constructing simulated refractivity data from these model data for use in subsequent numerical experiments.
To investigate the potential impact of refractivity data on subsequent model forecasts, we degraded the control model data at 1200 UTC 3 January 1989 (t=0) and then assimilated simulated refractivity data from the control simulation over a 6-h period from 1200 to 1800 UTC 3 January on the region shown in Fig. 11. We tested the impact of the simulated refractivity observations by running five 48-h simulations beginning at 1200 UTC 3 January and ending 1200 UTC 5 January 1989. To help avoid the “identical twin”
problem in which overly optimistic results are obtained when the identical model is used for both generating the observations and running subsequent forecasts to test the impact of the observations, we used a degraded version of MM5 in the assimilation experiments. In this version the horizontal resolution was 180 km, the number of vertical layers was 10 rather than 20, and the model domain consisted of the region shown in Fig. 11 rather than the full Northern hemisphere domain of the control simulation.
In Experiment 1, no refractivity data are used; the model is initialized with the degraded initial conditions at t=0. In Exp. 2, refractivity data are assumed available during the 6-h assimilation period at evenly spaced gridpoints 180 km apart. In Experiments 3 and 4, the refractivity data are assumed to be spaced in a random fashion with mean separations of 180 km and 360 km, respectively. Fig. 11 shows the location and orientation of the 360-km-spaced observations. In Exps. 2-4, refractivity data are assumed available at all model levels, including the lower troposphere. Because obtaining accurate refractivity observations in the lower troposphere on a regular basis is still problematic, as discussed above, we performed Exp. 5, in which refractivity data are assumed available only above 3 km.
The results show that the assimilation of refractivity data over the six-hour period improves the forecasts in all cases. Fig. 12 shows the 48-h forecasts of sea-level pressure for Exps. 1-4. Also shown are the errors in the sea-level pressure. Although all four simulations show significant errors compared to the control simulation (because of the degraded model used to make the forecasts), the forecasts which assimilate refractivity data are clearly better. Other aspects of the forecast also show improvement in the experiments which use the refractivity data (Kuo et al., 1996).
It is noteworthy that the forecast which assimilates refractivity observations on the regular 180-km grid and the one that assimilates observations in a random fashion show quite similar improvements; thus it is not necessary that refractivity observations be co-located with the model's grid points in order to have a significant positive impact. It is also important to note that eliminating the refractivity observations below 3 km has a significant negative impact.
By analyzing the results of these experiments, Kuo et al. (1996) found that the assimilation of refractivity observations during the six-hour period causes the initial values of temperature, water vapor, winds and pressure to adjust in a dynamically consistent way to give an improved forecast. In particular, the definition of the
initial upper-level disturbance that triggered the low-level development off of Cape Hatteras, and the low-level thermal field off the North Carolina coast was improved by the assimilation of refractivity data. The importance of having observations of refractivity in the lowest 3 kilometers of the atmosphere was shown by a significant degrading of the forecast when these data were withheld.
Active limb sounding of the atmosphere using Global Positioning System (GPS) radio signals received in low Earth orbit has been demonstrated by the GPS Meteorology (GPS/MET) instrument on the MicroLab-1 satellite launched in April 1995. In this paper the latest, improved temperature, water vapor and refractivity profiles obtained from GPS/MET are compared with radiosonde data, with operational gridded analyses from the National Centers for Environmental Prediction (NCEP), and with other satellite data. Both individual soundings and statistics for more than 1000 soundings distributed globally are presented.
Accurate vertical profiles of refractivity are consistently obtained from approximately 30 km altitude to approximately 7 km altitude. Below 7 km, where multipath effects and other sources of error are large, there are increasing difficulties in obtaining accurate profiles of refractivity. These difficulties are not thought to be fundamental to the technique, and efforts are underway to address them. Recent improvements in data processing have resulted in profiles up to 60 km that appear realistic when compared with HALOE and MLS data from the Upper Atmosphere Research Satellite (UARS).
Refractivity is a function of both temperature and water vapor. The GPS/MET temperature soundings agree closely with independent sources of data from approximately 30 km to 7 km, where water vapor has a negligible effect. In this layer the mean differences between the GPS/MET temperatures and the other sources of data are approximately 1°C, and the standard deviation of the temperature differences ranges from 2°C to 3°C. The GPS/MET temperature profiles have a vertical resolution of about 1 km and resolve the location and minimum temperature of the tropopause very well.
Below 7 km, various sources of error produce an increasing number of erroneous soundings. However, under ideal conditions, accurate soundings have been obtained down to the surface. Also, under ideal conditions, it has been possible to calculate the atmospheric water vapor pressure from the GPS/MET refractivity and an independent estimate of temperature.
In observing system simulation experiments with a high-resolution numerical model, we showed that the four-dimensional assimilation of refractivity data over a six-hour period produced a significant improvement in the subsequent forecast of a case of intense cyclogenesis. It is not necessary to separate the effects of water vapor and temperature in order to use GPS/MET data in numerical weather prediction models, nor is it necessary that the observations of refractivity lie on a regular grid. Data assimilation studies in which randomly distributed refractivity data are assimilated directly into a model provide valuable information on both global and regional scales. Assimilating refractivity data caused dynamically consistent adjustments in the temperature, moisture and wind fields of the model's initial state.
The GPS/MET observations show strong potential for contributing to atmospheric research and weather prediction. Accurate temperature profiles in the upper troposphere and lower stratosphere distributed uniformly over the Earth would be useful in operational numerical weather prediction, global and regional climate change studies and in studies of atmospheric dynamics and chemistry.
We thank Bob Corell, Dick Greenfield, Jay Fein and Mike Mayhew of the National Science Foundation for their support of the GPS/MET project.
Fetzer, E. J. and J.C. Gille, 1994: Gravity wave variance in LIMS temperature. Part I: Variability and comparison with background winds. J. Atmos. Sci., 51, 2461-2483.
Kuo, Y.-H., X. Zou and W. Huang, 1996: The impact of GPS data on the prediction of an extratropical cyclone: an observing system simulation experiment. Dyn. Atmos. Oceans, (submitted).
Kursinski, E.R., G.A. Hajj, W.I. Bertiger, S.S. Leroy, T.K. Meehan, L.J. Romans, J.T. Schofield, D.J. McCleese, W.G. Melbourne, C.L. Thornton, T.P. Yunck, J.R. Eyre and R.N. Nagatani, 1996: Initial results of radio occultation observations of Earth's Atmosphere Using the Global Positioning System. Science, 271, 1107-1110.
Ware, R., M. Exner, D. Feng, M. Gorbunov, K. Hardy, B. Herman, Y. Kuo, T. Meehan, W. Melbourne, C. Rocken, W. Schreiner, S. Sokolovskiy, F. Solheim, X. Zou, R. Anthes, S. Businger and K. Trenberth, 1996: GPS sounding of the atmosphere from low Earth orbit: preliminary results. Bull. Amer. Met. Soc., 77, 19-40.
Yuan, L.R., R.A. Anthes, R.H. Ware, C. Rocken, W. Bonner, M. Bevis and S. Businger, 1993: Sensing climate change using the global positioning system. J. Geophys. Res., 98(D8), 14,925-14,937.
Zou, X., Y.-H. Kuo and Y.-R. Guo, 1995: Assimilation of atmospheric refractivity using a nonhydrostatic adjoint model. Mon. Wea. Rev., 123, 2229-2249.
GPS/MET Program, University Corporation for Atmospheric Research
Over the last year, a new atmospheric remote sensing technology based on observations of GPS signals from space has been demonstrated by the GPS/MET Program.1 Based on preliminary results from this proof-of-concept program (Ware et al. 1996), it appears likely that an operational GPS/MET system may be constructed in the near future. If such a system is constructed, it will require the use of real-time “fiducial data” and precise GPS orbit solutions derived from a network of globally distributed GPS ground fiducial sites. This paper provides a brief introduction to the GPS/MET technology and early results, and a projection of the data and ground network requirements for an operational GPS/MET observing system.
Less than 2 years after program start, the GPS/MET experiment got underway on April 3, 1995, when a small research satellite, MicroLab-1 (ML-1), was launched into a circular orbit of about 750 km altitude and 70 degrees inclination.2 The disk-shaped 76 kg satellite, which circles the Earth every 100 minutes, carries an instrument which is based on a precision dual frequency GPS receiver.3 This instrument is being used to demonstrate for the first time, using GPS signals, sensing of the terrestrial atmosphere by the radio occultation method.
In the radio occultation method (Figure 1), atmospheric soundings are retrieved from observations obtained when the radio path between a GPS satellite and a GPS receiver in Low Earth Orbit (LEO) traverses the Earth's atmosphere (Ware et al. 1996). When the path of the GPS signal begins to transect the mesopause at about 85 km altitude, it is bent by refraction sufficiently that a detectable delay (1 mm) in the dual-frequency carrier phase observations is obtained by the LEO GPS receiver. As the occulted GPS satellite sets below the horizon as viewed from the LEO satellite, the signal path descends through successively denser layers of the atmosphere, and the delay increases to approximately 1 km at the Earth's surface. Thus, the atmosphere creates a unique signal with about 6 orders of magnitude of dynamic range. This “delay signal” can be inverted to obtain a profile of atmospheric refractivity vs. altitude, from which density, pressure, temperature, and moisture profiles can be computed under various conditions.
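The text does not spell out how refractivity relates to the meteorological variables, but a standard way to express it is the Smith-Weintraub relation. The sketch below uses the commonly quoted coefficients, which are not taken from this paper:

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Smith-Weintraub refractivity N (in 'N-units').

    p_hpa: total pressure (hPa); t_kelvin: temperature (K);
    e_hpa: water vapor partial pressure (hPa).
    The first ('dry') term depends on total pressure and temperature;
    the second ('wet') term is why refractivity carries water vapor
    information in the lower troposphere.
    """
    dry = 77.6 * p_hpa / t_kelvin
    wet = 3.73e5 * e_hpa / t_kelvin**2
    return dry + wet

# Near-surface, warm and moist: the wet term is a large fraction of N.
n_surface = refractivity(1013.0, 290.0, 15.0)   # about 338 N-units
# At 300 hPa, cold and dry: the wet term is nearly negligible.
n_upper = refractivity(300.0, 230.0, 0.1)       # about 102 N-units
```

The contrast between the two examples illustrates why, above roughly 7 km, refractivity can be interpreted essentially as a temperature measurement, while below that it mixes temperature and moisture information.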
The GPS/MET Program is managed by the University Corporation for Atmospheric Research (UCAR). Primary sponsorship has been provided by the National Science Foundation (NSF), with additional funding and support provided by the National Oceanic and Atmospheric Administration (NOAA), the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA).
ML-1 is owned and operated by Orbital Sciences Corporation (OSC). Under contract with UCAR, OSC integrated the GPS/MET payload on ML-1 and delivers GPS/MET data to UCAR's Payload Operations Control Center (POCC) via Internet.
The GPS/MET instrument was manufactured by Allen Osborne and Associates. The Jet Propulsion Laboratory (JPL) provided the special flight code used in the instrument and other engineering support for the program.
A single LEO GPS/MET instrument could observe more than 500 such occultation events per day (250 rising occultations and 250 setting occultations). With an appropriate LEO orbit altitude and inclination, global coverage can be obtained with roughly uniform spatial sampling density. A constellation of 20 GPS/MET microsats would be capable of providing 10,000 soundings per day distributed globally. This would give global coverage with roughly the same spatial and temporal sampling density as the US radiosonde network provides today.
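The coverage figures above can be checked with back-of-envelope arithmetic. The sketch below assumes, as the text does, roughly 500 occultations per receiver per day; the Earth-surface-area constant is supplied here, not taken from the text:

```python
# Rough sampling-density estimate for a GPS/MET constellation.
EARTH_SURFACE_KM2 = 5.1e8   # approximate surface area of the Earth

def daily_soundings(n_satellites, occultations_per_sat=500):
    # Each LEO receiver sees ~500 occultation events per day.
    return n_satellites * occultations_per_sat

def mean_spacing_km(n_soundings):
    # Area per sounding, then the side of an equivalent square cell,
    # as a crude measure of mean horizontal spacing.
    return (EARTH_SURFACE_KM2 / n_soundings) ** 0.5

total = daily_soundings(20)        # 10,000 soundings per day
spacing = mean_spacing_km(total)   # roughly 225 km between soundings
```

A mean spacing on the order of 200-250 km is indeed comparable to the station spacing of the US radiosonde network, consistent with the claim in the text.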
An operational GPS/MET observing system could significantly improve weather forecasts and provide valuable new data to support research on global and regional climate change (Kuo et al., 1996). The technology promises to provide valuable measurements of three important neutral atmospheric variables: temperature, pressure and water vapor. In addition to neutral atmospheric observations, the GPS/MET Program has shown that accurate, useful profiles of electron density and Total Electron Content (TEC) can be retrieved from GPS/MET observations through the ionosphere. These observations show promise for operational use within the new National Space Weather Program.4
Temperature profiles retrieved from several thousand GPS/MET “soundings” have been compared to radiosonde data, profiles from other satellite remote sensing instruments, and analyses obtained from operational weather prediction centers, such as the National Centers for Environmental Prediction (NCEP). These comparisons have generally shown that GPS/MET temperature profiles agree within 1-2 K from about 5 to 40 km altitude. Additional development is underway to improve retrieval accuracy at altitudes below 5 km and above 40 km. Figure 2 gives an example of how a GPS/MET temperature profile compares to other data sources. Additional examples are given in the companion paper included in these NRC Proceedings, “GPS Sounding of the Atmosphere from Low Earth Orbit: Preliminary Results and Potential Impact on Numerical Weather Prediction”, by R. Anthes et al.
Using a variant of traditional “Differential GPS”, LEO satellite orbit and instrument clock errors are minimized by double differencing LEO GPS observations with ground-based observations collected from a network of fiducial sites (Figure 3). For the current GPS/MET proof of concept experiment, fiducial data is being provided by the International GPS Service (IGS) and NASA/JPL, typically with a 24-48 hour delay after the observations. For an operational GPS/MET observing system, fiducial data will be required in near real-time.
For many differential GPS applications, it is sufficient to collect fiducial data at modest sample rates; for example, the fiducial data available from the IGS typically consists of 30-second samples. For other applications, such as precision air navigation, higher sample rates are required to track the dynamics of the platform.
National Space Weather Program: Strategic Plan, August 1995, Mr. Julian M. Wright, Jr., Chairman.
For GPS/MET, fiducial data is required for two independent purposes: (1) determination of precise orbits and (2) double difference processing of the occultation data. To compute precise orbits for LEO satellites, low rate (30 second) fiducial observations are adequate. Experience to date suggests that data from 20-30 such fiducial sites will be needed to obtain orbits with the required precision.5
To capture the high frequency dynamics of the signal as the ray path scans the neutral atmosphere, and to obtain high vertical sampling resolution, the sample rate on all links illustrated in Figure 3 above must be increased to 50 Hz or more for the 1 minute duration of each occultation.6 We refer to this fiducial data as “high rate fiducial data”. Simulations show global coverage could be provided from as few as 8 high rate fiducial sites. However, due to geographical limitations and the requirement for some redundancy, an operational system will require approximately 12-15 high rate fiducial sites.
To use GPS/MET soundings for weather prediction, the space and ground-based GPS/MET observations must be collected and processed at a central site in near real-time. High level meteorological data products then must be transmitted to the operational weather prediction centers where they will be assimilated along with other data into advanced 4DDA (four dimensional data assimilation) numerical models.
For the limited purpose of describing data and network requirements, the central site processing steps can be summarized as follows7:
Compute precise orbits for all GPS satellites using low rate observations (30 second data typical) from fiducial sites.
Compute the precise orbit of the LEO satellite(s) using the low rate observations from the LEO GPS/MET instrument, plus the low rate observations from fiducial sites and the GPS orbits computed in step 1 above.
Combine the precise GPS orbits, precise LEO orbits, low rate fiducial observations, high rate LEO and high rate fiducial observations to form high level meteorological data products.
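The three processing steps above can be sketched as a data-flow skeleton. The function and variable names below are hypothetical placeholders chosen for illustration, not part of any actual GPS/MET software; the stub bodies exist only to show which inputs feed which step:

```python
# Skeleton of the three central-site processing steps. Only the data
# dependencies between the steps are meant to be illustrative.

def compute_gps_orbits(fid_low_rate):
    # Step 1: precise orbits for all GPS satellites from the low-rate
    # (30-second) fiducial observations.
    return {"gps_orbits": ("precise", len(fid_low_rate))}

def compute_leo_orbit(leo_low_rate, fid_low_rate, gps_orbits):
    # Step 2: precise LEO orbit from the LEO instrument's low-rate data,
    # the fiducial low-rate data, and the GPS orbits from Step 1.
    return {"leo_orbit": ("precise", len(leo_low_rate)), **gps_orbits}

def form_met_products(orbits, fid_low_rate, fid_high_rate, leo_high_rate):
    # Step 3: combine precise orbits with the 50 Hz occultation data
    # (both LEO and high-rate fiducial) into meteorological products.
    return {"products": len(leo_high_rate), **orbits}

def central_site_pipeline(fid_low, leo_low, fid_high, leo_high):
    gps_orbits = compute_gps_orbits(fid_low)                       # Step 1
    orbits = compute_leo_orbit(leo_low, fid_low, gps_orbits)       # Step 2
    return form_met_products(orbits, fid_low, fid_high, leo_high)  # Step 3
```

The sequential dependency (Step 2 needs Step 1's orbits; Step 3 needs both) is what drives the latency discussion that follows: any step on the critical path must either be fast or be replaced by a predicted product.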
To be competitive with other observing systems, the maximum latency (time from occultation event to data delivery at the operational centers) should be no more than 2 hours. To minimize latency, it is essential to identify the critical paths related to data collection and processing. With dedicated communications links, all high and low rate fiducial data could be collected in virtually real-time. However, data from the LEO satellites must be stored on the spacecraft until it is “in view” of a GPS/MET Earthstation. Once in view, the LEO data can be downloaded in 1-2 minutes. With judicious selection of LEO orbits and Earthstation sites, it will be possible to economically retrieve all LEO data with an average delay of about 50 minutes (100 minutes worst case).8 Once on the ground, the LEO data can be relayed to the central site in 1-2 minutes.
The number of fiducial sites needed varies depending on how well the sites are distributed globally.
Provided that the fiducial site is equipped with a sufficiently stable local oscillator, such as a hydrogen maser, the sample rate on links A2 and B2 can be reduced to approximately 1 Hz and interpolated to 50 Hz at the central data processing center.
The specific real-time data required from a ground network supporting an operational system will depend on various architectural design tradeoffs. The processing steps assumed here are those for a system based on the architecture of the GPS/MET proof of concept system.
With the addition of a data relay satellite, similar to TDRSS in functionality, it would be possible to reduce this delay to a few minutes. However, the added expense and complexity of this approach makes it unrealistic for a first generation system.
The processing time for Step 1 above is relatively lengthy, ordinarily taking on the order of several hours on a fast workstation. Fortunately, recent studies indicate GPS orbits may be propagated ahead on the order of 1 day without incurring orbit errors in excess of 0.5 m.9 Step 2 goes much faster, requiring on the order of 15 minutes. As with the GPS orbits, the LEO orbits can be propagated ahead. However, due to the lower altitude of the orbit, unmodelled atmospheric drag causes the predicted LEO orbit error to grow more rapidly with time. In addition, the retrieved meteorological data are more sensitive to errors in the LEO orbits than to errors in the GPS orbits.10 Figure 4 above shows the results of a simulation conducted by UCAR and the University of Arizona prior to the launch of ML-1 illustrating this sensitivity to orbit errors. These simulations suggest that GPS orbit error should be no more than approximately 0.5 m to prevent orbit error from becoming a dominant source of temperature error at higher altitudes.
There are many system architectures that could be envisioned for an operational GPS/MET observing system. All involve trades between cost, accuracy and latency. However, if it is assumed that meteorological data must be delivered no later than 2 hours after the observation, and that the LEO observations can take up to 100 minutes to flow through the LEO-to-central store-and-forward communications network, then it follows that predicted orbits, propagated from the most recent observations practical, will be required for both the GPS and LEO satellites.
Since GPS orbits require relatively more computing time, but degrade more slowly, the GPS orbits could be re-computed on a 12 hour cycle and propagated ahead roughly 24-36 hours from the end of the data arc. This should provide ample time for communication and Step 1 processing so as to begin using new GPS orbits in Step 2 at a point when they are roughly 12 hours old (past the end of the data arc). By the time the next GPS orbit solution becomes available 12 hours later, the previous solution will be only 24 hours old.
As noted above, LEO orbits degrade more rapidly but take less time to compute. Since the LEO orbits rely on data from the LEO satellite itself, new LEO satellite orbits could be computed once per LEO satellite orbit and propagated ahead one orbit period (approximately 100 minutes).
Using predicted orbits for both the GPS and LEO satellites, the computing time available for Step 3 processing is tight, but adequate. Allowing 3 minutes for LEO to central site communications, and 2 minutes for communication of the results to the operational prediction centers, up to 15 minutes will be available for Step 3 processing. Based on current Step 3 algorithm run times, this should be adequate.
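Summing the worst-case figures quoted above shows just how tight the Step 3 margin is; a quick check, with all numbers taken directly from the text:

```python
# Worst-case end-to-end latency budget, in minutes, using the figures
# quoted in the text for each stage of the critical path.
budget_min = {
    "leo_store_and_forward": 100,  # worst-case wait for an Earthstation pass
    "leo_to_central_site": 3,      # relay of downloaded LEO data
    "step3_processing": 15,        # product generation using predicted orbits
    "delivery_to_centers": 2,      # transmission to the prediction centers
}
total_min = sum(budget_min.values())
assert total_min <= 120            # exactly meets the 2-hour requirement
```

With predicted orbits removing Steps 1 and 2 from the critical path, the worst case lands exactly on the 2-hour requirement, which is why the text characterizes the available Step 3 time as "tight, but adequate."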
It should be emphasized that there are many alternative system architectures and strategies capable of delivering product within 2 hours of the observations. For example, if all LEO satellite data were collected via real-time data relay satellites (e.g., TDRSS), and state-of-the-art, massively parallel supercomputers were used, it would be theoretically possible to avoid the need for predicted GPS or LEO orbits. However, the cost would be significantly higher, without providing any significant advantage for weather prediction. Thus, predicted GPS orbits will be a practical necessity for any operational GPS/MET observing system.
See for example, “Scripps Orbit and Permanent Array Center (SOPAC) and Southern California Precision GPS Geodetic Array (PGGA)”, J. Behr and Y. Bock, appearing in these NRC Proceedings.
A GPS or LEO orbit error on the order of a few meters does not result in significant temperature error per se. However, the associated velocity error manifests as Doppler error, which in turn manifests as bending-angle error, and thus refractivity and temperature error. LEO satellites orbit with about 7 times the angular velocity of GPS satellites (100 minutes vs. 720 minutes per orbit). Thus, for a given position error, the GPS Doppler error is about an order of magnitude lower than the LEO Doppler error.
It appears likely that an operational GPS/MET observing system will be constructed in the near future to provide near-real-time data to weather prediction and space weather operational centers. A practical system will require real-time 30 second fiducial data from 20-30 fiducial sites, plus “high rate fiducial data” from approximately 15 select sites. In addition, predicted GPS and LEO satellite orbit solutions will be needed, updated at frequencies sufficient to keep overall orbit error below approximately 0.5 m at the time of use.
Thanks to Bob Corell, Dick Greenfield, Jay Fein and Mike Mayhew of the National Science Foundation for their support of the GPS/MET project.
Kuo, Y.-H., X. Zou and W. Huang, 1996: The impact of GPS data on the prediction of an extratropical cyclone: an observing system simulation experiment. Dyn. Atmos. Oceans, (submitted).
Ware, R., M. Exner, D. Feng, M. Gorbunov, K. Hardy, B. Herman, Y. Kuo, T. Meehan, W. Melbourne, C. Rocken, W. Schreiner, S. Sokolovskiy, F. Solheim, X. Zou, R. Anthes, S. Businger and K. Trenberth, 1996: GPS sounding of the atmosphere from low Earth orbit: preliminary results. Bull. Amer. Met. Soc., 77, 19-40.
Judith Curry and Peter Webster
Program in Atmospheric and Oceanic Sciences, Department of Aerospace Engineering Sciences
University of Colorado-Boulder
Unlike numerical weather prediction, climate models “spin up” their own water vapor fields after a short time, so initialization is not important. The primary application of water vapor data in climate modelling is to evaluate model performance. Additionally, water vapor data is needed to improve parameterizations for climate models: accurate water vapor data is needed at certain “baseline” stations to test our understanding of atmospheric radiative transfer, and accurate water vapor amounts are also needed to improve cloud parameterizations. Climate diagnostic studies use water vapor information to improve our understanding of the role of the atmospheric hydrological cycle in climate dynamics.
Water vapor feedback is believed to be among the chief mechanisms that would amplify the global climate response to increased concentrations of greenhouse gases. The positive feedback between surface temperature, water vapor and the greenhouse effect is referred to as the water vapor feedback. According to climate models, of the 4.2°C warming that results from a doubling of atmospheric CO2, approximately 1.7°C is contributed by increased water vapor. It is frequently assumed that global warming will result in an increase in atmospheric water vapor such that the atmospheric relative humidity will remain nearly constant, although there has been controversy surrounding this topic.
Two elements of the U.S. Global Change Research Program (USGCRP) and the World Climate Research Programme (WCRP) are addressing water vapor in the context of climate. One of the components of the Global Energy and Water Cycle Experiment (GEWEX) is the GEWEX Water Vapor Programme (GVaP). GEWEX objectives in water vapor research are to:
determine the influence of water vapor on the Earth's radiation budget;
determine the processes that control the distribution of water vapor in the atmosphere;
improve the retrieval of water vapor from satellites;
improve the in-situ measurement of water vapor;
provide a global climatology of water vapor.
The programmatic goals of GVaP are:
assessment of global water vapor retrievals from satellites;
operation of a water vapor reference station that includes Raman lidar;
intercomparison of water vapor sensing instruments;
research and development to improve radiosonde humidity data.
GVaP progress to date includes:
The NASA Global Water Vapor Data Set Project (NVAP) has blended TOVS, SSM/I and radiosonde data into a five-year (1988-1992) climatology of total and 3-layer water vapor values on a 1° x 1° grid for daily, pentad and monthly averages (http://wwwdaac.msfc.nasa.gov).
The GVaP Validation Experiment (GVEX) was conducted in Fall '95 at Wallops Island, Virginia, coordinated with the WMO's upper-air balloon-sonde intercomparison campaign. The DOE-ARM CART site will be used in future campaigns.
Upper tropospheric/lower stratospheric water vapor workshop slated for mid-1996
The US NRC Panel on GEWEX is considering the US contribution to a broader international GEWEX initiative.
The second program addressing water vapor measurements for climate purposes is the Department of Energy Atmospheric Radiation Measurement (ARM) Program. ARM is a major program of atmospheric measurement and modeling intended to improve understanding of the processes and properties that affect atmospheric radiation, with a particular focus on the influence of clouds and the role of the cloud radiative feedback (Stokes and Schwartz, 1994). Measurements of water vapor play a major role in the integrated radiative flux experiments and single-column model experiments. The ARM program is sponsoring extensive observational facilities for a period of 10 years at 3 sites: the southern Great Plains (centered at Lamont, OK), the tropical western Pacific Ocean (centered at Nauru), and the North Slope of Alaska (centered at Barrow). GPS receivers have been installed at the southern Great Plains site.
In the context of understanding the role of water vapor in climate, precipitable water data from GPS (and eventually some information on water vapor profiles) will be useful mainly in those regions of the globe where radiosondes are unavailable and/or where SSM/I and TOVS retrievals have large errors.
GVaP is relying on passive microwave observations of column water vapor amount over the oceans. This poses some problems in the tropics: SSM/I provides coverage only twice per day in the tropics, and SSM/I precipitable water retrievals are unavailable under precipitating conditions. Sheu and Liu (1995) performed an intercomparison of 6 different SSM/I precipitable water algorithms against precipitable water derived from ship-borne radiosondes during TOGA COARE. The algorithms showed 10-15% errors when compared with the radiosondes. It is not clear at present whether SSM/I precipitable water retrievals in this environment can be improved to a more acceptable level of 5%. GPS may provide improved accuracy in this environment.
The importance of the diurnal cycle in tropical air-sea interaction is being increasingly appreciated (e.g. Webster et al., 1995). It has been hypothesized that the diurnal cycle may influence lower-frequency variations. GPS may provide the only method to observe the diurnal cycle in precipitable water.
Numerous islands in the equatorial oceans could in principle be used as locations for GPS receivers. Additionally, use of the TAO buoy array in the equatorial Pacific Ocean could be explored as potential platforms for GPS receivers.
Another oceanic region where current satellite observations of precipitable water are inadequate is the Arctic Ocean. Simulation of clouds by climate and numerical weather prediction models is poor in the Arctic because of poor data bases and a lack of understanding of the physical processes that determine water vapor amount. Water vapor in the Arctic is of particular importance because of the hypothesized “hyper” water vapor feedback in the Arctic (Curry et al., 1995). At present, there are no radiosondes in the Arctic Ocean, and passive microwave techniques to retrieve precipitable water do not work over ice. TOVS retrievals of water vapor at high latitudes are also problematic. Since precipitable water values are commonly less than 1 g cm⁻² in the Arctic, a considerable challenge is posed to any observing system.
Improved measurements of atmospheric water vapor would be useful for a variety of climate applications. The WCRP GEWEX GVaP program is the major national/international forum for addressing water vapor issues. The DOE ARM program is cooperating with GVaP to evaluate and improve techniques to determine water vapor amount. GPS technology has made some minor inroads into these programs, but there is some skepticism of the contribution to be made from GPS, beyond the observing network that is already in place.
GPS has the greatest potential for contributing to the global water vapor data base in remote land areas where there are no radiosondes, in the moist equatorial oceanic regions where passive microwave algorithms do not presently perform very well, and in the very dry polar regions.
Curry, J.A., J.L. Schramm, M.C. Serreze, and E.E. Ebert, 1995: Water vapor feedback over the Arctic Ocean. J. Geophys. Res., 100, 14,223-14,229.
Sheu, R.-S. and G. Liu, 1995: Atmospheric humidity variations associated with westerly wind bursts during TOGA COARE. J. Geophys. Res., in press.
Stokes, G.M. and S.E. Schwartz, 1994: The Atmospheric Radiation Measurement (ARM) Program: programmatic background and design of the Cloud and Radiation Test Bed. Bull. Amer. Meteorol. Soc., 75, 1201-1221.
Webster, P.J., C.A. Clayson, and J.A. Curry, 1995: Clouds, radiation, and the diurnal cycle of sea surface temperature in the tropical western Pacific. J. Clim., in press.
Larry Cornman, Jothiram Vivekanandan, Richard Wagoner
Research Applications Program, National Center for Atmospheric Research
In order to provide operationally useful hazardous weather information to both meteorologists and non-meteorologists, a robust method for synthesizing all available data is required. This methodology should include: data quality controls, algorithm modules, data and product integration, and concise, user-friendly outputs. In the following, a brief description of such a system is presented.
The strength of the system described below lies in the use of multiple data sources and multiple detection, diagnostic, and forecast algorithms. The synthesis of the disparate data and algorithm outputs is implemented via a fuzzy logic algorithm. As the use of fuzzy logic algorithms has been limited in the atmospheric sciences, a detailed introduction is given below.
Because GPS is a novel source of data, a brief outline of its possible applications in a real-time weather system is presented.
The following list indicates a few potential applications of GPS data in a hazardous weather detection and forecasting system.
Calibration of radar-based rainfall estimation techniques (Z/R relations).
Meso- and Storm Scale
Hazardous convective weather.
Icing and Winter Storms
Cloud liquid water content.
Snowfall rate estimation (calibration of radar-based Z/S relations).
Figure 1 below illustrates the logical structure of the real-time system. At the top level are the data ingest and quality control algorithms. The outputs of these modules then feed into a suite of detection, diagnostic, and model-based algorithms.
Each algorithm module will perform its own quality control functions. These tasks will include the detection of time gaps or intermittence in the data stream and outlier detection from a meteorological perspective. Pertinent information about the data quality from a given algorithm module will be transmitted to the Integration Module (discussed below) via a set of confidence values.
In order to efficiently combine all the detection, diagnostic, and forecast information from the individual algorithm modules it is desirable to map all of this information onto a common spatial grid. The use of a common grid, the so-called analysis grid, will also facilitate the functions of the Alert Generation Module.
Along with the gridded detection and diagnostic information, each algorithm will produce a confidence value at each grid point. The idea behind the confidence value concept is twofold: producing a real-time quality control metric, and enabling an adaptive weighting of the algorithm outputs in the integration step.
In order to minimize false alarms, it is important to quality control the output of the individual algorithms. This is distinct from the input quality control procedures described above, although filtering the raw input data will have an effect on the algorithm output data.
In the overall system, a number of individual detection, diagnostic, and forecast algorithms will be implemented. Each of these algorithms will generate information on the location and strength of hazardous weather. The purpose of the Integration Algorithm is to synthesize all of this disparate information in a cohesive spatial and temporal fashion such that accurate and reliable alerts are produced.
The disparate outputs from the individual algorithm modules, suitably mapped to their own analysis grids, must be systematically combined to produce the desired alerts. A simple and robust technique for performing this synthesis task is a fuzzy logic algorithm. The Integration Algorithm will ingest the various detection, diagnosis, forecast, and confidence grids from the algorithm modules and, via the fuzzy logic machinery, produce composite gridded values for each type of hazardous weather phenomenon.
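At a single analysis grid point, one simple composition rule is a confidence-weighted mean of the algorithm outputs. The paper does not commit to a particular combining operator, so the sketch below is one illustrative choice, with made-up example values:

```python
def composite(values_and_confidences):
    """Confidence-weighted mean of (interest value, confidence) pairs
    at one analysis grid point. Algorithms reporting low confidence
    contribute little to the composite."""
    num = sum(v * c for v, c in values_and_confidences)
    den = sum(c for _, c in values_and_confidences)
    return num / den if den > 0 else 0.0

# Three hypothetical algorithm outputs at one grid point:
# (interest, confidence) = strong/confident, moderate/uncertain, null/very uncertain.
alert = composite([(0.9, 0.8), (0.5, 0.3), (0.0, 0.1)])   # 0.725
```

Note how the low-confidence null report barely dilutes the confident strong detection; this is the adaptive-weighting behavior the confidence grids are intended to enable.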
In the last few years, fuzzy logic algorithms have evolved into a very useful tool in the scientist's and engineer's arsenal for solving complex, real-world problems. Fuzzy logic is well suited to applications in linear and nonlinear control systems, signal and image processing, and data analysis (Klir, 1988). The strength of these algorithms lies in their ability to systematically address the natural ambiguities in measurement data, classification, and pattern recognition. Typical, non-fuzzy applications require a rigid bifurcation into “true” or “false” -- nothing can lie in-between. Standard probability theory merely quantifies the likelihood that the outcome of a given process or experiment is true or false. Expert systems or neural network algorithms tend to be quite complicated and convoluted. It is usually quite difficult to add or subtract algorithm modules from these methods. Fuzzy logic allows for a more direct, intuitive and flexible methodology to deal with the vagaries of the real world.
While fuzzy logic algorithms have been widely and successfully applied in the engineering sciences, their use in the atmospheric sciences has been limited, though highly effective where applied. Given the inherent ambiguity in many aspects of atmospheric data measurement, analysis, and numerical modeling, fuzzy logic should be a very useful tool in this field.
In general, there are four main steps in the construction of fuzzy logic algorithms: interest mapping, inference, composition and quantification.
The first step performs the conversion of measurement data into scaled, unitless numbers which indicate the correspondence or "interest level" of the data to the desired result. This correspondence is quantified by the application of a prescribed functional relation, or "interest map," between the data and the interest level. (In the fuzzy logic literature, the terms "degree of truth" and "membership function" are used for "interest level" and "interest map," respectively.) As an example, consider a number of balls which have been painted various shades of gray, where "white" and "black" are the two extremes. In Boolean logic, the question "is this ball white?" can have only one of two answers, "yes" or "no." Hence, the answer to this question for a white or a black ball is simple. However, for a gray ball which is "mostly white," Boolean logic forces a "rounding-off," i.e., the mostly-white ball is categorized as "white." In fuzzy logic, an interest map which takes into account the shades of gray is constructed, as illustrated in Figure 2.
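The shades-of-gray example can be sketched as a simple piecewise-linear interest map; the breakpoints 0.3 and 0.8 below are hypothetical tuning values, not taken from the paper:

```python
import numpy as np

def interest_map(brightness, lo=0.3, hi=0.8):
    """Piecewise-linear interest map for "white-ness": 0.0 for black,
    1.0 for white, with a linear ramp between the hypothetical
    breakpoints lo and hi."""
    b = np.asarray(brightness, dtype=float)
    return np.clip((b - lo) / (hi - lo), 0.0, 1.0)

# A black ball, a mid-gray ball, and a white ball map to interest
# values 0.0, 0.5, and 1.0 respectively:
vals = interest_map([0.0, 0.55, 1.0])
```

A mostly-white ball thus receives a high but non-unity interest value instead of being rounded off to "white."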
The second step, inference, allows for the construction of logical rule expressions. In Boolean logic, such a rule might take the following form:
if (A = true and B = true) then (C = true) (1)
whereas in fuzzy logic such a rule might appear as:
if (A is 0.7 true and B is 0.3 true) then (C is 0.4 true) (2)
where a value of 0.7 true would be the resultant interest value after applying an interest map to the data A. For the "white-ness" example above, a light-gray ball might have an interest value greater than or equal to 0.7. It is important to note that in this example a maximum truth value of 1.0 was assumed. While this is enforced in Boolean logic, it is not necessary in fuzzy logic: if appropriate for a given problem, a maximum truth value of 2.8 could be used, so that 0.7 true would correspond to an interest value of 0.7 x 2.8 = 1.96. Inference rules are not incorporated in the current application; the synthesis of different data types is instead handled through the next step, composition.
The third step in building fuzzy logic algorithms is composition, wherein the interest values from a number of different data types are combined in a systematic fashion. This process can result in a new, higher-level logical rule or in a precise value. For the current application, the interest values Ii at a given analysis grid point are combined into a "total" interest value IT, a precise, unique number for that point, by using a weighted linear combination. The linear combination of the individual interest values is computed with coefficients ai chosen to maximize a given performance measure such as a statistical skill score. This can be done once and for all by various optimization routines or from empirical analysis. Mathematically, the total interest field at the coordinate location x is given by the simple formula

IT(x) = [ Σi ai Ii(x) ] / [ Σi ai ]    (3)
where the sum is taken over all interest fields. It is important to note that all of the individual interest maps must have the same range, i.e., all 0 to 1 or all -1 to 1, etc. The normalization factor in equation (3) ensures that the total interest values will lie in the same range as the interest maps. Another, more general application of equation (3) employs adaptive weighting,

IT(x,t) = [ Σi ßi(x,t) ai Ii(x,t) ] / [ Σi ßi(x,t) ai ]    (4)
where the ßi(x,t), which range from 0 to 1, can be considered as "confidence" values computed for each space-time point. The use of the confidence values as in equation (4) is a simple way to employ adaptive weighting in the composition process. For example, the signal-to-noise ratio (SNR) measured at a given location by a weather radar could modulate the weights: low SNR would lead to lower values of ß, and higher SNR to higher ß values. Another use of the confidence values might be to lower the weight for a diagnostic algorithm if there is a quality control problem with the data from a given sensor. For example, for an algorithm that utilizes multiple sensor inputs, missing data from an individual sensor might degrade the quality of the overall result, so a lower confidence in the output would be expected.
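The confidence-weighted composition described above amounts to a normalized weighted average of the interest fields. A minimal sketch, with hypothetical weights and field values:

```python
import numpy as np

def total_interest(interests, a, beta):
    """Confidence-weighted composition of interest fields (eq. (4) style).

    interests : (n_fields, ...) interest values, all in the same range
    a         : (n_fields,) static weights chosen off-line
    beta      : (n_fields, ...) per-point confidence values in [0, 1]
    The normalization keeps the total interest in the same range as
    the individual interest maps.
    """
    interests = np.asarray(interests, dtype=float)
    beta = np.asarray(beta, dtype=float)
    a = np.asarray(a, dtype=float).reshape((-1,) + (1,) * (interests.ndim - 1))
    num = np.sum(beta * a * interests, axis=0)
    den = np.sum(beta * a, axis=0)
    # Guard against points where every confidence is zero.
    return np.where(den > 0.0, num / np.where(den > 0.0, den, 1.0), 0.0)

# Two interest fields at one grid point, full confidence in both
# (all numbers hypothetical): (2*0.8 + 1*0.2) / (2 + 1) = 0.6
it = total_interest([0.8, 0.2], a=[2.0, 1.0], beta=[1.0, 1.0])
```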
A problem with equation (4) occurs when there is a limited number of data sources giving valid information for a given grid point. Consider the extreme case of only one valid source of information "k" at a grid point (i.e., only one non-zero confidence value), whereby equation (4) reduces to

IT(x,t) = Ik(x,t)    (5)
That is, the use of the weighting functions is in effect negated. This can be problematic when the confidence value for the remaining data source is very low.
One way to combat this potential difficulty is via the introduction of a total confidence metric and dynamic thresholding. Equation (6) defines the total confidence value,

CT(x,t) = (1/N) Σi ßi(x,t)    (6)

with N the number of interest fields,
which gives information on the available confidence at a given point relative to the total possible. With this quantity, a dynamic threshold which is a function of the total confidence can then be assigned at a given point. This threshold value would vary between 1.0 at zero total confidence and a nominal value at a total confidence of 1.0. Figure 3 illustrates a dynamic threshold which uses an exponential decay.
With the total confidence and dynamic threshold, grid points which have very low total confidence would be required to have a total interest value close to 1.0 to be "valid". Quantitatively, a valid point would be required to satisfy: IT(x,t) > T[x,t;CT(x,t)].
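The exponentially decaying threshold and the validity test can be sketched as follows; the decay rate and the nominal floor are hypothetical tuning constants:

```python
import math

def dynamic_threshold(c_total, t_min=0.5, k=5.0):
    """Dynamic threshold as a function of total confidence: 1.0 at zero
    confidence, decaying exponentially toward a nominal floor t_min.
    t_min and the decay rate k are hypothetical tuning constants."""
    return t_min + (1.0 - t_min) * math.exp(-k * c_total)

def is_valid(i_total, c_total):
    """Validity test from the text: IT(x,t) > T[x,t; CT(x,t)]."""
    return i_total > dynamic_threshold(c_total)
```

A point with high total interest but zero total confidence is rejected, since its threshold is 1.0; the same interest value passes once enough confident sources contribute.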
The final (optional) step, quantification, takes the result of the composition step -- if it had generated a composite fuzzy rule -- and produces a precise number. The fuzzy logic methods described above are quite powerful tools for event detection, i.e., for answering the question, “is something important happening at this location?” Once a given point is determined to satisfy this condition, a quantitative indicator of the magnitude of the event is required.
There are a few options for dealing with this issue: empirical testing, parallel magnitude grids, or inverse interest mapping. The empirical testing method would determine a suitable mapping between the total interest values and a measure of "truth." The truth values can be derived from simulated or real data. This method would most likely be used (in this context) to set a small number of threshold values, for example, two interest values which define "moderate" and "severe" turbulence, respectively. If finer resolution is required, the third method, described below, would be used.
The parallel magnitude grid method would separate the event detection and magnitude estimation tasks. That is, the fuzzy logic machinery is used to find the locations of the events. A set of analysis grids which are not "fuzzified" (i.e., which contain actual magnitudes) is then used to determine the magnitude of the events.
In the last method, inverse interest mapping, a special interest map would be generated that reflects a relationship between total interest values (via equation (4)) and event magnitudes. This map would then be used to produce a magnitude value at each grid point from the total interest value at that point, hence the term inverse interest mapping. Figure 4 illustrates what an inverse interest map may look like. This map can be constructed via the empirical method described above (which would actually obviate the need for that method), or by an amalgam of the individual interest maps (e.g., centroid or average, etc.).
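Inverse interest mapping can be implemented as a table lookup with interpolation. The knot values below are purely illustrative, standing in for a map like the one in Figure 4:

```python
import numpy as np

# Hypothetical inverse interest map: tabulated (total interest, magnitude)
# pairs, e.g. a turbulence intensity scale in arbitrary units.
interest_knots = np.array([0.0, 0.4, 0.7, 1.0])
magnitude_knots = np.array([0.0, 1.0, 3.0, 6.0])

def magnitude_from_interest(i_total):
    """Map a total interest value back to an event magnitude by linear
    interpolation through the inverse interest map."""
    return np.interp(i_total, interest_knots, magnitude_knots)
```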
These quantification methods would be used to set appropriate alert thresholds for total interest values or to assign specific magnitudes at the analysis grid points.
The output of the Decision Module is a set of analysis grids which have at each point the total interest values, magnitude, and dynamic threshold values, respectively. The purpose of the Alert Generation Module is to process these grids in order to determine the locations of the hazardous weather events and produce event-specific alerts.
The analysis grids do not give any spatial information, per se, only point-by-point values. In order to remove spurious grid points (i.e., outlier magnitudes or spatially isolated points), it is useful to build global features out of the local grid point data. An appropriate technique for this task exists (Dixon and Wiener, 1993).
This clumping algorithm will identify regions of the total interest grid which correspond to individual events.
In this step, events (i.e., clumped regions) which do not satisfy certain a priori criteria are discarded. These criteria may deal with spatial extent, temporal continuity, low confidence values, etc.
The alert magnitude is generated using the quantification analysis grid point values (from interest values or the parallel magnitude grid values) that are associated with the given event(s). The simplest method is to take a certain percentile of the grid point magnitudes within a given event. A median value (50th percentile) would prevent overwarning, though it might fail to provide critical hazard information. A higher value, for example the 85th percentile, could be a good compromise.
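The percentile-based alert magnitude described above can be sketched directly:

```python
import numpy as np

def alert_magnitude(event_magnitudes, pct=85):
    """Alert magnitude for one clumped event region, taken as a
    percentile of the grid-point magnitudes inside the region.
    pct=50 gives the median; pct=85 is the compromise value
    suggested in the text."""
    return float(np.percentile(event_magnitudes, pct))
```

With an outlier-contaminated event, the 85th percentile reports a strong hazard without letting a single extreme grid point dominate the alert.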
The general structure of a sensor-based system which can provide operational warnings of hazardous weather has been described. It is clear that the use of GPS data can enhance the quality of such a system. This warning system consists of a number of detection, diagnostic, and model-based algorithms. The outputs from the individual algorithm modules are synthesized using a fuzzy logic algorithm. Fuzzy logic algorithms are well suited to this type of problem, wherein a number of disparate data sources must be combined in a simple and efficient manner. A number of practical issues regarding the generation of warnings have also been discussed.
Dixon, M. and G. Wiener, 1993: TITAN: Thunderstorm identification, tracking, analysis, and nowcasting -- a radar-based methodology, Journal of Atmospheric and Oceanic Technology, 10, 785-797.
Klir, G. J. and T. A. Folger, 1988: Fuzzy sets, uncertainty and information, Prentice-Hall, New Jersey.
Thomas Runge, Yoaz Bar-Sever, Garth Franklin, Peter Kroger, Ulf Lindqwister
Tracking Systems and Applications Section, Jet Propulsion Laboratory
The size and scope of permanent arrays of continuously operating GPS receivers will soon rival the current worldwide network of approximately 600 radiosonde launch sites. The accuracy of ground-based GPS estimates of precipitable water vapor (PWV) has already been demonstrated through a number of direct comparisons with simultaneous radiosonde and water vapor radiometer (WVR) measurements of this quantity (NOAA, 1995). A GPS-based system for determination of PWV offers the added benefits of more frequent estimates of this quantity and the potential for near real time availability. Including additional PWV estimates into numerical weather models could significantly improve the accuracy of weather forecasts.
We describe here the components of a GPS-based system that is capable of providing near real time estimates of PWV. These include:
A surface meteorological instrument package capable of providing accurate measurements of barometric pressure and surface temperature. Ideally, this instrument package should be interfaced directly to a GPS receiver, and incorporate the pressure and temperature data directly into the GPS data stream.
A means of transferring both the GPS and surface meteorological data to a central processing facility in near real time.
A source of, or a means of computing, GPS orbits of sufficient accuracy whenever new data arrive at the central processing facility.
An automated data handling and analysis system that can produce estimates of PWV from the GPS and surface meteorological data and GPS orbits whenever new data from a remote site arrive at the central processing facility.
In the remainder of this paper we describe each of these requirements in some detail and present the results of tests that have been performed as part of our effort to develop a prototype ground-based system for estimation of PWV using the GPS.
The use of GPS data to estimate precipitable water vapor has been discussed in detail by others (Bevis et al., 1992; Bevis et al., 1994; Rocken et al., 1993). In summary, the effect of the atmosphere on the transmission of GPS signals is modeled as a single zenith "delay" parameter. The equivalent delay at other elevation angles is determined by a mapping function that is roughly proportional to the inverse of the sine of the elevation angle. This total zenith delay is modeled as the sum of a hydrostatic, or "dry," delay, due to the induced dipole effects of all atmospheric gases, and a "wet" delay due to the permanent dipole effect of atmospheric water vapor. Hence,
τGPS =τD + τW (1)
where τGPS is the total zenith delay estimated from the GPS data, τD is the zenith dry delay, and τw is the zenith wet delay.
To a high degree of accuracy, the dry delay can be computed independently using the surface barometric pressure and the relation (Davis et al., 1985):
τD = 0.22768 (1 − 0.00266 cos[2ϕ] − 0.00028 ho)⁻¹ Po    (2)
where τD is the dry delay (cm), ho is the height (km) of the pressure sensor above the geoid, Po is the surface pressure (mbar), and ϕ is the latitude of the observing site. Thus, by combining estimates of τGPS obtained from processing GPS data with estimates of τD from simultaneous surface pressure measurements, it is possible, using Eqs. (1) and (2), to obtain estimates of the wet delay, τw.
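Equations (1) and (2) translate directly into code; the sample inputs below are hypothetical values roughly matching a site like JPL:

```python
import math

def zenith_dry_delay_cm(p_mbar, lat_deg, h_km):
    """Zenith hydrostatic ("dry") delay from eq. (2), in cm, given the
    surface pressure Po (mbar), site latitude (deg), and height of the
    pressure sensor above the geoid (km)."""
    return 0.22768 * p_mbar / (
        1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00028 * h_km)

def zenith_wet_delay_cm(tau_gps_cm, p_mbar, lat_deg, h_km):
    """Zenith wet delay from eq. (1) rearranged: tau_w = tau_GPS - tau_D."""
    return tau_gps_cm - zenith_dry_delay_cm(p_mbar, lat_deg, h_km)

# Hypothetical inputs: sea-level pressure, latitude 34.2 deg, sensor
# 0.4 km above the geoid. The dry delay is on the order of 2.3 m.
tau_d = zenith_dry_delay_cm(1013.25, 34.2, 0.4)
```

Note that the dry delay dominates the total: an error of 0.5 mbar in Po maps into only about 1 mm of delay, which motivates the pressure-sensor accuracy recommendation given later.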
The zenith wet delays, τw, at each measurement time are related to the precipitable water, PW, by (Bevis et al., 1994):
PW ≈ Πτw (3)
where Π is a temperature-dependent constant (~1/6). Π is related to the refractivity coefficients of water vapor by

Π = 10⁶ / ( ρw Rw [ (k3/Tm) + k2 − m k1 ] )    (4)
where k1, k2, and k3 are the refractivity coefficients for water vapor (Smith and Weintraub, 1953), m is Mw/Md, the ratio of the molar masses of water vapor and dry air, Rw is the gas constant of water vapor, ρw is the mass density of liquid water, and Tm is the average temperature of the atmosphere over the receiver. Tm can be expressed as (Davis et al., 1985)

Tm = ∫(Pv/T) dz / ∫(Pv/T²) dz    (5)
where Pv is the partial pressure of water vapor, T is the temperature in Kelvins and the integrals are taken over the vertical coordinate, z.
An empirical relationship between the surface temperature measured at the receiver and Tm has been established by analysis of data from a large number of radiosonde launches throughout the United States. Thus it is possible to estimate Tm from the measured surface temperature Ts using
Tm = 70.2 + 0.72 Ts    (6)
The accuracy of the average temperatures computed using equation (6) is estimated to be approximately 1-2% (Bevis et al., 1992).
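The chain from zenith wet delay to PWV, eqs. (3)-(6), can be sketched as below. The refractivity constants are the commonly quoted Bevis et al. (1994) values, an assumption since the paper does not restate them:

```python
def pwv_mm(tau_w_cm, t_surface_c):
    """Precipitable water vapor (mm) from the zenith wet delay (cm),
    using Tm = 70.2 + 0.72 Ts (eq. (6)) and then PW = Pi * tau_w
    (eq. (3)). Refractivity constants are the commonly quoted
    Bevis et al. (1994) values (an assumption)."""
    ts_k = t_surface_c + 273.15          # surface temperature, K
    tm = 70.2 + 0.72 * ts_k              # mean atmospheric temperature, K
    k2_prime = 0.221                     # k2 - m*k1, K/Pa
    k3 = 3.739e3                         # K^2/Pa
    rho_w = 1000.0                       # density of liquid water, kg/m^3
    r_w = 461.5                          # gas constant of water vapor, J/(kg K)
    pi_factor = 1.0e6 / (rho_w * r_w * (k3 / tm + k2_prime))   # ~1/6
    return pi_factor * tau_w_cm * 10.0   # cm -> mm
```

For a 10 cm wet delay and a 15 °C surface temperature this gives roughly 16 mm of PWV, consistent with Π ≈ 1/6.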
One means of establishing the accuracy of GPS-based estimates of PWV is to compare them with those obtained from a well-established technique such as water vapor radiometry, lidar, or direct radiosonde measurements of water vapor. In this section we present the results of a comparison of GPS-based estimates of PWV with those obtained from a collocated water vapor radiometer.
The GPS data used in the WVR comparison were obtained from an 8-channel, dual-frequency TurboRogue SNR-8000 GPS receiver in continuous operation at a site located at the Jet Propulsion Laboratory, Pasadena, CA. Simultaneous surface pressure and temperature measurements were obtained from a Paroscientific Model 6016B pressure sensor with a stated accuracy of 0.01% of the nominal atmospheric pressure at the comparison site. Surface temperatures were obtained from the temperature sensor contained within the pressure sensor.
The water vapor radiometer used in this comparison was a 3-channel design developed at JPL (Keihm, 1991). During the period of the intercomparison, the WVR operated continuously in a fixed scanning pattern. Measurements of the sky brightness temperature were made at a number of elevation angles to allow necessary gain corrections to be made to the WVR signal. PWV estimates used in this comparison were obtained from the WVR measurements made at zenith.
GPS-based estimates of PWV were obtained by processing the data with the GIPSY/OASIS II software system developed at JPL (Lichten and Border, 1987; Sovers and Border, 1990). Precise GPS orbits, obtained using data from a global network of GPS receivers, were used in estimation of the total zenith tropospheric delays. Data from elevation angles as low as 7.0° were processed to estimate the total zenith tropospheric delays from the GPS data at the JPL site.
Figure 1 shows typical results for 3 days of WVR and GPS-based estimates of PWV. This figure also illustrates the effect of including observations at low elevation angles when estimating PWV from GPS data. In routine processing of GPS data for geodetic purposes, these observations are often discarded to mitigate the effects of increased multipath at lower elevation angles. However, it is apparent from the results shown in Fig. 1 that including observations at low elevation angles improves the agreement with the WVR measurements of PWV at this site.
This effect is also evident in the mean values of the PWV differences shown in Table 1. These results clearly indicate that including observations at lower elevation angles improves the agreement between the GPS and WVR estimates of PWV. This improvement is thought to result from breaking the high degree of correlation between the total zenith delay and the local vertical position of the GPS station, both of which are estimated when the GPS data are processed. The sensitivity of these parameters to the GPS data is nearly the same at higher elevation angles, and only begins to show significant differences at low elevation angles. To take advantage of these differences and obtain accurate estimates of the total zenith delay, it may prove necessary to include GPS observations at elevation angles below 10°. This must be balanced against the deleterious effects of increased multipath noise that may accompany observations at the lower elevation angles.
TABLE 1  Summary of GPS and WVR PWV differences(a)

Period            Mean diff. ± RMS diff., mm
8/11 - 8/28(b)     2.47 ± 1.05
8/11 - 8/28(b)     0.91 ± 1.03
9/29 - 10/27(c)    1.69 ± 1.02
9/29 - 10/27(c)   −0.07 ± 1.07

(a) The two rows for each period correspond to different lowest elevation angles allowed for the GPS observations.
(b) Average PWV for this period: 19.05 mm.
(c) Average PWV for this period: 12.66 mm.
When considering the results shown in Table 1, it must be remembered that there are inherent limitations to the accuracy of both WVR and GPS-based estimates of PWV. An analysis of major error sources (Runge et al., 1995) has estimated the uncertainty in GPS-based estimates of PWV to be 1.0-1.4 mm for PWV values in the range of 5-50 mm. Similarly, due to uncertainties in instrument calibrations and retrieval algorithms, the accuracy of WVR measurements of PWV is currently limited to 0.6-2.6 mm. Hence, the close agreement between the PWV estimates for the two techniques during the October comparison period is probably fortuitous and does not reflect the true accuracy of the GPS-based PWV estimates. Furthermore, this intercomparison was carried out in a relatively dry environment. A similar comparison in a more humid area might show larger differences between the two techniques. Nevertheless, these initial results are very encouraging for future development of a GPS-based system for PWV estimation.
Based upon the results of these tests, we make the following recommendations for GPS-based estimation of precipitable water vapor:
Pressure sensor should be accurate to 0.5 mbar (0.2 mm PWV) or better.
Temperature sensor should be accurate to 1° C or better.
Observations at low elevation angles (below 10°) should be included to reduce the bias in the PWV estimates.
Relative heights of the GPS antenna and pressure sensor should be known to about 1 m.
To serve as useful input to numerical weather prediction models, the GPS-based estimates of PWV must be available within a few hours after the data have been recorded. The GPS-based PWV estimates described in the previous section required the use of precise GPS orbits obtained by processing data from a global network of ~30 GPS receivers. Because of the time required to collect and process the data used to provide these precise orbits, it is not practical to use them as the basis for a GPS-based system capable of providing near real time PWV estimates. For this reason, we have investigated the use of “predicted” GPS orbits as an alternative to the precise orbits used in the WVR intercomparison.
The predicted GPS orbits used in this study were obtained by using the equations of motion to map the precise orbits forward in time. Hence, by using predicted GPS orbits, it is possible to process the data from a GPS receiver/meteorological sensor package as soon as they arrive at the central processing facility. The resulting PWV estimates could be made available shortly after receiving the data.
Because it is not possible to model perfectly all of the forces that affect the orbits of the GPS satellites, the error in the predicted orbits grows as the length of the prediction period increases. This degradation in orbit accuracy translates directly into reduced PWV accuracy. Furthermore, since the predicted orbits do not contain information on the satellite clocks, it is necessary to difference the data from at least two GPS receivers in order to remove the effects of the satellite clocks and allow useful PWV estimates to be made.[1] Despite these added difficulties, the use of predicted orbits currently offers the most viable means of obtaining GPS-based estimates of PWV in near real time.
With the PWV estimates obtained with precise orbits serving as a truth model, we evaluate the accuracy of GPS-based estimates of PWV obtained using predicted orbits. Several strategies for PWV estimation with predicted orbits are tested and the results compared to the truth model. Based upon these results, a number of recommendations regarding the use of predicted orbits for PWV estimation are presented.
The elements of the PWV estimation process that are investigated include the effects of:
Station separation on the accuracy of the estimated PWV values.
The time span of the GPS data used to estimate PWV values.
The length of the GPS orbit prediction period.
The number of sites used when estimating PWV values.
All results presented in this section were obtained from GPS and surface meteorological data recorded during the month of October 1995. To remove the effects of the satellite clocks, it was necessary to form differenced GPS observations between two or more sites before estimating PWV values at the JPL site. In addition to the total zenith troposphere delays, receiver clocks and site positions were also estimated. One site was chosen to serve as the reference, and its clock was not estimated.
Figure 2 shows the effects of changing the site separation on the accuracy of the estimated PWV values. The degradation in PWV accuracy with decreasing site separation is probably due to increasing correlation between the zenith troposphere parameters at the two sites. The site separation must not be too great, however, since mutual visibility of the GPS satellites is required to allow removal of clock effects by differencing the GPS observations.
The effect of changing the data span is shown in Fig. 3. This figure shows estimates of PWV using data from the JPL and Pietown, NM sites for data spans of 24 h and 3 h. It clearly shows that PWV accuracy is degraded with shorter data spans. In an operational system, however, the time span of the data could be maintained at a fixed value (e.g., 6-12 h). As new GPS observations arrived, they would be appended to the existing data file for the site and older data would be removed. Such a scheme would effectively prevent any degradation in accuracy due to a shortened data span.
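The fixed-span data window described above can be sketched with a simple buffer; the 6 h window is one of the values suggested in the text, and the data structure is purely illustrative:

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=6)   # fixed data span; 6 h is one suggested value
buffer = deque()              # (timestamp, observation) pairs, time-ordered

def ingest(timestamp, obs):
    """Append newly arrived observations and drop those older than
    WINDOW, so the processed data span stays fixed as data arrive."""
    buffer.append((timestamp, obs))
    cutoff = timestamp - WINDOW
    while buffer and buffer[0][0] < cutoff:
        buffer.popleft()
```

Each new batch of observations extends the front of the window and expires the back, so the estimator always sees the same span of data regardless of arrival cadence.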
[1] The GIPSY software used in these analyses is a Kalman filter in which the station clocks are explicitly modeled in the system state equation as white noise stochastic processes. For the purposes of this discussion, this is equivalent to explicit differencing of the GPS observables.
It is also possible to use arbitrarily short data spans without any degradation in accuracy by including the Kalman filter covariance information from earlier processing. This technique would improve the efficiency of a near real time system by requiring that only the most recent (small) batch of new data be processed as they arrive. This would only involve some additional bookkeeping to keep track of covariance information from earlier filter runs.
Another parameter that can affect PWV accuracy is the length of the orbit prediction period: the interval between the time the orbits were last estimated and the time the PWV estimates are made. Because of deficiencies in the physical models used to map the estimated orbits forward in time, the accuracy of the predicted orbits degrades in a quadratic fashion as the prediction interval increases. For the orbits used in this study, the orbital accuracy (in three components) degraded from ~0.30 m to ~2.5 m over a prediction period of 48 hours. Since the orbits are held fixed when the PWV values are estimated, any degradation in the accuracy of the predicted orbits will directly affect the accuracy of the PWV estimates. The effect of increasing the prediction period is shown graphically in Figure 4. It is clear from this figure that extending the prediction period past one day can result in a significant degradation of PWV accuracy.
If data from more than two sites are available, then it is possible to adjust the orbits in the PWV estimation process. This should improve the accuracy of the PWV estimates and somewhat alleviate the effects of an extended prediction period. Figure 5 compares PWV estimates obtained from a two-station case with those obtained using data from three sites. In the three-station case, the predicted orbits were adjusted as part of the PWV estimation process.
As a result of these and other studies, we have formulated the following recommendations regarding the use of predicted GPS orbits for estimation of PWV values:
Site separation must be large enough to eliminate the effects of correlations between the zenith troposphere parameters, but small enough to allow differencing of observations to remove satellite clock effects.
The data used for PWV estimation should span at least 3 hours or covariance information from previous estimates should be used.
The prediction period for the orbits should be minimized to prevent degradation in the PWV accuracy due to orbit errors.
Using data from more than two sites allows the predicted orbits to be adjusted, resulting in more accurate PWV estimates.
If orbits are not adjusted, the receiver position should be estimated along with the total zenith delay.[2]
These are in addition to the instrumental accuracy requirements discussed earlier in the section describing comparisons with WVR measurements.
In this paper we have presented the requirements for a ground-based system for measurement of precipitable water vapor in near real time using the Global Positioning System. The system described here relies on the use of predicted GPS orbits to allow near real time estimation of PWV values from GPS and surface meteorological data. Based upon test results presented here, a number of recommendations are made regarding meteorological instrumentation and estimation strategies using predicted GPS orbits.
The work described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We would like to thank Stephen Keihm of JPL for providing the WVR data and for many useful discussions on the inherent accuracy of the WVR measurements.
[2] If the predicted orbits are not adjusted in the estimation process, the orbit errors may propagate into errors in the estimated total zenith delays. Based upon tests using data from the JPL site, estimation of the GPS site location appears to alleviate this problem and result in more accurate estimates of PWV.
Bevis, M., S. Businger, T.A. Herring, C. Rocken, R.A. Anthes and R.H. Ware, 1992, GPS Meteorology: Remote Sensing of Atmospheric Water Vapor using the Global Positioning System, J. Geophys. Res., 97, 15787-15801.
Bevis, M., S. Businger, S. Chiswell, T.A. Herring, R.A. Anthes, C. Rocken, and R.H. Ware, 1994, GPS Meteorology: Mapping Zenith Wet Delays onto Precipitable Water , J. Appl. Meteorology, Vol. 33, 379-386.
Davis, J.L., T.A. Herring, I.I. Shapiro, A.E.E. Rogers, and G. Elgered, 1985, Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length, Radio Science, 20, 1593-1607.
Keihm, S.J., 1991, Water vapor radiometer intercomparison experiment, Platteville, Colorado, March 1-14, 1991, Final Report for Battelle Pacific Northwest Laboratories on behalf of the Department of Energy, Jet Propulsion Laboratory, Internal Document, Doc. No. D-8898.
Lichten, S., and J. Border, 1987, Strategies for High Precision Global Positioning System Orbit Determination, J. Geophys. Res., 92, 12751-12762.
NOAA (National Oceanic and Atmospheric Administration), 1995, Precipitable Water Vapor Comparisons Using Various GPS Processing Techniques, Doc. No. 1203-GD-36, August 28, 1995.
Rocken, C., R.H. Ware, T. Van Hove, F. Solheim, C. Alber, J. Johnson, M. Bevis, and S. Businger, 1993, Sensing Atmospheric Water Vapor with the Global Positioning System, Geophys. Res. Letters, 20, 2631-2634.
Runge, T.F., P.M. Kroger, Y.E. Bar-Sever and M. Bevis, 1995, Accuracy Evaluation of Ground-based GPS Estimates of Precipitable Water Vapor, EOS Transactions, American Geophysical Union, 1995 Fall Meeting, Vol. 76, F146.
Smith, E.K. and S. Weintraub, 1953, The constants in the equation for atmospheric refractive index at radio frequencies, J. Res. Natl. Bur. Stand., 50, 39-41.
Sovers, O. and J. Border, 1990, Observation Model and Parameter Partials for the JPL Geodetic GPS Modeling Software “GPSOMC”, Jet Propulsion Lab., JPL Publication 87-21, Rev. 2, Pasadena, CA.
Frederick Solheim, Christopher Alber, Randolph Ware, Christian Rocken
University NAVSTAR Consortium
Error sources in precise GPS geodesy such as antenna multipath, the inability to precisely model anisotropy in the wet troposphere, the inability to precisely model the ionosphere at low elevation angles, and orbit errors likewise induce errors in determination of zenith precipitable water vapor (PWV). We are currently proposing to measure water vapor along the propagation path to each of the GPS satellites in view; we call this slant-path water vapor (SWV). The ability to diminish the above errors will be critical to SWV. Examples of multipath and antenna site noise are discussed herein. Some remedies are presented. SWV is discussed further in a companion paper in this publication.
Multipath reflections from the GPS receiver antenna environment mix with the sky wave and modulate the antenna phase center. Even well below the antenna, where the gain is low, mounting methods can affect the phase center (Elosegui et al., 1995, Meertens et al., 1996). The resultant position solution can be displaced or otherwise noisy. Because the antenna is coupled with its environment, changes in this environment will change the antenna phase center motion.
Figure 1 illustrates the change in a 55 meter baseline at Table Mountain, Colorado due to a change in the antenna environment. The antenna site is a large, nearly level peneplain consisting of poorly sorted river rock and slightly bentonitic soil, and has favorable low-multipath characteristics. Heights of the Trimble SSE antennas on this baseline were about 0.6 and 1.1 meters above the surface. The multipath behavior of the two antennas differs because of the different set-up heights. Snowfall and subsequent melting dramatically changed the baseline elevation difference as a function of GPS satellite elevation cutoff angle, presumably due to wetting of the soil below the antennas.
The vertical difference between these antennas is strongly influenced by the change in the multipath environment. Phase center motion as a function of elevation angle is apparent in the antenna height difference as a function of elevation cutoff angle (Figure 2). If there were no phase center motion as a function of elevation angle, there would be no induced change in measured height difference.
These multipath effects can be mitigated by carefully mapping the antenna environment and modeling the resultant multipath. Antenna sites can change, however, so such models must be adaptive. The recognized multipath signature from previous data can also be used to correct current multipath errors, but changes in multipath due to changes in the environment require updating such models over periods of days. The above methods may have applications in improving historical GPS data as well as current measurements, but they are not optimal for meteorological forecasting, where data latency is critical.
Decoupling the antenna from its environment will diminish multipath. This can be accomplished with antennas designed for sharp gain cutoff at the local horizon and low gain from below the antenna. We have experimented with, among other methods, large choke rings. Anechoic chamber results for this antenna compared with a Dorne Margolin antenna are presented later herein. For the interested reader, references on chokes are included at the end of this paper. Figure 3 demonstrates mitigation of multipath from behind the antenna by an 85 cm diameter choke ring on a Dorne Margolin choke ring antenna.
Zenith values of precipitable water vapor (PWV) derived from GPS currently agree with radiosonde and water vapor radiometer measurements at the 1 to 2 mm PWV level.
However, recent antenna tests by UNAVCO (Meertens, 1996) at the Table Mountain facility demonstrate that the multipath signature for a certain range of antenna mount heights can alias as PWV. Monuments were occupied with high (1.5 m) and low (less than 0.5 m) antenna tripod mounts with various GPS receivers and antennas. Baseline results using Trimble SSE GPS receivers and Trimble SST antennas with high and low antenna heights had vertical errors as large as 17 mm when tropospheric parameters were estimated. The horizontal components were not affected. Details of the UNAVCO high-low antenna tests are described in separate papers by Meertens et al. (“Field and Anechoic Chamber Tests of GPS Antennas”) in this publication and at http://www.unavco.ucar.edu/docs/science/1995_ant_tests/tblmtn , and Johnson et al., “Role of Multipath in Antenna Height Tests at Table Mountain,” at http://www.unavco.ucar.edu/docs/science/tblmtn .
Mixing of antenna types can also cause large errors in PWV estimation. Figure 5 shows the PWV estimation at Platteville using Trimble SST antennas, and using mixed Dorne Margolin and Trimble SST antennas. Phase center motion corrections from anechoic chamber measurements were utilized.
Information on the anisotropy of the distribution of tropospheric water vapor relative to a ground-based GPS antenna is contained in the carrier phase residuals to each of the GPS satellites in view. Knowing the antenna coordinates and the total zenith delay may enable us to determine the total precipitable water vapor along each of these propagation paths. This information on the water vapor field is expected to be of high value to forecast modelers in improving weather forecasts. Provided that multipath can be sufficiently mitigated and the ionosphere accurately modeled, SWV measurements at very low elevation angles may be possible. This would enable sensing of tropospheric features well beyond the horizon (Figure 6). Such capability, however, will require careful selection of antenna sites and implementation of low multipath antennas.
We have investigated various techniques for gain cutoff at the horizon, including resistive-loss ground planes, various configurations of microwave absorber, and diffractive scatterers. We have developed, under USAF and NSF funding, an enhancement to the existing Dorne Margolin choke ring antenna that diminishes gain at the horizon by about 8 dB while slightly increasing gain at mid elevations. Gain plots from tests in the Ball Aerospace anechoic antenna range are shown below. The methods of Kolesnikoff were employed. Similar improvements in gain cutoff were achieved with larger chokes, both theoretically and with prototypes, by Jaldehag (1995).
Phase center motions of the standard Dorne Margolin antenna as a function of azimuth and elevation, and those of the enhanced antenna, are also shown. These chamber tests do not represent the real world in that contributions to phase center motion from reflections from behind the antenna are, by definition of the anechoic chamber, absent. These phase excursions would be acceptable, provided that they can be modeled and that manufacturing tolerances in phase center variations are sufficiently small. However, placing the antenna in an echoic field environment changes the behavior of the phase center in a manner that is hard to predict.
As is shown in Figure 8, the choke ring enhancement reduces multipath-induced phase residuals to about 60% on an 11 meter baseline at Table Mountain.
An antenna superior in gain cutoff to this enhanced Dorne Margolin choke ring antenna, but with greater phase center excursions, was developed as the Macrometer antenna by Dr. Charles Counselman more than a decade ago. Right circular gain and phase center variations are presented in the antenna range tests by Schupler and Clark (1994) and revisited in 1995. One attraction of this antenna is its simple ground plane, in contrast to expensive and heavy choke rings.
Antenna testing and tests:
Elosegui, P., J. L. Davis, T. K. Jaldehag, J. M. Johansson, A. E. Niell, and I. I. Shapiro, “Geodesy using the Global Positioning System: The effects of scattering on estimates of site position,” JGR, Vol. 100, No. B7, pp 9921-9934, June 10, 1995.
Jaldehag, R. T. K., “Space Geodesy Techniques: An Experimental and Theoretical Study of Antenna Related Error Sources,” Ph.D. thesis, Technical Report No. 276, Chalmers University of Technology, Göteborg, Sweden, 1995.
Kolesnikoff, Paul, “Method of Determining Phase Center Location and Stability of Wide Beam GPS Antennas,” Ball Communications Systems Division internal paper.
Meertens, C. M., C. Alber, J. Braun, C. Rocken, B. Stephens, R. Ware, M. Exner, P. Kolesnikoff, “Field and Anechoic Chamber Tests of GPS Antennas,” IGS Meeting, Silver Spring, MD, 20-22 March 1996; also http://www.unavco.ucar.edu/docs/science/1995_ant_tests/tblmtn .
Schupler, Bruce R., Robert L. Allshouse, and Thomas A. Clark, “Signal Characteristics of GPS User Antennas,” Journal of the Institute of Navigation, Vol. 41, No. 3, Fall 1994.
Schupler, Bruce R., Thomas A. Clark, and Robert L. Allshouse, “Characterizations of GPS User Antennas: Reanalysis and New Results,” papers GA11A-23 and GA21C-23, July 1995 IUGG, Boulder, Colorado.
Corrugations and Chokes:
Balanis, Constantine A., “Antenna Theory, Analysis and Design,” pp 578-592, Wiley and Sons, 1982.
Bersanelli, M., M. Mensadoun, G. DeAmici, M. Limon, G. F. Smoot, S. Tanaka, C. Witebsky, and J. Yamada, “Construction Technique and Performance of a 2 GHz Rectangular Corrugated Horn,” IEEE Transactions on Antennas and Propagation, Vol. AP-40, No. 9, pp 1107-1109, September 1992.
Clarricoats, P. J. B., and P. K. Saha, “Part 2 - Corrugated-conical-horn feed,” Proc. IEE, Vol. 118, No. 9, pp 1177-1186, September 1971.
Lawrie, R. E., and L. Peters Jr., “Modifications of Horn Antennas for Low Sidelobe Levels,” IEEE Transactions on Antennas and Propagation, Vol. AP-14, No. 5, pp 605-610, September 1966.
Love, Allan W., “Horn Antennas,” in “Antenna Engineering Handbook,” Richard C. Johnson, Editor, pp 15-28 to 15-51, McGraw Hill, 1993.
Mentzer, Carl A., and Leon Peters Jr., “Properties of Cutoff Corrugated Surfaces for Corrugated Horn Design,” IEEE Transactions on Antennas and Propagation, Vol. AP-22, No. 2, pp 191-196, March 1974.
Mentzer, Carl A., and Leon Peters Jr., “Pattern Analysis of Corrugated Horn Antennas,” IEEE Transactions on Antennas and Propagation, Vol. AP-24, No. 3, pp 304-309, May 1976.
Thomas, Bruce MacA., “Design of Corrugated Conical Horns,” IEEE Transactions on Antennas and Propagation, Vol. AP-26, No. 2, pp 367-372, March 1978.
Tranquilla, James M., J. P. Carr, and Hussain M. Al-Rizzo, “Analysis of a Choke Ring Groundplane for Multipath Control in Global Positioning System (GPS) Applications,” IEEE Transactions on Antennas and Propagation, Vol. 42, No. 7, pp 905-911, July 1994.
Walter, Carlton H., “Traveling Wave Antennas,” ISBN 0-486-62669-5, Dover, 1970; also McGraw-Hill, 1965.
This work was conducted under USAF AFGL grant F19628-93-C-0064 and a peer-reviewed NSF grant.
Ionospheric Effects Division, Phillips Laboratory
Over the past ten years, the Ionospheric Effects Division of the Phillips Laboratory, Geophysics Directorate (PL/GPI) has carried out a broad spectrum of ionospheric and neutral atmospheric specification and forecast studies. These include the development of global, first-principles theoretical models as well as the generation of computationally fast, real-time ionospheric specification models that have been transitioned to Air Force Space Command's (AFSPC) 50th Weather Squadron (50 WS), where they provide AFSPC's DOD customers with near-real-time specification of ionospheric parameters globally. Figure 1 briefly describes the Objective, Approach and Payoff of these ongoing efforts.
The Parameterized Real-time Ionospheric Specification Model (PRISM) provides real-time ion and electron density profiles from 90 to 1600 km. It is now operational at 50 WS, having achieved Initial Operational Capability (IOC) on 17 April, 1996. PRISM was designed to accept all real-time ionospheric data available at 50 WS, whether from the ground-based Digital Ionospheric Sounding System (DISS), comprised of seventeen digital ionospheric sounders giving bottomside electron density profiles, or from the two Defense Meteorological Satellite Program (DMSP) satellites in sun-synchronous orbits at 840 km. Figure 2 depicts the capability of PRISM to ingest real-time data from DMSP and the DISS network, while Figure 3 specifies the inputs and outputs of PRISM.
- Increased Operational Performance and Reliability of C3I Systems
- Simulation of Operational Environments for Planning of Future Systems Acquisitions
- Reliable Global, Real-Time, Neutral Atmosphere Density and Ionospheric Specification and Forecast Techniques and Models
- Transition Specification and Forecast Models to AFSPC, DMSP and Others for Operational Use
- Develop and Validate Parameterized Models (Derived From First Principles), Simulation Codes, and Forecasting Techniques That Are Driven With Real-Time Sensor Data
In the ionospheric low latitude region, the physical processes which determine the vertical and horizontal structure are production of ionization by solar extreme ultraviolet (EUV) radiation, loss of the major O+ ions by recombination with the neutral N2 and O2 molecules, and transport by diffusion, collisions with neutral particles (neutral wind) and motion perpendicular to the Earth's magnetic field lines (ExB drift). Figure 4 depicts the geometry of low latitude geomagnetic field lines and the motion of ionization perpendicular and parallel to B. During the daytime, an eastward-directed electric field, E, generated around 120 km altitude (the ionospheric E region) causes an ExB drift which is upward and away from the magnetic equator. At the same time, downward diffusion (parallel to B), caused by gravity and pressure gradient forces, combines with ExB transport to create two crests of enhanced ionospheric electron densities at ±15-20 degrees dip latitude, called the equatorial anomaly. These crests occur at F region altitudes of 300-500 km, and the ratio of the crest-to-trough densities can be as high as 6-to-1. At night the ExB drift is directed downward, causing the equatorial anomaly to disappear.
Figure 5 illustrates why near-real-time specification of ionospheric parameters is required rather than climatological values. This figure displays the day-to-day variability in observed Total Electron Content (TEC) measured at Ascension Island in just one month during moderate solar activity. The daytime variability can be as high as a factor of 3 from one day to the next. On the right-hand side of the figure is the range error that the ionosphere produces for SpaceTrack systems operating at two frequencies: 400 MHz (Pave Paws) and 1.4 GHz (Cobra Dane). The Pave Paws system has a stated requirement of knowing target location to 30 meters, which means the TEC value must be known to 10% if TEC ~ 120 units.
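The range errors quoted above follow from the standard first-order relation between TEC and ionospheric group delay, Δρ = 40.3·TEC/f² (meters, with TEC in electrons/m² and f in Hz). A minimal sketch (the function name is illustrative):

```python
def iono_range_error_m(tec_units, freq_hz):
    """First-order ionospheric range error in meters.

    tec_units: total electron content in TEC units (1 TECU = 1e16 el/m^2).
    freq_hz:   radar or signal carrier frequency in Hz.
    """
    return 40.3 * (tec_units * 1e16) / freq_hz ** 2

# TEC ~ 120 units, as in the Ascension Island example:
print(iono_range_error_m(120, 400e6))   # Pave Paws, 400 MHz: ~302 m
print(iono_range_error_m(120, 1.4e9))   # Cobra Dane, 1.4 GHz: ~25 m
```

At 400 MHz a 10% TEC error thus maps to roughly the 30 meter Pave Paws location requirement, which is why TEC must be specified to 10% when TEC ~ 120 units.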
PRISM is now operational at 50 WS, and the global RMS error in ionospheric specification is estimated at ~35%. It is important now to determine how accurate PRISM is and how to improve on this accuracy. Figure 6 projects how PRISM will improve with time. With the availability of GPS dual-frequency TEC values from 24 stations at 50 WS in FY97, the low latitude specification should improve dramatically. In the late 1990s, when the SSUSI and SSULI UV sensors are flown on DMSP Block 5D3 satellites, significant improvement in the nighttime low latitude ionospheric parameters and auroral oval E region densities should be realized. Eventually, it is anticipated that the goal of 10% global RMS error should be achievable after 2000.
The Near Real-Time (NRT) network of 24 GPS dual-frequency receivers run by JPL for NASA is displayed in Figure 7. When TEC values become available at 50 WS within an hour of the measurements, this will represent the only low latitude, ground-based real-time input to PRISM. Figure 8 shows why this information is so important. These are observed TEC values from the dual-frequency altimeter on the TOPEX/Poseidon satellite, which measures TEC below 1350 km. Note the large crests in TEC on either side of the magnetic equator, evidence of the equatorial anomaly crests in the F region described earlier. While TOPEX data are not available in real time at 50 WS, they can be used to verify the NRT GPS TEC values and the slant-to-vertical conversion of GPS TEC values required by PRISM.
Another sensor that can be used to obtain ionospheric electron density profiles is a Low Earth Orbiting (LEO) dual frequency GPS receiver depicted in Figure 9. The occultation of the GPS satellites by the Earth allows the GPS/Met LEO dual frequency sensor to measure height profiles of TEC. These TEC profiles can be converted to electron density profiles if certain assumptions are made about the horizontal homogeneity of the ionosphere. Figure 10 and Figure 11 are electron density profiles obtained on May 4, 1995 from the GPS/MET satellite by George Hajj of JPL. These profiles are compared with the Parameterized Ionospheric Model (PIM) which is the theoretically-based model within PRISM.
There are a number of global models of the neutral atmosphere and ionosphere being developed by Phillips Lab and the Ionospheric Effects Division. I will not describe all of the ones displayed in Figure 12 but will just mention a few. The Ionospheric Forecast Model (IFM) was delivered to HQ Air Weather Service (AWS) for transition to 50 WS. This will be operational in two years. It will provide 12 hour forecasts of global ionospheric ion and electron density profiles using PRISM values as the current (t=0) specification. The Thermospheric Forecast Model (TFM) currently being validated will be coupled with IFM to form the Coupled Ionosphere Thermosphere Forecast Model (CITFM) which will be a completely self-consistent, coupled model providing 12 hour forecasts of the neutral winds, temperature, densities and ion and electron density profiles. This will be especially important during geomagnetic storms. The advanced coupled models also include a Solar Prediction Model (SPM), an Advanced Coupled Magnetospheric Model (ACMM), a Global Forecast Model (GFM) and an executive model which carries out quality control and automatically determines which of the models should be run to satisfy specific 50 WS customer requirements. This overarching model is called the Integrated Space Environment Model (ISEM).
Finally, a very important initiative called the National Space Weather Program (NSWP) is gaining momentum. From its inception, NSWP has done a remarkable job in bringing together various Government Departments and Agencies, at the highest levels, to define, implement and fund the four “pillars” of the program: Research, Observations, Models and Education. The Program Elements of NSWP are illustrated in Figure 13 from the NSWP Strategic Plan (August, 1995). The agencies actively involved in this effort include the Dept. of Commerce, Dept. of Defense, NSF, NASA, Dept. of Interior and Dept. of Energy. The Office of the Federal Coordinator for Meteorological Services and Supporting Research (OFCM) has responsibility for overall coordination. NOAA and the Dept. of Defense have jointly taken on the responsibility of developing the global operational Space Weather models for NSWP. The models being developed by the Geophysics Directorate and briefly described here represent an integral part of the National Space Weather Program.
Yi-Chung Chao, Per Enge, Bradford Parkinson
Department of Aeronautics and Astronautics, Stanford University
The Wide Area Augmentation System (WAAS) is quickly being developed by the FAA. The goal is to use this satellite-ground network system as the primary means of navigation. The National Satellite Test Bed is a prototype of WAAS and will be up and running by mid-1997. Stanford University has participated in the FAA effort to develop WAAS since 1992 and has demonstrated promising flight trial results using a three-station WAAS network on the West Coast. This paper describes the WAAS research at Stanford University with emphasis on the ionospheric modeling.
The Global Positioning System (GPS) is a satellite-based navigation system invented and deployed by the U.S. Department of Defense. Local Area Code-phase Differential GPS (LADGPS) has successfully demonstrated sub-10-meter navigation error performance. To further reduce the errors of LADGPS corrections, Wide Area Differential GPS (WADGPS) was originally invented at Stanford University. By estimating the satellite orbit errors and modeling the ionosphere, the time and spatial decorrelation of GPS errors can be minimized even over a large geographical region [Kee, 1992]. However, if GPS is to serve as the primary navigation means for landing and en route flight, system integrity, continuity and availability all need to be improved to reduce the sensitivity to failures of individual system components. The concept of the Wide Area Augmentation System (WAAS) was therefore proposed. Presently, the real-time implementation of WAAS is being aggressively developed by the Federal Aviation Administration (FAA) to serve these goals.
A pictorial outline of WAAS is presented in Figure 1. When fully deployed, WAAS will be composed of a nationwide network of reference stations for GPS data collection and a master station responsible for data processing. Because of the widely distributed network, the main GPS error components (orbital and ionospheric errors) become observable, and even the system integrity can be monitored in real time. Therefore, this system will not only provide the WAAS vector differential corrections to increase user position accuracy to several meters, but will also have built-in real-time integrity. Finally, a ranging signal transmitted from the geosynchronous data link satellite will further improve the satellite geometry and therefore the continuity and availability of the navigation service.
To improve position accuracy, WAAS is designed to use weighted navigation solutions. When calculating differential corrections, confidence numbers are estimated at the same time. Upon receiving the corrections, WAAS users perform a weighted navigation solution. This weighting algorithm allows the system to handle marginal situations better, for example, when a satellite has been observed by only one or two stations, or when low-elevation measurements are noisy.
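The weighted solution described above amounts to weighted least squares on the linearized pseudorange equations. A generic sketch (assuming per-satellite one-sigma confidence numbers as the weights; this is illustrative, not the WAAS-specified algorithm):

```python
import numpy as np

def weighted_nav_solution(G, resid, sigma):
    """One weighted least-squares update of the position/clock state.

    G:     (n x 4) geometry matrix (unit line-of-sight components plus a
           clock column).
    resid: (n,) corrected pseudorange residuals in meters.
    sigma: (n,) one-sigma confidence of each measurement in meters; a
           satellite seen by few stations or at low elevation gets a large
           sigma and therefore little weight.
    """
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ np.asarray(resid))
```

With equal sigmas this reduces to the ordinary least-squares navigation solution; inflating one sigma smoothly de-emphasizes that satellite instead of discarding it outright.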
The National Satellite Test Bed (NSTB) is one of the FAA efforts for WAAS development. It serves as a prototype of WAAS and is scheduled to be in service in mid-1997. Figure 2 gives the current Testbed Reference Station (TRS) layout in the continental US (CONUS); a total of 25 TRS's will come on-line in the near future. All the TRS's will be equipped with dual-frequency Ashtech Z-12 GPS receivers and meteorological stations. A DEC UNIX computer at each site will be employed for data collection and communication. Several Testbed Master Stations (TMS), including those at the FAA Tech Center as well as Stanford University, have been set up. The data will be transmitted from each TRS through a T1 line and piped to the different TMS's to test independently developed WAAS algorithms. Beyond data transmission, the main capabilities of a TMS will be: 1) estimation of satellite orbit and clock errors to reduce DGPS errors, 2) ionospheric delay modeling for WAAS L1 single-frequency users, and 3) integrity monitoring and warning.
Stanford University has participated in the FAA WAAS development since 1992. A three-station mini-network has been created, as shown in Figure 3. This real-time WAAS system has been exercised in numerous flight trials conducted in the Palo Alto, Livermore and Lake Tahoe areas [Walter et al, 1994; Tsai et al, 1995; Lawrence et al, 1996]. As reported, the WAAS-generated corrections can improve one-sigma ranging accuracy to better than 1 meter. The most challenging error, the GPS vertical position error, has been reduced to 1.25 meters one-sigma. In recent months, the WAAS user program has been integrated with the Stanford-developed Integrity Beacon Landing System (IBLS) [Cohen, 1995] and a real-time flight guidance display system [Barrows, 1995]. This system integration greatly improves the capability for system testing and verification. (IBLS is a carrier DGPS system with centimeter accuracy, and the display system is a cockpit-based flight guidance system for the pilot.)
With the progress of the NSTB, the Stanford WAAS Laboratory has begun to merge its research into many aspects of the development. There are several major research topics at the Stanford WAAS Laboratory: 1) estimation of satellite orbit and clock errors to minimize the spatial and time decorrelation errors in Differential GPS (DGPS); current development uses the patented Common View Time Transfer and Single Difference methods to separate the satellite slow (orbital) and fast (Selective Availability) errors; 2) ionospheric delay modeling; because WAAS is designed for L1 civilian-frequency navigation users, the goal of this study is to derive an efficient and effective model for the ionospheric correction; 3) integrity study; this includes the development of Receiver Autonomous Integrity Monitoring (RAIM) algorithms as well as optimal scheduling of the WAAS messages, to optimize the use of the limited GEO data link and to broadcast timely system integrity warnings. A flight test result is shown in Figure 4.
Along with the development of WAAS, it is important to keep in mind that WAAS is intended to serve as a primary navigation tool. For this life-critical purpose, the federal certification process plays an important role. Therefore, WAAS is constrained to make use of the guaranteed L1-frequency GPS service only. Under this circumstance, the wide-area L1 ionospheric delay model must be transmitted to the single-frequency WAAS users who constitute the main service volume, despite the rapid development of dual-frequency cross-correlation receiver technologies.
The generation of the ionospheric delay model and the provision of real-time ionospheric correction integrity monitoring are therefore the main goals of this research. Under the WAAS Minimum Operational Performance Standards (MOPS) [WAAS MOPS, 1995] created by RTCA SC-159 Working Group 2, users apply a predefined ionosphere grid for their ionosphere corrections. Under this guideline, the research naturally focuses on the process of generating the ionosphere grid using different modeling methods, and on the grid ionosphere vertical error (GIVE). Moreover, the error analysis emphasizes the integrity study, i.e. the search for outliers.
The progress of the current research can be outlined as follows: 1) real-time dual-frequency carrier-phase smoothing, 2) GPS interfrequency bias calibration, 3) development of different ionosphere modeling techniques, and 4) study of the ionosphere distance correlation function. Each of these topics is detailed in the following sections.
The dual-frequency ionospheric delay measurements can be derived from the GPS dual-frequency code-phase and carrier-phase observables [ICD-GPS-200]:

    IL1,PR = (PRL2 − PRL1) / (γ − 1) + Bj,PR + Ri    (1)

    IL1,ϕ = (ϕL1 − ϕL2) / (γ − 1) + Amb + Bj,ϕ    (2)

where
IL1 is the ionospheric delay at L1 frequency,
IL1,PR is the measurement of IL1 from code-phase,
IL1,ϕ is the measurement of IL1 from carrier phase,
PR is the GPS code-phase (pseudorange),
ϕ is the GPS carrier-phase (integrated doppler), and
Amb represents the combination of ambiguities from the L1 and L2 carrier phases, with γ ≡ (fL1/fL2)² = (77/60)².
Note that the interfrequency biases in both the GPS satellite transmitters and the receivers are also included in the equations:
Bj,PR is the actual (as opposed to the broadcast) transmitter inter-frequency bias in code-phase on L1 for the j-th satellite, and Bj,ϕ is the respective bias on the carrier-phase.
Ri is the receiver differential inter-frequency bias on L2 for the i-th receiver. Because the timing of GPS receivers depends on the L1 C/A code, the inter-frequency bias on L1 is zero by definition.
Figure 5 illustrates the situation of the ionosphere measurements. In Eqs. (1) and (2), the sum of the satellite and receiver interfrequency biases clearly keeps us from obtaining the true ionospheric delay at the L1 frequency. Thus the first step of the data processing is to estimate the interfrequency biases. The following discussion demonstrates that the calibration result is necessary for further error analysis.
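Setting the bias and ambiguity terms aside (they are estimated by the calibration discussed next), the delay measurements of Eqs. (1) and (2) are simple geometry-free combinations of the dual-frequency observables. A sketch:

```python
GAMMA = (77.0 / 60.0) ** 2   # gamma = (f_L1 / f_L2)^2

def iono_delay_l1_from_code(pr_l1, pr_l2):
    """I_L1 from dual-frequency pseudoranges, in meters.

    Satellite and receiver interfrequency biases remain in the result
    and must be calibrated separately.
    """
    return (pr_l2 - pr_l1) / (GAMMA - 1.0)

def iono_delay_l1_from_carrier(phi_l1, phi_l2):
    """I_L1 from dual-frequency carrier phases, in meters.

    The result is precise but offset by the unknown ambiguity term Amb.
    """
    return (phi_l1 - phi_l2) / (GAMMA - 1.0)
```

The code combination is noisy but unambiguous; the carrier combination is precise but biased, which is exactly why the Hatch smoothing described later blends the two.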
From the measurement equations (1) and (2), the interfrequency biases have to be separated from the ionospheric delay. The separation is made possible by the variation of the obliquity factor (OF) with satellite line-of-sight elevation. The OF can be expressed as

    OF = [1 − (Re·cos(el) / (Re + h))²]^(−1/2)    (3)

where Re is the radius of the Earth,
h is the height of the ionosphere slab,
el is the GPS elevation angle.
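With Re and h in the same units, Eq. (3) is straightforward to evaluate; a sketch using the 350 km slab height assumed in the estimation:

```python
import math

RE_KM = 6371.0   # Earth radius, km
H_KM = 350.0     # assumed ionosphere slab height, km

def obliquity_factor(el_deg, re_km=RE_KM, h_km=H_KM):
    """Thin-shell obliquity factor mapping vertical delay to slant delay."""
    c = re_km * math.cos(math.radians(el_deg)) / (re_km + h_km)
    return 1.0 / math.sqrt(1.0 - c * c)

print(obliquity_factor(90.0))  # zenith: exactly 1
print(obliquity_factor(5.0))   # low elevation: roughly 3
```

The factor-of-three swing between zenith and low elevation is what makes the bias observable: the bias terms stay constant while the slant delay scales with OF.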
There are several assumptions in this estimation process: 1) the ionosphere is nearly constant when expressed in the solar-magnetic frame [Knecht et al, 1985]; 2) the ionosphere remains at a constant height, usually 350 km; 3) the obliquity is elevation-dependent only; 4) the interfrequency biases vary very slowly in time, possibly with time constants of several weeks to months; 5) to make valid use of the obliquity factors, the large ionosphere gradients around the day/night terminator periods are avoided during data collection [Chao, 1995].
The ionosphere is modeled by spherical harmonics expressed in solar-magnetic frame. The estimation state vector can be set up as
x = [Sph Harm Coeff | IFB1 | IFBk − IFB1] (4)
A sequential (recursive least square) algorithm with Householder reflection is employed for the measurement update [Bierman, 1977].
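A minimal sketch of such a square-root information update, using numpy's Householder-based QR in place of hand-coded reflections (an illustrative reimplementation, not the paper's code):

```python
import numpy as np

class SequentialLSQ:
    """Recursive least squares via Householder (QR) measurement updates.

    Maintains the square-root information array [R | z]; at any time the
    current estimate solves the triangular system R x = z.
    """
    def __init__(self, n):
        self.n = n
        self.Rz = np.zeros((n, n + 1))

    def update(self, h, y):
        # Append the new measurement row [h | y] and re-triangularize.
        stacked = np.vstack([self.Rz, np.append(h, y)])
        _, r = np.linalg.qr(stacked)   # Householder reflections inside
        self.Rz = r[: self.n, :]

    def estimate(self):
        return np.linalg.solve(self.Rz[:, :-1], self.Rz[:, -1])
```

Each `update` folds one ionospheric delay measurement (its spherical-harmonic and bias partials in `h`) into the solution for the state vector of Eq. (4), without ever forming the full normal equations.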
The estimated model has half-meter accuracy across different satellite and receiver combinations when compared to real measurements, giving high confidence in the estimation.
The results presented for this calibration come from comparing WAAS navigation solutions. Figure 6 and Figure 7 compare the vertical position error with and without the bias calibration. The comparison shows a better mean position error. It also implies that the system integrity monitoring will be greatly improved, because the error distribution is narrower after this systematic error is calibrated out.
This software approach also makes recalibration easy whenever it is needed and provides a long-term monitoring tool, for example after a change of a satellite on-board transmitter or a change of a reference station receiver or antenna.
As required for the data processing, the carrier smoothing of the ionospheric delay measurement must be accompanied by an estimate of the smoothing confidence. A Hatch filter [Hatch, 1982] has been used, with some modification to take into account the elevation-related multipath effect. Figure 6 presents one of the smoothing results along with the estimated confidence envelope. The dual-frequency smoothing technique has a big advantage over single-frequency smoothing because the ionospheric divergence between code-phase and carrier-phase measurements is completely avoided. As the filter smoothes over time, the multipath problem is minimized.
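The basic Hatch recursion (without the paper's elevation-dependent modification or the confidence envelope) can be sketched as follows; because both inputs here are already dual-frequency ionospheric combinations, there is no code/carrier divergence to handle:

```python
class HatchSmoother:
    """Carrier-smoothing of code-derived ionospheric delay (after Hatch, 1982).

    code_k: noisy but unambiguous code-phase delay measurement.
    carr_k: precise carrier-phase delay, offset by a constant ambiguity.
    """
    def __init__(self, max_window=100):
        self.max_window = max_window
        self.n = 0
        self.smoothed = None
        self.prev_carr = None

    def update(self, code_k, carr_k):
        self.n = min(self.n + 1, self.max_window)
        if self.smoothed is None:
            self.smoothed = code_k
        else:
            # Propagate the last estimate with the ambiguity-free carrier
            # delta, then blend in the new code measurement.
            predicted = self.smoothed + (carr_k - self.prev_carr)
            self.smoothed = code_k / self.n + predicted * (self.n - 1) / self.n
        self.prev_carr = carr_k
        return self.smoothed
```

The window cap keeps the filter responsive to real ionospheric change; the paper's version additionally weights by an elevation-dependent multipath model.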
As mentioned above, the WAAS MOPS specifies the users' algorithm for reconstructing the grid-based ionospheric correction. The master station is responsible for generating the grid-based vertical ionospheric delay corrections and their estimated errors. The master station algorithm needs to make optimal use of all the IPP measurements to calculate the ionosphere correction for each grid point. Two categories of modeling techniques are currently under study: 1) use a weighting function to relate the grid point to the ionospheric delay measurements located at the ionosphere pierce points (IPP); the optimal weighting is a subject of study; 2) use a surface to directly model the ionosphere and then calculate the grid point delay from the fitted model.
For both categories, the most important consideration is the capability to model local disturbances, i.e. the resolution of the model must be sufficient. Computational complexity and time are also deciding factors.
A prototype of the grid generation function can be expressed as

    Îgrid,V = ÎNominal,V + [Σk wk·(Imeas,V,k − ÎNominal,V,k)] / Σk wk,  k = 1, …, K    (5)

where Îgrid,V is the estimated vertical ionospheric delay at the grid point, ÎNominal,V is a nominal ionospheric delay calculated from a nominal model, Imeas,V,k is the measured ionospheric delay at the k-th IPP, and K is the total number of IPP's.
The use of INominal is to take into account the geomagnetic longitudinal and latitudinal variation of the ionosphere. By using INominal, the weighting function w becomes a function of the distance from the IPP to the desired grid point only.
The current implementation of the ionospheric delay model in the Stanford WAAS conforms to the RTCA SC-159 Working Group 2 (WG2) grid algorithm [RTCA-SC-159, WG2, 1994]. The GPS Klobuchar ionosphere model is chosen as the nominal, and wk = 1/dk is used as the weighting function, where dk is the distance from the k-th IPP to the grid point. The formula to generate each grid value is therefore:

    Îgrid,V = ÎKlobuchar,V + [Σk (1/dk)·(Imeas,V,k − ÎKlobuchar,V,k)] / Σk (1/dk)    (6)
A slightly modified weighting factor is used to incorporate the measurement confidence number σ as

    wk = 1 / (dk·σ)    (7)
Because of the singularity and the lack of physical basis of the above weighting function, some other weighting schemes are also under investigation. One attractive function is exp(−x²), with x defined as the normalized IPP-to-grid-point distance. This weighting, combined with a good understanding of the physical ionosphere correlation function, would be more desirable. Again, the confidence number can be used as
    wk = exp(−x²) / σ    (8)
Figure 9 shows a 3-D model generated by the above weighting function. The IPP-to-grid distance has been normalized by a 10-degree arc length on the Earth-surface great circle. This color-coded 3-D ionosphere model facilitates the model development effort.
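Equations (5) through (8) can be collected into one small grid-generation routine. The residual-weighting form below is a sketch consistent with the text; the function and argument names are illustrative:

```python
import math

def grid_delay(grid_nominal, ipp_meas, ipp_nominal, dists, sigma=None,
               scheme="inverse"):
    """Vertical ionospheric delay at one grid point from IPP measurements.

    grid_nominal: nominal (e.g. Klobuchar) vertical delay at the grid point.
    ipp_meas:     measured vertical delays at the K pierce points.
    ipp_nominal:  nominal delays at those pierce points.
    dists:        IPP-to-grid distances, normalized (e.g. by a 10-deg arc).
    sigma:        optional per-measurement confidence numbers.
    scheme:       "inverse" -> w = 1/(d*sigma);  "gauss" -> w = exp(-d^2)/sigma.
    """
    if sigma is None:
        sigma = [1.0] * len(dists)
    w = [(1.0 / d if scheme == "inverse" else math.exp(-d * d)) / s
         for d, s in zip(dists, sigma)]
    resid = sum(wk * (m - nom) for wk, m, nom in zip(w, ipp_meas, ipp_nominal))
    return grid_nominal + resid / sum(w)
```

Note that when every measurement departs from the nominal by the same amount, any weighting scheme returns the nominal grid value plus that common offset, which is the intended behavior of the residual formulation.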
Another modeling approach is to fit a functional surface. A natural choice of function on a spherical surface is the spherical harmonics. However, spherical harmonics are only orthogonal on the entire sphere, i.e. for latitude in [−90, +90] and longitude in [−180, +180]. Spherical Cap Harmonic Analysis (SCHA) makes it possible to work on a spherical cap, which matches the situation of the CONUS region. The downside of this approach is that the model is quite complicated, and the computational power needed for surface fitting in the 1 Hz real-time system increases significantly. Further investigation is needed.
To model the ionosphere, it is important to understand its behavior. One important piece of information is the distance correlation function of the ionosphere. It not only provides the basis for choosing the cut-off distance when selecting IPPs to create the grid ionospheric correction (Figure 9), but also provides important guidance for ionospheric integrity monitoring.
To attack this problem, the ionospheric variation due to the IPP's geomagnetic longitude and latitude must first be removed or modeled in order to reveal the correlation function associated with distance alone. Again, the single-frequency Klobuchar model is used as a first-order approximation of this effect.
Figure 10 and Figure 11 present a preliminary study of the ionospheric distance decorrelation function using the NSTB data. Up to about 2000 km, the ionosphere is strongly correlated. Beyond that, the correlation coefficient fluctuates; part of the reason is that fewer data samples are available at longer ranges, and the Klobuchar model may not be accurate enough there for correct modeling.
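The decorrelation study can be sketched as follows: after the nominal (Klobuchar) prediction is removed from each measurement, residual pairs are binned by separation and a correlation coefficient is computed per bin. This is an illustrative reconstruction rather than the actual NSTB processing; a flat local x-y plane in km is assumed for distances.

```python
import math

def binned_correlation(samples, bin_km=500.0, max_km=4000.0):
    """Estimate the ionospheric distance correlation function from
    nominal-model residuals at IPPs.
    samples: list of (x_km, y_km, residual) in a local plane.
    Returns {bin_center_km: correlation coefficient}."""
    bins = {}
    n = len(samples)
    for i in range(n):
        xi, yi, ri = samples[i]
        for j in range(i + 1, n):
            xj, yj, rj = samples[j]
            d = math.hypot(xi - xj, yi - yj)
            if d >= max_km:
                continue
            bins.setdefault(int(d // bin_km), []).append((ri, rj))
    out = {}
    for b, pairs in sorted(bins.items()):
        a = [p for p, _ in pairs]
        c = [q for _, q in pairs]
        ma, mc = sum(a) / len(a), sum(c) / len(c)
        cov = sum((p - ma) * (q - mc) for p, q in pairs)
        va = sum((p - ma) ** 2 for p in a)
        vc = sum((q - mc) ** 2 for q in c)
        if va > 0 and vc > 0:
            out[(b + 0.5) * bin_km] = cov / math.sqrt(va * vc)
    return out
```

The fluctuation at long range noted above shows up here as bins with few pairs, where the sample correlation coefficient becomes unreliable.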
Using this correlation information, an optimal linear estimator can be designed. Test results will be presented at the ION National Meeting, Boston, MA, 1996.
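One standard form such an optimal linear estimator can take is a minimum-variance (simple-kriging) estimate built from an assumed correlation model. The Gaussian shape and 2000 km scale below are placeholders suggested by Figures 10 and 11, not fitted values, and the whole sketch is ours rather than the design under test.

```python
import math

def corr(d_km, scale_km=2000.0):
    """Assumed Gaussian distance correlation model (placeholder fit)."""
    x = d_km / scale_km
    return math.exp(-x * x)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (Ax = b)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def estimate(grid_xy, ipps):
    """Minimum-variance linear estimate of the residual at a grid point
    from IPP residuals: weights w = C^{-1} c, where C is the IPP-to-IPP
    correlation matrix and c the grid-to-IPP correlation vector.
    ipps: list of (x_km, y_km, residual)."""
    n = len(ipps)
    C = [[corr(math.hypot(ipps[i][0] - ipps[j][0],
                          ipps[i][1] - ipps[j][1])) for j in range(n)]
         for i in range(n)]
    c = [corr(math.hypot(grid_xy[0] - x, grid_xy[1] - y)) for x, y, _ in ipps]
    w = solve(C, c)
    return sum(wi * r for wi, (_, _, r) in zip(w, ipps))
```

Unlike the fixed 1/d or Gaussian weights, these weights adapt to the IPP geometry: clustered IPPs are automatically de-weighted through the off-diagonal terms of C.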
The real-time flight trials with the Stanford University mini-WAAS network have demonstrated a very promising starting point for Category I precision landing using WAAS. Ionospheric modeling remains one of the greatest challenges for WAAS in terms of both accuracy and integrity monitoring. The interfrequency bias calibration and dual-frequency carrier-phase smoothing enable us to perform further error analyses such as the correlation function study. With the construction of the NSTB, a rich data bank is becoming available. Several studies are needed to improve the ionospheric modeling. Among them are 1) validation of the nominal model used to generate the ionospheric grid, 2) selection of the best weighting function, and 3) air and ground algorithms for ionospheric outlier detection.
The authors gratefully acknowledge the support and assistance of FAA AGS-100, the Satellite Program Office, the FAA Technical Center, and the FAA personnel at the reference stations. We would also like to thank Dr. Todd Walter, Y.J. Tsai, Jennifer Evans, and Dave Lawrence for their generous help.
Barrows, A., P. Enge, B. Parkinson, and J. Powell, "Flight Tests of a 3-D Perspective-View Glass-Cockpit Display for General Aviation Using GPS," Proceedings of ION GPS-95, Palm Springs, CA, September 1995, pp. 1615-1622.
Cohen, C., D. Lawrence, H.S. Cobb, B. Pervan, J.D. Powell, B. Parkinson, G. Aubrey, W. Loewe, D. Ormiston, B.D. McNally, D. Kaufmann, V. Wullschleger, and R. Swider, "Preliminary Results of Category III Precision Landing with 110 Automatic Landings of a United Boeing 737 Using GNSS Integrity Beacons," Proceedings of the National Technical Meeting of the Institute of Navigation, Anaheim, CA, January 1995, pp. 157-166.
Hatch, R.R., "The Synergism of GPS Code and Carrier Measurements," Proceedings of the Third Geodetic Symposium on Satellite Doppler Positioning, Las Cruces, NM, February 1982, pp. 1213-1232.
ICD-GPS-200, Revision B-PR, Rockwell International, July 1991.
Kee, C., B. Parkinson, and P. Axelrad, "Wide Area Differential GPS," Navigation, Journal of the Institute of Navigation, vol. 38, no. 2, Summer 1991.
Klobuchar, J.A., "Design and Characteristics of the GPS Ionospheric Time Delay Algorithm for Single Frequency Users," Proceedings of the IEEE Position Location and Navigation Symposium, Las Vegas, NV, November 1986.
Knecht, D.J., and B.M. Shuman, "The Geomagnetic Field," in Handbook of Geophysics and the Space Environment, 1985.
Lawrence, D., J. Evans, Y.C. Chao, Y.J. Tsai, C. Cohen, T. Walter, P. Enge, J.D. Powell, and B. Parkinson, "Integration of Wide Area DGPS with Local Area Kinematic DGPS," Proceedings of IEEE PLANS 96, Atlanta, GA, April 1996.
RTCA Special Committee 159, Working Group 2, "Wide Area Augmentation System Signal Specification," March 1994.
RTCA Special Committee 159, "Minimum Operational Performance Standards (MOPS) for Airborne Supplemental Navigation Equipment Using GPS," RTCA 204-91/SC159-29. These MOPS are modified by Technical Standard Order TSO-C129, released December 10, 1992.
Tsai, Y.J., P. Enge, Y.C. Chao, T. Walter, C. Kee, J. Evans, A. Barrows, D. Powell, and B. Parkinson, "Validation of the RTCA Message Format for WAAS," Proceedings of ION GPS-95, September 1995.
Walter, T., C. Kee, Y.C. Chao, Y.J. Tsai, U. Peled, J. Ceva, A. Barrows, E. Abbott, D. Powell, P. Enge, and B. Parkinson, "Flight Trials of the Wide Area Augmentation System (WAAS)," Proceedings of the Institute of Navigation 1994 Annual Meeting of the Satellite Division (ION GPS-94), Salt Lake City, UT, September 1994.