Estimates of temperature variations near the earth's surface are based on thermometer readings taken daily at thousands of land stations and on board thousands of ships. The data coverage, which dates back into the late nineteenth century, has been dense enough to reveal gradual changes in hemispheric- and global-mean surface temperature. A time series of global-mean temperature from 1880 to 1998 (Figure 2.1) displays short-term fluctuations that can be identified with El Niño events and volcanic eruptions. Superimposed upon these short-term fluctuations are more gradual variations, including a warming of between 0.4 and 0.8 °C over the course of the century. The exact amount of estimated warming depends upon which of the existing compilations of the data is used as a basis for the calculation, the method used to estimate global means from irregularly spaced station observations, and the way in which the data are smoothed in time. Such globally averaged time series are not necessarily representative of local conditions: for example, Canada and Siberia have warmed much more rapidly during the past 20 years than indicated in Figure 2.1, while parts of the high-latitude North Atlantic and North Pacific regions have cooled slightly. Estimating globally averaged temperature changes with a high degree of accuracy therefore requires a broad spatial distribution of observations made with high precision.
Temperature changes at and just above the earth's surface are of singular importance from the standpoint of societal and human impacts, and they are also widely regarded as an important indicator of human-induced climate change. However, if global warming is caused by the build-up of greenhouse gases in the atmosphere, it should be evident not only at the earth's surface, but also in the lower to mid-troposphere. Temperatures aloft can be measured in a number of ways, two of which are useful for climate monitoring: by radiosondes (balloon-borne instrument packages, including thermometers, released daily or twice daily at a network of observing stations throughout the world), and by satellite measurements of microwave radiation emitted by oxygen gas in the lower to mid-troposphere, taken with an instrument known as the Microwave Sounding Unit (MSU).5 The balloon measurements are taken at the same Greenwich mean times each day, whereas the times of day of the satellite measurements for a given location drift slowly with changes in the satellite orbits. The radiosonde network has been operative since the late 1940s, but it is
5 The Microwave Sounding Unit senses radiation in a number of different channels, each of which is representative of a different layer of the atmosphere. The measurements discussed in this report are derived from channel 2, a channel that senses radiation in the layer extending from the surface up to about 15 km. To eliminate the influence of the stratospheric radiation, rather elaborate processing is required. The processed data are referred to as MSU 2LT (lower to mid-troposphere). Successive, improved versions of the MSU 2LT data have been produced over the past several years. The current version (D) was released in early 1999. For further discussion see chapter 7.
only since the mid-1960s that the instrumentation has been stable enough and sufficiently well documented for these measurements to be of use for estimating global temperature changes. Continuous MSU measurements began in 1979.
In the scientific literature on the detection of climate change, temperatures are commonly expressed in terms of departures from the local "climate" mean for a specified reference period. In this report, the reference period is the 20-year period 1979–98. Such departures from climate means are referred to as "temperature anomalies." Figure 2.2 shows the patterns of tropospheric temperature anomalies over the Western Hemisphere, as sensed by the MSU, during the northern winters (December through February) of 1982–83 and 1997–98, which both correspond to strong El Niño events in the tropical Pacific, and during the winter of 1988–89, which corresponds to a La Niña event. During the El Niño winters, temperatures throughout the tropics were above the mean of the past 20 years (i.e., the anomalies were positive), with alternating patches of warm and cold anomalies at higher latitudes. In contrast, during the La Niña winter the tropics were colder than the mean for the 20-year period. The patterns in this figure reflect the warmer global-mean temperatures characteristic of El Niño years, in contrast to the cooler La Niña years.
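The anomaly convention can be made concrete with a small numerical sketch. The station values below are synthetic; the only detail taken from the text is the 1979–98 reference period:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly-mean temperatures (deg C) for one station,
# 1979-1998: shape (20 years, 12 months).
seasonal_cycle = 15.0 + 10.0 * np.sin(2 * np.pi * (np.arange(12) - 3) / 12)
temps = seasonal_cycle + 0.3 * rng.standard_normal((20, 12))

# Climatological mean for each calendar month over the 1979-98
# reference period.
climatology = temps.mean(axis=0)          # shape (12,)

# Anomalies: departures from the reference-period monthly means.
anomalies = temps - climatology           # shape (20, 12)

# By construction, anomalies average to ~0 over the reference period,
# so positive values mean "warmer than the 1979-98 mean".
print(np.allclose(anomalies.mean(axis=0), 0.0))
```

Subtracting the climatology per calendar month also removes the seasonal cycle, which is why anomaly maps like Figure 2.2 can be compared directly across stations and seasons.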
Figure 2.3 shows three time series of global-mean temperature anomalies. The black curve represents surface temperature, and the colored curves represent the temperature of the lower to mid-troposphere as inferred from MSU measurements (red) and radiosonde observations (green). Year-to-year fluctuations are evident in all three time series, and particularly in the series for the temperatures aloft. For example, the El Niño years 1983 and 1998 were a few tenths of a degree warmer than the 20-year average, while 1992–93, following the eruption of Mt. Pinatubo, were a few tenths of a degree cooler. Contrasting warm El Niño and cold La Niña years show up even more clearly in the tropical time series shown in Figure 2.4. In both global and tropical data, the peaks and dips in the satellite and radiosonde time series correlate quite well. Since these two time series represent largely independent mean temperature estimates for the same atmospheric layer, the strong correspondence between them is further evidence that the fluctuations are real. El Niño and La Niña years are also evident in surface observations for the tropical belt (Figure 2.4), but they do not show up as clearly in the global-mean time series (Figure 2.3).
Upon close inspection, it is evident that the surface temperature time series in both Figures 2.3 and 2.4 show upward trends relative to the corresponding tropospheric temperature time series for the past 20 years. The fit of a trend line to the time series of global-mean surface temperature (e.g., Figure 2.5) indicates a warming of between 0.25 and 0.4 °C for this 20-year period, or approximately 0.1 to 0.2 °C per decade,6 depending upon which of the existing data sets is used to represent the surface temperatures, and exactly how the fitting is done. In contrast, the tropospheric time series exhibits a smaller upward temperature trend of about 0.1 °C during this 20-year period. This disparity between the recent trends in global-mean surface and tropospheric temperature is the motivation for this report. Since this phenomenon first became apparent in the early 1990s, the research community has been seeking to identify and quantify possible sources of error in the surface and upper air temperature measurements, and it has been trying to understand the physical processes that may have caused surface and upper air temperatures to change relative to one another. A number of biases in the data sets have been identified and corrected, and the process of refining the data sets is continuing.
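The trend-fitting step amounts to an ordinary least-squares fit whose slope is rescaled to degrees per decade. A minimal sketch, using synthetic anomalies rather than any observed record:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative annual global-mean temperature anomalies (deg C) for
# 1979-1998; these numbers are synthetic, not the observed record.
years = np.arange(1979, 1999)
anoms = 0.015 * (years - 1979) + 0.1 * rng.standard_normal(years.size)

# Ordinary least-squares trend line: slope comes back in deg C per year.
slope_per_year, intercept = np.polyfit(years, anoms, 1)

# Conventionally reported in deg C per decade (see footnote 6).
trend_per_decade = 10.0 * slope_per_year
print(f"trend: {trend_per_decade:.2f} deg C/decade")
```

Because the fitted slope depends on the noise realization and on the exact endpoints of the record, different data sets and fitting choices yield somewhat different trends, which is one reason the text quotes a range rather than a single number.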
In considering possible sources of errors in the satellite, radiosonde, and surface-based temperature measurements, it should be noted at the outset that none of these measurement systems was specifically designed for long-term climate monitoring (NRC, 1999). Changes in instrumentation and station locations have introduced time-varying biases into all three temperature time series. In principle, time series can be adjusted to remove these artifacts, but in practice there is some ambiguity in making such corrections. Decisions concerning which corrections need to be made, and how to implement them, are subject to debate. While many adjustments have been implemented, some quite recently, there will always remain the possibility of biases in the data beyond the range of the formal error estimates, which are based on currently recognized sources of error. One mitigating factor is the
6 In the literature on climate change, rates of change observed during prescribed intervals such as the past 20 years are conventionally expressed in units of degrees per decade. Rates of change computed in this manner are not necessarily applicable to periods of record outside the interval for which they were estimated. For example, the rate of warming of surface air temperature observed during the past 20 years is much greater than that observed during the previous 20-year interval, 1960–79, and is not necessarily indicative of the rate of temperature change that will be observed during the future interval 2000–2019.
independence of the measurement errors and uncertainties in the satellite, radiosonde, and surface-based temperature records, which lends greater confidence to an assessment based on all three measurement categories than to an assessment based on any one of them in isolation.
A concern that has been raised with respect to the surface-based temperature measurements is the effect of land use changes such as urbanization. As growing metropolitan areas encroach into the surroundings of formerly rural observing stations, the temperatures at these stations rise, particularly at night, in response to the well-documented "urban heat island effect." Some have suggested that much of the observed rise in global surface temperature during the twentieth century might be merely the expression of such local environmental transformations: changes that are real, but that are not necessarily a signature of the global warming predicted to be associated with increasing atmospheric greenhouse gas concentrations. These concerns have been addressed in numerous studies over the years that have sought to quantify the effect of land use changes and adjust the estimated global surface temperature
trends accordingly. There have also been continuing efforts to document changes in instrumentation and observing practices, and to make appropriate adjustments in the data to compensate for them. Documentation of instrumentation and observing practices is also critical with respect to the radiosonde data. Ongoing efforts are being made to recover information on the past observing practices of the various national weather services and to apply adjustments as appropriate.
The major uncertainties in satellite measurements of upper air temperature are due to sensor and spacecraft biases and instabilities, the characteristics of which need to be estimated by performing satellite intercalibrations during overlapping intervals. These intervals are designed to be about two years long, but on two occasions, the overlap was substantially shorter due to instrument failures. The temperature measurements have recently been adjusted for gradual changes in satellite orbits that affect the levels and times of day at which the microwave radiation is sampled, and for small non-linearities in sensor performance, which cannot be determined in advance on the basis of laboratory calibrations. Because there is, in effect, only one satellite-based temperature record for which most of the processing has been performed by a single group, efforts to independently verify the MSU temperature measurements have, of necessity, focused on comparisons with radiosonde data.
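The intercalibration principle, stripped of the real MSU processing details, can be sketched as follows: during an interval when two satellites overlap, the mean difference of their co-located measurements estimates the inter-satellite bias, which is removed before the records are joined. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "true" monthly brightness temperatures (K) over 48 months.
truth = 250.0 + 0.5 * rng.standard_normal(48)

# Satellite A observes months 0-29; satellite B observes months 18-47
# with an unknown calibration offset of +0.4 K plus instrument noise.
sat_a = truth[:30] + 0.05 * rng.standard_normal(30)
sat_b = truth[18:] + 0.4 + 0.05 * rng.standard_normal(30)

# Overlap interval: months 18-29 as seen by both satellites.
overlap_a = sat_a[18:30]
overlap_b = sat_b[:12]

# Estimate the inter-satellite bias from the overlap mean difference,
# then adjust satellite B before merging the two records.
bias = (overlap_b - overlap_a).mean()
sat_b_adjusted = sat_b - bias

merged = np.concatenate([sat_a, sat_b_adjusted[12:]])
print(f"estimated bias: {bias:.2f} K")
```

Note that the precision of the bias estimate degrades as the overlap shortens, which is why the two occasions of abbreviated overlap mentioned above are a concern for the long-term record.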
Calculating the global-mean temperature anomaly for a particular season based on the MSU is straightforward, because the measurements are densely spaced and global in extent. However, for radiosonde observations, which are irregularly spaced with large gaps over the oceans (Figure 2.6), global-mean temperature is estimated on the basis of those stations operating during the season in question. Notice, for example, how the radiosonde data fail to sample the strongest local temperature anomalies over the subtropical eastern Pacific shown in Figure 2.2. Even in the absence of any real temperature variation, the global-mean temperature anomaly computed from radiosonde data could conceivably change from one season or decade to the next, merely as a result of stations in one of these poorly sampled regions going into or out of operation. Surface-based estimates are also subject to similar discontinuities, but they are not considered as serious a problem because there are so many more surface stations than there are radiosonde stations (compare Figures 2.6 and 2.7). In addition, surface data coverage over the oceans is much better, with the notable exception of high
latitudes in the Southern Hemisphere. The effects of this uneven sampling are being investigated and quantified in several ways, for example by estimating "true" global-mean temperatures from the complete fields generated by satellite observations, blends of satellite and in situ data, or climate models, and then sampling these fields using the actual (incomplete) observed data coverage (see chapter 9).
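The sub-sampling experiments described above can be sketched in a few lines: compute the area-weighted global mean of a spatially complete field, then recompute it using only the grid points covered by a hypothetical, unevenly distributed station network. The field and the network here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# A complete "truth" anomaly field on a coarse lat-lon grid, standing
# in for a model or satellite analysis (values are synthetic).
lats = np.linspace(-87.5, 87.5, 36)
lons = np.linspace(0.0, 355.0, 72)
field = rng.standard_normal((lats.size, lons.size))

# Area weights: grid-cell area is proportional to cos(latitude).
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, lons.size))

true_mean = np.average(field, weights=weights)

# A sparse, uneven "station" mask, denser in northern mid-latitudes,
# mimicking the radiosonde network's gaps over the oceans and the
# Southern Hemisphere.
p = np.where((lats[:, None] > 20) & (lats[:, None] < 70), 0.3, 0.05)
mask = rng.random(field.shape) < p

sampled_mean = np.average(field[mask], weights=weights[mask])

print(f"sampling error: {sampled_mean - true_mean:+.3f}")
```

Repeating such an experiment with station sets that change over time shows how spurious trends can arise purely from stations entering or leaving the network.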
Measurement errors and uncertainties are not the whole story. The possibility of a real disparity between trends in surface and tropospheric temperature also needs to be considered. One way of making such an assessment is to consider whether simulations of the evolving climate of the past 20 years in climate models exhibit disparities as large as those that are observed.
Climate models are tools that can be used to relate changes at the surface to those in the troposphere. Although today's state-of-the-art models accurately depict many physical processes, they are deficient in several respects, owing to difficulties in representing small-scale processes, such as those associated with clouds. Moreover, the detailed three-dimensional spatial structure and the temporal evolution of the many forcings of the climate system that are used to "drive" the models are poorly known. Model simulations are helpful in understanding the disparity between the 20-year trends in surface versus tropospheric temperatures, but they are not sufficiently reliable to provide a definitive assessment of whether the trends at these two levels are physically consistent.
Due to the non-deterministic nature of the climate system, an ensemble7 of simulations run with the same climate model yields a
7 In such an ensemble, each individual simulation is run with the same time-dependent climate forcings (greenhouse gases, aerosols, etc.), but with different, equally
(footnote continued on the next page)
number of different possible scenarios, each with its own 20-year trends at various levels of the atmosphere. It is only by performing ensembles of simulations with these models that it is possible to assess whether the observed disparity lies within the range of what should be regarded as physically plausible. Because these numerical experiments are computationally intensive, only a very limited number of them have been run thus far.
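The ensemble logic can be illustrated numerically: give every member the same forced warming, add a different realization of internal variability to each (standing in for different initial conditions), and examine the spread of the resulting 20-year trends. All values here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

years = np.arange(20)
forced_trend = 0.01            # deg C/yr, identical "forcing" in each run

# Each ensemble member: the same forced signal plus a different
# realization of internal variability.
n_members = 8
members = forced_trend * years + 0.12 * rng.standard_normal((n_members, 20))

# 20-year least-squares trend for each member, in deg C/decade.
trends = 10.0 * np.array([np.polyfit(years, m, 1)[0] for m in members])

spread = trends.max() - trends.min()
print(f"ensemble trends (deg C/decade): min {trends.min():.2f}, "
      f"max {trends.max():.2f}, spread {spread:.2f}")
```

Even with identical forcing, the fitted 20-year trends differ from member to member; an observed surface-troposphere trend difference is judged physically plausible if it falls within such an ensemble spread.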
It is evident from Figure 2.3 that globally averaged temperature fluctuations associated with El Niño tend to be larger aloft than at the surface, and this behavior is well simulated in numerical models. The models likewise simulate stronger cooling aloft than at the surface in the wake of major volcanic eruptions such as Mt. Pinatubo, and a larger amplitude aloft in the temperature variations induced by fluctuating solar irradiance. The longer the period over which trends are computed, the more these naturally occurring fluctuations tend to average out: their influence upon the trends should be much smaller when the trends are estimated from a 20-year record than from a 5-year record. However, model simulations suggest that such natural variability can still account for an appreciable fraction of the observed disparity between the global-mean temperature trends at the earth's surface and in the lower to mid-troposphere. Because 20-year trends can be substantially influenced by just a few single- or multi-year "warm" or "cold" events, they are not necessarily representative of the true response of the climate system to the more gradual changes in atmospheric composition that are taking place in response to human activities.
A number of different human-induced forcings are, in fact, believed to have contributed to the observed temperature changes during the past 20 years. The climate system is highly non-linear,8 and relatively little is known about the temperature changes resulting from human contributions to the changing three-dimensional distributions of ozone and aerosols, either or both of which may have been partially responsible for the observed disparity between surface and lower to mid-tropospheric temperature changes. The aerosol contribution is
(footnote continued from the previous page)
plausible initial conditions. Differences among the climates in the individual simulations are interpreted as being due to the internal (unforced) variability of the climate system.
8 Highly non-linear in this context means that there is no guarantee that the response of the climate system to the sum of these forcings would be equal to the sum of its responses to the individual forcings if each of them had occurred in isolation.
particularly difficult to estimate because of the limited understanding of how aerosols affect cloud properties, which affect the transfer of radiation through the atmosphere. In addition to changes in atmospheric composition, land use changes can be a significant factor in causing climate change at the earth's surface.
Despite the many unresolved issues touched on in this chapter and discussed in more detail in chapters 5–9, the progress that has been achieved over the past few years provides a basis for drawing some tentative conclusions concerning the nature of the observed differences between surface and upper air temperature trends, and their implications for the detection and attribution of global climate change.