than the results of the 1-D EBMs because of the lack of interesting nonlinear feedbacks, such as the ice-albedo feedback. Various applications of 2-D EBMs to paleoclimate studies are reviewed by Crowley and North (1991). In this section North and Kim (1995) consider a randomly forced 2-D EBM in the context of decade-to-century-scale variability.

Finally, there are 2-D models of barotropic planetary flow as well, resolved in latitude and longitude. These focus on the dynamics that is not contained in 2-D EBMs. Highly truncated barotropic models (Charney and DeVore, 1979) have provided insight—similar to that of the 1-D EBMs for atmospheric thermodynamics—into the possibility of multiple equilibria in atmospheric dynamics. More highly resolved versions of such models have shown a rich variety of flow regimes (Legras and Ghil, 1985) and of intraseasonal oscillations (Ghil et al., 1991). Intermediate between these 2-D models and GCMs are baroclinic flow models with two to five layers and limited sub-grid-scale parameterizations (Reinhold and Pierrehumbert, 1982; Simmons and Hoskins, 1979). Such models can play an important role—complementary to high-resolution GCMs—in determining the change in frequency and severity of persistent anomalies in the atmospheric circulation, given changes in mean temperature or in surface insolation.

The application of three-dimensional (3-D) GCMs to decade-to-century-scale and paleoclimate variability is discussed in this section by Rind and Overpeck (1995). A general introduction is provided by Washington and Parkinson (1986). GCMs are computer models based on the equations governing the 3-D dynamics and thermodynamics of the atmosphere. Today they provide a fairly detailed and reliable simulation of the present climate. The ultimate goal of GCM modeling is to achieve such a simulation by deriving all the terms in the model equations from first principles of atmospheric dynamics, physics, and chemistry. In practice, given the incomplete state of our knowledge in these fields and the limited spatial resolution permitted by any computational device, GCMs contain many semi-empirical terms; their empirical parameters are calibrated ("tuned"), explicitly or implicitly, to the present climate. Hence, an adequate simulation of the latter does not guarantee that a GCM's climate sensitivity will be an equally good approximation of the natural climate's sensitivity.

The major conclusion of this brief review is that modeling and predicting natural climate variability on decade-to-century time scales will require the use of the full range of atmospheric models available. GCM results on the steady-state and transient response of climate to greenhouse-gas and aerosol loading are routinely interpreted in terms of the simple linear 0-D EBM presented at the beginning of this section (Hansen et al., 1985). The intermediate models—1-D and 2-D—described here better capture the climate system's essential inhomogeneities and nonlinearities (Ghil and Childress, 1987). They should therefore play a more important role in understanding and complementing GCM results than has been the case heretofore.

VALIDATION METHODOLOGY

In some sense, model validation can never be complete, since we are making models to describe or predict behavior that we cannot observe directly in nature; that is a primary goal of modeling. On the other hand, confidence in a model's predictions increases as additional aspects of its behavior come to agree with those observations we have. Validation methodology is thus essential to our confidence in model performance.

A model can be validated against other models, whose behavior either is better understood or has been previously tested against observations, or directly against observations. To be more precise, a physical model is usually tested against some statistical model of reality that is itself fitted to the raw observations. The role of a statistical model is more that of a go-between in comparing a dynamical model with reality than that of a competitor. Ideally, the same statistical model should fit equally well both the data and the solution of the dynamical, physical, or—more generally—understanding-based model. For the purpose of illustration in this brief review, the discussion that follows emphasizes validation in the time and frequency domains. Validation in the space and wave-number domains is at least as important for decade-to-century-scale variability. Questions that pertain to changes in the position and intensity of mid-latitude storm tracks or of tropical cyclones, changes in the location and extent of droughts or floods, or changes in other persistent anomalies in the workings of the climate system cannot be reliably addressed without proper validation of the models used in the latter domains. The main tools of such validation are mass-, energy-, and momentum-flux diagnostics, along with the examination of the spatial patterns of the meteorological fields of primary interest, such as surface air temperature and precipitation.
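
As a minimal illustration of the go-between role of a statistical model, the following Python sketch fits the same AR(1) model to an observed and a simulated series and compares the fitted parameters. The synthetic input series, the function name, and the parameter choices are illustrative assumptions, not taken from the text.

```python
# Illustrative sketch: fit the same AR(1) "go-between" model to an
# observed and a simulated series and compare the fitted parameters.
# The input series here are synthetic stand-ins.
import numpy as np

def fit_ar1(x):
    """Return the lag-1 regression coefficient and residual variance of an AR(1) fit."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # least-squares lag-1 coefficient
    resid = x[1:] - phi * x[:-1]
    return phi, resid.var()

rng = np.random.default_rng(0)
observed = np.convolve(rng.normal(size=600), np.ones(5) / 5, mode="same")   # smoothed noise
simulated = np.convolve(rng.normal(size=600), np.ones(5) / 5, mode="same")

phi_obs, var_obs = fit_ar1(observed)
phi_sim, var_sim = fit_ar1(simulated)
print(f"AR(1) fit -- observations: phi = {phi_obs:.2f}, model output: phi = {phi_sim:.2f}")
```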

North and Kim (1995) review in this section the classical estimation and prediction theory of random processes due to Kolmogorov (1941) and Wiener (1956), and its application to climate signals by Barnett (1986) and Bell (1986). Considerable attention has recently been given, in the geosciences in general and in climate signal analysis in particular, to two relatively novel methodologies: the multi-taper method and singular spectrum analysis.
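
The classical estimation theory just mentioned can be made concrete with a small, hedged example: the non-causal Wiener filter H(f) = S_s(f) / (S_s(f) + S_n(f)) weights each frequency by its share of the signal variance. In the Python sketch below the signal and noise spectra are assumed known, which is never the case in practice, and all names and numbers are illustrative.

```python
# A hedged sketch of Wiener-type optimal filtering of a "climate signal"
# buried in noise, assuming the signal and noise spectra are known.
import numpy as np

def wiener_filter(x, S_signal, S_noise):
    """Apply the non-causal Wiener filter H(f) = S_s/(S_s + S_n) in the frequency domain."""
    X = np.fft.rfft(x)
    H = S_signal / (S_signal + S_noise)
    return np.fft.irfft(H * X, n=len(x))

rng = np.random.default_rng(1)
N = 512
t = np.arange(N)
signal = np.sin(2 * np.pi * t / 64)            # slow oscillation, period 64
x = signal + rng.normal(size=N)                # signal plus white noise

freqs = np.fft.rfftfreq(N)
S_signal = np.zeros_like(freqs)
S_signal[np.argmin(np.abs(freqs - 1.0 / 64))] = 10.0   # assumed signal power at f = 1/64
S_noise = np.ones_like(freqs)                           # assumed flat noise spectrum
x_hat = wiener_filter(x, S_signal, S_noise)

print("RMS error before filtering:", round(float(np.std(x - signal)), 2))
print("RMS error after filtering: ", round(float(np.std(x_hat - signal)), 2))
```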

The multi-taper method (MTM) introduced by Thomson (1982) uses an optimal set of tapers, based on Slepian's (1978) work, rather than the single, empirical taper used in classical spectral filters (Jenkins and Watts, 1968). It permits high spectral resolution, as well as confidence intervals that are based on coherence; these intervals are therefore independent of spectral amplitude, unlike those provided by classical methods.
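
A minimal sketch of the multi-taper idea, assuming SciPy's discrete prolate spheroidal (Slepian) tapers are available: each taper yields a nearly independent spectral estimate of the same series, and averaging the K eigenspectra reduces the variance of the estimate. The time-bandwidth product NW, the number of tapers K, and the test series are illustrative choices; the adaptive weighting and the coherence-based significance tests of the full method are omitted here.

```python
# A simplified multi-taper spectral estimate: average the K periodograms
# obtained with orthogonal DPSS (Slepian) tapers.  NW, K, and the test
# series are illustrative; adaptive weighting is omitted.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs=1.0, NW=4.0, K=7):
    """Average the K eigenspectra obtained with DPSS tapers of time-bandwidth product NW."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    tapers = dpss(len(x), NW, Kmax=K)                        # K orthogonal tapers, shape (K, N)
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, eigenspectra.mean(axis=0) / fs

rng = np.random.default_rng(2)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(size=t.size)   # line at f = 0.05 plus noise
freqs, psd = multitaper_psd(x)
print("Estimated spectral peak near f =", round(float(freqs[np.argmax(psd)]), 3))
```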


