compared to the observed decadal temperature changes for the last 150 years described above.

Errors in the instrumental record can reduce the effectiveness of the proxy calibration process because they weaken the fundamental relationship sought from the calibration exercise. For example, proxy–temperature relationships determined on the local scale suffer from errors arising from (a) inhomogeneous data at the land air temperature calibration site, (b) horizontal distance between the proxy location and the land air temperature site, (c) elevation differences between the proxy location and the land air temperature site, and (d) differences among the land air temperature sites that are composited to create the calibration and validation datasets. As a result, there are many opportunities for errors in the measurements and averaging techniques to influence the temperature datasets against which proxy methods are calibrated and verified. Fortunately, as the size of the samples being averaged and tested increases, random and uncorrelated errors tend to cancel, increasing confidence in the resulting variations.
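The cancellation of uncorrelated errors follows from the fact that the standard error of the mean of N independent measurements scales as 1/√N. The following minimal Monte Carlo sketch illustrates this scaling; the error standard deviation and station counts are hypothetical values chosen purely for illustration, not drawn from any dataset cited here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: each "station" reports a true anomaly of 0.0 degC
# plus independent random measurement error with standard deviation sigma.
sigma = 0.5          # assumed per-station error standard deviation (degC)
n_trials = 10_000    # Monte Carlo repetitions

for n_stations in (1, 10, 50, 100, 500):
    # Average n_stations independent errors in each trial.
    errors = rng.normal(0.0, sigma, size=(n_trials, n_stations))
    composite = errors.mean(axis=1)
    # Compare the empirical spread of the composite to the 1/sqrt(N) prediction.
    print(f"N={n_stations:4d}  empirical SE={composite.std():.4f}  "
          f"predicted sigma/sqrt(N)={sigma / np.sqrt(n_stations):.4f}")
```

The empirical standard error tracks σ/√N closely, which is why compositing many stations suppresses random errors, although it cannot remove biases that are shared across stations.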

There is also the added burden of dealing with new versions of particular datasets. Estimates of large-scale average temperatures for particular periods produced by different research groups have changed somewhat over time. This occurs when the groups (a) update the primary source data used in the large-scale averages, (b) institute new adjustment procedures, or (c) adopt new spatial or temporal averaging techniques. Thus, the calibration or verification of a proxy record performed against an early version of an instrumental record may shift slightly when the instrumental data against which the proxy was calibrated are revised.
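To see how a revised instrumental target can shift a calibration, consider fitting a simple linear proxy–temperature regression against two versions of the same record. All series below, including the 0.05°C pre-1950 adjustment that stands in for a new homogenization, are hypothetical and serve only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy series tied to a hypothetical temperature record.
years = np.arange(1900, 1981)
temp_v1 = 0.005 * (years - 1900) + rng.normal(0.0, 0.15, years.size)
proxy = 2.0 * temp_v1 + rng.normal(0.0, 0.2, years.size)

# "Version 2" of the instrumental record: an assumed homogeneity
# adjustment warms the pre-1950 portion slightly.
temp_v2 = temp_v1.copy()
temp_v2[years < 1950] += 0.05

# Refit the calibration against each version of the target series.
for label, temp in (("v1", temp_v1), ("v2", temp_v2)):
    slope, intercept = np.polyfit(proxy, temp, deg=1)
    print(f"{label}: temperature = {slope:.4f} * proxy + {intercept:+.4f}")
```

Even a small revision to the target series changes the fitted coefficients, and hence the temperatures that the proxy implies outside the calibration interval.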

SPATIAL SAMPLING ISSUES

Deducing the number of sites at which surface temperature needs to be sampled in order to represent variations in global (or hemispheric) mean temperature with a specified level of accuracy is a challenge no less formidable than deducing the temperature variations themselves. The most obvious way to address this problem is to try to replicate the variations in the Earth's temperature in the instrumental record using limited subsets of station data. The effectiveness of this approach is limited by the length of the observational record. One way of overcoming this limitation is to sample much longer time series of synthetic climate variations generated by climate models, but this strategy is compromised by the limited capability of the models to simulate temperature variations on century-to-century timescales and on spatial scales that represent the highly variable character of the Earth's surface. The studies that have been performed to date suggest that 50–100 geographically dispersed sites are sufficient to replicate the variability in the instrumental record (e.g., Hansen and Lebedeff 1987, Karl et al. 1994, Shen et al. 1994). These results indicate that the temperature fluctuations in the instrumental record are well resolved, and comparisons show that proxy records generally reflect the same variability as instrumental records where the two overlap (Jones et al. 1997). However, they leave open the question of whether the proxy records are sufficiently numerous and geographically dispersed to resolve the major features in the time series of the Earth's temperature extending back hundreds or even thousands of years.
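The subsampling strategy described above can be illustrated with a toy experiment: build a synthetic, spatially coherent "global" anomaly field, then ask how well the average over N randomly chosen sites tracks the full-field mean. The field construction below, a shared global signal plus independent local noise, is an assumption made purely for illustration and is far simpler than either the real climate or the model-generated fields the text refers to:

```python
import numpy as np

rng = np.random.default_rng(7)

n_years, n_sites = 150, 2000

# Hypothetical field: every site sees a common global signal plus local noise.
global_signal = np.cumsum(rng.normal(0.0, 0.05, n_years))    # red-noise "climate"
local_noise = rng.normal(0.0, 0.5, size=(n_years, n_sites))  # site-level weather
field = global_signal[:, None] + local_noise

true_mean = field.mean(axis=1)  # "true" global mean over all sites

for n in (5, 10, 50, 100, 500):
    # Correlate the N-site average with the full-field mean,
    # averaged over repeated random draws of the site subset.
    corrs = []
    for _ in range(200):
        subset = rng.choice(n_sites, size=n, replace=False)
        sub_mean = field[:, subset].mean(axis=1)
        corrs.append(np.corrcoef(sub_mean, true_mean)[0, 1])
    print(f"N={n:4d}  mean correlation with full mean: {np.mean(corrs):.3f}")
```

In this toy setting the correlation saturates well below a few hundred sites, broadly consistent with the 50–100 site estimates cited above; the open question, as the text notes, is whether actual proxy networks achieve comparable coverage over the much longer periods of interest.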

Hopes for reliable Northern Hemisphere and global surface temperature reconstructions extending back far beyond the instrumental record are based on the premise that local surface temperature variations on timescales of centuries and longer are


