There have been criticisms of the techniques used to create large-scale surface temperature reconstructions and, in particular, of the work done by Mann et al. (e.g., Zorita and von Storch 2005; McIntyre and McKitrick 2003, 2005a,b; von Storch et al. 2004; Moberg et al. 2005b). One criticism is related to the question of whether century-to-century climate variations are underestimated in proxy records that have strong year-to-year variability and consist of segments that have been spliced together to obtain a chronology longer than any of the segments. Several research groups have developed reconstruction methods to address this problem. For instance, Esper et al. (2002a) developed a tree-ring-based reconstruction that attempts to remove the bias by using improved statistical methods explicitly designed to preserve low-frequency variability. Moberg et al. (2005b) used wavelet analysis to separate annually resolved records (tree rings) from smoother, non-annual records (such as ice borehole temperatures and sediment-based records). These studies indicate that the true amplitude of temperature variations over the last 1,000–2,000 years may have been roughly twice as large as was previously proposed (see Figure 11-1), although their results differ in geographic emphasis and in the details of the time sequence of the temperature changes. Von Storch et al. (2004) used a long climate model simulation to produce artificial proxy data and then compared reconstructions of hemispheric mean temperature under varying degrees of noise contamination; they found that the full amplitude of century-to-century variations was underestimated to an increasing degree as the noise level was increased. Thus, the reconstruction of century-long trends carries substantial uncertainty when it is based on data that exhibit strong year-to-year variability.
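The attenuation effect identified in the pseudoproxy experiments of von Storch et al. (2004) can be illustrated with a minimal sketch. This is not their actual field-reconstruction method; it uses a hypothetical "true" temperature series, white-noise proxy contamination, and a simple linear-regression calibration, all with arbitrary parameters chosen only to make the mechanism visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1000
t = np.arange(n_years)

# Hypothetical "true" hemispheric temperature (arbitrary units):
# a slow century-scale oscillation plus year-to-year weather noise.
truth = 0.5 * np.sin(2 * np.pi * t / 400) + 0.2 * rng.standard_normal(n_years)

def reconstruct(noise_level):
    """Pseudoproxy reconstruction: contaminate the true signal with
    non-climatic noise, then calibrate by linear regression against
    the last 150 'instrumental' years."""
    proxy = truth + noise_level * rng.standard_normal(n_years)
    cal = slice(n_years - 150, n_years)
    slope, intercept = np.polyfit(proxy[cal], truth[cal], 1)
    return slope * proxy + intercept

def century_amplitude(series):
    # Amplitude of century-scale variability: standard deviation
    # of 100-year running means.
    smooth = np.convolve(series, np.ones(100) / 100, mode="valid")
    return smooth.std()

true_amp = century_amplitude(truth)
for noise in (0.1, 0.5, 1.0):
    ratio = century_amplitude(reconstruct(noise)) / true_amp
    print(f"noise={noise}: recovers {ratio:.2f} of century-scale amplitude")
```

The regression slope is roughly var(signal) / (var(signal) + var(noise)), so it shrinks toward zero as proxy noise grows, and the reconstructed century-scale amplitude shrinks with it; this is the same attenuation mechanism the pseudoproxy studies describe, reproduced here in deliberately simplified form.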
A second area of criticism focuses on statistical validation and robustness. McIntyre and McKitrick (2003, 2005a,b) question the choice and application of statistical meth-