641-659, but model simulations cannot be definitive given the exceptional nature of the 1997-98 event. Even in Figure 5.7, only a single set of error bars is given.

3. The chapter notes the importance of the stratospheric contribution to the channel 2 temperatures and refers to Fu et al. in lines 580-583, but then never allows for this in the subsequent comparisons. As a result, Figures 5.2B, 5.3, 5.4, and 5.5 and the accompanying discussion are all misleading, because the models clearly have different cooling in the stratosphere; the discussion in Chapter 4 suggests that this accounts for a 0.05 K/decade trend in the channel 2 discrepancy. Several parts of the text ought to be substantially revised as a result (including lines 614-624, 631-633, and others).
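For reference, the Fu et al. approach removes the stratospheric contribution by combining MSU channels 2 and 4 in a weighted difference; a schematic form (coefficients shown only symbolically here, not the published values) is

\[ T_{\mathrm{tropo}} \approx a_2\, T_2 + a_4\, T_4, \qquad a_2 > 1,\; a_4 < 0,\; a_2 + a_4 \approx 1, \]

where T_2 and T_4 are the channel 2 and channel 4 brightness temperatures and the negative channel 4 weight subtracts the stratospheric signal that channel 2 partially samples. Applying the same adjustment, or an equivalent allowance, to both models and observations before comparison would address this concern.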

4. The chapter focuses on global means and some zonal means, yet regional trends differ considerably from global values (Agudelo and Curry, 2004). For instance, the large increase in surface temperature over northern land and the smaller decrease in the troposphere, which is related to changes in surface inversions (Chapters 1 and 3), are not examined in the models and so are not picked up. The chapter comes closest with Figure 5.5, but that figure fails to account for the stratospheric contamination. The fact that the sonde network is not global is also not dealt with: the model data are not subsampled at the sonde locations (a schematic example of such subsampling is sketched below).
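As an illustration of the kind of subsampling the committee has in mind, the following minimal sketch (synthetic data; the station coordinates, array names, and grid spacing are purely illustrative) extracts model values at the grid points nearest a set of radiosonde stations:

    import numpy as np

    def subsample_at_stations(field, grid_lats, grid_lons, stn_lats, stn_lons):
        # Nearest-grid-point extraction of a 2-D model field (nlat x nlon)
        # at a set of station latitudes/longitudes (degrees, lon in 0-360).
        lat_idx = np.abs(grid_lats[:, None] - stn_lats[None, :]).argmin(axis=0)
        dlon = (grid_lons[:, None] - stn_lons[None, :] + 180.0) % 360.0 - 180.0
        lon_idx = np.abs(dlon).argmin(axis=0)
        return field[lat_idx, lon_idx]

    # Synthetic example: a 2.5-degree grid and three hypothetical stations.
    grid_lats = np.arange(-88.75, 90.0, 2.5)
    grid_lons = np.arange(0.0, 360.0, 2.5)
    field = np.random.randn(grid_lats.size, grid_lons.size)  # stand-in for a model trend map
    stn_lats = np.array([71.3, 52.2, -12.4])
    stn_lons = np.array([203.2, 14.1, 130.9])
    print(subsample_at_stations(field, grid_lats, grid_lons, stn_lats, stn_lons))

Global-mean or zonal-mean statistics computed from the subsampled values could then be compared directly with the sonde record, rather than with the full model grid.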

5. There should be more explicit discussion of the specific responses to individual forcings and of how they combine. This could be done in the first of the conclusions that have “some confidence” (see details towards the end of the specific comments). There should also be a discussion of the use of multiple regional forcings in models.

6. In the presentation by B. Santer during the February 23, 2005 NRC meeting (Chicago, Illinois), the committee liked the two model plots (of standard deviations and trends at the surface and in the lower and middle troposphere) and hopes that these can be included in a revised chapter, along with results from as many additional models as time allows.

7. The committee had some discussion of how the basic methodology of “Detection and Attribution” should be presented in the report. What is needed is not a full mathematical description of the method (for that one can refer to the original source papers) but a discussion of the main principles behind the methodology that would be appropriate for a climate scientist who does not work directly in this area of research. There needs to be a better understanding of the strengths and limitations of detection and attribution analyses. What follows is a tentative suggestion of how to do this. In addition, the authors may find the work of Levine and Berliner (1999) useful in revising this discussion.

Detection and attribution methods try to represent an observed climatic data set in terms of signals due to forcing factors such as greenhouse gases, aerosols, and solar fluctuations, plus correlated random noise. The methods are also called “fingerprint analysis” because it is possible to think of the method as identifying specific fingerprints (spatial patterns of climate change due to specific forcing factors) in the observational climate record. The climate data typically consist of temperature or rainfall averages over grid boxes and are very high-dimensional.
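As one tentative way of conveying those principles (the notation here is a suggestion only, not a prescription for the chapter), the standard linear formulation can be summarized as

\[ \mathbf{y} = \sum_{i=1}^{m} \beta_i \mathbf{x}_i + \boldsymbol{\nu}, \]

where y is the observed space-time data vector, x_i is the model-simulated fingerprint (pattern) for forcing factor i, beta_i is an unknown scaling amplitude, and nu is internal climate variability with covariance matrix C estimated from control simulations. A generalized least-squares estimate of the amplitudes is

\[ \hat{\boldsymbol{\beta}} = \left( \mathbf{X}^{\mathsf{T}} \mathbf{C}^{-1} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{C}^{-1} \mathbf{y}, \]

with X the matrix whose columns are the fingerprints x_i. Detection of forcing i corresponds to an uncertainty range for beta_i that excludes zero; attribution additionally requires that range to be consistent with unity and the regression residuals to be consistent with internal variability. Because C is high-dimensional and estimated from limited control-run data, the analysis is normally carried out in a truncated space of leading patterns, which is one of the main limitations that should be explained to the reader.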


