OPENING SESSION

JERRY MAHLMAN

Jerry Mahlman, from the National Center for Atmospheric Research (NCAR) and moderator of the workshop, welcomed the participants. He explained that the goals of the workshop were to discuss, analyze, and synthesize recent work on constraining estimates and probability intervals for climate sensitivity, focusing primarily on work carried out since the production of the IPCC Third Assessment Report (IPCC/TAR).
Mahlman noted that this is an exciting time to examine our understanding of the sensitivity of the climate system because valuable insights are being gained from climate model intercomparison and diagnostic studies and because researchers are developing novel ways to think about the problem from a statistical-probabilistic point of view.
He explained that the workshop discussions would include:
elucidating the breadth and complexities of both forcings and feedbacks in the climate system;
reflecting on lessons gleaned from the paleoclimate record about climatic response to natural forcings;
examining intrinsic uncertainties in the climate system that affect our ability to constrain climate sensitivity estimates; and
considering how to frame these concepts in a way that nonscientific audiences can grasp.
JANE LEGGETT

Jane Leggett, from the U.S. Environmental Protection Agency (EPA), discussed how the analytical and policy communities use the information produced by climate scientists. She expressed the hope that the workshop would provide a “checkpoint” on emerging research and would clarify what scientists do and do not know about climate sensitivity (see Box 1). She asked the speakers to help elucidate the points of agreement and disagreement, the issues that require further resolution, and the priorities for research needed to reduce uncertainties.
BOX 1 CLIMATE SENSITIVITY DEFINITIONS

Before the scientific presentations, workshop participants agreed that they had to work with, at the very least, two different definitions of climate sensitivity:

Equilibrium climate sensitivity (Seq): the global mean surface-air temperature warming achieved at long-term equilibrium, for a doubling of atmospheric CO2 over pre-industrial levels, commonly set at 280 parts per million by volume (ppmv).

Transient climate sensitivity (Str): the global mean surface-air temperature warming achieved at the time atmospheric CO2 concentrations, increasing at an assumed rate of 1 percent per year compounded, reach double the pre-industrial level.

When a speaker refers specifically to one of these definitions, it is noted as Seq or Str.1

Analysts are increasingly using integrated assessment models to examine scenarios of future climate change. When examining a large number of scenarios, modeling with a full climate model is too costly and time-consuming. Analysts instead rely on reduced-form climate models in which climate sensitivity is an input assumption rather than a diagnostic.

1 For specific climate model intercomparisons and evaluations that use “realistic” past and projected radiative forcing, it has sometimes proved useful to use “effective climate sensitivity” (Seff) as a measure of a model’s time-varying warming response under such forcing scenarios. Seff is occasionally referred to in this report to describe a specific participant’s or group’s research strategy.
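The 1 percent per year compounding in the Str definition fixes the time to CO2 doubling; a minimal check of the arithmetic:

```python
import math

# (1.01)^t = 2  =>  t = ln(2) / ln(1.01)
years_to_doubling = math.log(2) / math.log(1.01)
print(round(years_to_doubling, 1))  # about 70 years
```

So Str measures the warming realized roughly 70 years into the ramp, before the deep ocean has equilibrated, which is why Str is generally smaller than Seq.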
For this, they require guidance from the scientific community about the appropriate range of sensitivity estimates and associated uncertainties. The kinds of questions that analysts and policy makers may pose to climate modelers include the following:

How has our understanding of the most likely value of sensitivity changed in recent decades? When and how will we have better estimates?

What do scientists think about the shape of the probability distribution for sensitivity? Can we put more credible limits on the high end of the tail of the distribution?

How good is sensitivity as a general indicator of the future severity of climate change? Does it take into account possible non-linearities in the climate system?

How does the value of sensitivity differ for different types of forcings and time scales? Are other indicators needed to express the response of the climate system to non-CO2 forcings?

Policy makers need quantitative information to better gauge the time scales and susceptibility of the climate system and to assess the potential consequences of global changes for human health, ecosystems, and social well-being. They are often interested in low-probability, high-consequence events, so it is important for the scientific community to provide information about the likelihood of such events.

TOM WIGLEY

Tom Wigley, from the National Center for Atmospheric Research, first provided an overview of the simplified MAGICC-SCENGEN system as an example of an integrated assessment modeling tool that produces probabilistic outputs. MAGICC (Model for the Assessment of Greenhouse-Gas Induced Climate Change) is a coupled gas cycle-energy balance climate model that can simulate the global-scale behavior of comprehensive three-dimensional climate models. Users can choose emissions scenarios and a number of other model parameters either as single values or as probability density functions (PDFs).
SCENGEN (SCENario GENerator) is a global-regional database that contains the results of a large number of climate model experiments. SCENGEN uses a scaling algorithm to provide information about spatial patterns of climate change. The primary purpose of the MAGICC-SCENGEN software is to allow nonexpert users to investigate the implications of different emission scenarios for future global mean and regional climate change and to quantify uncertainties in these changes.

Wigley then discussed the general question of how climate sensitivity influences global mean temperature projections. It can be concluded from simple energy-balance climate models that both the magnitude and the timing of climate change depend on sensitivity (Figure 1, pg. 27). In a study by Wigley and Raper (2001), PDFs were created for a number of key inputs, including Seq, greenhouse gas emission rates, aerosol radiative forcing, carbon cycle feedbacks, and ocean mixing (Figure 2, pg. 28). This information was used to run more than 100,000 simulations in a simple upwelling-diffusion energy balance climate model. The resulting output is a PDF for change in global mean temperature, and one can evaluate how the results depend on the assumed input value of sensitivity. Wigley and Raper showed that uncertainties in the climate sensitivity, as characterized by its PDF, are a primary source of uncertainty in the projected values of global mean temperature change, especially in the high-end tail of the distribution. This study illustrated the need to better quantify and define a PDF for sensitivity. The probabilistic form of the input value greatly enhances the ability to characterize uncertainties in future projections.

Other points raised by Wigley include the following:

One can utilize methods to minimize the effects of sensitivity uncertainties. For example, one can reduce the spread in output PDFs by calibrating a model against observed climate change (e.g., twentieth century warming). For given historical forcing, only a subset of the assumed range of sensitivities (and other model parameters) may be consistent with the observed warming (and its uncertainty range), so this would limit the output uncertainty range.

We must make the best possible use of available observations. Climate modelers can go beyond evaluating model calculations against the historical record of global mean temperature change; they can also try to simulate the diverse spatial and temporal characteristics of our present climate (which is essentially what is done in many current atmosphere-ocean general circulation model [AOGCM] based detection studies).
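The Wigley and Raper approach, and the observational-constraint point above, can be caricatured in a few lines: sample climate sensitivity and a net-forcing term from input PDFs, propagate each draw through a toy zero-dimensional reduced-form model, and keep only draws consistent with an assumed observed warming range. All distributions and parameter values here are invented for illustration, not taken from the study.

```python
import math
import random

random.seed(0)
F2X = 3.7          # radiative forcing for doubled CO2, W m^-2 (standard value)
OBS = (0.4, 0.8)   # assumed observed twentieth-century warming range, deg C

def transient_warming(seq, forcing, realized=0.6):
    """Toy reduced-form model: equilibrium response scaled by an assumed
    'realized fraction' standing in for ocean heat uptake."""
    return seq * (forcing / F2X) * realized

kept = []
for _ in range(100_000):
    seq = random.lognormvariate(math.log(3.0), 0.4)  # illustrative Seq PDF, deg C
    forcing = random.gauss(2.0, 0.5)                 # illustrative net historical forcing
    if OBS[0] <= transient_warming(seq, forcing) <= OBS[1]:
        kept.append(seq)

# 'kept' is the constrained Seq sample: narrower than the prior because
# only sensitivity/forcing combinations consistent with OBS survive.
print(len(kept) > 0, min(kept) > 0)
```

The same filtering logic is what limits the output uncertainty range in the calibrated case: draws with very high Seq survive only if paired with weak forcing, which thins the high-end tail.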
Currently, modeling approaches tend to give slightly higher central values of sensitivity than estimates based on observational data. Further progress will require a multipronged strategy involving both modeling-based and observation-based approaches, as well as new ways to mesh the two.

Discussion

Mahlman: It now seems tractable to quantify the lower limit of the PDF for climate sensitivity, but we remain “in trouble” in regard to defining the upper limit credibly on such statistical grounds. Are there physically based approaches that can rule out some of the very high (but unreasonable) values of sensitivity (Str) obtained from data-based studies?

Schlesinger: Trying to artificially reduce uncertainties can be dangerous. We have to learn to live with some degree of uncertainty. The biggest problem is that uncertainties in radiative forcing confound the estimates of sensitivity derived from observations.

Ramaswamy: Models should accurately simulate not only past trends but also the characteristics of present-day climate variability. Currently, many climate models fail to replicate observations of interannual variability.

Prather: Even if you cannot constrain the “asymptote” of the equilibrium climate change, perhaps you can constrain the rate of change.

Stone: To do this, you need to know the various time constants and reservoirs of the climate system. Note that deep ocean mixing becomes much more important over long time scales and for stabilization scenarios.

Mahlman: Another “wild card” is the indirect effect of aerosols, which currently has a huge range of uncertainty.

Wigley: True, and we must keep in mind that model results tell us nothing about the uncertainties associated with many possibly important processes that are not represented in the models.
VENKATCHALAM RAMASWAMY

Venkatchalam Ramaswamy, from the National Oceanic and Atmospheric Administration (NOAA) Geophysical Fluid Dynamics Laboratory (GFDL), provided an overview of how climate sensitivity is used in the IPCC Third Assessment Report. The IPCC/TAR reports that climate sensitivity is within the range of 1.7–4.2°C, as derived from seven ocean-atmosphere general circulation models.2 There are no stated probabilities associated with any particular values in this range. Climate sensitivity is discussed in several chapters of the IPCC/TAR, but unfortunately, the terminology and usage are sometimes inconsistent among these chapters.

In Chapter 6 of the IPCC/TAR, “Radiative Forcing of Climate Change,” the forcing-response relationship is defined such that the change in global, annual mean surface temperature equals the global, annual mean radiative forcing (evaluated at the tropopause after equilibration of the stratosphere) multiplied by a global mean climate sensitivity factor (λ, given in units of Kelvin per watt per square meter [K/(W m−2)]). Studies of forcing-response relationships originated with simple one-dimensional radiative-convective models, in which λ was found to be nearly invariant. More complex three-dimensional atmosphere-ocean climate models were later used to examine the applicability of λ to a variety of forcings, including species with a globally homogeneous distribution, such as CO2, and species with a highly varied distribution, such as aerosols and ozone. The IPCC found that λ remains constant to within ~20 percent for globally homogeneous forcings, but for some ozone and absorbing-aerosol cases λ can vary by up to 50 percent. In such cases, the forcing-response relationship depends critically on the vertical structure of the forcing.
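The forcing-response relationship described above is ΔTs = λ · ΔF. A short worked example, using the simplified CO2 forcing expression cited in the TAR and an illustrative mid-range λ (the λ value is an assumption for this sketch, not a quoted model result):

```python
import math

# Simplified expression for CO2 radiative forcing (IPCC TAR):
#   dF = 5.35 * ln(C / C0)   [W m^-2]
dF_2x = 5.35 * math.log(2.0)          # forcing for doubled CO2, ~3.7 W m^-2

lam = 0.8                             # illustrative climate sensitivity factor, K/(W m^-2)
dT = lam * dF_2x                      # global, annual mean equilibrium response, K
print(round(dF_2x, 2), round(dT, 2))  # → 3.71 2.97
```

Because λ is nearly invariant only for homogeneous forcings, the same arithmetic applied to a vertically structured forcing (ozone, absorbing aerosol) can be off by the ~50 percent quoted above.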
Many climate model simulations include estimates of changes in solar and volcanic forcings, but most simulations do not include other, poorly understood forcings such as land-use changes and nonsulfate aerosols (e.g., mineral dust, black carbon). Another limitation is that sensitivity is only an indicator of the global annual mean surface temperature response. It does not define regional temperature responses or the responses of any other climate variables, nor is it an indicator of the possibility of abrupt changes or extreme events (although a high sensitivity value implies larger amplitudes of decadal-scale natural variability).

2 IPCC also reports a climate sensitivity range of 1.5–4.5°C, derived from 15 simple models.
At the new climate equilibrium, F = αT, where F is the radiative forcing change and αT represents the net effect of processes acting to counteract the change in mean surface temperature. The variable α is described as the climate sensitivity factor,3 and the equilibrium climate sensitivity (Seq) is inversely proportional to α.

Ideally, climate sensitivity would be obtained from a coupled atmosphere-ocean general circulation model (GCM) by integrating the model to a new climate equilibrium after doubling the CO2 concentration. However, this can require very long simulations; in some cases, several millennia are needed to attain equilibration because of heat exchange with the deep ocean and the continental ice sheets. Instead, equilibrium climate sensitivities (Seq) are usually estimated with an atmospheric GCM coupled to a mixed-layer (upper ocean) model, in which there is no heat exchange with the deep ocean and the model can be integrated to a new equilibrium within a few decades.

In the IPCC's Second Assessment Report (1995), the average sensitivity of the mixed-layer models for a doubling of CO2 was 3.8°C, with an intermodel standard deviation of 0.78°C. For the 15 mixed-layer models used in the TAR, the average sensitivity was 3.5°C, with an intermodel standard deviation of 0.92°C. Although current models give a slightly lower average value and the intermodel scatter has increased somewhat, these differences are not considered statistically significant.

In IPCC/TAR Chapter 12, “Detection of Climate Change and Attribution of Causes,” it was pointed out that different models may yield different patterns of response for the same forcing. Chapter 12 also points out that within a given model, the pattern of response to different types of forcings can be quite similar.
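Combining the equilibrium balance above with the Chapter 6 forcing-response relation makes the inverse proportionality explicit (writing F2x for the doubled-CO2 forcing, under the definitions as given here):

```latex
F = \alpha \,\Delta T
\quad\Longrightarrow\quad
S_{\mathrm{eq}} = \Delta T\big|_{2\times \mathrm{CO_2}} = \frac{F_{2\times}}{\alpha},
\qquad\text{and comparison with } \Delta T = \lambda F
\text{ gives } \lambda = \frac{1}{\alpha} \text{ at equilibrium.}
```

This also makes concrete the footnoted caution that α (Chapter 9) and λ (Chapter 6) are not the same quantity: one is the reciprocal of the other.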
Even if the forcing is local (e.g., direct sulfate aerosol), the first-order response is global in nature and determined by many of the same feedback processes that also determine the response to uniformly distributed forcings. This similarity of response patterns is one of the things that makes it so difficult to separate the responses to greenhouse gas and sulfate aerosol forcings in the observed record.

Discussion

Schlesinger: Should the aerosol indirect effect be characterized as a forcing or a response, and how does this affect the estimate of sensitivity?4

Ramaswamy: This is an open question that has not been satisfactorily addressed.

Prather: Note that the feedbacks examined in the IPCC/TAR are basically limited to physical feedbacks. There are a number of possibly important chemical and biological feedbacks that have not been considered in IPCC climate projections.

Socci: Is any definition of sensitivity even a practically usable concept, given the amount of regional variability in climate impacts?

Ramaswamy: Generally, global mean metrics are limited in their applicability.

Broccoli: Sensitivity is relevant to the consideration of local and regional climate changes because as sensitivity increases, the mean of the distribution (and thus the probability of a temperature increase in any one place) will increase.

ANTHONY BROCCOLI

Anthony Broccoli, from Rutgers University, discussed intercomparison studies between recent versions of the GFDL and NCAR climate models (both of which are under development) and their representation of key climate feedbacks (defined as sequences of interactions that determine the response of a system to an initial perturbation). These studies found that the value of sensitivity (Seq) estimated by the NCAR model is much lower than the value from the GFDL model. We have to understand what causes these differences and determine which (if either) estimate is correct.
Current models are relatively consistent in representing water vapor feedbacks, and model simulations of interannual-decadal variations in tropical mean water vapor match well with observations. In contrast, models differ considerably in their simulation of cloud feedbacks.

3 Note that the meaning of the climate sensitivity factor α used in IPCC Chapter 9 differs from that of the climate sensitivity factor λ used in IPCC Chapter 6.

4 Note that this issue is discussed later in the presentation by Joyce Penner.
Cloud feedbacks differ substantially between the NCAR and GFDL models, and these differences very likely account for most of the difference in their respective estimates of climate sensitivity. Even modest changes in cloud parameterizations have been found to affect cloud feedbacks and, hence, sensitivity. The primary difference is that in the NCAR model there is a strong negative feedback involving low clouds, whereas in the GFDL model there is a moderate positive feedback involving low clouds, which adds to the positive feedback involving high clouds.

It is important to realize that feedbacks interact. For instance, small changes in the strength of the water vapor feedback may increase or decrease the level of uncertainty due to the cloud feedback: if the water vapor feedback is weak, the uncertainty due to the cloud feedback is smaller. This interaction must be taken into account when diagnosing feedbacks in models. It is also important to understand that even regional feedbacks can have global-scale impacts. For instance, snow-ice albedo feedbacks are geographically confined to a small percentage of the earth's surface but can affect temperature patterns over much of the planet.

So how do we go about resolving these uncertainties? Developing better estimates of climate sensitivity requires a multifaceted approach involving model diagnostics, field measurements of important feedback processes, analysis of global observations, comparisons of simulated and observed climate history, and process modeling. GFDL and NCAR, along with their partners, are active in all of these research areas. Within another six months or so, more comprehensive model intercomparison studies will be under way.

In comparisons of models to observations, the GFDL model does capture some features of seasonal variations in cloud climatology, although it has a general bias toward too much cloudiness.
Interannual variability of cloudiness is similar in both the GFDL and the NCAR models, but the cloud response to warming is quite different. Thus, interannual variability may not be an adequate surrogate for global warming.
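The feedback-interaction point raised by Broccoli, that the uncertainty contributed by the cloud feedback depends on the strength of the water vapor feedback, falls out of the standard linear feedback expression ΔT = ΔT0 / (1 − Σ fi). All numbers below are illustrative, not values from either model:

```python
def sensitivity(t0, feedbacks):
    """Linear feedback analysis: equilibrium response for a no-feedback
    response t0 (K) and dimensionless feedback factors f_i."""
    return t0 / (1.0 - sum(feedbacks))

T0 = 1.2                        # illustrative no-feedback 2xCO2 warming, K
CLOUD_LO, CLOUD_HI = -0.1, 0.2  # same assumed cloud-feedback uncertainty in both cases

# Spread in sensitivity caused by cloud uncertainty, for weak vs. strong
# water vapor feedback:
spread = {}
for f_wv in (0.2, 0.5):
    spread[f_wv] = (sensitivity(T0, [f_wv, CLOUD_HI])
                    - sensitivity(T0, [f_wv, CLOUD_LO]))

print(round(spread[0.2], 2), round(spread[0.5], 2))  # → 0.67 2.0
```

The same cloud uncertainty produces a three-times-wider sensitivity spread when the water vapor feedback is strong, because the 1/(1 − Σ fi) amplification steepens as the total feedback approaches one. This is why the weak-water-vapor case carries a smaller cloud-driven uncertainty, as stated in the text.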