MEETING THE NEEDS OF THE USER COMMUNITY

ROSINA BIERBAUM

Rosina Bierbaum, from the University of Michigan, discussed the various ways in which climate sensitivity estimates may be applied for decision-making purposes. For instance, S_tr and S_eff provide important input parameters for integrated assessment models. These models, in turn, are used to evaluate the temperature commitment and impacts resulting from various emission scenarios. This information may ultimately be used to identify “levels of dangerous interference” in the climate system and to set specific emission reduction targets.

The climate sensitivity uncertainty range is often misunderstood or misrepresented by policy makers; for instance, the range of 1.5 to 4.5°C presented in the IPCC TAR is commonly characterized as a “factor of three” uncertainty. Because the range is the same as it was in 1979, it is also often incorrectly assumed that there has been no scientific progress in recent decades, even though today this range is stated far more confidently (roughly 67 percent confidence in the range in 1979 versus greater than 90 percent confidence in 2001).

Some questions about climate sensitivity that are of interest to policy makers, but are not yet clearly answered by the scientific community, include the following:

- What is the central, most likely value?
- What does this value tell us about the rate of change and the possibilities for abrupt changes?
- How does the concept of sensitivity apply to radiative forcing agents other than CO2 (solar, aerosols, etc.)?
- Is S_tr constant, or does it vary as feedback mechanisms change with time?
- Is the full range of possible values encompassed in current estimates, given that these estimates neglect some feedbacks (e.g., land-use changes, surface vegetation)?

The general questions that decision makers focus on include the following: How serious is this problem? What does it mean to me in my “place”? What can I do about it?
They may be seeking to evaluate vulnerability to changes in climate means and extremes, or seeking to enhance flexibility, robustness, and the capacity to adapt to such changes. Finally, they may be seeking near-term response options, including possible management, institutional, technological, and legal or policy changes.

To answer such questions and to provide useful information for risk assessments and other policy-relevant analyses, the scientific community has to provide more information than just estimates of global average temperature change. We must consider whether it may be possible to develop metrics of climate response that go beyond a global average warming at 2×CO2. For instance, could one define sensitivity functions for precipitation, extreme weather events, sea-level rise, and so forth? Could a sensitivity parameter define regional spatial patterns for any of these variables? Are there approaches that would help characterize rates of change, including possible abrupt, discontinuous changes? It is also important to think in terms of “multiple stresses,” that is, the interactions of climate change impacts with factors such as urbanization and other land-use changes, air pollution, and invasive species. These pieces of information are needed for robust decision making, but we do not yet have approaches to tackle many of these questions.

Bierbaum showed examples of the types of regional-scale changes that are projected to occur as a result of climate change (e.g., degradation of various ecosystems, fire hazard in boreal regions, shifts in forest biomes, regional precipitation changes). The huge uncertainties associated with many of today’s regional-scale impact projections make it difficult for decision makers to know what to do with this information. For instance, in the U.S. National Assessment, the different models that were used projected widely varying regional impacts, and as a result, Congress simply dismissed them.
Similarly, in projections for the Lake Ontario region, models do not even agree on whether the lake levels will be higher or lower than the current average, which makes it very difficult to plan response strategies for managing water levels.
Regardless, it is still useful for policy makers to get a sense of the kinds of changes that could occur, even if these are not precise projections. The questions above highlight a key insight from this workshop: scientists communicate the nuances of climate change science to policy makers and decision makers far less effectively than they communicate among themselves. This suggests the value of continued discussion of this subject, with emphasis on how best to communicate global warming science to policy makers and decision makers.

TOM WIGLEY

Tom Wigley addressed several topics related to communicating about climate change and climate sensitivity.

Characterizing Uncertainty in the Spatial Patterns of Climate Change. Climate sensitivity is the primary metric for characterizing the magnitude of climate change, but it can be misinterpreted and does not address the impacts of climate change directly. For impact analysis, we need spatial details of change. Currently, the best way to get a handle on uncertainty in the spatial patterns of climate change is to compare the results of different models. The MAGICC-SCENGEN system allows one to compare the results of different AOGCMs and assess the intermodel S/N (signal-to-noise) ratio, defined as the average signal across all of the models divided by the intermodel standard deviation. For annual mean temperature projections, the S/N for most of the world is high, meaning that the models agree well; for projections of precipitation, however, the S/N is low for most of the world (less than 1 everywhere but in the high latitudes), indicating a much greater level of uncertainty. Note that in some cases the inferred noise level may simply reflect realization-to-realization variability within any one model, as well as the differences between models, demonstrating the need to intercompare multiple-member ensembles of climate model runs.
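The intermodel S/N calculation described above can be sketched in a few lines. This is a minimal illustration with synthetic data; the grid shape, model count, and all numerical values are invented for the example and are not taken from MAGICC-SCENGEN:

```python
import numpy as np

def intermodel_snr(projections):
    """Intermodel signal-to-noise ratio at each grid point.

    projections: array of shape (n_models, n_lat, n_lon) holding each
    model's projected change for one variable.

    Returns the mean change across models (the "signal") divided by
    the intermodel standard deviation (the "noise").
    """
    signal = projections.mean(axis=0)
    noise = projections.std(axis=0, ddof=1)  # spread between models
    return signal / noise

# Synthetic example: 16 models on a toy 4x8 grid.
rng = np.random.default_rng(0)
# Temperature-like case: models cluster tightly around a 2.5 K change.
temp = 2.5 + 0.3 * rng.standard_normal((16, 4, 8))
# Precipitation-like case: small mean change, large intermodel spread.
prec = 0.1 + 0.5 * rng.standard_normal((16, 4, 8))

print(intermodel_snr(temp).mean())           # high S/N: robust signal
print(np.abs(intermodel_snr(prec)).mean())   # |S/N| mostly below 1
```

A high S/N means the models agree on both sign and rough magnitude of the change; where |S/N| falls below 1, the intermodel spread exceeds the mean signal, which is the situation the text describes for most precipitation projections.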
Characterizing Progress in Climate Modeling Capabilities. It appears to some people that little progress has been made, since current uncertainties in sensitivity estimates are similar to those of a few decades ago. In fact, however, there has been substantial progress in many important aspects of climate modeling (e.g., see Covey et al., 2003). This progress was illustrated with a comparative study of 16 models, showing how well the annual mean precipitation patterns projected by the various models agreed with observations. The best models have ~80 percent of their variance in common with observations, while the worst have ~40 percent. In the past 10 years, all of the models have improved dramatically in their ability to simulate present-day precipitation patterns accurately. The Hadley model explains twice as much of the observational variability as it could when first created, and the worst models in the analysis now perform better than Hadley and other leading models did a decade ago.

New Approaches for Reducing Uncertainty. Some researchers are developing ways to use probabilistic information to circumvent uncertainties in sensitivity and provide useful input to policy. For instance, Myles Allen’s group uses an approach wherein a large set of models is calibrated against past observations and then only the best of these models are used to make future projections. (It is still unclear, however, whether such approaches make the most credible projections.) Similarly, Wigley tried an approach of evaluating various PDFs for key input values, determining which cases give twentieth-century changes that lie within the observed range, and using only those cases to produce a probabilistic projection of global mean temperature change. (For example, it was found that the combination of a low aerosol radiative forcing with a low sensitivity would not fit the observations and thus could be discarded.)
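The observational-constraint idea can be illustrated with a toy Monte Carlo filter. Everything below (the one-line warming emulator, the forcing values, the observed range, the prior distributions) is an invented stand-in for the actual simple climate model and data, intended only to show the keep-or-discard logic:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000

# Hypothetical priors for two uncertain inputs (illustrative values only).
sensitivity = rng.uniform(1.0, 6.0, N)         # K per CO2 doubling
aerosol_forcing = rng.uniform(-1.5, 0.0, N)    # W m^-2 (net cooling)

# Toy emulator of 20th-century warming: an assumed stand-in, not the
# simple climate model used in the study.
F_GHG = 2.4       # assumed 20th-century greenhouse forcing, W m^-2
F_2X = 3.7        # forcing for a CO2 doubling, W m^-2
REALIZED = 0.6    # assumed fraction of equilibrium warming realized
warming = REALIZED * sensitivity * (F_GHG + aerosol_forcing) / F_2X

# Keep only members whose simulated warming lies in the observed range.
obs_lo, obs_hi = 0.4, 0.8   # K, rough observed 20th-century warming
ok = (warming >= obs_lo) & (warming <= obs_hi)

# In this toy setup, low sensitivity combined with strongly negative
# aerosol forcing yields too little warming and is discarded.
print(ok.mean())                                  # fraction retained
print(sensitivity[ok].min(), sensitivity[ok].max())
```

Only the retained members would then be run forward to build a probabilistic projection; the point of the sketch is that observations prune jointly implausible parameter combinations even when each parameter alone is plausible.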
These studies found that the output was not strongly dependent on the input sensitivity value, which indicates that it is very difficult to derive a PDF for sensitivity directly from observations.

Determining CO2 Stabilization Targets. Probabilistic approaches can also be used to address a question of central interest to policy makers: What is a “dangerous level of interference” in the climate system? With a simple climate model, Wigley input PDFs for sensitivity, for non-CO2 forcing, and for a desired global warming limit, and then generated a PDF for the resulting CO2 concentration stabilization target.

Spatial Scaling. A question of great interest to researchers is whether one can use a scaling method to translate global mean values into spatial patterns of climate change, without having to rely on climate models. Spatial scaling is an attempt to decouple those components of change that are and are not related to sensitivity; that is, the global mean change (which is the sensitivity-dependent term) is separated from the patterns of change per unit of global mean warming (normalized patterns).
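The stabilization-target calculation described above can be sketched as a Monte Carlo draw through the standard logarithmic CO2 forcing relation. The PDFs, the warming limit, and the inversion formula below are illustrative assumptions, not the inputs or model actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

# Assumed, purely illustrative PDFs for the uncertain inputs:
S = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=N)  # sensitivity, K
dT_nonco2 = rng.normal(0.3, 0.15, N)    # warming from non-CO2 forcing, K
dT_limit = 2.0                          # assumed global warming ceiling, K

C0 = 280.0   # preindustrial CO2 concentration, ppm

# Equilibrium warming for stabilization at concentration C:
#   dT = S * log2(C / C0) + dT_nonco2
# Inverting for the CO2 level consistent with the warming limit:
C_target = C0 * 2.0 ** ((dT_limit - dT_nonco2) / S)

# Summarize the resulting PDF of the stabilization target.
lo, med, hi = np.percentile(C_target, [5, 50, 95])
print(round(lo), round(med), round(hi))
```

The spread of `C_target` is the policy-relevant output: a single warming limit maps onto a wide range of allowable CO2 concentrations once the sensitivity and non-CO2 uncertainties are propagated through.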
The simplest form of scaling is represented by the equation

    ΔY(x,t) = ΔT(t) ΔŶ(x),

where the change in some climate variable of interest (as a function of space and time) equals the global mean temperature change multiplied by a normalized pattern of change. In a more general form of the equation, one can distinguish between global-scale forcings due to well-mixed greenhouse gases and spatially confined forcings due to short-lived species such as aerosols. Scaling is thus a way to assess whether different types of forcings can be combined linearly to obtain the overall response. Most modelers are not aware that these spatial scaling techniques were introduced many years ago and are widely applied in the climate impacts community (in software such as SCENGEN and COSMIC) and in several integrated assessment models. These scaling techniques are extremely valuable and should be applied more widely to studies of other forcing agents such as ozone and soot aerosols. Note, however, that they employ some fundamental assumptions that have not been adequately tested. Questions that may require further investigation include the following: How valid is the assumption that various types of forcings, which exhibit different spatial and temporal patterns, can be added linearly? How much do the normalized patterns of change depend on the sensitivity? To answer such questions we need more information about the climate effects of individual forcing factors, as well as their net effect.
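The simplest scaling relation above can be sketched directly with gridded arrays. The grid, the global warming pathway, and the pattern values are invented for illustration:

```python
import numpy as np

def pattern_scale(dT_global, pattern):
    """Simplest pattern scaling: dY(x, t) = dT(t) * dYhat(x).

    dT_global: global mean temperature change at each time, shape (n_t,)
    pattern:   normalized change per K of global warming, shape (n_lat, n_lon)
    Returns the scaled field, shape (n_t, n_lat, n_lon).
    """
    return dT_global[:, None, None] * pattern[None, :, :]

# Illustrative normalized pattern (e.g., precipitation change in percent
# per K of global warming) on a toy 3x4 grid; values are invented.
pattern = np.array([[ 4.0,  3.0,  1.0, -2.0],
                    [ 2.0,  0.5, -1.0, -3.0],
                    [ 6.0,  5.0,  2.0,  0.0]])

dT = np.array([0.5, 1.0, 2.0])   # global mean warming pathway, K
field = pattern_scale(dT, pattern)

# Doubling the global mean warming exactly doubles the local response;
# this linearity is the assumption the text notes has not been fully tested.
assert np.allclose(field[2], 2 * field[1])
print(field.shape)
```

The more general form in the text would use one such term per forcing class (e.g., a greenhouse-gas pattern plus an aerosol pattern) and sum them, which is precisely the linear-additivity assumption flagged as needing further investigation.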