Natural Climate Variability on Decade-to-Century Time Scales
BATTISTI: Explicitly examining the mechanisms and processes that yield variations in climate would take more computational power than I for one can fathom, even if we looked at only a subset of the system. I think what we should try for is determining the sensitivity of the simulated climate variability to the parameterization schemes for the unresolved physics. That sensitivity is likely to be a strong function of model resolution.
SARACHIK: You know, it seems to me that we should start concentrating on the signal rather than the noise. Some of the variability we've been talking about is undoubtedly scientifically interesting, but it's relatively local. It seems to be bounded by something on the order of one-third or one-half of a degree of temperature. My inclination is to look at longer time scales, something like 1000 years, and a signal big enough to be called climate change.
REIFSNYDER: I'd like to take philosophical issue with your statement that models are used to test hypotheses. I'd say that models are simply functional expressions of hypotheses, and any meaningful testing must involve independent data sets. Testing against data used in the specification of the model, or against general phenomena, won't tell you anything about predictive power.
BATTISTI: Well, that's true. The solution to a model is really the solution to a set of equations. But what worries me is the possibility that the results of complex, numerical GCMs will be used to build hypotheses of how model phenomena come about, without adequate attention to whether those phenomena appear in the observational or proxy data bases. I'd rather see models used to test hypotheses about the mechanisms responsible for observed phenomena. That is, we should be testing our understanding of a phenomenon, not defining one.
LINDZEN: I think that our interest has traditionally been signal detection—greenhouse warming, for instance. That implicit search for something dramatic worries me. I think we should be equally concerned with the constraints on detectable phenomena provided by the data themselves.
LEVITUS: It seems to me essential that we understand decadal-scale variability better before we can go on to longer time scales or develop better models. First we have to be able to parameterize the processes better and to understand the system on shorter time scales.
WEAVER: The people who complain that this or that process hasn't been included in a model seem not to understand that you can't possibly do a systematic analysis by putting everything in at once.
RIND: Apropos of some comments by both Dave and Ed, I'd like to mention again that the NSF/NOAA ARRCC—Analysis of Rapid and Recent Climate Change—program is currently working on reconstructing the big climate changes over the past 1000 years, which include two cold periods and one warm one. We want to put together a worldwide picture of how the climate then compares with today's, and ultimately see whether models can reproduce it.
MARTINSON: Jerry, I just wanted to add that I'm glad you emphasized that it's almost impossible to evaluate the role of sea ice in long-term changes when a model has prescribed flux corrections, since in reality so much of that flux is driven by sea ice itself.