Increased Societal Demands on U.S. Modeling
There are clear societal needs and mandates to which the modeling community must respond. As these needs arise, the ability of the modeling community, as presently constituted, to develop and deliver relevant products continues to fall behind (NRC, 1998a). Understanding the influence of climate change on environmental cycles, at both regional and global scales, is crucial to many aspects of society. For example, the combination of observations of climate change and projections of future alterations to climate resulting from anthropogenic inputs and natural variability has increased awareness of the importance of accounting for the impacts of climate change. As a result, there have been increasing demands on the climate modeling community to provide climate data for use in assessments of the impacts of climate change at various time scales and on regional and global spatial scales. This section outlines some of these recent demands.
4.1 OZONE ASSESSMENTS
The discovery of the catalytic destruction of ozone by chlorinated compounds, in particular chlorofluorocarbons (CFCs), led to a number of international assessments of the physical and chemical states of the stratosphere under the auspices of the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP).
The first of these assessments (WMO, 1982) dealt with the observations and theory of ozone chemistry using one- and two-dimensional models of the chemistry of the stratosphere. By the time of the assessment leading up to the Vienna Convention in 1985 (WMO, 1985), the situation had not changed very much, although the use of full three-dimensional models for off-line tracer advection in chemical calculations was just becoming possible.
The Montreal Protocols, which limit the release of CFCs into the atmosphere, were signed in 1987, and the 1989 assessment (WMO, 1989) was input to these protocols. The state of modeling had advanced significantly with the availability of three-dimensional models of the stratosphere and the recognition that some stratospheric chemistry problems, such as the Antarctic ozone hole, were inherently three-dimensional. The Montreal Protocols required the parties to assess the control measures on the basis of available scientific, technical, and economic information at least every four years, beginning in 1990. Notable in the first such assessment (WMO, 1991) are chapters on the radiative forcing of climate, the role of ozone as a greenhouse gas, and an evaluation of the effects of aircraft and rockets on the ozone layer. The expanded scope of the assessment seems to have required rapid turnarounds, so mostly simplified models were used for the ozone chemistry, which had by then become extremely complex. Both the 1994 and 1998 assessments (WMO, 1994; WMO, 1998) are dominated by two-dimensional models, but there were indications that fully three-dimensional models of the coupled climate and stratospheric chemistry were coming online.
The relationship between ozone and the climate problem, wherein ozone is affected by the climate of the atmosphere and in turn affects the climate, indicates the difficulty of the problem and points to the time when common tools will be used for both the Ozone Assessments and the Intergovernmental Panel on Climate Change (IPCC) assessments (below). As long as these remain separate, it appears that only two-dimensional models can, at present, address the full range of purely stratospheric chemistry concerns. As the problems are recognized to be intimately related and mutually dependent, it is expected that the demands on the modeling community for both assessments will converge.
4.2 IPCC ASSESSMENTS

The IPCC was established in 1988 under the joint auspices of the WMO and the UNEP: “to assess the scientific, technical and socio-economic information relevant for the understanding of the risk of human-induced climate change. It does not carry out new research nor does it monitor climate-related data. It bases its assessment mainly on published and peer reviewed scientific technical literature” (http://www.ipcc.ch/about/about.htm).
The IPCC is organized into three scientific working groups. Only Working Group 1, which is devoted to scientific aspects of the global climate and its changes, concerns us here. Working Group 1 publishes an assessment every five years; the third assessment report was recently released. Because the assessment reports are consensus documents involving hundreds of the world's scientists, and because the Working Group reports undergo international governmental review and endorsement, the assessment reports carry unusual authority and serve as standard sources for climate and climate change information.
The involvement of U.S. scientists in the IPCC process is through participation in writing the assessment reports, in reviewing drafts of the reports, either as individual scientists or as part of the governmental review, and, perhaps most importantly, as producers of the scientific information on which the reports are based. The USGCRP program office supports individual scientists' participation in the IPCC writing process, organizes the U.S. governmental review, and provides the venue, resources, and scientific and technical personnel to support international Working Group II on Impacts, Adaptation, and Vulnerability.
Although participation by individual modelers in the IPCC Working Group I is informal and voluntary, it is a point of national pride that national models be included. Indeed, it is hard to see how the major industrialized nations of the world could make vital national decisions about greenhouse gases unless their own scientists have been involved. The Hadley Centre, for example, was created within the United Kingdom Meteorological Service to perform the data analyses and modeling runs needed to feed the IPCC process, especially with respect to detection and attribution of existing climate change and projection of future climate change. It is in these areas, which require long runs of coupled climate models, that the United States has had no similar concerted response, and has been shown to be lagging (NRC, 1998a).
4.3 U.S. NATIONAL ASSESSMENT
The USGCRP has been mandated by statute to undertake scientific assessments of the potential consequences of global change for the United States. The Global Change Research Act of 1990 (P.L. 101-606) states that the federal interagency committee for global change research of the National Science and Technology Council “shall prepare and submit to the President and the Congress an assessment which –
integrates, evaluates, and interprets the findings of the Program and discusses the scientific uncertainties associated with such findings;
analyzes the effects of global change on the natural environment, agriculture, energy production and use, land and water resources, transportation, human health and welfare, human social systems, and biological diversity; and
analyzes current trends in global change, both human-induced and natural, and projects major trends for the subsequent 25 to 100 years.”
This first U.S. national assessment (http://www.gcrio.org/nationalassessment/) includes a set of regional assessments, assessments of the consequences of climate change on five important societal and economic sectors of the nation (water resources and availability, agriculture and food production, human health, forests, and coastal areas), and a synthesis for policymakers.
The National Assessment used two model scenarios for long-term climate change: one produced by the Hadley Centre and one by the Canadian Climate Centre. These global models were chosen mostly because their output was available in time for the first step of the National Assessment, presentation of the scenarios to regional meetings so that the individual regions could study scenarios for future changes and possible responses in their own regions. The dependence of the National Assessment on foreign model results is contrary to the recommendations outlined in NRC (1998a), which argues that it is inappropriate for the United States to depend on foreign models for decisions about its own national interests. However, further review (Box 4-1) illustrates the difficulty of having a U.S. model respond to the needs of the National Assessment.
The 20 U.S. regions were asked to consider the two differing scenarios, using the best guess of atmospheric concentrations of radiatively active constituents over the next 25–100 years. The regions were then asked to interpret the results, and the uncertainty of the results, for their regions in terms of the interacting effects on such elements as water, energy, ecosystems, coasts (if any), forests, agriculture, and quality of life. The regional specificity of the two global models was poor, with a resolution of T42, or approximately 300 km at the latitude of the United States. At these resolutions orography was severely truncated, and it became difficult to assess future water resources in regions that depend on mountain icepack for meltwater, since the extent of such icepack was badly misrepresented in the models and the height of the mountains was generally too low; the effect of warming on the icepacks was therefore generally too large (e.g., Plate 2).
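The coarseness of a T42 model can be made concrete with a rough rule of thumb. The sketch below, a simplification that assumes the common quadratic Gaussian grid with roughly 3n + 1 longitudes for a triangular truncation Tn, estimates the zonal grid spacing at a given latitude; the function name and the rule itself are illustrative, not a statement of how either assessment model was actually gridded.

```python
import math

EARTH_RADIUS_KM = 6371.0

def grid_spacing_km(truncation: int, lat_deg: float) -> float:
    """Approximate zonal grid spacing for a triangular spectral
    truncation T<n>, assuming a quadratic Gaussian grid with
    roughly 3*n + 1 longitudes (a rule of thumb, not exact)."""
    n_lon = 3 * truncation + 1
    circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))
    return circumference / n_lon

# T42 at the equator and at a mid-U.S. latitude (about 40 N)
print(round(grid_spacing_km(42, 0.0)))   # on the order of 300 km
print(round(grid_spacing_km(42, 40.0)))
```

At grid spacings of this size an individual mountain range spans only a handful of grid cells, which is why model orography is so strongly smoothed and lowered.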
The public law that called for the assessment requires a similar assessment every four years, although the magnitude of the task was hardly foreseen by the authors of the law. A regular assessment, conducted at specified intervals (probably no more often than every 10 years or so), would ensure that:

The inter-communication that produces the assessment would continue.
BOX 4-1 The National Assessment
In the winter of 1998 an official of the USGCRP asked scientists at the National Center for Atmospheric Research (NCAR) if they could run one or more simulations with the NCAR Climate System Model (CSM) that could be used in the National Assessment of Climate, which was being planned at that time. The planning committee for the assessment had decided to use data from two models, one from Canada and the other from the United Kingdom, as a starting point. The USGCRP official felt it would be desirable to have at least one American model being used for the assessment. There was a fairly tight time line to produce the data, approximately 10 months, in order for CSM data to be given to the people who would carry out the assessments.
The NCAR scientists recognized that this would be a major undertaking. They had no readily available emission scenarios for the twenty-first century, and it was unclear how quickly credible scenarios could be developed. It was also unclear how much could be done with important components, particularly interactive sulfate aerosols, that were not included in the original CSM. A major complication was the lack of supercomputer time to carry out the necessary runs.
The Climate of the 20th Century run performed with the CSM took the equivalent of three months of fully dedicated Cray C-90 time. (This means all 16 processors of the C-90 running 24 hours a day, 7 days a week. In practice, the C-90 was never fully dedicated to the CSM because of demands from competing modeling groups; the actual time to completion was closer to 6 to 12 months.) The C-90 was part of NCAR's Climate Simulation Laboratory, which supported a variety of modeling activities. It was not possible for the U.S. Assessment runs to use the fully dedicated machine. When it became clear that NCAR could not meet the deadline using the computers at NCAR, the USGCRP official volunteered to ask other agencies participating in USGCRP whether they had time available on a C-90 for this project. They did not.
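The figures quoted above can be turned into simple back-of-envelope arithmetic. In this sketch the processor count and the three dedicated months come from the text; the machine-sharing fractions are illustrative assumptions chosen only to show how the quoted 6-to-12-month range arises.

```python
# Back-of-envelope arithmetic for the 20th-century CSM run.
PROCESSORS = 16            # C-90 processors cited in the text
DEDICATED_MONTHS = 3       # equivalent fully dedicated wall-clock months
HOURS_PER_MONTH = 30 * 24  # approximate

dedicated_hours = DEDICATED_MONTHS * HOURS_PER_MONTH
processor_hours = dedicated_hours * PROCESSORS
print(processor_hours)  # -> 34560 processor-hours

# If the project receives only a fraction of the shared machine, the
# wall-clock time stretches accordingly: at one-half to one-quarter of
# the machine (assumed shares), 3 dedicated months become 6 to 12
# calendar months, matching the range quoted above.
for share in (0.5, 0.25):
    print(DEDICATED_MONTHS / share)  # -> 6.0 then 12.0
```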
NCAR scientists continued to work on the project, and two scenarios were developed: (1) a “business as usual” scenario with no political intervention to restrict greenhouse gas emissions; and (2) a “doubling carbon dioxide” scenario with interventions that restricted emissions to levels at which the concentration of carbon dioxide in the atmosphere would asymptote to 550 ppm shortly after 2100. These scenarios were developed before any scenarios were available for the most recent IPCC report, but they do not differ greatly from the IPCC scenarios. An interactive sulfate aerosol component model, for the direct effect only, was completed and tested in the atmosphere model. The scientific parts of the project were successfully carried out. Unfortunately, no extra computer time was found and the deadline was not met. NCAR completed the runs (using private resources to buy computer time in Japan) after the deadline, and the data were made available to the assessment community, but they were not used.
Comparison of successive assessments would provide deeper insight into the response of the global and regional climate.
The regions would demonstrate progress in understanding the integrated changes likely to occur and would gain a feeling for the vulnerability of their societies and institutions.
The questions arising from the ongoing assessment would stimulate the local regions' research agendas concerning their own modes of life and activities.
The ongoing nature of the assessment indicates that the demands on models will also be ongoing. The inability of the main U.S. modeling institution to come to grips with the first U.S. National Assessment (Box 4-1) and the inability of the United States to address climate change assessment requirements (NRC, 1998a) indicate that responding to the national assessment in the future will be a major problem for the U.S. modeling community.
4.4 SEASONAL-TO-INTERANNUAL FORECASTING
The development of seasonal-to-interannual forecasting grew out of the Tropical Ocean-Global Atmosphere (TOGA) program and the resulting understanding achieved in simulating and forecasting the El Niño/Southern Oscillation (ENSO). The history of TOGA and the development of short-term climate forecasting have been well documented in NRC (1996) and in a special issue of the Journal of Geophysical Research (Vol. 103, Issue C7, June 29, 1998).
The early realization of the usefulness of the forecasts led to the establishment of the ENSO Observing System in the tropical Pacific Ocean (NRC, 1994), which, because of this perceived usefulness, survived the end of TOGA. It is maintained as a quasi-operational observing system to this day (McPhaden et al., 1998). Because the ENSO Observing System provides the initial conditions needed to make forecasts of the phases of ENSO, a number of different seasonal-to-interannual forecasting efforts were established throughout the world, all using the data produced by the ENSO Observing System. Significant seasonal-to-interannual prediction efforts exist at many places in the United States, in Australia, at the ECMWF, and in Germany.
A comparison of the prediction activities at the National Centers for Environmental Prediction (NCEP) and ECMWF provides insight into a potential path for seasonal-to-interannual prediction in the United States (Ji et al., 1996; Barnston et al., 1994). The NCEP coupled model is a two-tiered system designed to save computer time: it first initializes and then predicts the tropical Pacific SST using a T40 (approximately 300 km) atmosphere coupled to an ocean in which only the tropical Pacific is active. The active Pacific has 150-km resolution in the zonal direction and 30-km to 100-km resolution in the meridional direction from the equator to 45°. The tropical Pacific SST is calculated monthly, 6 months in advance. The predicted tropical SST, together with the rest of the global SST started at observed values and relaxed to climatology, is then used to force a global atmospheric model at higher resolution to predict the global effects of tropical Pacific SST. An ensemble of 18 members of the T40 NCEP model is then used to predict the climate over the United States six months in advance. The final outlook, part of the NCEP product suite (available at http://www.ncep.noaa.gov/), is subjectively determined from this ensemble and other statistical forecasts and is issued once a season.
In contrast, the ECMWF predictions are run as a one-tiered system: the T63 atmosphere (about 200-km resolution) is coupled to a global ocean with resolution similar to NCEP's, high at the equator and decreasing gradually with latitude. The global ocean is initialized each week using all available global data, and an ensemble of seven 6-month predictions is run every week with the full coupled model.
Both NCEP and ECMWF release public forecasts of ocean SST. The NCEP outlook is for the United States only, while the ECMWF issues forecasts for the global tropics, Africa, South America, and East Asia; all other forecasts are available to member states of the European Centre only.
The potential economic value of these forecasts has been explored in detail (a bibliography is maintained at http://www.esig.ucar.edu/biblio/comprehensive.html); the public sector and many industries use these forecasts and clamor for more forecast skill. A large amount of research on seasonal-to-interannual predictability and prediction is being conducted. (The July 2000 issue of the Quarterly Journal of the Royal Meteorological Society was devoted to the results of two programs, PROVOST (Prediction of Climate Variations on Seasonal-to-Interannual Time Scales) and DSP (Dynamical Seasonal Prediction), which are looking toward coordinated atmospheric GCM simulations in response to specified SST.) A bibliography of several hundred references on seasonal-to-interannual forecasts is maintained at http://www.atmos.washington.edu/tpop/pop.htm. It is clear that many nations, including the United States, will expand their research in seasonal-to-interannual prediction, create operational forecast systems, and apply the results for public good and private gain.
4.5 DECADAL AND LONGER VARIABILITY
One of the major advances of climate research over the last decade or so has been the realization that decadal and longer variability in the past has taken place in only a handful of patterns, in particular the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO) and its counterpart, the Arctic Oscillation (AO), the Atlantic Subtropical Dipole, the Antarctic Oscillation, and a number of other more regional patterns (NRC, 1998c). These patterns of variability have a profound effect on water resources, storms, food and fish resources, energy production and
consumption, and the general economy and well-being of societies. Over the United States, increasing amounts of the variance of temperature and precipitation are successively explained by ENSO, the PDO, and the NAO (Higgins et al., 2000), implying that the ability to predict these patterns would account for successively greater amounts of these crucial climatic variables, with obvious social and economic implications. The response of these patterns to the addition of radiatively active constituents to the atmosphere is also an active field of research, with the idea gaining currency that the intensity and patterns of global warming cannot be understood without understanding how these decadal patterns change with time.
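The notion of variance "successively explained" by a nested set of patterns can be illustrated with a toy regression. The sketch below uses purely synthetic stand-ins for the ENSO, PDO, and NAO indices and a fabricated temperature series; it shows only the statistical idea (R-squared rising as each pattern is added as a predictor), not the actual results of Higgins et al. (2000).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized "indices" -- illustrative stand-ins, not data.
enso = rng.standard_normal(n)
pdo = rng.standard_normal(n)
nao = rng.standard_normal(n)

# A toy regional temperature anomaly driven by all three patterns plus
# noise, with ENSO (by construction) the strongest influence.
temp = 0.8 * enso + 0.5 * pdo + 0.3 * nao + rng.standard_normal(n)

def variance_explained(y, predictors):
    """R^2 of an ordinary least-squares fit of y on the given columns."""
    X = np.column_stack([np.ones_like(y)] + predictors)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Adding each pattern in turn explains successively more variance,
# mirroring the nested-predictor picture described in the text.
for preds, label in [([enso], "ENSO"),
                     ([enso, pdo], "+PDO"),
                     ([enso, pdo, nao], "+NAO")]:
    print(label, round(variance_explained(temp, preds), 2))
```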
One of the recommendations of the IPCC Third Assessment Report was that patterns of long-term climate variability should be addressed more completely. This topic arises both in model calculations and in the climate system itself. In simulations, the issue of climate drift needs to be clarified, in part because it compounds the difficulty of distinguishing signal from noise. With respect to long-term natural variability in the climate system per se, it is important to understand this variability and to expand the emerging capability of predicting patterns of such organized variability as ENSO. This predictive capability is both a valuable test of model performance and a useful contribution to natural resource and economic management.
The possibility of projecting these patterns was discussed in NRC (1998c), and indications of predictability of the NAO (Rodwell et al., 1999; Saravanan et al., 2000), the PDO (Venzke et al., 2000), and the subtropical Atlantic Dipole (Chang et al., 1998) have been demonstrated. Actual and simulated forecasting of these patterns requires tremendous computer resources, comparable to those needed for ensembles of global warming simulations. We expect the study of decadal variations and their predictability to continue, with the aim of discovering useful future predictability as a guide to long-term planning, much as in the case of global warming simulations.