SCOPE AND PURPOSE OF THIS REPORT
This study responds to a request by the National Oceanic and Atmospheric Administration (NOAA) to the National Academy of Sciences to review the current understanding of climate predictability on ISI timescales, including past improvements in our understanding of predictability; to identify remaining gaps in our understanding of predictability at these timescales; to assess the performance of current prediction systems; and to recommend strategies and best practices for improving estimates of predictability and prediction skill (see Box 1.1).
In preparing the report, the committee has drawn on published literature as well as on presentations from a variety of research scientists and forecasting experts representing both U.S. and international institutions. Because of the expertise of the committee members and the source of the report request, the report focuses most heavily on climate predictions for the United States and North America. In contrast to this U.S. focus, the recommendations regarding forecasting procedures and protocols (Best Practices) have been crafted from forecasting experiences in the United States and abroad, and could be applicable to many national and international institutions. Likewise, many of the physical processes discussed (e.g., ENSO, MJO, NAO) have significant impacts on non-U.S. climate phenomena, such as the Indian monsoon.
The committee feels that this report will inform and guide decisions regarding future opportunities in climate research and operational forecasting. Significant challenges remain in formulating and disseminating accurate and useful forecasts at intraseasonal and interannual timescales. Significant opportunities exist for the research community to expand its knowledge of climate processes, especially the coupling among components of the climate system, and to improve observational systems, statistical and dynamical models, and data assimilation techniques. Likewise, opportunities exist for the operational community to verify, catalog, and share forecasts in a more systematic manner. Overall, better communication between the research and operational communities is required for all of these improvements to be achieved.
Introduction to the Climate System
The sun serves as the primary energy source for the climate system, and day-to-day and season-to-season changes in the solar radiation received by the Earth lead to some well-recognized changes in the climate system. For example, on a clear, calm day, sea surface temperature (SST) in the tropics and mid-latitudes can warm as much as 3°C during the day. At the same sites, we observe warming and an increase in SST through the spring and into the summer, followed by cooling and a decrease in SST through the fall and winter. These changes in SST occur in concert with seasonal changes in surface winds. These types of day-to-day and season-to-season variability, caused by strong, regular, and periodic external forcing from the sun, can be accurately predicted.
But beyond these daily and seasonal cycles, the dynamics of the climate system are more complex and incompletely understood, challenging our efforts to make predictions. For example, answering a question like “Will the upcoming winter be colder or wetter than usual?” requires an understanding of climate variability on timescales of weeks, months, and years. This variability stems from the atmosphere, the ocean, the land, and the coupling between them. How these components of the climate system interact and affect one another can be understood by examining how they exchange heat, moisture, and momentum. For example, the ocean absorbs heat from the sun and can also transport that heat and release it elsewhere on the Earth’s surface. At mid- and high-latitudes, cooling and evaporation make surface water denser and, through convection, force surface water into the ocean’s interior. Both the density differences in the ocean and the action of the wind on the sea surface drive a global, three-dimensional circulation in the ocean that results in spatial and temporal variability in SST. Likewise, solar heating and turbulent heat and moisture fluxes at the ocean and land surfaces drive atmospheric circulations on a wide range of scales from global to local. Moist, warm parcels of air near the surface become buoyant, and this convection can communicate the influence of the surface broadly through the atmosphere and, in turn, to remote surface locations. In contrast, cooling or evaporation within the lower atmosphere stabilizes the atmospheric boundary layer locally and limits the ability of the surface to force the atmosphere elsewhere.
The ability of the atmosphere, ocean, and land to interact and affect one another occurs over a broad range of spatial scales and timescales. These interactions give rise to complex, often nonlinear, dynamics, making it difficult to understand and predict the climate variability that we observe. While much progress has been made in extending weather forecast skill to a week or more, the ability to make predictions on timescales longer than two weeks is still limited. At shorter timescales, most of the important dynamics reside within the atmosphere. But at longer timescales, the storage of heat and moisture by the ocean and the land becomes more important. Unfortunately, we have less information about the ocean and the land than we have about the atmosphere, and we often lack a full understanding of the interactions among the three.
Committee Approach to Predictability
Historically, deterministic “predictability” of chaotic systems like day-to-day weather processes has referred to how relatively small errors in the initial conditions lead to relatively large forecast errors some time later—typically 10–14 days. Although developed in the context of weather prediction, this concept of deterministic predictability has also been applied to predictions of the entire climate system, including those on ISI timescales. However, over time, the term “predictability” has been used in confusing ways in the atmospheric and oceanic literature. In this report, the term “predictability” is used qualitatively to describe the extent to which the representation of a physical process can contribute to and perhaps even improve prediction quality. There are two important aspects of the committee’s approach to the concept:
1. It is not possible to quantify a true limit of predictability for the climate system.
2. Quantitative statements can be made regarding the lower bounds of predictability, as derived from the performance of existing forecast systems. If a forecast system shows quantitative skill according to some metric, then at least that much predictability must exist in nature.
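The second point, that verified forecast skill bounds predictability from below, can be made concrete with a small numerical sketch. The series below are synthetic, and the mean-squared-error skill score is only one common choice of metric; none of the numbers come from the report itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: an "observed" anomaly series with some
# persistence, an imperfect forecast of it, and a trivial climatology
# reference forecast (always zero anomaly).
obs = np.cumsum(rng.normal(size=400)) * 0.1
forecast = obs + rng.normal(scale=0.3, size=400)  # imperfect forecast
reference = np.zeros_like(obs)                    # climatology

def mse(pred, truth):
    """Mean squared error of a prediction against the truth."""
    return float(np.mean((pred - truth) ** 2))

# Mean-squared-error skill score relative to the reference:
# 1.0 = perfect, 0.0 = no better than climatology, < 0 = worse.
skill = 1.0 - mse(forecast, obs) / mse(reference, obs)

# Any verified skill above zero is a lower bound: at least that much
# predictability must exist in the system being forecast.
print(f"MSE skill score vs. climatology: {skill:.2f}")
```

A positive score establishes only that the predictability exists; it says nothing about how much more might remain untapped.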
The approach that the committee has pursued affects its ability to fulfill its requested tasks. Underlying several parts of the Statement of Task is an assumption that nature contains inherent predictability limits that can be accurately and quantitatively estimated through the analysis of observations and/or model results. In particular, Task 4 (see Box 1.1) asks the committee to:
Assess the performance of current prediction systems in relation to the estimated predictability of the climate system on intraseasonal to interannual timescales, and recommend strategies (e.g., observations, model improvements, and research priorities) to narrow gaps that exist between current predictive capabilities and estimated limits of predictability.
The committee finds that, at present, observational estimates of predictability are severely limited—the observational record is too short, and the estimates require assumptions about the observational data (e.g., stationarity) that are difficult to satisfy. Model-based estimates of the intrinsic predictability can also be made but are severely limited by the fidelity of the model. For example, model predictability estimates of the ENSO cycle could in principle span the gamut from zero predictability (modeling the cycle as a white noise process) to perfect predictability (modeling it as a sine wave). Of course, modelers use much more physically based representations of ENSO; nevertheless, the predictability a model produces is unequivocally a function of the underlying model assumptions—the discretization of flow equations, the parameterizations of physical processes, and so on. Model-based ENSO predictability estimates vary widely among models, and for this and any other such process, a higher estimate of predictability is not intrinsically a more accurate one.
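The white-noise-versus-sine-wave contrast can be illustrated with a toy calculation that uses lag autocorrelation as a crude proxy for predictability. All series, the "period" of 48 steps, and the choice of lead time are invented for the example, not analyses from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lag = 4000, 48   # lag chosen equal to one full "cycle" of the sine below

t = np.arange(n)
white_noise = rng.normal(size=n)       # no memory: essentially unpredictable
sine = np.sin(2 * np.pi * t / 48.0)    # perfectly periodic: fully predictable

def lag_corr(x, lag):
    """Correlation between a series and itself `lag` steps later."""
    return float(np.corrcoef(x[:-lag], x[lag:])[0, 1])

c_noise = lag_corr(white_noise, lag)   # near 0 at this lead
c_sine = lag_corr(sine, lag)           # near 1 at this lead

print(f"white noise lag-{lag} correlation: {c_noise:+.2f}")
print(f"sine wave  lag-{lag} correlation: {c_sine:+.2f}")
```

A physically based ENSO model falls somewhere between these two caricatures, and where it falls depends entirely on the model's assumptions.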
The committee finds that model-based estimates of the intrinsic limit of predictability are useful in a qualitative sense. While the studies themselves may very well be quantitative in implementation and analysis, they are best used to identify physical processes that impact the model-based estimate and therefore provide qualitative guidance on how to attack the forecast improvement problem in that model. (A simple example: if Model A shows no intrinsic predictability for a variable in a region where Model B shows some real forecast skill for that variable, then the process formulations underlying that variable in Model A are deficient and could be a focus of improvement.) These considerations and conclusions are not limited to ENSO; they apply equally to the MJO and other sources of predictability. In fact, the committee recommendations are specifically designed to identify the infrastructure needs (i.e., observations, models, best practices) that will accelerate the process of transitioning this qualitative guidance into quantitative forecast improvements. That process will necessarily involve rigorous forecast verification.
Despite the utility of model-specific estimates of the limits of intrinsic predictability for individual model development, the committee finds that the only quantitative statements that can be made regarding predictability in nature involve its lower bounds, as provided by verifying forecasts from existing prediction systems. In other words, if a forecast system shows true quantitative skill at some level, then at least that much predictability must exist in nature. This sentiment underlies much of the analysis in the report and can be illustrated with an example. Suppose a statistical prediction of a measure of the strength of ENSO, such as the Nino 3.4 index (the departure of the monthly mean SST inside a box bounded by 120°W–170°W and 5°S–5°N from its long-term mean), is of higher quality than a dynamical prediction. It could then be concluded that additional forecast quality could be obtained with a more accurate dynamical method. If estimates for the upper bound of predictability in nature could be derived, they would be uniquely valuable, since they would indicate how much quality may yet be gained through future improvements in forecasting systems. In other words, such estimates could indicate how much potential quality is waiting to be tapped. Unfortunately, such estimates are inaccessible. The true limits of predictability cannot be quantified with any certainty because there is no way of estimating predictability without models or, in the case of observational data, without ad hoc assumptions.
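For concreteness, the Nino 3.4 calculation described in the example can be sketched as follows. The gridded SST field, the 1-degree grid, and all variable names are invented for illustration; a real calculation would also apply area (cosine-latitude) weighting, which matters little this close to the equator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly SST fields on a 1-degree global grid (illustrative only).
lats = np.arange(-89.5, 90.0, 1.0)   # degrees north
lons = np.arange(0.5, 360.0, 1.0)    # degrees east
n_months = 360                        # 30 years of monthly fields
sst = 20.0 + rng.normal(scale=0.5, size=(n_months, lats.size, lons.size))

# Nino 3.4 box: 5S-5N, 170W-120W (equivalently 190E-240E).
lat_mask = (lats >= -5.0) & (lats <= 5.0)
lon_mask = (lons >= 190.0) & (lons <= 240.0)
box = sst[:, lat_mask][:, :, lon_mask]     # (time, lat, lon) inside the box

box_mean = box.mean(axis=(1, 2))           # monthly box-average SST

# Anomaly: subtract the long-term mean for each calendar month.
climatology = box_mean.reshape(-1, 12).mean(axis=0)
nino34 = box_mean - np.tile(climatology, n_months // 12)
```

By construction the anomalies average to zero over the record; it is the departures from that long-term mean that forecast systems attempt to predict.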
Despite the inability to unambiguously quantify the intrinsic or upper “limit of predictability,” the committee was able to assess the performance of forecast systems. The quantitative assessment of forecast quality is a useful lower bound on predictability. It is clear that the skill of models has improved over time (Figure 1.1), at least with respect to the types of ENSO-based metrics that are usually discussed in the literature, as has recently been evident in advances in MJO forecasting as well (see the “Dynamical Models” section in Chapter 3 and the MJO and ENSO case studies in Chapter 4 for a more specific discussion). With regard to the current generation of forecast systems, attempts to perform a rigorous evaluation of forecast quality have been made using available archives and multi-model ensemble systems (e.g., the Climate-system Historical Forecast Project (CHFP) and ENSEMBLES). However, these initiatives are relatively recent. The multitude of available forecast formats and metrics, and the lack of openly available data and information regarding past forecasts and verifications, can make it difficult to compare across, or even conduct, such studies. The Best Practice recommendations, especially with respect to archiving forecast information and metrics, have been designed to help establish a framework for comparing and evaluating estimates of prediction quality (i.e., lower bounds on predictability) derived from forecast models.
STATEMENT OF TASK
This study will review the current state of knowledge about estimates of predictability of the climate system on intraseasonal to interannual timescales, assess in what ways current estimates are deficient, and recommend ways to improve upon the current predictability estimates. The study will also recommend research and model development foci and efforts that will be most beneficial in narrowing the gap between the current skill of predictions and estimated predictability limits. The review of predictability estimates to be addressed will include oceanic and atmospheric variables such as sea surface temperature, sub-surface heat content, surface temperature, precipitation, and soil moisture, as well as indices like Nino3.4 sea surface temperatures or the phases of the Madden-Julian Oscillation.
Specifically, the study committee will:
ISI PREDICTABILITY: THE EXAMPLE OF EL NIÑO-SOUTHERN OSCILLATION
The El Niño-Southern Oscillation (ENSO) serves as a prime example of a process that contributes to forecasts on intraseasonal to interannual (ISI) timescales, which extend from roughly two weeks to several years (see Box 1.2). Figure 1.2 shows the SST anomalies associated with one of the largest El Niño or warm ENSO events observed during the twentieth century. These anomalies tend to be at a maximum during the Northern Hemisphere winter and can persist on the order of months to a year. Although these anomalies are strongest in the equatorial Pacific Ocean, they affect winter temperature and precipitation globally, as shown in Figure 1.3. Current ISI forecast systems, which draw upon observations of the atmosphere and ocean as well as the physical and statistical relationships that describe the coupling between them, can often provide accurate predictions of the SST anomalies associated with ENSO. Figure 1.4 shows the predictions from a number of dynamical and statistical models for the SST anomaly in the equatorial Pacific several months in advance. Although the predictions track the behavior of observed SST anomalies relatively well, the spread among the models is substantial, sometimes even differing in the sign of the SST anomaly.
Intraseasonal to interannual timescale (ISI)—roughly, two weeks to several years; the report focuses on predictions of the climate system on this timescale and the physical processes that are used to make these predictions.
Skill—the statistical evaluation of accuracy. Skill is most often determined by comparison of the disseminated forecast with a reference forecast, such as persistence, climatology, or objective guidance. Skill estimates can encompass deterministic estimates of skill, which are related to accuracy, or probabilistic estimates of skill, which are related to frequency of occurrence of specific events or thresholds. Skill is expressed quantitatively in terms of a specific metric.
Quality—the broad assessment of forecast performance encompassing a range of metrics, presumably related to the fidelity of physical processes (see also Kirtman and Pirani, 2008; Gottschalck et al., 2010).
Prediction—information on future climate (deterministic or probabilistic) from a specific tool (statistical or dynamical).
Forecast—issued guidance on future climate, which may take the form of quantitative outcomes, maps, and/or text. A forecast is usually (though not always) based on a “forecast system” that incorporates several prediction inputs or, at least, is based on the interpretation of an individual prediction input against past experience.
Model validation—comparison between observed and model-simulated climate. This may consider characteristics of climatology, variability or specific model processes.
Forecast verification—comparison between observations and forecasts over a specific time period, which typically involves more than one quantitative metric of skill.
Ensemble—a set of dynamical model runs from a single model, or from multiple models, that can be used to make a forecast. Within a single model, each model run differs from other members of the ensemble by a small perturbation in the initial state. For multiple models, it is assumed that the models differ in their physics and/or their parameterizations of sub-grid scale processes.
Note: these definitions are generally consistent with those appearing in the American Meteorological Society’s Glossary of Meteorology (http://amsglossary.allenpress.com/glossary); in some cases, detail has been added to clarify usage in this report.
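The single-model case of the ensemble definition above can be sketched with a toy dynamical model. The Lorenz (1963) equations stand in for a full forecast model here, and the forward-Euler integration, perturbation size, and other choices are illustrative assumptions, not the practice of any operational center.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system (toy model)."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(3)
analysis = np.array([1.0, 1.0, 1.0])   # best estimate of the initial state

# Each ensemble member starts from a slightly perturbed initial state.
n_members = 20
members = analysis + 1e-4 * rng.normal(size=(n_members, 3))

spread = []
for _ in range(2000):
    members = np.array([lorenz_step(m) for m in members])
    spread.append(float(members.std(axis=0).mean()))

# Tiny initial-condition differences eventually dominate the forecast:
# the growth of ensemble spread illustrates the deterministic
# predictability limit, and the spread itself estimates forecast uncertainty.
print(f"spread after 1 step: {spread[0]:.1e}; after 2000 steps: {spread[-1]:.2f}")
```

In multi-model ensembles, the analogous "perturbations" are the differences in model physics and parameterizations rather than in the initial state.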
There is significant potential for societal benefit from improvements in ISI prediction quality and the provision of ISI forecasts. Many management decisions regarding water supplies, energy production, transportation, agriculture, forestry, and fisheries are made routinely on sub-seasonal, seasonal, or annual schedules. For example, during an El Niño winter, coastal areas in California may experience a heightened risk of flooding caused by increases in both precipitation and sea level, while mountainous areas in the Pacific Northwest of the United States may experience less snowfall, reducing subsequent water availability. Thus, knowledge of the climate system at ISI timescales can be a useful input to making resource management and planning decisions.
Expanding our knowledge of processes affecting the climate on ISI timescales is an important priority. Many such processes have been identified (e.g., the Madden-Julian Oscillation [MJO] and variability related to monsoonal circulations), but they are not completely understood. In addition, ISI variability can be observed at a relatively high frequency (multiple times per year) when compared to longer-term phenomena (e.g., decadal or multi-decadal oscillations such as the Pacific Decadal Oscillation), providing researchers with a relatively greater number of “realizations” to exploit within the observational record.
ORGANIZATION OF THIS REPORT
The remainder of this report is organized into five chapters:
Chapter 2 reviews the concept of predictability, starting with an initial review of the historical background for climate prediction. Lorenz’s work on weather prediction in the 1960s and 1970s is a foundation for present efforts; work in the 1980s extended prediction timescales by exploiting ENSO variability in the tropical Pacific and its associated teleconnections. Chapter 2 also introduces the view that a meaningful definition associates predictability with sources of variability, such as: 1) the inertia, or memory, of the state of the environment; 2) the patterns of interaction or coupling between variables, which include “teleconnections”; and 3) the response to external forcing. Various processes in the atmosphere, ocean, and land offer such sources of predictability. However, many gaps remain in our understanding of these processes. Chapter 2 also introduces the reader to the methodologies used to quantitatively estimate prediction skill and discusses model validation and forecast verification. Appendix A provides more technical detail about statistical methods.
Chapter 3 presents the reader with an introductory review of ISI forecasting followed by the committee’s understanding of its critical components: observations, statistical models, dynamical models, and data assimilation. The processes for making and disseminating forecasts are also discussed, as well as their use by decision makers. The chapter closes with the committee’s summary of the potential improvements to current ISI forecast systems.
Chapter 4 uses three case studies to amplify and illustrate the state of and challenges facing efforts to improve ISI prediction. The three examples are ENSO, MJO, and soil moisture.
Chapter 5 defines the “Best Practices” that could be implemented to improve ISI predictions. This section also discusses some of the synthesizing issues given the content of the preceding chapters, exploring how the suggested activities could improve forecast quality, lead to more effective use of observations, and relate to the concept of “seamless” forecasting. In addition, realistic expectations for the speed and extent of improvements are discussed.
Chapter 6 presents the committee’s recommendations and some remarks on their implementation.