includes “slow” climate processes important on decadal to centennial climate scales, 10 years/day), and
• Earth system models for carbon-cycle studies and paleoclimate models (least complex physics; dominated by slow processes and millennial-scale variability, 100 years/day).
A single national modeling framework could allow the climate modeling community to configure all of these models from a palette of available components of varying complexity and resolution, as well as supporting high-end modeling. This idea has been proposed in the past: the history of previous efforts is recounted in Chapter 2. The committee believes that current methodological trends, both their strengths and weaknesses, point in the direction of a concerted effort to make this a reality, for reasons outlined below.
A related methodological advance is the multimodel ensemble and the model intercomparison project, which have become ubiquitous methods for advancing climate science, including short-term climate forecasting. The community as a whole, under the aegis of the World Meteorological Organization’s World Climate Research Programme (through two working groups, the Working Group on Coupled Modelling and the Working Group on Numerical Experimentation), comes to consensus on a suite of experiments that it agrees will help advance scientific understanding (more information in Chapter 8). All the major modeling groups participate in defining the experiments and protocols for the current-generation Coupled Model Intercomparison Project (CMIP5) and accept them as a sound basis for advancing the science of secular climate change, assessing decadal predictability, and related questions. The research community addressing climate variations on intraseasonal, seasonal, and interannual (ISI) time scales has agreed on similar multimodel approaches for seasonal forecasting. A globally coordinated suite of experiments is then run, and the results are shared for comparative study.
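The comparative analysis step can be illustrated with a minimal sketch. This is not real CMIP5 data handling; it uses synthetic stand-ins for model output (a hypothetical shared forced trend plus model-specific offsets and internal variability) to show how a multimodel ensemble mean and intermodel spread are formed from several models running the same experiment.

```python
# Toy multimodel-ensemble sketch with synthetic data (NOT actual CMIP5 output).
import numpy as np

rng = np.random.default_rng(0)
n_models, n_years = 5, 100

# Synthetic stand-ins: a common forced warming trend, systematic
# model-to-model offsets, and year-to-year internal variability.
trend = np.linspace(0.0, 1.5, n_years)                # degC over the run
offsets = rng.normal(0.0, 0.3, size=(n_models, 1))    # systematic model differences
noise = rng.normal(0.0, 0.1, size=(n_models, n_years))
anomalies = trend + offsets + noise                   # shape: (models, years)

# Per-year multimodel mean and intermodel spread.
ensemble_mean = anomalies.mean(axis=0)
intermodel_spread = anomalies.std(axis=0)

print(f"final-year ensemble mean:     {ensemble_mean[-1]:.2f} degC")
print(f"final-year intermodel spread: {intermodel_spread[-1]:.2f} degC")
```

In a real intercomparison, the `anomalies` array would be replaced by diagnostics regridded from each group's submitted output, but the reduction across the model axis is the same basic operation.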
The model intercomparison projects (MIPs) are sometimes described as “ensembles of opportunity” that do not necessarily sample uncertainty adequately. A second major concern is the scientific reproducibility of numerical simulations. Even though different models are ostensibly running the same experiment, there are often systematic differences between them that cannot be traced to any single cause. Masson and Knutti (2011) have shown that intermodel spread is much larger than the differences between individual ensemble members of a single model, even when that ensemble is extremely large, as in the massive ensembles of QUMP (Collins et al., 2011) and CPDN (Stainforth et al., 2005). To take but one example of why this is so troublesome in the public sphere, consider different studies of Sahel drought made from the