Prospectus Question 1(b): How does a product evolve from a scientific prototype to an operational product? Chapter 1 provides little or nothing on the evolution from prototype (i.e., experimental research inside or outside government agencies) to operational product. One pathway is discussed in Chapter 3. The revision should address the roles that a variety of nongovernmental actors, including academia and private-sector organizations, play in this evolution. The report would be strengthened by a brief overview of how seasonal prediction has evolved. It is worth highlighting that methodologies are becoming more objective, that centers are working to provide more flexible products with greater information content, and that the models are improving both in their physics and by moving to higher resolution.


Prospectus Question 2: What steps are taken to ensure a product is needed and will be used in decision support? Chapter 1 says little on this topic, in part because there has been little dialogue with users about the information they require when forecast products are designed. Several empirical studies are relevant to this issue, however (e.g., National Research Council, 2005b; Hartmann et al., 2002; McNie et al., 2007; Rayner et al., 2005). Some of this work is discussed in other chapters, but its findings are not integrated into Chapter 1's discussion of the usefulness of forecast information.


Prospectus Question 3(a): What is the level of confidence in the product within the science community and within the decision-making community? This question appears to address forecast quality. There are three possible interpretations of quality, all of which are addressed to some degree in the chapter. First, information is given on the sources of skill in seasonal climate and hydrological forecasts. The fact that a signal in the forecast can be attributed to a physically reasonable process helps give confidence in the forecast product.
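To make the notion of sources of skill concrete, the following sketch stratifies hindcast verification by the strength of an ENSO signal, the kind of diagnostic that ties forecast skill to a physically reasonable process. All data here are synthetic, and the index, hindcasts, and observations are hypothetical stand-ins, not values from the report.

```python
import numpy as np

# Hypothetical hindcast record: 30 years of seasonal-mean forecasts,
# verifying observations, and an ENSO index (e.g., a Nino-3.4 anomaly).
rng = np.random.default_rng(0)
nino34 = rng.normal(0.0, 1.0, size=30)           # stand-in ENSO index
obs = 0.6 * nino34 + rng.normal(0.0, 0.8, 30)    # obs partly ENSO-driven
fcst = 0.5 * nino34 + rng.normal(0.0, 0.9, 30)   # forecast tracks ENSO

def corr(a, b):
    """Anomaly correlation between two series."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# If skill is concentrated in years with a strong ENSO signal, the
# forecast signal is plausibly attributable to that physical process.
strong = np.abs(nino34) > 0.5
print("skill, all years:         %.2f" % corr(fcst, obs))
print("skill, strong-ENSO years: %.2f" % corr(fcst[strong], obs[strong]))
print("skill, neutral years:     %.2f" % corr(fcst[~strong], obs[~strong]))
```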

Second, the level of confidence in a particular forecast is implicit in the probabilities assigned to particular outcomes, since climate forecasts (and an increasing number of hydrological forecasts) are probabilistic. We caution against any strong suggestion in the report that the spread of ensemble members for a particular climate model gives a meaningful estimate of confidence. Forecast probabilities are much more reliable (i.e., they mean what they say) after the historical response of the model ensemble has been appropriately recalibrated against the observed climate variability. Climate projections based on scenarios, although they provide some insight into decision-making criteria, should also be distinguished clearly from probabilistic forecasts.
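A minimal sketch of one such recalibration, assuming a simple linear regression of observations on the hindcast ensemble mean, follows. The data are synthetic and the regression approach is illustrative only; it is one common calibration technique, not necessarily the method any particular center uses.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical setup: 'hindcast' is an (n_years, n_members) array of model
# hindcasts and 'obs' the verifying observations. Raw ensemble spread is
# often too narrow, so probabilities read directly off the members are
# overconfident.
rng = np.random.default_rng(1)
n_years, n_members = 30, 15
signal = rng.normal(0.0, 1.0, n_years)
obs = signal + rng.normal(0.0, 1.0, n_years)
hindcast = signal[:, None] + rng.normal(0.0, 0.4, (n_years, n_members))

ens_mean = hindcast.mean(axis=1)

# Regress observations on the ensemble mean to recalibrate the forecast
# mean; the residual standard deviation supplies a realistic spread.
slope, intercept = np.polyfit(ens_mean, obs, 1)
cal_mean = intercept + slope * ens_mean
resid_sd = np.std(obs - cal_mean, ddof=2)

# Tercile probabilities: raw member counts vs. the calibrated Gaussian.
lower_tercile = np.quantile(obs, 1 / 3)
p_below_raw = (hindcast < lower_tercile).mean(axis=1)
p_below_cal = norm.cdf(lower_tercile, loc=cal_mean, scale=resid_sd)
print("year 0: raw P(below normal) = %.2f, calibrated = %.2f"
      % (p_below_raw[0], p_below_cal[0]))
```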

Third is the issue of the overall quality of prediction tools. The chapter addresses this, but not adequately: only accuracy is considered, and accuracy is a property of deterministic forecasts alone. Probabilistic skill measures, such as reliability and resolution, are also important. The World Meteorological Organization (WMO) has developed a set of recommendations on forecast verification: the Standardised Verification System for Long-Range Forecasts (SVSLRF). We recommend that the WMO efforts in this regard be reviewed, consulted, or at the very least mentioned. However, we note that although the WMO SVSLRF metrics give a more complete view of forecast quality, they still may not address quality concerns that decision makers may have, such as the frequency of errors exceeding a certain magnitude. The report would be improved by covering these issues, and particularly by emphasizing that forecasts and projections must be verified with metrics that matter to users if they are to gain users' confidence.
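To illustrate probabilistic skill measures beyond accuracy, the following sketch computes the Murphy decomposition of the Brier score into reliability, resolution, and uncertainty. The forecasts and outcomes are synthetic; this is the textbook decomposition, not the SVSLRF implementation.

```python
import numpy as np

# Hypothetical probabilistic forecasts 'p' for a binary event and the
# observed outcomes 'o' (here generated so the forecasts are reliable).
rng = np.random.default_rng(2)
p = rng.choice([0.1, 0.3, 0.5, 0.7, 0.9], size=200)
o = (rng.random(200) < p).astype(float)

brier = np.mean((p - o) ** 2)
obar = o.mean()                        # climatological event frequency

rel = res = 0.0
for pk in np.unique(p):                # bin by distinct forecast values
    idx = p == pk
    ok = o[idx].mean()                 # observed frequency in this bin
    w = idx.mean()                     # fraction of cases in this bin
    rel += w * (pk - ok) ** 2          # reliability (smaller is better)
    res += w * (ok - obar) ** 2        # resolution (larger is better)

unc = obar * (1.0 - obar)              # uncertainty of the event itself
# Murphy identity: Brier = reliability - resolution + uncertainty.
print("Brier=%.3f  rel=%.3f  res=%.3f  unc=%.3f  rel-res+unc=%.3f"
      % (brier, rel, res, unc, rel - res + unc))
```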


Prospectus Question 3(b): Who establishes these confidence levels and how are they determined? Chapter 1 implicitly answers this question. Confidence is defined primarily by the


