Review Criterion 6: Is there a summary that effectively, concisely and accurately describes the key findings and recommendations? Is it consistent with other sections of the document?
The Executive Summary begins with a good general statement of the concept of decision support and its evolution over time. This statement is actually more coherent than what appears in the chapters that follow, which, as already noted, were written in subgroups that separated the natural scientists from the social scientists and therefore come across as somewhat lacking in integration. We discuss this issue further in the next section of this chapter. We recommend that when the report is revised, the chapters be made more consistent with this section of the Executive Summary. It is our understanding that this is the authoring group’s intent.
The bulk of the Executive Summary simply recapitulates the key findings from the chapters. We discuss these in Chapter 4, in the context of assessing Review Criterion 2, about support for the document’s findings and recommendations.
Review Criterion 7: What other significant improvements, if any, might be made in the document?
As noted, we see some disconnects between different sections of the report that should be resolved in revision. In some cases, these reflect different implicit assumptions in different sections. We suggest that the authors give explicit consideration to a few assumptions we see as implicit in the report, or in sections of it, that we find problematic or inconsistent with assumptions implicit elsewhere. The revised report should reconcile such inconsistencies: it should state explicitly which assumptions are being made on the following matters, justify making them, and, if some assumptions apply only to certain parts of the report, state where each assumption does and does not apply.
Assumptions about the relationship between quality of forecasts and usefulness for decision support: Parts of the document, particularly in Chapter 1, seem to assume implicitly that forecasts with greater skill or higher resolution in time and space will necessarily be better for decision support. Climate information is assumed to be useful, and better information is therefore assumed to be more useful. These assumptions support recommendations to invest in improved forecast skill and resolution. Other parts of the document focus on the need to improve networks linking forecast producers and users and do not make these assumptions; these parts lead to recommendations to invest in improving networks. The thrusts of these two parts of the report point in somewhat inconsistent directions; moreover, it is the sections emphasizing networks that are more consistent with the language of the Executive Summary.
Recommendations to support improved forecast skill and to improve networks are likely to compete with each other in an environment of limited resources: priorities need to be set between investing in forecast skill and investing in networks and communication. The document advocates both types of investment but does not address their relative priority or the relative levels of investment needed. We suspect that this was not a conscious decision, but rather an inadvertent outcome of a division of labor in which Chapter 1 was written by climate scientists, who are most concerned with forecast skill and resolution, and Chapters 2 and 4 were written by social scientists, who were more concerned with forecast utility. The recommendations seem to have been simply compiled in the completed draft. The apparent disconnect in thrust between the chapters and their recommendations should be addressed in the revision.