
The National Academies of Sciences, Engineering, and Medicine

Copyright © National Academy of Sciences. All rights reserved.


2 OVERVIEW ISSUES

We begin our review by addressing some of the review criteria that raise overview questions about the document. We find the document generally adequate in most of these respects.

Review Criterion 3: Are the data and analyses handled in a competent manner? Are statistical methods applied appropriately?

To the extent that this pair of questions refers to quantitative data, it is most applicable to Chapter 1. The data, the analyses, and the statistical methods are appropriately reviewed and presented there.

Review Criterion 5: Is the document scientifically objective and policy neutral? Is it consistent with the scientific literature?

With a few exceptions, the document is objective and policy neutral. We note some use of prescriptive language, particularly in Chapter 2 (such terms as “must,” “essential,” and “ought to”). The main body of each chapter should describe what is known, with judgments, recommendations, suggestions, and the like concentrated at the ends of sections or of the chapter, labeled as such, and tied directly to their basis (e.g., “based on increasing awareness of ___, we recommend that ___”).

The document is generally consistent with the scientific literature. However, there are some matters on which different scientific literatures lead thinking about decision support in different directions. We discuss this issue below, in relation to other improvements that can be made in the document, under Review Criterion 7.

We also have a few chapter-specific comments related to consistency with the scientific literature. In Chapter 1, more emphasis should be placed on recent work, given the rapidly changing state of knowledge; this is particularly true for forecast methodologies and operational practices. Chapter 2 should draw on a much broader array of sources: for example, on the impact of water shortages on poor populations, only one forthcoming study, by Lemos, is cited.
The authors should draw on the recent Intergovernmental Panel on Climate Change Working Group II reports on climate change impacts, adaptation, and vulnerability, which include a report specifically on water resources. In revising the document, it might be worthwhile to conduct a quick literature search on each of the major topics, especially as they relate to water resource management (knowledge-action networks, equity implications, framing, etc.). Chapter 3 includes very little discussion of published research: it gives limited attention to models of innovation other than the one presented, and it offers scant discussion of how that model, or any other model of innovation, might provide useful insight for those attempting to integrate climate information into water resource decision making.

Review Criterion 6: Is there a summary that effectively, concisely, and accurately describes the key findings and recommendations? Is it consistent with other sections of the document?

The Executive Summary begins with a good general statement of the concept of decision support and its evolution over time. This statement is more coherent than what appears in the chapters that follow, which were written, as already noted, by subgroups that separated the natural scientists from the social scientists and which therefore come across as somewhat lacking in integration. We discuss this issue further in the next section of this chapter. We recommend that when the report is revised, the chapters be made more consistent with this section of the Executive Summary; it is our understanding that this is the authoring group’s intent.

The bulk of the Executive Summary simply recapitulates the key findings from the chapters. We discuss these in Chapter 4, in the context of assessing Review Criterion 2, which concerns support for the document’s findings and recommendations.

Review Criterion 7: What other significant improvements, if any, might be made in the document?

As noted, we see some disconnects between different sections of the report that should be resolved in revision. In some cases, these reflect differing implicit assumptions in different sections of the report. We suggest that the authors give explicit consideration to a few assumptions we see as implicit in the report, or in sections of it, that we find problematic or inconsistent with assumptions implicit elsewhere. The revised report should reconcile such inconsistencies and explicitly state which assumptions are being made on the following matters, provide justification for making them, and, if some assumptions apply only to certain parts of the report, state where the assumptions are and are not being applied.
Assumptions about the relationship between forecast quality and usefulness for decision support. Parts of the document, particularly Chapter 1, seem to assume implicitly that forecasts with greater skill or higher resolution in time and space will necessarily be better for decision support: climate information is assumed to be useful, and better information is therefore assumed to be more useful. These assumptions support recommendations to invest in improved forecast skill and resolution. Other parts of the document focus instead on the need to improve networks linking forecast producers and users and do not make these assumptions; these parts lead to recommendations to invest in improving networks. The thrusts of these two parts of the report point in somewhat inconsistent directions; moreover, it is the sections emphasizing networks that are more consistent with the language of the Executive Summary.

Recommendations to improve forecast skill and to improve networks are likely to compete with each other in an environment of limited resources: priorities need to be set between investing in forecast skill and investing in networks and communication. The document advocates both types of investment but does not address their relative priority or the relative levels of investment needed. We suspect that this was not a conscious decision, but rather an inadvertent outcome of a division of labor in which Chapter 1 was written by climate scientists much concerned with forecast skill and resolution, while Chapters 2 and 4 were written by social scientists more concerned with forecast utility; the recommendations seem to have been simply compiled in the completed draft. The apparent disconnect in thrust between the chapters and their recommendations should be addressed in the revision.

Available scientific evidence mainly fails to support the assumption that scientifically better forecasts are more likely to be used, and it is therefore consistent with the emphasis on network building in the Executive Summary. Some of this evidence is summarized in Chapter 2 of a new National Research Council report, Research and Networks for Decision Support in the NOAA Sectoral Applications Research Program, released in September 2007. That study concludes that there is no evidence that better forecasts are therefore more likely to be used, for several reasons, including that forecasts are not useful unless they provide outputs that matter for decisions. Improving the quality of forecasts that do not provide such outputs adds no value for decision making, whatever value it has for science. Although the SAP 5.3 draft could not cite the new report, the final SAP 5.3 report could. More important, we suggest that the revised document discuss the evidence covered there and follow that evidence through by discussing its implications for how to proceed toward the twin objectives of making climate information more decision-relevant and more commonly used in the water management sector. We also note the potential for users, under some circumstances, to attribute greater skill to climate projections than they actually have, and then to lose confidence in the entire enterprise when they act on a projection that yields an expectation inconsistent with subsequent events.

Assumptions about the kinds of climate information that decision makers need. Related to the assumption that scientifically better forecasts are more useful is another assumption that is implicit, at least in Chapter 1: that the most useful form of scientific information is the kind now usually provided, namely a forecast or projection in the form of an expected future value of some outcome parameter, with a probability distribution reflecting uncertainty.
We think it is a mistake to assume that the most useful form of scientific output is already known. Other kinds of scientific outputs might more closely fit the needs of some decision makers in the water management sector. For example, instead of standard forecasts, some users might prefer a set of drought or streamflow scenarios, each considered sufficiently probable, on the basis of scientific knowledge about climate and hydrology, to be worth considering for purposes of planning and emergency preparedness. Other users might prefer outputs that link simple water demand forecasts for outdoor urban or agricultural water use to streamflow forecasts based on climate scenarios. In short, the most useful kind of scientific output might not be a forecast but a package based on forecast information, perhaps combining forecasts with some simple way of ascertaining their implications for what a water manager does.

The tasks of determining which forms of scientific output are needed, and more generally of identifying and meeting decision makers’ information needs, may be performed in part through continuing discussions in groups involving producers and users of climate information, along with intermediaries or information integrators working in such programs as the Regional Integrated Sciences and Assessments program and the Sectoral Applications Research Program of the National Oceanic and Atmospheric Administration. These are logical entities to coordinate the different actors that must interact to generate and disseminate appropriate sets of end-to-end decision-support products for particular sectors in their respective regions. Research can also play a role, for example, in testing pilot scientific outputs on representative samples of target user groups.

Assumptions about the nature of innovation.
In the discussion of innovation (now in Chapter 3), the authors should consider whether a linear innovation flow chart, as described in words or implied by the hourglass shape of Figure 3.1, is the correct model of innovation. We urge the authors to consider instead a continuous improvement model that is circular in nature; Figure 2-1 presents one potential description of such a process.

Climate data needs. Finally, we suggest that the report explicitly address needs for collecting and maintaining data related to climate as it affects the water management sector. These data needs, which have been identified in numerous previous studies (e.g., National Research Council, 1999a, 2000, 2004a; Trenberth et al., 2002), include maintenance of stream gauges and adequate in situ observational coverage in mountainous regions of the western United States, where climatic variables most directly affect water supply. In revising the report, the authors should consider ways that these needs might be met through coordinated efforts among federal agencies, possibly including both space-based and in situ observations.

Figure 2-1 A model of innovation showing feedbacks. [Circular diagram with stages: user-developer interaction; developer and user self-innovation; technology assessment; development/enhancement; user applies results to the problem.]