Review of CCSP Draft Synthesis and Assessment Product 5.3: Decision-Support Experiments and Evaluations Using Seasonal to Interannual Forecasts and Observational Data

4 SUPPORT FOR FINDINGS AND RECOMMENDATIONS

This chapter considers the document in relation to Review Criterion 2, which contains two subquestions: Are any findings and/or recommendations [in the report] adequately supported by evidence and analysis? In cases where recommendations might be based on expert value judgments or the collective opinions of the authors, is this acknowledged and supported by sound reasoning?

OVERALL COMMENTS

The central subject matter of this document—decision-support “experiments” in the water sector—is one for which very little evidence and analysis are available. Thus, findings must necessarily be based on the relatively weak grounding provided by case study evidence, and recommendations must necessarily be based largely on judgment. These points could be made more explicitly in the document. Nevertheless, despite the weakness of the available evidence base, it remains worth assessing the strength of the support and reasoning underlying the authoring group’s judgments. In addition, we encourage the authors to look outside the federal government and even outside the U.S. experience for evidence on the effects of decision-support activities in the water sector.

The main findings and recommendations in the document appear in Chapter 5. We focus first on these and then turn to the key findings from the other chapters. Chapter 5 identifies seven research priorities and three general recommendations. To support these, the document should ideally demonstrate (a) that the recommended activities deserve higher priority than other activities and (b) that they deserve action by the document’s audience group. The research priorities and general recommendations are all reasonable ideas and are generally supported by the argumentation in the document.
However, they are stated in vague language that is hard to contradict and yet does not offer clear guidance to agencies about the relative importance of different objectives or activities. Given the arguments raised in the bulk of the document, we think that persuasive arguments could be made for giving some of these ideas higher priority than others and for making some of the recommendations more pointed.

Evidence or argumentation should be presented that Climate Change Science Program agencies should pursue the recommended activities. The case has not generally been made that private-sector organizations or local and state governments will not undertake these research priorities, so that the federal government must. An exception is the discussion of the recommendation to “adopt appropriate roles for private enterprise,” in which the argument is made that although private organizations may provide tailored decision-support products to those who can afford them, the government should provide a “baseline level” of useful information for general use by those who may not be able to afford customized information or do not require it (e.g., smaller water districts, towns, rural areas). Many such users may be able to take advantage of web-based resources. The appropriate balance of roles between governmental and private efforts deserves more careful consideration. Considering the pessimistic outlook for discretionary federal funding, the
future of climate programs may well require the infusion of corporate funding. The document might therefore give consideration to an approach to climate forecast development that includes public-private partnerships in funding and developing needed information.

It is worth noting that some priorities and recommendations appear to have a broader audience than federal government agencies. One is the recommendation to adopt appropriate roles for private enterprise, which implicitly calls for action by businesses. Another is the priority of improving understanding of water resources vulnerability. Presumably, it is water resource managers, and not only federal agencies, that need this better understanding.

COMMENTS ON INDIVIDUAL CHAPTERS

Chapter 1: A Description and Evaluation of Forecast and Data Products

There are four key findings and recommendations in Chapter 1 (pp. 111-112).

Continued support for efforts to improve the skill in climate forecasting are crucial for improving the skill in hydrologic forecasting at seasonal lead times. This summary/recommendation points to the need for further strengthening of climate and hydrologic forecasts. There is perhaps a perception that seasonal to interannual forecast skill (as measured by accuracy) is at a plateau. We recommend that the revised document indicate what advances are likely in forecast quality with increased investment (examples might be more reliable probabilistic forecasts, higher spatial resolution information, statistics of weather within climate, etc.). Needless to say, the models have room for further improvement. Examples of areas with most relevance to the coupling of climate and hydrology that are not being accounted for in dynamical models are realistic land-atmosphere interaction and cryospheric processes.
Support for the maintenance, expansion, and integration of dense hydrologic monitoring networks is paramount in supporting hydrologic and water resources forecasts. This conclusion is an important one, but it is far stronger than the text in the associated section of the chapter. The text may need to be strengthened with some additional references in support of this recommendation.

Support for coordinated efforts to standardize and quantify the skill in hydrologic forecasts is needed. This recommendation implies the evolution of hydrologic forecasts from deterministic to probabilistic but then advocates accuracy metrics. In support of the latter, discussion of the literature on available metrics of “quantitative estimates for the forecast uncertainty” is necessary.

New efforts are needed to extend “forecasts of opportunity” beyond those years when anomalous ENSO conditions are underway. It is not clear what is intended by “extending forecasts of opportunity beyond ENSO years.” However, probabilistic forecasts may still offer information beyond “climatology,” such as indicating more extreme outcomes having a lower probability of occurrence. A clear discussion indicating that decadal trends provide additional skill to the seasonal forecasts is perhaps necessary.

We note again a sense of disconnection between these recommendations, which come from and propose investments to improve climate science, and the rest of the document, which is
about decision support. Also, these recommendations may go beyond the charge to the authors, which calls for evaluating “seasonal to interannual forecasts and observations currently available for use by decisionmakers.” Some of these recommendations speak to needs for improved forecast skill beyond what is available; more importantly, the recommendations do not make specific reference to improvements that will make model outputs more closely meet decision makers’ needs.

Chapter 2: Moving Knowledge to Action

This chapter includes six key findings but no recommendations. The primary message is to call for promotion of science citizenship to improve decision-making processes. The discussion provided in this chapter supports this overall message and the findings (key findings in this and the other chapters are listed verbatim below in italics). However, it would be worthwhile for the authors to seek and cite additional evidence supporting the proposition that citizen science leads to improved responses to climate forecasts.

The social, cultural, and political contexts in which climate variation and change forecasts are considered has changed dramatically in the past twenty years, enhancing the likelihood that such information will be incorporated into innovative water management. Although section 2.2 (New Understanding of Climate Variability and Change, p. 126) starts out with a quote about rising interest in the visibility of climate change, the text seems to be more about the changing context of water resource management than new understanding of climate variability. The claim that climate information may be integrated into innovative water management regimes is not supported by reference to existing research.

Water issues are being framed as more salient and as integral to sustainability.
It would be helpful to provide a summary statement about the role reframing plays in policy and behavioral change. The section on issue frames could describe how such issues as climate change come to be seen as important for public policy, so that decision support becomes an issue for government attention. However, the processes of framing are not clearly described for readers unfamiliar with the concept. The framing of water as an “ecosystem service” (p. 134) is no longer “emergent”—it is widely accepted in the science and policy arenas—see the publications of the Ecological Society of America, the Millennium Ecosystem Assessment (2005), and the Intergovernmental Panel on Climate Change (2007). A discussion of how ecosystem functions (such as the provision of water) are being reframed as services (ultimately for humans) is an interesting example of how the visibility and legitimacy of issues can be shaped by political and scientific reframing processes.

New venues or forums for discourse and decision making are emerging. New venues are indeed emerging, including some not mentioned in the draft. They include local governance structures, such as watershed councils, water banks, and nongovernmental organizations dedicated to water issues, especially in the developing world. Some of these are emerging as major players in local water resource management. There is no discussion in Chapter 2, however, of a major difficulty with the new venues: the mismatch between local decision venues
and climate science products, which are produced at regional, national, and global scales (although this is discussed in Chapter 1).

Knowledge-to-action networks that cut across levels of government and include public and private actors facilitate communication and information exchange across organizational boundaries. Knowledge-to-action networks that include locally based actors are important to implementation of innovative ideas. A more detailed discussion is needed of what knowledge-to-action networks are, how they differ from other venues or institutions (e.g., collective action, governance), their strengths and limitations, and their potential role in integrating climate science in water resource management. The definition of knowledge-action networks in this draft is too vague to be useful in thinking through how the examples pertain to more general points of the discussion. For example, why are water markets or banks considered an example of a knowledge-action network and not a new governance structure? Also, why are rebates provided for water conservation not just an example of a commonly used policy tool (i.e., incentives)? The following website describes a “knowledge network on vulnerability and adaptability to climate change” on water resources hosted by the United Nations Environment Programme: http://ncsp.vanetwork.org/section/resources/resource_water. Is this an example of a knowledge-action network? Can the authors offer an example of a knowledge-action network that integrates “scientific knowledge into societal beliefs,” as claimed on p. 143?
Finally, the document claims that knowledge-action networks have the potential to (a) get issues recognized and make action on them legitimate, (b) integrate local knowledge, (c) translate and integrate scientific information in decision and policy making, and (d) build social capital. It would be helpful to provide sources of research and illustrative examples to substantiate each claim.

Equitable distribution of the benefits of water-related climate variation and change forecasts depend upon effective two-way communication to disadvantaged, vulnerable populations, and provision of sufficient resources to them to enable meaningful response. It is not clear that the problems of communication pertain only to poor and/or less technologically sophisticated users of seasonal climate forecasts. The document claims (lines 2637-2640) that “utility and value [of forecasts] is often hampered by factors such as poor communication, inequitable distribution of knowledge, institutional barriers, and most critically, the inability of many of the targeted populations to respond to forecasts because of their lack of financial and human resources.” Some of these issues would seem to apply to all potential users of seasonal forecasts. The document should make a stronger case that lack of financial resources is a key variable affecting the use of the forecasts—or else revise the claim.

Water resource management has great unrealized potential for the inclusion of science citizenship that involves enhanced citizens’ understanding of water related climatic risks; citizen participation in the development of knowledge and knowledge-to-action networks; and citizen cooperation in producing water management innovations. The document presents a relatively strong discussion of science citizenship but does not clearly link it to the idea of using climate information in decisions.
It also does not link the discussion of citizenship to the one about the use or nonuse and levels of understanding of climate information by nonexperts (including agency staff and members of the public). If citizens take interest in
climate, is there evidence to support the assumption that they will see existing climate information as useful or usable for informing their long-term decisions?

Chapter 3: Managing Innovation: Ensuring Success in Joining Research and Operations

This chapter includes five key findings but no recommendations. For the most part, the key findings are not directly supported by discussion in the chapter. Relevant evidence sometimes appears in Chapter 2 or 4. Depending on the overall revision strategy adopted by the authoring team, some of these findings and accompanying discussions may be incorporated into other chapters and hence may be supported more strongly by the revised text. Whatever the method of revision, the findings should be linked more closely to the discussions that support them.

There are many ways in which forecasts can improve. Skill is only one dimension of quality, whereas timeliness, understandability, and relevance are among some of the others. This is an interesting and important concept that is of direct concern to potential users; however, it does not appear to be very well supported in this chapter. There is support available for it in research and also in some of the case material discussed in Chapter 4. The support for this finding should be gathered in the same chapter as the finding itself.

Climate forecasting generally has a national organization structure, whereas hydrologic forecasting is focused on a more regional scale. This finding probably does not need detailed support. However, its implications for the use of climate information in decision making are not developed either here or in Chapters 2 and 4, where scale mismatch is also mentioned.

For change to be attractive, improvement must be expected.
Without a framework for comparing the quality of the existing system to its alternatives, the pursuit for better forecasts has been largely unstructured and based on qualitative impressions of expected benefits. Information to support this finding is included in section 3.9 (and some in section 3.7), although the conclusion regarding a framework for comparison is not strongly supported by the information provided.

Incompatibility with existing forecasting systems can be a major obstacle to adopting new technology into operational practice. Few resources exist among researchers or forecasters to foster this compatibility. The evidence of incompatibility is basically not found in the chapter. There is no discussion of what resources would foster compatibility or of their extent among researchers or forecasters.

Although known to be an effective product development tool, structured user testing is rarely done. In particular, almost no research is done on effective seasonal forecast communication. Instead, users are commonly engaged only near the end of the product development process. No support is offered for this finding in Chapter 3.
Other Comments: Several other statements in Chapter 3 deserve comment:

Page 163, line 3007: The statement that “Water management decisions can strongly benefit from better seasonal forecasts” sounds good, but it should at least be qualified, considering Key Finding #1 from this chapter, that skill is not the only dimension of quality in forecasts, and also considering that benefits should be weighed against costs.

Page 165, line 3046: The statement that innovation leads to lower cost is not substantiated and may not hold up to scrutiny. Quite often, innovations have initial incremental costs. The ultimate result may be a more valuable output, so that the cost is justifiable, but that is not the same as lower cost.

Page 168, line 3113: Reference is made to the employment of a schoolteacher in the summer to generate regression equations. This is not documented. More importantly, it is not clear whether this example, contrasted with “hydrologists using computers,” is related to staffing qualifications, technology, or both.

Page 173, line 3199: The statement “There is evidence supporting a system-wide decline in water supply forecast skills …” could be clarified. It could be read to indicate that in 1970 there were better educated, better trained, and more intuitive personnel than those working today. If decreases in skill are due to changes in the timing of precipitation, as claimed, the import of this change needs explanation. For example, can the skill be regained simply by adding better information about precipitation, or has there been a fundamental change in precipitation timing brought about, for example, by climate change?

Section 3.5 describes and contrasts regionally versus centrally developed methods, user interfaces, etc. The discussion implies a preference for regionally developed applications.
For example, in one location it is suggested that one of the driving forces for a national application look and feel is “branding.” Perhaps a more compelling consideration is that users interested in multiple regions are better served if they can operate within the same look and feel on a website, and so forth. Another is that most agencies are required to provide summary reports and findings, which are made easier with common formats. The issue of regional versus national development should be viewed through the lens of decision-support needs.

Page 193, line 3640: The characterization of a motive for innovation as “laziness” here and elsewhere invites unwarranted criticism of forecasters and agencies. The motive might as easily be characterized as “efficiency.”

Page 198: The discussion of user interaction in development in this section seems to endorse a prototype-and-test method that gives inadequate consideration to user requirements and goes against the recommendations in Chapters 2 and 4 regarding involvement of users/practitioners in developing climate science. As the software industry has learned, user requirements must be emphasized throughout the entire development process if an application is to be successful. Users need to be able to explain what they want and, at the same time, be shown examples of the look and feel of a product so they can gain some sense of the possibilities.

Section 3.11 has an anti-innovation tone. It is important to recognize cost and risk, but also to balance these considerations with return. Benefits are discussed elsewhere; editing could usefully bring the discussions together.
Chapter 4: Decision-Support Experiments Within the Water Resource Management Sector

This chapter includes six key findings or recommendations. The material covered in the chapter generally supports the findings; however, this material and the key findings should be more tightly integrated. Reference to the many case studies, which provide much of the evidence for the document’s findings, should be integrated into the text and vice versa, so that the case studies are clear illustrations supporting the analysis and key findings. Some of the key findings seem to depend on an analysis of lessons implicit in the case studies, and the case studies do not always include the type of information needed to support the findings. While the findings call for end-to-end studies, the discussion does not explicitly address the communication of forecasts and operationalization issues that are part of the authoring team’s charge. The six key findings are:

Effective integration of climate information in decisions requires sustaining long-term collaborative research and application of decision-support outcomes. Most “experiments” in the use of climate information are relatively young, and it remains to be seen whether they can be sustained. This point comes through mainly in the case studies. It seems to rely heavily on the South Florida water management case, one of 11 cases presented. The background on the other case studies does not always provide information on how long the effort has been developing. The analysis of cases that support this finding could be more effectively summarized and presented.

A critical mass of scientists and diverse decision-makers is needed for collaboration to succeed, and there are currently an inadequate number of “integrators” of climate information for specific applications.
Other than in findings and summaries, the term “critical mass” appears only in the case study of the Regional Integrated Science and Assessment centers. The definition of critical mass appears in the conclusions on page 327. This point could be supported more strongly by addressing and highlighting this issue in the case studies and in the text, where the emphasis is more often on broad inclusiveness. The claim that there are not many people working as “integrators” is plausible but not well supported in the main text.

Forums and other means of stakeholder engagement must be adequately funded and supported by decision-makers and scientists. The finding on forums seems to be supported most strongly by the discussion in Chapter 2. In Chapter 4, it is embedded in the discussion of boundary organizations. Given the emphasis placed on the value and potential of boundary organizations, it is not clear why this seemingly narrower point appears among the key findings, whereas a finding about boundary organizations does not. The need for funding of other forms of stakeholder engagement also needs additional support, either from the experience of the authoring group or from other sources. For example, the section that calls for balance of funding might be elaborated. Additional material is needed to explain to the reader why this finding is labeled as key.

Effective decision support tools must be “end-to-end” useful, meaning that they engage a range of participants, including those who generate them and those who translate them into predictions for decision-maker use. This point is a bit confusing as written. Presumably it
is decision-support systems that should be useful from end to end, not decision-support tools, and the people who “translate” climate information to make it useful are more likely to be producing decision-support tools than translating them. If the finding is correctly interpreted in this manner, it is illustrated and supported in Chapter 4, although that support could be strengthened by better integration of the case experiments with the main text.

Good seasonal forecasts are an important tool for bringing scientists and water decision makers together. The tone of this statement is entrepreneurial, as though seasonal forecasts open the door to a new market. Perhaps this point relates to the first conclusion, about long-term collaboration. There are many examples in which seasonal forecasts have served as a topic of mutual interest for scientists and decision makers and collaborations have developed around them.

Customizable tools—rather than generic services—are the most important products needed by decision-makers. This statement implies that decision makers need tools they can customize. Research and the text suggest that they also need scientists and boundary organizations—in fact, it may be scientists and boundary organizations, rather than the users, that customize climate information, thereby creating tools to meet users’ needs. The need for efforts that allow communities and other groups to develop their own capacity is mentioned but not developed in this chapter.

Other Comments:

Page 229, lines 4342 to 4346, lists a number of consequences of changes in streamflow. The connection to NOAA’s seasonal or interannual forecasts is not clear, in the sense that it is not clear that improved forecasts from NOAA will do much to help with these issues.
Page 240 lists four major challenges to decision-support systems: lack of integrated decision-support systems, lack of coordinating institutions, lack of stakeholder participation, and overspecialization of science and engineering education. The evidence that these are important challenges is not made explicit. Also, this list does not address the claim about wealth made in Chapter 3. The claim that decision-support information providers have difficulty communicating with each other (lines 4593-4594) contravenes the experience of some such scientists. The document also fails to make clear which of these challenges are most profound and enduring, or which can be addressed effectively by the actions of federal agencies.

Pages 246 (bottom) and 247 (top) identify three reasons that managers may not use climate forecasts. There is documentation for these reasons but no discussion of another potentially important reason: that the expected payoff from using the forecast is relatively small. This might be the case, for example, if a manager does not see climate as a hazard, if the forecast lacks skill in relation to decision-relevant parameters, or if the expected benefit is too small to justify using the forecast.

The case studies of “decision support experiments” are characterized on page 257, line 4837, as being about “employing climate information.” However, the first example, on the Rio Grande Silver Minnow, is about how climate information might help in the analysis, not how it was employed. Also, the Delaware River Basin example is about the potential use of climate information, not its actual use. Such examples should be reconsidered and not used unless they add to the main points of the chapter.

The discussion of how climate variability influences water resource management (pages 238-239) actually addresses only the effects of climate change and cites only Intergovernmental
Panel on Climate Change reports as sources. There are many more geographically detailed studies of the impact of climate change on hydrology and water resources (e.g., in the Western United States) that might be worth examining for this section, if indeed it should include discussion of the effects of climate change. In California, the studies suggest that with climate change there will be more water when it is not needed, in the early spring, and less when it is needed, during the summer irrigation season. Vicuña and Dracup (2007) review over 60 of these articles.