Review of CCSP Draft Synthesis and Assessment Product 5.3: Decision-Support Experiments and Evaluations Using Seasonal to Interannual Forecasts and Observational Data

5
ORGANIZATION AND ACCESSIBILITY

This chapter considers the document in relation to Review Criterion 4, which contains two subquestions: Are the document's presentation, level of technicality, and organization effective? Are the questions outlined in the prospectus addressed and communicated in a manner that is appropriate and accessible for the intended audience? We respond to these questions first for the report overall and then chapter by chapter.

OVERALL COMMENTS

When we met with members of the authoring team, they told us that they were not satisfied with the document's organization and were already planning to reorganize it. Our comments on organization are based on the July 5 draft and do not take into account the authors' plans as of our meeting on July 17.

After Chapter 1, much of the rest of the document seems to confuse the ideas of climate variability and climate change, and of predictions (or forecasts) and projections.

The division of content between Chapters 2 and 4 can be confusing. The flow might be improved by putting the discussion of context in Chapter 2 and the findings from decision-support experiments in Chapter 4. With the Chapter 3 material moved into another chapter or an appendix, this could considerably improve the presentation.

COMMENTS ON THE INDIVIDUAL CHAPTERS

Chapter 1: A Description and Evaluation of Forecast and Data Products

In this chapter, the level of technicality is varied. There is occasional jargon, much of it noted in the specific comments below. The organization needs work, and the chapter could definitely be shortened, perhaps by half. The communication is appropriate and accessible, but, given the current length and organization, some messages may get lost.

Distinguish the time scales of forecasts/projections.
It would help to have clear descriptions of how forecasts of different timescales are made—different inputs are necessary to determine “signal” (predictability). For example, weather forecasts need initial atmospheric state; seasonal forecasts may need initial atmospheric state in the first month but rely more on sea surface temperature data at longer lead times; climate change forecasts are influenced by changing atmospheric composition (e.g., CO2); decadal forecasts, which don’t really exist yet, will need data on both atmospheric composition and initial state of the oceans. All these predictions use dynamical models—perhaps the same ones—but they are initialized and run differently. The climate community is slowly moving toward “seamless” prediction, but it is not there yet.
It would be helpful to provide a more explicit link between seasonal to interannual climate variability and climate change, since there is so much emphasis on climate change in the rest of the document. Some relevant points to consider:

The value of seasonal to interannual decision-support systems for climate change adaptation. In theory, awareness of and preparation for seasonal to interannual variability can contribute to adaptation to climate change. However, it would be useful to specify the decisions that seasonal to interannual forecasting does not address, which require longer time-scale information, and to be clearer about the relevant time scales: 10 years? 50 years?

Expectations of skill may be erroneous, particularly for low-frequency variations predicted in year-to-year operations. While seasonal to interannual predictions show greater skill for temperature variability than for precipitation, they have not done a good job of capturing the widespread increases in above-normal temperatures over the United States. Although precipitation would appear to be more difficult to predict, many seasonal to interannual predictions did a reasonable job of capturing the multiyear drought from 1998 to 2001 (prediction review of the Predictability, Prediction and Applications Interface Panel, U.S. CLIVAR).

There are important similarities and differences in the current approaches to predictions versus projections (e.g., no greenhouse gas changes in seasonal to interannual predictions).

Shorten the discussion of forecast skill. This discussion currently takes up two-thirds of the chapter and contains a lot of repetition that could be eliminated with tighter organization. Section 1.4.1, Some Basic Concepts Regarding Forecast Skill, could be dropped. It is 4 pages long, and much of what it says is repeated later.
It could be replaced by a short section on the metrics of forecast skill that describes correlation and perhaps something probabilistic, as well as the differences between real and potential predictability; the rest of its material is repeated later, where such tangents detract from the discussion. The information in section 1.4.4 should be absorbed into section 1.4.3 rather than standing as a separate section. There is a lot of repeated information between section 1.4.2, Sources of Hydrologic Forecast Skill, and section 1.4.3, Skill of Seasonal Water-Supply Forecasts. Perhaps it would be more economical not to separate "sources of skill" from "skill" but to treat them in a single section, one for climate and one for hydrology. The section Skill of Climate Forecast-Driven Hydrologic Forecasts also has much redundancy. Skill of forecasts is the same concept whether the forecasts are statistically or dynamically driven; if these really need to be broken out into separate subsections, at least have one follow the other. The section on the skill of long-term climate projections has little on skill assessment, so it could be shortened considerably. Why does climate come after hydrology in section 1.4? It would seem to make more sense for climate to come first.
Overemphasis on forecast accuracy. Forecast accuracy is a deterministic measure, but much of the discussion of the use of forecasts emphasizes their probabilistic nature. The discussion of skill should include the concept as applied to probabilistic forecasts.

Expand the section on skill of seasonal climate forecasts. This section is only two sentences long and contains nothing that states or shows actual skill levels of seasonal climate forecasts. Yet there are relevant sources regarding International Research Institute forecasts and Climate Prediction Center forecasts, even though those systems have changed since the articles were published. Also, Goddard et al. (2006) show an example of seasonal forecast skill from the predictions of a large collection of dynamical models, both atmospheric models run with predicted sea surface temperatures and coupled general circulation models, including the Climate Forecast System (see Figure 5-1).

Improve the section on observational networks and data products. This section is currently quite short and not as powerful as the corresponding summary/recommendation point. The section does not need to be long, just more specific and compelling.

Reduce the number of tables and figures. F1.1 and F1.2 could be dropped or replaced by something available from CPC (a modified version of that is pasted below). Specifically, on F1.1, "lead time" is the time between release of the forecast and the start of the forecast target period. This is about 3-10 days for medium range and 1-12 months for seasonal to interannual forecasts. Also, the weather-climate boundary lies between medium range and seasonal to interannual, not between short and medium range. Table 1.2 does not add much to the discussion. F1.3 is not particularly useful.
The link above it leads to a potentially confusing list of products, none of them accompanied by any description. It might be better just to keep F1.4 and F1.5 and add URLs to their captions. The F1.6 caption should make clear that this is a "POE Map" (add URL?), or else interested parties will never find it on the Climate Prediction Center site. F1.7 could be deleted. The F1.8 caption could indicate that the Probability of Exceedance graphs are based on climate division data. F1.9 and F1.10 could be deleted. F1.15-F1.18 all show examples of hydrological forecasts with associated uncertainties. Could they be combined into a single four-panel figure that illustrates the similarities and differences of hydrological forecast presentation? Or do they actually make different points? F1.20 is valuable and helps make several relevant points in the text. F1.25 is confusing and does not add much to the discussion.

Chapter 2: Moving Knowledge to Action

This chapter focuses on the context of decision making. Although these issues are critical, they are not all captured in this chapter. For example, risk perceptions and risk communication strategies, both discussed in Chapter 4, are also part of the context of decision
making. These concepts are not fully developed in the Chapter 4 discussions. For example, what is known about how the framing of climate change as a public policy issue may affect how water resource managers use climate information?

The discussion of the "prior appropriation doctrine" is not very clear. Further discussion is needed of overappropriated streams, which create problems because junior rights holders have claims to any water not claimed by the senior rights holder (the issue is not that the senior rights holder uses "virtually all the water"). Water conservation schemes have to be agreements among all users, or senior rights holders have to sell or lease rights to another user. Water markets and banks in the West are still highly controversial, especially among landowners/water users, and a market solution for water shortages is still some distance in the future.

In discussing the communication of climate science to and with varying audiences, the authors reference the "deficit model" but do not discuss other communication models and research. There are several recent articles in Public Understanding of Science (e.g., Weingart, Engels, and Pansegrau, 2000). An older review of risk communication research is the National Research Council report Improving Risk Communication (1989), including an appendix by Baruch Fischhoff.

The discussion of institutional response, adaptation, and learning in relation to climate science opens with reference to the work of Baumgartner and Jones (p. 129) but does not follow up very systematically. (Water Resources Research has published some interesting work regarding water resource agencies.)
Chapter 3: Managing Innovation: Ensuring Success in Joining Research and Operations

This chapter focuses on innovation in the context of the federal agencies responsible for developing climate forecasts. Much of the chapter does not directly engage with the insights developed in other chapters about the various kinds of disconnects between what forecasters produce and what users want or need. As written, the material on innovation is too nonspecific to engage researchers concerned with applications to water resources and much too lengthy to engage executive readers. Despite the level of detail, the chapter does not fully cover the range of innovation models that may help explain why climate information is or is not integrated into existing or emerging decision systems. Managing innovation may be a critical component of understanding decision systems, but the document does not make a compelling case.

Much of the chapter reads more like a sidebar than part of the main flow of the argument about decision-support needs and experiments. The information on innovation in federal agencies might appropriately be placed in an appendix, with the remaining text moved to Chapter 4, condensed and sharpened to relate more clearly to the rest of the chapters. In particular, sections 3.2 and 3.4 are too detailed and should be substantially shortened or moved to an appendix. Section 3.6 is a list of rhetorical questions whose value to the report is unclear. Similarly, the value of section 3.8 is not evident.

The kinds of innovation that are the focus of this chapter (innovations in forecasting apparently developed without direct connection to user needs) do not fit well with the issues raised in Chapters 2 and 4. Such innovations in forecasting may have served the nation well in an era when climate change and variability were issues of lesser concern, but this is no longer the case. Now, forecast information related to climate variability on a 1- to 10-year time horizon
may be of profound interest to agribusiness, natural resource managers and industries, water supply managers, and others. Decadal projections will be of interest to these groups and to others, such as those making long-term investment decisions (e.g., the oil and gas industry in Alaska, which has long operated on ice roads constructed according to historic permafrost conditions that may now be changing). Moreover, the projected growth of the U.S. population by nearly 100 million people over the next 40 years will place additional demand on resources that will be affected by climate variability and change. The discussion of user needs in the document, in whatever chapter, should provide some context related to demographic changes (population, geographic density, immigration and risk, etc.) that may further change needs for climate projections, particularly on long time scales, and perhaps also for better characterization of uncertainty in the projections.

Until near the end, the chapter proceeds without recognition that, as noted elsewhere in the document, federal science agencies often lack understanding of the needs of users and of how to appropriately integrate them. For example, a recent National Research Council (2006b) review of the Advanced Hydrologic Prediction Service showed that the National Weather Service had only marginally considered a user integration strategy. The Advanced Hydrologic Prediction Service, a suite of tools to enhance the river forecast centers, was virtually unknown to the floodplain management community, a key potential user. Innovation by forecasters may have little to do with making climate information more useful to decision makers, but this possibility is barely addressed or considered in this chapter.
The way this chapter is written makes it difficult to determine whether the authors are raising concerns about the ineffective incorporation of users into the process or continuing to write from a model that does not fully recognize the challenges of user engagement. There are some apparent references to comments made by attendees at a workshop or conference. It would be helpful to know more about the methods used to collect this information, including at the workshop or conference. In any case, such anecdotal information cannot substitute for a discussion of the published research on this topic.

Chapter 4: Decision-Support Experiments Within the Water Resource Management Sector

The organization of Chapter 4 needs to be revisited to reduce redundancies and the treatment of the same topics in multiple places. In addition, in several instances the contents of the sections do not correspond closely to the central questions identified in the subheadings. The case studies do not make a clear effort to develop the major themes and observations made in the text or to support the key findings of the chapter. The language needs careful review for consistency and accuracy, so that climate variability and climate change do not appear to be used interchangeably and so that projections are not confused with forecasts.

This chapter suffers from too much technical jargon that is not clearly related to the context of water resources. For example, does "adaptive management" as used in this chapter mean anything more than simply changing strategies as new information becomes available? If more is meant, the meaning should be made clear. The term "decision-support system" should be given a clear definition for the water resource management context. The term is critical in Chapter 4 but should be defined early in the report. How the authors see the term as being defined for water resource management would
be an important contribution. The term is often used to refer to a computerized system for making decisions or one that aids the process of decision making. Clearly, the authors sometimes use a broader meaning. They are now able to refer to the discussion of the term in a new NRC (2007) report, Research and Networks for Decision Support in NOAA's Sectoral Applications Research Program.
Figure 5-1 An example of seasonal forecast skill. SOURCE: Modified by L. Goddard, based on NCEP-CPC schematic from http://www.cpc.ncep.noaa.gov/products/forecasts.