3
RESPONSIVENESS TO PROSPECTUS QUESTIONS

This chapter considers Review Criterion 1, which consists of two subquestions: Are the goals, objectives and intended audience of the product clearly described in the document? Does the product address all questions outlined in the prospectus?

GOALS, OBJECTIVES, AND INTENDED AUDIENCE

The product’s goals and objectives are described with adequate clarity. However, the audience for the document and its recommendations is not made explicit. As the document was created as part of a process of the U.S. Climate Change Science Program (CCSP) with the National Oceanic and Atmospheric Administration (NOAA) acting as lead agency, it seems reasonable to presume that the intended audience consists of the CCSP agencies, including but not restricted to NOAA. It would help to make the audience more explicit, and then to address recommendations as much as possible to specific parts of that audience.

RESPONSIVENESS TO QUESTIONS IN THE PROSPECTUS

The prospectus called on the authors to address numerous questions and issues (see Box 1-1). Most of these are addressed in more than one place in the document; two do not seem to be addressed explicitly at all. We suggest that, in the revision, the authors make it easier for readers to locate the places where the prospectus questions are answered. We organize our comments by chapter, to make them easier to address in a revision, and then by question within each chapter.

Chapter 1:
A Description and Evaluation of Forecast and Data Products

Chapter 1 covers the issues identified in Section I of the product as described in the prospectus and has the same title as that section.

Prospectus Question 1(a): What are the seasonal to interannual forecast/data products currently available? Chapter 1 covers this question extensively, providing valuable information based on the literature. However, there should be some clarification about what is “currently available.” All the centers mentioned, in addition to the National Centers for Environmental Prediction (NCEP)—the International Research Institute for Climate Prediction (IRI), the Climate Diagnostics Center at NOAA (CDC), and the Experimental Climate Prediction Center at the Scripps Institution of Oceanography (ECPC)—do have products of potential interest, but they are quite different in what they offer. Although NCEP is working toward more objective forecasts, only IRI offers a synthesized product—skill-filtered, objective multimodel, with some additional final subjective filtering based on response to potentially erroneous predictions of sea surface temperature (SST). CDC and ECPC essentially produce raw model outputs. Our understanding is that CDC doesn’t post actual model predictions; instead, after analyzing the models’ historical responses to imposed SSTs, it statistically, and linearly, estimates what the models’ responses should be to the currently predicted SSTs.
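The linear-statistical step described for CDC can be sketched as an ordinary least-squares regression: fit the model's historical response to the SST anomalies imposed in those runs, then evaluate the fitted relationship at the currently predicted SST anomaly. The sketch below is purely illustrative; the single SST predictor, synthetic data, and variable names are our assumptions, not CDC's actual procedure.

```python
import numpy as np

# Illustrative only: one SST predictor and synthetic data stand in for
# CDC's actual multivariate procedure.
rng = np.random.default_rng(0)

# 30 historical years: imposed SST anomalies and the model's seasonal response
sst_hist = rng.normal(size=30)
response_hist = 2.0 * sst_hist + rng.normal(scale=0.1, size=30)

# Fit the linear historical relationship (intercept + slope)
X = np.column_stack([np.ones_like(sst_hist), sst_hist])
coef, *_ = np.linalg.lstsq(X, response_hist, rcond=None)

# Apply the fitted relationship to the currently predicted SST anomaly
# rather than posting a raw model prediction
sst_predicted = 0.8
estimated_response = coef[0] + coef[1] * sst_predicted
```

The point of the sketch is the two-step structure: the statistical relationship is estimated once from historical runs, and only the final evaluation uses the current SST prediction.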



Prospectus Question 1(b): How does a product evolve from a scientific prototype to an operational product? Chapter 1 provides little or nothing on the evolution from prototype (i.e., experimental research inside or outside government agencies) to operational products. One pathway is discussed in Chapter 3. The revision should address the roles in this evolution of a variety of nongovernmental actors, including academia and private-sector organizations. The report would be strengthened by a brief overview of the evolution of seasonal predictions. It is worth highlighting that methodologies are becoming more objective, centers are working to provide products with greater information content (more flexible), and the models are improving in their physics and by moving to higher resolution.

Prospectus Question 2: What steps are taken to ensure that this product is needed and will be used in decision support? Chapter 1 says little on this topic because there has been little conversation with users of information about what they require when designing forecast products. Some empirical reports are relevant to the issue, however (e.g., National Research Council, 2005b; Hartmann et al., 2002; McNie et al., 2007; Rayner et al., 2005). Some of this work is discussed in other chapters, but the findings are not integrated into the discussions of the usefulness of forecast information in Chapter 1.

Prospectus Question 3(a): What is the level of confidence of the product within the science community and within the decisionmaking community? This question seems to be addressing forecast quality. There are three possible interpretations of quality, all of which are addressed to some degree in the chapter. First, information is given on sources of skill in seasonal climate and hydrological forecasts. The fact that a signal in the forecast can be attributed to a physically reasonable process helps give confidence in the forecast product.

Second, the level of confidence in a particular forecast is implicit in the probabilities assigned to particular outcomes, as climate forecasts (and an increasing number of hydrological forecasts) are probabilistic. We caution against strong suggestions in the report that the spread of ensemble members for a particular climate model gives a meaningful estimate of confidence. Forecast probabilities are much more reliable (i.e., mean what they say) after the historical response of the model ensemble has been appropriately recalibrated to the observed climate variability. Although they provide some insight into decision-making criteria, climate projections based on scenarios should also be distinguished clearly from probabilistic forecasts.

Third is the issue of the overall quality of prediction tools. This is addressed, but not adequately: only accuracy, which is a feature only of deterministic forecasts, was addressed. Probabilistic skill measures, such as reliability and resolution, are also important. The World Meteorological Organization (WMO) has developed a set of recommendations on forecast verification, the Standardised Verification System for Long Range Forecasts (SVSLRF). We recommend that WMO efforts in this regard be reviewed, consulted, or at the very least mentioned. However, we note that although the WMO-SVSLRF set of metrics gives a more complete view of forecast quality, it still may not address quality concerns that decision makers may have, such as the frequency of errors exceeding a certain magnitude. The report would be improved by covering these issues and particularly by emphasizing the need for forecasts and projections to use metrics of importance to users if they are to gain users' confidence.

Prospectus Question 3(b): Who establishes these confidence levels and how are they determined? Chapter 1 implicitly answers this question: confidence is defined primarily by the community that produces the forecast information. This approach has so far been one-sided and can be improved. It may be helpful for the report to recommend that the communities of forecast producers and users work together to develop mutually useful measures of forecast quality. Such a recommendation would be more consistent with the way decision support is defined in the Executive Summary.

Chapter 2:
Moving Knowledge to Action

The contents of Chapter 2 are most relevant to Section II of the document as described in the prospectus. This chapter, which focuses on the social, cultural, and political contexts in which decisions take place, addresses several of the specific prospectus questions. Emphasis is placed on the changing context for decision making in the water resources arena. The chapter reviews the difficulty of integrating seasonal to interannual forecasts into existing decision-support systems. With knowledge of these difficulties, policy makers and decision makers can consider new processes or systems of information production and delivery. The chapter suggests ways to use emerging structures for integrating climate information to improve decision making. It might be helpful to have the authors reference specific difficulties in proposing their recommendations (e.g., “based on what we know about issues related to inequity in use of climate forecasts . . .”).

Prospectus Question 5: What is the role that seasonal to interannual forecasts play and could play? This chapter reviews the emerging venues or forums (e.g., knowledge-to-action networks, “science citizenship”) that may facilitate the use of climate information in water resource decisions. It also reviews the equity questions related to the use of forecasts. Overall the presentation is balanced and addresses this question well.

Prospectus Question 7: What seasonal to interannual (e.g., probabilistic) forecast information do decisionmakers need to manage water resources? The chapter reviews research regarding the difficulties water resource managers experience in using climate forecasts. It would be helpful to have these difficulties presented in the summary, as it might help to focus discussions in Chapter 4.

Prospectus Question 9: What are the obstacles and challenges decisionmakers face in translating climate forecasts and hydrology information into integrated resource management? The chapter considers this topic in depth, bringing together concepts regarding decision contexts, emerging venues, and equity issues (e.g., unequal distribution of knowledge and unequal benefits from climate forecasts) in relation to the usefulness of climate information for decision makers. It points to such issues as barriers to innovation in water resource management, the low visibility of water supply as a policy issue, and the need for citizens “to alter their perceptions of climate risks and uncertainties” (p. 126), in addition to equity issues.

Prospectus Question 11: What is the role of probabilistic forecast information in the context of decision support in the water resources sector? The chapter considers how the changing social, political, and cultural contexts affect the use of climate change information as well as how emerging governance structures may change the way water resources decisions are made.
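The probabilistic skill measures named in our discussion of Prospectus Question 3(a) above, reliability and resolution, can be made concrete with Murphy's decomposition of the Brier score for binary-event probability forecasts. The sketch below is illustrative only; it is not code from, and does not fully implement, the WMO SVSLRF.

```python
import numpy as np

# Illustrative sketch: Murphy's decomposition of the Brier score,
#   Brier score = reliability - resolution + uncertainty,
# where reliability (smaller is better) measures calibration and
# resolution (larger is better) measures how well the forecasts
# separate outcomes from the climatological base rate.
def brier_decomposition(p, o, n_bins=10):
    p = np.asarray(p, dtype=float)   # forecast probabilities in [0, 1]
    o = np.asarray(o, dtype=float)   # observed binary outcomes (0 or 1)
    n = len(p)
    obar = o.mean()                  # climatological base rate
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    rel = res = 0.0
    for k in range(n_bins):
        mask = bins == k
        nk = mask.sum()
        if nk == 0:
            continue
        pk = p[mask].mean()          # mean forecast probability in bin k
        ok = o[mask].mean()          # observed frequency in bin k
        rel += nk * (pk - ok) ** 2
        res += nk * (ok - obar) ** 2
    return rel / n, res / n, obar * (1.0 - obar)

# Perfectly sharp, perfectly calibrated forecasts: reliability is 0 and
# resolution equals uncertainty, so the Brier score is 0.
rel, res, unc = brier_decomposition([0, 0, 1, 1], [0, 0, 1, 1])
```

A deterministic accuracy measure collapses these distinct qualities into one number; separating them is what makes probabilistic verification informative to users.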

Prospectus Question 13: How much involvement do practitioners have in product development? The chapter describes the rise of science citizenship and knowledge-to-action networks as potential sources of deep involvement by practitioners. These concepts should be expanded on and referenced in Chapter 4 when thinking about practitioner involvement.

Chapter 3:
Managing Innovation: Ensuring Success in Joining Research and Operations

Chapter 3 provides an interesting discussion of different kinds of innovation from an operational perspective. A brief description is given of the various ways that current operational forecast environments attempt to improve and streamline. The chapter provides a useful discussion of the responsibilities placed on operational units and the resources that they possess to innovate. In this context, the discussion focuses on distinguishing between innovation resources that are centralized and those that are distributed among regional offices. It provides a comparison of the advantages and disadvantages of centralized and standardized versus distributed and individualized systems. The role of users in this process is also described, followed by a discussion of the challenges of garnering institutional support for innovation.

While the material in this chapter is highly relevant to the overall goal of SAP 5.3, it lacks references from the literature. We appreciate the challenge, as most of the operational issues covered are not of a type that can be written up for submission to a peer-reviewed journal; the issues are mostly based on the experience of those engaged in the operational units. We think that much of the content is highly relevant and useful and should be presented in the final product in perhaps a modified form (i.e., incorporating the chapter material into other parts of SAP 5.3). As already noted, the chapter does not draw on the research literature on innovation processes generally. Incorporating concepts from this literature may help conceptualize the operational insights and their implications. The discussions provided in Chapter 3 address a number of the specific questions in the prospectus, as discussed below.

Prospectus Question 1(b): How does a product evolve from a scientific prototype to an operational product? This is the major focus of this chapter. There are some difficulties with layout and content. It would help to focus this discussion as much as possible on what is known about how climate information has been transformed by users into operational products. There have been reviews that suggest how climate information can be integrated into existing decision routines (e.g., McNie et al., 2007, on NOAA’s Regional Integrated Science Applications Program). Such work could provide examples of the use of climate information in existing decision routines in the water resources sector. The report should present and cite existing evidence on the role of users in the transformation of scientific information into operational products. Also, as already noted, the conceptual framework used to discuss innovation in this chapter is rather deterministic in nature (i.e., the innovation itself determines its construction, use, etc.) and may give the impression that innovations mainly affect those who adopt them. Some of the questions that deserve attention are: How do producers receive feedback on products they produce? How are ideas for new products generated, particularly by users? What about any “fixes” to the innovation? We recommend that the writing team take a harder look at other models for thinking about technological change (e.g., Rip and Kemp, 1998).

Prospectus Question 2: What steps are taken to ensure that this product is needed and will be used in decision support? In our experience, science-based agencies of the federal government often lack a true understanding of the role of users and struggle with how to appropriately integrate the user community so as to increase the chances that they produce science that is accepted as useful. For example, during the National Academies’ review of the National Weather Service’s Advanced Hydrological Prediction Service (AHPS; National Research Council, 2006), the review committee heard that the NWS had only marginally considered a user integration strategy both for agency employees and potential external users. It was reported that AHPS, a suite of tools to enhance the river forecast centers, was virtually unknown by the floodplain management community, a key potential user. NWS has made efforts and some progress on addressing these issues since the AHPS report was published. However, it is important to realize that this sort of lack of user integration can be found in many science-based agencies. Typically, scientists are content with data collection and analysis or perhaps coming forth with a new innovation based on analysis of science-driven questions, but they do not fully appreciate the importance of truly engaging the user community. Due to the writing style used (the posing of questions), it was difficult to determine whether the writing team was expressing concerns that science users are not effectively incorporated into the process.

An unaddressed issue in relation to making climate forecasts useful is the likelihood of increased need for information about climate variability and change because of larger changes going on in society. These include population growth, increasing population density in vulnerable areas, and so forth: changes that will increase the importance of decisions that could be informed by climate forecasts and projections. These changes may make social systems more sensitive to climate variability in the short term and to climate change over longer time scales. Such changes may result in additional layers of complexity and increased uncertainties in estimates of future climate effects. The model of innovation in seasonal to interannual climate forecasting as driven by scientists looking to improve model accuracy, which is implied in much of the Chapter 3 discussion, falls short of what will be needed. Perhaps user demand will force change in this approach. The document needs to provide some context related to demographic and other societal changes as background to discussing the kinds of innovation needed.

Prospectus Question 5: What is the role that seasonal to interannual forecasts play and could play? The chapter reviews a limited range of innovation models that might be relevant, but it does not address this question directly.

Prospectus Question 6: How does climate variability influence water resource management? This issue is addressed obliquely in discussions related to issues of scale in hydrologic decision making. It could be addressed more directly by considering what kinds of innovations in water management might be advantageous, given the development of some skill in forecasting climate variability.

Prospectus Question 9: What are the obstacles and challenges decisionmakers face in translating climate forecasts and hydrology information into integrated resource management? The chapter considers how patterns of integrating innovation in organizations may pose obstacles to the use of climate forecasts and hydrologic information.

Prospectus Question 10: What are the barriers that exist in convincing decisionmakers to consider using risk-based hydrology information (including climate forecasts)? Chapter 3 reviews how limits and constraints to the adoption of innovation in organizations, particularly state and federal agencies, create barriers to the use of risk-based information.

Prospectus Question 16: Identify critical components, mechanisms, and pathways that have led to successful utilization of climate information by water managers. Chapter 3 describes models of innovation but does not provide analytical descriptions of any successful utilization.

Prospectus Question 18: Discuss options for (a) improving the use of existing forecasts/data products and (b) identify other user needs and challenges in order to prioritize research for improving forecasts and products. This chapter suggests ways to use emerging structures for integrating climate information, such as consistent databases, to improve decision making (especially section 3.6.3).

Chapter 4:
Decision-Support Experiments Within the Water Resource Management Sector

This chapter addresses a number of questions identified in the prospectus and goes beyond the prospectus in bringing in numerous examples of climate change needs. However, it is important for this report to clearly indicate that the focus is meant to be on seasonal and interannual forecasts: climate variability, not climate change. While these are related processes, it is helpful to readers that the distinction be made, because they can pose significantly different issues for decision support, as well as for climate prediction. For example, seasonal forecasts provide probabilities and skill assessments based on observations, model predictions, and expert judgment, whereas climate change projections offer ranges based on scenario inputs built up from a set of plausible, coherent narratives that many would like to see revised. Engaging the climate change discussion would also require consideration of a broader literature than is currently covered. It is important that the study clarify what it does and does not cover and better justify the inclusion of material on climate change, if the authors wish to include it. Section 4.6 briefly mentions that it includes climate change experiments as useful analogues, but it is not clear that they are useful analogues, since there is no equivalent prediction or decision history. Some clarification as to the intended focus, throughout the document, would be helpful.

Responses to questions related to communication of forecasts, operationalization of tools, and evaluation should be elaborated. The discussion of evaluation, a major area of interest for the policy maker audience, is quite limited. If this is due to a lack of published materials, then that should be mentioned. We offer brief comments below on how specific issues in the prospectus are addressed.

Prospectus Question 1(b): How does a product evolve from a scientific prototype to an operational product? While implicitly covered in places, it may be helpful to address the role of NOAA’s Transition of Research Applications to Climate Services program. It may also be helpful to be explicit about the roles of NOAA and other CCSP agencies in the cases presented.

Prospectus Question 2: What steps are taken to ensure that this product is needed and will be used in decision support? The question is addressed most directly in prescriptive discussions of boundary organizations and end-to-end tool design processes. Several case experiments describe collaborative approaches now in process.

Prospectus Question 3: What is the level of confidence of the product within the science community and within the decisionmaking community; who establishes these confidence levels and how are they determined? Chapter 4 focuses on confidence as it relates to credibility, trust, and risk perception. In doing so, it usefully broadens the discussion of the types of confidence involved in decision making beyond the narrower technical definition that might be inferred from the term “confidence levels” and from the treatment of the issue in Chapter 1. Still, it would be appropriate to add discussion of work, such as that by Hartmann et al. (2002), which addresses the importance and value of different skill measures to decision makers. The Red River case study included in this chapter gives a great deal of attention to how deterministic forecasts may be perceived by user groups, but this discussion is not well integrated with the main body of the text.

Prospectus Question 4: What types of decisions are made related to water resources? Section 4.2 provides an overview and a table of general examples (Table 4.1). It would be helpful to link this discussion more closely with the presentation of forecasts and forecast uses in Chapter 1, where some specific products are mentioned.

Prospectus Question 5: What is the role that seasonal to interannual forecasts play and could play? The discussion in the first part of Chapter 4 overlaps in some ways with the Chapter 1 discussion of forecasts. Table 4.1 includes listings of general types of decisions that might include forecast information. Some decision makers are using the forecasts as suggested in Table 4.1, and examples of uses are provided in later case studies. However, they are not referenced in the table. Better integration of these discussions with the more detailed material in section 4.6 would strengthen the report. Section 4.5.1.1, under a title on climate variability, includes a summary of climate change issues drawing primarily on materials from the Intergovernmental Panel on Climate Change. This section would be a better fit earlier, in a discussion of the impacts and influence of climate on decision makers. This is an example of a place where the distinction between climate variability and climate change should be drawn more carefully. Comparing decision maker needs (discussed in section 4.3.1 and Figure 4.1) against the availability of forecasts (Table 1.3) highlights the generality of available information relative to decision maker needs and the potentially enormous number of context-specific decisions. Perhaps this is where the point about the need for more integrators could be advanced more forcefully.

Prospectus Question 6: How does climate variability influence water resource management? This topic is covered well and additional research needs are identified.

Prospectus Question 7: What seasonal to interannual (e.g., probabilistic) forecast information do decisionmakers need to manage water resources? Chapter 4 stresses the diversity of circumstances and possible information needs. The case studies give specific examples of needs. Table 4.1 indicates general decision areas and decision makers. Processes for determining needs are reviewed. The text also addresses the importance of the timing of forecasts but not the spatial scale of needs.

Prospectus Question 8: How do forecasters convey information on climate variability and how is the relative skill and level of confidence of the results communicated to resource managers? The issue of how to communicate information through collaborative engagement with decision makers receives a great deal of well-documented attention. Other work on communicating risk and uncertainty, such as that covered in past National Research Council reports (e.g., 1989, 1996, 1999b), should be elaborated.

Prospectus Question 9: What are the obstacles and challenges decisionmakers face in translating climate forecasts and hydrology information into integrated resource management? Chapter 4 considers this topic in depth, although the discussion is somewhat fragmented across sections. This concern can easily be addressed in the final version.

Prospectus Question 10: What are the barriers that exist in convincing decisionmakers to consider using risk-based hydrology information (including climate forecasts)? The question is addressed well at the agency, institutional, and individual levels.

Prospectus Question 11: What is the role of probabilistic forecast information in the context of decision support in the water resources sector? The case studies provide discussion relevant to this question.

Prospectus Question 12: What challenges do tool developers have in finding out the needs of decision makers? Major issues related to this question are raised in the discussion of challenges to building collaborative relationships.

Prospectus Question 13: How much involvement do practitioners have in product development? Chapter 4 takes a prescriptive approach to this question by calling for end-to-end involvement of practitioners in the development and dissemination of tools as an alternative to the “loading dock model” that is still used in some cases. The case studies presented do not describe the collaborative processes, perhaps because so little of the literature on public involvement practices is available in this topic area. Strategies for achieving the long-term involvement seen as essential to building these collaborative relationships offer indirect insights.

Prospectus Questions 14 and 15: What are the measurable indicators of progress in terms of access to information and its effective uses? How is data quality controlled? There is very little discussion of these topics in this chapter. Even though the peer-reviewed literature may be sparse, the value of this document for policy makers could be enhanced by expanding this discussion, perhaps by suggesting indicators that might be appropriate to use. Some examples might be generated by examining reports of consultations between agencies and potential forecast user communities.

Prospectus Question 16: Identify critical components, mechanisms, and pathways that have led to successful utilization of climate information by water managers. This issue is addressed in numerous ways, including discussion of various approaches, such as the development of boundary institutions, long-term collaboration, and end-to-end forecasts, as well as discussions of individuals’ perceptions of risks and of the case of the Regional Integrated Sciences and Assessment centers.

Prospectus Question 17: Discuss how these findings can be transferred to other sectors. This issue is not addressed in the document in any detail.

Prospectus Question 18: Discuss options for (a) improving the use of existing forecasts/data products and (b) identify other user needs and challenges in order to prioritize research for improving forecasts and products. The closing discussion of research needs includes several important items, but it does not address issues of communicating uncertainty that are seen as priorities in the recent NRC reports Decision Making for the Environment (2005) and Completing the Forecast (2006). The document would be greatly enhanced by addressing more clearly the need for vulnerability assessments. Only one of the case studies included to support the discussion in section 4.6 addresses issues of vulnerability. There is much more literature that could be introduced to support this case. Similarly, the discussion of barriers and obstacles might note a lack of information on the implications of smaller scale tailored products.