4
Review of Individual Chapters

This chapter provides specific comments on the eight individual chapters of draft Synthesis and Assessment Product (SAP) 5.2, “Best Practice Approaches for Characterizing, Communicating, and Incorporating Scientific Uncertainty in Climate Decision Making.” In some cases, these specific comments relate to the overarching comments provided in the previous two chapters of this review; in other cases, they are generally minor in nature. The review of each chapter includes a statement that summarizes the committee’s overall thoughts. For some chapters, enumerated comments follow this statement to provide suggested editorial changes or other details for the authors to consider during the revision process.

CHAPTER 1
Sources and Types of Uncertainty

As noted earlier, the committee recommends the addition of a full introduction section and a foreword to provide framing and context before the technical material in Chapter 1 is presented. The comments provided here pertain only to what is contained in the draft Chapter 1. Chapter 1 provides a good summary of the sources and types of uncertainty; the committee suggests that these concepts could be further elucidated with the use of real-world examples related to climate and decision making. In addition, the authors should acknowledge that some types of uncertainty cannot be defined or characterized because the existence of the subject in question, and all the sources and types of uncertainty associated with that subject, are completely unknown.

  • Page 1, Line 19: Insert “in a timely fashion” in front of “before.”

  • Page 1, Line 29: “stationarity” is a relative concept.

  • Page 1, Line 33: This is not true.

  • Page 1, Line 35: Change “belief” to another word (e.g., “certainty”).

  • Page 4, Lines 5-10: Consider replacing this lengthy quote with a paraphrased version using common prose.
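One distinction drawn in Chapter 1 — between aleatory uncertainty (irreducible variability) and epistemic uncertainty (reducible lack of knowledge) — can be made concrete with a minimal simulation sketch. The coin-flip setting and all numbers below are invented for illustration and are not drawn from the SAP:

```python
import random

random.seed(1)

# A coin with a fixed but unknown bias (a stand-in for any fixed
# physical quantity an analyst must estimate). Hypothetical example.
TRUE_BIAS = 0.7

def estimate_interval_width(n_obs):
    """Rough width of a standard-error interval for the estimated bias.

    This width reflects EPISTEMIC uncertainty: it shrinks as
    observations accumulate, because the bias is learnable.
    """
    flips = [random.random() < TRUE_BIAS for _ in range(n_obs)]
    p_hat = sum(flips) / n_obs
    return 2 * (p_hat * (1 - p_hat) / n_obs) ** 0.5

# Epistemic uncertainty is reducible: more data gives a narrower interval.
assert estimate_interval_width(10_000) < estimate_interval_width(100)

# Aleatory uncertainty is not reducible: even with the bias known
# exactly, the variance of the next single flip stays fixed.
aleatory_variance = TRUE_BIAS * (1 - TRUE_BIAS)
print(round(aleatory_variance, 2))  # prints 0.21
```

The sketch suggests one accessible way to present the distinction: report which part of a stated uncertainty could, in principle, be narrowed by further observation, and which part could not.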



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





  • Page 4, Line 31: Not sure this is entirely true in the case of this example. Please further elucidate the concepts of aleatory and epistemic uncertainty, and consider using more accessible terms for these concepts.

  • Page 4, Line 33: Replace “inherently” with “often.”

  • Page 5, Line 1: Is it not true that probabilities can only be assigned to empirical quantities?

  • Page 5, Lines 19-20: The terms “parametric analysis” and “switchover” could be made more accessible by using the phrase “sensitivity analysis,” which is used commonly in the climate research community.


CHAPTER 2
The Importance of Quantifying Uncertainty

The committee suggests that the content of this chapter be merged with that of Chapter 6 under the rubric of communication, and that this material be placed near the current location of Chapter 6 (after the discussion of the sources and types of uncertainty and the various methods for characterizing uncertainty). It would also be helpful if the authors provided some background information on the research process that led to the formulation of the terminology for numerical probabilities outlined in the figures and tables. This chapter could also be an appropriate place for further elaborating on the concept of risk, wherein probabilities are combined with consequences.


CHAPTER 3
Cognitive Challenges in Estimating Uncertainty

In this chapter the authors provide an excellent summary of the cognitive challenges that affect rational analysis of uncertainty by individuals. We normally consider decisions to be made cognitively, and the ideal decision is made “rationally,” with a calculus of pluses and minuses, tradeoffs, and net gain or loss. This leads to an emphasis on cognitive elements in judgments and decisions and a relative under-emphasis on other factors influencing people’s estimates of uncertainty. Two factors that ought to be discussed, at least briefly, are group processes of decision making and the role that emotions play. People are often asked to make decisions about issues that are complex and involve uncertainty in terms of potential outcomes. These decisions are most often made in groups and in institutional settings. The context of decision making presents important constraints on, and opportunities for, these processes and their outcomes. One factor is the worldview of the group, for example as analyzed by Mary Douglas, Aaron Wildavsky,

and Michael Thompson under the concept of Cultural Theory. People’s orientations to their groups and their adherence to the rules of the group determine important aspects of their attitudes toward risk and uncertainty. Even within cognitive analysis, decision making can (perhaps ought to) be modeled as social rationality (with non-optimizing choices), as assessed and discussed in Jaeger et al. (1998). The role of emotion in decision making is increasingly the subject of research, principally by psychologists and social psychologists. Jennifer Lerner and Baruch Fischhoff of Carnegie Mellon University—along with Roxana Gonzalez and Deborah Small, graduate researchers at the time—examined how emotions affect our assessment of risk. Although we may like to think that our judgments about risk are entirely objective, these researchers have demonstrated that emotional responses to the September 11 terrorist attacks could affect not only a person’s judgment of risk for future attacks, but also risk estimates for other types of hazards. Participants exposed to media clips that induced fear held more pessimistic perceptions and were more risk-averse, while participants exposed to media clips that induced anger held more optimistic perceptions and were more risk-seeking. The same research found that respondents felt public health officials had failed to communicate easily understood (and desired) facts about terrorism. Finally, people and organizations may make faulty judgments and erroneous decisions because they do not take into account all of the relevant information available to them. For example, the Challenger shuttle disaster in 1986 and the Columbia shuttle disaster in 2003 occurred despite the availability of information whose appropriate consideration might have averted the decisions that led to tragic consequences. Information must be salient to its potential users. If uncertainty information is not understandable or not seen as useful for decision making, it will be irrelevant to decisions. Some specific points to consider include:

  • Figure 3.3 is interpreted as the availability heuristic, but may be the result of the distinctions people draw between very large and very small hazards, where consequences are orders of magnitude different.

  • The factor “what gets your attention” is not adequately addressed. This factor does not involve using all available information in making rational decisions, but only what gets one’s attention.

  • References to consider: (Epstein 1994; Finucane et al. 2003; Kasperson et al. 2003; Jaeger et al. 1998; Lerner et al. 2006; Lerner and Tiedens 2006; Peters et al. 2003, 2006; Slovic 2004, 2006; Slovic and Slovic 2004, 2005; Slovic et al. 2004, 2005; Sunstein 2005; Thompson and Rayner 1998; Vaughan 1996; Wilson and Arvai 2006; Zajonc 1980)

CHAPTER 4
Methods for Estimating Uncertainty

As noted, the document focuses almost exclusively on expert elicitation for subjective Bayesian analysis. There is a need to review the full range of methods used to characterize uncertainty. In scientific work, as opposed to decision problems, the classical or frequentist approach remains dominant. Although the Bayesian approach is increasingly used in scientific work, the emphasis is on objective methods involving the use of non-informative prior distributions. In contrast, the Bayesian paradigm is dominant in decision problems. This is due to the nature of the probabilities needed, for example, to maximize expected net benefits or utility. Like the frequentist approach, the Bayesian approach is typically based on a formal structure involving a parametric model and observations generated from it. Although standard methods assume that the model is specified up to a finite set of parameters, formal methods do exist for treating uncertainty about the form of the model (see suggested references). Considerable clarity could be gained by laying out this formulation and referring to it in reviewing different sources of uncertainty and alternative approaches for characterizing it. The authors appear to justify the exclusive focus on the subjective Bayesian approach by appealing to the presence of “deep uncertainty.” This term needs to be defined, its implications for alternative methods need to be explained, and the case must be made that such deep uncertainty is endemic to problems involving climate. The committee also suggests that in revising this chapter, in tandem with Chapter 5, the revised document first present and discuss frequentist and more objective techniques, followed by Bayesian approaches (first objective, then subjective), and that the discussion then proceed to the elicitation of subjective probabilities. Some specific points to consider include:

  • The discussion of the model-based exercise and its relation to the larger questions in Chapter 4 is insufficient. Reference should be made to the large literature on assessing the value of information.

  • Page 18, Line 5: Need to further elaborate on the concept of “best” strategy (this goes along with the need for a description of when expert elicitation is “best”).

  • Citations to the literature are heavily weighted toward the authors’ own work. The authors should consider citing the recent review by Garthwaite et al. (2005) on elicitation, Genest and Zidek (1986) on combining subjective prior distributions from different experts, Mosteller and Youtz

(1990) on quantifying probabilistic expressions, and Berger (1994) on Bayesian robustness.

  • The issue of second-order distributions is not worth raising if the purpose is only to dismiss it; if it is raised, something substantive should be said about it; otherwise, it should be deleted.

  • The caption of Figure 4.1 refers to the use of “Bayes’ Monte Carlo estimation.” This is not a term of art and should not be used without further explanation.

  • The authors state that experts (for elicitation) have difficulty dealing with extremes. Please consider discussing why this is so. Some argue that experts have the same problem as the general public. From where does this problem arise?

  • When referring to climate sensitivity, if sensitivity to doubled CO2 is explicitly meant, as opposed to the change in temperature per unit forcing (K/(W m−2)), please state this.

  • The committee would like to see a discussion in this chapter about “surprise.” One can argue that surprise is associated with large uncertainty or with particular types of uncertainty (but not always). What types of events are associated with surprise? Can surprises be anticipated (not what they will be, but that they will occur)? An example in this context is abrupt climate change.


CHAPTER 5
Analysis of, and with, Uncertainty

Chapter 5 in particular would benefit from the use of “real-world” examples that relate the content to climate decision making. Again, it appears that there is some redundancy of the material presented in this chapter with that presented in Chapter 4. This chapter would also be a good place to discuss the strengths and weaknesses of the use of scenario analysis, which is not addressed at all in the current document. Synthesis and Assessment Product (SAP) 2.1 provides relevant material on this topic. Another concern of the committee regarding the content of this chapter involves the use of the concept of “robustness.” The committee finds that this term is insufficiently defined. A plausible argument can be made that there is no meaningful distinction between robustness and the usual optimality analysis, and that the concept discussed in this report is a matter of a poorly defined utility function. If indeed there is a real technical distinction to be made, the authors should consider expanding and supporting the discussion of this

concept. Furthermore, the committee suggests that the authors address the concept of adaptive management in conjunction with discussions of robustness and, in particular, address how different sources of uncertainty affect different kinds of decisions. Finally, the committee would appreciate a further elucidation of what the author considers to constitute “deep uncertainty” (page 34 and other locations). The committee understands that there is overlap between this concept and the others defined in this section (e.g., “robust”), but nevertheless finds that it is not entirely clear when the author considers a situation inappropriate for the use of conventional methods for characterizing uncertainty. Some specific comments include:

  • Page 34, end of Line 3: Please elaborate on the concepts of “traceable accounts” and “multiple metrics aggregation.”

  • Figure 5.3 does not account for the issue of structural uncertainty. For example, consider the formulation of an ocean model (purely diffusive versus advective-diffusive).


CHAPTER 6
Communicating Uncertainty

The communications chapter constitutes 1.5 pages of a 44-page report. This is very brief in light of the complex issues involved. This chapter currently treats several methods or techniques, such as graphical displays and mental models. It would be helpful to have a discussion of the different target audiences for communication efforts and the issues or challenges that each poses in communicating uncertainty. The chapter appropriately emphasizes the importance of interactions with target audiences in the design of communication messages. This could be further extended to a treatment of communication as a two-way process and carried forward to the section on advice and guidance. An extensive literature exists on empirical studies of the communication of risk and uncertainty, including a number of useful assessments of practice. Some extended work specifically addresses the communication of uncertainty, such as Communicating Uncertainty (Friedman et al. 1999). This literature could be more heavily drawn upon and extensively referenced in this chapter. The social contexts and institutional settings in which communication activities occur have large implications for the types of communication processes that are appropriate and for how substantive issues such as social trust and credibility may be addressed. The report would benefit if these issues were addressed. Finally, many analysts have pointed to the need for ongoing evaluation of communication efforts, linked to midcourse corrections and assessment of what is working well (and what is not). This issue could also be mentioned in the revised report. Some specific comments include:

  • Page 37, Lines 5-29: This passage jumps too quickly from effective uncertainty communication methods to methods of risk communication. Please flesh out these links.

  • Page 37, Lines 8-73: Note that there are also “translators” who are not spinners but “repackagers.” These are scientists who give guidance and advice, not political spinners. This document does not speak to this audience. Consider expanding upon these lines to address this issue.


CHAPTER 7
Making Decisions in the Face of Uncertainty

Early in SAP 5.2, it would be very useful to have a discussion of what particular challenges and problems are present that need to be addressed and considered in decisions under different levels of uncertainty. What is the decision problem that public and private sector managers confront in decisions made under high degrees of uncertainty? Chapter 7 is very brief for such a complex subject, although relevant material is presented in other chapters (e.g., in the Chapter 5 discussions of robustness). Nevertheless, the draft does not identify and assess the different types of decision strategies that are available to decision makers, their relative strengths and weaknesses, and their situational appropriateness. Notable is the lack of any significant discussion in this chapter of adaptive management, a decision approach that has been identified as particularly appropriate to situations of high uncertainty over long time frames with diffuse actors and interests, such as climate change. Please consider expanding the discussion of this concept beyond Figure 1. Other concepts that could use further discussion include the differences between distributed decision systems and unitary decision structures. Discussions of decision strategies often implicitly assume the latter, but the former is often what exists, particularly in the case of climate decision making. It would also be helpful to relate decision making to the treatment of deliberative processes as discussed in the NRC report Understanding Risk (NRC 1996). This may help to link assessments with the role of stakeholder participation in shaping effective decision strategies. Some specific suggestions include:

  • A discussion of the work presented by Cash et al. (2002) could be useful in considering effective ways to link scientific assessments to decision making.

  • A revised chapter on decision making could draw upon and reference the large amount of prescriptive and empirical literature that exists, little of which is included in the current draft.

  • Page 37, first paragraph: What are these uncertainties? Are they non-scientific?

  • Page 38, Line 8: Please choose a word other than “smart” (e.g., “wise”). In this context, this adjective could be interpreted as condescending.

  • Page 38, Line 25: This may not be strictly true (the notion that research reduces uncertainty is not an assumption of classical decision analysis). Program managers may assume that research has an expected positive value, but in fact research does not necessarily reduce uncertainty. Could research needs instead be identified in the reverse direction, via meta-analysis of existing problems?

  • Page 38, Lines 35-36: Take this to its logical conclusion: when exactly should data collection end and a decision be made? Consider that adaptive management may provide a sensible framework for answering this question.

  • References to consider: (Gregory et al. 2006; Arvai et al. 2006)


CHAPTER 8
Some Simple Guidance for Researchers

This chapter contains a number of excellent practical suggestions from the authors for dealing with uncertainty. The committee understands that following these suggestions would substantially improve the representation of uncertainty in the Climate Change Science Program (CCSP) product; however, as a final chapter for the document it does not provide the type of summing up and recommendations that would also be helpful for all the users of the document. In some cases, conclusions are drawn that may not be entirely supported or even discussed elsewhere in the document, and thus may not be entirely clear to users who are not well conversant with all types of uncertainty evaluations. The committee recommends that the authors ensure that all conclusions provided in the revision are supported by material discussed elsewhere in the SAP document, and that statements of opinion be clearly identified as such. In terms of providing a summary of “best practices,” the chapter provides a brief summary of professional advice for those involved in writing assessments, but it does not elucidate a range of best practices for all of the audiences set forth in the document prospectus. As stated in the Summary of this review, the committee believes that this would require a significantly expanded SAP, or the production of a companion Product. Finally, the committee disagrees with the recommendation to adopt the terminology/ranges shown in the figure at the bottom of page 41. This disagreement is not based upon a specific objection to the assignment of particular qualitative words to a particular range of probabilities, but rather on the substantial amount of overlap and ambiguity among the categories. The committee suggests that in a revised version of this figure, the

assignment of a given qualitative word to a given range of probability should be less ambiguous, where possible.
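One way to remove the overlap the committee objects to is to have the qualitative terms partition the probability scale, so that every probability maps to exactly one term. The scale below is a sketch of this idea only — the terms and cut points are invented for illustration and are not those shown in the SAP’s figure:

```python
# Illustrative likelihood scale: half-open intervals that partition [0, 1],
# so no probability falls under two terms. Cut points are hypothetical.
SCALE = [
    (0.00, 0.05, "very unlikely"),
    (0.05, 0.33, "unlikely"),
    (0.33, 0.66, "about as likely as not"),
    (0.66, 0.95, "likely"),
    (0.95, 1.00, "very likely"),
]

def likelihood_term(p):
    """Return the unique qualitative term for a probability p."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    for lo, hi, term in SCALE:
        # Intervals are half-open [lo, hi); p = 1.0 belongs to the last one.
        if lo <= p < hi or (p == 1.0 and hi == 1.0):
            return term
    raise AssertionError("unreachable: SCALE partitions [0, 1]")

print(likelihood_term(0.5))   # prints: about as likely as not
print(likelihood_term(0.97))  # prints: very likely
```

Under such a partition, the only remaining ambiguity sits at the cut points themselves, which a revised figure could resolve simply by stating its interval conventions.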
