community. The private sector and policymakers and decision-makers in the public sector (e.g., congressional staff) fall into the category of users of assessments and need to understand the implications of uncertainty, in contrast to the research science community, who are generators of assessments and of the associated uncertainty information. The draft provides relatively little information for an audience of users, particularly information that could serve as guidelines for effective communication techniques.

  1. The range of best practices for characterizing uncertainty is not represented. The document fails to review the range of methods used to characterize uncertainty, as called for in the study prospectus. Instead, it focuses almost exclusively on expert elicitation for use in a subjective Bayesian analysis. This focus neglects assessments, including other Synthesis and Assessment Products, associated with the observational record. There is a need to discuss more traditional frequentist methods, which remain dominant in scientific work, as well as objective Bayesian methods based on non-informative prior distributions. By focusing exclusively on the subjective Bayesian approach, the document also fails to elucidate "best practices" for characterizing uncertainty as the prospectus requires. The committee understands this elucidation to involve a description of alternative approaches and a discussion of the strengths and weaknesses of each. The addition of a statistician to assist with the elucidation of traditional scientific methods would address a significant weakness in the report. This addition to the authorship team should be strongly considered, regardless of whether the current document (and its authorship team) is greatly expanded to address the additional audiences and issues described in the prospectus.

  2. The influence of social context and emotional factors is absent. Although some of the important cognitive factors in understanding and evaluating uncertainty are discussed in Chapter 3, this discussion is incomplete in two senses. First, the draft SAP 5.2 neglects the social context in which such understandings and evaluations are made. Even within the narrow focus on expert elicitation, responses will be influenced by how the questions are asked, the context of the interview, experts' expectations and knowledge of what their peers are saying, and the cultural norms of the social groups (scientific institutions, universities and departments, etc.) to which respondents belong. Moreover, emotions have been shown to play scientifically measurable roles in estimations of uncertainty. Second, the discussion of cognitive biases, together with the missing discussions of social context and emotional factors in evaluating uncertainty, is highly relevant to the communication and decision-making chapters but is almost entirely absent from these too-brief chapters. In the communication chapter, the emphasis is on presentation of materials to an amorphous non-technical audience rather than on understanding the needs of multiple audiences within their social contexts, as specified in the prospectus. The decision-making chapter neglects the large literatures on decision making in institutional settings, where scientific uncertainty is only one of many factors influencing decisions.

  3. Introductory material is lacking. The draft would be improved if the brief paragraph at the top of page 1 were expanded into a formal introduction section that provides framing and context for the rest of the document. The authors could define who climate decision-makers are and discuss the importance of characterizing,

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.