framework in general, but substituted its own terminology in some cases. Working Group III (mitigation) disregarded the idea entirely.


  1. Identify the most important factors and uncertainties that are likely to affect the conclusions. Also specify which important factors or variables are being treated exogenously or fixed.

  2. Document ranges and distributions in the literature, including sources of information about the key causes of uncertainty. It is important to consider the types of evidence available to support a finding (e.g., distinguish findings that are well established through observations and tested theory from those that are not so well established).

  3. Given the nature of the uncertainties and the state of science, make an initial determination of the appropriate level of precision. Is the state of science such that only qualitative estimates are possible, or is quantification possible and if so, to how many significant digits?

  4. Quantitatively or qualitatively characterize the distribution of values that a parameter, variable, or outcome may take. First identify the end points of the range and/or any high-consequence, low-probability outcomes. Specify what portion of the range is included in the estimate (e.g., this is a 90 percent confidence interval) and what the range is based on. Then provide an assessment of the general shape of the distribution and its central tendency, if appropriate.

  5. Using the terms described below, rate and describe the state of scientific information on which the conclusions and/or estimates (i.e., from step 4) are based.

  6. Prepare a “traceable account” of how the estimates were constructed that describes the reasons for adopting a particular probability distribution, including important lines of evidence used, standards of evidence applied, approaches to combining or reconciling multiple lines of evidence, and critical uncertainties.

  7. OPTIONAL: Use formal probabilistic frameworks for assessing expert judgment as appropriate. (Moss and Schneider, 2000)
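Step 4 above can be sketched in code. The function below takes a set of parameter estimates, reports the endpoints of a central 90 percent range, and gives the median as a measure of central tendency; the sample values are invented for illustration and are not actual climate-sensitivity estimates from the literature.

```python
# Sketch of step 4: characterize a parameter's range and central tendency.
import statistics

def summarize(samples, interval=0.90):
    """Return (low, high, median): endpoints of a central `interval`
    range plus the median, from a list of parameter estimates."""
    xs = sorted(samples)
    n = len(xs)
    tail = (1.0 - interval) / 2.0        # probability in each tail
    low = xs[int(tail * (n - 1))]
    high = xs[int((1.0 - tail) * (n - 1))]
    return low, high, statistics.median(xs)

# Hypothetical estimates (degrees C) from an imagined literature survey.
estimates = [1.8, 2.1, 2.4, 2.6, 2.9, 3.0, 3.2, 3.5, 3.9, 4.4, 5.1]
low, high, central = summarize(estimates)
```

Per step 4, a report of these numbers would also state what the range is based on (here, order statistics of a small sample) and describe the distribution's shape, which this simple sketch does not attempt.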

So how should we go forward in characterizing uncertainty related to climate sensitivity? In terms of research needs, there is value in the multiplicity of approaches being used by the research community. Note that the leadership of the next IPCC assessment intends to carefully examine recent developments in our scientific understanding of the sensitivity (Seq and Str) parameters. In terms of communication needs, we have to think about the concept of multiple metrics: what are we conveying to policy makers about the seriousness of the problem and the progress we are making, and how might a reduction in uncertainty affect the decision-making process? Moss would like to see the scientific community be more systematic in describing uncertainties and levels of confidence. Using a Bayesian updating process is one important way to make the products more useful.
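The Bayesian updating mentioned above can be illustrated with a minimal sketch over a discrete set of candidate sensitivity values. The prior and the likelihoods assigned to a new piece of evidence are invented for illustration; they are not taken from any assessment.

```python
# Minimal discrete Bayesian update: posterior ∝ prior × likelihood.

def bayes_update(prior, likelihood):
    """Combine a prior {value: probability} with per-value likelihoods
    of some new evidence, returning the normalized posterior."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnormalized.values())        # normalizing constant
    return {s: p / z for s, p in unnormalized.items()}

# Hypothetical prior over three candidate sensitivity values (degrees C).
prior = {2.0: 0.25, 3.0: 0.50, 4.0: 0.25}
# Hypothetical likelihood of new evidence under each candidate value.
likelihood = {2.0: 0.2, 3.0: 0.6, 4.0: 0.2}

posterior = bayes_update(prior, likelihood)
```

Here evidence that favors the middle value sharpens the distribution around it, which is the sense in which systematic updating makes successive assessment products more useful: each new line of evidence revises, rather than replaces, the prior characterization.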

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.