Statistical Issues in Defense Analysis and Testing: Summary of a Workshop

Communicating Uncertainty to Decision Makers

By their nature, decisions about system development and acquisition involve uncertainties. But current methods for reporting the results of analysis and testing tend to hide the uncertainties. Reports that contain only point estimates of outcome variables—such as cost, time to completion, and system performance parameters (particularly true in COEAs)—or that tell only whether a system passed or failed a test, are inadequate for informed decision making. For example, consider the decision of whether to begin development on a new system whose forecasted performance is better than that of an existing system. Suppose that the benefit/cost trade-off based only on a point estimate is quite favorable to the new system. Suppose, though, that there is significant technical risk associated with the new system so that there is a substantial likelihood that the new system may fail to meet minimal standards of acceptable performance. This uncertainty about the performance of the new system could well change the decision about whether to begin development.
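
To make that reversal concrete, here is a minimal numeric sketch (all payoffs and the failure probability are hypothetical, chosen only to illustrate how acknowledging risk can overturn a point-estimate comparison):

```python
# Hypothetical payoffs (arbitrary units) for the development decision above.
value_existing = 100      # benefit of keeping the existing system
value_new_success = 150   # benefit of the new system if it meets standards
value_new_failure = 40    # benefit if it falls short of acceptable performance
p_fail = 0.5              # assumed technical risk of missing the standard

# A point-estimate comparison ignores the risk entirely.
point_choice = "new" if value_new_success > value_existing else "existing"

# An expected-value comparison folds the uncertainty back in.
expected_new = (1 - p_fail) * value_new_success + p_fail * value_new_failure
risk_choice = "new" if expected_new > value_existing else "existing"

print(point_choice, expected_new, risk_choice)  # new 95.0 existing
```

With these illustrative numbers the point estimate favors development, while the risk-adjusted expected value favors the existing system: exactly the reversal described above.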

Information from analysis or testing needs to be presented in ways that are constructive for decision making. Which uncertainties to present and how best to communicate them depend on the decision being supported by the analysis or test. Elsewhere in this report, we discuss explicit modeling of the decision problem faced by program evaluators. Regardless of whether such a formal modeling approach is taken, the decision maker needs to address the following questions:





- What choices are being considered regarding the systems being evaluated?
- What are the objectives against which the desirability of these choices is to be measured?
- What uncertainties remain after the analysis about the extent to which each of the choices meets these objectives?
- What are the contingencies that affect the resolution of these uncertainties?

Analysts need to inform decision makers about the robustness of conclusions. It is important to consider all the uncertainties in the analysis, and it is equally important to report to users when these uncertainties affect the conclusions of an analysis. Sources of information used in analysis, as well as potential biases in the information, should also be reported. Some guiding principles are listed below. Further discussion may be found in Fischhoff et al. (1981), Keeney (1992), and Clemen (1992).

Present uncertainties in a way that avoids technical jargon and makes their implications clear. This is a guiding theme throughout the rest of the discussion. Practitioners of statistical analysis must remember that most decision makers are not familiar with technical statistical terms. Use terms that can be understood by nonstatisticians. Standard, simple, and understandable presentation formats for communicating information about policy-relevant uncertainties would be helpful.

List and discuss assumptions and omissions. Because the conclusions of an analysis depend on the validity of its assumptions and the sensitivity of the conclusions to violations of those assumptions, it is incumbent on the analyst to present explicitly the assumptions underlying the analysis. Any analysis leaves out some factors, either because they are too difficult to include given the available models and methodologies, or because the information required to include them is not available. When factors are left out of an analysis, they should be identified and the potential effect of the omitted factors on the results should be discussed.

Identify all important sources of uncertainty. It is tempting for analysts to consider only uncertainty that can be easily quantified. Confidence intervals around the value of an estimated parameter as produced by a computer package account for only the within-model uncertainty associated with an estimate. Other important sources of uncertainty include:

- Uncertainty due to modeling assumptions and omitted factors. For example, a piece of equipment may not have been tested under all relevant conditions, and the way it is employed on the test range may be very different from how it will be employed on the battlefield.

- Uncertainty about the future military climate. The combat utility
of a weapon system depends on the combat scenario in which it will be used. There is considerable uncertainty about the types of military situations in which weapon systems of the future will be used.

- Uncertainty about scientific and technological knowledge. Being at the cutting edge of technology carries with it technical risk. As the Department of Defense tries to bring results from research laboratories more rapidly into the field, less will be known about the technologies employed in a new system prior to testing of that system. Awareness and assessment of these uncertainties are essential to good decision making.

- Uncertainty about the appropriateness of analysis methods. It is important to acknowledge potential fallibility or limitations in the analytical methods employed and to assess the effect of plausible alternative methods on the results.

Whenever an estimate is presented, uncertainty bounds should also be reported. An analysis produces estimates of unknown quantities, and these estimates vary in precision. Error bounds should become a standard part of reporting procedures.

A related and sometimes useful principle is to present multiple analyses. When different models or modeling assumptions result in qualitatively different conclusions, it may be appropriate to present the results of multiple analyses, together with a discussion of the reasons for the discrepancies among them.

Frequently the most important output of analysis is insight, not an answer. Good analysis is not simply cranking a problem through a model to get “the answer.” A competent analyst will try different models and different assumptions to determine the sensitivity of the result to the modeling assumptions. (This comment applies to statistical analysis of operational test results as well as to analysis of the results of simulations.)
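
A minimal sketch of such a sensitivity check (the miss-distance data are hypothetical, and the two analyses shown, distribution-free versus normal-model, are illustrative choices, not methods prescribed by the workshop):

```python
import statistics

# Hypothetical miss distances (meters) from an operational test;
# requirement: a miss of less than 6.0 m counts as acceptable.
misses = [4.1, 5.6, 3.2, 7.8, 4.9, 6.3, 2.7, 5.1, 4.4, 8.2]
req = 6.0

# Analysis 1: distribution-free empirical fraction meeting the requirement.
p_empirical = sum(m < req for m in misses) / len(misses)

# Analysis 2: model-based estimate assuming normally distributed misses.
model = statistics.NormalDist(statistics.mean(misses), statistics.stdev(misses))
p_normal = model.cdf(req)

# Reporting both makes sensitivity to the modeling assumption visible;
# a large gap between the two would itself be a finding worth discussing.
print(f"P(miss < {req} m): empirical {p_empirical:.2f}, normal model {p_normal:.2f}")
```

Here the two analyses happen to agree closely; when they do not, the advice above is to present both results together with the reasons for the discrepancy.
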
In the process of doing the analysis, the analyst gains considerable understanding of the problem, the model, and the benefits and limits of the model as it applies to the problem. Rather than simply presenting the answer from an analysis, it is important for the analyst to convey the important insights he or she has gained during the analysis process. Such insight can help decision makers weigh choices in a more informed and effective way.

Use graphical presentation methods. Not only is a picture worth a thousand words, but it may also be worth 100 tables. Recent advances in computer graphics for data analysis and visualization can be useful in understanding and presenting the results of complex experiments. Such graphics have been integrated into widely available statistical packages, which should facilitate their use. Tufte (1983) provides an entertaining and informative discussion of graphical methods for displaying complex quantitative data.

Decision makers must realize, however, that they, too, must pay a substantial price to successfully incorporate uncertainty assessments into their thinking. As recent experience in the industrial quality movement has taught,
a commitment from management is essential. The strength of the statistical method is explicitly to quantify our ignorance—not our knowledge. Decision makers may sometimes be reluctant to acknowledge uncertainty, because substantial uncertainty usually complicates the process of building consensus around a decision. But just as technicians must look beyond the question “How big does n have to be?”, so too must the decision maker pose other questions in addition to “What's the number?”
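
As a closing sketch of the error-bounds principle above (the observations are hypothetical, and the percentile bootstrap is just one common way to attach bounds to an estimate):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.10, seed=1):
    """Percentile-bootstrap interval for an arbitrary statistic."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return stat(data), (lo, hi)

# Hypothetical time-to-completion observations (months) for a development task.
times = [14.2, 18.5, 11.9, 22.3, 16.7, 13.4, 19.8, 15.1]
est, (lo, hi) = bootstrap_ci(times)
print(f"estimated mean: {est:.1f} months (90% interval: {lo:.1f} to {hi:.1f})")
```

Note that such an interval still captures only within-model sampling uncertainty; the broader sources discussed above (scenario, technology, choice of method) must be reported alongside it.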