
2 Use of the QMU Methodology

Task 1: Evaluate the use of the quantification of margins and uncertainties methodology by the national security laboratories, including underlying assumptions of weapons performance, the ability of modeling and simulation tools to predict nuclear explosive package characteristics, and the recently proposed modifications to that methodology to calculate margins and uncertainties.

Finding 1-1. Quantification of Margins and Uncertainties (QMU) is a sound and valuable framework that aids the assessment and evaluation of the confidence in the nuclear weapons stockpile. (In the numbering of findings and recommendations, the first number refers to the task with which the finding or recommendation is associated.)

• QMU organizes many of the stockpile stewardship tools already in use, such as advanced simulation and computing (ASC) codes and computing, archival data, and aboveground experiments on both large and small facilities.
• Aboveground experiments are critical in validating the ASC codes.
• QMU does not replace existing assessment methodologies but extends their usage in a systematic manner.
• QMU aids the national security laboratories in allocating important stockpile stewardship resources.
• QMU could facilitate the communication of weapons system performance information to the Department of Defense (DOD) and Congress.

QMU extends the concept of classic engineering factors that compute the ratio of design load to maximum expected load. Its use brings a systematic, quantitative approach to thinking about margins, M, and uncertainties, U. Using QMU, the national security laboratories can identify the factors and uncertainties that are most important to warhead performance. Resources can then be devoted to improving reliability and confidence based on those results. From its investigations, the committee determined that QMU offers the following benefits:

• Its use has led to a greater emphasis on quantifying uncertainties in weapons performance to complement the national security labs' long-standing emphasis on quantifying margins.
• It allows performance margins to be managed as a system. This in turn allows designers to better evaluate the interconnections among components of the system and to answer quantitatively questions such as How much margin is enough? or How much uncertainty can be tolerated? QMU allows weapons designers and managers to consider trade-offs among schedule, cost, and performance. It is being used to guide investment decisions for both R&D and stockpile stewardship.
• It enables designers to monitor aging weapons and compare designs. The confidence ratio, M/U, is most effective for assessing performance when changes in it are tracked over time. For example, determining M/U as a function of the age of gas in the gas bottle can let the designers decide when the bottle must be replaced (a minimal numerical sketch of this kind of tracking follows this list). It should be noted that the QMU methodology is likely to evolve over time as well, possibly faster than the changes that occur in aging warheads. To the extent such changes might affect the value of time-dependent measurements, these changes need to be accounted for when using QMU to monitor aging weapons.
• It is helping to improve communication among weapons designers, national security laboratory managers, and the three laboratories. It is also being used to explain the annual assessment process to nontechnical audiences, including senior DOE managers, senior DOD officials, Congress, and other external customers.
• An ongoing purpose of the science campaigns of the weapons program is to reduce uncertainties. QMU is applied in a snapshot mode when stockpile assessments are made in order to quantify the uncertainties that exist at that time. Sensitivity studies allow one to guide and prioritize the application of effort to further reduce uncertainties.
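The following minimal sketch illustrates the kind of M/U tracking described in the third bullet above. The margin, its erosion rate, the uncertainty, and the comfort level are all invented for the illustration; none of these numbers come from the report.

```python
# Hypothetical illustration of tracking the confidence ratio M/U as a gas
# bottle ages; all numbers are assumptions made for this sketch.
margin_new = 10.0        # assumed margin M at age 0 (arbitrary units)
erosion_per_year = 0.6   # assumed margin loss per year of gas age
uncertainty = 3.0        # assumed total uncertainty U, taken as constant here
comfort_ratio = 2.0      # assumed management threshold on M/U

for age in range(0, 13):
    ratio = (margin_new - erosion_per_year * age) / uncertainty
    if ratio < comfort_ratio:
        print(f"age {age} y: M/U = {ratio:.2f} -> plan gas-bottle replacement")
        break
else:
    print("M/U stayed above the comfort level over the period examined")
```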

It is important to remember, however, that QMU alone cannot enable the assessment or certification of a nuclear warhead. It complements and organizes but does not replace the assessment and certification methods developed in decades past. Some combination of surveillance, enhanced surveillance, statistical testing, enhanced aging experiments, testing to failure, significant findings investigations, and other methods directed and interpreted by experienced warhead design experts will always be a part of the QMU framework.

Recommendation 1-1a. The national security laboratories and NNSA should be encouraged to expand their use of QMU while continuing to develop and improve the methodology.

Recommendation 1-1b. The laboratories and NNSA should strive to improve the connections between advanced simulation and computing programs and experimental programs.

Experiments are essential for quantification of uncertainties in simulation results. Coordination of experimental and computational programs can enhance the benefits of each. Coordination between the advanced simulation and the experimental programs at the laboratories has improved, but further improvement is possible and desirable.

UNCERTAINTY QUANTIFICATION

Finding 1-2. The national security laboratories have focused much of their effort for uncertainty quantification on computing the sensitivity of code output to uncertainties in input parameters. A broader effort is necessary. Methods for the identification, quantification, aggregation, and propagation of uncertainties require further development.

The laboratories have always been concerned with margins, M; QMU has appropriately placed emphasis on also quantifying uncertainties.

Findings and recommendations are numbered to associate recommendations with their corresponding findings. For example, Recommendations 1-1a and 1-1b are associated with Finding 1-1, and Recommendation 1-2 is associated with Finding 1-2. Findings 1-3 and 1-6 have no associated recommendations.

There are serious and difficult problems to be resolved in uncertainty quantification, however, including physical phenomena that are modeled crudely or not at all, the possibility of unknown unknowns, lack of computing power to guarantee convergence of codes, and insufficient attention to validating experiments.

At the heart of uncertainty quantification efforts are today's simulation codes. Many factors, however, limit their ability to accurately simulate warhead performance. These factors, each of which introduces uncertainty to any code-calculated quantity, include the following:

1. Some physical phenomena remain unmodeled, including phenomena that have been recognized as potentially important. There may also be unmodeled phenomena that have not been recognized as important.
2. Some physical phenomena are modeled only crudely.
3. Even the most advanced supercomputers of today and the near future lack the memory and speed to permit numerically converged simulations using the best physics models in the codes.
4. The input data needed by the physics models are not known with perfect precision.
5. Only limited experimental data are available for assessing the accuracy of simulated quantities of interest.

UNCERTAINTY PROPAGATION AND AGGREGATION

The uncertainty introduced by each factor above is difficult to quantify or even to rigorously bound. Further, it is difficult to propagate and correctly aggregate the uncertainties arising from the myriad sources. The state of the art at the design labs is approximately as follows:

• Given sufficient computational resources, the labs can sample from input-parameter distributions to create output-quantity distributions that quantify code sensitivity to input variations (a minimal sketch of this kind of sampling follows this list). However,
  — resources are not sufficient to do this with high fidelity;
  — sampling from the actual high-dimensional input space is not a solved problem and is not done in the nuclear weapons context;
  — often the unstated premise is that imperfect code is somehow good at calculating sensitivities to input variations.
• Discretization errors are not estimated in practice, and if they were, the machinery does not exist to propagate them and estimate the uncertainties that they generate in output quantities.
• Errors introduced by subgrid models are not estimated or propagated.
• Overall integrated physics-model errors are estimated by comparing post-shot simulation output against measured data, often from underground nuclear tests, with knobs set to values that the code users believe are reasonable and that best fit some chosen data set.
• Even if the uncertainties arising from all of the different sources were estimated, their aggregation into an overall uncertainty for a given quantity of interest is a problem that needs further attention.
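As a purely illustrative sketch of the sampling idea in the first bullet above: draw from assumed input-parameter distributions, evaluate a model for each draw, and examine the resulting output distribution. The surrogate model, the parameter names, and the distributions below are hypothetical stand-ins, not anything drawn from the weapons codes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000

# Hypothetical uncertain inputs; names and distributions are illustrative only.
density = rng.normal(loc=1.00, scale=0.02, size=n_samples)   # ~2% 1-sigma variation
drive = rng.uniform(low=0.95, high=1.05, size=n_samples)     # bounded +/-5% variation

def surrogate_model(density, drive):
    """Cheap stand-in for an expensive simulation code (arbitrary response)."""
    return drive * density**2

output = surrogate_model(density, drive)
print(f"output mean = {output.mean():.3f}, 1-sigma = {output.std():.3f}")
print(f"5th-95th percentile band = {np.percentile(output, [5, 95])}")
```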

Recommendation 1-2. The national security laboratories should continue to focus attention on quantifying uncertainties that arise from epistemic sources such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiments.

Because discretization errors, code errors, and subgrid-model errors (poorly modeled physical phenomena) are not separately quantified, they are effectively lumped with errors in the physics models. As a result, differences between simulation and experiment may be attributed to one kind of error when in fact another kind is responsible. (More information on this topic is included in Note 1 in the classified Annex.) The lesson is that unquantified numerical (and other) errors can lead to erroneous conclusions about important physics and to costly wasted effort. Unraveling the effects of numerical error (from insufficient resolution or from roundoff, for example) and model error (from poorly modeled real physical phenomena) continues to be an important unmet need. One lesson learned from modern advanced simulation and computing codes is the critical importance of modeling turbulence, so important for mix phenomena, in three dimensions instead of one or two.

See, for example, P.J. Roache, Verification and Validation in Computational Science and Engineering, Albuquerque, N.M.: Hermosa Publishers (1998); T.G. Trucano, L.P. Swiler, T. Igusa, W.L. Oberkampf, and M. Pilch, Calibration, Validation, and Sensitivity Analysis: What's What, Reliability Engineering and System Safety 91(10-11)(2006): 1331-1357; and American Institute of Aeronautics and Astronautics, AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, AIAA G-077-1998, Reston, Va. (1998).
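One standard verification technique for separating discretization error from model error, in the spirit of the verification-and-validation literature cited above though not described in this report, is a grid-convergence study with Richardson extrapolation. The sketch below uses invented outputs for one quantity of interest computed on three successively refined grids.

```python
import math

def richardson(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Observed order of convergence and grid-converged estimate from three grids."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)
    f_extrapolated = f_fine + (f_fine - f_medium) / (refinement_ratio**p - 1.0)
    return p, f_extrapolated

# Hypothetical code outputs for one quantity on grids of spacing h, h/2, and h/4.
p, f_converged = richardson(1.120, 1.060, 1.030)
print(f"observed order of convergence ~ {p:.2f}")
print(f"extrapolated (grid-converged) value ~ {f_converged:.3f}")
print(f"estimated discretization error on the finest grid ~ {abs(1.030 - f_converged):.3f}")
```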

SOURCES OF UNCERTAINTY

Finding 1-3. In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ("epistemic," or systematic) and those arising from device-to-device variation ("aleatory," or random).

Another issue that arises in assessing and communicating uncertainties in simulated quantities is that these uncertainties have at least three sources:

1. Uncertainty in our knowledge of properties of materials (including cross sections, opacities, etc.),
2. Differences between as-built and as-modeled devices (geometry, composition, initial conditions, etc.), and
3. Differences between the code (model plus numerical error plus bugs) and reality.

If M is large relative to U for some performance metric, there may be no need to differentiate the portion of U arising from each source. If M and U are close, however, such differentiation may be important. A simple (contrived) example illustrates the importance of keeping the first and second sources separate. Consider two hypothetical scenarios:

• Scenario A. The device design, combined with our excellent knowledge of nature's constants, is such that our uncertainty in those constants produces a very small uncertainty in device performance. Because manufacturing tolerances are loose, however, they or other factors can cause significant device-to-device variability. As a result, analysis and testing indicate that 90 percent of the device population will meet design requirements and 10 percent will fail to meet design requirements.
• Scenario B. The design and manufacturing tolerances are such that the device-to-device variability is very small. Basically, either all of the devices work or all fail. Uncertainties in the value of the properties of the device's materials, however, lead to significant variation in calculated performance relative to design requirements. As a result, analysis shows that 90 percent of the realistic input space (describing possible values of nature's constants) maps to acceptable performance, while 10 percent maps to failure. This 90 percent is a confidence number arising only from our lack of knowledge of nature's constants. Based on this limited knowledge we have a 90 percent confidence that all devices will meet requirements and a 10 percent confidence that all will fail to meet requirements.

See, for example, A. Mosleh, N. Tsiu, and C. Smidts, Model Uncertainty: Its Characterization and Quantification, Proceedings of Workshop I in Advanced Topics in Risk and Reliability Analysis, NUREG/CP-0138, Washington, D.C.: U.S. Nuclear Regulatory Commission (1994).

These two contrived scenarios lead to significantly different consequences. In Scenario A, the probability of at least one device succeeding can be increased to 99 percent by using two devices. In Scenario B, however, nothing can increase the confidence of a success above 90 percent. A 10 percent chance that all devices fail presents different concerns than the knowledge that 10 percent of the devices will fail.

The committee uses this (admittedly contrived) example to illustrate a potentially useful concept for communicating assessment results. This concept is taken from the probabilistic risk assessment community (see Appendix A) and is called the probability of frequency. In Scenario A we are highly confident that 90 percent of the devices will succeed. This translates to a nearly 100 percent probability that the frequency of success is 0.9. This is depicted graphically in Figure 2-1. In Scenario B, there is a 90 percent probability that the frequency of success is 1.0 and a 10 percent probability that it is 0.0.

In reality, the situation is not as sharply defined as in the committee's contrived example. Both types of uncertainty can exist simultaneously, and it is often difficult to separate them in the analysis. But as the examples illustrate, in some cases it may be very important to separate them to the extent possible, to recognize their different implications, and to devise a way to clearly communicate these important truths to the stakeholders.

[FIGURE 2-1 "Probability of frequency" for Scenarios A and B. Each panel plots probability against frequency of success. Scenario A shows a single spike of area 1.0 at a success frequency of 0.9; Scenario B shows a spike of area 0.9 at a frequency of 1.0 and a spike of area 0.1 at a frequency of 0.0.]
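A minimal numerical sketch (not from the report) of the two-device arithmetic above: in Scenario A the device failures are independent draws from a population, while in Scenario B the 10 percent is a state-of-knowledge probability that applies to every device at once.

```python
# Scenario A: aleatory (device-to-device) variation; each device independently
# works with frequency 0.9, so fielding a second device raises the chance of
# at least one success.
p_work = 0.9
p_at_least_one_A = 1.0 - (1.0 - p_work) ** 2      # 1 - 0.1**2 = 0.99

# Scenario B: epistemic uncertainty; with probability 0.9 every device works
# and with probability 0.1 every device fails, so adding devices does not help.
p_at_least_one_B = 0.9 * 1.0 + 0.1 * 0.0          # still 0.90

print(f"Scenario A, two devices: P(at least one success) = {p_at_least_one_A:.2f}")
print(f"Scenario B, two devices: P(at least one success) = {p_at_least_one_B:.2f}")
```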

Consider, for example, the consequences for the stockpile if Scenario A pertains or Scenario B pertains.

The third source of uncertainty above is associated with model error. Assessment of the accuracy of a computational prediction depends on assessment of model error, which is the difference between the laws of nature and the mathematical equations that are used to model them. Comparison against experiment is the only way to quantify model error and is the only connection between a simulation and reality. If a particular experiment were perfectly characterized, measured data were free from error, and the mathematical model equations were solved perfectly, the difference between the mathematical solution and the experimental measurement would be the model error for the measured quantity. In practice the picture is muddied by imperfect characterization of experiments, imperfect measurements, numerical approximations of the mathematical model equations, and coding errors. Unless these factors are quantified and controlled, it is difficult to deduce model error, which in turn makes it difficult to assess the predictive capability of a simulation system.

Even if model error can be quantified for a given set of experimental measurements, it is difficult to draw justifiable broad conclusions from the comparison of a finite set of simulations and measurements. Importantly, if one has made comparisons for any set of experiments, it is not clear how to estimate the accuracy of a simulated quantity of interest for an experiment that has not yet been done. Said another way, it is not clear how to assess the proximity of a new problem to existing experimental experience or the likelihood that the simulation error for the next problem is similar to that for previous problems. Such assessments cannot be accomplished without heavy reliance upon expert judgment.

In the end there are inherent limits in the ability to quantify uncertainty. Such limits might arise from the paucity of underground nuclear data and the circularity of doing sensitivity studies using the same codes that are to be improved in ways guided by the sensitivity studies.

REPRESENTATION OF A SIMPLE PERFORMANCE GATE

Finding 1-4. There is much more to QMU than one or a few margin-to-uncertainty (M/U) ratios. By themselves, these ratios cannot convey all of the information needed for proper assessment, nor can one or a few probability distributions.

A performance gate is represented by a range of values for some performance metric that must be achieved for success. A performance threshold, on the other hand, is a value of a metric that must be exceeded to achieve success. The value of the threshold is uncertain, and the value of the metric comes from calculations that are also uncertain.

[FIGURE 2-2 Cliff chart representation of warhead performance. Secondary yield is plotted against primary yield. TBE = best estimate of the threshold (minimum primary yield); VBE = best estimate of the metric (primary yield) at the low end of the operating range; U1 = uncertainty in the reproducibility of VBE at the low end of the operating range; U2 = uncertainty in the secondary output at TBE; the margin M spans from TBE to VBE; total uncertainty U = U1 + U2.]

In this example (see Figure 2-2) the simplest use of QMU is to compare a single estimated number for a margin, M, against a single estimated number for the uncertainty, U. Here M is the difference between the best-estimate value of the lower bound of the design range of the metric, VBE (in the figure, the primary yield), and the best-estimate value of the upper bound of the threshold, TBE (in the figure, the minimum primary yield). U is the sum of two uncertainties. One, U1, represents how much lower the metric's actual value, Vtrue, might be than its best estimate, and the other, U2, represents how much higher the actual threshold, Ttrue, might be than its best estimate.

If one interprets TBE + U2 as the maximum credible value of the threshold and VBE - U1 as the minimum credible value of the metric, then one can interpret the difference (VBE - U1) - (TBE + U2) as a measure of confidence or comfort that the performance gate has been passed. If the difference is positive—that is, if there is "white space" between the maximum credible threshold and the minimum credible metric value—there is some basis for confidence that the gate has been passed. The larger the difference is, the greater the confidence. We note that this difference is simply M - U and that a positive M - U means a ratio M/U that exceeds unity.
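Restating the relations just described, with the notation of Figure 2-2, as a LaTeX math block:

```latex
% Cliff-chart relations from the discussion above (requires the amsmath package).
\begin{align*}
  M &= V_{\mathrm{BE}} - T_{\mathrm{BE}}, \\
  U &= U_1 + U_2, \\
  \text{``white space''} &= (V_{\mathrm{BE}} - U_1) - (T_{\mathrm{BE}} + U_2) = M - U, \\
  M - U > 0 \quad &\Longleftrightarrow \quad M/U > 1 \qquad (U > 0).
\end{align*}
```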

INTRODUCING MORE COMPLEX PROBABILITY DISTRIBUTIONS

The interpretations described above can be criticized on several grounds. First, taking TBE + U2 as the maximum credible threshold implies that the uncertainty in the threshold is bounded. This implication is equivalent to assuming that the distribution of threshold values arising from all sources of uncertainty does not have a significant tail. A similar comment applies to the metric value. Some observers argue that these distributions may have tails. Second, even if the distributions have finite extent, it may be difficult to demonstrate that U1 and U2 actually encompass the full extent of those distributions and thus that essentially 100 percent of the possible scenarios are within the given bounds. Third, even if U1 and U2 come from compact, finite distributions, the methods for estimating U1 and U2 contain assumptions, approximations, neglected factors, and other sources of uncertainty. It follows that the values of U1 and U2 are not precisely known. This calls into question the interpretations of "maximum credible value" and "minimum credible value." In order for these uncertainties to be meaningful, they should be prescribed unambiguously. A commonly used measure is the number of standard deviations—such as 1σ, 2σ, or 3σ—of the uncertainty probability distribution. At the moment, there is no universally accepted definition at the laboratories of whether uncertainty refers to 1σ or 2σ (see Table 4-1).

How does one handle this? For a particular gate of interest, the laboratories interpret U1 and U2 as arising from distributions of finite extent and consider that the values they estimate are attempts to encompass the entire bound. However, they recognize the third criticism above and do not claim that (M - U) > 0 is sufficient but rather that (M - U) should be significantly greater than zero (or M/U significantly greater than unity) in order to inspire confidence.

If the distributions have tails, and if one knows the type of distribution, it could be very helpful to quantify uncertainties in terms of standard deviations. This approach facilitates meaningful quantitative statements about the likelihood of successful functioning. For example, if the threshold distribution is normal and U2 is its standard deviation (U2 = 1 × σ2) and TBE is assumed to be the mean, then we know that there is approximately a 16 percent chance that the true threshold, Ttrue, is greater than (TBE + U2). If the distribution of the metric values V is known, similar quantitative statements can be readily made about the likelihood that Vtrue < Ttrue, which implies failure. This kind of statement is based on a knowledge of the actual shapes of meaningful distributions, of course, which may be difficult to find.

To the extent (which is considerable) that input uncertainties are epistemic and that probability distribution functions (PDFs) cannot be applied to them, uncertainties in output/integral parameters cannot be described by PDFs. A bounding approach to the epistemic input uncertainties must be applied, and the output/integral uncertainties can only be bounded rather than being specified even in part by a PDF.
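A short sketch of the arithmetic behind the 16 percent figure above, extended to the failure probability P(Vtrue < Ttrue) under the added assumption (not the report's) that the metric and threshold are independent normal variables with hypothetical parameters:

```python
import math

def std_normal_cdf(x):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Chance that the true threshold exceeds TBE + U2 when the threshold is normal
# with mean TBE and standard deviation U2: 1 - Phi(1), roughly 0.16.
p_exceed_one_sigma = 1.0 - std_normal_cdf(1.0)

# Hypothetical numbers (arbitrary units) for V ~ N(VBE, U1^2) and T ~ N(TBE, U2^2).
V_BE, U1 = 1.00, 0.05
T_BE, U2 = 0.80, 0.05
margin = V_BE - T_BE
p_failure = std_normal_cdf(-margin / math.sqrt(U1**2 + U2**2))   # P(V_true < T_true)

print(f"P(T_true > T_BE + U2) = {p_exceed_one_sigma:.3f}")
print(f"P(V_true < T_true)    = {p_failure:.2e}")
```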

DIRECT COMPUTATION OF DISTRIBUTION OVERLAP

A similar approach that has been suggested is to compute distributions and use them directly—that is, without necessarily trying to identify values for M or U—to assess confidence that a performance gate is passed. This general idea appears to avoid some of the issues discussed above, such as how to rigorously define numbers such as M and U. The laboratories have computed distributions for several years as part of their sensitivity analyses, and they are evaluating how best to interpret them.

Particular attempts to implement this general idea are also open to criticism. First, they have not yet been shown, by analysis or demonstration, to be feasible with the full required scope and with present computing capability. Second, there is no obvious relation between confidence that a gate is passed and the specific metrics that have been proposed, such as the fraction of a given distribution that overlaps with some reference distribution or the width of a given distribution compared with a reference distribution. It could be challenging to devise a metric that does have the desired relation. Third, great care should be taken not to over- or misinterpret these distributions. Conclusions based on such misinterpretations could lead to harmful decisions. Fourth, there are questions about the meaning contained in the shape of these distributions, for they are direct results of the shapes of the distributions assumed for input parameters. In the presentations to the committee, the input distributions were simply uniform, meaning that each selected input value was considered just as likely as any other. The meaning of the shapes of the output distributions is not clear in this case. Fifth, there are similar questions about the meaning of the span of the distributions, which is a direct result of the span chosen for the input parameters as well as the particular (small set of) input parameters chosen for variation. If additional uncertain parameters are chosen, the width of the output distributions will almost certainly increase, the chosen metrics used to describe the distributions will change, and the conclusions drawn from the analysis also will likely change. If analysis results are sensitive to judgment-based choices of analysis inputs, then care should be taken to transparently show the effects of judgment on the results.

Raymond Orbach, Undersecretary of Energy for Science, presentations to the committee on October 26, 2007, and February 18, 2008.
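For concreteness, the sketch below shows what one of the overlap metrics criticized above computes: the fraction of a sampled output distribution that overlaps a reference distribution, estimated from histograms. The distributions and numbers are hypothetical, and the sketch illustrates the metric rather than endorsing it.

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(1.00, 0.05, 20_000)   # hypothetical reference outputs
candidate = rng.normal(0.97, 0.07, 20_000)   # hypothetical candidate outputs

bins = np.linspace(0.7, 1.3, 121)
p_ref, _ = np.histogram(reference, bins=bins, density=True)
p_can, _ = np.histogram(candidate, bins=bins, density=True)

# Overlap coefficient: the integral of min(p_ref, p_can); 1.0 means the two
# histograms coincide, 0.0 means they share no support.
overlap = np.sum(np.minimum(p_ref, p_can) * np.diff(bins))
print(f"estimated overlap fraction = {overlap:.2f}")
```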

Recommendation 1-4. The national security laboratories should further develop the QMU methodology to aggregate and propagate uncertainties. For full-system simulations, it is important to explore the validity and efficiency of alternative means of sampling the large input-parameter space to determine the expected performance output of the warhead and its uncertainty.

Regardless of flaws in a particular distribution-based approach, the laboratories need to continue to evaluate a variety of approaches to quantifying confidence, including distribution-based approaches. No single approach presented to date is without flaws; further work is needed to characterize and reduce these flaws.

Full-system calculations are more commonly carried out using a staged computational approach, wherein it is necessary to be concerned with how uncertainties calculated in one stage (e.g., simulations of primary performance) using a sampling of input parameters are aggregated and/or propagated into the second-stage calculations (e.g., simulations of secondary performance). In the first stage, for example, uncertainties in pit mass and surface finish propagate to variations in cavity compactness; the latter then lead to variations in boost yield and, ultimately, variations in primary yield. Clearly, the imprint of a variation is carried through the entire first-stage calculation. These variations, however, do not propagate without designer intervention into a set of second-stage calculations (secondary performance for a given primary input), which are carried out independently.

Ideally, a full-system calculation could have a range of input parameters (uncertainties) in the codes, in material equations of state, and in the specification of parts. A hands-off calculation for a particular choice from the enormous parameter set, for instance, gives as output a yield of the weapon. The uncertainties in output are then determined from a series of such full-system calculations, done by sampling the parametric spaces of input parameters. No compounding rule is needed for this approach.

See, for example, M.D. McKay, R.J. Beckman, and W.J. Conover, A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code, Technometrics 21(2)(1979): 239-245; and J.C. Helton and F.J. Davis, Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems, Reliability Engineering and System Safety 81(1)(2003): 23-69.

For example, evidence theory, possibility theory, and interval analysis. See, for example, T.J. Ross, Fuzzy Logic with Engineering Applications, 2nd ed., New York, N.Y.: Wiley (2004); T.J. Ross, J.M. Booker, and W.J. Parkinson (eds.), Fuzzy Logic and Probability Applications: Bridging the Gap, Philadelphia, Pa.: Society for Industrial and Applied Mathematics (2002); C. Baudrit and D. Dubois, Practical Representations of Incomplete Probabilistic Knowledge, Computational Statistics & Data Analysis 51(1)(2006): 86-108; and J.C. Helton, J.D. Johnson, and W.L. Oberkampf, An Exploration of Alternative Approaches to the Representation of Uncertainty in Model Predictions, Reliability Engineering and System Safety 85(1-3)(2004): 39-71.

See, for example, R.J. Breeding, J.C. Helton, E.D. Gorham, and F.T. Harper, Summary Description of the Methods Used in the Probabilistic Risk Assessments for NUREG-1150, Nuclear Engineering and Design 135(1)(1992): 1-27; U.S. Nuclear Regulatory Commission, Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants, NUREG-1150, Vols. 1-3, Washington, D.C.: U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Division of Systems Research (1990-1991); and J.C. Helton and R.J. Breeding, Calculation of Reactor Accident Safety Goals, Reliability Engineering and System Safety 39(2)(1992): 129-158.
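A minimal sketch, under stated assumptions, of Latin hypercube sampling (the stratified scheme cited in the notes above) driving a hypothetical full-system surrogate, so that output uncertainty is read directly off the sampled outputs with no compounding rule. The input names echo the examples in the text, but their ranges and the surrogate formula are invented for the illustration.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified sample per equal-probability bin in each dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])   # decouple the strata across dimensions
    return u                                 # stratified samples in [0, 1)

rng = np.random.default_rng(2)
u = latin_hypercube(200, 3, rng)

# Map the unit-interval samples onto hypothetical input ranges (illustrative only).
pit_mass = 0.98 + 0.04 * u[:, 0]
surface_finish = 0.90 + 0.20 * u[:, 1]
drive = 0.95 + 0.10 * u[:, 2]

def full_system_surrogate(pit_mass, surface_finish, drive):
    """Hypothetical stand-in for an end-to-end (primary plus secondary) calculation."""
    primary_yield = drive * pit_mass**2 * (1.0 - 0.1 * (1.0 - surface_finish))
    return primary_yield**1.5   # arbitrary secondary response

yields = full_system_surrogate(pit_mass, surface_finish, drive)
print(f"5th-95th percentile yield band: {np.percentile(yields, [5, 95])}")
```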

Finding 1-5. QMU cannot be reduced to a black box of mathematical formulas. It relies upon expert judgment and will continue to do so for the foreseeable future.

The successful application of QMU requires a great deal of expert judgment from scientists and engineers with relevant weapons expertise—especially weapons designers—particularly in quantifying uncertainties. This expertise is supported by advanced computer facilities. Several designers noted that expert judgment is based on experience; the number of experts with these capabilities will decline unless ongoing efforts to support necessary projects and experiments and to attract and retain quality staff continue to succeed.

See, for example, B.M. Ayyub, Elicitation of Expert Opinions for Uncertainty and Risks, Boca Raton, Fla.: CRC Press (2001); R.J. Budnitz, G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris, Use of Technical Expert Panels: Applications to Probabilistic Seismic Hazard Analysis, Risk Analysis 18(4)(1998): 463-469; R.M. Cooke, Experts in Uncertainty: Opinion and Subjective Probability in Science, New York, N.Y.: Oxford University Press (1991); and S.C. Hora and R.L. Iman, Expert Opinion in Risk Analysis: The NUREG-1150 Methodology, Nuclear Science and Engineering 102(4)(1989): 323-331.

Recommendation 1-5. To implement assessment methodologies such as QMU effectively, NNSA and the national security laboratories should explore all options to retain a quality staff of weapons designers, engineers, and computer scientists.

PHENOMENOLOGY OF NUCLEAR EXPLOSIONS

Finding 1-6. The identification of performance gates and the margin and uncertainty of each gate is incomplete. The application of QMU to some of the gates that have been identified is incomplete.

As a prelude to a discussion about performance gates, it is important to review some of the critical physics processes in a nuclear explosion and how they are simulated. These explosions produce the most extreme temperature, pressure, and radiation conditions encountered on earth. The multistep process that produces such an explosion cannot be observed directly. (More information on this topic is included in Figure B-1 and Figure B-2 in the classified Annex.) Rather, knowledge of this process has been pieced together from physics understanding, experiments, full-scale nuclear tests (primarily underground nuclear tests), and expert judgment. (More information on this topic is included in Note 2 in the classified Annex.)

In the absence of a detailed physics understanding of these phenomena, the labs use four knobs (see Glossary) to represent them in the simulation models. (More information on this topic is included in Figure B-1 in the classified Annex.) Each knob is a parameter in the simulation codes that can be adjusted to match important features of underground nuclear test data and of experiments on devices of similar design. Collectively, these four knobs represent the largest gap in scientific understanding of the nuclear explosive process. Much of the ongoing weapons physics work at the labs is focused on gaining a better understanding of the physics underlying these knobs. (More information on this topic is included in Note 3 in the classified Annex.)

ROLE OF MODELING AND SIMULATION IN QMU

In the QMU framework, modeling and simulation tools are used to determine the margins, M, for the various performance gates. They are also used in conjunction with experiments to estimate uncertainties, U. In addition, the effect that the performance at one gate has on the performance of downstream gates needs to be determined.

The performance gates must be considered by any of the methodologies that inform the QMU methodology. Performance gates can also be considered as checkpoints that assess the performance margins of key parameters as the explosion progresses. A similar list of safety and security gates and failure points is also required. (More information on this topic is included in Note 4 and Table B-1 in the classified Annex.) Communication and transparency between the two labs would be enhanced if they were to draw up a comprehensive list of these gates and metrics. This point is discussed further in Chapter 4.

Finding 1-7. In the development and implementation of the QMU process, the national security laboratories are not taking full advantage of their probabilistic risk assessment expertise. For example, the distinction made by probabilistic risk assessment experts between probability of frequency and probability is a concept believed to have merit in QMU applications. PRA concepts have demonstrated their value in assessing performance measures or gates such as safety and security and could contribute to making the assessment of weapons risk issues more transparent.

The committee observed that the national security laboratories have considerable expertise in probabilistic risk assessment, a discipline developed over the past several decades to facilitate the assessment of rare events for which there are limited data and testing results. The laboratories do not appear, however, to be drawing much on that expertise to supplement and possibly enhance the QMU process.

See, for example, R.P. Rechard, Historical Relationship Between Performance Assessment for Radioactive Waste Disposal and Other Types of Risk Assessment, Risk Analysis 19(5)(1999): 763-807; U.S. Nuclear Regulatory Commission, Reactor Safety Study—An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH-1400 (NUREG-75/014), Washington, D.C. (1975); and W.H. Lewis, R.J. Budnitz, H.J.C. Kouts, W.B. Loewenstein, W.D. Rowe, F. von Hippel, and F. Zachariasen, Risk Assessment Review Group Report to the U.S. Nuclear Regulatory Commission, NUREG/CR-0400, Washington, D.C.: U.S. Nuclear Regulatory Commission (1978).

Probabilistic risk assessment and QMU face similar challenges to quantify the risk and performance of complex systems for which testing results and data are very limited. In both cases, the quantification of uncertainties is essential but very difficult to do in a transparent manner (see Appendix A, prepared by study committee member B. John Garrick, for a more detailed discussion of how PRA might be able to contribute to the QMU process). While PRA has historically focused on the risk of system failures and current QMU efforts are primarily targeting nuclear weapons reliability, QMU must eventually address issues of safety and security. PRA concepts may help with this. Many concepts and ideas developed in the PRA field could contribute significantly to the QMU methodology in both reliability and risk applications, especially with respect to making the process more transparent. Examples are the "probability of frequency" concept for interpreting and presenting results (discussed in the example illustrated in Figure 2-1), the scenario approach for linking initiating events and initial conditions to events of interest, and methods of quantifying uncertainties.

Perhaps the biggest contribution that probabilistic risk assessment could make to enhance the QMU process would be a comprehensive PRA for each basic weapons system. The probability of frequency approach would be the best format for applying PRA because there is uncertainty in the frequency with which any performance metric occurs.

The resulting information and knowledge base could complement and contribute to the credibility of the QMU process. As noted in Appendix A, probabilistic risk assessments greatly expand the knowledge base of systems while facilitating their analysis and fundamental understanding.

Recommendation 1-7. The national security laboratories should investigate the utility of a probability of frequency approach in presenting uncertainties in the stockpile.

See, for example, J.C. Helton and R.J. Breeding, Calculation of Reactor Accident Safety Goals, Reliability Engineering and System Safety 39(2)(1993): 129-158.

As noted in the simplified example illustrated in Figure 2-1 and further discussed in Appendix A, representing failure modes in terms of probability of frequency could provide decision makers with a richer understanding of the uncertainties—and a clearer notion of how to address them—than could estimating the reliability or M/U.
