Methods for implementing QMU continue to evolve, as they should, and the laboratories should explore different approaches as a means of determining the best approach for a given warhead. For example, LANL has focused on estimating uncertainties through sensitivity analysis, examining the variations in simulated primary yield that result from variations in input parameters (e.g., pit mass) for a given weapon. LLNL, on the other hand, has attempted to develop a comprehensive model that explains the test results for a variety of different primaries and thus addresses modeling uncertainties as well as parameter uncertainties.
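To make the contrast concrete, the sketch below illustrates the general shape of a parameter sensitivity analysis of the kind described above. It is a minimal illustration, not laboratory practice: `simulate_yield` is a hypothetical toy surrogate standing in for a weapons simulation code, and the parameter names and uncertainty values are invented placeholders.

```python
# Schematic one-at-a-time sensitivity analysis (illustrative only).
# `simulate_yield` is a hypothetical stand-in for a laboratory
# simulation code; all parameters and numbers are placeholders.

def simulate_yield(params):
    # Toy surrogate: yield responds nonlinearly to the inputs.
    return 10.0 * params["pit_mass"] ** 1.5 * params["boost_gas"] ** 0.5

nominal = {"pit_mass": 1.0, "boost_gas": 1.0}        # normalized units
uncertainty = {"pit_mass": 0.02, "boost_gas": 0.05}  # 1-sigma fractions

y0 = simulate_yield(nominal)
variance = 0.0
for name, sigma in uncertainty.items():
    perturbed = dict(nominal)
    perturbed[name] = nominal[name] * (1.0 + sigma)
    # Finite-difference sensitivity of yield to this one parameter.
    dy = simulate_yield(perturbed) - y0
    variance += dy ** 2  # combine in quadrature, assuming independence

print(f"nominal yield: {y0:.3f}, 1-sigma uncertainty: {variance ** 0.5:.3f}")
```

In this one-at-a-time scheme, each input is perturbed by its assumed one-sigma uncertainty while the others are held at nominal values, and the resulting yield changes are combined in quadrature under an independence assumption; it captures parameter uncertainty but, by construction, says nothing about the adequacy of the model itself, which is the gap the comprehensive-model approach attempts to address.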

Some differences in approach arise naturally from the different missions of the laboratories. For example, much of SNL’s work is different from that of the design labs, because it involves warhead components other than the nuclear explosive package (e.g., the firing set and the neutron generator). In principle, SNL can test its systems under many of the relevant conditions; for these conditions SNL is not forced to rely on simulation codes to generate estimates of thresholds, margins, and uncertainties. For practical reasons, SNL cannot test statistically significant numbers of some of its components and therefore still uses computational modeling; however, the models can be challenged in many cases by full system tests. LANL and LLNL, on the other hand, cannot perform full system tests and must instead rely heavily on simulation codes in their assessments of margins and uncertainties. SNL also cannot test its components in “hostile” environments, in which the warhead is subjected to a nearby nuclear explosion, and thus much of its hostile-environment work shares many of the challenges faced by the design laboratories.

Recommendation 3-1. The national security laboratories should continue to explore different approaches to QMU analysis and uncertainty estimation (for example, using different codes). These different methods should be evaluated in well-defined intra- and interlaboratory exercises.

Differences in methodology are potentially positive, leading to healthy competition and the evolution of best practices. Determining a best practice, however, requires the ability to assess the competing practices against one another. The committee has not seen an assessment of competing uncertainty quantification methodologies at any of the laboratories, nor has it seen even an organized attempt to compare them.
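One way such a comparison might be organized, sketched below under the same toy assumptions as the earlier example, is to run competing uncertainty-estimation methods on a common benchmark problem and compare the resulting estimates; here a linearized one-at-a-time propagation is set against Monte Carlo sampling of the same input distributions. All names and values are hypothetical.

```python
import random

# Compare two uncertainty-quantification approaches on a common
# toy benchmark (hypothetical; see the earlier sketch).

def simulate_yield(params):
    return 10.0 * params["pit_mass"] ** 1.5 * params["boost_gas"] ** 0.5

nominal = {"pit_mass": 1.0, "boost_gas": 1.0}
sigma = {"pit_mass": 0.02, "boost_gas": 0.05}  # 1-sigma fractions

# Method A: linearized one-at-a-time propagation.
y0 = simulate_yield(nominal)
var_a = sum(
    (simulate_yield({**nominal, k: nominal[k] * (1 + s)}) - y0) ** 2
    for k, s in sigma.items()
)

# Method B: Monte Carlo sampling of the same input distributions.
random.seed(0)
samples = [
    simulate_yield({k: random.gauss(nominal[k], nominal[k] * s)
                    for k, s in sigma.items()})
    for _ in range(10_000)
]
mean_b = sum(samples) / len(samples)
var_b = sum((y - mean_b) ** 2 for y in samples) / (len(samples) - 1)

print(f"linearized 1-sigma:  {var_a ** 0.5:.3f}")
print(f"Monte Carlo 1-sigma: {var_b ** 0.5:.3f}")
```

An exercise of this kind, scaled to realistic problems, is what the recommended intra- and interlaboratory comparisons would require: agreement between methods builds confidence, while disagreement exposes where linearization, sampling, or modeling assumptions break down.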

Finding 3-2. Differences among the national security laboratories in their definitions of some QMU terms cause confusion and limit transparency.

Table 4-1 shows that, in some cases, earlier concerns about inconsistencies continue to be valid. (More information on this topic is included


