
The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.


Summary

BACKGROUND

In order to meet their obligation to help maintain the capabilities of the nuclear weapons stockpile and to perform the annual assessment for the stockpile's certification, the national security laboratories—Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL)—of the National Nuclear Security Administration (NNSA) employ a wide range of processes, technologies, and expertise. The quantification of margins and uncertainties (QMU) framework plays a key role in helping to link those three elements. While it does not replace existing assessment methodologies, QMU makes a number of critical contributions. Concerns about its use, however, led the Congress to ask the National Research Council to evaluate (1) how the national security labs were using QMU, including any significant differences among the three labs; (2) its use in the annual assessment; and (3) whether the application of QMU to assess the proposed reliable replacement warhead (RRW) could reduce the likelihood of resuming underground nuclear testing. This request was endorsed by the NNSA.

Throughout this report, the terms "nuclear test" and "nuclear testing" refer to nuclear explosions.

MAJOR FINDINGS AND RECOMMENDATIONS

QMU is a sound and valuable framework that helps the national security laboratories carry out the Department of Energy's (DOE) responsibility to maintain the nation's nuclear weapons capabilities. Its value is evident in many ways, including for the organization of the many stockpile stewardship tools, such as the advanced simulation and computing codes, and for the allocation of important resources. The national security laboratories and NNSA should expand their use of QMU while continuing to develop, improve, and increase application of the methodology. While they have focused much attention on uncertainty quantification, a broader effort is needed in this area, including further development of the methodology to identify, aggregate, and propagate uncertainties. In a related issue, the identification of performance gates (see Glossary) and their margins is incomplete.

QMU also relies on expert judgment, and effective implementation of QMU will depend on maintaining a quality staff at the national security labs, particularly weapons designers. Finally, the national security labs are not taking full advantage of their own probabilistic risk assessment capabilities. Several probabilistic risk assessment concepts could be applied to QMU applications. In particular, the national security labs should investigate the probability of frequency (see Glossary) approach in presenting uncertainties.

The application of QMU in the annual assessment review conducted by the national security laboratories is growing and providing important insights, such as a basis for confidence in stockpile performance. Its use in the review is still limited, however, and should be expanded. In particular, margins (M) and uncertainties (U) should be reported for all gates that are judged to be critical for warhead performance.
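The margin-to-uncertainty bookkeeping discussed above can be sketched in a few lines of code. This is purely illustrative: the gate names, thresholds, and numbers below are hypothetical, and a real assessment aggregates many classified uncertainty sources rather than a single value per gate.

```python
# Illustrative sketch of reporting M and U for performance gates.
# All names and numbers are hypothetical; this is not the labs' method,
# only the basic arithmetic of a margin-to-uncertainty comparison.

def confidence_ratio(best_estimate, threshold, uncertainty):
    """Return the margin M and the ratio M/U for one performance gate.

    M is the distance between the best-estimate performance and the
    minimum acceptable threshold; U is the aggregated uncertainty in
    that estimate. M/U > 1 suggests the gate is met with some
    confidence; M/U <= 1 flags the gate for closer study.
    """
    margin = best_estimate - threshold
    return margin, margin / uncertainty

# Hypothetical gates: (best estimate, threshold, aggregated uncertainty)
gates = {
    "gate A": (10.0, 6.0, 1.5),
    "gate B": (5.0, 4.0, 0.8),
}

for name, (est, thr, unc) in gates.items():
    m, ratio = confidence_ratio(est, thr, unc)
    print(f"{name}: M = {m:.1f}, M/U = {ratio:.2f}")
```

Reporting both M and U per gate, rather than a single pass/fail verdict, is what lets reviewers see how much margin remains after uncertainty is accounted for.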
While there are differences among the national security labs in how the QMU methodology is implemented, these differences can enhance the development of QMU. Different approaches for estimating uncertainties, for example, should continue to be explored. Differences in definitions and terminology, however, can inhibit communication and transparency, and the national security labs should agree upon a common set of definitions and terms. Consistency and transparency of the application of QMU are also being inhibited by the lack of documentation. Both NNSA and the labs should issue QMU guidance documents in time for the current assessment cycle.

QMU can be used to evaluate new warheads, such as the RRW design, and for certification. If the design of a new nuclear warhead is sufficiently "close" to existing tested designs, the new warhead could, in principle, be certified without nuclear tests, based on archival tests, modeling and simulation tools, and a more mature QMU methodology. The design labs (LANL and LLNL) should provide detailed justification for use of archival tests to support any proposed RRW design and investigate ways to help quantify "closeness." Also essential for a credible RRW certification process are expanded peer review, documentation, and experimentation without nuclear testing.