1
Overview

INTRODUCTION

Context

When the moratorium on nuclear testing went into effect in 1992, the Department of Energy (DOE) began to develop other ways to maintain the nation’s nuclear weapons stockpile. In 1993, Congress and the President directed DOE to “establish a stewardship program to ensure the preservation of the core intellectual and technical competencies of the United States in nuclear weapons.”1 The Stockpile Stewardship Program’s (SSP’s) objective was to develop ways to simulate—with computer models and experiments that remain subcritical—the various processes that take place during a nuclear weapon explosion and to apply the knowledge gained to extend the life of the existing weapons in the stockpile. The SSP evolved over time, and in 1999, DOE created 18 subprograms—called campaigns—to organize the science and stockpile maintenance activities.2 The objective of these campaigns has been to develop the critical capabilities for assessing the performance, safety, and reliability of the stockpile without the need for nuclear testing. Finally, in 2000, Congress

1

U.S. Congress, The National Defense Authorization Act for Fiscal Year 1994, P.L. 103-160, Sec. 3135 (1994).

2

U.S. Government Accountability Office, NNSA Needs to Refine and More Effectively Manage Its New Approach for Assessing and Certifying Nuclear Weapons, GAO-06-261 (2006).



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





created a new, separate entity within DOE—the National Nuclear Security Administration (NNSA)—whose primary task is to maintain the nuclear weapons stockpile.

Every year, the SSP must assess the safety, reliability, performance, and effectiveness of the nuclear weapons stockpile in the absence of nuclear testing. The directors of the three national security laboratories—Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL)—are required to submit letters each year to the Secretary of Energy with their assessment of whether the stockpile is safe, reliable, and effective and expressing their opinions on whether nuclear testing needs to be resumed in the subsequent year to assure those conditions.

In 2001, the three national security labs began using a framework called quantification of margins and uncertainties (QMU) to help with the assessment process. QMU is a decision-support framework that provides a means for quantifying the laboratories’ confidence that the critical stages of a nuclear weapon will operate as intended. In general terms, its purpose is to provide a systematic means to apply—using sophisticated simulation models—the varied output of the science base of the SSP to the assessment of the nuclear weapons stockpile. This output includes the aboveground nonnuclear and subcritical experiments, data from past underground nuclear tests, and expert judgments of the weapons scientists.

QMU is an important part of the assessment process and one that is growing in significance. It is also used to help set priorities for SSP research and engineering activities. And it helps to identify those components or operating characteristics of the various nuclear weapons in the stockpile that put them most at risk.
Recently, the NNSA reported that extending the life of the existing stockpile would become increasingly difficult over time.3,4 It has raised concerns about how the need for continual refurbishments of existing warheads could affect the reliability of the stockpile. Over time, it argued, there would be a buildup of small changes that would cause the warhead to become more and more removed from the tested design.5 To counter

3 U.S. Congress, Congressional Research Service, The Reliable Replacement Warhead Program: Background and Current Developments, RL32929 (updated November 8, 2007), p. 1.

4 This concern relates not so much to replacement of control systems and electronics that can be fully tested but more to the nuclear explosive package itself. Thus, the nuclear explosive package is the primary target of the QMU process. It is important to clearly document changes already made or expected to be made to the nuclear explosive package that would result in the warhead becoming more and more removed from the tested design.

5 U.S. Congress, Testimony by NNSA Acting Administrator Thomas D’Agostino before the House Armed Services Committee, March 20, 2007.

this trend, NNSA proposed a major restructuring of the nuclear weapons program, including the development of a warhead6 class or type known as the reliable replacement warhead (RRW). The aim of the RRW program is to develop a warhead based on existing warheads whose performance, safety, security, and manufacturability could be assured with high confidence and whose stewardship, without the need for nuclear testing, would be relatively straightforward for decades to come. Even though it would be based on tested weapons, however, the RRW would not be identical to any existing weapon. NNSA is requiring the use of the QMU methodology as an important—but not the only—component of the certification process for the RRW. As a first step, a competition for the first RRW design was held in 2007 between LANL-SNL and LLNL-SNL, and NNSA selected the latter’s design.

Issues

The QMU framework is becoming an increasingly important part of the nuclear weapons assessment process, and the national security laboratories and NNSA express optimism about its future value to the SSP. Nevertheless, QMU is a relatively new component of the program, and both internal and external reviews over the past few years have raised issues about the QMU framework and its application to nuclear weapons assessment.7 As a consequence, both Congress and the NNSA expressed interest in late 2006 in further evaluation of the QMU framework and its application. Some of the issues driving this interest are the role of expert judgment; the difficulties in quantifying margins and uncertainties for complex systems; the variable quality and quantity of test data that are needed to validate warhead simulation codes that are used to develop good quantitative estimates of margins and uncertainties; and how to properly incorporate statistical considerations into those estimates.
It should be noted that these issues would probably arise with any approach to assessment and certification and are not unique to the QMU methodology. Furthermore, the QMU framework can be expected to evolve as new tools and methodologies become available.

One review also expressed concern about the current implementation practices of the QMU methodology.8 According to this analysis, there

6 The term warhead encompasses both missile systems and gravity bombs.

7 Review by the Government Accountability Office; JASON, Quantification of Margins and Uncertainties (QMU), JSR-04-330, Mitre Corporation (March 2005); U.S. Department of Defense, Report on the Friendly Reviews of QMU at the NNSA Laboratories, Defense Program Science Council (March 2004); Raymond Orbach, Undersecretary of Energy for Science, Presentation to the committee, October 26, 2007.

8 Review by the Government Accountability Office.

might be important differences between LLNL and LANL in the application of QMU and between the two design laboratories and SNL in the application of QMU.9

These issues are of particular interest to Congress in connection with the proposed RRW. If this warhead is to be developed without nuclear testing, an assessment that involves application of a QMU framework as a key component of the certification process appears critical.10 As with assessment of the existing stockpile, however, the other key elements of the weapons program—the underground nuclear test data archive, expert judgment, aboveground experiments, simulation models, and so on—will contribute to the assessment and certification of the RRW in ways other than as input to the implementation of the QMU framework. Nevertheless, dependence on the QMU methodology appears to be growing, leading to increased congressional interest in this aspect of the weapons program.

Statement of Task

In 2006, as a result of congressional concerns about the methodology, implementation, and likely role of QMU in any potential RRW, the House Armed Services Committee inserted language in HR 5122, the John Warner National Defense Authorization Act for Fiscal Year 2007, requesting the National Academy of Sciences to conduct an independent evaluation of the QMU methodology employed by the national security laboratories and to say whether this methodology could be used to certify an RRW without underground nuclear testing. The Senate agreed to nearly identical language in the conference report for the bill, and the request was enacted into law in the National Defense Authorization Act of FY2007, P.L. 109-364, Sec. 3. The request was independently endorsed by NNSA, which added a task to be covered in a second phase of the study.
For the purposes of this report, the Congress and DOE requested that the National Academy of Sciences carry out the following tasks:

• (1) Evaluate the use of the quantification of margins and uncertainties methodology by the national security laboratories, including underlying assumptions of weapons performance, the ability of modeling and simulation tools to predict nuclear explosive

9 SNL also uses QMU as part of its assessment and stewardship activities. Because SNL focuses on the engineering aspects of nuclear weapons, however, there are some differences in the way it applies QMU.

10 JASON, Reliable Replacement Warhead Executive Summary, JSR-07-336E, Mitre Corporation (September 2007), p. 7.

 EvALuATiON Of qMu METhODOLOGy package characteristics, and the recently proposed modifications to that methodology to calculate margins and uncertainties. • (2) Evaluate the manner in which that methodology is used to con- duct the annual assessments of the nuclear weapons stockpile. • (3) Evaluate how the use of that methodology compares and con- trasts between the national security laboratories. • (4) Evaluate whether the application of the quantification of mar- gins and uncertainties used for annual assessments and certi- fication of the nuclear weapons stockpile can be applied to the planned Reliable Replacement Warhead program so as to carry out the objective of that program to reduce the likelihood of the resumption of underground testing of nuclear weapons. • (5) Assess how archived data are used in the evaluation of mar- gins and uncertainties. This includes use for baselining codes, informing annual assessment, assessing significant finding inves- tigations (SFIs), etc. Are the design labs fully exploiting the data for QMU? Are they missing opportunities? Tasks 1 through 4 are covered in this report. A second report will cover Task 5. Some other recent congressional actions concerning the nation’s nuclear weapons program are also worth noting. 
In the FY2008 Consolidated Appropriations Act, Congress denied funding for the RRW program and provided new funding for advanced certification.11 In accompanying language, Congress stated that before any such warhead was developed, “a new strategic nuclear deterrent mission assessment for the 21st century is required to define the associated stockpile requirements and determine the scope of the weapons complex modernization plans.” Accordingly, it directed NNSA “to develop and submit to the Congress a comprehensive nuclear weapons strategy for the 21st century.” In conjunction with this strategic planning effort, Congress also requested that NNSA “develop a long-term scientific capability roadmap for the national laboratories.” In the same legislation, Congress directed NNSA to begin a new Science Campaign called Advanced Certification to address “significant systemic gaps in NNSA’s stockpile certification process” and funded this effort at $15 million.

In the conference report accompanying the FY2008 Defense Authorization Bill, Congress urged NNSA “to approach the RRW program cautiously, with a commitment to address and resolve all issues as completely

11 U.S. Congress, Consolidated Appropriations Act, 2008, Division C—Energy and Water Development and Related Agencies Appropriations Act, 2008, House Appropriations Committee Print to accompany P.L. 110-161 (2008), p. 583.

as possible.”12 The authorization bill also called for an examination of U.S. nuclear policy and strategic posture.

BACKGROUND

Definition and Current Implementation of QMU

This section provides a description of how the QMU methodology is currently being implemented. As was noted above, QMU is an important part of the process by which nuclear weapons computer simulation models, experiments producing no nuclear yield, prior underground nuclear tests, and expert judgment are brought to bear to assess the reliability of the existing weapons stockpile. The QMU process is analogous to the concept of engineering safety margins—that is, a system is designed so that its operating margins are far enough removed from the failure thresholds to instill high confidence that the system will work reliably even though the magnitude and uncertainty of the margin for a particular performance metric (for example, the voltage applied to a detonator—see Figure 1-1) may not be known with great precision. It is important to note that the QMU framework is not the only process underpinning such an assessment. This comparison of margins and uncertainties leaves out many important features and nuances. The QMU framework is discussed in greater detail in Box 1-1.

It might be helpful to consider the following example using QMU to assess the function of one of the unclassified performance gates in a typical nuclear explosive. A voltage must be applied to a detonator in order for it to function properly. Figure 1-1 provides a graphical representation of the process. The graph is called a cliff chart because the performance curve has the form of a cliff at the threshold of operation. Let us assume that it has been determined experimentally that 100 volts (V) is required for the detonator to operate. This is the threshold value, VT,BE, in Figure 1-1. Also assume that the test of many detonators of the type used shows that the required voltage varies by no more than 5 V.
This uncertainty in the threshold voltage is given by U2 in Figure 1-1. The engineers therefore design a firing system that applies 150 V with a maximum variation of 10 V. The latter is the uncertainty in the firing system voltage at its design point, U1, in Figure 1-1. The QMU analysis states that the margin, M, is 50 V (150 V – 100 V) and the total uncertainty, U, equals 10 V (U1, the uncertainty in the firing system voltage) + 5 V (U2, the uncertainty in the threshold voltage) = 15 V. Consequently, M/U = 50 V/15 V = 3.3, which is

12 U.S. Congress, H. Report 110-477 (2007), p. 1323.

[FIGURE 1-1 Cliff chart representation of detonator performance. The chart plots detonator performance against voltage: the best estimate of the threshold voltage for detonator operation, VT,BE = 100 volts, carries an uncertainty U2 = 5 volts; the design voltage of the firing system, VBE = 150 volts, carries an uncertainty (variation) U1 = 10 volts; the margin M = 150 volts – 100 volts = 50 volts; and the total uncertainty U = U1 + U2 = 15 volts.]

Box 1-1
QMU

QMU is a framework that utilizes data and analysis from many experimental and computational sources to help assess the performance of nuclear weapons. One purpose of this framework is to provide a transparent, systematic approach by which designers and the national security laboratories can do two things:

• Quantify their confidence in the performance and reliability of a weapon design through a set of high-level metrics, such as the ratio of a performance margin, M, to the uncertainties of important weapon subsystems.
• Communicate this confidence clearly to people outside the design community, including government officials and the general public.

The scientific and mathematical methodologies included in the QMU framework have been applied by the laboratories to evaluate both the yield margin for a primary to drive a secondary and a related overall uncertainty. However, it should not be thought that the construction of a single overall margin and overall uncertainty is the essence of how the labs implement QMU. The QMU framework and its many experimental, analytic, and computational resources are also applied to many weapons subsystems and subelements in ways most appropriate to them, with the quantification of margins and uncertainties for each. Ultimately these many margins and uncertainties are combined in assessing the entire nuclear explosive package. QMU has been used for today's nuclear weapons and their predecessors as well as for the RRW design.
QMU provides input for a risk-informed decision-making process that combines quantitative analysis and expert judgment and requires answers to the following questions:

• What are the key gates through which a weapon's performance must pass in order to meet its design objectives? Can a necessary and sufficient set be identified?
• What are the key metrics for each gate, and what are the thresholds for each metric?

• If a gate is passed with only a narrow margin, how does this affect the thresholds of other gates?
• What are the best and most reliable tools for quantifying or (when precise quantification is unnecessary) bounding technical and scientific uncertainty?
• What are the most important scientific and engineering uncertainties?
• How much is enough? That is, how much understanding of a particular basic science or engineering issue is needed? How small must an uncertainty be made, consistent with achieving the confidence required for weapons performance, reliability, and surety?
• What is the uncertainty budget (or goal) for the system or subsystem under consideration?
• How are subsystem uncertainties being aggregated into a bound or quantification of overall system uncertainty?
• What joint computational and experimental activities are needed?

One fundamental scientific tool is the estimation of uncertainties with sensitivity analyses, as applied to or backed up by an assessment of failure modes, cliffs, margins, mining the data from underground nuclear tests, and experimental validation. The fundamental output of QMU is a measure, common to the three national security laboratories, of confidence in the performance of specific systems or subsystems as quantified in a comparison of margins, M, and uncertainties, U, in a credible and transparent form that is easy to convey to others.

A valid assessment and certification QMU methodology has five essential criteria. It must be (1) complete, (2) connected, (3) validated, (4) demonstrated, and (5) communicated. The completeness aspects of QMU can benefit from the structured methodology and discipline of quantitative risk assessment and probabilistic risk assessment. Construction of a functional sequence also allows experts of all varieties to weigh in on the problem. An ideal QMU methodology is not complete until all failure modes are identified and incorporated. Transparency is important here.
A QMU methodology is connected if the interactions between failure modes are included. Application of QMU to only a single failure mode provides no connections. It is not enough to calculate a valid margin and uncertainty for each failure mode. The functional interconnection of modes must be included. For instance, if one failure mode passes near its minimum (or maximum), it can affect the nominal performance of subsequent failure modes. The convolution and propagation of uncertainties is almost certainly an important part of QMU. The evaluation of the stockpile sequence must be connected to the deployment and use sequence. Changes and differences in the stockpiled warheads do affect both the reliability and performance of the nuclear explosive package.

A QMU methodology should be validated and verified in much the same way as a simulation.

Demonstration is the key to transparency and acceptance of a QMU methodology. QMU must be applied to a real example. Demonstration is the strongest form of definition. The obvious example is the RRW. A demonstration of RRW certification will show that a relaxation of the requirements of size, weight, and limited quantities of key materials can allow the designer to increase margins. Increased margins can overcome possible increases in uncertainty for each of the failure modes. In the final analysis the values of M and U are important, but much more is required for an assessment or certification.

Finally, the QMU methodology must be communicated to the community of interested parties (labs, DOE, DOD, and Congress).

the measure of confidence that the detonator will work as designed. A similar procedure is applied to the other performance gates that perform in a serial or serial/parallel fashion. It should be emphasized that in this simple example, the uncertainties can be added. In a more complex system, the uncertainties might be statistical in nature and such a simple aggregation might not be valid. Also, the computation of the margin and uncertainties did not require sophisticated computational models, which would be needed for more complex systems.

For application to nuclear weapons, the centerpiece of the QMU methodology is the set of complex weapons simulation codes that have been developed over the last 65 years. These codes are made up of many physical models describing weapons physics, data from prior underground nonnuclear and nuclear tests, data from subcritical and aboveground experiments, expert judgment, and properties of materials (equations of state, opacity, nuclear reaction cross sections, etc.). Adjustments are made to various inputs and model parameters in the simulation to match analytic calculations and selected test data.

The first step in the QMU methodology is to identify the critical performance gates. The term “performance gate” will be used throughout the report to represent performance indicators, checkpoints, thresholds, etc. As such, a performance gate is represented by a range of acceptable values, defined by subsystem margins and uncertainties, for the performance of each of many subsystems in the chain of events occurring in a nuclear explosion. Performance gates are associated with the key components and operating characteristics of the weapon whose failure would severely compromise the overall performance of the weapon. The performance of these components and their characteristics are described by metrics (quantitative measures) determined by experimental data and computer simulations.
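The detonator arithmetic, and the caution that simple addition of uncertainties may not be valid when the uncertainties are statistical, can be sketched in a few lines of Python. Only the detonator values come from the text; the other two gates and their numbers are invented purely for illustration:

```python
# Detonator example from the text, plus two invented gates for illustration.
gates = {
    "detonator": {"margin": 50.0, "uncertainties": [10.0, 5.0]},  # volts
    "gate_2": {"margin": 8.0, "uncertainties": [2.0, 3.0, 1.5]},  # invented
    "gate_3": {"margin": 4.0, "uncertainties": [1.0, 2.5]},       # invented
}

def linear_sum(us):
    """Simple addition, as in the detonator example: U = U1 + U2."""
    return sum(us)

def root_sum_square(us):
    """Statistical combination, appropriate only for independent random terms."""
    return sum(u * u for u in us) ** 0.5

for name, gate in gates.items():
    m = gate["margin"]
    u_lin = linear_sum(gate["uncertainties"])
    u_rss = root_sum_square(gate["uncertainties"])
    # For the detonator, the linear result reproduces M/U = 50/15 = 3.3.
    print(f"{name}: M/U = {m / u_lin:.2f} (linear) vs {m / u_rss:.2f} (RSS)")
```

For gates operating in series, attention naturally focuses on the smallest M/U in the chain; as the text notes, the choice of aggregation rule is itself part of the methodology, not a detail.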
A metric can be any quantity that depends on the physical characteristics and state of the system and/or its operation. Performance gates test cliffs (thresholds), quantities, configurations, and coincidences. They are “high-level indicators of some aspect of the system’s operation.”13 For simplicity, only cliffs are addressed in this section. An example is the energy of the imploding plutonium (Pu) pit. The system can be represented by performance gates through which or cliffs over which the metric must “pass” for successful operation. A nuclear warhead is so complex that it is not easy to devise a quick and readily understood way of presenting information about performance gates, metrics, margins, and uncertainties. This is one of the important tasks for the QMU framework.

13 D.H. Sharp and M.M. Wood-Schultz, QMU and Nuclear Weapons Certification: What’s Under the Hood? Los Alamos Science 28 (2003): 48.

[FIGURE 1-2 Cliff chart representation of system performance. The chart plots system performance against a metric: Yp,min is the best estimate of the minimum metric (threshold), with uncertainty U2; YBE is the best estimate of the metric at the low end of the operating range, with uncertainty U1; the margin M spans from Yp,min to YBE; and the total uncertainty U = U1 + U2.]

One way of presenting results is to use the cliff chart graphical representation introduced above. Figure 1-2 shows a generalized cliff chart. This graphic predated QMU itself and it is still being used for QMU purposes. The cliff chart is a high-level summary and displays the expected system performance as a function of some metric (such as primary yield), with margins and uncertainties in that metric indicated by a simple band of values. Underlying this graphic are the detailed calculations called for by the QMU framework; they deal with a large number of metrics not explicitly indicated on the cliff chart. Generally, these metrics—including those shown explicitly on a cliff chart—are described by probability distribution functions, not by a simple band of values. There is a minimum value—threshold—that each metric must meet or exceed for the component or characteristic to operate properly. For example, the criticality of Pu in the weapon’s primary must reach a threshold value for the primary to produce sufficient yield. The amount by which a minimum expected metric exceeds this threshold value is the operating margin, M, for that component or characteristic. In normally functioning weapons, where all key metrics are operating above this margin, the system performance would be in the design range.
The codes, along with underground nuclear test data, expert judgment, and aboveground experiments,14 are used to estimate the threshold and lowest expected performance level and, accordingly, the value of the margin for each component or operating

14 Aboveground experiments are by definition subcritical (see Glossary).

characteristic. The cliff chart represents only a summary of what is done in the QMU methodology and must not be confused with the framework itself or with the huge amounts of data and simulations that back up the cliff chart.

The most difficult part of using the QMU framework for evaluating nuclear weapons performance is identifying, characterizing, quantifying, and aggregating the large number of uncertainties, U, that arise. There are uncertainties in the simulation codes’ predicted threshold value and the operating range lower boundary of the margin at each stage of the warhead process. (These are given by U2 and U1, respectively, in Figure 1-2.) One class of uncertainties is called epistemic or systematic uncertainties; it includes incomplete knowledge of the parameters describing the phenomena of interest, incorrect and missing physics models, approximations and numerical errors, code bugs, and the like. In principle these can be lessened by gathering more knowledge and data. A second class of uncertainties—random or aleatory uncertainties—is intrinsic; it includes manufacturing variability, variability in materials used, and test-to-test variability.15

The total uncertainty for a performance gate is the sum of the threshold uncertainty and the minimum performance uncertainty. If the uncertainties are large enough, they will erode or destroy confidence that the component or operating characteristic will perform as designed. Therefore, a condition of reliable operation is that the margin must be larger than the total uncertainty found by aggregating all the subsystem uncertainties. This is expressed as a confidence ratio, M/U. If M/U >> 1, the degree of confidence that the system will perform as expected should be high. If M/U is not significantly greater than 1, the system needs careful examination. (This would be even more true if M/U ≤ 1.)
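A minimal sketch of this reliability condition, with subsystem uncertainty contributions tagged as epistemic or aleatory, might look as follows. The numeric values, the example labels, and the cutoff standing in for "M/U >> 1" are all invented for illustration and are not laboratory practice:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    value: float
    kind: str  # "epistemic" (reducible with more knowledge) or "aleatory" (intrinsic)

def assess_gate(margin, contributions, high_confidence_cutoff=3.0):
    """Aggregate subsystem uncertainties by simple addition and test M against U.

    The cutoff standing in for "M/U >> 1" is an invented illustration.
    """
    total_u = sum(c.value for c in contributions)
    ratio = margin / total_u
    if ratio <= 1.0:
        return ratio, "margin does not exceed total uncertainty"
    if ratio < high_confidence_cutoff:
        return ratio, "needs careful examination"
    return ratio, "high confidence"

contributions = [
    Contribution(10.0, "epistemic"),  # e.g., incomplete physics models
    Contribution(5.0, "aleatory"),    # e.g., manufacturing variability
]
ratio, verdict = assess_gate(50.0, contributions)
# Only the epistemic part of U is, in principle, reducible by more data.
reducible = sum(c.value for c in contributions if c.kind == "epistemic")
print(f"M/U = {ratio:.2f} -> {verdict}; {reducible} of U is in principle reducible")
```

Separating the two classes matters because, as the text notes, only the epistemic portion can be driven down by further experiments and model improvements.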
Obviously, it is important to understand M, U, and M/U in order to specify confidence levels for these quantities. Spreads of uncertainties in the output values of different performance gates caused by uncertain input parameters are estimated by performing sensitivity analyses16 across the plausible ranges of the input parameters, with a large number of runs of the simulation codes, each with different parameter settings intended to explore this plausible range. These computed code sensitivities are often assumed to be representative of sensitivities in the physical systems being modeled. To the extent possible, the outputs of the computer models are validated by comparing them to experimental data—primarily from archived underground nuclear test data and aboveground experiments—in an effort to estimate uncertainties arising from gaps and errors in the physics models. These comparisons can lead to enhancements of the models, improving their predictive capabilities. Uncertainty is further increased, however, by the fact that underground nuclear tests were only rarely conducted at performance thresholds, and data from aboveground experiments extend over only a very limited range of the performance space of a nuclear explosion.

The codes themselves are sources of uncertainty. At least three of these sources of uncertainty must be addressed. The models must be verified to eliminate bugs and to control numerical errors so that the adopted physical models are correctly solved. The models also must be validated by experiments so that the simulation is a faithful representation of the processes it is intended to emulate. And the conceptual design must be correct so that all the essential features of the overall performance have been accurately included in the simulation. (More information on this topic is included in Note 10 in the classified Annex.)

An important aspect of uncertainty quantification is to calculate the (output) probability distribution of a given metric and from that distribution to estimate the uncertainty of that metric. The meaning of the confidence ratio (M/U) depends significantly on this definition and on the way subsystem uncertainties are aggregated into an overall system uncertainty. Knowing this distribution allows determining the degree to which the threshold and operating-range uncertainties might overlap and, therefore, the likelihood that the M/U ratio is not defined.

Sensitivity analyses are one part of the process of transforming initial (input) uncertainties into final (output) uncertainties. It may happen that a particular bit of input knowledge is poorly known—for example, some property of some material. But sensitivity analysis may reveal that the actual value of that property has little effect on the final output when uncertainties from all sources are aggregated into a system uncertainty. It may also happen that there is no particular input information on the probability distribution function (PDF) of certain quantities, in which case analysts begin with a simple spread of plausible values for that quantity. Sensitivity analyses, however, will help provide information about the output uncertainty for that quantity.

15 See, for example, G.W. Parry and P.W. Winter, Characterization and Evaluation of Uncertainty in Probabilistic Risk Analysis, Nuclear Safety 22(1) (1981): 28-42; M.E. Paté-Cornell, Uncertainties in Risk Analysis: Six Levels of Treatment, Reliability Engineering and System Safety 54(2-3) (1996): 95-111; and J.C. Helton, Uncertainty and Sensitivity Analysis in the Presence of Stochastic and Subjective Uncertainty, Journal of Statistical Computation and Simulation 57(1-4) (1997): 3-76.

16 See, for example, A. Saltelli, K.P.-S. Chan, and E.M. Scott, eds., Sensitivity Analysis, New York, N.Y.: Wiley (2000); A. Saltelli, S. Tarantola, and K.P.-S. Chan, A Quantitative Model-Independent Method for Global Sensitivity Analysis of Model Output, Technometrics 41(1) (1999): 39-56; and H.C. Frey and S.R. Patil, Identification and Review of Sensitivity Analysis Methods, Risk Analysis 22(3) (2002): 553-578.
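The workflow described above—propagating a plausible spread of input parameters through many runs of a model, estimating an output distribution, forming a margin M and uncertainty U, and attributing output spread to individual inputs—can be sketched in miniature. The surrogate model, parameter ranges, threshold, and the 2-sigma definition of U below are all invented for illustration and stand in for the classified simulation codes and metrics the report discusses:

```python
import random
import statistics

random.seed(1)

def performance_model(yield_eff, density, temp):
    """Hypothetical stand-in for an expensive simulation code: maps
    uncertain inputs to a single scalar performance metric."""
    return 10.0 + 4.0 * yield_eff + 0.5 * density - 0.01 * temp

# Plausible ranges for the uncertain inputs (all values invented).
ranges = {
    "yield_eff": (0.9, 1.1),
    "density": (18.0, 20.0),
    "temp": (280.0, 320.0),
}

# A large number of runs with different parameter settings drawn
# from the plausible ranges, yielding an output distribution.
outputs = []
for _ in range(10_000):
    params = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    outputs.append(performance_model(**params))

threshold = 18.0                               # hypothetical performance threshold
margin = statistics.mean(outputs) - threshold  # M: distance from the threshold
uncertainty = 2 * statistics.stdev(outputs)    # U: here, a 2-sigma spread
print(f"M = {margin:.2f}, U = {uncertainty:.2f}, M/U = {margin / uncertainty:.2f}")

# Crude one-at-a-time sensitivity: output swing attributable to each input
# as it sweeps its plausible range with the others held at midrange.
midpoint = {k: (lo + hi) / 2 for k, (lo, hi) in ranges.items()}
for k, (lo, hi) in ranges.items():
    swing = (performance_model(**{**midpoint, k: hi})
             - performance_model(**{**midpoint, k: lo}))
    print(f"sensitivity to {k}: {abs(swing):.2f}")
```

Note how the sensitivity sweep answers the question posed in the text: an input with a wide plausible range (here, temp) can still contribute little to the aggregated output uncertainty.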

The QMU methodology, as just outlined, is applied to the weapons in the stockpile, and the M/U values of a range of critical performance gates are used as one input to judge the reliability of the warhead being assessed. For those components or characteristics that would have required a nuclear test for assessment, simulations are necessary. By using the simulation codes to predict changes in thresholds and uncertainties as components age, estimates can be made of how the performance of the warhead will change over time. Finally, the QMU framework is expected to play an important role in the certification of the Reliable Replacement Warhead (RRW) design. The objective is to quantify the margins and uncertainties of critical components and performance metrics in order to help determine whether the RRW designs can be certified to operate as intended.

Study Process

The study committee met first in May 2007 to discuss the charge and background with staff of the Government Accountability Office and NNSA. Briefings were also given by study committee members on the JASON QMU study. In addition, the committee formulated a set of issues to explore directly with the national security labs.

In August and September 2007, the study committee met with experts at the three national security labs—SNL, LANL, and LLNL. During those meetings, detailed presentations were provided on QMU methodology, examples of the application of QMU to weapons components and weapons systems, and, in the case of LLNL, the use of QMU to help with certification of the proposed RRW design. Presentations were made by management and by the staff members responsible for putting the QMU framework into practice on stockpile issues, to provide views from both perspectives. A feature of these meetings was a series of roundtable discussions with designers about a broad range of QMU-related issues as seen by the practitioners.
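The aging assessment described at the start of this section—using the codes to project how thresholds and uncertainties change as components age—can be illustrated with a deliberately simple sketch. The linear drift model, rates, and units below are invented for illustration and are not drawn from the report:

```python
# Toy illustration (all numbers invented): projecting a margin-to-
# uncertainty ratio forward in time, given simulated estimates of
# margin erosion and uncertainty growth per year of component age.

margin_now = 3.0                    # present-day margin M (arbitrary units)
uncertainty_now = 1.0               # present-day uncertainty U
margin_loss_per_year = 0.05         # simulated erosion of M with age
uncertainty_growth_per_year = 0.03  # growth of U as test data recede in time

ratios = {}
for years in (0, 10, 20, 30):
    m = margin_now - margin_loss_per_year * years
    u = uncertainty_now + uncertainty_growth_per_year * years
    ratios[years] = m / u
    print(f"t+{years:2d} yr: M = {m:.2f}, U = {u:.2f}, M/U = {m / u:.2f}")
```

In this toy case the confidence ratio falls below 1 within 30 years, which is the kind of trend that would flag a component for closer assessment.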
In October 2007 and again in February 2008, the study committee spent most of its time in closed session arriving at findings and recommendations and writing the report. At both meetings, it also heard from the DOE Undersecretary for Science, Raymond Orbach, who presented some concerns about QMU as currently applied by the two design labs. Officials from the two design labs also attended the February meeting and spoke about their reactions to Dr. Orbach's concerns. A final meeting of a small group of the committee took place in April to complete a draft of the report. Because of the classified nature of the report, all report writing and subsequent report review had to be done in secure facilities. Upon completion of the report review, the document underwent a classification review by NNSA classification officials.

In addition, the study committee had access to a wide range of published documents about QMU, including all available reviews of its implementation by the laboratories.17 Both unclassified and classified (at the secret restricted data level) materials were made available to the committee.

17 See, for example, D.H. Sharp and M.M. Wood-Schultz, QMU and Nuclear Weapons Certification: What's Under the Hood? Los Alamos Science 28 (2003): 47-53; and M. Pilch, T.G. Trucano, and J.C. Helton, Ideas Underlying the Quantification of Margins and Uncertainties (QMU): A White Paper, SAND2006-5001, Albuquerque, N.M.: Sandia National Laboratories (2006).