Suggested Citation:"1 Introduction and Charge." National Academies of Sciences, Engineering, and Medicine. 2015. Peer Review and Design Competition in the NNSA National Security Laboratories. Washington, DC: The National Academies Press. doi: 10.17226/21806.

1


Introduction and Charge

The National Nuclear Security Administration (NNSA) is responsible for providing and maintaining the capabilities necessary to sustain a safe, secure, and reliable nuclear weapons stockpile for the nation and its allies. Major responsibility for meeting the NNSA missions falls to the three NNSA laboratories: Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The NNSA national security laboratories contribute to that goal by maintaining the skills and capabilities necessary for stewardship of a reliable nuclear stockpile and also by maintaining a high level of technical credibility, which is a component of the nuclear deterrent.

Since 1992 it has been U.S. policy not to conduct explosion tests of nuclear weapons.1 The resulting technical challenges have been substantial. Whereas a nuclear test was in some sense the ultimate “peer review” of the performance of a particular nuclear explosive package (NEP) design, the cessation of nuclear testing necessitated a much greater reliance on both intralab and interlab expert peer review to identify potential problems with weapon designs and define the solution space.

______________

1 On September 24, 1996, the United States signed the Comprehensive Nuclear-Test-Ban Treaty, although it had been observing a nuclear explosion testing moratorium since 1992. The U.S. Senate has not ratified the treaty, but the United States has observed it and fulfilled its obligations under it.


COMMITTEE CHARGE

The statement of task to which this report responds reads as follows:

Assess the following:

  • The quality and effectiveness of peer review of designs, development plans, engineering and scientific activities, and priorities related to both nuclear and non-nuclear aspects of nuclear weapons;
  • Incentives for effective peer review;
  • The potential effectiveness, efficiency, and cost of alternative methods of conducting peer review and design competition related to both nuclear and non-nuclear aspects of nuclear weapons, as compared to current methods;
  • Known instances where current peer review practices and design competition succeeded or failed in finding problems or potential problems; and
  • How peer review practices related to both nuclear and non-nuclear aspects of nuclear weapons should be adjusted as the three NNSA laboratories transition to a broader national security mission.

The last task seeks to explore how the evolving mission of the NNSA laboratories—from an exclusive focus on nuclear weapons in the 1950s to a broader national security mission today—might impact peer review processes at the laboratories that relate to nuclear weapons.

The committee has understood that the “effectiveness” of peer review referred to in its charge is the effectiveness that can be achieved under the current conditions in which the nuclear weapons program now operates—that is, without nuclear explosion testing.

HISTORICAL CONTEXT

Founded in 1943, Los Alamos was the first U.S. nuclear weapons laboratory, followed by Sandia Laboratory in 1949. Sandia Laboratory originated as the Z-Division of Los Alamos, which designed and developed the non-nuclear components of nuclear weapons, and was later relocated to Kirtland Air Force Base in Albuquerque, New Mexico. Lawrence Livermore Laboratory was formed in 1952. In 1956, a branch of Sandia was also established in Livermore, California, to design and develop the non-nuclear components of nuclear weapons in support of Lawrence Livermore-designed nuclear explosive packages (NEPs). Following multiple organizational transitions, the three nuclear weapons laboratories became today’s three NNSA national security laboratories: LANL, LLNL, and SNL. Their role within the national nuclear security enterprise is illustrated on pages 4 and 5 of the recent “Augustine-Mies” report.2

Throughout the Cold War, the two distinctly separate system design teams3—the LLNL/SNL-California design collaboration team and the LANL/SNL-New Mexico design collaboration team—competed intensely to win the right to design each new weapon system.4

During the Cold War, the ultimate validation of NEP design procedures, design codes, and designer judgment was provided by a nuclear test or a series of tests. Design competitions during the Cold War—and the associated design, development, production, and testing of new nuclear weapons—enabled the nuclear weapons complex to respond to evolving strategic threats with a strong and reliable nuclear deterrent. The nuclear explosion testing moratorium implemented in 1992 resulted in a fundamental shift in the approach for maintaining the safety, security, and effectiveness of the U.S. nuclear weapons stockpile. Following the cessation of nuclear explosion testing and of new weapon development in 1992, comprehensive design competition for NEPs in which the designs were validated with manufactured prototypes essentially ceased. The result is that today the full design expertise at LANL, LLNL, and SNL is no longer directly exercised.

The Science-Based Stockpile Stewardship Program5 was introduced by the Department of Energy (DOE) in the mid-1990s to strengthen the foundations of knowledge on which the laboratories’ work relies, especially to enable the laboratories to assess the safety, security, and effectiveness of the stockpile without nuclear testing. This included, in particular, a major effort to extend the capabilities of computer modeling and simulation to offset partially the loss of the information stream that had been provided by nuclear tests.

The committee strongly supports the position that it is a core responsibility of the three NNSA national security laboratories to sustain the essential science and engineering (S&E) capabilities that enabled the United States to successfully develop and manufacture its current stockpile. These capabilities are essential for extending the life of the stockpile (as individual weapon components age and/or are replaced by substitute components), for evaluating intelligence about potential adversaries, and for addressing what-if questions that arise as threats evolve.

______________

2 Congressional Advisory Panel on the Governance of the Nuclear Security Enterprise, A New Foundation for the Nuclear Enterprise: Report of the Congressional Advisory Panel on the Governance of the Nuclear Security Enterprise, November 2014.

3 The separate system design teams in Livermore and Albuquerque relied on common Sandia component design groups in Albuquerque, e.g., one radar design group and one neutron generator design group.

4 Though the task of taking a design forward to detailed design was often assigned to the team that originated it, occasionally the design was assigned to the other laboratory team for “workload leveling.”

5 The name was later shortened to the Stockpile Stewardship Program, or SSP.

At present, some S&E experts with direct experience in nuclear weapons design, testing, and subsequent manufacturing remain active at all three national security laboratories. With the passage of time, however, there will soon be no laboratory personnel with first-hand experience in NEP design or the practical insights gained from testing their designs. Such hands-on experience, and the judgment that derives from it, is necessary for stockpile stewardship and for addressing the technical challenges of nuclear nonproliferation. Scientists and engineers are well aware that models and theory for highly complex physical processes are incomplete and must be validated by experiments. The tests of actual weapons before the test moratorium provided essential understanding of how different design choices affected actual performance, and those test results were sometimes strikingly different from the best predictions then available. By engaging in design, production, and nuclear explosion testing, weapons designers built up invaluable judgment about the interplay between predicted and actual performance.

In the absence of nuclear explosion testing, all three laboratories today apply technical questioning, collaborative and competitive reviews, various tests of subsystems, and modeling and simulation to regularly check work and ensure that a range of perspectives is brought to bear. These processes, along with data from improved computational and experimental facilities, help current and future S&E staff develop the insights and expertise that had previously been gained by experience with direct nuclear tests. The laboratories have also strengthened their technical evaluation and peer review processes to ensure the safety, security, and effectiveness of the nuclear stockpile and to help maintain the associated S&E design and innovation capabilities.

PEER REVIEW

Peer review has become an increasingly important practice at the three NNSA laboratories as a means of reducing the risk of misconceptions and errors slipping into their science and engineering activities. Rigorous peer review is essential to maintaining high standards of science, technology, and design and to sustaining confidence in the performance of an aging stockpile. More generally, peer review at the laboratories—as throughout the research enterprise worldwide—is recognized as a means of ensuring high-quality work products. It thus contributes to the vibrancy and technical credibility of the laboratories’ S&E workforce, which, as mentioned at the beginning of this chapter, is of fundamental importance to the nation’s nuclear deterrent capability.

All three laboratories recognize the need for multiple types of review, including the following:

  • External peer review, involving persons actively engaged in equivalent work outside the laboratory who review a laboratory’s work in a particular area;
  • Internal peer review, involving persons from the same laboratory actively engaged in equivalent work (but not the work being reviewed) who review the laboratory’s work in the given area;
  • Subject-matter reviews, involving internal and external subject-matter experts, who review a laboratory’s execution of, or capabilities in, particular areas of science or engineering; and
  • Technical reviews of programs or projects, involving internal and external experts who may not have as full a range of expertise as the people performing the original work but who provide technical scrutiny and checking of selected aspects of the work.

Examples of Peer Review

Because of the wide variation in how “review” and “peer review” are understood, the committee found it useful to develop a taxonomy that describes the primary types of review and structured competition activities that are valuable for strengthening S&E quality at the laboratories:

  • Traditional peer review. A review by an expert or a body of experts who are independent of the activity under review but who could have carried out that activity.
  • Independent weapon assessment. Teams from each laboratory independently assess a weapon type6 for the Annual Assessment Report (AAR) prepared by each laboratory for the use of the laboratory director in preparing the annual letter to the President assessing the readiness of that weapon. Similarly, independent teams review evidence before closing a significant finding investigation (SFI), which can arise from regular surveillance of stockpiled weapons,7 and in reviewing plans for Life-Extension Programs (LEPs), in which all the weapons of a given type are refurbished or have aging parts replaced.

______________

6 To “assess” a weapon type in the stockpile means to analyze all existing relevant data, including data from surveillance, experiments, and previous nuclear tests, to gauge whether there are any issues with its safety, security, and reliability. The assessment also involves computer simulations using the latest models of the weapon’s performance consistent with the existing data.

  • Red teaming. A special category of peer review, in which the reviewers actively seek to find serious flaws in the work under review. Beginning in 2003, red team reviews were mandated by law as part of the AAR. For an AAR, a red team composed of representatives from all three weapons laboratories reviews the assessments submitted to the NEP design laboratory directors, and those directors take into account the red team reports in crafting their annual assessment letters.
  • Outside expert review. A review of an activity by subject-matter experts in some or all of the technical components of the reviewed activity. Subject-matter experts typically come from outside the laboratories but have the appropriate clearances. An example would be the many reviews conducted by the JASON group. In the context of this report, a review by JASON would differ from the traditional peer review described above because the outside experts might have backgrounds in disciplines other than nuclear weapons. As a result, they are not peers in the sense of themselves being able to carry out the work being reviewed.

Metrics

The most common metric used by the nuclear weapons community is quantification of margins and uncertainties, or QMU. “QMU is a decision-support framework that provides a means for quantifying the laboratories’ confidence that the critical stages of a nuclear weapon will operate as intended.”8 QMU systematically applies the output of the Stockpile Stewardship Program (aboveground non-nuclear and subcritical experiments, data from past underground nuclear tests, sophisticated modeling, and the expert judgment of weapons scientists) to the assessment of the stockpile. Simply stated, Q = M/U, where Q is a measure of assured performance, M is the performance margin, and U is the uncertainty in M. The framework helps identify the areas of risk with the greatest impact on performance and thus set priorities for investment and testing. Because the application of QMU relies heavily on expert judgment, it depends strongly on peer review at all three laboratories.

______________

7 Patrick Garcia, LANL, “SFIs—Connections to Design & Peer Review,” Presentation to the committee on September 24, 2014. Problems identified during weapon surveillance programs trigger SFIs, which involve an evaluation of potential impact and required response, and those study results undergo a management review. If management deems an SFI to be more than a one-off issue with minimal impact and an obvious response, an internal peer review is used if the impact or response is not completely clear and some independent validation is warranted, whereas an external peer review is used if the impact is viewed as potentially significant and design agency consensus is warranted.

8 National Research Council, 2009, Evaluation of Quantification of Margins and Uncertainties Methodology for Assessing and Certifying the Reliability of the Nuclear Stockpile, The National Academies Press, Washington, D.C., p. 5.
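As a simple numerical illustration of the QMU ratio (the numbers here are hypothetical and not drawn from the report): suppose analysis of a critical stage yields a performance margin of $M = 3$ in some normalized unit, known to within an uncertainty of $U = 1$. Then

```latex
Q \;=\; \frac{M}{U} \;=\; \frac{3}{1} \;=\; 3,
```

so the margin exceeds its uncertainty threefold. Values of $Q$ comfortably greater than 1 support confidence that the stage will perform as intended, whereas $Q$ near or below 1 flags an area where further investment, experimentation, or review is most valuable.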

DESIGN COMPETITION

Design competition is not a type of peer review. Rather, it refers to a process in which independent teams compete to design warheads that offer the best response to a specified set of goals and requirements. Whereas the primary goal of peer review is to check the quality of cutting-edge work, design competition fosters parallel efforts that vie to push frontiers. In this report the committee distinguishes two types of design competition:

  • Design studies. These produce paper analyses and modeling and simulation results from competing teams but are not carried through to the production of a prototype. Perhaps the most complete example is the design study done for the Reliable Replacement Warhead (RRW),9 as discussed in Chapter 3.
  • Full design competition. Design competition as practiced during the Cold War exercised the full range of skills in the weapons complex: design, engineering, prototyping, testing, and production. The designers involved received feedback about the feasibility of their designs from the engineering process and from peers in the production facility, and later from actual nuclear tests. As used in this report, full design competition means a competition whose winning design would be carried through to a prototype device. The device would not be manufactured for the stockpile and would only be tested in a manner consistent with U.S. treaty obligations—that is, without nuclear yield.

Ultimately, the goal of reviews and competitions in the present day is to mitigate, to the extent possible in the absence of nuclear testing, the risk that the nation’s nuclear weapons will fail to perform as expected if needed. For example, reviews are intended to ensure that all significant technical risks in an LEP for a weapon have been identified, that a sound plan to mitigate those risks has been developed, and that the plan is being well implemented. Limited-scope design competitions are undertaken to explore a broader set of refurbishment options while also providing an opportunity to exercise and transfer necessary skill sets to the next generation of nuclear weapons designers, in closer analogy to the modes of activity practiced successfully during the Cold War.

______________

9 The RRW design study was intended to generate competitive designs for a highly reliable warhead with enhanced surety. In addition, the competition aimed for a design that could be manufactured relatively easily with currently available materials, eliminating some potentially hazardous materials that had been used previously. Finally, it was necessary that the weapon as designed could be certified without nuclear explosion testing.

In this report, the committee evaluates the efficacy of peer review and design competition in today’s national nuclear security program and provides recommendations for how to ensure robust and reliable processes for peer review and design competition in the future.

CONDUCT OF THE STUDY

This study was mandated in Public Law 112-239, the National Defense Authorization Act for Fiscal Year 2013, Sec. 3144, and is sponsored by the National Nuclear Security Administration. The details of the mandating legislation are given in Appendix C. In response to the congressional mandate, the National Research Council10 formed the Committee on Peer Review and Design Competition Related to Nuclear Weapons. The committee was formed to include people with experience in research and management in academe, the national laboratories, and industry. Its members have various direct and indirect interactions with one or more of the NNSA laboratories, giving the committee insight into the special circumstances associated with nuclear weapons science and engineering. All members hold security clearances at a level sufficient to access the appropriate technical details. Biographical sketches of the committee members are included in Appendix A.

The study committee began its work with an open meeting on June 10, 2014, to receive background on the study and engage in discussions about the most important elements of peer review and design competition at the NNSA laboratories. The open session was preceded and followed by closed sessions in which the committee established a framework for how it would conduct its study. The study committee met in closed session the next day, June 11, 2014, to further develop its plans.

To collect information about peer review and design competition and to assess the effectiveness of current practices, the study committee subsequently met at each of the three laboratories to engage with a range of senior managers and staff. These meetings included presentations and discussions on topics contained in the charge. In each of these meetings, senior managers from all three laboratories were in attendance and participated in the discussions. The committee provided the laboratories in advance with specific questions it wished to investigate and relied on each laboratory to arrange for an appropriate set of speakers and topics. Meetings at the laboratories were held in a classified setting, as needed, to allow for discussion of specific details. In the course of its information gathering, the committee also gained insight into current peer review practices in the Russian and U.K. nuclear enterprises—the former through a discussion with former LANL director Siegfried Hecker and the latter through a meeting with three experts associated with the U.K.’s Atomic Weapons Establishment. Details of all information-gathering sessions are given in Appendix B.

______________

10 Effective July 1, 2015, the institution is called the National Academies of Sciences, Engineering, and Medicine. References in this report to the National Research Council (NRC) are used in a historical context to refer to activities before July 1, 2015.

OUTLINE OF THE REPORT

The chapters that follow are organized chronologically: looking first at peer review and design competition during the period when nuclear explosion tests were still being conducted and new weapons developed, then discussing today’s practices, and finally making recommendations to improve peer review and design competition at the laboratories going forward.

As discussed in Chapter 2, during the Cold War the United States designed, built, tested, and deployed numerous nuclear warheads of various designs. As noted above, the results of nuclear tests provided the ultimate validation of the NEP design procedures, weapon design codes, and designer judgment. Extensive peer reviews of the types used today were neither necessary nor practiced because the option of nuclear testing was available. That option notwithstanding, formal design competitions between teams from different laboratories were routinely held and led to important innovations in weapons designs. This chapter identifies key attributes of the laboratory practices during the Cold War that contributed to U.S. success in building and sustaining an effective nuclear arsenal and deterrent.

Chapter 3 presents the programs and practices for peer review and design competition currently used in the three national security laboratories.

As the NEP laboratories’ experience base with weapons design, production, and testing fades, and as the international nuclear weapons landscape evolves, the NNSA laboratories face formidable challenges, discussed in Chapter 4. How they respond to these challenges is of vital importance for the nation. Effective peer review and design competition are part of the solution, as already recognized by NNSA and the laboratories, but a broader approach is needed. Chapter 4 provides conclusions and recommendations for strengthening current practices of peer review and design competition related to nuclear weapons and for maintaining a credible nuclear weapons design capability and an effective deterrent.
