Appendix A
A Probabilistic Risk Assessment Perspective of QMU
B. John Garrick, Committee Member
PURPOSE
It is the purpose of this appendix to consider if the several decades of experience with the application of probabilistic risk assessment (PRA) (Garrick, 2008), especially with respect to nuclear power plant applications, involve methods that might complement or benefit the QMU methodology. The quantification of margins and uncertainties (QMU) methodology refers to the methods and data used by the national security laboratories to predict nuclear weapons performance, including reliability, safety, and security. Both communities, PRA and QMU, have similar challenges. They are being asked to quantify performance measures of complex systems with very limited experience and testing information on the primary events of interest. The quantification of the uncertainties involved to establish margins of performance is the major challenge in both cases. Of course the systems of the two communities are very different and require system-specific modeling methods. To date the emphasis in the QMU effort has been on a reliability prediction process, not yet the important performance measures of safety and security. PRA focuses on what can go wrong with a system and thus could be an ideal method for assessing the safety and security of nuclear weapon systems.
NOTE: This Appendix was authored by an individual committee member. It is not part of the consensus report. The appendix provides a description of PRA and probability of frequency concepts that are discussed in the report.
EVALUATION OF QMU METHODOLOGY
The approach taken in this review is to highlight the PRA method of quantification, comment on applying PRA to weapon performance assessment, discuss possible links and differences between QMU as currently used and PRA, and to identify possible PRA enhancements of QMU. The QMU approach itself is covered elsewhere in this report.
THE PRA APPROACH TO QUANTIFICATION
The PRA approach highlighted is based on the framework of the triplet definition of risk (Kaplan and Garrick, 1981):

R = {⟨Si, Li, Xi⟩}c ,

where R denotes the risk attendant on the system or activity of interest. On the right, Si denotes the ith risk scenario (a description of something that can go wrong), Li denotes the likelihood of that scenario happening, and Xi denotes the consequences of that scenario if it does happen. The angle brackets ⟨ ⟩ enclose the risk triplets, the curly brackets { } are mathspeak for "the set of," and the subscript c denotes "complete," meaning that all of the scenarios, or at least all of the important ones, must be included in the set. The body of methods used to identify the scenarios (Si) constitutes the "Theory of Scenario Structuring." Quantifying the Li and the Xi is based on the available evidence using Bayes' theorem, illustrated later.
In accordance with this set of triplets definition of risk, the actual quantification of risk consists of answering the following three questions:

1. What can go wrong? (Si)
2. How likely is that to happen? (Li)
3. What are the consequences if it does happen? (Xi)

The first question is answered by describing a structured, organized, and complete set of possible risk scenarios. As above, we denote these scenarios by Si. The second question requires us to calculate the "likelihoods," Li, of each of the scenarios, Si. Each such likelihood, Li, is expressed as a "frequency," a "probability," or a "probability of frequency" curve (more about this later).
The third question is answered by describing the "damage states" or "end states" (denoted Xi) resulting from these risk scenarios. These damage states are also, in general, uncertain. Therefore these uncertainties must also be quantified, as part of the quantitative risk assessment process. Indeed, it is part of the quantitative risk assessment philosophy to quantify all the uncertainties in all the parameters in the risk assessment.
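The triplet framing above lends itself to a simple data structure in which each scenario carries its likelihood and consequence. The sketch below is purely illustrative; the scenario names, numbers, and the expected-damage summary are assumptions for the example, not drawn from this report.

```python
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    scenario: str      # Si: a description of what can go wrong
    likelihood: float  # Li: here taken as a frequency (events per year)
    consequence: float # Xi: a damage measure for the scenario

# The risk R is the (ideally complete) set of triplets {<Si, Li, Xi>}c.
risk = [
    RiskTriplet("loss of cooling", 1e-3, 50.0),   # hypothetical values
    RiskTriplet("external fire", 2e-4, 200.0),
]

# One crude point-value summary: expected damage per year over the set.
expected_damage = sum(t.likelihood * t.consequence for t in risk)
```

Note that a point value like this discards the uncertainty information that the full probability-of-frequency treatment described later is designed to preserve.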
Some authors have added other questions to the above definition, such as "What are the uncertainties?" and "What corrective actions should be taken?" The uncertainty question is embedded in the interpretation of "likelihood," as noted later. The question about corrective actions is interpreted as a matter of decision analysis and risk management, not risk assessment per se. Therefore it is not considered a fundamental property of this definition of risk. Risk assessment does become involved to determine the impact of the corrective actions on the "new risk" of the affected systems.
Using the triplet definition of risk as the overarching framework, the following steps generally represent the PRA process:

Step 1. Define the system being analyzed in terms of what constitutes normal or successful operation to serve as a baseline for departures from normal operation.
Step 2. Identify and characterize what constitutes an undesirable outcome of the system. Examples are failure to perform as designed, damage to the system, and a catastrophic accident.
Step 3. Develop "What can go wrong?" scenarios to establish levels of damage and consequences while identifying points of vulnerability.
Step 4. Quantify the likelihoods of the different scenarios and their attendant levels of damage based on the totality of relevant evidence available.
Step 5. Assemble the scenarios according to damage levels and cast the results into the appropriate risk curves and risk priorities.
Step 6. Interpret the results to guide the risk-management process.
These six steps tend to collapse into the three general analytical processes illustrated in Figure A-1—a system analysis, a threat assessment, and a vulnerability assessment. That is, a PRA basically involves three main processes: (1) a system analysis that defines the system in terms of how it operates and what constitutes success, (2) an initiating event and initial condition assessment that quantifies the threats to the system, and (3) a vulnerability assessment that quantifies the resulting risk scenarios and different consequences or damage states of the system, given the possible threats to the system. A valuable attribute of the triplet approach is that it can track multiple end states in a common framework.
In Figure A-1 the system analysis is denoted as the "system states for successful operation." The second part of the process requires a determination of the threats to any part of the total system—that is, events that could trigger or initiate a disturbance to an otherwise successfully operating system. The third part of the process structures the course and consequence of events (scenarios) that could emanate from specific initiating events or initial conditions.

FIGURE A-1 The concept of an integrated threat and vulnerability risk assessment.
A number of thought processes and analytical concepts are employed to carry out the three processes conceptualized in Figure A-1. They involve an interpretation of "likelihood," a definition of "probability," the algorithms of deductive and inductive reasoning, the processing of the evidence, the quantification and propagation of uncertainties, and the assembly of the results into an interpretable form. Some of the more important concepts are highlighted.

Three explicit and quantitative interpretations of likelihood are "frequency," "probability," and "probability of frequency."
• Frequency. If the scenario is recurrent—that is, if it happens repeatedly—then the question How frequently? can be asked and the answer can be expressed in occurrences per day, per year, per trial, per demand, etc.
• Probability (credibility). If the scenario is not recurrent—if it happens either once or not at all—then its likelihood can be quantified in terms of probability. "Probability" is taken to be synonymous with "credibility." Credibility is a scale invented to quantitatively measure the degree of believability of a hypothesis, in the same way that scales were invented to measure distance, weight, temperature, etc. Thus, in this usage, probability is the degree of credibility of the hypothesis in question based on the totality of relevant evidence available.
• Probability of frequency. If the scenario is recurrent (like a hurricane, for example) and therefore has a frequency whose numerical value is not, however, fully known, and if there is some evidence relevant to that numerical value, then Bayes' theorem (as the fundamental principle governing the process of making inference from evidence) can be used to develop a probability curve over a frequency axis. This "probability of frequency" interpretation of likelihood is often the most informative, and thus is the preferred way of capturing/quantifying the state of knowledge about the likelihood of a specific scenario.
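The probability-of-frequency idea can be illustrated with a toy Bayesian calculation over a discretized frequency axis. Everything below is an assumption made for the sketch—the flat prior, the Poisson evidence model, the grid, and the hypothetical observation of 2 events in 10 years—and is not the treatment used by the laboratories.

```python
import math

# Toy "probability of frequency" curve: Bayes' theorem applied over a grid
# of candidate frequencies phi, assuming event counts are Poisson(phi * T).
phis = [0.01 * k for k in range(1, 301)]      # candidate frequencies (per year)
prior = [1.0 / len(phis)] * len(phis)         # flat prior (an assumption)

k_events, T_years = 2, 10.0                   # hypothetical evidence

def poisson_like(phi):
    # Likelihood of observing k_events in T_years if the true frequency is phi.
    lam = phi * T_years
    return math.exp(-lam) * lam**k_events / math.factorial(k_events)

unnorm = [p * poisson_like(phi) for p, phi in zip(prior, phis)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]       # probability over the frequency axis

# With a flat prior, the posterior mode sits at the observed rate k/T = 0.2/year.
mode_phi = phis[posterior.index(max(posterior))]
```

The `posterior` list is a discretized version of the probability-of-frequency curve of Figure A-4: its area (sum) is 1, and the probability that the frequency lies in any interval is the summed posterior mass over that interval.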
Having proposed a definition of probability, it is of interest to note that it emerges also from what some call the "subjectivist" view of probability, best expressed by the physicist E.T. Jaynes (2003):

A probability assignment is 'subjective' in the sense that it describes a state of knowledge rather than any property of the 'real' world, but is 'objective' in the sense that it is independent of the personality of the user. Two rational beings faced with the same total background of knowledge must assign the same probabilities.

The central idea of Jaynes is to bypass opinions and seek out the underlying evidence for the opinions, which thereby become more objective and less subjective.
Recalling the interpretation of probability as credibility, in this situation, probability is a positive number ranging from zero to one and obeys Bayes' theorem. Thus, if we write p(H|E) to denote the credibility of hypothesis H, given evidence E, then

p(H|E) = p(H) p(E|H) / p(E),

which is Bayes' theorem. It tells us how the credibility of hypothesis H changes when new evidence, E, occurs. Bayes' theorem is a simple two-step derivation from the product rules of probability and plausible reasoning. This theorem has a long and bitterly controversial history but in recent years has become widely understood and accepted.
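As a minimal numerical sketch of this update rule (the two mutually exclusive hypotheses and all numbers are hypothetical):

```python
# Sketch of the update rule p(H|E) = p(H) * p(E|H) / p(E) applied to a set
# of mutually exclusive, exhaustive hypotheses.
def bayes_update(priors, likelihoods):
    """Return posterior credibilities given p(H) and p(E|H) per hypothesis."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    p_e = sum(unnorm)                  # p(E), by the total probability rule
    return [u / p_e for u in unnorm]

# Two hypotheses, equally credible a priori; the evidence favors the first 4:1.
posterior = bayes_update([0.5, 0.5], [0.8, 0.2])   # -> [0.8, 0.2]
```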
A central feature of probabilistic risk assessment is making uncertainty an inherent part of the analysis. Uncertainty exists, to varying degrees, in all the parameters that are used to describe or measure risk. Of course there are sources of uncertainty other than parameter uncertainty, such as uncertainty about whether a particular phenomenon is being correctly modeled. A common approach to assessing modeling uncertainty is to apply different models to the same calculation in an attempt to expose modeling variability. Adjustments are made to the model to increase confidence in the results. The lack of confidence resulting from such an analysis can be a basis for assigning a modeling uncertainty component to parameter uncertainty in order to better characterize the total uncertainty of the analysis.

In PRA, parameter uncertainties are quantified by plotting probability curves against the possible values of these parameters. These probability curves are obtained using Bayes' theorem.
Before the risk scenarios themselves can be quantified, the initiating events (IE) or the initial conditions (IC) of the risk scenarios must be identified and quantified.1 The relationship between the initial states (IEs and ICs), the system being impacted, and the vulnerability of the system being impacted is illustrated in Figure A-1.
A deductive logic model—that is, a fault tree or master logic diagram—is developed for each initiating event of a screened set. The structure of the logic model is to deduce from the "top events"—that is, the selected set of hypothetical IEs or ICs—the intervening events down to the point of "basic events." A "basic event" can be thought of as the initial input point for a deductive logic model of the failure paths of a system. For the case of accident risk, a basic event might be fundamental information on the behavior of structures, components, and equipment. For the case of a natural system such as a nuclear waste disposal site, a basic event could be a change in the ICs having to do with climate brought about by greenhouse gases. For the case of terrorism risk, the basic event relates to the intentions of the terrorist—that is, the decision to launch an attack. For the case of a nuclear weapon system, either environments or conditions could impact weapon performance. The intervening events of the master logic diagram for terrorism risk are representations of the planning, training, logistics, resources, activities, and capabilities of the terrorists. The intervening events of the master logic diagram for accident risk are the processes and activities that lead to the failure of structures, components, and equipment. The intervening events of the ICs for a nuclear waste disposal site could be factors that influence climate, and the intervening events for a nuclear weapon system could be environments that impact weapon yield.

1 Both IE and IC terminology are used, since for some systems, such as the risk of a nuclear waste repository, the issue is not so much an initiating event as it is a set of initial conditions such as annual rainfall.
Once the initiating events are quantified, the resulting scenarios could be structured to the undesired consequences or end states. The actual quantification of the risk scenarios is done with the aid of event trees similar to the one in Figure A-2. An event tree is a diagram that traces the response of a system to an initiating event, such as a terrorist attack, to different possible end points or outcomes (consequences). A single path through the event tree is called a "scenario" or an "event sequence." The terms are sometimes used interchangeably. The event tree displays the systems, equipment, human actions, procedures, processes, and so on that can affect the consequences of an initiating event depending on the success or failure of intervening actions. In Figure A-2 boxes with the letters A, B, C, and D represent these intervening actions. The general convention is that if a defensive action is successful, the scenario is mitigated. If the action is unsuccessful, then the effect of the initiating event continues as a downward line from the branch point as shown in Figure A-2. For accident risk, an example of a mitigating system might be a source of emergency power. For terrorism risk, an action that could mitigate the hijacking of a commercial airliner to use it as a weapon to crash into a football stadium would be a remote takeover of the airplane by ground control. For a natural system, a mitigating feature might be an engineered barrier, and for a nuclear weapon a mitigating system might be the shielding of external radiation.
FIGURE A-2 Quantification of a scenario using an event tree.

Each branch point in the event tree has a probability associated with it. It should be noted that the diagram shown in Figure A-2 shows only two branches (e.g., success or failure) from each top event. However, a top event can have multiple branches to account for different degrees of degradation of a system. These branch points have associated "split fractions" that must be quantified based on the available evidence. The process involves writing an equation for each scenario (event sequence) of interest. For example, the path through the event tree that has been highlighted in Figure A-2 could be a scenario that we wish to quantify. The first step is to write a Boolean equation for the highlighted path. If we denote the scenario by the letter S, we have the following equation,

S = I A B C D,

where the bars over the letters indicate that the event in the box did not perform its intended function. The next step is to convert the Boolean equation into a numerical calculation of the frequency of the scenario. Letting ϕ stand for frequency and adopting the split-fraction notation, f(…), of Figure A-2 gives the following equation for calculating the frequency of the highlighted scenario:

ϕ(S) = ϕ(I) f(A|I) f(B|IA) f(C|IAB) f(D|IABC).

The remaining step is to communicate the uncertainties in the frequencies with the appropriate probability distributions. This is done using Bayes' theorem to process the elemental parameters (Figure A-3). The "probability of frequency" of the individual scenarios is obtained by convoluting the elemental parameters in accordance with the above equation.
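A toy calculation of this product, with assumed values for ϕ(I) and the split fractions (the numbers are illustrative only):

```python
# Illustrative numbers only: the frequency of a scenario is the
# initiating-event frequency times the conditional split fractions along
# the event-tree path, phi(S) = phi(I) f(A|I) f(B|IA) f(C|IAB) f(D|IABC).
phi_I = 1e-3                       # assumed initiating-event frequency (per year)
split_fractions = {                # assumed conditional branch probabilities
    "A|I": 0.1,
    "B|IA": 0.5,
    "C|IAB": 0.2,
    "D|IABC": 0.4,
}

phi_S = phi_I
for f in split_fractions.values():
    phi_S *= f                     # multiply down the event-tree path
# phi_S is approximately 4e-6 per year for these assumed values
```

In a full probability-of-frequency treatment, each factor would carry its own probability distribution rather than a point value, and the distributions would be propagated through this product.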
FIGURE A-3 Bayes' theorem used to process parameters.

Once the scenarios have been quantified, the results take the form shown in Figure A-4. Each scenario has a probability-of-frequency curve in the form of a probability density function quantifying its likelihood of occurrence. The total area under the curve represents a probability of 1. The fractional area between two values of ϕ represents the confidence—that is, the probability—that ϕ has the values over that interval (see below).
Figure A-4 shows the curve for a single scenario or a set of scenarios leading to a single consequence. Showing different levels of damage, such as the risk of varying numbers of injuries or fatalities, requires a different type of presentation. The most common form is the classical risk curve, also known as the frequency-of-exceedance curve or the even more esoteric label, the complementary cumulative distribution function. This curve is constructed by ordering the scenarios by increasing levels of damage and cumulating the probabilities from the bottom up in the ordered set against the different damage levels. Plotting the results on log-log paper generates curves such as those shown in Figure A-5.
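The construction just described can be sketched in a few lines; the three scenarios and their mean frequencies below are hypothetical:

```python
# Illustrative data: a frequency-of-exceedance curve built by ordering
# scenarios by damage and cumulating frequencies from the most damaging
# scenario downward, so each point gives the frequency of that damage
# level *or greater*.
scenarios = [  # (damage level, mean frequency per year) - hypothetical
    (1.0, 1e-2),
    (10.0, 1e-3),
    (100.0, 1e-4),
]
scenarios.sort(key=lambda s: s[0])        # increasing damage

exceedance = []
running = 0.0
for damage, freq in reversed(scenarios):  # cumulate from the top damage down
    running += freq
    exceedance.append((damage, running))
exceedance.reverse()                      # back to increasing damage order
# exceedance[0] is (1.0, ~0.0111): frequency of damage >= 1.0 per year
```

Plotted on log-log axes, the `exceedance` pairs trace one curve of the family in Figure A-5; repeating the construction at different percentiles of each scenario's probability-of-frequency distribution yields the percentile curves.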
Suppose P3 in Figure A-5 has the value of 0.95—that is, a probability of 0.95. We can be 95 percent confident that the frequency of an X1 consequence or greater is ϕ1. The family of curves (usually called percentiles) can include as many curves as necessary. The ones most often selected in practice are the 5th, 50th, and 95th percentiles. A popular fourth choice is the mean.
A common method of communicating uncertainty in the risk of an event is to present the risk in terms of a confidence interval. To illustrate confidence intervals, some notation is added to the above figures, which now become Figures A-6 and A-7. If the area between ϕ1 and ϕ2 of Figure A-6 takes up 90 percent of the area under the curve, we are 90 percent confident (the 90 percent confidence interval) that the frequency is between ϕ1 and ϕ2. Figure A-7 can also be read in terms of a confidence interval. Let P1 be 0.05, P3 be 0.95, ϕ1 be one in 1,000, ϕ2 one in 10,000, and X1 be 10,000 fatalities. Because P3 minus P1 is 0.90, we are 90 percent confident that the frequency of an event having 10,000 fatalities or more varies from one every 1,000 years to one every 10,000 years.
FIGURE A-4 Probability of frequency curve.

FIGURE A-5 Risk curves for varying consequences.

Although risk measures such as those illustrated in Figures A-6 and A-7 answer two questions—What is the risk? How much confidence is there in the results?—they are not necessarily the most important output of the risk assessment. Often the most important output is the exposure of the detailed causes of the risks, a critical result needed for effective risk management. The contributors to this risk are buried in the results assembled to generate the curves in Figures A-6 and A-7. Most risk assessment software packages contain algorithms for ranking the importance of contributors to the risk.
FIGURE A-6 Probability density (for a specific consequence).

FIGURE A-7 Cumulative probability (for varying consequences).

APPLYING PRA TO WEAPON PERFORMANCE ASSESSMENT
Since the probabilistic risk assessment was developed to apply to any type of risk assessment, it is believed that it could be a framework for assessing the risk in any type of system, natural or engineered, weapon or nonweapon. However, it could not be applied to nuclear weapons without using the whole host of computer codes and analytical processes that have been developed to support the current efforts for the quantification of margins and uncertainties methodology developed by the national security laboratories. In fact, because of the advanced state of development of the laboratory codes for calculating confidence ratios of performance margins and uncertainties, the most prudent use of the PRA thought process is probably for safety and security issues and elements such as the impact on weapon performance of stockpile storage or other events associated with the stockpile-to-target sequence. In fact, these elements could be the major contributors to the risk of poor weapon performance.

In general, PRA methods have been successfully applied to nonwarhead operational elements of the nuclear weapon functional life cycle. This still leaves open the question of how PRA might be used to quantify the risk of less-than-acceptable performance of a nuclear warhead. Providing a full answer to this question is beyond the scope of this appendix, but it is possible to describe the concept.
Suppose as a part of an assessment of the risk of a warhead not performing to its specification, consideration is given to the risk of the weapon's primary explosion performance being compromised by external radiation. (More information on this topic is included in Note 11 in the classified Annex.) In particular, an initiating event is defined as the frequency per mission hour of an external radiation pulse of sufficient energy to impact weapon performance. The frequency of such an event would have to be based on multiple sources of evidence, including the state of the technology for defensive systems (which could, for example, come from intelligence reports), the mitigation capability of the weapon system itself, evasive procedures, mission conditions, etc. To be sure there would be uncertainties, which means that the frequency would have to be represented by a probability distribution in "probability of frequency" format. Figure A-8 is a conceptual interpretation of the events that would have to successfully occur for a single-stage boosted fission explosive to perform its intended function.

FIGURE A-8 Single-stage boosted fission explosive event tree (stages: radiation input, high explosive implosion, unboosted fission, boost gas burn, boosted fission; end-state yields Yi, i = 1, …, 16).
The event tree identifies the possible pathways triggered by the initiating event. The end states are a range of primary yields of the different event sequences (scenarios). Of course, the physics of the process will lead to many of the branch points being bypassed and the number of outcome states being reduced. The events may be briefly described as follows:

• Radiation input. An external radiation source impinged on the weapon system. The event is represented as a frequency per mission hour of an external radiation pulse of sufficient energy to impact weapon performance.
• High explosive implosion. If the radiation fluence impinging on a boosted fission explosive is varied, the performance of the device will vary.
• Unboosted fission. The degraded performance of the high explosive implosion will reduce the criticality of the explosive fissionable material and reduce the amount of fission energy generated before the boost stage.
• Boost gas burn. The boost is dependent on a sufficient amount of fission energy to heat and compress the boost gas to thermonuclear fusion conditions. The number of boost neutrons produced is affected.
• Boosted fission. The final yield is determined by the number of boosted fission events. Boosted fission scales with the number of boost neutrons available.
• Yield. The end states are probability of frequency (POF) distributions of different yields, including the design yield.
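One way such an event tree could be exercised is sketched below with a simple Monte Carlo propagation of branch-point uncertainty. The split fractions, their uncertainty ranges, the two-outcome structure, and the initiating-event frequency are all hypothetical simplifications made for illustration; this is not the laboratories' methodology or data.

```python
import random

# Hypothetical sketch: propagate uncertain success fractions through a chain
# of stages to obtain a "probability of frequency" sample for the path in
# which every stage performs as intended.
random.seed(0)
phi_I = 1e-4        # assumed initiating-event frequency (per mission hour)
# (low, high) bounds on each stage's uncertain success fraction - illustrative
stages = [(0.90, 0.99), (0.85, 0.98), (0.80, 0.97), (0.90, 0.99)]

samples = []
for _ in range(10_000):
    f = phi_I
    for low, high in stages:
        f *= random.uniform(low, high)   # sample each uncertain split fraction
    samples.append(f)

samples.sort()
median = samples[len(samples) // 2]      # 50th percentile of the frequency
p05, p95 = samples[500], samples[9500]   # rough 5th and 95th percentiles
```

The sorted `samples` list is a discrete stand-in for the POF distribution of one end state; repeating the calculation for each path, and for degraded-yield branches, would supply the curves assembled in the forms described next.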
Having a POF distribution for each of these scenarios sets the stage for developing risk curves for a particular initiating event. The outputs of the event tree are calculated as described in the section "The PRA Approach to Quantification." The results from the event tree can be assembled into several different forms. One form would be to probabilistically add all the less-than-design-yield POF distributions to achieve the probability density curve for the risk of the primary not reaching its intended yield. This result would be in the form shown in Figure A-4, which characterizes the risk, including uncertainty, of this stage not performing to its specification.

A second form, if there are multiple degraded end states to be considered, would be to arrange the end state POF curves of the degraded yields in order of increasing degradation and cumulate them from the bottom to the top in the form of a complementary cumulative distribution function. The result is given in Figure A-5, which quantifies the risk of different degraded yields of single-stage boosted fission explosion with probability, P, as the parameter of the model.

A third form of presenting the results would be the POF curve representing the success scenario.
These three results represent a comprehensive set of metrics for measuring the performance with uncertainty of the primary fission explosion under the specific threat of a single initiator. To complete the risk assessment of the single-stage boosted fission explosive requires the consideration of all the important risk-contributing initiators. Usually that is done by creatively defining a relatively small number of initiating event "categories" that represent several individual initiating events.
POSSIBLE LINKS BETWEEN QMU AND PRA
Among the common challenges to both the QMU and PRA methodologies is a convincing treatment of parameter and modeling uncertainty. Linking supporting evidence to the PRA and QMU calculations is critical to providing transparency and confidence in uncertainty analyses. Experience indicates that the key to uncertainty analysis is not so much data limitations as it is to have a system in place to capture and process the data and information that are often available but perhaps not easy to retrieve or in the proper form. Experience with nuclear power plant PRAs has shown this many times. For example, the systematic processing of maintenance and operations data has provided a robust database for assessing nuclear plant risk, which was thought to not be possible when PRAs were first implemented. Of course, this is not a database of many of the events of interest, such as core melts or large releases of radioactive materials. Fortunately, not many such events have occurred. But it is an important database for precursor events to these more serious events. If the precursor events where there are data are logically connected to the events of interest by detailed logic models, then the opportunity exists to appropriately propagate the uncertainties to the desired end states.
The nuclear weapons field would seem to be in a situation similar to that of the nuclear power field. While there is no actual testing being performed on full-scale nuclear weapons, data are being developed through precursor tests and weapons management activities. Nuclear explosive safety teams have been analyzing and observing assembly, disassembly, and repair activities for decades. An examination of this robust experience in nuclear weapons operations would seem to be similar to the experience in nuclear power maintenance and operations, especially with respect to safety and security issues. To be sure, many nuclear explosive safety activities go beyond the nuclear explosive package of nuclear warheads, and some data needs of the nuclear explosive package are unique, thus limiting data collection opportunities. Nevertheless, it would appear that opportunities exist for large-scale data collection and processing in the weapons field. It is interesting to observe that both communities have benefited considerably by increased use of Bayesian methods to infer the performance characteristics of their respective systems' components.
APPARENT DIFFERENCES BETWEEN THE APPROACHES
One of the main differences between the two approaches (PRA and QMU), at least from an outsider's perspective, is the transparency of the performance assessment. The QMU assessments are packaged in a series of highly sophisticated computer codes that have a history of many decades. These codes represent the legacy memory and expert systems of decades of experience in predicting weapon performance. The sophistication of the codes and the matter of security compromises their transparency. However, nuclear power is highly regulated, and the transparency of its safety analysis has always been an inherent requirement of the process. Thus, it is expected that the safety analysis methods for nuclear power plants would make the basic structure and results of the modeling highly visible and accessible.

Another difference between the two approaches is that, at present at least, they are trying to answer different questions. The QMU question is currently driven by a reliability perspective and PRA by a risk perspective. Of course, to understand reliability one must know what the risks are and vice versa. But they are different because the emphasis in the models is different. The final form of the results in the QMU approach is a reliability number, and the final result in a PRA is the risk of damage and adverse consequences. Both approaches attempt to quantify margins of performance and the uncertainties involved. There will indeed be convergence to common goals as QMU begins to address more explicitly such issues as safety, security, the stockpile-to-target sequence, and stockpile aging.
POSSIBLE PRA ENHANCEMENTS OF QMU
This appendix started out with the goal of identifying possible enhancements of the QMU methodology as a result of the very large experience base in probabilistic risk assessment, especially in the case of nuclear power. Perhaps the biggest contribution that PRA could make to the QMU methodology would be a comprehensive PRA of each basic weapon system. Experience with PRA strongly supports the view that the information and knowledge base created in the course of performing the PRA could contribute to the credibility of the QMU process. Almost every phase of nuclear power plant operation has been favorably affected by PRAs, from maintenance to operating procedures, from outage planning to plant capacity factors, from sound operating practices to recovery and emergency response, and from plant simulation to operator training. It is logical to expect the same would be true for the QMU process, for conducting weapon performance assessments, and for carrying out the nuclear explosives safety process.
Some of the characteristics of PRA that might enhance the QMU process are (1) explicitness of event sequences (scenarios) leading to degraded performance, (2) ranking of contributors to nonperformance, (3) the probability of frequency concept for presenting results (see earlier discussion), (4) increased emphasis on evidence-based distribution functions (as opposed to assumed distributions such as Gaussian), and (5) the actual quantification of the risk of degraded performance.

As suggested above, the PRA thought process could very well be the primary vehicle for quantifying the safety and security risk of nuclear weapon systems and of other steps in the nuclear weapon functional life cycle such as the stockpile-to-target sequence and the issue of the aging stockpile and its effect on performance. The PRA framework is compatible with tracking multiple performance measures, including safety, military compatibility, and logistics.
One final thought about how PRA might enhance the QMU process has to do with the changing of management mindsets about performance metrics. PRA has altered the thinking of nuclear power plant management about the importance of having multiple metrics for measuring risk and performance of complex systems. Maybe the weapons community has to do the same thing with its leadership. A single number for weapons reliability is not a confidence builder in understanding the performance characteristics of something as complicated as a nuclear weapon, where there is a need to expose the uncertainties in the reliability predictions.
REFERENCES

Garrick, B.J. 2008. Quantifying and Controlling Catastrophic Risks. Elsevier Press.
Jaynes, E.T. 2003. Probability Theory: The Logic of Science. Cambridge University Press.
Kaplan, S., and B.J. Garrick. 1981. On the Quantitative Definition of Risk. Risk Analysis 1(1): 11-27.