which production processes can be considered stable and which prototypes may have been poorly produced. However, an approach to estimating system or fleet reliability consistent with this line of reasoning would be to analyze the pattern of failures from each source alone and to model each source's impact on reliability separately. The expectation is that decisions at the boundary would make little difference in such analyses.

Several advantages could result from this general approach. For example, better decisions could be made in separating faulty designs from faulty processes. Also, some design failures might be traced to a single component and easily remedied.


Models now exist or are being developed for representing how weapon systems work, such as missile intercept models and models of earth penetration. These models take input variables, such as impact velocities. Given a multivariate distribution for the inputs, one can run simulations and estimate a number of characteristics of model performance, such as system reliability. Model validation provides a partial means of understanding how useful these estimates are and how they should be compared or combined with real data.
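The simulation-based reliability estimate described above can be sketched as a simple Monte Carlo calculation. Everything in this example is illustrative rather than drawn from the report: the choice of input variables (impact velocity and impact angle), their joint normal distribution, and the `system_succeeds` function standing in for an expensive physics model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint (multivariate normal) distribution for two inputs:
# impact velocity (m/s) and impact angle (degrees), mildly correlated.
mean = np.array([300.0, 45.0])
cov = np.array([[400.0, 30.0],
                [30.0, 25.0]])
inputs = rng.multivariate_normal(mean, cov, size=100_000)

def system_succeeds(velocity, angle):
    # Stand-in for the actual weapon-system model: here the system is
    # simply assumed to work when the inputs fall in an acceptable envelope.
    return (velocity > 250.0) & (np.abs(angle - 45.0) < 15.0)

success = system_succeeds(inputs[:, 0], inputs[:, 1])
reliability = success.mean()
# Binomial standard error gives a rough 95% margin on the estimate.
half_width = 1.96 * np.sqrt(reliability * (1.0 - reliability) / success.size)
print(f"estimated reliability: {reliability:.3f} +/- {half_width:.3f}")
```

In practice each evaluation of the real model may be costly, so the sample size would be far smaller and variance-reduction or surrogate-model techniques might replace the brute-force loop sketched here.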

Consider a computer model intended to simulate a real-world phenomenon, such as the functioning of a defense system. Assessing the validity of a computer model necessarily focuses on the differences between the model's predictions, y*, and the corresponding observations of the phenomenon, y, that is, on the prediction errors. To learn efficiently about the prediction errors, one designs an experiment by carefully choosing a collection of inputs, x, and then running the model, observing the system, and computing the prediction errors at those inputs. The prediction errors are then analyzed, often by developing a model of those errors as a function of the inputs. A candidate starting point for a prediction-error model is that the prediction errors are normally distributed with parameters that depend on x; that is, y = y* + e_x, where e_x ~ N(m_x, σ_x). The objective of the prediction-error model is to characterize the e_x's, which can be difficult to do since (1) x is typically high-dimensional; (2) the model may be numerically challenging and hence may take a long time to converge; (3) observing what actually happens can be extremely expensive, and as a result the number of separate experimental runs may be highly limited; and (4)
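The prediction-error analysis above can be illustrated with a small numeric sketch. The "true" phenomenon, the computer model's bias, the noise level, and the linear form assumed for the mean error m_x are all hypothetical, invented for this example; in practice the form of the error model would be chosen to suit the data and would often be far more flexible (e.g., a Gaussian process).

```python
import numpy as np

rng = np.random.default_rng(1)

# A designed experiment: a small set of carefully chosen inputs x
# (real observations are expensive, so n is deliberately limited).
x = np.linspace(0.0, 10.0, 12)

# Hypothetical truth and a computer model with an input-dependent bias.
y_true = np.sin(x) + 0.05 * x**2
y_star = np.sin(x) + 0.04 * x**2                 # model predictions y*
y_obs = y_true + rng.normal(0.0, 0.05, x.size)   # noisy observations y

# Prediction errors e_x = y - y*; model their mean as a function of x.
e = y_obs - y_star
b1, b0 = np.polyfit(x, e, deg=1)        # linear mean model: m_x = b0 + b1*x
resid = e - (b0 + b1 * x)
sigma = resid.std(ddof=2)               # constant-variance assumption

print(f"fitted mean error model: m_x = {b0:.3f} + {b1:.3f} x")
print(f"residual sigma: {sigma:.3f}")
```

A diagnostic plot of the residuals against x would reveal whether the constant-variance, linear-mean assumptions are adequate; here the residual sigma absorbs both observation noise and lack of fit.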

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.