A further reason that test and field performance data are not fully utilized in industrial applications is that easy access to these data carries an associated cost. Although the collection of field performance data is effectively free, since they are often required for other purposes, field performance (and test) data must be catalogued in a database structure in a way that facilitates the uses described above. The construction and maintenance of such a database is time- and resource-intensive. This point can be illustrated by considering the need to create a (living) cross-referencing system that identifies all (current and future) systems having components in common with a given system, all test event results, the conditions underlying each test event, the performance of components when fielded, and the conditions underlying field use. The benefits from the use of such data must be sufficient to offset this substantial cost. Making this argument was one goal of some of the presentations on this topic at the workshop.
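The cross-referencing structure described above can be sketched as a small relational schema. The sketch below is a minimal illustration, assuming a simple normalized layout; the table names, column names, and sample data are all hypothetical, not drawn from any actual database.

```python
import sqlite3

# Illustrative schema: which components appear in which systems, plus
# (not populated here) test events and field use keyed by component.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE system_component (system TEXT, component TEXT);
CREATE TABLE test_event (component TEXT, result TEXT, conditions TEXT);
CREATE TABLE field_use  (component TEXT, performance TEXT, conditions TEXT);
""")
conn.executemany(
    "INSERT INTO system_component VALUES (?, ?)",
    [("A", "pump-1"), ("A", "valve-2"), ("B", "pump-1"), ("C", "valve-3")],
)

def systems_sharing_components(conn, system):
    """All other systems having at least one component in common
    with the given system (a self-join on the component column)."""
    rows = conn.execute(
        """
        SELECT DISTINCT other.system
        FROM system_component AS this
        JOIN system_component AS other ON other.component = this.component
        WHERE this.system = ? AND other.system != ?
        """,
        (system, system),
    )
    return sorted(r[0] for r in rows)

print(systems_sharing_components(conn, "A"))  # → ['B']
```

Keeping the cross-references as joins rather than stored lists is what makes the system "living": a newly added system that reuses an existing component is picked up by the same query with no maintenance.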
Bill Meeker provided an overview, from an industrial perspective (in particular, automobile warranty data), of the many opportunities to learn from the analysis of field performance data. He focused on features of such data that would be expected for defense systems: (1) data are collected until the system is a certain age or until it has covered a fixed number of miles, (2) there is only limited information on the exact cause of failure, (3) there is good information on the date of manufacture, (4) there is often useful information on the rate of use for each system, and (5) there are potential biases in estimation resulting from various homogeneity assumptions (e.g., high-speed drivers may have a different miles-per-failure distribution).
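Feature (1) above, observation ending at whichever of the age limit or mileage limit is reached first, can be sketched as follows. This is a minimal illustration, not Meeker's method; the function name and the 3-year/36,000-mile limits are assumed for the example.

```python
def warranty_observation(miles_per_year, failure_miles,
                         age_limit_years=3.0, mileage_limit=36000):
    """Return (miles observed, failure seen?) for one vehicle.

    Observation stops at the warranty mileage limit or at the mileage
    reached when the age limit expires, whichever comes first; a failure
    after that point is right-censored. The limits are illustrative.
    """
    censor_miles = min(mileage_limit, miles_per_year * age_limit_years)
    if failure_miles <= censor_miles:
        return failure_miles, True    # failure observed under warranty
    return censor_miles, False        # right-censored observation

# A low-use vehicle fails in warranty; a high-use vehicle's later
# failure is censored at the mileage limit.
print(warranty_observation(12000, 20000))  # → (20000, True)
print(warranty_observation(24000, 80000))  # → (36000, False)
```

The second case also illustrates features (4) and (5): the rate of use determines how quickly a vehicle exits observation, so high-use drivers are censored earlier and can bias estimates if homogeneity is assumed.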
Field performance data have several key applications. A primary use is to support early detection of production processes in trouble. A common approach for this purpose is to graph the observed percentage of system failures by months in service alongside a graph of the upper bound for an estimate of the same, based on a quantile of a standard distribution used to model failure counts (e.g., the Poisson distribution) with its parameters estimated from historical data. Two detection rules are used to signal the need for corrective action: (1) the observed failure rate at a point in time exceeds a particular quantile based on the historical data, or some function of the observed number of failures (usually chosen to approximate some standard distribution) exceeds the historical estimate plus a critical value times an estimate of the standard deviation of the historical estimate; and (2) the difference between some function of the observed number of failures at time t and at time t – 1 is greater than the historical estimate for the same difference plus a critical value times an estimate of its standard deviation.
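The two detection rules can be sketched roughly as follows. This is a simplified illustration, not the exact procedure described at the workshop: the function names are invented, the "function of the observed number of failures" is taken to be the raw failure fraction, and the critical value of 3 is an assumed choice.

```python
def flag_level(observed, at_risk, hist_mean, hist_sd, crit=3.0):
    """Rule (1) sketch: flag month t if the observed failure fraction
    exceeds the historical estimate plus crit standard deviations."""
    return [t for t, (f, n) in enumerate(zip(observed, at_risk))
            if f / n > hist_mean[t] + crit * hist_sd[t]]

def flag_increment(observed, at_risk, hist_inc_mean, hist_inc_sd, crit=3.0):
    """Rule (2) sketch: flag month t if the month-over-month increase
    in the failure fraction exceeds the historical increment plus
    crit standard deviations."""
    rates = [f / n for f, n in zip(observed, at_risk)]
    return [t for t in range(1, len(rates))
            if rates[t] - rates[t - 1] > hist_inc_mean[t] + crit * hist_inc_sd[t]]

# Illustrative data: a jump in failures in the third month in service.
observed = [2, 3, 40]
at_risk = [1000, 1000, 1000]
print(flag_level(observed, at_risk,
                 hist_mean=[0.002, 0.003, 0.004],
                 hist_sd=[0.001, 0.001, 0.001]))       # → [2]
print(flag_increment(observed, at_risk,
                     hist_inc_mean=[0.0, 0.001, 0.001],
                     hist_inc_sd=[0.0005, 0.0005, 0.0005]))  # → [2]
```

Rule (2) reacts to a sudden change even when the absolute level is still moderate, which is why the two rules are used together.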