failures and field performance are often retained, they are rarely archived in a manner that facilitates analyses that could improve the system development process. In particular, such information could be used to improve the design of future developmental and operational tests by helping to explain how system flaws were missed in previous developmental and operational testing and how such omissions can be remedied.

In contrast, in industrial applications, test and field use data are often employed for these and other purposes and are frequently archived in a manner that facilitates analysis in support of these uses. Specifically, field use data are employed to predict future warranty or maintenance costs, as well as to detect reliability problems in fielded systems early. Albeit less frequently, these data are also used to provide information on the discovery of failure modes and their frequency of occurrence, information that is in turn used to improve developmental and operational test procedures. Further, this information supports comparisons of system performance (failure modes and their frequencies) in developmental or laboratory tests, versus performance in operational tests, versus performance in the field. Understanding how system performance relates across tests with varying degrees of operational realism is extremely valuable for reliability growth modeling and for learning how to design laboratory and operational tests with greater operational realism. Finally, field performance data are used to feed component-level reliability information back to design engineers so they can improve current or future component or system designs.
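
As a minimal illustration of the reliability growth modeling mentioned above, the following sketch fits the widely used power-law (Crow-AMSAA) nonhomogeneous Poisson process model to failure times from a single system under test. The data values, the function name fit_crow_amsaa, and the choice of Python are illustrative assumptions, not drawn from this report.

```python
# Hypothetical sketch: fitting a power-law (Crow-AMSAA) reliability
# growth model to failure times from a single system under test.
# Data values and function names are illustrative assumptions.
import numpy as np

def fit_crow_amsaa(failure_times, total_test_time):
    """Maximum-likelihood fit of the power-law NHPP with mean
    cumulative failures E[N(t)] = lam * t**beta, for a test
    truncated at total_test_time."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(total_test_time / t))  # shape (growth) parameter
    lam = n / total_test_time**beta                 # scale parameter
    return lam, beta

# Illustrative failure times (hours) over a 2,000-hour test.
times = [55, 180, 370, 620, 990, 1400, 1900]
lam, beta = fit_crow_amsaa(times, 2000.0)

# Instantaneous MTBF at the end of the test, 1 / intensity(T).
mtbf_now = 1.0 / (lam * beta * 2000.0**(beta - 1))
print(f"beta = {beta:.2f}, lambda = {lam:.4f}, current MTBF ~ {mtbf_now:.0f} h")
```

A fitted shape parameter below 1 indicates that the failure intensity is declining, that is, that reliability is growing over the test period.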

While field performance data have many potential uses in industry, they also have disadvantages. Some disadvantages stem from the primary reason such data are collected in industry: to support administrative actions such as warranty management. As a result, the data are often less suitable for the analyses outlined above than data from a structured experiment would be. Deficiencies include the following. First, a sizable fraction of the data is missing, and there are reporting errors and delays. Second, while actual time of use would be the optimal measure of system life, commonly only calendar time is available. Third, the environment of use is often only partially known or entirely unknown. Fourth, in warranty situations, failures are reported only for units that are still under warranty; as a result, the data are reliable only through the warranty period, and the status of unreported units (including retired units and units never put into service) is unknown. Finally, most field performance data are collected only for repairable systems.
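
The warranty-censoring deficiency just described has a standard statistical treatment when the data are archived appropriately: units with no reported failure are treated as right-censored at the end of the warranty period rather than discarded. The sketch below is a minimal illustration under an assumed Weibull life distribution with hypothetical data; it is not a method prescribed by this report.

```python
# Hypothetical sketch: Weibull fit to warranty data in which failures
# are observed exactly, while units with no reported failure are
# right-censored at the warranty limit. Data are illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t_fail, t_cens):
    """Negative log-likelihood: log-density for observed failures
    plus log-survival for right-censored units."""
    k, lam = np.exp(params)  # log-parameterization keeps k, lam > 0
    z_f, z_c = t_fail / lam, t_cens / lam
    ll = np.sum(np.log(k / lam) + (k - 1) * np.log(z_f) - z_f**k)  # failures
    ll += np.sum(-(z_c**k))                                        # censored
    return -ll

# Illustrative data: failure ages (months) reported under a 36-month
# warranty, and units that reached 36 months with no reported failure.
t_fail = np.array([4.0, 11.0, 19.0, 27.0, 33.0])
t_cens = np.full(95, 36.0)  # 95 of 100 units survived the warranty period

res = minimize(neg_log_lik, x0=[0.0, np.log(36.0)], args=(t_fail, t_cens))
k_hat, lam_hat = np.exp(res.x)
print(f"shape k = {k_hat:.2f}, scale lambda = {lam_hat:.0f} months")
```

Such a censored-data fit supports the warranty cost predictions described earlier, although it cannot by itself resolve the unknown status of retired units or units never put into service.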


