
4

System Reliability, Availability, and Maintainability

The panel has undertaken an inquiry into current policies and statistical practices in the area of system reliability, availability, and maintainability as related to operational testing in the DoD acquisition process. As noted earlier, operational testing is intended to assess the effectiveness and suitability of defense systems under consideration for procurement. For this purpose, operational suitability is defined in DoD Instruction 5000.2 (U.S. Department of Defense, 1991) as follows:

The degree to which a system can be placed satisfactorily in field use with consideration given to availability, compatibility, transportability, interoperability, wartime usage rates, maintainability, safety, human factors, manpower supportability, logistics supportability, natural environmental effects and impacts, documentation, and training requirements.[1]

Considerations of suitability, including reliability, availability, and maintainability, are likely to have different implications for the design and analysis of operational tests than considerations of effectiveness, and consequently merit distinct attention by the panel in its work.

The panel's inquiry in this area has a threefold purpose. First, we will characterize the range of statistically based reliability, availability, and maintainability activities, resources, and personnel in the various branches of DoD in order to gauge the current breadth, depth, and variability of DoD reliability, availability, and maintainability applications. Second, we will examine several systems and activities in detail, with a view toward assessing the scope of reliability, availability, and maintainability practices in particular applications. Third, we will study the technological level of current reliability, availability, and maintainability practices as a foundation for recommendations about the potential applicability of recently developed reliability, availability, and maintainability methodology, or the need for new statistical developments.

The next section reviews reliability, availability, and maintainability testing and evaluation in the military services. This is followed by an examination of variability in reliability, availability, and maintainability policy and practice. Next is a brief look at industrial (nonmilitary) reliability, availability, and maintainability standards. The chapter ends with a summary and a review of the panel's planned future work in this area.

[1] Curiously, this definition does not explicitly include reliability as a consideration.

RELIABILITY, AVAILABILITY, AND MAINTAINABILITY TESTING AND EVALUATION IN THE MILITARY SERVICES

The panel has examined a substantial collection of government documents that touch on some aspect of reliability, availability, and maintainability testing and evaluation. We have carefully reviewed several of these sources that have been identified as widely used or cited, including DoD's RAM Primer (U.S. Department of Defense, 1982) and the Air Force Operational Test and Evaluation Center's Introduction to JRMET and TDSB: A Handbook for the Logistics Analyst (1995). Other documents have been scanned, including Sampling Procedures and Tables for Life and Reliability Testing (U.S. Department of Defense, 1960) and a substantial number of relevant military standards.

To gain a better understanding of how reliability, availability, and maintainability testing and evaluation is conducted in the military services, the panel held telephone conferences with Army and Navy operational test and evaluation personnel and visited the Air Force Operational Test and Evaluation Center, reviewing a variety of reliability, availability, and maintainability organizational procedures and technical practices. We received briefings on the recent operational tests of the B-1B bomber and the C-130H cargo transport, as well as demonstrations of major software packages in use for test design and analysis. In addition to these activities, we addressed reliability, availability, and maintainability topics as part of our site visit to the Army Test and Experimentation Command Center at Fort Hunter Liggett, during which we observed preparations for operational testing of the Apache Longbow helicopter.

Evaluation of reliability, availability, and maintainability in Navy operational testing occurs within the four major divisions of the Navy Operational Test and Evaluation Force: air warfare, undersea warfare, surface warfare, and command and control systems. Analysts work as part of operational test teams that are typically directed by military personnel with significant operational experience. Many analysts have received graduate training in operations research and statistics at the Naval Postgraduate School.

The Army appears to have achieved the greatest degree of integration between developmental and operational testing in evaluating reliability, availability, and maintainability. The reliability, availability, and maintainability division at the Army Materiel Systems Analysis Activity is the organization that concentrates most on reliability, availability, and maintainability issues, but other units are also involved, including the Test and Evaluation Command, Operational Evaluation Command, Army Materiel Command, Program Evaluation Office, and Training and Doctrine Command. In the Army, reliability, availability, and maintainability data for a system are scored by a joint committee involving personnel from the Operational Evaluation Command, the Army Materiel Systems Analysis Activity, the Training and Doctrine Command system manager, and the program manager.

In the Air Force, reliability, availability, and maintainability evaluation is part of the mission of the Logistics Studies and Analysis Team within the Air Force Operational Test and Evaluation Center's Directorate of Systems Analysis. Each of approximately ten analysts works concurrently on 10 to 12 different systems. Most Air Force analysts have a background in engineering or operations research, and may receive further training in statistics from courses offered by the Air Force Institute of Technology.


VARIABILITY IN RELIABILITY, AVAILABILITY, AND MAINTAINABILITY POLICY AND PRACTICE

The panel has developed a reasonably comprehensive understanding of the range of documents that guide the majority of DoD reliability, availability, and maintainability applications and of the professional training of technical personnel engaged in reliability, availability, and maintainability applications.

A distinct impression that has emerged is that there is a high degree of variability in reliability, availability, and maintainability policy and practice among the services, as well as within the testing community in each service branch. For example, it is only recently that some agreements have been forged regarding a common vocabulary in this area. There are certain units in the individual services in which reliability, availability, and maintainability practices are modern and rigorous; some of these have members with advanced training (i.e., M.S. or Ph.D.) in statistics or a related field. On the other hand, the opportunities for advanced reliability, availability, and maintainability-related coursework within the services (whether delivered in house or through special contracts) appear to be quite uneven.

The Navy has unique access to an excellent technical resource—the Naval Postgraduate School in Monterey—and often refers its more complex reliability, availability, and maintainability problems to the school's faculty directly or engages graduates of the school in addressing them. Ironically, and possibly as a consequence of this mode of operation, there seems to be less reliability, availability, and maintainability-related technical expertise in residence at Navy operational test and evaluation installations than is found among the other services.

Although the Army appears to have a larger corps of reliability, availability, and maintainability professionals at work, the distribution of this specialized workforce among various developmental and operational testing installations appears to be quite uneven. The reliability group (Army Materiel Systems Analysis Activity) at Aberdeen Proving Ground, for example, has a strong educational profile. The group's use of modern tools of reliability analysis, including careful attention to experimental design, model robustness questions, and the integration of simulated and experimental inputs, is impressive and commendable. At Fort Hunter Liggett, the profile of the statistical staff is quite different, in terms of both size and years of advanced statistical training. The sample work products the panel has seen from these two installations are noticeably different, the former being more technical and analytical and the latter more descriptive. (The Training and Doctrine Analysis Command is another example of a group for which an upgrading of capabilities in statistical modeling and analysis could pay significant dividends.)

In its review of Air Force Operational Test and Evaluation Center procedures and practices, the panel was impressed by the care with which materials for training suitability analysts were assembled, and with the coordinated way in which reliability, availability, and maintainability procedures were carried out on specific testing projects. The level of energy, dedication to high standards, and careful execution of statistical procedures were commendable. Certain areas of potential improvement were also noted. Among these, the need for more personnel with advanced degrees in the field of statistics seemed most pressing. It was clear that certain statistical methods in use could be improved through the recognition of failure models other than those resulting in an exponential distribution of time between failures.
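
To make the last point concrete, the sketch below (our own illustration in Python, not drawn from any Air Force analysis, and using invented failure times) fits both an exponential and a Weibull model to a set of times between failures and compares the two fits. A Weibull shape parameter well away from 1 indicates a failure rate that is not constant, in which case exponential-based estimates, intervals, and test plans can be misleading.

```python
# Hedged sketch: compare exponential and Weibull fits to times between failures.
# The data are hypothetical; the point is the model comparison, not the numbers.
import numpy as np
from scipy import stats

# Hypothetical times between failures, in hours.
tbf = np.array([12.0, 35.0, 48.0, 60.0, 71.0, 85.0, 96.0, 110.0, 130.0, 155.0])

# Exponential fit (location fixed at zero): assumes a constant failure rate.
_, exp_scale = stats.expon.fit(tbf, floc=0)
exp_loglik = stats.expon.logpdf(tbf, scale=exp_scale).sum()

# Weibull fit (location fixed at zero): the shape parameter measures how far
# the data depart from the constant-failure-rate assumption.
wb_shape, _, wb_scale = stats.weibull_min.fit(tbf, floc=0)
wb_loglik = stats.weibull_min.logpdf(tbf, wb_shape, scale=wb_scale).sum()

# Compare by AIC: the exponential has 1 free parameter, the Weibull has 2.
aic_exp = 2 * 1 - 2 * exp_loglik
aic_wb = 2 * 2 - 2 * wb_loglik
print(f"Weibull shape estimate: {wb_shape:.2f}")
print(f"AIC, exponential: {aic_exp:.1f}; AIC, Weibull: {aic_wb:.1f}")
```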

Some groups with whom the panel conferred described the RAM Primer as their main reference, while others indicated they view that document as more of an introduction to these topics, providing a management perspective. This again underscores the extent of variability within the DoD testing community. We have noted a similar variability in the way different testing groups mix civilian and military analysts in the teams assigned to their reliability, availability, and maintainability applications.

The panel recognizes that different phases of the acquisition process may well have different technical requirements; thus, it seems clear that the level of the personnel involved might vary from phase to phase in accordance with the task or mission of the responsible group. We nonetheless observe that the execution of a sequence of tasks and the validity of the cumulative recommendation benefit from technical expertise at each step in the process.

INDUSTRIAL (NONMILITARY) STANDARDS

In parallel with various military documents in the reliability, availability, and maintainability area, the panel has reviewed a collection of documents describing reliability, availability, and maintainability industrial standards and practices. In the course of doing so, we have found that the existence of an accepted set of best practices is a goal much closer to being realized in industrial than in military settings. Models for such developments include the International Organization for Standardization (ISO) 9000 series and existing documents on practices in the automobile and telephone industries.

In our final report, the panel will want to comment on the possibility that DoD might learn from industrial practices in such areas as documentation, uniform standards, and the pooling of information. Documentation of processes and retention of records (for important decisions and valuable data) are practices now greatly emphasized in industry. The same should be true for DoD, especially for the purposes of operational testing. Efforts to achieve more efficient (i.e., less expensive) decision making by pooling data from various sources require documentation of the data sources and of the conditions under which the data were collected, as well as clear and consistent definitions of various terms. Such efforts complement attempts to standardize practices across the services and encourage the use of best current practices.

The panel does believe that reliability, availability, and maintainability “certification” can better be accomplished through combined use of data collected during training exercises, developmental testing, component testing, bench testing, and operational testing, along with historical data on systems with similar suitability characteristics. In our final report, the panel will seek to clarify the role hierarchical modeling might play in reliability, availability, and maintainability inference from such a broadened perspective. One complication here is that the panel has encountered operational testing reports (one example is an Institute for Defense Analyses report on the Javelin) that put forth raw estimates of parameters of interest with no indication of the amount of uncertainty involved. This practice does not appear to be particularly rare. When such an outcome is combined with other evidence of a system's performance (regardless of the quality of this additional information), the decision maker is at a loss to describe in a definitive way the risks associated with the acquisition of the system, and combination of information from various sources is extremely difficult.
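
As a simple illustration of reporting uncertainty rather than a raw estimate alone, the sketch below (a hypothetical example, not taken from the Javelin report or any other document cited here) computes an exact confidence interval for mean time between failures under an exponential model, for a test terminated at a fixed number of failures. The operating hours and failure count are invented.

```python
# Hedged sketch: point estimate and exact confidence interval for MTBF under an
# exponential model, for a failure-terminated (Type II censored) test.
from scipy.stats import chi2

total_time_on_test = 2400.0   # unit-hours accumulated across all test articles (hypothetical)
num_failures = 6              # failures observed before the test was stopped (hypothetical)
confidence = 0.90

mtbf_hat = total_time_on_test / num_failures   # the raw point estimate

# Under the exponential model, 2T / MTBF follows a chi-square distribution with
# 2r degrees of freedom, which yields an exact two-sided interval.
alpha = 1.0 - confidence
dof = 2 * num_failures
lower = 2.0 * total_time_on_test / chi2.ppf(1.0 - alpha / 2.0, dof)
upper = 2.0 * total_time_on_test / chi2.ppf(alpha / 2.0, dof)

print(f"MTBF point estimate: {mtbf_hat:.0f} hours")
print(f"{int(confidence * 100)}% confidence interval: ({lower:.0f}, {upper:.0f}) hours")
```

Even a simple interval of this kind gives the decision maker a sense of the risk attached to a point estimate, and it provides the uncertainty statement needed if the estimate is later to be combined with other sources of information.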

Retention of records may involve some nontrivial costs, but is clearly necessary for accountability in the decision-making process. The trend in industry is to empower employees by giving them more responsibility in the decision-making process, but along with this responsibility comes the need to make people accountable for their decisions. This consideration is likely to be an important organizational aspect of the operational testing of defense systems. In addition, effective retention of information allows one to learn from historical data and past practices in a more systematic manner than is currently the case.

It should not be necessary for DoD or the individual services to develop an approach to uniform standards from scratch; there is no question that existing industry guidelines can be adapted to yield much of what is needed. A related observation concerns the need to realize the potential for more effective use of reservoirs of relevant knowledge outside DoD, including experts at the Institute for Defense Analyses and other federally funded research and development centers, faculty and students from the military academies, and personnel from public and private civilian institutions.

FUTURE WORK

In summary, the panel's preliminary reactions to its findings in the reliability, availability, and maintainability area are as follows:

  • The variability in expertise and level of reliability, availability, and maintainability practices across and within services bears further investigation, and may well lead to recommendations regarding minimal training and more comprehensive written resources.

  • DoD should consider emulating industrial models for establishing uniform reliability, availability, and maintainability standards.

  • There is a need to modernize military reliability, availability, and maintainability practices by extending standard analyses beyond their present, restricted domains of applicability, which often rest on an untenable assumption of exponentiality.

The panel has not yet identified a suitable collection of military systems that should play the role of case studies in the context of a more detailed look at DoD reliability, availability, and maintainability practices. We have looked carefully at the Apache Longbow in this connection, and are still entertaining that as a candidate case study. Proceeding with the case study phase of our work remains our principal unfinished task.

In the months ahead, the panel will consider matters such as appropriate research priorities in the reliability, availability, and maintainability area, and in statistics generally, given the array of complex inference problems with which the DoD testing community is currently engaged. The agenda for the next phase of the panel's reliability, availability, and maintainability-related work will include, as high priorities, increased contact with the Air Force and Marine Corps and assessment of the quality and appropriateness of current reliability, availability, and maintainability practices, together with the formulation of possible amendments aimed at greater precision, efficiency, and protection against risk.

We will undertake two main activities: identifying materials relevant to the selected case studies, and interacting with Navy Operational Test and Evaluation Force staff to increase our familiarity with their procedures and with the Army Materiel Systems Analysis Activity to better understand its role in Army reliability, availability, and maintainability methodology for developmental testing. We will also undertake some Monte Carlo simulation to examine the robustness of current reliability, availability, and maintainability practices in the design and execution of life tests.
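
As an indication of what such a simulation might look like, the following sketch (an illustration under invented assumptions, not the panel's actual study design) draws failure times from a Weibull distribution, analyzes them with the standard exponential-based interval for mean time between failures, and records how often the nominal 90 percent interval covers the true value.

```python
# Hedged sketch of a Monte Carlo robustness check: Weibull data analyzed under
# the exponential assumption.  All parameters are invented for illustration.
import math
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(12345)
n_items = 20        # test articles, each run to failure (complete data)
n_reps = 5000       # Monte Carlo replications
shape = 0.7         # Weibull shape < 1: decreasing failure rate (early-life failures)
scale = 100.0       # Weibull scale parameter, in hours
true_mtbf = scale * math.gamma(1.0 + 1.0 / shape)

alpha = 0.10
dof = 2 * n_items
chi_hi = chi2.ppf(1.0 - alpha / 2.0, dof)
chi_lo = chi2.ppf(alpha / 2.0, dof)

covered = 0
for _ in range(n_reps):
    times = scale * rng.weibull(shape, size=n_items)  # simulated failure times
    total_time = times.sum()
    # Exponential-based interval for the MTBF from total time on test.
    lower = 2.0 * total_time / chi_hi
    upper = 2.0 * total_time / chi_lo
    covered += int(lower <= true_mtbf <= upper)

print(f"nominal coverage: 90%, empirical coverage: {100.0 * covered / n_reps:.1f}%")
```

With a shape parameter below 1 the data are more variable than the exponential model allows for, so the empirical coverage of the nominal 90 percent interval typically falls short of its stated level, which is precisely the kind of sensitivity such a study is designed to expose.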
