Over the past decade and a half, the National Research Council, through its Committee on National Statistics, has carried out a number of studies on the application of statistical methods to improve the testing and development of defense systems. These studies were intended to provide advice to the U.S. Department of Defense (DOD), which sponsored them. Unlike the earlier studies, this one aimed to identify current engineering practices that have proved successful in industrial applications for system development and testing.
The Panel on Industrial Methods for the Effective Test and Development of Defense Systems was given the following charge:
An ad hoc committee, under the auspices of the Committee on National Statistics and the Board on Army Science and Technology, will plan and conduct a workshop that will explore ways in which developmental and operational testing, modeling and simulation, and related techniques can improve the development and performance of defense systems. The workshop will feature invited presentations and discussion to identify specific techniques that have been shown to be effective in industrial applications and are likely to be useful in defense system development.
In addition to the broad issues in its charge, the panel selected three specific topics for its focus from the larger set motivated by that charge: finding failure modes earlier, technological maturity, and use of all relevant information for operational assessments. In our view, these topics were the most important and the most likely to benefit from closer examination.
Finding Failure Modes Earlier

It is well known that an effective way to reduce costs and development times is to identify failure modes and design flaws as early as possible during the development of defense systems. What techniques does industry use to accomplish this? Are there generally applicable principles and practices that could be learned from the commercial sector and applied to DOD? How useful is it to test under operationally realistic conditions early in system development? What aspects of the operational environment can be safely simulated, and what can be ignored? What is meant by the envelope of operational performance for a system, and how far beyond that envelope should one test to discover design flaws and system limitations? Relatedly, how are accelerated life tests used in industry, and what are their advantages and disadvantages?
Technological Maturity

The inclusion of immature hardware and software components is a frequent cause of delays in defense system development and of reduced performance when systems are fielded. It is insufficient to assess the suitability and effectiveness of individual components of defense systems against component-level requirements and specifications while disregarding how each component functions as part of the whole system. Such an approach assesses technological maturity in isolation, ignoring the likely environments of operational use, the effects of use by typical operators, and other potential difficulties involving interoperability with the rest of the system. A second, related issue is how much of the available testing resources should be allocated to the components alone and how much to testing them as part of the parent system. How do these issues differ for hardware versus software systems?
Use of All Relevant Information for Operational Assessment

Data from many different sources are used to design tests and assess operational system performance. These sources include developmental testing, operational testing, modeling and simulation, and the same types of data from earlier stages of development, both for the current system (when evolutionary acquisition is used) and for closely related systems. In evolutionary acquisition, field performance data are often also available from the fielding of earlier versions of the system. As a result, information may come from the operation of a system in very different contexts and may involve appreciably different systems, given that the system in question will change during development. It is therefore a challenge to
incorporate all of these sources of data to guide developmental and operational test design and to improve operational evaluation. Field performance data represent a particularly valuable resource since they can be extremely useful in supporting three types of feedback loops: (1) improving system design based on deficiencies experienced in the field (recognizing that field performance data can be severely incomplete), (2) improving developmental and operational test strategy by observing what system design flaws were missed in developmental and operational testing that later appeared in the field, and (3) using field performance data to validate modeling and simulation.
The main information-gathering activity for the study was a one-and-a-half-day workshop (see Appendix A for the program and list of speakers). The workshop was preceded by a preliminary meeting of the panel to plan it and was immediately followed by a second panel meeting to develop the general outline of the report and some of its conclusions. Panel members worked on drafts of the report at two subsequent meetings.
The panel stresses that it could not, and did not, carry out a comprehensive review of the literature or of industrial engineering methods for systems development. Further, although we intended to address the three motivating topics as completely as possible, many of the issues they raise were not addressed by speakers at the workshop. What the report does is highlight important techniques that have proved very useful in commercial industries and discuss their application in the DOD environment, including processes for setting requirements, system design, and testing. It was also necessary to consider the broader DOD acquisition environment, since its characteristics affect the applicability of industrial practices to DOD. Thus, the study considered availability of and access to data (especially test data), availability of engineering and modeling expertise, and the organizational structure of defense acquisition. The traditional issues in modeling and simulation were not covered in the workshop, except for the use of model-based design tools for requirements setting and test generation.
The panel recognizes that many, perhaps even most, of the leading-edge industrial practices discussed in this report may have been (or are currently being) used in DOD. Thus, the findings and recommendations in the report will not come as a surprise to some readers. However, the DOD environment is very heterogeneous, and industrial best practices are not being followed consistently. Thus, one of the major goals
of this report is to emphasize the benefits of such techniques and promote them so that their use becomes routine and is institutionalized.
The panel is also cognizant of the differences in the environment and incentive structures under which DOD operates compared with those in commercial industries. We have tried to keep these differences in mind in our analyses, findings, and recommendations. The panel believes that there are important gains to be achieved from using these industrial practices and processes.
The remainder of this report is organized as follows. Chapter 2 summarizes the workshop presentations and suggestions on hardware and software development processes. The following five chapters focus on the applicability of industrial practices in the DOD environment and offer the panel's conclusions and recommendations: Chapter 3 covers requirements setting; Chapter 4, system design and development; Chapter 5, testing methods; Chapter 6, communication, resources, and infrastructure; and Chapter 7, organizational structure and related issues.
The agenda for the panel’s workshop is provided in Appendix A. A brief overview of the defense acquisitions process is in Appendix B. Biographical sketches of panel members and staff are in Appendix C.