The Committee on National Statistics of the National Research Council (NRC) has had a long-standing goal of helping to develop and encourage the use of state-of-the-art statistical methods across the federal government. This interest prompted discussions, begun several years ago during meetings of the Committee on National Statistics, about the possibility of conducting a study for the U.S. Department of Defense (DoD). Mutual interest between the committee and the DoD Office of Program Analysis and Evaluation in greater application of statistics within DoD led to a meeting of key DoD personnel and several NRC staff. As a result of this meeting, system testing and evaluation emerged as an area where statistical science could prove useful.
Consequently, at the request of DoD, the Committee on National Statistics, in conjunction with the NRC Committee on Applied and Theoretical Statistics, held a two-day workshop in September 1992 on experimental design, statistical modeling, simulation, sources of variability, data storage and use, and operational testing of weapon systems. The workshop was sponsored by the Office of the Director of Operational Test and Evaluation, and the Office of the Assistant Secretary of Defense for Program Analysis and Evaluation. The overarching theme of the workshop was that using more appropriate statistical approaches could improve the evaluation of weapon systems in the DoD acquisition process.
Workshop participants expressed the need for a study to address in greater depth the issues that surfaced at the workshop. Therefore, at the request of DoD, a multiyear panel study was undertaken by the Committee on National Statistics in early 1994. The Panel on Statistical Methods for Testing and Evaluating Defense Systems was established to recommend statistical methods for improving the effectiveness and efficiency of testing and evaluation of defense systems, with emphasis on operational testing. The 13-member panel comprises experts in the fields of statistics (including quality management, decision theory, sequential testing, reliability theory, and experimental design), operations research, software engineering, defense acquisition, and military systems. The study is sponsored by the DoD Office of the Director of Operational Test and Evaluation.
Early in its work, the panel formed seven working groups to study particular aspects of defense testing: (1) design of experiments; (2) uses of modeling and computer simulation; (3) system reliability,
availability, and maintainability; (4) software-intensive systems; (5) organizational context; (6) taxonomy of defense systems in operational testing; and (7) development of case studies. An eighth working group, on methods for combining information, has been merged into two of the other working groups: modeling and simulation, a primary application of such methods; and organizational context, which will consider how the combining of information fits into the organization of the testing process.
This interim report presents the results of the panel's work to date in these areas. We have two goals in preparing this report: (1) to provide the sponsor and the defense testing community with feedback based on the panel's ongoing review of current test practices and (2) to present our current approaches and plans so that interested parties can provide input—for example, additional literature or expert testimony—for our final report. Because the report represents work in progress, we include few conclusions and no recommendations at this time.
From the beginning of this study we have enjoyed the cooperation and participation of many people. We particularly wish to acknowledge the support of Philip Coyle, director, and Ernest Seglie, science advisor, Office of the Director, Operational Test and Evaluation, U.S. Department of Defense (the study sponsors); Henry Dubin, technical director, U.S. Army Operational Test and Evaluation Command; James Duff, technical director, U.S. Navy Operational Test and Evaluation Force; Marion Williams, technical director, U.S. Air Force Operational Test and Evaluation Center; and Robert Bell, technical director, U.S. Marine Corps Operational Test and Evaluation Activity. In addition, we are grateful to many other representatives from the military services, the Office of the Secretary of Defense, and private organizations in the testing community. Appendix D provides a more comprehensive list of the panel's contacts.
As the study moves into its final phase of work, the panel will investigate further the issues addressed in this report. The final report is planned for publication in December 1996.
John E. Rolph, Chair
Panel on Statistical Methods for Testing and Evaluating Defense Systems