Vulnerability Assessment of Aircraft: A Review of the Department of Defense Live Fire Test and Evaluation Program

B The General Accounting Office Study: Live Fire Testing: Evaluating DoD's Programs

The General Accounting Office (GAO) study was conducted at the request of the Chairman, Seapower Subcommittee, House Armed Services Committee. The purpose of this study was to answer four questions: (1) What is the status of each system originally scheduled for live fire testing under the JLF program? (2) What has been the methodological quality of the test and evaluation process? (3) What are the advantages and limitations of full-up live fire testing, and how do other methods complement full-up testing? (4) How can live fire testing be improved? Of interest here are questions 2, 3, and 4. The conclusions presented by GAO for each of these three questions for the aircraft portion of its study are given below. The abbreviations DTPs and V/L stand for detailed test plans and vulnerability/lethality, respectively.

What Has Been the Methodological Quality of the Test and Evaluation Process?

AIRCRAFT SPECIFIC

Overall Planning

In general, JLF/Aircraft planning has been well organized and thorough. JLF/Aircraft established a formal process to designate test priorities; however, test priorities were actually driven by more pragmatic concerns (target availability and the need to ensure tri-service cooperation). The principal constraint on realism is the inability to simulate flight conditions on the ground. Airflow is used to simulate airspeed, but the coverage area is small, and other environmental factors affecting fire are not simulated at all.

Setting Test Objectives

In FY85 and FY86 DTPs, JLF/Aircraft specified objectives congruent with the version of the program objectives they had established. These were generally feasible, with the exception of objectives related to determining probabilities.
Test Planning

JLF/Aircraft test designs are generally congruent with test objectives, efficient with respect to conserving targets, and realistic given their limited objectives. Some DTPs specified target requirements which exceeded the availability of those targets. Testers are highly sensitive to test efficiency from an engineering standpoint, i.e., designing tests to conserve targets and prevent testing effects. DTPs omit key information (e.g., data analysis plans) and are inconsistent in selection of threat velocities.

Implementation

To the limited extent we could observe them, departures from test plans have generally been reasonable.

Analysis and Results

Only one draft report has been completed—the F100 engine steady-state fuel ingestion test. This report omitted key information, overstated the generalizability of results, and presented a highly questionable model. Recommendations were congruent with results and sensitive to the likelihood of user acceptance.

GENERAL LFT ISSUES

Conflict over Objectives

The JLF charter did not define live fire testing well enough to give test designers a clear direction. There have been several conflicting versions of the objectives of JLF and live fire testing in general. This appears to have resulted in part from the decision to task the JTCGs to implement JLF. The conflict over objectives reflects underlying differences between the interests of proponents of full-up testing and those of modelers, resulting in largely incompatible approaches.

Availability of Targets

The principal constraint faced by all JLF test officials is a lack of targets. This is in part a result of inadequate planning; there is no assigned responsibility to provide targets and related support to JLF. Consequently, test officials have had to spend a substantial portion of their time "selling" the program to skeptical service components.
The systems and components that JLF does receive are frequently in poor condition, yet JLF provided no funds for restoration. JLF has been further hindered by competing governmental and non-governmental interests and negative attitudes toward destructive testing.

Statistical Validity

In general, the sample sizes of JLF and related live fire testing have not been sufficient to produce statistically reliable results. This would be a problem even if the number of targets listed in the test plans could be obtained. The statistical input to JLF has been minimal and had little effect, and the few applications of statistical analysis to live fire test data thus far are highly questionable. Several efforts are underway to make live fire tests more statistically interpretable. As a substitute for statistical analysis, engineering judgment—which is heavily relied upon throughout the V/L process—has little scientific validity, being subject to individual and collective biases. The most common form of vulnerability/lethality indicator—probability of kill given a hit (PK/H)—has not been demonstrated to be reliable or valid.
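The small-sample problem described above can be made concrete with a rough calculation (not from the GAO report): with the shot counts typical of live fire tests, a confidence interval on an observed kill fraction such as PK/H remains very wide. The sketch below uses the standard Wilson score interval; the kill counts and shot totals are hypothetical.

```python
import math

def wilson_interval(kills, shots, z=1.96):
    """95% Wilson score interval for an observed kill fraction.

    Illustrative only: the shot totals below are hypothetical and not
    taken from any actual live fire test.
    """
    p = kills / shots
    denom = 1 + z**2 / shots
    center = (p + z**2 / (2 * shots)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / shots + z**2 / (4 * shots**2))
    return max(0.0, center - half), min(1.0, center + half)

for shots in (5, 10, 50, 200):
    kills = shots // 2          # suppose half the shots produce a kill
    lo, hi = wilson_interval(kills, shots)
    print(f"{shots:4d} shots: PK/H in [{lo:.2f}, {hi:.2f}]  (width {hi - lo:.2f})")
```

With 10 shots the interval spans roughly half the unit scale, which is one way to quantify the report's point that typical live fire sample sizes cannot by themselves establish a reliable PK/H estimate.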
Shot Selection

Controversy over shot selection is to some degree a conflict between sampling efficiency and the desire to avoid bias at all costs. Random sampling from combat distributions is a reasonable way to preclude intentional or inadvertent bias in shot selection. However, sampling from a uniform distribution avoids both tester bias and biases in the combat data. The shot selection problem will not be resolved by technical solutions alone. An interim solution might be to designate that some proportion of shots be selected judgmentally and others randomly, but ultimately, it appears impossible to agree on how to select live fire shots without first deciding on test objectives.

Human Effects

JLF plans do not provide an adequate treatment of human effects. The claims of some JLF officials that personnel vulnerability is well known are overstated. Given the current state of the art, it is unlikely that JLF will produce precise estimates of casualties.

Incentive Structure

DoD's incentive structure is not entirely conducive to realistic live fire testing.

COMPARISON PROGRAMS

Past Programs

The state of the art of live fire testing has improved since prior live fire testing programs, but some potentially solvable problems raised earlier have not been solved. For example, little progress has been made in the empirical validation of V/L estimates.

SUMMARY CONCLUSION

There is little completed testing on which to base a methodology evaluation. However, it is apparent that the technical capability to do full-up testing is not well developed. This is partly due to the historically low emphasis on live fire testing in the U.S.
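The interim scheme mentioned under Shot Selection — a fixed proportion of shots chosen judgmentally, the remainder drawn at random — could be prototyped along the following lines. The shotline identifiers, aspect-angle grid, and 50/50 split here are invented for illustration and do not reflect any actual test plan.

```python
import random

def select_shots(judgmental, candidate_pool, total, frac_judgmental, seed=0):
    """Split a shot budget between judgmentally chosen and randomly
    sampled shotlines, per the interim scheme discussed in the text.

    `judgmental` is a tester-ranked list of shotlines; `candidate_pool`
    stands in for a combat or uniform aspect-angle distribution.
    Both lists are hypothetical.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible plan
    n_j = round(total * frac_judgmental)
    chosen = judgmental[:n_j]
    # Sample the remaining budget from the pool, excluding shots already taken.
    remaining = [s for s in candidate_pool if s not in chosen]
    chosen += rng.sample(remaining, total - n_j)
    return chosen

# Hypothetical shotlines identified by aspect angle (degrees).
ranked = [0, 90, 180, 270]             # tester-prioritized aspects
pool = list(range(0, 360, 15))         # uniform grid of candidate aspects
plan = select_shots(ranked, pool, total=8, frac_judgmental=0.5)
print(plan)                            # 4 judgmental aspects, then 4 random ones
```

As the report notes, such a mechanism only defers the underlying disagreement: the judgmental fraction and the sampling distribution both encode a choice of test objectives.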
What Are the Advantages and Limitations of Full-up Live Fire Testing? (Both Land and Air Targets)

ADVANTAGES OF LIVE FIRE TESTING

As the only method providing direct visual observation of the damage caused by a weapon/target interaction under realistic combat conditions, full-up live fire testing offers a unique advantage over all other methods of V/L assessment. The descriptions of directly observable damage that full-up testing provides are regarded as highly beneficial by users. Full-up testing has already demonstrated some value by producing several "surprises," i.e., results that were not predicted and might not have been detected by other methods of testing or analysis.

LIMITATIONS OF LIVE FIRE TESTING

High Cost

The primary limitation of full-up, full-scale live fire testing is cost. On a per-shot basis, it is considerably more expensive than inert or subscale testing, primarily due to the high cost and limited availability of targets. Testing and restoration costs are also higher, as are their associated time requirements. Nonetheless, live fire testing costs are a very small percentage of total program costs.

Limited Information

Full-up testing potentially yields less information about damage mechanisms per shot than inert or subscale testing, primarily because catastrophic kills destroy the target and its components, along with much of the instrumentation used to record the damage. However, not all full-up shots result in catastrophic kills; such shots potentially yield more interpretable information than equivalent inert shots.

Limited Generalizability

Full-up live fire test results typically are less easily generalized beyond the specific test conditions than those of inert or subscale testing.
Full-up testing brings a larger number of variables into play that potentially affect outcomes, yet because full-up testing destroys targets, a smaller proportion of relevant test conditions can be examined.

Limited Redesign Opportunities

The impact of live fire testing of developed systems is limited by "frozen" designs, which are prohibitively expensive to change. For this reason, test officials see the main benefit of JLF and related programs as reducing the vulnerability of future systems through lessons learned. This is not to suggest, however, that important V/L modifications are never feasible.
How Do Other Methods Complement Full-up Testing? (Both Land and Air Targets)

Subscale Testing

Subscale tests can support larger sample sizes than full-scale tests (whether full-up or inert), and are useful in bounding effects and providing input to models. Certain types of subscale testing are also useful for developing generic characterizations of munitions effects. Subscale tests can provide only indirect evidence of synergistic effects on realistic targets, which must be inferred through an unproven analytical process (modeling). Therefore, subscale testing can supplement full-up, full-scale testing but not substitute for it.

Inert Testing

Inert testing of full-scale targets is superior to full-up testing in characterizing mechanical damage to individual components and in conserving both components and targets. Catastrophic damage cannot be observed directly from shots on inert targets, and the standard method for inferring a K-kill underestimates its true likelihood. Like subscale tests, inert tests can provide only indirect evidence of effects on realistic (i.e., full-up) targets, inferred through models acknowledged to be weak on combustibles. Therefore, inert testing can supplement full-up, full-scale testing but not substitute for it.

Combat Data

Analysis of combat data, if available, has several advantages over V/L testing: it provides greater realism, includes information above the level of vulnerability and lethality (e.g., aggregated survivability measures), and is considerably less expensive. Combat data provide less scientific control than testing, are limited to munitions and systems that have been employed in combat, and offer no direct view of the damage process or the conditions of firing. Like subscale and inert testing, combat data can supplement full-up, full-scale testing but not substitute for it.
Modeling

V/L models support the design and interpretation of live fire tests, and are potentially useful in extrapolating beyond test results. A unique advantage of models over testing is their applicability to systems not yet built. Models are widely used in V/L assessment generally, but play a more central role in the design and interpretation of armor tests than in aircraft tests. It does not appear that models have as yet played as great a role in the design of live fire tests as some statements by the modelers would indicate. Current vulnerability models share numerous limitations; specifically, fire, explosion, multiple hits, ricochets, synergistic effects, and human effects are not yet well modeled. Many of the most important mechanisms for producing casualties are poorly modeled, if at all. Without specific efforts to bring these casualty mechanisms into the modeling process, V/L models can be expected to be of limited utility in predicting casualty reduction. Currently used V/L models are inadequately validated.
A large part of the modeling and model revision process is closed to outside analysts, including weapon designers. This has led to claims that modelers ignore or misspecify important V/L mechanisms, or that they are accountable only to their own community. Claims that vulnerability models predict poorly are somewhat overstated, often referring to predictions from older models not expected to be used in live fire tests; moreover, there are insufficient test or combat data to permit unqualified conclusions. Additionally, little attention has been paid to the different levels of accuracy required for different users' purposes. The stochastic components introduced into vulnerability models after the Bradley Phase I tests provide an unknown level of protection from invalidation by test data. There are no clearly specified mechanisms for using live fire test data to calibrate or revise models.
How Can Live Fire Testing Be Improved?

TECHNICAL IMPROVEMENTS

We suggest that DoD:

- Improve the estimation of human effects. Begin by replacing noninstrumented plywood mannequins with the instrumented anthropomorphic type.
- Improve the reliability and validity of quantitative V/L estimates. For example, interrater agreement studies could determine the magnitude of the reliability problem and provide insights into reducing it.
- Expand efforts to improve statistical validity, and establish guidelines for the statistical interpretation of small-sample live fire test results.
- Concentrate model improvements on currently weak areas vital to casualty estimation—fire and explosion and human effects.
- Establish guidelines for how models can better support the design and interpretation of live fire tests.
- Establish guidelines for how live fire test results can be used in the revision of models.
- Allow outside analysts into the modeling and model revision process, and provide better documentation of the process for use by those analysts.
- Accumulate comparisons of model predictions with live fire test results over multiple tests in order to assess improvements in models, and make results available to outside analysts; also redo predictions of earlier live fire shots after models have been revised in order to validate improvements.
- Require that detailed test plans include shotlines, munitions, sample sizes, predictions, analysis plans, rationales for decisions, and other critical information to enable proper oversight. Keeping plans unclassified should not be a justification for omitting key information.
- Develop, modify, or procure instrumentation to yield more information from catastrophic shots.
- Improve methods for simulating in-flight conditions, specifically altitude, altitude history, maneuver load, and slosh.
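The interrater agreement studies suggested above are typically scored with a statistic such as Cohen's kappa, which corrects raw agreement for chance. The sketch below shows one plausible way such a study of damage-assessment calls might be scored; the kill-category ratings are invented, not actual assessor data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments: observed
    agreement corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical kill-category calls by two assessors on eight shots
# (K = catastrophic kill, M = mobility kill, N = no kill).
assessor_1 = ["K", "M", "N", "K", "M", "N", "K", "M"]
assessor_2 = ["K", "M", "M", "K", "N", "N", "K", "K"]
print(f"kappa = {cohens_kappa(assessor_1, assessor_2):.2f}")   # prints "kappa = 0.43"
```

A kappa well below 1.0, as in this invented example, would indicate exactly the kind of reliability problem in engineering judgment that the report says such studies should quantify.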
GENERAL IMPROVEMENTS

We suggest that DoD:

- Avoid requiring unrealistic or incompatible objectives in future live fire tests (e.g., combat realism and model validation).
- Consider total program costs in considerations of target costs, including, for example, the concept of a percentage set-aside for live fire testing.
- Determine whether the live fire testing infrastructure is adequate to implement the legislation or has to be expanded. For example, only two facilities in the U.S. currently have high-speed airflow capability.
- Determine to the extent possible the cost of live fire testing of new systems, and the relative costs and benefits of different approaches to live fire testing. Currently, there are claims and counterclaims about the costs of full-up vs. subscale tests, but little data.
- Promote awareness of the benefits to be obtained from destructive testing to top-level military and civilian officials.
- With the legislation as a foundation, continue to strengthen incentives that support realistic live fire testing.
Recommendations to the Secretary of Defense

In addition to the improvements noted elsewhere, there is a need to resolve current conflicts about the purpose of live fire tests and to make clear that the objective of reducing vulnerability and increasing lethality of U.S. systems is the primary emphasis of testing. Accordingly, we recommend that the Secretary of Defense:

- Conduct full-up tests of developing systems, first at the subscale level as subscale systems are developed, and later at the full-scale level mandated in the legislation. This will minimize vulnerability "surprises" at the full-scale level, at which time design changes are more difficult and costly.
- Establish guidelines on the role live fire testing will play in procurement.
- Establish guidelines on the objectives and conduct of live fire testing of new systems, with particular attention to clarifying what is to be expected from the services.
- Ensure that the primary users' priorities drive the objectives of live fire tests. Modelers are secondary users.

Recent live fire legislation requires the services to provide targets for testing new systems, but there is no similar requirement for the fielded systems in JLF, where lack of targets has impeded testing. Accordingly, we recommend that the Secretary of Defense provide more support to JLF for obtaining targets.

References

U.S. General Accounting Office. 1987. Live Fire Testing: Evaluating DOD's Programs. GAO/PEMD-87-17. Washington, D.C.: U.S. Government Printing Office.