Test Process

The Army's interim armored combat vehicle, now called the Stryker, is in the latter stages of development. As is the case in all major acquisitions, the Army must subject the vehicle to a set of tests and evaluations to be sure it understands what it is buying and how well it works in the hands of the users. The Army Test and Evaluation Command (ATEC) has been charged to conduct these tests and evaluations. A key element in this series of tests is the operational testing, commencing with the initial operational test (IOT).

The basic principle in the development of any operational test plan is to test the equipment in an environment as similar as possible to the environment in which the equipment will actually operate. For combat vehicles such as the Stryker, the standard practice is to create combat situations similar to those in which the test vehicle would be expected to perform. The system is then inserted into the combat situations with trained operators and with an opposition force (OPFOR) of the type expected. Preplanned scenarios and training schedules for the players in the test are developed, the nature of the force in which the test vehicle will be embedded is identified, and the test plans are developed by ATEC.

Testing and evaluation of the Stryker is especially challenging, because several issues must be addressed together:

· To what extent does the Stryker in various configurations, equipped with integrated government-furnished equipment (GFE), meet (or fail to meet) its requirements (e.g., for suitability and survivability)?
· How effective is the Interim Brigade Combat Team (IBCT), equipped with the Stryker system, and how does its effectiveness compare with that of a baseline force?

· What factors (in the forces and in the systems) account for successes and failures in performance and for any performance differences between the forces and the systems?

Thus, a primary objective of the IOT is to compare an organization that includes the Stryker with a baseline organization that does not include the Stryker. This makes the evaluation of the data particularly important and challenging, because the effects of the differences in organizations will be confounded with the differences in their supporting equipment systems. The planning for the test of the IBCT/Stryker is particularly difficult because of the complex interactions among these issues, the varying missions in which the IBCT/Stryker will be tested, the number of variants of the vehicle itself, time and budget constraints (which affect the feasible size and length of tests), uncertainty about the characteristics of the planned add-on armor, and the times of year at which the test will be run. Factors that must be considered in the evaluation of the test data are:

1. Modeling and simulation using the results of the live test, accounting for the uncertainty due to small sample size in the live tests.

2. The incorporation of developmental test data, manufacturer test data, and historical data in the evaluation.

3. Extrapolation of IOT field data to higher echelons that, due to resource constraints, will not be tested by a live, representative force in the Stryker IOT.
In particular, one of the three companies in the battalion that will be played in the Stryker IOT is notional (i.e., its communications, disposition, and effects will be simulated); the battalion headquarters has no other companies to worry about; and the brigade headquarters is played as a "white force" (a neutral entity that directs and monitors operations).

4. The relative weight to give to "hard" instrument-gathered data vis-à-vis the observations and judgments of subject-matter experts (SMEs).

OVERALL TESTING AND EVALUATION PLAN

Two organizations, the government and the contractors that build the systems, are involved in the test and evaluation of Army systems. The
Army's Developmental Test Command (within the ATEC organization) conducts, with contractor support, the production verification test. The purpose of this test is to ensure that the system, as manufactured, meets all of the specifications given in the contract. This information can be valuable in the design and evaluation of results of subsequent developmental tests, particularly the testing of reliability, availability, and maintainability (RAM).

Within the Army, responsibility for test and evaluation is given to ATEC. When ATEC is assigned the responsibility for performing test and evaluation for a given system, several documents are developed:

· the test and evaluation master plan,
· the test design plan,
· the detailed test plan,
· the system evaluation plan, and
· the failure definition and scoring criteria.

Testers perform a developmental test on the early production items in order to verify that the specifications have been met or exceeded (e.g., a confirmation, by noncontractor personnel, of the product verification test results on delivered systems). Following the developmental test, ATEC designs and executes one or more operational tests, commencing with the IOT. Modeling and simulation are often used to assist in test design, to verify test results, and to add information that cannot be obtained from the IOT.

EVALUATION OF THE DATA

When the results of the product verification test, the developmental test, the initial operational test, the modeling and simulation, and the history of use have been gathered, ATEC is responsible for compiling all data relevant to the system into a final evaluation report. The Director, Operational Test and Evaluation, approves the ATEC IOT event design plan and conducts an independent evaluation.
As noted above, IBCT/Stryker testing will address two fundamental questions: (1) To what extent does the Stryker system (i.e., integration of the Stryker vehicle and its GFE in various configurations) meet its requirements? (2) How well does the IBCT force, equipped with the Stryker system, perform and meet its requirements, compared with the baseline Light Infantry Brigade (LIB) force? Evaluators will also assess the ways in which the IBCT force employs the Stryker and the extent to which the Stryker GFE provides situation awareness to the IBCT. They will also use the test data to help develop an understanding of why the IBCT and Stryker perform as well (or as poorly) as they do.

The ATEC evaluator is often asked about the most effective way to employ the system. If the test has been designed properly, extrapolation from the test data can often shed light on this question. In tests like those planned for the IBCT/Stryker, the judgment of the SMEs is highly valuable. In the design, the SMEs may be asked to recommend, after early trials of the Stryker, changes to make the force more effective. This process of testing, recommending improvements, and implementing the recommendations can be done iteratively. Clearly, although the baseline trials can provide helpful insights, they are not intended primarily to support this kind of analysis.

With the outcome of each test event recorded and with the aid of modeling, the evaluator will also extrapolate from the outcome of the IOT trials to what the outcome would have been under different circumstances. This extrapolation can involve the expected outcomes at different locations, with different force sizes, or with a full brigade being present, for example.

Scripting

In the test design documents, each activity is scripted, or planned in advance. The IOT consists of two sets of operational trials, currently planned to be separated by approximately three weeks: one using the IBCT/Stryker, the other using the baseline LIB. Each trial is scheduled to have a nine-day duration, incorporating three types of mission events (raid, perimeter defense, and security operations in a stability environment) scripted during the nine days.
The scripting indicates where and when each of these events occurs during the test period. It also establishes starting and stopping criteria for each event. There will be three separate nine-day trials using the IBCT test force and three separate nine-day trials using the LIB baseline force.
Integrated Logistics Support

Logistics is always a consideration during an IOT. It will be especially important in the Stryker IOT, since the length of a trial (days) is longer than for typical weapon system IOT trials (hours). The supporting unit will be assigned in advance, and its actions will be controlled. It will be predetermined whether a unit can continue based on the logistical problems encountered. The handling of repairs and replacements will be scripted.

The role of the contractor in logistics support is always a key issue: contractors often maintain systems during introduction to a force, and both the level of training of Army maintenance personnel and the extent of contractor involvement in maintenance can affect force and system performance during operational testing. The contractor will not be present during actual combat, so it could be argued that the contractor should not be permitted in the areas reserved for the IOT. A counterargument is that the IOT can represent an opportunity for the contractor to learn where and how system failures occur in a combat environment.

Safety

A safety officer, present at all times, attempts to ensure that safety rules are followed and is allowed to stop the trial if it becomes apparent that an unsafe condition exists.

Constraints on Test and Evaluation

The test and evaluation design and execution are greatly influenced by constraints on time, money, availability of trained participants, and availability of test vehicles, as well as by demands by the contractor, the project manager, the director of operational test and evaluation, and Congress. In the IBCT/Stryker IOT, the time constraint is especially critical. The availability of test units, test assets, test players, and test sites has created constraints on test design; implications of these constraints are discussed later in this report.

One key constraint is the selection of the baseline force (LIB) and its equipment. We note that there are alternative baselines (e.g., the Mechanized Infantry Brigade and variations to the baseline equipment configurations) that could have been selected for the Stryker IOT but consider it beyond the scope of the panel's charge to assess the choice of baseline.
One possibility that might be considered by ATEC would be to have the subject-matter experts also tasked to identify test results that might have been affected had the baseline force been different. Although this might involve more speculation than would be typical for SMEs given their training, their responses could provide (with suitable caveats) valuable insights.

One salient example of the effects of resource constraints on the Stryker IOT is the limited number of options available to test the situation awareness features of the Stryker's C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance). The evaluation of the C4ISR and the ensuing situation awareness is difficult. If time permitted, it would be valuable to run one full trial with complete information and communication and a matched trial with the information and its transmission degraded. It is unlikely that this will be feasible in the Stryker IOT. In fact, it will be feasible to do only a few of the possible treatment combinations needed to consider the quality of the intelligence, the countermeasures against it, the quality of transmission, and how much information should be given to whom.

CURRENT STATISTICAL DESIGN

The IBCT/Stryker IOT will be conducted using two live companies operating simultaneously with a simulated company. These companies will carry out three types of missions: raid, perimeter defense, and security operations in a stable environment. The stated objective of the operational test is to compare Stryker-equipped companies with a baseline of light infantry companies for these three types of missions.

The operational test consists of three nine-day scenarios of seven missions per scenario for each of two live companies, generating a total of 42 missions, 21 for each company, carried out in 27 days.
These missions are to be carried out by both the IBCT/Stryker and the baseline force/system. We have been informed that only one force (IBCT or the baseline) can carry out these missions at Fort Knox at one time, so two separate blocks of 27 days have been reserved for testing there. The baseline LIB portion of the test will be conducted first, followed by the IBCT/Stryker portion.¹

ATEC has identified the four design variables to be controlled during the operational test: mission type (raid, perimeter defense, security operations in a stable environment), terrain (urban, rural), time of day (day, night), and opposition force intensity (low: civilians and partisans; medium: civilians, partisans, and paramilitary units; high: civilians, partisans, paramilitary units, and conventional units). The scenario (a scripted description of what the OPFOR will do, the objectives and tasks for test units, etc.) for each test is also controlled for and is essentially a replication, since both the IBCT and the baseline force will execute the same scenarios. The panel has commented in our October 2002 letter report (Appendix A) that the use of the same OPFOR for both the IBCT and the baseline trials (though in different roles), which are conducted in sequence, will introduce a learning effect. We suggested in that letter that interspersing the trials for the two forces could, if feasible, reduce or eliminate that confounding effect.

ATEC has conducted previous analyses that demonstrate that a test sample size of 36 missions for the IBCT/Stryker and 36 for the baseline would provide acceptable statistical power for overall comparisons between them and for some more focused comparisons, for example, in urban environments. We comment on these power calculations in Chapter 4.

The current design has the structure shown in Table 2-1. The variable "time of day," which refers to whether the mission is mainly carried out during daylight or nighttime, is not explicitly mentioned in the design matrix.

¹The following material is taken from a slide presentation to the panel, April 15, 2002 (U.S. Department of Defense, 2002a). A number of details, for example about the treatment of failed equipment and simulated casualties, are omitted in this very brief design summary.
Although we assume that efforts will be made in real time, opportunistically, to begin missions so that a roughly constant percentage of test events by mission type, terrain, and intensity, for both the IBCT/Stryker and the baseline companies, is carried out during daylight and during nighttime, time of day should be formalized as a test factor. It is also our understanding that, except for the allocation of the six extra missions, ATEC considers it infeasible at this point to modify the general layout of the design matrix shown in the table.
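The arithmetic behind the design can be sketched numerically. The following Python snippet is a sketch only: the factor levels and mission counts are taken from the text above, while the effect size in the power function is an assumed illustrative value, not one drawn from the ATEC analyses. It enumerates the full factorial over the four controlled design variables, checks the mission counts implied by the schedule, and shows the kind of normal-approximation power computation discussed further in Chapter 4.

```python
from itertools import product
from math import sqrt
from statistics import NormalDist

# Full factorial over the four controlled design variables.
mission_types = ["raid", "perimeter defense", "security operations"]
terrains = ["urban", "rural"]
times_of_day = ["day", "night"]
opfor_intensities = ["low", "medium", "high"]

cells = list(product(mission_types, terrains, times_of_day, opfor_intensities))
print(len(cells))  # 3 * 2 * 2 * 3 = 36 distinct treatment combinations

# Mission count implied by the schedule: three nine-day scenarios,
# seven missions per scenario, for each of two live companies.
total_missions = 3 * 7 * 2
print(total_missions)  # 42 missions, 21 per company

# Illustrative power calculation (normal approximation, two-sided
# alpha = 0.05) for comparing mean mission outcomes between the two
# forces with n missions per force. The standardized effect size d
# used below is an assumption for illustration, not an ATEC figure.
def approx_power(d, n_per_group, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - d * sqrt(n_per_group / 2))

print(approx_power(0.6, 36))  # power for n = 36 under the assumed d
```

Note that the full factorial has exactly 36 cells, matching the 36-mission sample size used in the power analyses, and that the 42 scheduled missions per force exceed this by the six extra missions whose allocation is mentioned above.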
[Table 2-1, showing the layout of the current test design, is not legible in the machine-read text.]