4
Changes to Infrastructure, Process, and Culture in Support of Evolutionary Acquisition

A number of changes to the infrastructure of the test and acquisition community are needed to take full advantage of the opportunities presented by the evolutionary acquisition process and to confront the formidable challenges it poses.

TECHNICAL EXPERTISE

The DoD testing community will need greater access to, and use of, expertise in statistical methods, particularly in the areas of experimental design, modern analysis methods, and, more generally, eliciting and combining information. Test designs for even ACAT I systems currently make use of relatively standard "cookbook" designs, often modifying a design used for a similar system in the past. This approach to operational test design does not fully exploit the increased information available in an evolutionary context, and it will often be inadequate for effectively testing the highly complicated systems-of-systems that are becoming increasingly common. Formulating effective test designs for systems developed in stages will often involve questions of the level and complexity of publishable research. Much greater access to and use of the highest levels of statistical expertise will therefore be needed. Several ways of developing a more collaborative relationship with the statistical community were outlined in Chapter 10 of Statistics, Testing, and Defense Acquisition: New Approaches and Methodological Improvements (National Research Council, 1998).
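To make "combining information" concrete, the following is a purely illustrative sketch, not a method prescribed in this report: pass/fail data from an earlier development stage are combined with current operational test data in a beta-binomial model, with the historical data downweighted by a "power prior" weight w. All counts and the value of w are hypothetical.

```python
# Illustrative sketch (not a prescribed method): beta-binomial combination
# of earlier-stage and current-stage pass/fail test data, with the earlier
# stage downweighted by a "power prior" weight w. All counts are hypothetical.

def combined_posterior(prior_a, prior_b, hist_succ, hist_fail, w, cur_succ, cur_fail):
    """Beta posterior after adding w-weighted historical data and current data.

    w = 0 ignores the earlier stage entirely; w = 1 pools it fully.
    """
    a = prior_a + w * hist_succ + cur_succ
    b = prior_b + w * hist_fail + cur_fail
    return a, b

if __name__ == "__main__":
    # Stage 1 developmental test: 40 successes in 50 trials (hypothetical).
    # Stage 2 operational test: 18 successes in 20 trials (hypothetical).
    a, b = combined_posterior(1.0, 1.0, 40, 10, 0.5, 18, 2)
    mean = a / (a + b)  # posterior mean of the success probability
    print(f"posterior Beta({a:g}, {b:g}); posterior mean = {mean:.3f}")
```

In practice, choosing w requires engineering judgment about how closely the earlier-stage configuration and test conditions resemble the current ones; questions of exactly this kind are where the statistical expertise discussed above is needed.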



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





The increasing role that software-based components play in complicated defense systems, and the evidence that a substantial percentage of cost increases, schedule delays, and performance problems are due to software problems, require greater access to expertise in software engineering and software testing methods. Although a number of extremely effective relevant procedures have been developed in the past decade, there is little evidence that these techniques have been applied to defense systems (see, for example, National Research Council, 2003). The greater role that modeling and simulation will play in the testing and evaluation of increasingly complex systems-of-systems will also require greater access to expertise in physics-based modeling and simulation and in modeling at the operational level.

DATA ARCHIVING

A test and field data archive is essential to provide ready access to information on the test designs and outcomes of tests from previous stages of development, since methods of combining information clearly require information from those earlier stages. By a test (and field) data archive, we mean the following (as discussed in National Research Council, 2004:59-60):

    a rich set of variables to adequately represent the test environment, the system under test, and the performance of the system…. In order to accurately represent system performance, including the appearance of various failure modes and their associated failure frequencies, the circumstances of the test must be understood well enough that the test, training exercise, or field use can be effectively replicated, including the environment of use (e.g., weather, terrain, foliage, and time of day) and type of use (e.g., mission, intensity, and threat).

While a system is under development, the system design is often under constant modification. Given the need to be able to replicate a test event in the database, it is crucial to represent with fidelity the system that was in operation during the event, so that proper inference is possible.
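As a concrete illustration of the kinds of variables just described, the following sketches a single archive record for one test, training, or field-use event. The field names are hypothetical, chosen here for illustration only, and do not represent an official DoD schema.

```python
# Illustrative sketch of a single test/field data archive record, naming
# the kinds of variables discussed above. Field names are hypothetical
# and are not an official DoD schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestEventRecord:
    event_id: str
    event_type: str             # "operational test", "training exercise", "field use", ...
    system_config_version: str  # exact system configuration, so the event can be replicated
    environment: dict           # e.g., {"weather": ..., "terrain": ..., "time_of_day": ...}
    mission_profile: dict       # e.g., {"mission": ..., "intensity": ..., "threat": ...}
    outcome_measures: dict      # e.g., performance results, times between failures
    failed_component: Optional[str] = None  # hardware/software component, if a failure occurred
```

Recording the exact system configuration alongside each event is what makes the later inference possible: without it, results from a constantly changing design cannot be attributed to the version of the system actually in operation.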

In addition to the length of time between system failures, it is also important to record which hardware or software component malfunctioned; the maintenance (including repair) record of the system; the times of previous failures; the number of cycles of use between failures; the degree of failure; and any other variables that indicate the stresses and strains to which the system was subjected, such as speed and payload. It is also useful to include the environments and stresses to which individual system prototypes have been exposed historically (e.g., in transport, storage, and repeated on/off cycling), in order to support comprehensive failure mode analysis, especially if an apparent declining trend in system reliability appears.

If the above data are available, they can be used to help design efficient testing in the system's evolutionary development process; they can greatly facilitate the effective combining of data from different sources over time; they can be used to interpolate the results of testing in a limited number of operational situations in order to assess capabilities of the system in situations not tested; they can provide a "hot plant" for reliability assessments; and they can assist in developing performance correlates (e.g., reliability) for design of the system's evolutionary upgrades.

For programs of the size and complexity typical of recent ACAT I systems, and especially for those developed using evolutionary acquisition, there will be an enormous amount of data from contractor testing, early and late government testing, training exercises, modeling and simulation, and field use for those systems that have gone into production. Being able to find and use these diverse sources of data for various purposes requires that they be documented and arrayed in a way that facilitates a variety of analyses.
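One standard reliability-growth technique that such archived failure times make possible is the Crow-AMSAA (power-law nonhomogeneous Poisson process) model. The sketch below, offered as an illustration rather than a method this report prescribes, computes the model's closed-form maximum-likelihood estimates for a time-truncated test; the failure times are invented.

```python
# Illustrative sketch: Crow-AMSAA (power-law NHPP) reliability-growth
# estimates from archived cumulative failure times. The failure times
# below are invented for illustration.
import math

def crow_amsaa_mle(failure_times, total_time):
    """MLEs for the power-law NHPP model, time-truncated at total_time.

    Returns (beta, lam). beta < 1 indicates reliability growth (failures
    arriving less often as testing proceeds); beta > 1 indicates
    deteriorating reliability.
    """
    n = len(failure_times)
    beta = n / sum(math.log(total_time / t) for t in failure_times)
    lam = n / total_time ** beta
    return beta, lam

if __name__ == "__main__":
    # Hypothetical archived cumulative failure times (hours) from a
    # 100-hour test period; the lengthening gaps suggest growth.
    times = [5.0, 20.0, 50.0, 90.0]
    beta, lam = crow_amsaa_mle(times, 100.0)
    print(f"beta = {beta:.3f} (beta < 1 suggests improving reliability)")
```

Tracking such an estimate across the stages of an evolutionary program is one way the archive can reveal the "apparent declining trend in system reliability" mentioned above before it becomes a fielding problem.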
Such an archive will also strongly support the operation of the various feedback loops needed to improve system design, to identify the sources of field performance failures, to improve modeling and simulation, and to improve test planning and conduct. Evolutionary acquisition is tailor-made to support these feedback loops, given the continuity of the systems developed and the opportunity to examine their performance once fielded. While some of the Service test agencies maintain data archives, none of them contains all of the information listed here, which is essential for fully understanding the system that was tested or fielded, the governing operating conditions, and the associated performance records. This information is not easy to collect in controlled settings, such as operational tests, and it is considerably more difficult to collect in less controlled types of use, such as training exercises and field use. Nevertheless, much can be accomplished in this direction: in many commercial industries, for example, sensors are attached to fielded systems to collect much of this information automatically.

Recommendation 6: (a) To support the implementation of evolutionary acquisition, DoD should acquire, either through hiring in-house or through consulting or contractual agreements, greater access to expertise in the following areas: (1) combining information from various sources for efficient multistage design, statistical modeling, and analysis; (2) software engineering; and (3) physics-based and operational-level modeling and simulation. (b) Test and field data archives should be established to facilitate access to data on the test and field performance of systems in development and of those fielded. These data archives should be used to support feedback loops to improve system design, to improve testing methodology over time, and to help validate (and improve) modeling and simulation for operational evaluation.

To help implement this recommendation, a data plan would ideally be established early in the program and connected to a modeling plan, so that the needed data could be obtained at every stage. Performance parameters and related statistics could then share a common format that facilitates use of the data.

TERMINOLOGY AND DOCUMENTATION

Evolutionary acquisition is defined in DoD Instruction 5000.2, but the term has not been used consistently by DoD leadership or in various other DoD documents. The variety of terms, including evolutionary acquisition, incremental development, and spiral development, and their inconsistent usage have served to obscure the original goal and intent of the evolutionary acquisition process.

For example, some documents (Aldridge, 2002) describe spiral development as the process used within an individual stage of (what is referred to as) incremental development to determine the appropriate requirements for that stage. However, DoD Instruction 5000.2, which represents official DoD policy, presents spiral and incremental development instead as two parallel forms of evolutionary acquisition: incremental development has requirements fixed in advance, while spiral development has requirements that are defined once specific stages of development have been initiated. Such conflicting use of terminology by DoD managers has created confusion. Another source of confusion is whether the requirements for each stage of development are defined clearly up front or are decided only at the beginning of that stage. There was widespread concern among the industrial participants at the workshop that this flexibility in the identification of requirements can lead to serious problems in evaluating the performance of systems and to the potential for "gaming" the development process.

Besides clear and consistent definitions, a consistent and definitive articulation of the goals and intent of the evolutionary acquisition process is needed, along with supporting documentation detailing the specific changes that will need to be implemented throughout the acquisition community (e.g., how the milestone system is intended to operate in conjunction with evolutionary acquisition). More direction is also needed on how evolutionary acquisition will affect testing, both developmental and operational, and on the process for deciding how to use test results to help determine whether to field a system that has attained a given stage of development. This would assist everyone in the acquisition community, including contractors, program officials, and testers, in adjusting to this new acquisition methodology.

At first glance, it might seem attractive to develop a formal taxonomy of programs with separate guidelines in each cell. However, this runs the risk of making the process too rigid and cumbersome and of negating the flexibilities inherent in evolutionary acquisition. The better alternative is to work with the relevant technical personnel in DoD to develop a set of simple, clear, and coherent guidelines.

Recommendation 7: The under secretary of defense (acquisition, technology and logistics) should eliminate inconsistencies in DoD Directive 5000.1 and DoD Instruction 5000.2 and clarify other significant memoranda and documents regarding evolutionary acquisition. All policies and procedures to be used in applying evolutionary acquisition principles to DoD acquisition programs should be strictly enforced. This clarification and enforcement should be applied to program management both in DoD and in all supporting contractor activities.

PROCESS AND CULTURE IN DoD

Evolutionary acquisition is being folded into an acquisition environment that already has a counterproductive incentive system (see, e.g., National Research Council, 1998:23-29). The flexibilities inherent in the evolutionary acquisition process present greater opportunities for these counterproductive incentives to be expressed. For example, evolutionary acquisition allows requirements to be set at the beginning of each stage of development, rather than stating them for all stages at the outset. While there are obvious advantages to having fluid requirements for a system developed in stages (e.g., addressing unpredictable changes in threats to the system and incorporating new and unanticipated technological advances), there is also great potential for abusing this fluidity. It has been shown several times that, even in the single-stage context, the initial system requirements used to justify an acquisition program often are revised downward during system development (see, for example, Christle et al., 2001). This suggests that the initial requirements (and cost estimates) for a defense system are sometimes used as a sales brochure to support the decision to initiate an acquisition program; then, once a program has become established, modifications are made to adjust the requirements to lower levels of performance.
The interplay, or lack thereof, among user, developer, and tester in setting requirements for initial and intermediate stages of system development (often mentioned as part of evolutionary acquisition) is currently a factor in setting unrealistic initial requirements. The more flexible approach of evolutionary acquisition may encourage this practice, since requirements that are difficult to meet might be pushed back to later stages of development. Clearly, there is a trade-off between the efficiencies inherent in a flexible acquisition process and the pressures for greater oversight and systematic documentation of the overall development process.

To date, few if any program managers for defense systems have requested, or been approved for, evolutionary acquisition classification. It is reasonable to suppose that, in addition to the overall confusion surrounding evolutionary acquisition, some of the associated activities listed in DoD Instruction 5000.2 are considered burdensome. In particular, Congress has imposed strict reporting requirements on any program officially designated in DoD as an evolutionary system, and this may discourage program officials from requesting the designation. While there are trade-offs between flexibility and oversight, the need for clear accountability is even more critical in the flexible evolutionary environment, in which there is also a desire to reduce the oversight burden.

It is worth comparing some key aspects of the product development process in the commercial industrial sector with those in DoD. In industry, the broad spectrum of activities for the development of a major new product is typically overseen by a chief engineer, who is accountable for the entire product line, from concept design, through product development, to manufacturing and product launch. This arrangement is intended to ensure overall accountability, from the original justification of the product and projection of market share, to the quality and cost of design and development, and ultimately to reliability and warranty costs. By contrast, in DoD, mission capabilities are often overstated and costs understated at the initiation of an acquisition program, with added costs recognized or transferred downstream along with delays in product development. Moreover, the typical tenure of a DoD program manager is much shorter, often on the order of three years, so there is generally no full accountability for the decisions made over the life of an acquisition program. The incentive structure in industry encourages "survival of the fittest" behavior, so that poor-quality or high-cost operations (or both) will fail in the long run. The reporting and management structure in industry is also well defined, whereas in DoD the existence of multiple players, reporting structures, and layers of oversight groups substantially complicates decision making.

The current acquisition process, environment, and culture have significant built-in inefficiencies. These problems will become even more critical under evolutionary acquisition; they can seriously hinder the process and even lead to abuse. Similar observations were made in the past about the current acquisition process (see National Research Council, 1998), yet there is little evidence of any systematic effort to address the underlying issues. Improving the incentive structure (i.e., aligning the incentives of the major players) is perhaps the most formidable obstacle facing the DoD acquisition process if it is to better meet the challenges of acquiring and fielding complex, expensive systems in a timely, efficient manner.

Recommendation 8: The deputy secretary of defense should charge a blue-ribbon panel, including experts in organizational behavior, multiobjective decision making, and other relevant areas, to review the Defense Acquisition Performance Assessment panel's proposed changes to DoD acquisition policies and procedures. This review should take place, if possible, before any changes to the policies are implemented, in order to assess and improve their likelihood of being successfully implemented in the DoD and defense industry culture.