decision points in DOD IT systems are quite different in the iterative, incremental development (IID) processes discussed in Chapter 3. As a result, an equivalent understanding of what is required and when it is required has not been reached for IT systems acquisition. The result is frustration for developers and other participants in the acquisition process, as well as uncertainty and delay in the process itself. Much can be gleaned from the experience of commercial IT systems developers and suppliers; such insights are only beginning to be incorporated into DOD practice. This chapter briefly reviews key elements and shortcomings of current practice and outlines opportunities for improvement, with a focus on making the perspective of the end user more salient.
The current DOD process for the acquisition of IT systems has its roots in the procurement of hardware-oriented systems that will be manufactured in quantity. As such, the DOD's typical practice is to determine whether a design is adequate for its purpose before committing to a production decision. For programs that are dominated by manufacturing cost, this approach reduces the possibility that a costly reworking of a system might become necessary should a defect be identified only after fielding of units in the operational environment has begun.
DODI 5000 directs programs to conduct testing and evaluation against a predetermined set of goals or requirements. As shown in Table 4.1, the acquisition process, including the T&E process, is governed by a large set of rules, test agents, and conditions, each intended to satisfy a different customer. Traditional test and acceptance encompasses three basic phases: developmental test and evaluation (DT&E; see Box 4.1), the obtaining of the necessary certification and accreditation (C&A), and operational test and evaluation (OT&E; see Box 4.2).
In essence, the current approach encourages delayed testing for the assessment of the acceptability of an IT system and of whether it satisfies user expectations. A final operational test is conducted to validate the suitability and effectiveness of the system envisioned as the ultimate deliverable, according to a specified and approved requirements document. This approach would work if the stakeholders (program manager [PM], user, and tester) shared a common understanding of the system's requirements. However, because of the length of time that it takes an IT system to reach a mature and stable state, what the user originally sought is often not what is currently needed. Thus, unless a responsive process has been put in place to update the requirement and associated