Programmatic, Engineering, and Systems Risk
Programmatic or project risks concern the successful completion of engineering projects with respect to expectations and priorities for cost, schedule, capability, quality, and other attributes. A principal influence on programmatic risk is the process by which engineering risks are identified and addressed, particularly engineering risks related to architecture and ecosystem choices, quality attributes, and overall resourcing. In innovative projects, programmatic risk can be reduced through iteration, incremental engineering, and modeling and simulation, as practiced in many engineering disciplines. Programmatic risks that derive from overly aggressive functional or quality requirements, where the underlying engineering risks are not readily mitigated, are often best addressed through moderation on the “value side,” for example by scoping down functional requirements. Indeed, for ambitious and innovative programs, those characterized as “high risk, high reward,” the most effective way to identify and sort engineering risks is often to focus as early as possible on architecture. Once the overall scope of functionality is defined, architecture risks may dominate the detailed development of functional requirements.
A well-known example of the negative consequences of unmitigated programmatic risks is the FBI Virtual Case File (VCF) project.1 The project is documented in IEEE Spectrum: “The VCF was supposed to automate the FBI’s paper-based work environment, allow agents and intelligence analysts to share vital investigative information, and replace the obsolete Automated Case Support (ACS) system. Instead, the FBI claims, the VCF’s contractor, Science Applications International Corp. (SAIC), in San Diego, delivered 700,000 lines of code so bug-ridden and functionally off target that this past April, the bureau had to scrap the US $170 million project, including $105 million worth of unusable code. However, various government and independent reports show that the FBI—lacking IT management and technical expertise—shares the blame for the project’s failure.”2
A 2005 Department of Justice audit identified eight factors that contributed to the VCF’s failure, including “poorly defined and slowly evolving design requirements; overly ambitious schedules; and the lack of a plan to guide hardware purchases, network deployments, and software development for the bureau….” As the same account observes, “Detailed interviews with people directly involved with the VCF paint a picture of an enterprise IT project that fell into the most basic traps of software development, from poor planning to bad communication.” (Today, 5 years later, the program has been scrapped yet again.)
Supply chain risk is an area of engineering risk that is growing in significance and that often develops into programmatic risk. This is evident in the DoD’s increasingly complex and dynamic supply-chain structure, with particular emphasis on concerns related to the assurance, security, and evolution of components and systems infrastructure. This risk can be mitigated through techniques outlined in Chapters 3 and 4: architecture design, improved assurance and direct evaluation techniques, multi-sourcing, provenance assessment, and tracking and auditing of sourcing information. Supply chain risk is particularly challenging for infrastructure software and hardware because of the astonishingly rapid evolution of computing technologies, with commercial replacement cycles typically every 3 to 5 years. In the absence of careful planning, early ecosystem commitments can create programmatic risks in downstream