In this and the next four chapters, the panel assesses the industry practices described at the workshop and discusses their applicability within defense acquisitions. As noted in Chapter 1, a number of the suggestions made at the workshop are already represented in the documents specifying the acquisition policies and procedures of the U.S. Department of Defense (DOD), are practices that DOD has been trying to implement, or are practices that previous National Research Council (NRC) panels or other advisory bodies have recommended. For these situations, we have chosen not to make new policy or procedural recommendations. In cases in which it appears those practices are not being followed widely, we have reiterated their benefits and the need to adopt and institutionalize them widely. In other cases, we offer additional arguments for following the previously recommended procedures. Our recommendations are restricted to situations in which the panel believes that the practices are new or have new elements, or in which DOD practices are moving in the opposite direction.
In this chapter, we consider requirements setting in light of the practices discussed at the workshop. The panel recognizes that requirements are often initially set at overly optimistic levels so that a program will attract funding. This issue is beyond the scope of our study and is not explicitly addressed here.
Conclusion 1: It is critical that there be early and clear communication and collaboration with users about requirements. In particular, it is extremely beneficial for users, developers, and testers to collaborate on initial estimates of feasibility and for users then to categorize their requirements into a list of “must-haves” and a prioritized “wish list” that can be used to make tradeoffs at later stages of system development if necessary.
This conclusion reflects the need for continuous exchange and involvement of users in the development of requirements. User input can assist in assessing cost and mission effectiveness of a design and can aid in the development of the “analysis of alternatives.”1 Although continuous involvement in the development of requirements by users does occur in DOD—for example, the Army designates a capabilities manager for the U.S. Army Training and Doctrine Command to represent the user on a program—it does not appear to be emphasized as much or conducted as extensively as in industry.
The industry practice of asking customers to separate their needs into a list of “must-haves” and a “wish list” is especially appealing. It imposes discipline on customers: they are forced to carefully examine a system’s needs and capabilities and any discrepancies between them and, thus, make decisions early in the development process. Communication and collaboration also ensure that all parties, including the user, the program manager, the developer, and the tester, agree on the required performance levels of a system. Although elements of this concept have been implemented in DOD through the use of threshold and objective levels for requirements and by banding requirements and key system attributes, with appropriately higher authority approval required for any change, we emphasize that more can be done for more effective requirements setting.
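The discipline of separating requirements into must-haves and a prioritized wish list, and then trading off wish-list items against feasibility constraints, can be made concrete with a minimal sketch. The requirement names, costs, and budget below are invented purely for illustration and do not describe any actual program:

```python
# Hypothetical sketch: fund all "must-have" requirements first, then
# wish-list items in priority order, within a development budget.
# All names and figures are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    must_have: bool
    priority: int       # lower number = higher priority (wish list only)
    est_cost: float     # rough engineering cost estimate, in $M

def plan(requirements, budget):
    """Commit to must-haves, then add wish-list items while budget allows."""
    must = [r for r in requirements if r.must_have]
    wish = sorted((r for r in requirements if not r.must_have),
                  key=lambda r: r.priority)
    committed = sum(r.est_cost for r in must)
    if committed > budget:
        raise ValueError("Must-haves alone exceed the budget; "
                         "feasibility must be revisited with the user.")
    funded = list(must)
    for r in wish:
        if committed + r.est_cost <= budget:
            funded.append(r)
            committed += r.est_cost
    return funded

reqs = [
    Requirement("Mach 1.6 dash speed", True, 0, 40.0),
    Requirement("500 nmi combat radius", True, 0, 35.0),
    Requirement("Advanced datalink", False, 1, 15.0),
    Requirement("Extended-range fuel tanks", False, 2, 20.0),
]
for r in plan(reqs, budget=95.0):
    print(r.name)
```

The point of the sketch is the forcing function: once costs are attached, a wish-list item that does not fit the budget is dropped explicitly and early, rather than silently eroding effectiveness later in development.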
1An analysis of alternatives (AoA) is part of several steps in the Joint Capabilities Integration and Development System (JCIDS), which assesses cost and mission effectiveness, given levels of performance and suitability. In JCIDS (a formal DOD procedure that defines requirements and evaluation criteria for defense systems in development) a required capability (e.g., defeat an Integrated Air Defense System) is evaluated through a capability-based analysis (CBA) and then by an AoA to develop system attributes as a function of required levels of performance and suitability. However, only system attributes are provided as “requirements” to the development and test community. Currently, there is no quantitative way to assess the impact of not meeting a system requirement on accomplishing the mission. If, on the other hand, the JCIDS/CBA/AoA process provided a quantitative linkage between mission accomplishment and system attributes, the acquisition community would have an effective method for making decisions on threshold levels set by the requirements process and for understanding the cost effectiveness of changing those requirements.
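The quantitative linkage that the footnote calls for can be illustrated with a minimal sketch. The logistic model and every attribute, coefficient, and value below are invented for illustration only; they are not part of JCIDS or any actual analysis:

```python
# Hypothetical illustration of a quantitative linkage between system
# attributes and mission accomplishment. The logistic form and all
# parameters are invented for illustration.

import math

def mission_success_prob(detection_range_nmi, sortie_rate_per_day):
    """Toy model: probability of mission success as a function of two
    system attributes, via a logistic link."""
    score = 0.04 * detection_range_nmi + 0.8 * sortie_rate_per_day - 6.0
    return 1.0 / (1.0 + math.exp(-score))

# Effect of relaxing the detection-range requirement from 100 to 80 nmi:
baseline = mission_success_prob(100, 3.0)
relaxed = mission_success_prob(80, 3.0)
print(f"baseline {baseline:.3f}, relaxed {relaxed:.3f}")
```

With even a crude model of this kind, a proposed change to a threshold can be expressed as an estimated change in mission accomplishment, giving decision makers a common basis for judging whether the cost savings justify the relaxation.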
The steps proposed above must be complemented by a rigorous assessment of feasibility and costs. Such an assessment will ensure that the user and the developer understand and agree that, although some additional capabilities or features may be useful add-ons, they should be sacrificed to ensure that the system attains its necessary levels of effectiveness and suitability and that it does so at an acceptable cost and in a timely manner. The panel appreciates the challenges involved in establishing shared estimates of feasibility at the outset and in making tradeoffs during requirements setting and development for major DOD acquisitions. Nevertheless, we strongly encourage the systematic approach and rigorous exchange of ideas that are part of this process.
As the workshop speakers emphasized, it is important to use input from the test and evaluation community in the setting of initial requirements. Testers can identify requirements that are difficult or impossible to test, ambiguous, or mutually inconsistent. Therefore, input from testers is a critical part of system design. In staged development, input from users and from the field can also be very informative in understanding what an early system can and cannot do.2
Conclusion 2: Changes to requirements that necessitate a substantial revision of a system’s architecture should be avoided as they can result in considerable cost increases, delays in development, and even the introduction of other defects.
Once a system’s architecture is set, changing requirements can be extremely expensive, is likely to add considerably to development time,
2Bell (2008) strongly advocates the use of a team approach to the setting of requirements. He states that having the testers and the program management offices work as a team from the beginning of acquisition has at least six benefits: (1) more realistic requirements, (2) verifiable requirements, (3) verifiable specifications, (4) requirements and specifications that are understood, (5) an appropriate testing-related schedule, budget, and infrastructure, and (6) contractors prevented from under- or overbidding the test and evaluation part of their proposal. With a team approach in place, the system integration laboratory becomes a useful preparatory time and place. Testers are encouraged to double-check that proper reliability growth is planned and executed, to interact with independent operational testers, and to plan and execute developmental test and evaluation thoroughly enough to virtually ensure success in initial operational test and evaluation. With this approach, program management offices, with only a small initial investment, can potentially save large sums of money.
and can introduce additional failure modes and design flaws.3,4 Having stable requirements during development allows the system architecture to be optimized for a specific set of specifications, rather than being modified in a suboptimal manner to try to accommodate various updates to the requirements. At the same time, however, there must also be some flexibility that allows for modifications that are responsive to users’ needs and changing environments.
A previous NRC report (2008:50) discussed the tension between these two goals:
One must clearly establish a complete and stable set of system-level requirements and products at Milestone A. While requirements creep is a real problem that must be addressed, some degree of requirements flexibility is also necessary as lessons involving feasibility and practicality are learned and insights are gained as technology is matured and the development subsequently proceeds. Certainly control is necessary, but not an absolute freeze. Also, planning ahead for most likely change possibilities through architectural choices should be encouraged, but deliberately managed, a concept encouraged herein.
The panel endorses this statement and notes that it is consistent with the views expressed by the participants at our workshop.
As noted above, greater fluidity in requirements may be quite reasonable (and even desirable) for software systems: reworking may be more feasible later in development for software systems than for hardware systems. And even with hardware systems, changes to requirements may be relatively easy for systems that are acquired in an evolutionary manner. The key is that the process for changing requirements should be well managed, with adequate oversight, clear accountability, and enforcement of the rules. In
3Thompson (1992:738-739) notes that the F-16a fighter is a good example of the effects on system reliability when one is allowed to keep changing requirements: “Instead of the simple, austere, pure fighter it was originally planned to be, the air force made it into a dual purpose aircraft, used to attack ground targets as well as a dog-fighter. This increased its price 75 percent and increased its weight from ten tons to over twelve, with a proportional reduction in acceleration. It also increased the plane’s complexity, owing to the installation of additional avionics, radar, and electronic countermeasures, with proportional reductions in reliability and maintainability.”
4Tangentially, we note that it often makes little difference whether the fielded system meets or slightly misses its requirements. For example, compare the situations in which a jet fighter in development either flies at better than Mach 2 or flies at only Mach 1.8. That difference is unlikely to be important to the successful completion of missions. Instead, what is important is that, once the system is fielded, the user has a comprehensive understanding of precisely what the system can and cannot do. That is why it is very important to test to failure in development whenever possible rather than test exclusively to requirements.
particular, input from engineers on the feasibility of any proposed changes to requirements needs to play a key role in decisions about whether to permit them.
The panel recognizes that existing DOD regulations mandate that changes in requirements go through a rigorous engineering assessment before they are approved. However, it appears that these regulations are not being followed: there are many instances in which requirements continue to change throughout development, including reductions that result from concerns about feasibility.
Conclusion 3: Model-based design tools are very useful in providing a systematic and rigorous approach to requirements setting. There are also benefits from applying them during the test generation stage. These tools are increasingly gaining attention in industry, including among defense contractors. Providing a common representation of the system under development will also enhance interactions with defense contractors.
Modeling and simulation tools are used widely in DOD, but use of the term “model-based design” here is narrower. The focus is on the use of tools to formally translate and quantify requirements from high-level system and subsystem specifications, assess the feasibility of proposed requirements, and help examine the implications of trading off various performance capabilities (including various aspects of effectiveness and suitability, such as durability and maintainability). A recent presentation by the National Defense Industrial Association (NDIA) engineering division’s modeling and simulation committee (2011) refers to this as model-based engineering (MBE) and defines it as “an approach to engineering that uses models as an integral part of the technical baseline that includes the requirements, analysis, design, implementation, and verification of a capability, system, and/or product throughout the acquisition life cycle” (p. 7). The NDIA report notes that MBE can also include the use of physics-based models, but these are not part of the discussion here.
These tools start at a high level, when the key performance parameters or the high-level requirements are first specified. System-level requirements then flow down to subsystem- and component-level requirements, following the classic V-diagram of systems engineering. The process allocates the high-level requirements to a more detailed functional design and functional architecture for various component systems. As this happens, the model becomes more refined and acquires higher fidelity.
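The flow-down of a system-level requirement to subsystem allocations can be sketched in a few lines. The weight budget and subsystem figures below are invented purely for illustration:

```python
# Hypothetical sketch of requirements flow-down: a system-level weight
# budget is allocated to subsystems, and a roll-up check confirms that
# the allocations remain consistent with the system-level requirement.
# All numbers are invented for illustration.

system_weight_budget_lb = 30_000          # system-level requirement

subsystem_allocations_lb = {              # flow-down to subsystems
    "airframe":   14_000,
    "propulsion":  8_000,
    "avionics":    3_500,
    "payload":     4_000,
}

allocated = sum(subsystem_allocations_lb.values())
margin = system_weight_budget_lb - allocated
assert margin >= 0, "Subsystem allocations exceed the system-level budget"
print(f"allocated {allocated} lb, margin {margin} lb")
```

In an actual model-based tool the allocations would be far richer than a single scalar budget, but the principle is the same: each refinement of the model is checked for consistency against the level above it.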
As described at the workshop, this approach has many benefits:
• It provides a formal specification of the actual intent of the functionality so that it is very clear and precise.
• It is reusable if it is well documented.
• It is executable, so any ambiguities can be identified.
• It can be used to automatically generate test suites.
• Perhaps most importantly, the model captures knowledge that can be preserved and institutionalized.
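These benefits can be made concrete with a minimal sketch of a requirement stated as executable code, from which a test suite is generated automatically. The climb-rate requirement and its altitude breakpoints are hypothetical:

```python
# Minimal sketch: a requirement expressed as executable code rather than
# prose, so it is unambiguous, and test cases can be generated from it
# automatically. The requirement values are hypothetical.

def required_min_climb_rate(altitude_ft):
    """Formal requirement: minimum climb rate (ft/min) the system must
    achieve at a given altitude. Executable, hence unambiguous."""
    if altitude_ft < 10_000:
        return 6000
    elif altitude_ft < 30_000:
        return 3000
    else:
        return 800

def generate_test_suite(step_ft=5000, max_alt_ft=40_000):
    """Auto-generate (altitude, expected-minimum) test points from the
    model, including points just below each specification boundary."""
    points = set(range(0, max_alt_ft + 1, step_ft))
    for boundary in (10_000, 30_000):      # stress the breakpoints
        points.update((boundary - 1, boundary))
    return sorted((alt, required_min_climb_rate(alt)) for alt in points)

suite = generate_test_suite()
```

Because the expected values come from the model itself, the test suite stays synchronized with the requirement whenever the model changes, which is one way the captured knowledge is preserved and institutionalized.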
This is a very good way to have a formal understanding of the specification (need) and performance (deliverable) of the intended system. This approach is now used in some programs; its use needs to be expanded, and it needs to include supplier performance models.
The model-based approach also provides a platform for common and consistent use of terminology and codification of requirements. This consistency supports the greater acceptance of performance characteristics by the contractor, program manager, users, and testers. It allows for validation or refinement of requirements by domain experts. Such models can also be used in simulation environments to assess technology readiness. They can also allow for the linkage of system-level performance requirements to the performance of sub-systems and components.
Furthermore, an overall modeling and simulation-based vision is crucial for identifying where initial efforts should be concentrated to achieve the required performance levels. Then, as development proceeds, modeling and simulation can be used to ensure that subsequent efforts remain focused on what is needed to achieve performance goals. Such a comprehensive approach to the modeling of system performance can and should be used as a repository of information on system performance, initially fed by engineering knowledge gained from previous systems and then informed and updated by test data. The modeling tools also facilitate system testing, integration, and automated code generation for specific tasks. In addition, they provide a convenient framework to archive relevant information on all past tests. There are also modeling tools specifically designed to check integration issues. The architecture at the higher levels is the integration platform. Without such tools the integration of the full system is likely to be problematic.
We note, however, that the extent to which legacy models for related systems are used for this purpose will depend on the system in question and how closely related the new system is to previous systems. Even relatively modest changes in a system may make legacy models and simulations poor representations of stresses, strains, and other operating conditions; as a result, any legacy modeling and simulation needs to be rigorously validated for use on a new system.
Industries are increasingly using such model-based design tools to assess the feasibility of requirements. In some cases, the entire architecture of complicated systems is driven by modeling tools such as those employed by General Motors: see Box 3-1.5 Some DOD programs are obviously far more complex than automotive programs and so can benefit greatly from these tools. The NDIA report suggests that defense contractors are already using these tools, although the level of usage may vary considerably.

BOX 3-1
General Motors Develops Two-Mode Hybrid Powertrain with Model-Based Design: Reduces Development Time by 2 Years with Math and Simulation-Based Tools from MathWorks®

General Motors Company (GM) has developed its Two-Mode Hybrid powertrain control system using Model-Based Design. By using math and simulation-based tools from The MathWorks®, GM designed the powertrain prototype within 9 months, shaving 24 months off the expected development time.… By adopting Model-Based Design, where the development process centers around a system model, GM engineers increase time savings. Also, by verifying the control system before hardware prototyping and by using production code generated from the controller models, GM has rolled out production vehicles featuring the hybrid powertrain within four years of starting the control system design process. The ability to reuse design information has helped the global development teams foster more efficient communication and reduced response time, eliminating integration issues.… GM used MATLAB®, Simulink®, and Stateflow® to design the control system architecture and model all the control and diagnostic functions. Real-Time Workshop Embedded Coder provided the capability to generate production code from the models, and Real-Time Workshop and hardware-in-the-loop (HIL) simulators helped verify the control system.

SOURCE: MathWorks® (2009). Available: http://www.mathworks.com/company/pressroom/General-Motors-Develops-Two-Mode-Hybrid-Powertrain-With-Model-Based-Design.html [November 2011]. Reprinted with permission.
DOD should have expertise in these tools and technologies so that the agency can use them in its interactions with contractors and users. It is crucial that DOD at least actively participate, if not lead, in the development of the relevant models. Operational effectiveness models are critical for requirements setting; systems performance models are
critical for assessing feasibility. The latter can serve as a critical tool for collaboration between contractors and DOD. For example, they allow for the traceability of requirements since everyone is working from the same set of assumptions, leading to a disciplined approach in the development process. The model is also a feedback mechanism, providing answers to “what if” questions about the functioning of the system. For all these reasons, DOD should not rely completely on contractors to develop and use this capability. Given their importance, performance models should be part of contract deliverables, just as computer-assisted design models are now, and their review should be a key part of any milestone decision.
Recommendation 1: The Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Office of the Director of Operational Test and Evaluation; and their service equivalents should acquire expertise and appropriate tools related to model-based approaches for the requirements-setting process and for test case and scenario generation for validation.
This expertise will be very beneficial in collaborating with defense contractors and in providing a systematic and rigorous framework for overseeing the entire requirements generation process. The expertise can be acquired in-house or through consulting and contractual agreements.