4 Implementation of Recommended DOD Software Policy

The committee's recommendations for DOD's software policy address two broad objectives. The first part of this chapter describes appropriate principles for selection of a programming language; Appendix A contains the committee's proposed modifications to a revised version of DOD Directive 3405.1 (DOD, 1987a), which was being redrafted during the course of this study. The second component of the committee's recommendations concerns the Software Engineering Plan Review, a proposed method for implementing DOD's software policy that is described in the second part of this chapter.

RECOMMENDED POLICY FOR CHOICE OF PROGRAMMING LANGUAGE[1]

The committee recommends that DOD approach programming language policy at three levels of precedence. The overall goal is to achieve the best combination of costs and benefits (each interpreted quite broadly, as explained below); a number of principles for acquisition of software follow from and are subordinate to this overriding goal. The second level of precedence interprets those principles as they apply to the choice of a programming language (at any level of programming). The third level specifies circumstances under which Ada is required for software development using a third-generation programming language (3GL). This hierarchy expresses goals for software acquisition that are broader than the choice of programming language alone, clarifying the importance of many other decisions (such as decisions about whether to make, buy, or build components; design of the development process; and necessary skills) required to achieve DOD's goals. The focus is on operational software. The policy does not apply to software developed, acquired, or used by DOD research and development activities funded by 6.1, 6.2, and 6.3a appropriations.
However, research and development software efforts likely to lead to new DOD operational capabilities should include plans for the transition of such software to meet operational software policy requirements (these plans are described under "Approval Authority and Milestones" in the next section).
Goals of Software Development

High-quality, low-cost, and timely delivery are the primary goals for software development. Here, "quality" and "cost" are interpreted broadly. Quality includes, but is not necessarily limited to, functionality; fitness for a purpose; assurance (including reliability, survivability, availability, safety, and information security); efficiency; ease of use; interoperability; future adaptability (including extensibility, maintainability, portability, scalability, and compliance with standards); and development of DOD's software expertise. Cost includes, but is not limited to, full life-cycle monetary costs (i.e., both short- and long-term costs) and the extent of use of other scarce resources such as expert personnel. Cost also includes assessment of program risk and of the monetary and non-monetary consequences of system failure. Timely delivery, or schedule, is listed as a third goal because it is difficult to classify as either a quality or a cost factor.

These overriding goals are reflected in the following statements, which the committee believes should serve as guidance for DOD software development.

- Projects will specify and prioritize quality, cost, and schedule goals, and will analyze trade-offs and the business case for particular decisions. Failure to articulate and prioritize project requirements appropriately, and to analyze them in the context of their impacts on cost and schedule, commonly leads to project failure or inappropriate acquisition decisions. It is not reasonable for DOD to specify a single prioritization of goals, because the importance and relevance of different factors vary widely. However, projects should conduct an analysis and defend it in the review process. Requirements should not be overstated, an approach that often has the effect of ruling out simpler, more cost-effective solutions.
- Projects will not develop new software unless quality, cost, and schedule goals cannot be met with non-developmental items (NDIs). Developing and maintaining new software within projects tend to be more expensive than reusing suitable existing software. Commercial items are preferred over other non-developmental items if they meet quality, cost, and schedule constraints. True commercial items will spread the costs of maintenance and improvement over a larger base, leading to cost savings. Issues such as possible "lock-in" to a single source should be considered as constraints to achieving desirable qualities such as adaptability and portability.

- Software development will emphasize good software engineering practice, including the application of management techniques, methodologies, support tools, metrics, and appropriate programming languages. Good practices provide better quality at lower cost, regardless of which programming language is used. Good practices also tend to improve timeliness and reduce risk. Software developers should be chosen based on their experience, a criterion that includes, but is not limited to, successful past performance; experience in the software domain or product-line; use of appropriate management techniques, methodologies, support tools, and metrics; and mature software engineering capability and expertise.

- Projects will, when possible, exploit and/or contribute to open system architectures and common product-lines, frameworks, and libraries. Investment in commonality, where feasible, increases portability and opportunities for reuse, and reduces cost.

- Projects will avoid developing project-specific tools and technologies unless the cost, schedule, and/or quality advantage can be defended. Such development is expensive and is seldom justified.
Guidelines for Choice of Programming Language

1. Projects will use the highest-level language that meets quality, cost, and schedule constraints for each software component. Other things being equal, higher-level languages increase productivity and reduce cost. Specifically, 3GLs (high-order languages) are generally preferable to machine or assembly language; further, fourth-generation programming languages (4GLs), program generators, graphical user interface builders, and database query languages, such as Structured Query Language (SQL), are generally preferable to 3GLs. Modification of the lower-level language output from a higher-level language processor should be considered programming at the lower level; that is, components written in a language should be maintained in that language, and the output of a language processor should be changed only in exceptional cases.

2. Standardized and non-proprietary languages are preferred. Using standardized languages increases the portability of code and programmers, and diminishes the possibility of "lock-in" to a single source. This principle applies at all language levels; thus standard SQL is preferable to a proprietary database query language. In some cases, unusual or "niche" languages are the best choice; however, these choices need to be defended.

3. Projects should not develop new languages, or language processors for them, except for domain-specific languages that provide directives for application generators. Such development is costly, in both the short and the long term, and should require unusual justification.

4. All relevant quality, cost, and schedule factors should be considered in the choice of programming language for each component.
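The first principle is often visible in practice as a small "glue" script at the integration level. The sketch below chains standard Unix filters standing in for subsystem components; the record format (source,count) and the aggregation step are purely hypothetical illustrations, not part of the committee's policy:

```shell
# Minimal sketch of "glue" at the highest suitable language level: three
# off-the-shelf filters are chained by a shell pipeline instead of being
# re-implemented in a 3GL. Data and field layout are hypothetical.
printf 'radar,3\nsonar,1\nradar,2\n' |
  sort -t, -k1,1 |                      # component 1: group records by source
  awk -F, '{sum[$1] += $2}
           END {for (s in sum) print s, sum[s]}' |  # component 2: aggregate counts
  sort                                  # component 3: deterministic output order
```

The same logic written in a 3GL would require explicit I/O, parsing, and data structures; as long as a script like this stays small and maintainable, the higher-level language wins on cost, consistent with principle 1.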
Applying these four principles, it is reasonable, for example, to use small "shell" scripts to "glue" together system components, rather than writing them in Ada or some other high-order language; however, large and complex shell scripts may violate the principles by being difficult to maintain. Likewise, large packages of spreadsheet macros, or other code written in (more or less) proprietary 4GLs, need to be considered carefully. The key is to ensure that decisions are made carefully, weighing all relevant economic and engineering cost, quality, and schedule factors. These requirements lead to the following recommended policy for the use of Ada.

Recommended Policy for Requiring the Use of the Ada Programming Language

The committee believes that Ada should be presumed to be the best choice, and thus should be used for software development, for subsystems of DOD's operational software systems that meet all of the following criteria:

1. The subsystem is in a warfighting software application area as defined in Chapter 3. While Ada may still be a good choice for other systems, DOD policy should require that Ada be used only in areas where it has clear advantages and is most likely to maximize DOD's competitive position relative to that of its adversaries.

2. DOD will direct the maintenance of the software. If a vendor is serving a broader customer community, then maintenance costs are spread over a larger base and are thus of less concern. If DOD directs the maintenance, whether the maintenance is performed by DOD personnel or by a vendor, then DOD must cover the life-cycle cost, and Ada is assumed to be more cost-effective over an entire life-cycle.

3. The software subsystem is large (more than 10,000 lines of code), or the subsystem is critical. Small and non-critical subsystems, as a rule, incur lower development and maintenance costs, and thus are not worth the cost of oversight. Such subsystems tend to be simpler, and the choice of programming language is less critical. However, the choice is more important for critical components.

4. There is no better COTS, NDI, or 4GL software solution. If existing software or higher-level language solutions are suitable, new development solely to promote Ada should not be required.

5. There is no life-cycle cost-effectiveness justification for using another programming language.

6. New software is being developed or an existing subsystem is being re-engineered; a re-engineering is a modification substantial enough that rewriting the subsystem would be cost-effective. For systems meeting criteria 1 through 5, Ada is generally superior to other high-order languages, and conversion over time should be encouraged.

For systems that meet all of the above criteria, Ada (preferably Ada 95) must be used for the preponderance (95 percent) of new or modified software subsystems or components; up to 5 percent may be written in other languages to facilitate component integration and other functions. Projects that meet all of the criteria except number 1 above must analyze Ada as an alternative. As explained in Chapter 2, Ada is generally preferred for custom software because, compared with other 3GLs, it encourages better software development practices, has better error checking and recovery capacity, has better support in certain domains, is standardized and has a validation facility, contributes to commonality, and leads to high quality at lower life-cycle cost.

SOFTWARE ENGINEERING PLAN REVIEW PROCESS

The committee recommends that DOD broaden its current policy on programming language to include a range of software engineering factors that have a greater overall influence on software capability than does choice of a particular language alone.
This section addresses how policy guidance regarding these factors, as described in Chapter 2, can be translated into operational decisions in systems development. The principal mechanism is the Software Engineering Plan Review (SEPR).

The committee explored a number of approaches for integrating selection of a programming language with related review and approval processes for software engineering decisions. One approach was to integrate the programming language selection process with a Capability Maturity Model assessment (Paulk et al., 1993), but this type of assessment focuses more on organizational process maturity than on specific technical decisions made by a particular project. Another approach was to add review of programming language and software engineering decisions to the Defense Acquisition Board (DAB) and Major Automated Information Systems Review Council (MAISRC) Milestones I and II review processes (as defined by DOD Directive 5000.2-R (DOD, 1996c)). Key software decisions are generally covered well in MAISRC reviews but often fall below the threshold of visibility in DAB reviews, which cover most DOD-dominated software application areas.

The committee determined that DOD's best alternative to these two approaches was to require passage of a focused SEPR as a part of a major system's DAB or MAISRC Milestone I and II reviews. SEPRs used in commercial practice have proven to be highly effective for reviewing software and system requirements, plans, architectural decisions, and programming language decisions at life-cycle points similar to DAB and MAISRC Milestones I and II. For example, the SEPR concept has been used successfully in large technology-dependent commercial and government organizations, including AT&T and Lucent Technologies (architecture review board (AT&T, 1993)), Citibank (building permit system), NASA (architecture reviews), and others.[2] The SEPR process is intended to provide a forum for the following activities:
- Involvement of stakeholders in key software engineering decisions,
- Contributions of peers and experts to key software engineering decisions,
- Stimulating commonality of process and architectural elements where appropriate, and
- Establishing accountability of a senior acquisition official for major software engineering decisions throughout the life-cycles of related systems.

The SEPR process is intended to help program managers (and possibly contractors) achieve a best-practices level of decision making for the software engineering associated with major systems, as well as to assure consideration of organizational and life-cycle factors. Implementation details are established not by senior officials, but rather by product-line stakeholders and expert peers, who have an incentive to minimize unnecessary bureaucracy and documentation. The principal policy elements for systems subject to DAB and MAISRC reviews are the following:

- Authority for approving software engineering plans resides in the office of the Assistant Secretary of Defense (C3I) and in the Service Acquisition Executives (SAEs) and their Software Executive Officials (SEOs).
- SEPRs are conducted by Software Engineering Plan Review Boards (SEPRBs) at key points in the engineering process and are staffed by peers and representatives of key stakeholders. These reviews are typically managed at the Program Executive Officer (PEO) level.
- Software engineering plans, focusing on major software engineering process, technology, and architecture decisions, are submitted by program managers in preparation for the SEPR process.
- The Office of the Assistant Secretary of Defense (C3I) periodically reviews the effectiveness of the DOD services' and DOD components' implementation of the SEPR process.
The SEPR process has three elements: (1) a policy framework established for major software engineering decisions and for SEPRs; (2) involvement in the review by peers as well as the principal stakeholders in system design; and (3) software engineering common practices, SEPR evaluation criteria, and SEPR process policies developed at the service and command levels (these would be specific to each service, and possibly to PEOs, who could, for example, require conformance to particular architectural frameworks for a class of systems, e.g., a particular level of a common operating environment). These three elements are detailed below.

Policy Framework

The purpose of the SEPR process is to embody institutional and long-term interests in requirements formulation, development, and post-deployment support that might otherwise be neglected or compromised in favor of short-term goals. Such short-term expedients could arise as undesired results of incentives created in the acquisition process or for other reasons. Early decisions concerning design, process, and other software engineering factors can have a significant influence on overall life-cycle cost and risk, and on the potential for product-line commonality and interoperability. For example, the following questions arise:

- What is the necessary level of maintainability (e.g., ongoing improvements in performance and quality, and evolution of computational infrastructure), and how will it be achieved?
- What is the necessary level of interoperability (e.g., within product-lines, with related DOD systems, and with related systems controlled by allies and in coalition forces), and how will it be achieved?
- What is the necessary level of trustworthiness (including reliability, fault tolerance, and survivability), and how will it be achieved?
- What are the likely future needs (e.g., new and changed requirements anticipated), and how will they be accommodated?
- What are the likely technology constraints, and what plans have been developed for inserting new technology?

Stakeholder Role

The committee recommends that the SAEs be in charge of carrying out the SEPR process at the DOD service level. The SAEs would establish milestones for the SEPR process, appoint expert reviewers and stakeholder representatives, and establish criteria for evaluation. The SAEs and their associated SEOs would be responsible for implementing these functions, although this responsibility could be delegated as detailed below. The appropriate counterparts in other DOD components would have corresponding responsibilities.

The most important element is participation in the SEPR by peer software managers experienced in the application area, as well as by key stakeholders as either advocates or reviewers. Because the SEPRB's staffing from stakeholder organizations can vary considerably among systems, SEPRB representation is divided into mandatory and discretionary categories. The SAE must appoint representatives from mandatory stakeholders, but can include discretionary stakeholders as appropriate to the software engineering plan to be reviewed.
For systems subject to Milestone Decision Authority (MDA) at the service level or in the Office of the Secretary of Defense, the mandatory list of stakeholders and peer reviewers includes the following:

- The PEOs (senior product-line officials) responsible for both development and post-deployment support for the candidate system and closely related systems;
- Management and technical officials responsible for maintenance of the systems being specified or developed;
- Representatives from user organizations, as appropriate;
- Peer program managers with related software engineering management experience; and
- Program managers for the system being specified, developed, or re-engineered (stakeholders, but reviewees rather than reviewers).

The discretionary list of stakeholders depends on the characteristics of the system being developed, but could include the following:

- Program managers for development and support of critical related systems that must interoperate with or are otherwise closely affected by the system under review;
- Representatives of the DOD community who have specific technical expertise and cognizance of emerging technologies;
- Representatives of other program executive offices, program offices, or other components that are responsible for key common architectural frameworks; and
- Representation, where appropriate, from the Office of the Secretary of Defense or the Joint Chiefs of Staff.
Approval Authority and Milestones

For systems subject to MDA, approval authority for the process resides with the Assistant Secretary of Defense (C3I) or the SAE, depending on the class of system. The direct management of the SEPR process would be carried out by the SAEs and their associated SEOs, with possible delegation to the PEO level, but the actual approval authority should not be delegated beyond the SAE. The Assistant Secretary of Defense (C3I) would monitor the review process. When significant deviations are needed from DOD's stated policy and principles, direct approval of the Assistant Secretary of Defense (C3I) may be required; this should be determined when approval authority is delegated to the SAEs. It is the intent of these recommendations, however, that policy be framed with sufficient flexibility and outlook to the future that such deviations are not required in the ordinary conduct of business. It is also the intent that implementation be delegated to a level sufficient to ensure in-depth review of software engineering decisions. SEPRs are ongoing processes, with specific approvals pertinent to specific milestones. SEPRs must be linked, at a minimum, to DAB and MAISRC Milestones I and II.

Many smaller systems are subject to DOD software engineering policy, but not to MDA. For these systems, approval authority resides with the SAE, but there is flexibility with respect to delegation and the need for formal SEPRs. Normally, the SAE can delegate approval authority to a PEO or, for very small systems, to a major command. In the latter case, approval can be granted for a family of related small systems as a result of a software engineering plan for a single product-line. The committee suggests that for non-MDA systems, the decision to conduct a formal SEPR process (or some more expedient process) be required for warfighting software, and be a recommended practice at the discretion of the approval authority for other software.
Given that the Director of Defense Research and Engineering (DDR&E) is responsible for advanced (6.3a) research, the committee recommends that the DDR&E establish a software engineering review process that addresses issues pertinent to the efficient transition of software technologies associated with major 6.3a demonstration programs, including plans to modify prototype 6.3a software to conform to the committee's recommended policy on selection of programming language, as appropriate. The review criteria, which would be at the discretion of the DDR&E, would not need to follow the SEPR process, thus enabling the DDR&E to manage the trade-off between efficient transitions, on the one hand, and the responsiveness and flexibility of research programs to the emergence of new technologies and concepts, on the other.

Submission of Software Engineering Plans

As envisioned by the committee, the SEPR process requires program managers responsible for MDA system specification, development, and major re-engineering efforts to submit a software engineering plan, preceded by a request to the SAE to convene a SEPRB. The SAE, considering the recommendations of the program manager and the cognizant PEO, would then select stakeholder organizations, which appoint representatives. For smaller systems, the SAE and PEO roles are further delegated, as indicated above. It is the committee's intent that the approval authority would work with the SEPRB and the program manager to develop a software engineering plan suitable for the project and in conformance with all DOD policies. No entry into the DAB Milestone I and II reviews could be initiated without concurrence of the approval authority. Criteria used for evaluation of the software engineering plan should be defined by the approval authority. The software engineering plan should be a simple document[3] and should cover areas relevant to the decision process, including the following:
- The system's scope and concept of operation;
- The key system and software requirements, including stakeholder needs;
- The key elements of the system and software architecture, including programming language decisions;
- The system and software life-cycle plans, including increments, budgets, and schedules; and
- A rationale demonstrating that the software can be developed within the budget and schedule specified in the life-cycle plan, can satisfy the requirements and key stakeholder needs, and can successfully support the concept of operation.

The SEPR approval authority, in consultation with PEO and program manager representatives, would develop specific criteria to be reviewed. The criteria for review could include, for example:

- System structural architecture, partitioning the system into components;
- Differentiation of key architecture requirements from secondary features and capabilities;
- Nature and extent of compliance of the architectural plan with related open-architecture and DOD framework common interfaces;
- Definition of increments and completion criteria (e.g., design to cost);
- Cost and risk management;
- Risk management plan designed into early releases;
- Metrics for indicating progress and measuring completion of milestones;
- Major milestone content, evaluation criteria, and demonstration scenarios; and
- Basis for decisions to make, buy, or reuse components (see below).
For each subsystem or component in the system, the following areas should be addressed:

- Availability of COTS products, non-developmental items, and other existing or reusable components;
- Appropriateness of new development;
- Appropriateness of new development for reuse (capitalization) in related systems;
- Potential for reuse or insertion into other related systems (incentives can be established by additional resources provided by the PEO);
- Use of tooling and generators for development, and status of the tooling and generators;
- Degree of compliance with related interface or framework standards;
- Maintenance responsibility (government, contractor, or commercial); and
- Choice of programming language, subject to the recommended policy in Appendix A.

Software Engineering Codes

As experience is gained, SAEs, PEOs, and other stakeholders will develop service-specific or domain-specific refinements of the review criteria listed in the previous section. For example, a service may designate conformance with a common architectural framework as a review item. These refinements may attain the status of software engineering "codes" (analogous to building codes) particular to a service or PEO product-line. These would serve as "best-practices" documents that would necessarily evolve over time, according to requirements and technology developments. They would also enable program managers to develop expectations concerning the SEPR process on the basis of their conformance with such codes.
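A software engineering "code" of this kind also lends itself to mechanical checking during plan intake. The sketch below is purely illustrative: it assumes a hypothetical convention in which a plan is a plain-text document containing named section headings; the section names and the check_plan function are this author's invention, not a DOD standard.

```shell
# Illustrative check that a plain-text software engineering plan covers the
# review areas discussed in this chapter. Section names and the plain-text
# convention are hypothetical, not an actual DOD "code".
check_plan() {
  plan="$1"
  missing=0
  for section in 'Concept of Operation' 'Requirements' 'Architecture' \
                 'Life-Cycle Plan' 'Rationale' 'Programming Language'; do
    if ! grep -qi "$section" "$plan"; then
      echo "MISSING: $section"
      missing=1
    fi
  done
  return $missing
}
```

Invoked as `check_plan draft_plan.txt`, the function lists any required areas absent from the draft and returns a nonzero status, which a review board's intake tooling could use as a simple gate before convening a SEPRB.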
NOTES

1. This section and Appendix A present similar material in different formats. Appendix A was prepared by the committee to serve as a proposed revision to DOD Directive 3405.1 (DOD, 1987a); this section discusses the principles and rationale underlying the committee's suggested changes to that policy document.

2. The committee recommends using the term "software engineering plan reviews" rather than "architecture reviews" to emphasize the importance of integrating plans for products (i.e., architecture or building plan) with plans for process (e.g., increments, milestones, budgets).

3. The life-cycle objectives and life-cycle architecture milestones introduced in Boehm (1996) provide guidelines for the level of detail of a software engineering plan desired at DAB and MAISRC Milestones I and II.