Suggested Citation: "Executive Summary." National Research Council. 2003. Innovations in Software Engineering for Defense Systems. Washington, DC: The National Academies Press. doi: 10.17226/10809.

Executive Summary

Recent rough estimates are that the U.S. Department of Defense (DoD) spends at least $38 billion a year on the research, development, testing, and evaluation of new defense systems; approximately 40 percent of that cost, at least $16 billion, is spent on software development and testing. There is widespread understanding within DoD that the effectiveness of software-intensive defense systems is often hampered by low-quality software as well as increased costs and late delivery of software components. Given the costs involved, even relatively incremental improvements to the software development process for defense systems could represent a large savings in funds. And given the importance of producing defense software that will carry out its intended function, relatively small improvements to the quality of defense software systems would be extremely important to identify.

DoD software engineers and test and evaluation officials may not be fully aware of a range of available techniques, because of both the recent development of these techniques and their origination from an orientation somewhat removed from software engineering, i.e., from a statistical perspective. The panel's charge therefore was to convene a workshop to identify statistical software engineering techniques that could have applicability to DoD systems in development. It was in response both to previous reports that identified a need for more attention to software and to the emergence of techniques that have been widely applied in industry and have demonstrated impressive benefits in the reduction of errors as well as testing costs and time.

The panel structured its task consistent with a statistical approach to collecting information on system development. This approach has three stages: (1) determination of complete, consistent, and correct requirements; (2) selection of an effective experimental design to guide data collection on system performance; and (3) evaluation of system performance, consistent with requirements, using the data collected. The intent was to focus on methods that had been found useful in industrial applications. Some of the issues selected for study were suggested by DoD software experts as specific areas to address that would make the workshop relevant to their concerns and needs.

The panel's work was not intended to be comprehensive in identifying techniques that could be of use to DoD in software system development and for software engineering, or even for statistically oriented software engineering. Without question, a number of very effective techniques in software engineering were neither represented at the workshop nor considered by the panel.

The workshop sessions on specification of requirements focused on software cost reduction and sequence-based software specification. The sessions on test design addressed Markov chain usage testing and combinatorial design testing (brief illustrative sketches of these two test design methods appear at the end of this summary). Presentations on software evaluation discussed risk assessment, cost modeling, models of software process improvement, and fault-tolerant software. The presentations on software evaluation suggested the great variety of issues that can be addressed through data analysis and, therefore, the value of collecting and analyzing data on system performance, both during system development and after fielding a system.

After the workshop, the oversight committee interacted mainly by e-mail and, to a lesser extent, by telephone and in person in small groups of two or three. Informed and primarily motivated by the workshop's presentations, but augmented by these subsequent interactions, the oversight committee decided to issue recommendations for further efforts in this area.

Recommendation 1: Given the current lack of implementation of state-of-the-art methods in software engineering in the service test agencies, initial steps should be taken to develop access, either in-house or in a closely affiliated relationship, to state-of-the-art software engineering expertise in the operational or developmental service test agencies.

Such expertise could be acquired in part in several ways, especially including internships for doctoral students and postdoctorates at the test and evaluation agencies, and sabbaticals for test and evaluation agency staff at industry sites at which state-of-the-art techniques are developed and used.

Recommendation 2: Each service's operational or developmental test agency should routinely collect and archive data on software performance, including test performance data and data on field performance. The data should include fault types, fault times and frequencies, turnaround rate, use scenarios, and root cause analysis. Also, software acquisition contracts should include requirements to collect such data.

Recommendation 3: Each service's operational or developmental test agency should evaluate the advantages of the use of state-of-the-art procedures to check the specification of requirements for a relatively complex defense software-intensive system. One effective way of carrying out this evaluation would be to develop specifications in parallel, using both the state-of-the-art procedures and the service's current procedures, so that quality metric comparisons can be made. Each service would select a software system that: (1) requires field configuration, (2) has the capabilities to adapt and evolve to serve future needs, and (3) has both hardware and software that come from multiple sources.

Recommendation 4: Each service's operational or developmental test agency should undertake a pilot study in which two or more testing methods, including one model-based technique, are used in parallel for several software-intensive systems throughout development to determine the amount of training required, the time needed for testing, and each method's effectiveness in identifying software defects. This study should be initiated early in system development.

Recommendations 3 and 4 propose that each of the services select software-intensive systems as case studies to evaluate new techniques. By doing so, test and evaluation officials can judge both the costs of training engineers to use the new techniques (the "fixed costs" of widespread implementation) and the associated benefits from their use in the DoD acquisition environment. Knowledge of these costs and benefits will aid DoD in deciding whether to implement these techniques more broadly. There are insufficient funds in individual programs or systems under development to support the actions we recommend. Thus, support will be needed from the services themselves or from the Department of Defense.

Recommendation 5: DoD should allocate general research and development funds to support pilot and demonstration projects of the sort recommended in this report in order to identify methods in software engineering that are effective for defense software systems in development.

The panel notes that constraints external to the technical considerations of the costs and benefits of implementing the techniques themselves hinder DoD from imposing specific state-of-the-art techniques in software engineering and development on its contractors. However, these restrictions do not preclude the imposition of a framework, such as the Capability Maturity Model.

Recommendation 6: DoD needs to examine the advantages and disadvantages of the use of methods for obligating software developers under contract to DoD to use state-of-the-art methods for requirements analysis and software testing, in particular, and software engineering and development more generally.

The techniques discussed in this report are consistent with these conditions and constraints. We also note that many of the techniques described in this report are both system oriented and based on behavior and are therefore applicable to both hardware and software components, which is an important advantage for DoD systems.
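The two test design methods named above can be made concrete with small sketches. Neither example is taken from the report; the configuration factors, usage states, and probabilities are hypothetical, and each sketch shows only the basic idea under those assumptions. The first illustrates combinatorial (pairwise) design testing: rather than running every combination of configuration values, a greedy procedure selects a small set of test configurations that still exercises every pair of values at least once.

```python
from itertools import combinations, product

# Hypothetical configuration factors and levels (illustrative only).
factors = {
    "os":      ["linux", "windows", "vxworks"],
    "network": ["wired", "radio"],
    "load":    ["low", "high"],
    "crypto":  ["on", "off"],
}
names = list(factors)

def pairs(values):
    """All (factor, value) pairs exercised by one candidate test configuration."""
    return set(combinations(zip(names, values), 2))

# All possible configurations; feasible to enumerate because this space is tiny.
candidates = list(product(*factors.values()))

# Every two-way combination of values must be covered by at least one test.
uncovered = set().union(*(pairs(t) for t in candidates))

suite = []
while uncovered:
    # Greedy step: pick the configuration that covers the most remaining pairs.
    best = max(candidates, key=lambda t: len(pairs(t) & uncovered))
    suite.append(dict(zip(names, best)))
    uncovered -= pairs(best)

print(f"{len(suite)} pairwise tests instead of {len(candidates)} exhaustive runs:")
for test in suite:
    print(test)
```

The second sketch illustrates Markov chain usage testing: expected operational use is modeled as a Markov chain over system states, and test cases are generated as random walks through that chain, so that testing effort concentrates on how the system will actually be used. Again, the states and transition probabilities below are invented for illustration.

```python
import random

# Hypothetical usage model: states are modes of operation, and each entry lists
# (next_state, probability) pairs describing how operators are expected to move
# between them. The probabilities out of each non-terminal state sum to 1.
usage_model = {
    "start":        [("login", 1.0)],
    "login":        [("plan_mission", 0.7), ("view_status", 0.3)],
    "plan_mission": [("execute", 0.6), ("view_status", 0.3), ("logout", 0.1)],
    "view_status":  [("plan_mission", 0.5), ("logout", 0.5)],
    "execute":      [("view_status", 0.8), ("logout", 0.2)],
    "logout":       [],  # terminal state: the usage session ends here
}

def generate_test_case(model, rng):
    """One operational test path: a random walk from 'start' to the terminal state."""
    state, path = "start", ["start"]
    while model[state]:
        next_states, weights = zip(*model[state])
        state = rng.choices(next_states, weights=weights)[0]
        path.append(state)
    return path

rng = random.Random(1)  # fixed seed so the sampled test suite is reproducible
for _ in range(5):
    print(" -> ".join(generate_test_case(usage_model, rng)))
```

Because the generated paths follow the expected usage profile, the outcomes of such tests can also feed reliability estimates, which connects naturally with the data collection urged in Recommendation 2.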
