The panelists and participants in the workshop largely agreed that mature, comprehensive engineering processes reduce the risk that a software development project will overrun or deliver poor-quality software. But it was also noted that a good process does not guarantee that the resulting software will be of high quality. “Process properties are not predictive of product properties,” was a typical opinion; another was, “You should not believe that if you do it right it will come out right.” The idea that it is possible to “get dirty water out of clean pipes” summarized much of this discussion.

Process was described as a “skeleton” on which a dependable system is built. Process quality is important even if it does not directly predict product quality. For example, it is important to be able to show that the version of the software that has been tested and certified is the exact version that is running, unmodified, in the operational system. Similarly, if the testing process cannot be trusted, it may be difficult to establish confidence in the test results.3
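
To illustrate the version-tracing point in concrete terms, the following is a minimal sketch in Python (hypothetical, not from the workshop) of one common approach: comparing a cryptographic digest of the deployed artifact against a digest recorded when the software was certified. The file paths and the certified digest are invented for illustration.

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_deployed(deployed_path: str, certified_digest: str) -> bool:
        """True only if the running artifact is byte-identical to the certified one."""
        return sha256_of(deployed_path) == certified_digest

    # Hypothetical usage; in practice the digest would come from a signed
    # certification record, and this check would be part of deployment.
    # ok = verify_deployed("/opt/app/bin/server",
    #                      "3a7bd3e2360a3d29eea436fcfb7e44c735d117c4...")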

Software development processes were described as a “chain of weak links” and as “weakness in depth,” in that each process step can introduce flaws that damage the product. Product certification requires adequate evidence of the properties of the product, and the weight that can be given to this evidence depends, in turn, on the quality of the processes that created the evidence. So the quality of these processes is an important factor in certification. It is important to distinguish between evidence that the process has been followed (which gives credibility to the outcome of the process) and evidence of the system properties that the process produces.

A detailed record of a systematic process can be essential for the many development tasks that depend on information not easily obtainable from the program itself. For example, reliable build logs and version control are necessary for tracing the affected versions of systems in the field once a fault has been discovered. Similarly, avoiding complete recertification of a system after a maintenance change requires sufficient evidence of the maximum possible impact of the change, so that only the affected areas need to be recertified.
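
As a sketch of this traceability in practice: given trustworthy version control, one can ask which released versions contain the change that introduced a fault. The example below assumes a Git repository in which releases are marked with tags; the commit hash in the usage note is hypothetical.

    import subprocess

    def releases_containing(commit: str, repo_dir: str = ".") -> list[str]:
        """List release tags whose history includes the given commit.

        Relies on `git tag --contains`, and therefore on an intact, trusted
        version-control history -- exactly the kind of process record the
        discussion above emphasizes.
        """
        out = subprocess.run(
            ["git", "tag", "--contains", commit],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        )
        return [tag for tag in out.stdout.splitlines() if tag]

    # Hypothetical usage, once a fault has been traced to a commit:
    # affected = releases_containing("4f2a9c1")
    # print("Fielded versions needing attention:", affected)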

Panelists discussed the artifacts produced as part of development (program code, specifications, designs, analyses, and the like). One panelist noted that it is the properties of these artifacts that should be measured: “I run your code, not your process.” For this reason, a key activity in certification should be measurement of the product and of the intermediate artifacts. Such measurements could include system performance, test coverage, the consistency and completeness of specifications, and verification that a design implements a specification. While there does not seem to be a single metric that can predict dependability, several participants said that measures such as these, when used in combination, are good predictors of dependability. It is important to measure directly the properties that actually matter, because economic theory suggests that measurement skews incentives: “If you want A and measure B, you will get B.” All of this points to the importance of good empirical research that relates the attributes under consideration and makes clear which variables are dependent and which are independent.
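
To make the “measures in combination” idea concrete, the following is a minimal, hypothetical sketch that aggregates several product measurements into a single screening score. The metric names and weights are invented for illustration; as the discussion notes, no single number of this kind predicts dependability, and any real weighting would have to be grounded in the empirical research called for above.

    from dataclasses import dataclass

    @dataclass
    class ArtifactMeasures:
        test_coverage: float        # fraction of code exercised by tests, 0..1
        spec_completeness: float    # fraction of requirements with a spec clause, 0..1
        design_conformance: float   # fraction of design obligations verified, 0..1

    # Hypothetical weights; in practice these must come from empirical studies
    # relating each measure to observed field dependability.
    WEIGHTS = {"test_coverage": 0.4, "spec_completeness": 0.3, "design_conformance": 0.3}

    def screening_score(m: ArtifactMeasures) -> float:
        """Weighted combination of product measurements, used only to flag
        artifacts for closer review -- not as a dependability guarantee."""
        return (WEIGHTS["test_coverage"] * m.test_coverage
                + WEIGHTS["spec_completeness"] * m.spec_completeness
                + WEIGHTS["design_conformance"] * m.design_conformance)

    # Example: a module with strong tests but an incomplete specification.
    print(f"{screening_score(ArtifactMeasures(0.9, 0.5, 0.7)):.2f}")  # 0.72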

There was some discussion of the phenomenon that software errors are not evenly distributed throughout a system; they tend to cluster in the more complex areas, creating “black holes” of software defects. These black holes can be located by examining the defect history of past and current releases. An underlying assumption, however, is that resources are constrained: there are never enough to analyze an entire system in this manner. Indeed, some forms of analysis or testing may be infeasible outright, requiring orders of magnitude more resources than could possibly be made available, so one must focus the analysis on the areas that seem to deserve the most attention. Black holes in the resulting software can often be traced to black holes in the design or specification, so deep analysis of the quality of these artifacts, early in the development of the software, can be very cost-effective.
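
One operational reading of the “black hole” observation is to rank modules by their recency-weighted defect history and spend a limited analysis budget at the top of the list. The sketch below is hypothetical; the half-life constant and the defect-record format are assumptions for illustration, not workshop recommendations.

    import math
    from collections import defaultdict

    def rank_modules(defects: list[tuple[str, int]],
                     half_life: float = 180.0) -> list[tuple[str, float]]:
        """Rank modules by recency-weighted defect counts.

        `defects` is a list of (module, age_in_days) pairs from the defect
        tracker; recent defects count more, with weight halving every
        `half_life` days. Returns modules sorted hottest-first.
        """
        scores: dict[str, float] = defaultdict(float)
        for module, age_days in defects:
            scores[module] += math.exp(-math.log(2) * age_days / half_life)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical defect records: (module, days since the defect was filed).
    history = [("parser", 10), ("parser", 40), ("scheduler", 400), ("ui", 700)]
    for module, score in rank_modules(history):
        print(f"{module}: {score:.2f}")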

3. It was suggested that while somewhat too prescriptive, the Capability Maturity Model is correct in its assessment of which processes matter, as well as in its insight that there is a necessary progression in process improvement (it is not possible to leap easily from Level 1 to Level 4). The committee will explore this and other process-oriented models in the second phase of the study.


