Findings and Recommendations
In this chapter, the committee distills its proposed approach and findings and briefly discusses its recommendations for achieving justifiable confidence in dependable software systems.
Improvements in software development are needed to keep pace with societal demands for software. Avoidable software failures have already been responsible for loss of life and for large economic losses. The quality of software produced by the industry is extremely variable, and there is inadequate oversight in some critical areas. Unless improvements are made, more pervasive deployment of software in the civic infrastructure1 may lead to catastrophic failures. Software has the potential to bring dramatic benefits to society, but it will not be possible to realize these benefits—especially in critical applications—unless software becomes more dependable.
More data are needed about software failures and the efficacy of development approaches. Assessment of the state of the software industry, the risks posed by software, and the progress made is currently hampered by the lack of a coherent source of information about software failures. Careful documentation and analysis of failures have had a dramatic impact in other areas. More attention should be paid to the contributions of software to accidents, and repositories of accident reports are needed that include sufficient detail to enable the analysis of trends and the evaluation of technologies and methods. Without a concerted effort to collect better data, investment in software technology and research may be misdirected, ineffective practices will persist, and adoption of the most effective methods will be hindered. In the absence of a federal initiative, the situation might improve dramatically if all the parties currently involved in software production, regulation, and accident reporting were to monitor systems more pervasively and systematically for failures; involve software experts to a greater degree in the investigation of failures of systems that include software as a component; and insist on greater transparency in every aspect of software development and deployment than is currently expected.

1 As an indication of the growth in the pervasiveness of software, the Bureau of Labor Statistics found in 2003 that the output of prepackaged software increased annually by 26.5 percent between 1990 and 2000, growth attributed to “the increased use of computers and the rising demand for reliable, user-friendly software.” See <http://www.bls.gov/opub/ted/2003/feb/wk3/art01.htm>.
To Builders and Users of Software
Make the most of effective software development technologies and formal methods. A variety of modern technologies—in particular, safe programming languages, static analysis, and formal methods—are likely to reduce the cost and difficulty of producing dependable software. Elementary best practices, such as source code control and systematic defect tracking, should be universally adopted, and development organizations that fail to use them should not be regarded as sources of dependable software. Advanced practitioners, especially those working in specialized domains, may be justified in creating their own framework of processes and practices that embodies these recommended elements. But those who are not already familiar with the best practices of the industry (described previously) should first ensure that their developments adhere to these elements and then consider diverging only under extraordinary circumstances. Formal methods have been shown to be effective only for small to medium-sized critical systems and have not been widely adopted. Furthermore, they require a new mindset and may demand staff with greater expertise, especially in the early stages of development. Nevertheless, key elements of formal techniques would aid in the cost-effective construction of dependability cases and could be widely applied, especially in combination with the incrementality and minimality encouraged in some development approaches such as those currently labeled “agile.”
Follow proven principles for software development. The committee’s proposed approach also includes adherence to the following principles:
Take a systems perspective. A systems perspective should be adopted in which the dependability of software is viewed not in terms of intrinsic properties (such as the incidence of bugs in the code) but in terms of the system as a whole, including interactions among people, process, and technology and encompassing both the physical and organizational environment of the system. Engineering of software should be driven by a consideration of risks and their mitigation, and well-established risk analysis and reduction techniques that are applied in other domains (such as hazard analysis) should be routinely applied to software. Different levels of assurance will be appropriate for different systems and for dependability properties within a single system.
Exploit simplicity. If dependability is to be achieved at reasonable cost, simplicity should become a key goal, and developers and customers must be willing to accept the compromises it entails. Unfettered growth in the complexity of the functionality offered is incompatible with dependability. The architecture of the software should reflect the prioritization of requirements, ideally so that the critical properties can be established by examining closely only a small portion of the software, relying on independence arguments to account for lack of interference from the remaining portions.
Make a dependability case for a given system and context: evidence, explicitness, and expertise. A software system should be regarded as dependable only if sufficient evidence is presented to substantiate the dependability claim. The evidence should take the form of a dependability case that explains why the critical properties hold, and it will involve reasoning about both the code and the environmental assumptions. To the extent that this reasoning can be supported by automated tools, it will be more credible. The dependability properties should be explicitly articulated and carefully prioritized; the assumed properties of the environment should be made explicit also. This approach gives considerable leeway to developers to use whatever practices are best suited to the problem at hand. In particular, it allows the use of less robust components and languages at the expense of having to mitigate the risk with a more elaborate dependability argument. Despite this flexibility, in practice the challenges of developing dependable software are sufficiently great that developers will need considerable expertise and will have to justify any deviations from best practices.
Demand more transparency, so that customers and users can make more informed judgments about dependability. Customers and users can make informed judgments when choosing suppliers and products only if the claims, criteria, and evidence for dependability are transparent. The willingness of a supplier to provide data beyond the dependability case proper (about the qualifications of its personnel, its track record in providing dependable software, and the process it used), and the clarity and integrity of the data it provides, will be a strong indicator of its attitude toward dependability.
Make use of, but do not rely solely on, process and testing. Testing will be an essential component of a dependability case but will not in general suffice, because even the largest test suites typically used will not exercise enough paths to provide evidence that the software is correct, nor will they have sufficient statistical significance for the levels of confidence usually desired. Testing is nevertheless a vital aspect of every development, not only because it exposes flaws but also because it provides feedback on the quality of the development process. Software that fails many test cases probably cannot be made dependable and should perhaps be abandoned. Adherence to a particular process will not suffice as evidence either. There is no established universal correlation between process and dependability, although demonstrated adherence to process contributes to the dependability case. In other words, rigorous process is essential for preserving the chain of dependability evidence but is not per se evidence of dependability. Without a rigorous process, however, evidence produced by the developers will not be credible, and it is unlikely that the developing organization will be able to identify and correct flaws in the way it produces software. An effective process need not be a burdensome one, and too elaborate a process (especially if it requires the production of excessive documentation) can be damaging.
Base certification on inspection and analysis of the dependability claim and the evidence offered in its support. Because testing and process alone are insufficient, the dependability claim will require, in addition, evidence produced by analysis. Analysis may involve well-reasoned informal argument, formal proofs of code correctness, and mechanical inference (as performed, for example, by type checkers). Indeed, the dependability case for even a relatively simple system will usually require all of these kinds of analysis, and they will need to be fitted together into a coherent whole. A developer that uses COTS components will either have to demonstrate in the dependability case that their failure will not undermine the crucial dependability properties or will have to incorporate in the case appropriate claims about the properties of the components themselves. Absent careful engineering, a system can become as vulnerable as its weakest components, so the inclusion of standard desktop software in critical applications should be carefully examined. Where the customer for the software is not able to carry out that work itself (through lack of time or lack of expertise), it will need to involve a third party whose judgment it can rely on to be independent of commercial pressures from the vendor. Certification can take many forms, from self-certification through independent third-party certification by a licensed certification authority.
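As a small, hedged illustration of how these kinds of analysis layer together: a static type checker can discharge one class of claims mechanically, while an executable assertion records an environmental assumption that the informal argument in the dependability case must justify separately. The function below and its operating range are hypothetical, chosen only to show the division of labor:

```python
def scale_dose(weight_kg: float, mg_per_kg: float) -> float:
    """Hypothetical dosing computation, for illustration only. The type
    annotations state claims a static type checker can verify mechanically;
    the precondition below encodes an environmental assumption (plausible
    patient weights) that must be argued for elsewhere in the case."""
    # Environmental assumption: callers supply a weight in the assumed range.
    assert 0 < weight_kg < 500, "weight outside assumed operating range"
    return weight_kg * mg_per_kg

print(scale_dose(70.0, 0.5))  # prints 35.0
```

Neither mechanism is evidence on its own: the type check rules out one class of misuse, the assertion makes an assumption explicit and testable, and the surrounding argument must connect both to the critical property being claimed.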
Include security considerations in the dependability case. By violating assumptions about how components behave, about their interactions, or about the expected behavior of users, security vulnerabilities can undermine the case made for dependability properties. The dependability case must therefore account explicitly for security risks that might compromise its other aspects. It is also important to ensure that security certifications give meaningful assurance of resistance to attack. Owners of products and systems whose security has been certified expect that if they deploy the products and systems properly, most attacks against those products or systems will fail. Today’s security certification regimes do not provide this confidence, and new security certification regimes are needed. Such certification regimes can be built by applying the other findings and recommendations of this report, with an emphasis on the role of the environment—in particular, the assumptions made about the potential actions of a hostile attacker and the likelihood that new classes of vulnerabilities will be discovered and new attacks developed to exploit them.
Demand accountability and make it explicit. Where there is a need to deploy certifiably dependable software, it should always be made explicit who is accountable, professionally and legally, for any failure to achieve the declared dependability. At present, it is common for software developers to disclaim liability for defects in their products to a greater extent than customers and society expect from manufacturers in other industries. Clearly, no software should be considered dependable if it is supplied with a disclaimer that withholds the manufacturer’s commitment to provide a warranty or other remedies for software that fails to meet its dependability claims. The appropriate scope of remedies was not determined in this study, however, and would require a careful analysis of benefits and costs.
To Agencies and Organizations That Support Software Education and Research
The committee was not constituted or charged to recommend budget levels or to assess trade-offs between software dependability and other priorities. However, the committee does conclude that the increasing importance of software to society and the extraordinary challenge currently faced in producing software of adequate dependability provide a strong rationale for investment in education and research initiatives.
Place greater emphasis on dependability—and its fundamental underpinnings—in the high school, undergraduate, and graduate education of software developers. Many practitioners do not have an adequate appreciation of software dependability issues, are not aware of the most effective development practices, or are not capable of applying them appropriately. A focus on dependability considerations in high school, undergraduate, and graduate educational contexts is therefore needed. The importance of dependability for software is not adequately stressed in most degree programs in the United States. More emphasis should be placed on systems thinking; on requirements, specification, and large-scale design; on security; on usability; on the development of robust and resilient code; on basic discrete mathematics and statistics; and on the construction and analysis of dependability arguments.
Federal agencies that support information technology research and development should give priority to basic research that furthers software-enabled system dependability, emphasizing a systems perspective and evidence. Until there is a dramatic improvement in the methods, languages, and tools of software development, there will be systems that cannot be constructed to appropriate levels of dependability, and even where such construction is possible, the cost will be higher than it should be. Because of the increasing importance of software to our society and the extraordinary challenge of producing software of adequate dependability, research that emphasizes a systems perspective and “the three E’s” (evidence, explicitness, and expertise) should be a priority for funding agencies. The research should be informed by a systems view that assigns greater value to advances likely to have an impact in a world of large systems interacting with other systems and operators in a complex physical environment and organizational context.
* * *
The committee believes that the approach discussed here will substantially improve the dependability of many critical software systems being produced today. While the economic trade-offs are different in individual cases, the committee believes that its recommendations are generally applicable to many non-safety-critical systems as well—a consideration that becomes increasingly important as COTS components are reused in critical systems and accidental systems are formed from a mix of critical and noncritical components. Applying the committee’s approach to all software systems, safety-critical and non-safety-critical alike, promises to alleviate the heavy costs and frustrations that low-quality software imposes even in noncritical applications.
In the long term, innovations in software engineering are likely to bring dramatic improvements in dependability. Software systems are complex and, just as in other sorts of complex systems, failures will inevitably occur. But if our society succeeds in this ambitious program, we can hope that, 10 or 20 years from now, the adoption of ambitious and potentially dangerous new systems will be justified by rational arguments; a broad consensus in the software industry will guide standard practice; the production of software will be less expensive and more predictable than it is today; and the incidence of software failures will be low and well-documented.