to reduce vulnerabilities by not embedding them in a system than to fix the problems that these vulnerabilities cause as they appear in operation.1 Vulnerabilities can result from design, as when system architects embed security flaws in the structure of a system. Vulnerabilities also result from flaws in development—good designs can be compromised because they are poorly implemented. Testing for security flaws is necessary because designers and implementers inevitably make mistakes, and because some may be compromised and deliberately introduce such flaws.

4.1.1 Research to Support Design

4.1.1.1 Principles of Sound and Secure Design

In the past 40+ years, a substantial amount of effort has been expended in the (relatively small) security community to articulate principles of sound design and to meet the goal of systems that are “secure by design.” On the basis of examinations of a variety of systems, researchers have found that the use of these principles by systems designers and architects correlates strongly with the security and reliability of a system. Box 4.1 summarizes the classic Saltzer-Schroeder principles, first published in 1975, which have been widely embraced by cybersecurity researchers.
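To make two of the Saltzer-Schroeder principles concrete, the sketch below illustrates fail-safe defaults (access is denied unless explicitly granted) and complete mediation (every access is checked). This is a hedged illustration, not from the report; the `ACL` table and `check_access` function are invented for the example.

```python
# Illustrative sketch of fail-safe defaults and complete mediation.
# The ACL records only explicit grants; anything absent is denied.
ACL = {("alice", "report.txt"): {"read"}}

def check_access(user: str, resource: str, action: str) -> bool:
    """Mediate every access; fail-safe default is denial."""
    # .get() with an empty set means a missing entry denies the action,
    # rather than an error path accidentally granting it.
    return action in ACL.get((user, resource), set())

assert check_access("alice", "report.txt", "read")        # explicitly granted
assert not check_access("alice", "report.txt", "write")   # never granted
assert not check_access("mallory", "report.txt", "read")  # unknown principal
```

The design choice worth noting is that the deny decision requires no code at all: it falls out of the default, so a forgotten case cannot open a hole.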

Systems not built in accord with such principles will almost certainly exhibit inherent vulnerabilities that are difficult or impossible to address. These principles, although well known in the research community and available in the public literature, have not been widely adopted in the mainstream computer hardware and software design and development community. There have been efforts to develop systems following these principles, but observable long-term progress relating specifically to the multitude of requirements for security is limited. For example, research in programming languages has resulted in advances that can obviate whole classes of errors—buffer overflows, race conditions, off-by-one errors, format string attacks, mismatched types, divide-by-zero crashes, and unchecked procedure-call arguments. But these advances, important though they are, have not been adopted on a sufficient scale to make these kinds of errors uncommon.
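The point about language design obviating error classes can be sketched briefly: in a memory-safe language with mandatory bounds checking, an out-of-bounds write is caught by the runtime instead of silently corrupting adjacent memory, which eliminates the classic buffer overflow as a vulnerability class. The example below is an assumption-laden sketch (the `copy_into` helper is invented for illustration), not a claim about any particular system discussed in the report.

```python
def copy_into(buffer: list, data: list) -> None:
    """Copy data into buffer; every index is bounds-checked by the runtime."""
    for i, byte in enumerate(data):
        buffer[i] = byte  # raises IndexError instead of overflowing memory

buf = [0] * 4
try:
    copy_into(buf, [1, 2, 3, 4, 5, 6])  # two writes too many
except IndexError:
    print("overflow attempt contained")  # buf holds [1, 2, 3, 4]; nothing beyond it was touched
```

In an unsafe language the same loop would overwrite whatever happened to sit past the buffer; here the error surfaces as a contained, debuggable exception, which is precisely the kind of whole-class elimination the text describes.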

Nonetheless, the principles remain valid—so why have they had so little impact in the design and development process? In the committee’s

1 For example, Soo Hoo et al. determined empirically that fixing security defects after deployment cost almost seven times as much as fixing them before deployment. Furthermore, security investments made in the design stage are 40 percent more cost-effective than similar investments in the development stage. See K. Soo Hoo, A. Sudbury, and A. Jaquith, “Tangible ROI Through Secure Software Engineering,” Secure Business Quarterly, Quarter 4, 2001.
