BOX 8.2

Secrecy of Design

Secrecy of design is often disparaged with the phrase “security through obscurity,” and one often hears arguments that security-critical systems or elements should be developed in an open environment that encourages peer review by the general community. Evidence abounds of systems that were developed in secret only to be reverse-engineered, their details published on the Internet and their flaws pointed out for all to see. But open-source software, too, has often contained security flaws that remained undiscovered for years.1

The argument for open development rests on certain assumptions, including these: the open community will have individuals with the necessary tools and expertise; those individuals will devote adequate effort to locating vulnerabilities; they will come forward with the vulnerabilities they find; and vulnerabilities, once discovered, can be closed, even after the system is deployed.

There are environments, such as military and diplomatic settings, in which these assumptions do not necessarily hold. In such environments, groups interested in finding vulnerabilities will mount long-term, well-funded analysis efforts that are likely to dwarf anything individuals or organizations in the open community might launch. Further, these well-funded groups will take great care to keep any vulnerabilities they discover secret, so that they can be exploited (in secret) for as long as possible.

Special problems arise when partial public knowledge about the nature of the security mechanisms is necessary, such as when a military security module is designed for integration into commercial off-the-shelf equipment. Residual vulnerabilities are inevitable, and the discovery and publication of even one such vulnerability may, in certain circumstances, render the system defenseless. In general, it is not sufficient to protect only the exact nature of a vulnerability; the precursor information from which the vulnerability could readily be discovered must also be protected, and that requires an exactness of judgment not often found in group endeavors. When public knowledge of aspects of a military system is required, the

acting on behalf of an adversary—are most likely associated with a high-end threat, such as a hostile major nation-state. Their motivations vary widely and include the desire for recognition of hacking skills, ideological convictions, and monetary incentives. Knowingly compromised insiders may become compromised through bribery, blackmail, ideological or psychological predisposition, or successful infiltration, among other reasons. By contrast, unknowingly compromised insiders are the victims of manipulation and social engineering: in essence, they are tricked into using their special knowledge and position to assist an adversary.

Regarding the knowingly compromised insider, a substantial body of experience suggests that it ranges from very difficult to impossible to identify, with reasonable reliability and precision, individuals who will

The National Academies of Sciences, Engineering, and Medicine

Copyright © National Academy of Sciences. All rights reserved.