Appendix E
Technical Vulnerabilities Targeted by Cyber Offensive Actions

The discussion in this appendix is based largely, though not entirely, on an earlier National Research Council report on cybersecurity describing vulnerabilities in the information technology on which the United States relies.1 However, there is no reason to suppose, and no available evidence to suggest, that other nations (or other non-national parties) are systematically better than the United States at eliminating vulnerabilities from the information technology that they use.

SOFTWARE

Software constitutes the most obvious set of vulnerabilities that an attacker might exploit. In a running operating system or application, vulnerabilities may be present as the result of faulty program design or implementation, and the exploitation of such vulnerabilities may become possible when the targeted system comes into contact with a hostile trigger (either remotely or close up). For example, a pre-implanted vulnerability in a program may be triggered at a particular time, or when a particular input is received.
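The trigger mechanism described above is simple to express in code. The sketch below is a hypothetical illustration (the trigger date, the magic input value, and the function are all invented for this example), not a reconstruction of any real implant:

```python
from datetime import date

TRIGGER_DATE = date(2025, 7, 4)   # hypothetical activation date
TRIGGER_INPUT = "0xDEADBEEF"      # hypothetical "magic" input value

def process_request(user_input: str, today: date) -> str:
    # Ordinary, documented behavior of the program.
    result = f"processed: {user_input}"

    # Pre-implanted hostile logic, dormant until triggered either
    # by the calendar or by a specific input from the attacker.
    if today >= TRIGGER_DATE or user_input == TRIGGER_INPUT:
        result = "payload executed"   # stands in for the hostile action

    return result

# Before either trigger condition is met, behavior looks normal:
print(process_request("hello", date(2024, 1, 1)))       # processed: hello
# A particular input (or date) activates the implant:
print(process_request("0xDEADBEEF", date(2024, 1, 1)))  # payload executed
```

Ordinary testing that never supplies the magic input, and never runs past the trigger date, sees only correct behavior.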

When vendors find vulnerabilities, they usually issue patches to fix them. But the issuance of a patch sometimes increases the threat to those who do not install it—when a patch is widely disseminated, it also serves to notify a broad range of would-be attackers that a specific vulnerability exists. If the patch is then not installed, a broader range of attackers is likely to have knowledge of the vulnerability than if the patch had not been distributed at all. And patches are not always installed when the vendor issues them, because patch installation will from time to time damage existing functionality on a system (e.g., causing a critical application to stop working until it can be made compatible with the patch to be installed).

1 National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.

As a rule, vulnerabilities resulting from design errors or insecure design choices are harder to fix than those resulting from implementation errors. Perhaps still more difficult are vulnerabilities introduced by unintended functionality (the euphemism for adding a function to software that helps an attacker but is not desired by the authorized user or developer)—the classic "back-door" vulnerability.2 Most system evaluation checks the extent to which a product meets its formal requirements, not whether it does more than intended. Whereas vulnerabilities due to faulty design and implementation may be uncovered during testing or exposed during system operation and then fixed, vulnerabilities associated with unintended functionality may go undetected, because finding them is tantamount to proving a negative.

Today, applications and operating systems comprise millions of lines of code, and not every changed line of source code can possibly be audited. A widely used program might have vulnerabilities deliberately introduced into it by a "rogue" programmer employed by the software vendor but planted by the attacker. (One of the most plausible vectors for the surreptitious introduction of hostile code is a third-party device driver.
In some operating systems, drivers almost always require the calling system to delegate to them privileges higher than those granted to ordinary users—privileges that allow the code within drivers to bypass operating system protections.)

2 As an example of a back door that is harmless, most versions of Microsoft Word from Word 97 to Word 2003 contain some unexpected functionality—typing "=rand()" in a Word document and then pressing the ENTER key results in three paragraphs of five repetitions of the sentence "The quick brown fox jumps over the lazy dog." This particular back door is harmless and is even documented by Microsoft (see "How to Insert Sample Text into a Document in Word," available at http://support.microsoft.com/kb/212251). Such functionality could easily not be documented, and could easily be harmful as well. For example, a security interface to a computer might be designed to require the user to enter a password and to insert a physical "smart card" into a slot before granting her access. But the interface could easily be programmed to ignore the smart-card requirement when a special password is entered, and then to grant the user many more privileges than would be normal. On the other hand, the in-advance installation of a back-door vulnerability always runs a risk of premature exposure—that is, it may be discovered and fixed before the attacker can use it. Even worse from the attacker's standpoint, it may be fixed in such a way that the attacked system appears vulnerable but is in fact not vulnerable to that particular attack. Thus, the attacker may attack and believe he was successful, even though he was not.

To ensure that such vulnerabilities are not introduced, vendors take many steps, such as multiple code reviews during the software development process. But source code does not always reveal the entire functionality of a system. For example, compilers are used to generate object code from source code. The compiler itself must be secure, for it could introduce object code that subversively and subtly modifies the functionality represented in the source code.3

Moreover, maliciously constructed code intentionally introduced to implant vulnerabilities in a system for later exploitation is typically more difficult to detect than are vulnerabilities that arise in the normal course of software development.4 Attackers highly skilled in the art of obfuscating malicious code can make finding intentionally introduced vulnerabilities a much harder problem than finding accidental flaws. Finding such vulnerabilities requires tools and skills far beyond those typically employed during system testing and evaluation aimed at discovering accidentally introduced defects. The discovery process requires detailed analysis by human experts, making it extremely expensive. Indeed, it is rarely done except for systems in which reliability and security are paramount (e.g., nuclear command and control systems).

The introduction of deliberate vulnerabilities into software is facilitated by the economic imperatives of software development and the opaqueness of the software development supply chain. Today, developing custom software for every application is impractical in terms of both cost and time.
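A deliberately introduced flaw of this kind need not look like a back door at all; it is often tiny and easy to mistake for an honest bug. The following sketch (the function, token, and flaw are invented for this example) looks defensive at a glance, even using a constant-time comparison, but truncating the expected token to the length of the attacker-supplied string turns it into a back door:

```python
import hmac

EXPECTED_TOKEN = "correct-horse-battery"   # hypothetical secret

def verify_token(supplied: str) -> bool:
    # Appears careful: a length guard plus a constant-time comparison.
    n = min(len(supplied), len(EXPECTED_TOKEN))
    # The subtle, deliberate flaw: both sides are truncated to the
    # length of the attacker-supplied string before being compared,
    # so matching only the first character is enough to authenticate.
    return n > 0 and hmac.compare_digest(supplied[:n], EXPECTED_TOKEN[:n])

print(verify_token("correct-horse-battery"))  # True, as expected
print(verify_token("c"))                      # True: the back door
print(verify_token("x"))                      # False, so casual testing passes
```

A reviewer or tester who exercises only well-formed inputs sees correct behavior; only a deliberate one-character probe reveals the flaw.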
Custom software developed for a single purpose must be paid for entirely by the party for which it is developed, and thus software producers often seek to reduce costs by using commercial off-the-shelf (COTS) software and/or outsourcing their software development whenever possible (e.g., using commercial operating systems or database systems), even if critical systems are involved.5 In practice, systems are composed of components designed and implemented by many vendors. These vendors in turn often subcontract major components, and those subcontractors may in turn subcontract portions of their work. Because the spread of the Internet and of high-speed communications capabilities such as broadband fiber optics has made global development of software not only possible but also desirable for cheaply tapping the broadest range of talent,6 these subcontractors are often located in nations where labor is relatively inexpensive. The provenance of each component or subcomponent can be completely known only if mechanisms are in place to track each contributor, and every subcontractor represents an opportunity to introduce vulnerabilities secretly.

The use of open source software is often advocated as a solution to the security problem described above (advocates assert that the many eyes of the open source community focused on the software would make it difficult or impossible to introduce deliberate flaws that endure), and open source software is increasingly being incorporated into systems to save time and money in the development process as well. Yet open source software development is essentially a form of outsourced development, except that the outsourcing is done on an ad hoc basis and even less may be known about the circumstances under which the code was originally produced than is the case with software produced under an outsourcing contract.

3 A famous paper by Ken Thompson in 1984 described how to hide malicious binary code in a way that cannot be detected by examining the source program. See Ken L. Thompson, "Reflections on Trusting Trust," Communications of the ACM 27(8):761-763, August 1984.

4 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, pp. 40-41.

5 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. vi.
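Even full source inspection cannot rule out subversion of the build toolchain itself. Thompson's "trusting trust" construction (footnote 3) can be suggested by a deliberately simplified sketch: a "compiler" that recognizes a login routine and splices a master password into its output. All names here are invented, and Thompson's actual attack also made the compiler re-infect future copies of itself, which this toy omits:

```python
def toy_compile(source: str) -> str:
    """A deliberately simplified 'compiler': compilation here is just
    pass-through, except that a hidden rule recognizes login code."""
    compiled = source
    # Hidden rule: when compiling something that looks like a password
    # check, splice in acceptance of the attacker's master password.
    if "def check_password" in source:
        compiled = source.replace(
            "return password == stored",
            "return password == stored or password == 'joshua'",  # back door
        )
    return compiled

# The source being compiled is completely clean; inspecting it
# reveals nothing about the back door.
login_source = (
    "def check_password(password, stored):\n"
    "    return password == stored\n"
)

binary = toy_compile(login_source)
namespace = {}
exec(binary, namespace)  # stands in for running the compiled output

print(namespace["check_password"]("secret", "secret"))  # True (normal use)
print(namespace["check_password"]("joshua", "secret"))  # True (back door)
```

The point of the construction is that the flaw lives only in the compiler's output: reviewing `login_source`, however many eyes are applied, cannot find it.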
Vulnerabilities could also be deliberately introduced by a cyberattacker, and there is no guarantee that the open source inspection process will uncover them.7 For example, a particular sequence of instructions and input, combined with a given system state, could take advantage of an obscure and poorly known characteristic of hardware functioning; programmers working for an attacking government and well versed in the minute behavioral details of the machine on which their code will run could thereby introduce functionality that would likely go undetected in any review.8

As an example of how outsourcing can be used to introduce vulnerabilities, a financial services company reportedly outsourced its application development to a company in the Far East. That company had been certified at CMM level 5, meaning that it had a well-established and documented process for developing software. Unknown to the financial services client, however, it also employed a few malicious insiders, who inserted a back door into the application delivered to the client. The client performed only a minimal security review as part of its acceptance testing, and so the back door went undetected. The back door consisted of an undocumented URL that could be accessed remotely and through which customer information such as account numbers and statement balances could be obtained. The back door was discovered months after deployment, when the developer's clients complained about fraudulent charges.9

A final kind of software error is sometimes called an emergent error.10 Emergent errors can arise when correct software is used in a situation or environment for which it was not originally designed and implemented: a program may work correctly in a given context and environment, but if it is moved to a different computing environment, it may begin to work incorrectly. For example, a software component Z may be certified as secure provided that certain conditions are met (such as constraints on the input values passed across its interface). It works correctly in environment A, which guarantees that the values passed are indeed restricted in accordance with those constraints. But if it is moved to environment B, which does not check the values passed to Z, the component may fail when values are passed that violate those constraints.

6 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 10.

7 Empirical results appear to suggest that open source software—though available for inspection by anyone—is in practice not often audited for security. See, for example, Hal Flynn, "Why Sardonix Failed," SecurityFocus, February 4, 2004, available at http://www.securityfocus.com/columnists/218.

8 See, for example, Olin Sibert, Phillip A. Porras, and Robert Lindell, "An Analysis of the Intel 80x86 Security Architecture and Implementations," IEEE Transactions on Software Engineering 22(5):283-293, May 1996; and Kris Kaspersky and Alice Chang, "Remote Code Execution Through Intel CPU Bugs," talk presented at Hack-In-The-Box, Dubai, United Arab Emirates, 2008, PowerPoint presentation available at http://nchovy.kr/uploads/3/303/D2T1%20-%20Kris%20Kaspersky%20-%20Remote%20Code%20Execution%20Through%20Intel%20CPU%20Bugs.pdf.

9 Ed Adams, "Biggest Information Security Mistakes That Organizations Make," Security Innovation, Inc., Wilmington, Mass., available at http://www.issa.org/Downloads/Whitepapers/Biggest-Information-Security-Mistakes_Security-Innovation.pdf.

10 Taimur Aslam, Ivan Krsul, and Eugene H. Spafford, "A Taxonomy of Security Vulnerabilities," in Proceedings of the National Information Systems Security Conference, pp. 551-560, October 1996, available at http://ftp.cerias.purdue.edu/pub/papers/taimur-aslam/aslam-krsul-spaf-taxonomy.pdf.

HARDWARE

Vulnerabilities can also be found in hardware, although hardware usually receives less attention in this regard. Hardware includes microprocessors, microcontrollers, firmware, circuit boards, power supplies, peripherals such as printers or scanners, storage devices, and communications equipment such as network cards. Tampering with such components may require physical access at some point in the hardware's life cycle, which includes access to the software and libraries of the CAD/CAM tools used to design the circuits embedded in the hardware. On the other hand, hardware is difficult to inspect, and so hardware compromises are hard to detect. Consider, for example, that peripheral devices, or even other circuit cards within the main computer housing, often have on-board processors and memory that can support an execution stream entirely separate from the one running on a system's "main" processor.

As an experiment to demonstrate the feasibility of making malicious modifications to hardware, King et al. developed two general-purpose methods for designing malicious processors and used them to implement attacks that could steal passwords, enable privilege escalation, and allow automatic logins into compromised systems.11 Furthermore, implementing these attacks required only small modifications to the baseline uncompromised processor. (For example, the login attack used only 1,341 additional logic gates, or 0.08 percent of the 1,787,958 logic gates in the baseline, yet an attacker using it would gain complete, high-level access to the machine.) Embedded in larger processors involving billions of gates, the changes required would be an even smaller percentage of the circuitry involved, and thus more difficult to detect.

An important exception to the rule that physical access is required to compromise hardware rests on the fact that many systems rely on a field-upgradable read-only memory (ROM) chip to support the boot sequence; corrupting or compromising the boot ROM can render a system entirely non-functional (as was the case with the Chernobyl virus12) or only selectively non-functional. To corrupt or compromise a boot ROM that is field-upgradable, the attacker need only masquerade as a legitimate user seeking to upgrade the ROM software. Another attack on programmable read-only memory exploits the fact that the relevant chips support only a limited number of write cycles.
Thus, a programmable read-only memory chip can be destroyed by an agent that repeatedly rewrites its contents a sufficient number of times. With many of today's computer system designs, corruption or destruction of a boot ROM may require at least several hours of manual repair to replace the ROM chip or some other component (such as a power supply) that may have been damaged by improper system operation. In addition, if this attack can be mounted successfully on many network routers at more or less the same time, it is likely to cause significant disruption in the overall network itself and to impede network repair efforts—and so restoring the overall network to its normal operating condition will take a much longer time.

11 Samuel T. King et al., "Designing and Implementing Malicious Hardware," Proceedings of the First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), April 2008, available at http://www.usenix.org/event/leet08/tech/full_papers/king/king.pdf.

12 The Chernobyl virus is further documented at http://www.cert.org/incident_notes/IN-99-03.html.

SEAMS BETWEEN HARDWARE AND SOFTWARE

Software and hardware are typically developed independently. Yet from a defensive perspective, the two are inseparable.13 Attacks designed to take advantage of vulnerabilities in the way software and hardware interact—almost always at some interface—may go unnoticed because testing and evaluation at the seam between them are often incidental rather than a focused activity.

COMMUNICATIONS CHANNELS

The communications channels between a system or network and the "outside" world are still another type of vulnerability. For a system to be useful, it must in general communicate with the outside world, and the communications channels used can be compromised—for example, by spoofing (an adversary pretends to be the "authorized" system), by jamming (an adversary denies access to anyone else), or by eavesdropping (an adversary obtains information intended to be confidential).

One example of a communications channel cyberattack might involve seizing control of an adversary satellite by compromising its command channels. Satellites communicate with their ground stations through wireless links, and if the command link is unencrypted or otherwise insecure, a Zendian satellite can be controlled by commands sent from the United States just as easily as by commands sent from Zendia. With access to the command link, an adversary's satellites can be turned off, redirected, or even directed to self-destruct by being operated in unsafe modes.

CONFIGURATION

Most information technology systems—especially systems based on off-the-shelf commercial components—can be configured in different ways to support different user preferences.
Configuration management—the task of ensuring that a system is configured in accordance with actual user desires—is often challenging and difficult, and errors in configuration can result in security vulnerabilities. (Many errors are the result of default configurations that turn off security functionality in order to ease the task of system setup. An example of such an error is a default password, such as "system" or "password," that is widely known; such a password will remain in effect until someone chooses to change it, and such a change may never occur simply because the need to do so is overlooked.) Other configuration errors result from explicit user choices made to favor convenience—for example, a system administrator may configure a system to allow remote access through a dial-in modem attached to his desktop computer so that he can work at home, but the presence of such a feature can also be exploited by an attacker.

13 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 4.

Configuration-based vulnerabilities are in some sense highly fragile, because they can be fixed on very short notice. All it takes to eliminate a configuration vulnerability is for the operator to choose a different configuration and implement it, which is usually a less demanding task than fixing an implementation error.
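Unlike implanted back doors, configuration vulnerabilities of the default-password kind can often be found, and fixed, mechanically, which is part of what makes them fragile. The following is a minimal audit sketch; the configuration fields, account names, and password list are invented for illustration:

```python
# Hypothetical configuration audit: flag accounts that still use widely
# known default passwords, and flag risky convenience features.
DEFAULT_PASSWORDS = {"system", "password", "admin", "changeme"}

def audit_config(config: dict) -> list:
    findings = []
    for account, password in config.get("accounts", {}).items():
        if password.lower() in DEFAULT_PASSWORDS:
            findings.append(f"account '{account}' uses a default password")
    if config.get("remote_dialin_enabled"):
        findings.append("remote dial-in access is enabled")
    return findings

router_config = {
    "accounts": {"admin": "password", "ops": "zK9!vq31"},
    "remote_dialin_enabled": True,
}

for finding in audit_config(router_config):
    print(finding)
# account 'admin' uses a default password
# remote dial-in access is enabled
```

Because the fix is simply choosing and applying a different configuration, an operator can act on such findings immediately, without waiting for a vendor patch.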
