
Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (2009)

Appendix E

Technical Vulnerabilities Targeted by Cyber Offensive Actions

The discussion in this appendix is based largely though not entirely on an earlier National Research Council report on cybersecurity describing vulnerabilities in the information technology on which the United States relies.[1] However, there is no reason to suppose and no evidence available that suggests that other nations (or other non-national parties) are systematically better than the United States in eliminating vulnerabilities from the information technology that they use.

Software

Software constitutes the most obvious set of vulnerabilities that an attacker might exploit. In a running operating system or application, vulnerabilities may be present as the result of faulty program design or implementation, and the exploitation of such vulnerabilities may become possible when the targeted system comes into contact with a hostile trigger (either remotely or close up). For example, a pre-implanted vulnerability in a program may be triggered at a particular time, or when a particular input is received.

When vendors find vulnerabilities, they usually issue patches to fix them. But the issuance of a patch sometimes increases the threat to those who do not install it—when a patch is widely disseminated, it also serves to notify a broad range of would-be attackers that a specific vulnerability exists. And if the patch is not installed, a broader range of attackers is likely to have knowledge of the vulnerability than if the patch had not been distributed at all. And patches are not always installed when the vendor issues them because patch installation will from time to time damage existing functionality on a system (e.g., causing a critical application to stop working until it can be made compatible with the patch to be installed).

[1] National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.
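To make the idea of a pre-implanted, trigger-based vulnerability concrete, the following is a minimal Python sketch (not drawn from the report; the date, trigger string, and function names are hypothetical). The routine behaves normally under ordinary testing but changes behavior after a chosen date or when one specific input arrives.

```python
# Illustrative sketch of a pre-implanted, trigger-based vulnerability.
# All names, dates, and strings are hypothetical.
import datetime

def process_record(record: dict) -> str:
    """Looks like ordinary input handling to a reviewer or tester."""
    # Hidden trigger 1: activate only after a chosen date, so routine
    # testing before that date never exercises this path.
    if datetime.date.today() >= datetime.date(2026, 1, 1):
        return "TRIGGERED: time-based payload would run here"

    # Hidden trigger 2: activate on one specific, innocuous-looking input.
    if record.get("comment") == "xyzzy-7f3a":
        return "TRIGGERED: input-based payload would run here"

    # Normal behavior, which is all that ordinary use ever shows.
    return "processed record {}".format(record.get("id", "unknown"))

if __name__ == "__main__":
    print(process_record({"id": 42, "comment": "routine entry"}))
```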

As a rule, vulnerabilities resulting from design errors or insecure design choices are harder to fix than those resulting from implementation errors. Perhaps still more difficult are vulnerabilities introduced by unintended functionality (the euphemism for adding a function to software that helps an attacker but that is not desired by the authorized user or developer)—the classic “back-door” vulnerability.[2] Most system evaluation checks the extent to which a product meets the formal requirements, and not whether it does more than intended. Whereas vulnerabilities due to faulty design and implementation may be uncovered during the testing process or exposed during system operation and then fixed, vulnerabilities associated with unintended functionality may go undetected because the problem is tantamount to proving a negative. Today, applications and operating systems are made up of millions of lines of code, not all of which can possibly be audited for every changed line of source code.

A widely used program might have vulnerabilities deliberately introduced into it by a “rogue” programmer employed by the software vendor but planted by the attacker. (One of the most plausible vectors for the surreptitious introduction of hostile code is a third-party device driver. In some operating systems, drivers almost always require the calling system to delegate to them privileges higher than those granted to ordinary users—privileges that allow the code within drivers to bypass operating system protections.)

[2] As an example of a back door that is harmless, most versions of Microsoft Word from Word 97 to Word 2003 contain some unexpected functionality—typing “=rand()” in a Word document and then pressing the ENTER key results in three paragraphs of five repetitions of the sentence “The quick brown fox jumps over the lazy dog.” This particular back door is harmless and is even documented by Microsoft (see “How to Insert Sample Text into a Document in Word,” available at http://support.microsoft.com/kb/212251). Such functionality could easily not be documented, and could easily be harmful functionality as well. For example, a security interface to a computer might be designed to require the user to enter a password and to insert a physical “smart card” into a slot before granting her access. But the interface could easily be programmed to ignore the smart-card requirement when a special password is entered, and then to grant the user many more privileges than would be normal. On the other hand, the in-advance installation of a back-door vulnerability always runs a risk of premature exposure—that is, it may be discovered and fixed before the attacker can use it. Even worse from the attacker’s standpoint, it may be fixed in such a way that the attacked system appears vulnerable but is in fact not vulnerable to that particular attack. Thus, the attacker may attack and believe he was successful, even though he was not.
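The hidden smart-card bypass described in note [2] can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not code from any real product; the magic password and the credential store are invented.

```python
# Hypothetical sketch of the back door described in note [2]: a login
# routine that silently skips the smart-card check when a special
# password is supplied. Not taken from any real system.

MAGIC_PASSWORD = "0penS3same"   # known only to whoever planted the back door

def lookup_password(username: str) -> str:
    # Stand-in for a real credential store.
    return {"alice": "correct-horse", "bob": "battery-staple"}.get(username, "")

def login(username: str, password: str, smart_card_present: bool) -> bool:
    # Documented policy: a valid password AND the physical smart card.
    if password == lookup_password(username) and smart_card_present:
        return True

    # Undocumented path: the magic password alone grants access,
    # bypassing the smart-card requirement entirely.
    if password == MAGIC_PASSWORD:
        return True

    return False

if __name__ == "__main__":
    print(login("alice", "correct-horse", smart_card_present=True))    # True: normal path
    print(login("alice", "correct-horse", smart_card_present=False))   # False: card required
    print(login("anyone", "0penS3same", smart_card_present=False))     # True: back door
```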

To ensure that such vulnerabilities are not introduced, vendors take many steps such as multiple code reviews during the software development process. But source code does not always reveal the entire functionality of a system. For example, compilers are used to generate object code from source code. The compiler itself must be secure, for it could introduce object code that subversively and subtly modifies the functionality represented in the source code.[3]

Moreover, maliciously constructed code intentionally introduced to implant vulnerabilities in a system for later exploitation is typically more difficult to detect than are vulnerabilities that arise in the normal course of software development. Attackers highly skilled in the art of obfuscating malicious code can make finding intentionally introduced vulnerabilities a much harder problem than finding accidental flaws.[4] Finding such vulnerabilities requires tools and skills far beyond those typically employed during system testing and evaluation aimed at discovering accidentally introduced defects. The discovery process requires detailed analysis by human experts, making it extremely expensive. Indeed, it is rarely done except for systems in which reliability and security are paramount (e.g., nuclear command and control systems).[5]

[3] A famous paper by Ken Thompson in 1984 described how to hide malicious binary code in a way that cannot be detected by examining the source program. See Ken L. Thompson, “Reflections on Trusting Trust,” Communications of the ACM 27(8):761-763, August 1984.
[4] Defense Science Board, “Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software,” U.S. Department of Defense, September 2007, pp. 40-41.
[5] Defense Science Board, “Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software,” U.S. Department of Defense, September 2007, p. vi.
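The point of note [3], that a compromised build tool can emit behavior that never appears in the source, can be illustrated with a toy "compiler" pass. This is a deliberately simplified, hypothetical sketch that works by string matching on source text; Thompson's actual construction is self-reproducing and far subtler.

```python
# Toy illustration of a subverted build step: a "compiler" that emits extra
# code whenever it recognizes a password-checking routine in the source.
# Hypothetical and much simplified; a real subverted compiler would operate
# on parsed program structure, not string matching.

BACKDOOR = ('    if password == "letmein-9c2f":  # injected; absent from the source\n'
            '        return True\n')

def subverted_compile(source: str) -> str:
    """Return the 'object code' (here, transformed source) with a back door."""
    output = []
    for line in source.splitlines(keepends=True):
        output.append(line)
        if line.lstrip().startswith("def check_password("):
            output.append(BACKDOOR)
    return "".join(output)

CLEAN_SOURCE = (
    "def check_password(user, password):\n"
    "    return password == stored_password(user)\n"
)

if __name__ == "__main__":
    # The source contains no back door; the emitted code does.
    print(subverted_compile(CLEAN_SOURCE))
```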

The introduction of deliberate vulnerabilities into software is facilitated by the economic imperatives of software development and the opaqueness of the software development supply chain. Today, developing custom software for every application is impractical in terms of both cost and time. Custom software developed for a single purpose must be paid for entirely by the party for which it is developed, and thus software producers often seek to reduce costs by using commercial off-the-shelf (COTS) software and/or outsourcing their software development whenever possible (e.g., using commercial operating or database systems), even if critical systems are involved. In practice, systems are composed of components designed and implemented by many vendors. These vendors in turn often subcontract major components, and those subcontractors may in turn subcontract portions of their work. Because the spread of the Internet and high-speed communications capabilities such as broadband fiber optics worldwide has made global development of software not only possible, but also desirable for cheaply tapping the broadest range of talent, these subcontractors are often located in nations where labor is relatively inexpensive. The provenance of each component or subcomponent can only be completely known if mechanisms are in place to track each contributor, and every subcontractor represents an opportunity to introduce vulnerabilities secretly.[6]

The use of open source software is often advocated as a solution to the security problem described above (advocates assert that the many eyes of the open source community focused on software would make it difficult or impossible to introduce deliberate flaws that will endure), and open source software is increasingly being incorporated into systems to save time and money in the development process as well. Open source software development is essentially a form of outsourced development except that the outsourcing is done on an ad hoc basis and even less may be known about the circumstances under which the code is originally produced than is the case with software produced under an outsourcing contract. Vulnerabilities could be deliberately introduced by a cyberattacker, and there is no guarantee that the open source inspection process will uncover such vulnerabilities.[7]

For example, a particular sequence of instructions and input combined with a given system state could take advantage of an obscure and poorly known characteristic of hardware functioning, which means that programmers working for an attacking government and well versed in minute behavioral details of the machine on which their code will be running could introduce functionality that would likely go undetected in any review of it.[8]

[6] Defense Science Board, “Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software,” U.S. Department of Defense, September 2007, p. 10.
[7] Empirical results appear to suggest that open source software—though available for inspection by anyone—in practice is not often audited for security. See, for example, Hal Flynn, “Why Sardonix Failed,” SecurityFocus, February 4, 2004, available at http://www.securityfocus.com/columnists/218.
[8] See, for example, Olin Sibert, Phillip A. Porras, and Robert Lindell, “An Analysis of the Intel 80x86 Security Architecture and Implementations,” IEEE Transactions on Software Engineering 22(5):283-293, May 1996; and Kris Kaspersky and Alice Chang, “Remote Code Execution Through Intel CPU Bugs,” talk presented at Hack-In-The-Box, Dubai, United Arab Emirates, 2008, PowerPoint presentation available at http://nchovy.kr/uploads/3/303/D2T1%20-%20Kris%20Kaspersky%20-%20Remote%20Code%20Execution%20Through%20Intel%20CPU%20Bugs.pdf.

As an example of how outsourcing can be used to introduce vulnerabilities, a financial services company reportedly outsourced its application development to a company in the Far East. The company had been certified as a CMM level-5 company, meaning that it had a well-established and documented process for developing software. However, unknown to the company, it also employed a few malicious users who inserted a back door in the application that was sent to the financial services client. The client performed only a minimal security review as part of its acceptance testing, and so the back door went undetected. The back door consisted of an undocumented URL that could be accessed remotely, through which malicious users were able to obtain customer information such as account numbers, statement balances, and other information. The back door was discovered months after deployment, after the developer’s clients complained about fraudulent charges.[9]

A final kind of software error is sometimes called an emergent error.[10] Emergent errors can arise when correct software is used in a situation or environment for which it was not originally designed and implemented. For example, a program may work correctly in a given context and environment. However, if it is moved to a different computing environment, it may begin to work incorrectly. A software component Z may be certified as being secure, provided certain conditions are met (such as certain constraints on the input values being passed across its interface). It works correctly in environment A, which guarantees that the values passed are indeed restricted in accordance with those constraints. But if it is moved to environment B, which does not check the values passed to Z, the component may fail if values are passed that are not consistent with those constraints.

[9] Ed Adams, “Biggest Information Security Mistakes That Organizations Make,” Security Innovation, Inc., Wilmington, Mass., available at http://www.issa.org/Downloads/Whitepapers/Biggest-Information-Security-Mistakes_Security-Innovation.pdf.
[10] Taimur Aslam, Ivan Krsul, and Eugene H. Spafford, “A Taxonomy of Security Vulnerabilities,” in Proceedings of the 19th National Information Systems Security Conference, pp. 551-560, October 1996, available at http://ftp.cerias.purdue.edu/pub/papers/taimur-aslam/aslam-krsul-spaf-taxonomy.pdf.
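The component-Z scenario above can be made concrete with a short Python sketch. The constraint, the component, and both environments are hypothetical; the point is only that the identical, unchanged component misbehaves when its caller stops enforcing the interface contract.

```python
# Hypothetical sketch of an emergent error: component_z is correct provided
# its input respects a documented constraint (0 <= rate < 1). Environment A
# enforces the constraint before crossing the interface; environment B does
# not, and the same unchanged component misbehaves.

def component_z(rate: float) -> float:
    """Certified correct for 0 <= rate < 1 (say, a fraction of capacity)."""
    return 100.0 / (1.0 - rate)   # undefined or nonsensical outside the contract

def environment_a(raw: float) -> float:
    # Environment A clamps values to the documented range before the call.
    rate = min(max(raw, 0.0), 0.99)
    return component_z(rate)

def environment_b(raw: float) -> float:
    # Environment B passes values through unchecked.
    return component_z(raw)

if __name__ == "__main__":
    print(environment_a(2.5))   # in-contract input: well-defined result
    print(environment_b(2.5))   # out-of-contract input: negative, meaningless result
    # environment_b(1.0) would raise ZeroDivisionError: the "emergent" failure.
```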

Hardware

Vulnerabilities can also be found in hardware, although less attention is usually paid to hardware. Hardware includes microprocessors, microcontrollers, firmware, circuit boards, power supplies, peripherals such as printers or scanners, storage devices, and communications equipment such as network cards. Tampering with such components may require physical access at some point in the hardware’s life cycle, which includes access to the software and libraries of the CAD/CAM tools used to design the circuits embedded in the hardware. On the other hand, hardware is difficult to inspect, and so hardware compromises are hard to detect. Consider, for example, that peripheral devices or even other circuit cards within the main computer housing often have on-board processors and memory that can support an execution stream entirely separate from that running on a system’s “main” processor.

As an experiment to demonstrate the feasibility of making malicious modifications to hardware, King et al. developed two general-purpose methods for designing malicious processors, and used these methods to implement attacks that could steal passwords, enable privilege escalation, and allow automatic logins into compromised systems.[11] Furthermore, the implementation of these attacks required only small amounts of modification to the baseline uncompromised processor. (For example, implementation of the login attack used only 1,341 additional logic gates, or 0.08 percent of the 1,787,958 logic gates used in the baseline; yet an attacker using this attack would gain complete and high-level access to the machine.) Embedded in larger processors involving billions of gates, the changes required would be even smaller (and thus more difficult to detect) as a percentage of the circuitry involved.

An important exception to the rule that physical access is required in order to compromise hardware is based on the fact that many systems rely on a field-upgradable read-only memory (ROM) chip to support a boot sequence, and corrupting or compromising the boot ROMs can render a system entirely non-functional (as was the case in the Chernobyl virus[12]) or only selectively non-functional. To corrupt or compromise the boot ROM that is field-upgradable, the attacker need only masquerade as a legitimate user seeking to upgrade the ROM software. Another attack on programmable read-only memory exploits the fact that the relevant chips support only a limited number of write cycles. Thus, a programmable read-only memory chip can be destroyed by an agent that repeatedly rewrites its contents a sufficient number of times. With many of today’s computer system designs, corruption or destruction of a boot ROM may require at least several hours of manual repair to replace the ROM chip or some other component (such as a power supply) that may have been damaged by improper system operation. In addition, if this attack can be mounted successfully on many network routers at more or less the same time, it is likely to cause significant disruption in the overall network itself and impede network repair efforts—and so restoring the overall network to its normal operating condition will take a much longer time.

[11] Samuel T. King et al., “Designing and Implementing Malicious Hardware,” Proceedings of the First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), April 2008, available at http://www.usenix.org/event/leet08/tech/full_papers/king/king.pdf.
[12] The Chernobyl virus is further documented at http://www.cert.org/incident_notes/IN-99-03.html.

Seams between Hardware and Software

Software and hardware are typically developed independently. Yet from a defensive perspective, the two are inseparable.[13] Attacks designed to take advantage of vulnerabilities in the way software and hardware interact—almost always at some interface—may go unnoticed because testing and evaluation at the seam between them are often incidental rather than a focused activity.

Communications Channels

The communications channels between the system or network and the “outside” world are still another type of vulnerability. For a system to be useful it must in general communicate with the outside world, and the communications channels used can be compromised—for example, by spoofing (an adversary pretends to be the “authorized” system), by jamming (an adversary denies access to anyone else), or by eavesdropping (an adversary obtains information intended to be confidential).

One example of a communications channel cyberattack might involve seizing control of an adversary satellite by compromising its command channels. Satellites communicate with their ground stations through wireless communications, and if the command link is unencrypted or otherwise insecure, a Zendian satellite can be controlled by commands sent from the United States just as easily as by commands sent from Zendia. With access to the command link, adversary satellites can be turned off, redirected, or even directed to self-destruct by operating in unsafe modes.

[13] Defense Science Board, “Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software,” U.S. Department of Defense, September 2007, p. 4.
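The satellite example above turns on whether commands on the link are authenticated. The sketch below, in Python, contrasts a receiver that accepts any well-formed command with one that verifies a message authentication code under a shared key; the frame format, key, and command names are invented for illustration.

```python
# Hypothetical sketch contrasting an unauthenticated command link with one
# protected by a message authentication code. The frame format, shared key,
# and command names are invented for illustration.
import hashlib
import hmac

SHARED_KEY = b"ground-station-secret"   # held only by the legitimate operator

def accept_unauthenticated(frame: bytes) -> str:
    # Spoofable: anyone who can reach the receiver and knows the frame
    # format can issue commands.
    return "executing: " + frame.decode()

def accept_authenticated(frame: bytes, tag: bytes) -> str:
    # Verify the HMAC over the frame before acting on it.
    expected = hmac.new(SHARED_KEY, frame, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise PermissionError("command rejected: bad authentication tag")
    return "executing: " + frame.decode()

if __name__ == "__main__":
    command = b"POINT_ANTENNA azimuth=137"
    good_tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

    print(accept_unauthenticated(b"SAFE_MODE_OFF"))   # accepted from anyone
    print(accept_authenticated(command, good_tag))    # accepted: tag verifies
    try:
        accept_authenticated(command, b"\x00" * 32)   # forged tag
    except PermissionError as error:
        print(error)
```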

Configuration

Most information technology systems—especially systems based on off-the-shelf commercial components—can be configured in different ways to support different user preferences. Configuration management—the task of ensuring that a system is configured in accordance with actual user desires—is often challenging and difficult, and errors in configuration can result in security vulnerabilities. (Many errors are the result of default configurations that turn off security functionality in order to ease the task of system setup. An example of such an error is a default password, such as “system” or “password,” that is widely known—such a password will remain in effect until someone chooses to change it, and such a change may never occur simply because the need to do so is overlooked.) Other configuration errors result from explicit user choices made to favor convenience—for example, a system administrator may configure a system to allow remote access through a dial-in modem attached to his desktop computer so that he can work at home, but the presence of such a feature can also be used by an attacker.

Configuration-based vulnerabilities are in some sense highly fragile, because they can be fixed on very short notice. All it takes for a configuration vulnerability to be eliminated is for the operator to choose a different configuration and implement it, which is usually a less demanding task than fixing an implementation error.
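Because configuration vulnerabilities can be found and fixed quickly, even a simple audit helps. The following minimal Python sketch flags the kinds of errors mentioned above (a widely known default password, a convenience feature such as dial-in access left enabled); the setting names and checks are hypothetical.

```python
# Minimal, hypothetical configuration audit: flag the kinds of errors the
# text mentions, such as widely known default passwords and convenience
# features left enabled. Setting names and checks are invented.

DEFAULT_PASSWORDS = {"system", "password", "admin", "changeme"}

def audit(config: dict) -> list:
    findings = []
    if config.get("admin_password", "").lower() in DEFAULT_PASSWORDS:
        findings.append("admin_password is a widely known default; change it")
    if config.get("remote_dialin_enabled", False):
        findings.append("remote dial-in access is enabled; disable it unless required")
    if not config.get("audit_logging_enabled", True):
        findings.append("audit logging is turned off; enable it")
    return findings

if __name__ == "__main__":
    example = {
        "admin_password": "password",
        "remote_dialin_enabled": True,
        "audit_logging_enabled": False,
    }
    for finding in audit(example):
        print("FINDING:", finding)
```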

About This Book

The United States is increasingly dependent on information and information technology for both civilian and military purposes, as are many other nations. Although there is a substantial literature on the potential impact of a cyberattack on the societal infrastructure of the United States, little has been written about the use of cyberattack as an instrument of U.S. policy.

Cyberattacks—actions intended to damage adversary computer systems or networks—can be used for a variety of military purposes. But they also have application to certain missions of the intelligence community, such as covert action. They may be useful for certain domestic law enforcement purposes, and some analysts believe that they might be useful for certain private sector entities that are themselves under cyberattack. This report considers all of these applications from an integrated perspective that ties together technology, policy, legal, and ethical issues.

Focusing on the use of cyberattack as an instrument of U.S. national policy, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities explores important characteristics of cyberattack. It describes the current international and domestic legal structure as it might apply to cyberattack, and considers analogies to other domains of conflict to develop relevant insights. Of special interest to the military, intelligence, law enforcement, and homeland security communities, this report is also an essential point of departure for nongovernmental researchers interested in this rarely discussed topic.
