
Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (2009)



2 Technical and Operational Considerations in Cyberattack and Cyberexploitation

This chapter focuses on technical and operational dimensions of cyberattack and cyberexploitation. Section 2.1 provides the essential points of the entire chapter, with the remainder of the chapter providing analytical backup. Section 2.2 addresses the basic technology of cyberattack. Section 2.3 addresses various operational considerations associated with "weaponizing" the basic technology of cyberattack. These sections are relevant both to the attacker, who uses cyberattack as a tool of his own choosing, and to the defender, who must cope with and respond to incoming cyberattacks launched by an attacker. Section 2.4 focuses on the centrally important issue of characterizing an incoming cyberattack. Cyberattack and cyberdefense are sometimes intimately related through the practice of active defense (Section 2.5), which may call for the defender to launch a cyberattack of its own in response to an incoming cyberattack. Section 2.6 addresses cyberexploitation and how its technical and operational dimensions differ from those of cyberattack. Section 2.7 draws some lessons from examining criminal use of cyberattack and cyberexploitation. For perspective on the tools used for cyberattack, Table 2.1 provides a comparison of tools for kinetic attack and tools for cyberattack.

Note: The committee has no specific information on actual U.S. cyberattack or cyberexploitation capabilities, and all references in this chapter to U.S. cyberattack or cyberexploitation capabilities are entirely hypothetical, provided for illustrative purposes only.

TABLE 2.1  A Comparison of Key Characteristics of Cyberattack Versus Kinetic Attack

Effects of significance
  Kinetic attack: Direct effects usually more important than indirect effects
  Cyberattack: Indirect effects usually more important than direct effects

Reversibility of direct effects
  Kinetic attack: Low; entails reconstruction or rebuilding that may be time-consuming
  Cyberattack: Often highly reversible on a short time scale

Acquisition cost for weapons
  Kinetic attack: Largely in procurement
  Cyberattack: Largely in research and development

Availability of base technologies
  Kinetic attack: Restricted in many cases
  Cyberattack: Widespread in most cases

Intelligence requirements for successful use
  Kinetic attack: Usually smaller than those required for cyberattack
  Cyberattack: Usually high compared to kinetic weapons

Uncertainties in planning
  Kinetic attack: Usually smaller than those involved in cyberattack
  Cyberattack: Usually high compared to kinetic weapons

2.1  Important Characteristics of Cyberattack and Cyberexploitation

For purposes of this report, cyberattack refers to the use of deliberate actions—perhaps over an extended period of time—to alter, disrupt, deceive, degrade, or destroy adversary computer systems or networks or the information and/or programs resident in or transiting these systems or networks. Several characteristics of weapons for cyberattack are worthy of note:

• The indirect effects of such weapons are almost always more consequential than the direct effects of the attack. (Direct or immediate effects are effects on the computer system or network attacked. Indirect or follow-on effects are effects on the systems and/or devices that the attacked computer system or network controls or interacts with, or on the people that use or rely on the attacked computer system or network.) That is, the computer or network attacked is much less relevant than the systems controlled by the targeted computer or network or the decision making that depends on the information contained in or processed by the targeted computer or network, and indeed the indirect effect is often the primary purpose of the attack. Furthermore, the scale of damage of a cyberattack can span an enormous range.

• The outcomes of a cyberattack are often highly uncertain. Minute details of configuration can affect the outcome of a cyberattack, and cascading effects often cannot be reliably predicted. One consequence is that both the collateral damage from a cyberattack and the assessment of the damage it has done may be very difficult to estimate.

• Cyberattacks are often very complex to plan and execute. They can involve a much larger range of options than most traditional military operations, and because they are fundamentally about an attack's secondary and tertiary effects, there are many more possible outcome paths, whose analysis often requires highly specialized knowledge. The time scales on which cyberattacks operate can range from tenths of a second to years, and the spatial scales may be anywhere from "concentrated in a facility next door" to globally dispersed.

• Compared to traditional military operations, cyberattacks are relatively inexpensive. The underlying technology for carrying out cyberattacks is widely available, inexpensive, and easy to obtain. An attacker can compromise computers belonging to otherwise uninvolved parties to take part in an attack; use automation to increase the amount of damage that can be done per person attacking, increase the speed at which the damage is done, and decrease the required knowledge and skill level of the operator of the system; and even steal the financial assets of an adversary to use for its own ends. On the other hand, some cyberattack weapons are usable only once or a few times.

• The identity of the originating party behind a significant cyberattack can be concealed with relative ease, compared to that of a significant kinetic attack. Cyberattacks are thus easy to conduct with plausible deniability—indeed, most cyberattacks are inherently deniable. Cyberattacks are also well suited to serving as instruments of catalytic conflict, that is, instigating conflict between two other parties.

Cyberexploitations differ from cyberattacks primarily in their objectives and in the legal constructs surrounding them. Yet much of the technology underlying cyberexploitation is similar to that of cyberattack, and the same is true for some of the operational considerations as well. A successful cyberattack requires a vulnerability, access to that vulnerability, and a payload to be executed. A cyberexploitation requires the same three things, and the only difference is in the payload to be executed. That is, what technically distinguishes a cyberexploitation from a cyberattack is the nature of the payload. These technical similarities often mean that a targeted party may not be able to distinguish easily between a cyberexploitation and a cyberattack—a fact that may lead that party to make incorrect or misinformed decisions. On the other hand, the primary technical requirement of a cyberexploitation is that the delivery and execution of its payload must be accomplished quietly and undetectably—secrecy is often far less important when cyberattack is the mission.
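
The three elements just described can be captured in a small conceptual model. The Python sketch below is illustrative only; the class and field names are assumptions made for this example rather than terminology from the report, and the single method simply encodes the observation that attack and exploitation share a vulnerability and an access path and differ only in the payload.

```python
from dataclasses import dataclass
from typing import List, Literal

@dataclass
class Vulnerability:
    location: str            # e.g., "application software", "firmware", "operator"
    publicly_known: bool     # False corresponds to a zero-day defect

@dataclass
class Access:
    path: Literal["remote", "close"]   # Internet-reachable vs. locally implanted
    persistent: bool                   # access paths may be transient or intermittent

@dataclass
class Payload:
    intent: Literal["attack", "exploitation"]
    actions: List[str]                 # e.g., ["copy files"] vs. ["corrupt files"]

@dataclass
class CyberOperation:
    vulnerability: Vulnerability
    access: Access
    payload: Payload

    def is_attack(self) -> bool:
        # Attack and exploitation share the first two elements; only the
        # payload's intent differs.
        return self.payload.intent == "attack"

# Example: the same vulnerability and access path, two different payloads.
vuln = Vulnerability("application software", publicly_known=False)
path = Access("remote", persistent=True)
exploit_op = CyberOperation(vuln, path, Payload("exploitation", ["copy files"]))
attack_op = CyberOperation(vuln, path, Payload("attack", ["corrupt files"]))
print(exploit_op.is_attack(), attack_op.is_attack())  # False True
```

Because the first two elements are identical in this model, a defender observing only the intrusion itself cannot tell which kind of operation is under way, which is the ambiguity the paragraph above describes.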

2.2  The Basic Technology of Cyberattack

Perhaps the most important point about cyberattack from the standpoint of a major nation-state backed by large resources, national intelligence capabilities, and political influence is that its cyberattack capabilities dwarf the kinds of cyberattacks that most citizens have experienced in everyday life or read about in the newspapers. To use a sports metaphor, the cyberattacks of the misguided teenager—even sophisticated ones—could be compared to the game that a good high school football team can play, whereas the cyberattacks that could be conducted by a major nation-state would be more comparable to the game of a professional football team with a 14-2 win-loss record in the regular season.

2.2.1  Information Technology and Infrastructure

Before considering the basic technology of cyberattack, it is helpful to review a few facts about information technology (IT) and today's IT infrastructure.

• The technology substrate of today's computers, networks, operating systems, and applications is not restricted to the U.S. military, or even just to the United States. Indeed, it is widely available around the world, to nations large and small, to subnational groups, and even to individuals.

• The essential operating parameters of this technology substrate are determined largely by commercial needs rather than military needs. Military IT draws heavily on commercial IT rather than the reverse.

• A great deal of the IT infrastructure is shared among nations and between civilian and military sectors, though the extent of such sharing varies by nation. Systems and networks used by many nations are built by the same IT vendors. Government and military users often use commercial Internet service providers. Consequently, these nominally private entities exert considerable influence over the environment in which any possible cyberconflict might take place.

A primer on cyberattack in a military context can be found in Gregory Rattray, Strategic Warfare in Cyberspace, MIT Press, Cambridge, Mass., 2001. Rattray's treatment covers some of the same ground covered in this chapter.

2.2.2  Vulnerability, Access, and Payload

A successful cyberattack requires a vulnerability, access to that vulnerability, and a payload to be executed. In a non-cyber context, a vulnerability might be an easily pickable lock on a file cabinet. Access would be an available path for reaching the file cabinet—and from an intruder's perspective, access to a file cabinet located on the International Space Station would pose a very different problem from that posed by the same cabinet located in an office in Washington, D.C. The payload is the action taken by the intruder after the lock is picked. For example, he can destroy the papers inside, or he can alter some of the information on those papers.

2.2.2.1  Vulnerabilities

For a computer or network, a vulnerability is an aspect of the system that can be used by the attacker to compromise one or more of the attributes described in the previous section. Such weaknesses may be accidentally introduced through a design or implementation flaw, or they may be introduced intentionally. An unintentionally introduced defect ("bug") may open the door for opportunistic use of the vulnerability by an attacker who learns of its existence. Many vulnerabilities are widely publicized after they are discovered and may be used by anyone with moderate technical skills until a patch can be disseminated and installed. Attackers with the time and resources may also discover unintentional defects that they protect as valuable secrets—the basis of so-called zero-day exploits. As long as those defects go unaddressed, the vulnerabilities they create may be used by the attacker.

In the lexicon of cybersecurity, "using" or "taking advantage" of a vulnerability is often called "exploiting a vulnerability." Recall that Chapter 1 uses the term "cyberexploitation" in an espionage context—a cyber offensive action conducted for the purpose of obtaining information. The context of usage will usually make clear which of these meanings of "exploit" is intended.

The lag time between dissemination of a security fix to the public and its installation on a specific computer system may be considerable, and it is not always due to unawareness on the part of the system administrator. It sometimes happens that the installation of a fix will cause an application running on the system to cease working, and administrators may have to weigh the potential benefit of installing a security fix against the potential cost of rendering a critical application non-functional. Attackers take advantage of this lag time to exploit vulnerabilities.

A zero-day attack is a previously unseen attack on a previously unknown vulnerability. The term refers to the fact that the vulnerability has been known to the defender for zero days. (The attacker has usually known of it for a much longer time.) The most dangerous is a zero-day attack on a remotely accessible service that runs by default on all versions of a widely used operating system distribution. These types of remotely accessible zero-day attacks on services appear to be found less frequently as time goes on. In response, attackers have shifted their focus to the client side, resulting in many recent zero-day attacks on client-side applications. For data and analysis of zero-day attack trends, see pages 278-287 in Daniel Geer, Measuring Security, Cambridge, Mass., 2006, available at http://geer.tinho.net/measuringsecurity.tutorialv2.pdf.
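
To make the timing described in the notes above concrete, the following back-of-envelope sketch computes the exposure windows implied by a set of assumed dates for defect introduction, public disclosure, patch release, and local installation. None of the dates are real data; the point is only that the period before disclosure and the lag after a fix is released are separate contributions to a system's exposure.

```python
from datetime import date

# All dates below are invented for illustration.
defect_shipped     = date(2008, 1, 15)   # vulnerable version enters service
publicly_disclosed = date(2008, 9, 1)    # vulnerability becomes publicly known
patch_released     = date(2008, 9, 20)
patch_installed    = date(2008, 11, 5)   # administrator defers the fix for compatibility testing

zero_day_period = (publicly_disclosed - defect_shipped).days   # defenders unaware of the defect
patch_lag       = (patch_installed - patch_released).days      # fix available but not yet applied
total_exposure  = (patch_installed - defect_shipped).days

print(f"Pre-disclosure (zero-day) period: {zero_day_period} days")
print(f"Lag between fix release and installation: {patch_lag} days")
print(f"Total window of exposure for this system: {total_exposure} days")
```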

Two additional factors have increased opportunities for the attacker. First, the use of software in society has grown rapidly in recent years, and the sheer amount of software in use continues to expand across societal functions. For instance, a study by the Center for Strategic and International Studies estimated that the amount of software used in Department of Defense systems has been increasing rapidly, with no let-up expected for the foreseeable future. More software in use inevitably means more vulnerabilities.

Second, software has also grown in complexity. Users demand more and more from software, and thus the complexity of software needed to meet user requirements increases steadily. Complex software, in turn, is difficult to understand, evaluate, and test. In addition, software is generally developed to provide functionality for a wide range of users, and for any particular user only a limited set of that functionality may actually be useful. But whether used or not, every available capability presents an opportunity for new vulnerabilities. Simply put, unneeded capability means unnecessary vulnerability. Even custom systems often include non-essential but "nice-to-have" features that from a security perspective represent added potential for risk, and the software acquisition process is often biased in favor of excess functionality (seen as added value) while failing to properly evaluate the added risk.

Of course, vulnerabilities are of no use to an attacker unless the attacker knows they are present on the system or network being attacked. But an attacker may have some special way of finding vulnerabilities, and nation-states in particular often have special advantages in doing so. For example, although proprietary software producers jealously protect their source code as intellectual property on which their business depends, some such producers are known to provide source-code access to governments under certain conditions.

Center for Strategic and International Studies, "An Assessment of the National Security Software Industrial Base," presented at the National Defense Industrial Association Defense Software Strategy Summit, October 19, 2006, available at http://www.diig-csis.org/pdf/Chao_SoftwareIndustrialBase_NDIASoftware.pdf.

Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 19.

Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 55.

See, for example, http://www.microsoft.com/industry/publicsector/government/programs/GSP.mspx.
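
The observation above that more software (and more unneeded functionality) means more opportunities for vulnerabilities can be expressed as a rough proportionality. The defect-density figure in the sketch below is an assumed round number chosen only to show the scaling; it is not an estimate from the report or from any cited study.

```python
# A purely notional proportionality, not a measured figure: if residual defects
# scale roughly with the amount of deployed code, then growth in the code base
# implies growth in latent vulnerabilities, all else being equal.
def expected_latent_defects(kloc: float, defects_per_kloc: float = 1.0) -> float:
    """Rough expected count of residual defects in `kloc` thousand lines of code."""
    return kloc * defects_per_kloc

print(expected_latent_defects(5_000))    # a 5 million LOC system
print(expected_latent_defects(10_000))   # doubling the code base doubles the estimate
```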

Availability of source code for inspection increases the likelihood that the inspecting party (government) will be able to identify vulnerabilities not known to the general public. Furthermore, through covert and non-public channels, nation-states may even be able to persuade vendors or willing employees of those vendors to insert vulnerabilities—secret "back doors"—into commercially available products (or require such insertion as a condition of export approval), by appealing to their patriotism or ideology, by bribing, blackmailing, or extorting them, or by applying political pressure.

In other situations, a nation-state may have the resources to obtain (steal, buy) an example of the system of interest (perhaps already embedded in a weapons platform, for example). By whatever means the system makes its way into the hands of the nation-state, the state has the resources to test it extensively to understand its operational strengths and weaknesses, and/or to conduct reverse engineering on it to understand its various functions and at least some of its vulnerabilities.

Some of the vulnerabilities useful to cyberattackers include the following:

• Software. Application or system software may have accidentally or deliberately introduced flaws whose use can subvert the intended purpose for which the software is designed.

• Hardware. Vulnerabilities can also be found in hardware, including microprocessors, microcontrollers, circuit boards, power supplies, peripherals such as printers or scanners, storage devices, and communications equipment such as network cards. Tampering with such components may secretly alter the intended functionality of the component or provide opportunities to introduce hostile software.

• Seams between hardware and software. An example of such a seam is the reprogrammable read-only memory of a computer (firmware), which can be improperly and clandestinely reprogrammed.

• Communications channels. The communications channels between a system or network and the "outside" world can be used by an attacker in many ways. An attacker can pretend to be an "authorized" user of the channel, jam it and thus deny its use to the adversary, or eavesdrop on it to obtain information intended by the adversary to be confidential.

• Configuration. Most systems provide a variety of configuration options that users can set, based on their own security-versus-convenience tradeoffs. Because convenience is often valued more than security, many systems are—in practice—configured insecurely.

• Users and operators. Authorized users and operators of a system or network can be tricked or blackmailed into doing the bidding of an attacker.

• Service providers. Many computer installations rely on outside parties to provide computer-related services, such as maintenance or Internet service. An attacker may be able to persuade a service provider to take some special action on its behalf, such as installing attack software on a target computer.

Appendix E discusses these vulnerabilities in more detail.

2.2.2.2  Access

In order to take advantage of a vulnerability, a cyberattacker must have access to it. Targets that are "easy" to attack are those that require relatively little preparation on the part of the attacker and to which access can be gained without much difficulty—such as a target that is known to be connected to the Internet. Public websites are a canonical example of such targets, as they usually run on generic server software and are connected to the Internet; indeed, website defacement is an example of a popular cyberattack that can be launched by relatively unskilled individuals.

At the other end of the spectrum, difficult targets are those that require a great deal of preparation on the part of the attacker and to which access can be gained only at great effort, or may even be impossible for all practical purposes. For example, the on-board avionics of an adversary's fighter plane are not likely to be connected to the Internet for the foreseeable future, which means that launching a cyberattack against them will require some kind of close access to introduce a vulnerability that can be used later (close-access attacks are discussed in Section 2.2.5.2). Nor are these avionics likely to be running on a commercial operating system such as Windows, which means that information on the vulnerabilities of the avionics software will probably have to be found by obtaining a clandestine copy of it. In general, it would be expected that an adversary's important and sensitive computer systems or networks would fall into the category of difficult targets.

An important caveat is that adversary computer systems and networks are subject to the same cost pressures as U.S. systems and networks, and there is no reason to suppose that adversaries are any better at avoiding dumb mistakes than the United States is. Thus, it would not be entirely surprising to see important and/or sensitive systems connected to the Internet because the Internet provides a convenient communications medium, or to see such systems built on commercial operating systems with known vulnerabilities because doing so would reduce the cost of development. The point, however, is that no cyberattacker can count on such dumb mistakes for any particular target of interest.
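
The distinction drawn above between easy and difficult targets can be summarized as a checklist. The sketch below is an illustrative simplification with assumed attribute names; it is not a classification scheme from the report, and a real assessment would weigh many more factors.

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    internet_connected: bool        # reachable over a public network?
    commodity_software: bool        # runs widely available operating systems/applications?
    supply_chain_reachable: bool    # could friendly parties touch it before deployment?

def access_difficulty(t: TargetProfile) -> str:
    if t.internet_connected and t.commodity_software:
        return "easy"                       # e.g., a public website on generic server software
    if not t.internet_connected and not t.supply_chain_reachable:
        return "effectively inaccessible"   # e.g., isolated, custom-built avionics
    return "difficult"                      # close access or bespoke research likely required

print(access_difficulty(TargetProfile(True, True, False)))     # easy
print(access_difficulty(TargetProfile(False, False, True)))    # difficult
print(access_difficulty(TargetProfile(False, False, False)))   # effectively inaccessible
```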

Access paths to a target may be transient. For example, antiradiation missiles often home in on the emissions of adversary radar systems; once the radar shuts down, the missile aims at the last known position of the radar. Counterbattery systems locate adversary artillery by backtracing the trajectory of artillery shells, but moving the artillery piece quickly makes it relatively untargetable. Similar considerations sometimes apply to an adversary computer that makes itself known by transmitting (e.g., by conducting an attack). Under such circumstances, a successful cyberattack on the adversary computer may require speed to establish an access path and use a vulnerability before the computer goes dark and makes establishing a path difficult or impossible.

Under some other circumstances, an access path may be intermittent. For example, a submarine's onboard administrative local area network would necessarily be disconnected from the Internet while underwater at sea, but might be connected to the Internet while in port. If the administrative network is ever connected to the on-board operational network (controlling weapons and propulsion) at sea, an effective access path may be present for an attacker.

Access paths to a target can suggest a way of differentiating between two categories of cyberattack:

• Remote-access cyberattacks, in which an attack is launched at some distance from the adversary computer or network of interest. The canonical example of a remote-access attack is that of an adversary computer attacked through the access path provided by the Internet, but other examples might include accessing an adversary computer through a dial-up modem attached to it, or penetrating the wireless network to which it is connected and then proceeding to destroy data on it.[10]

• Close-access cyberattacks, in which an attack on an adversary computer or network takes place through the local installation of hardware or software functionality by friendly parties (e.g., covert agents, vendors) in close proximity to the computer or network of interest. Close access is a possibility anywhere in the supply chain of a system that will be deployed, and it may well be easier to gain access to the system before it is deployed.

These two categories of cyberattack may overlap to a certain extent. For example, a close-access cyberattack might result in the implantation of friendly code in online, Internet-propagated updates to a widely used program; such an attack would embody elements of both categories. Communications channels (the channels through which IT systems and networks transfer information) can also be targeted through remote access (e.g., penetrating or jamming a wireless network) or through close access (e.g., tapping into a physical cable feeding a network).

[10] The Department of Defense (DOD) definition of computer network attack (CNA)—"actions taken through the use of computer networks to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves"—is similar in spirit to this report's use of "remote-access" cyberattack. See Joint Publication 3-13, Information Operations, February 13, 2006.

2.2.2.3  Payload

Payload is the term used to describe the things that can be done once a vulnerability has been exploited. For example, once a software agent (such as a virus) has entered a given computer, it can be programmed to do many things: reproducing and retransmitting itself, destroying files on the system, or altering files.

Payloads can have multiple capabilities when inserted into an adversary system or network—that is, they can be programmed to do more than one thing—and the timing of these actions can be varied. If a communications channel to the attacker is available, payloads can also be updated remotely. Indeed, in some cases the initially delivered payload consists of nothing more than a mechanism for scanning the system to determine its technical characteristics and an update mechanism for retrieving from the attacker the packages best suited to furthering the attack.

A hostile payload may be a Trojan horse—a program that appears to be innocuous but in fact has a hostile function that is triggered immediately or when some condition is met. It may also be a rootkit—a program that is hidden from the operating system or virus-checking software but that nonetheless has access to some or all of the computer's functions. Rootkits can be installed in the boot-up software of a computer, and even in the BIOS ROM hardware that initially controls the boot-up sequence. (Rootkits installed in this latter manner will remain even if the user erases the entire hard disk and reinstalls the operating system from scratch.)

Once introduced into a targeted system, the payload sits quietly and does nothing harmful most of the time. At the right moment, however, the program activates itself and proceeds to (for example) destroy or corrupt data, disable system defenses, or introduce false message traffic. The "right moment" can be triggered because a certain date and time are reached, because the payload receives an explicit instruction to activate through some covert channel, because the traffic it monitors signals the right moment, or because something specific happens in its immediate environment.

An example is a payload that searches for "packets of death." This payload examines incoming data packets on a host for a special pattern embedded within them. For almost all packets, the payload does nothing. But when it sees a particular sequence of specially configured packets, it triggers some other hostile action—it crashes the system, deletes files, corrupts subsequent data packets, and so on. (Note that the hostile action may be to do nothing when action should be taken—an air-defense system that ignores the signature of certain aircraft when it receives such a packet has clearly been compromised.)
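
The mechanism in the example above is, at bottom, pattern matching over a packet stream. The defender-oriented sketch below shows the same mechanism used in reverse, scanning traffic for a known trigger sequence in the way an intrusion detection signature would; the byte pattern and packet contents are assumptions invented for illustration, not signatures from the report.

```python
from typing import Iterable, List

# Hypothetical trigger pattern; a real signature would come from analysis of a
# recovered payload, not from this report.
TRIGGER_SIGNATURE = bytes.fromhex("deadbeef0042")

def flag_trigger_packets(packets: Iterable[bytes]) -> List[int]:
    """Return the indices of packets that contain the known trigger signature."""
    return [i for i, pkt in enumerate(packets) if TRIGGER_SIGNATURE in pkt]

# Two ordinary packets and one carrying the embedded trigger sequence.
sample = [
    b"ordinary traffic",
    b"\x00" * 8 + TRIGGER_SIGNATURE + b"\x00" * 8,
    b"more ordinary traffic",
]
print(flag_trigger_packets(sample))  # [1]
```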

Note that payloads for cyberattack may be selective or indiscriminate in their targeting. That is, some payloads for cyberattack can be configured to attack any computer to which access may be gained, and others can be configured to attack quite selectively only certain computers.

2.2.3  Scale and Precision

A cyberattack can be conducted over a wide range of scales, depending on the needs of the attacker. An attack intended to degrade confidence in the IT infrastructure of a nation might be directed against every Internet-connected desktop computer that uses a particular operating system. Attacks intended to "zombify" computers for later use in a botnet need not succeed against any particular machine, but instead rely on the fact that a large fraction of the machines connected to the Internet will be vulnerable to being compromised. Alternatively, a cyberattack might be directed at all available targets across one or more critical infrastructure sectors. A probe intended to test the feasibility of a large-scale cyberattack might be directed against just a few such computers selected at random. An attack might also be directed against a few selected key targets in order to produce secondary effects (e.g., disruption of emergency call dispatch centers timed to coincide with physical attacks, thus amplifying the psychological effect of those physical attacks).

A cyberattacker may also care about which particular computers or networks are targeted—an issue of precision. Of greatest significance are the scenarios in which focused but small-scale attacks are directed against a specific computer or user whose individual compromise would have enormous value ("going after the crown jewels")—an adversary's nuclear command and control system, for example. Or a cyberattack may be directed against the particular electric power generation plant that powers a specific building in which adversary command and control systems are known to operate, rather than against all of the generation facilities in a nation's entire electric grid.

2.2.4  Critical Periods of Cyberattack

How a cyberattack evolves over time is relevant, and there are several time periods of interest. The first, T_intelligence-collection, is the period available for collecting the intelligence needed to launch the attack. A second relevant period, T_attack-launch, is the period over which the functionality required to carry out the attack on the targeted system(s) is installed or deployed—that is, the period during which the attack is launched. A third relevant period, T_compromise, is the period over which the confidentiality, integrity, or availability attributes of the targeted system(s) are compromised. A fourth relevant period, T_effects-apparent, is the period over which the victim actually suffers the ill effects of such compromises; during this time, the target can recover from the attack or reconstitute its function. Depending on the specific nature of the cyberattack, these four periods may—or may not—overlap with one another.

The distinctions between these various periods are important.[11] For example, the fact that T_attack-launch and T_compromise are different windows in time means that the period T_attack-launch can be used to "pre-position" vulnerabilities that facilitate later actions. This pre-positioning could take the form of trapdoors left behind from previous virus infections, unintentional design vulnerabilities,[12] or vulnerable code left by a compromised staff member or by a break-in at the developer's site.[13]

Such pre-positioning is helpful for launching high-volume cyberattacks—possible targets include air-traffic control facilities, systems in manufacturing or shipping departments, logistics systems in rail transport companies, energy production facilities, and so on, as well as a variety of military facilities. An attacker that has prepared its targets in this manner has avenues for instantaneous disruption or corruption of operational processes through a large-scale injection of forged communications, destruction of data, or flooding of services from inside normal perimeter defenses. When hosts inside a network begin to attack the internal network infrastructure or servers, they are often hard to identify rapidly because the very tools that network operations staff use to diagnose network problems may not be available. An attack spread widely enough can overwhelm the network operations and system administration staffs, increasing the time it takes to diagnose and mitigate the attack.

[11] These concepts can also be found in epidemiologic models for the spread of malware. See, for example, http://geer.tinho.net/measuringsecurity.tutorialv2.pdf.

[12] An example is the recent episode in which Sony BMG Music Entertainment surreptitiously distributed software on audio compact disks (CDs) that was automatically installed on any computer that played the CDs. This software was intended to block copying of the CD, but it had the unintentional side effect of opening security vulnerabilities that could be exploited by other malicious software such as worms or viruses. See Iain Thomson and Tom Sanders, "Virus Writers Exploit Sony DRM," November 10, 2005, available at http://www.vnunet.com/vnunet/news/2145874/virus-writers-exploit-sony-drm.

[13] P.A. Karger and R.R. Schell, Multics Security Evaluation: Vulnerability Analysis, ESD-TR-74-193, Vol. II, June 1974, HQ Electronic Systems Division, Hanscom Air Force Base, available at http://csrc.nist.gov/publications/history/karg74.pdf.
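
The four periods defined above need not coincide, and pre-positioning is exactly the case in which the launch period ends long before any compromise takes effect. The schematic timeline below uses invented dates to show one such case; it is an illustration of the definitions, not data about any actual operation.

```python
from datetime import date

# All dates are assumed for illustration; labels follow the four periods defined above.
periods = {
    "T_intelligence-collection": (date(2008, 1, 1), date(2008, 3, 1)),
    "T_attack-launch":           (date(2008, 3, 2), date(2008, 3, 5)),    # implant pre-positioned
    "T_compromise":              (date(2008, 9, 1), date(2008, 9, 3)),    # implant finally acts
    "T_effects-apparent":        (date(2008, 9, 1), date(2008, 10, 15)),  # until recovery/reconstitution
}

def overlaps(a: tuple, b: tuple) -> bool:
    """True if two closed date intervals share at least one day."""
    return a[0] <= b[1] and b[0] <= a[1]

launch = periods["T_attack-launch"]
compromise = periods["T_compromise"]
print("Launch overlaps compromise:", overlaps(launch, compromise))   # False
print("Pre-positioning dormancy (days):", (compromise[0] - launch[1]).days)
```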

More complex attacks can also be coordinated with other events to achieve a force-multiplier effect. For example, if an attack of this nature could successfully be made against an air defense network, the attacker could disrupt the network's operation in concert with a hostile flight operation, potentially blinding the defense system for a period of time.

Still another relevant time scale is the duration of what might be called the entire operation, which itself might call for multiple cyberattacks to be conducted over time. A denial-of-service attack is the canonical example of an operation that requires multiple cyberattacks over a period of time—when the attacks stop, the denial of service vanishes. Multiple cyberattacks conducted over time might also be needed to coordinate the entire cyber operation with other military activity taken against an adversary. Alternatively, and perhaps more prosaically, multiple cyberattacks might be needed to ensure the continuing disruption of an adversary computer system or network because the vulnerabilities that an attacker needs to target may not remain static. In an operation that calls for multiple cyberattacks over time, the targeted party may well respond to the first signs of the attack by closing or correcting some or all of the vulnerabilities enabling the attack. Other vulnerabilities would no doubt remain, but the attacker would have to have advance knowledge of them and adjust the attack accordingly.

These different time scales also help to explain possible different perceptions of the parties involved regarding what might "count" as an attack. For example, the attacker might reasonably believe that a cyberattack has not been committed until a hostile agent planted on the adversary's computer has caused actual damage to it. The adversary might reasonably believe that a cyberattack has been committed when the agent was first planted on the computer, thus giving the agent's controller the technical capability of causing actual damage to the adversary's computer.

2.2.5  Approaches for Cyberattack

What are the approaches that may be used for cyberattack? In general, a cyberattack depends on the attacker's operational skill and knowledge of the adversary, and its success relies on taking advantage of a mix of the adversary's technical and human vulnerabilities. Furthermore, although the military services and intelligence agencies rightly classify a variety of operational techniques and approaches to cyberattack, they too are governed by the same laws of physics as everyone else, and there are no magic technologies behind closed doors that enable, for example, an attacker to beam a virus into a computer that lacks connections to the outside world.

The discussion in this section is thus based on what is publicly known about tools and methods for conducting cyberattacks. This generic discussion is not intended to be complete or comprehensive, but it provides enough information for readers to gain a sense of what might be possible without providing a detailed road map for a would-be cyberattacker. This section is divided into three subsections: malware suitable for remote attacks, approaches for close-access attacks, and social engineering.

In many cases, these tools and methods are known because they have been used in criminal enterprises. Box 2.1 describes some of the advantages that a nation-state has over others in conducting cyberattacks; these advantages can be used to refine and weaponize these tools and methods so that they are more effective in carrying out operational missions. (The actions of nation-states are also constrained by applicable domestic laws, although in the United States, certain government agencies are explicitly exempted from complying with certain laws. See Section 7.3.4 for more discussion of this point.)

2.2.5.1  Possible Approaches for Remote-Access Cyberattacks

Remote-access cyberattacks can be facilitated with a variety of tools, some of which are described below.

2.2.5.1.1  Botnets

An attack technology of particular power and significance is the botnet. Botnets are arrays of compromised computers that are remotely controlled by the attacker. A compromised computer—an individual bot—is connected to the Internet, usually with an "always-on" broadband connection, and is running software clandestinely introduced by the attacker. The attack value of a botnet arises from the sheer number of computers that an attacker can control—often tens or hundreds of thousands and perhaps as many as a million. (An individual unprotected computer may be part of multiple botnets as the result of multiple compromises.) Since all of these computers are under one party's control, the botnet can act as a powerful amplifier of an attacker's actions.

An attacker usually builds a botnet by finding a few individual computers to compromise, perhaps using one of the tools described above. The first hostile action that these initial zombies take is to find other machines to compromise—a task that can be undertaken in an automatic manner, and so the size of the botnet can grow quite rapidly. It is widely reported that only minutes elapse between the instant that a computer attaches to the Internet and the time that it is probed for vulnerabilities and possibly compromised itself.14

14 See, for example, Survival Time, available at http://isc.sans.org/survivaltime.html. Also, in a 2008 experiment conducted in Auckland, New Zealand, an unprotected computer was rendered unusable through online attacks. The computer was probed within 30 seconds of its going online, and the first attempt at intrusion occurred within the first 2 minutes. After 100 minutes, the computer was unusable. (See "Experiment Highlights Computer Risks," December 2, 2008, available at http://www.stuff.co.nz/print/4778864a28.html.)
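The rapid growth just described can be illustrated with a simple epidemic-style model of self-propagating compromise, in the spirit of the epidemiological models cited in footnote 11. The sketch below is purely illustrative; the recruitment rate, vulnerable population, and initial foothold are hypothetical numbers chosen for the example, not measurements.

    # Illustrative only: a discrete-time epidemic model of self-propagating
    # compromise. Each compromised host recruits new victims at a rate that
    # shrinks as the pool of vulnerable-but-uncompromised hosts is used up.
    N = 1_000_000        # vulnerable hosts reachable by the malware (assumed)
    beta = 3.0           # new compromises per bot per time step (assumed)
    infected = 10        # initial foothold planted by the attacker (assumed)

    for step in range(12):
        newly = beta * infected * (1 - infected / N)
        infected = min(N, infected + newly)
        print(f"step {step:2d}: about {int(infected):>9,} hosts compromised")

    # With these invented parameters the vulnerable population is saturated
    # within roughly a dozen steps, which is the qualitative point made in
    # the text about how quickly a botnet can grow.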

BOX 2.1  Cyberattack Advantages of Nation-States over Other Types of Actors

Nations have enormous resources to bring to bear on the problem of cyberattack. They are financed by national treasuries; they can take advantage of the talents of some of the smartest and most motivated individuals in their populations; they often have the luxury of time to plan and execute attacks; and they can draw on all of the other resources available to the national government, such as national intelligence, military, and law enforcement services. As a result, a government-supported cyberattacker can be relatively profligate in executing its attack and in particular can target vulnerabilities at any point in the information technology supply chain from hardware fabrication to user actions.

The availability of such resources widens the possible target set of nation-state attackers. Low- and mid-level attackers often benefit from the ability to gain a small profit from each of many targets. Spammers and "bot" harvesters are the best examples of this phenomenon—an individual user or computer is vulnerable in some way to a spammer or a bot harvester, but the spammer or bot harvester profits because many such users or computers are present on the Internet. However, because of the resources available to them, high-end attackers may also be able to target a specific computer or user whose individual compromise would have enormous value ("going after the crown jewels"). In the former case, an attacker confronted with an adequately defended system simply moves on to another system that is not so well defended. In the latter case, the attacker has the resources to increase the scale and sophistication of the attack to a very high degree if the target is sufficiently valuable.

It is also the case that the resources available to a nation are not static. This means that for a sufficiently valuable target, a nation may well be able to deploy additional resources in its continuing attack if its initial attacks fail. In other words, capabilities that are infeasible for a nation today may become feasible tomorrow.

A nation-state also has the resources that allow it to obtain detailed information about the target system, such as knowledge gained by having access to the source code of the software running on the target or the schematics of the target device or through reverse engineering. (A proof of principle is illustrated in the short delay between the unauthorized public disclosure of portions of the source code for Microsoft Windows 2000 and Windows NT 4.0 in February 2004 and the reported appearance of exploits apparently derived from an examination of the source code.1) Success in obtaining such information is not guaranteed, of course, but the likelihood of success is clearly an increasing function of the availability of resources.

A nation-state cyberattacker does not care how it succeeds, as long as the path to success meets various constraints such as affordability and secrecy. In particular, the nation-state has the ability to compromise or blackmail a trusted insider to do its bidding or infiltrate a target organization with a trained agent rather than crack a security system if the former is easier to do than the latter.

1 Statement from Microsoft Regarding Illegal Posting of Windows 2000 Source Code, http://www.microsoft.com/presspass/press/2004/Feb04/02-12windowssource.mspx.

BOX 2.2  Managing a Botnet

Many botnets today make use of a two- or three-tier hierarchy, from attacker to a central controller (or handler), from handler to agents ("bots"), and sometimes from agents (bots) to reflectors.1 There are still limitations, however, on how many bots can be in a given channel at a given time, and attackers using large botnets know to move them around from server to server and channel to channel (known as herding) to avoid discovery or take-down of the botnets.

A high-capacity attack network can make good use of another layer between the attacker and the handlers, which ideally is highly survivable and hardened so as to remain active in the face of defensive action. This layer is then used to control multiple independent botnets, or lower levels of distributed attack networks, in a manner similar to the regiment/battalion/company hierarchy used by conventional military forces. By adding this additional layer, it is possible to coordinate much larger forces using independent teams, similar to what was done by the team of "consultants" in Operation Cyberslam,2 but on an even larger scale. This additional layer will require a database to keep track of the various lower-level distributed attack networks and to assemble, reassemble, or reconstitute them as need be, in a manner very similar to maintaining force size through replacement of killed or wounded soldiers, and adjusting force strength through redeployment and reinforcement.

Another approach to command and control involves two-way communications between controllers and bots. For example, rather than await orders, bots can send requests to controllers asking what to do next ("pulling" orders rather than "pushing" them). Successive requests from one bot go to different controllers. If a controller does not respond, after a while the bot tries another controller. If none of the controllers respond, the bot generates a series of random Domain Name System (DNS) names and tries those hosts, one at a time. The bot herders know the random number generation algorithm, and if they lose control of some group of bots (perhaps because some controllers have been discovered and disabled), they register one of the random DNS names just before the orphaned bots are about to try it, and when the bot checks in they update it to regain control.

It is also possible to use out-of-band communications (e.g., telephone, radio, or face-to-face conversation) to relay targeting information and attack timing. Especially in the case of long-running operations, there is no need for constant or immediate network connections between attacking networks. In fact, it would be less expensive and less risky from an operational security perspective to coordinate a large number of distributed attack networks independently of each other. In this way, one or more groups could be responsible for recruiting (compromising and taking control over) new computers, which are then added to individual attack networks as requested when capacity drops below a certain level. If the attack tools are designed in a sufficiently modular way—and IRC botnets today already have these capabilities built in—this becomes an issue of human management rather than technology.

1 An example of a reflector attack would be to send DNS requests with a forged source address containing the intended target's IP address to a large number of DNS servers, which would in turn send the replies back to what they believed to be the "requester" (the victim), which is then flooded with traffic. If the DNS request packet contained 100 bytes of data, and the replies contained 700 bytes of data, a 7× amplification would result, in addition to reflection. There is no need for malicious software to be installed on the reflector; hence this makes a good indirect attack method that is very hard to trace back to the attacker.
2 Department of Justice, "Criminal Complaint: United States of America v. Paul G. Ashley, Jonathan David Hall, Joshua James Schichtel, Richard Roby and Lee Graham Walker," 2004, available at http://www.reverse.net/operationcyberslam.pdf.

A botnet attacker (controller) can communicate with its botnet (Box 2.2) and still stay in the background, unidentified and far away from any action, while the individual bots—which may belong mostly to innocent parties that may be located anywhere in the world—are the ones that are visible to the party under attack. The botnet controller has great flexibility in the actions he may take—he may direct all of the bots to take the same action, or each of them to take different actions.
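The "pull" model and the fallback to a series of random-looking DNS names described in Box 2.2 can be made concrete with a toy sketch. The generator below is invented for illustration and is not taken from any actual botnet; the point is simply that any party that knows the seed and the algorithm, whether the herder or a defender who has reverse engineered a captured bot, can compute the same candidate names in advance and register or sinkhole them.

    import hashlib
    from datetime import date, timedelta

    def candidate_domains(seed: str, day: date, count: int = 5):
        """Toy domain-generation scheme: deterministic names derived from a
        shared seed and the date, so a bot and its controller need no prior
        communication to agree on where to rendezvous."""
        names = []
        for i in range(count):
            digest = hashlib.sha256(f"{seed}:{day.isoformat()}:{i}".encode()).hexdigest()
            names.append(digest[:12] + ".example.net")   # placeholder domain suffix
        return names

    start = date(2009, 1, 15)                 # arbitrary example date
    for offset in range(3):
        d = start + timedelta(days=offset)
        print(d, candidate_domains("shared-secret-seed", d))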

Botnets are ideally suited for conducting distributed denial-of-service (DDOS) attacks against computer systems, and there are few good ways to defend against such attacks. A denial-of-service attack floods a specific target with bogus requests for service, thereby exhausting the resources available to the target to handle legitimate requests for service and thus blocking others from using those resources. Such an attack is relatively easy to block if these bogus requests for service come from a single source, because the target can simply drop all service requests from that source. However, a distributed denial-of-service attack can flood the target with multiple requests from many different machines, each of which might, in principle, be a legitimate requester of service.
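A minimal sketch, with an invented per-source budget, of why the single-source case is easy to block while the distributed case is not: a rule that drops any source exceeding its budget stops a lone flooder, yet passes a botnet whose members each stay under the threshold even though their combined load exhausts the target.

    from collections import Counter

    REQUESTS_PER_WINDOW = 100           # hypothetical per-source budget

    def served_requests(sources):
        """sources: one entry per incoming request, naming its origin."""
        seen = Counter()
        served = 0
        for src in sources:
            seen[src] += 1
            if seen[src] <= REQUESTS_PER_WINDOW:
                served += 1             # request reaches the real service
        return served

    # Lone flooder: 100,000 requests from one address; nearly all are dropped.
    print(served_requests(["10.0.0.1"] * 100_000))                 # -> 100

    # Botnet: the same load spread over 10,000 bots, 10 requests each; every
    # request passes the filter, and the target is overwhelmed anyway.
    botnet = [f"bot-{i}" for i in range(10_000) for _ in range(10)]
    print(served_requests(botnet))                                 # -> 100000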

DDOS attacks are often conducted using unprotected machines in the open Internet environment. But there is no reason in principle that they could not be conducted in a more closed environment, such as a classified Zendian network for military communications or planning. Planting the initial botnet "seeds" would probably be more difficult and time-consuming than doing so on the open Internet, but once established, it is likely that the "inside" botnet could grow rapidly because many sensitive networks are protected only by a hardened perimeter.

Individual bots can sense and probe their immediate environment. For example, a bot can examine clear-text data (e.g., sensitive information such as user names and passwords) passing by or through its host computer, including keystrokes entered by users and traffic on the local area network to which that host computer is attached. It might examine data files accessible to the host computer, such as any document stored on the computer. This information could be harvested and passed back to the botnet controller and mined for useful intelligence (a cyberexploitation).

A bot could examine system files on the system to ascertain the particular operating system and version being used, transmit this information back to the controller, and receive in return an upgraded payload that is specifically customized for that environment. This payload might be a destructive one, to be triggered at a certain time, or perhaps when the resident bot receives a subsequent communication from the controller. As a cyberexploitation, it could also ascertain the identity(ies) of the users and possibly their roles in an organization.

A bot could assume the identity of a legitimate user, and use its host as the originating site for e-mail. Whereas in the criminal world botnets often generate spam e-mail consisting of millions of identical messages, a military application might call for sending a personalized message from a compromised bot to another, uncompromised user that would mislead or confuse him.

Individual bots can also act as hosts for information exfiltration. Botnets sharing data using encrypted peer-to-peer connections could be used as "distributed dead drops." This would make it much more difficult to prevent the information from being received and to discern the ultimate location of the botnet controllers.

Perhaps the most important point about botnets is the great flexibility they offer an attacker (or an exploiter). Although they are known to be well suited to DDOS attacks, it is safe to say that their full range of utility for cyberattack and cyberexploitation has not yet been examined.

2.2.5.1.2  Other Tools and Approaches for Remote-Access Cyberattack

Security Penetrations  The owner or operator of an important system usually takes at least some measures to protect it against outside intruders. A common security suite may involve requiring users to authenticate themselves and running security software that selectively blocks external access (firewalls) and checks for hostile malware that may be introduced.

However, password guessing is a common method for penetrating system security. Users have a tendency to choose easily remembered passwords that they change rarely if ever, suggesting certain patterns in password choice that are likely to be common. For example, passwords are often drawn from popular culture and sports, or are likely to be words from the user's native language. Even when the system attempts to enforce password choices with variation (e.g., "must contain a digit"), people subvert the intent in simple and easily predictable ways (e.g., they often choose PASSWORD1, PASSWORD2, and so on). Dictionaries are often used for guessing passwords—e.g., trying every word in the Zendian dictionary; such a technique can be effective if proper safeguards are not in place. Similar problems hold for any authenticator that remains constant.

An attacker may try to compromise security software to pave the way for the introduction of another attack agent. Some agents evade detection by varying the malicious payload or by checking constantly to ensure that a given virus is not identified by antivirus engines.15 Others are designed to disable antivirus programs, and may do so selectively so that only a specific virus or worm, written by the attacker and sent later, will be allowed through. These are simple automation steps that can use techniques described openly within the computer security industry.16

15 As many as 30 percent of virus and other malware infections may be undetectable by today's antivirus engines. See, for example, Niels Provos et al., "All Your iFRAMEs Point to Us," Proceedings of the 17th USENIX Security Symposium, 2008, available at http://www.usenix.org/events/sec08/tech/full_papers/provos/provos.pdf.
16 Metasploit Anti-Forensics Project. Metasploit Anti-forensics homepage, available at http://www.metasploit.com/research/projects/antiforensics/.
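The password-guessing point above can be made concrete with a small audit-style check. The word list and rules here are hypothetical stand-ins for the dictionaries an attacker (or a defender running a password audit) would actually use; the observation is simply that a dictionary word with a trailing digit or two covers a very large share of real password choices.

    # Illustrative password audit: flags choices that a routine dictionary
    # run would recover quickly. The tiny word list is a placeholder for a
    # real dictionary (e.g., every word in the Zendian dictionary).
    COMMON_WORDS = {"password", "letmein", "dragon", "football", "zendia"}

    def easily_guessed(candidate: str) -> bool:
        lowered = candidate.lower()
        if lowered in COMMON_WORDS:
            return True
        # "Must contain a digit" policies are often satisfied by appending a
        # digit or two to a dictionary word (PASSWORD1, PASSWORD2, and so on).
        stripped = lowered.rstrip("0123456789")
        return stripped != lowered and stripped in COMMON_WORDS

    for pw in ["PASSWORD1", "football99", "Tr0ub4dor&3"]:
        verdict = "weak" if easily_guessed(pw) else "not in this tiny dictionary"
        print(pw, "->", verdict)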

Worms and Viruses  Worms and viruses are techniques generally used for installing Trojan horses on many computers. A worm is self-replicating—in addition to infecting the machine on which it is resident, it uses that machine to seek out other machines to infect. A virus replicates through user actions—for example, an e-mail containing a virus may be sent to Alice. When Alice opens the e-mail, her computer is infected. The virus program may then send an e-mail (ostensibly from Alice) to every person in her contact list; when Bob receives the e-mail from Alice and opens it, Bob's computer is infected and the cycle repeats itself. Because user action is required to propagate a virus, viruses tend to spread more slowly than do worms.

Worms and viruses may be initially propagated in many ways, including e-mails received by, web pages visited by, images displayed by, and software downloaded by the victim. Worms and viruses are often used as intermediate stepping stones to assume full control of an adversary system. For example, they can be used to establish reverse tunnels out through the firewall (from inside to outside), which in turn grant someone outside the protected network full control of the host inside the network, or to control hosts in an enterprise in the supply chain of the primary target.

Anonymizers  Anonymizers are used to conceal the identity of an attacking party. One particularly useful anonymizing technique is onion routing, a technology originally designed to disguise the source of electronic traffic.17 But since the technology cannot distinguish between different kinds of traffic, attackers can use onion routing to disguise the source of a remote cyberattack.

Onion routing works by establishing a path through a maze of multiple onion routers, each of which accepts a packet from a previous router and forwards it on to another onion router. The originating party—in this case, the attacker—encrypts the packet multiple times in such a way that each onion router can peel off a single layer of encryption; the final router peels off the last layer, is able to read the packet in the clear, and sends it to the appropriate destination. A variety of public-domain onion router networks exist, and some support specifying where the exit point should be. Thus, the attacker can specify "Exit from an onion router located in Zendia," and that is where a target would see an attack coming from. (On the other hand, a sophisticated target might notice that an attack was coming from a public-domain onion router, and make probabilistic inferences, though not definitive ones, about where the attack was really coming from.)

17 See, for example, the TOR router (and project) at http://www.torproject.org/.
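The layered encryption that onion routing relies on can be sketched in a few lines. The example below assumes the third-party Python package "cryptography" is installed and is a conceptual toy rather than the actual Tor protocol: the sender wraps a message in one layer per router, and each simulated router strips exactly one layer, so no single router sees both the originator and the cleartext.

    from cryptography.fernet import Fernet   # assumes "pip install cryptography"

    # One key per onion router on the chosen path. In a real system the
    # sender negotiates these keys with each router; generating them locally
    # here just keeps the sketch self-contained.
    router_keys = [Fernet.generate_key() for _ in range(3)]

    message = b"packet for the final destination"

    # The sender applies the layers in reverse path order, so the outermost
    # layer belongs to the first router and the innermost to the exit router.
    onion = message
    for key in reversed(router_keys):
        onion = Fernet(key).encrypt(onion)

    # Each router peels exactly one layer and forwards what remains.
    for hop, key in enumerate(router_keys, start=1):
        onion = Fernet(key).decrypt(onion)
        print(f"router {hop} forwards {len(onion)} bytes")

    assert onion == message   # only the exit router recovers the cleartext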

Penetrations of and Denial-of-Service Attacks on Wireless Networks  Wireless networks to enable communications among computers and devices are increasingly common and provide clandestine methods for access and denial of service. An attacker may be able to insert his own broadcast/reception node (on a WiFi network, he might insert his own wireless access point) to intercept and monitor traffic and perhaps be able to impersonate an authorized network user.

An attacker may sometimes impersonate an authorized user with relative ease if access to the wireless network is not protected. For example, satellites communicate with their ground stations through wireless communications, and the command link may not be encrypted or may be otherwise insecure. If so, a Zendian satellite can be controlled by commands sent from the United States just as easily as by commands sent from Zendia. With access to the command link, adversary satellites can be turned off, redirected, or even directed to self-destruct by operating in unsafe modes.

Alternatively, an attacker might choose to deny service on the network by jamming it, flooding the operating area with RF energy of the appropriate frequencies. WiFi wireless networks for computer communications are an obvious target, and given the increasing ubiquity of cell phones around the world, cell phone networks could be a particularly useful target of a jamming cyberattack should an attacker wish to disrupt a primary communications mechanism probably used by government and non-government personnel alike.

Router Compromises  Router compromises often manipulate the logical map of a network (whether open, like the Internet, or closed, like that of a corporate network) in ways desired by the attacker. For example, through modification of the software and data tables that underlie the routing of information, a specific site could effectively be isolated so that it could not receive e-mail sent to it from elsewhere on the Internet or so that web pages hosted on it could never be found by anyone on the Internet. A different modification of the routing software and data might result in much more widespread chaos on the Internet for many sites rather than just one.

Attacks on routers are feasible because the routers themselves are often Internet accessible and have software and hardware vulnerabilities just like any other computers, although even if they were not Internet accessible, compromising them would not be impossible. Moreover, code to support attacks on routers is often available in the public domain, making attacks on routers easier than they would otherwise be. Under some circumstances, router flaws may enable an attacker to damage the routing hardware itself remotely, as might be possible if the boot ROM were compromised or if the attacker gained access to low-level functions that controlled power supplies or fan speeds.

An example of a router compromise is a Border Gateway Protocol (BGP) attack. The Internet is a network of networks. Each network acts as an autonomous system under a common administration and with common routing policies. Primarily used by Internet service providers (ISPs) and very large private networks such as those of multinational corporations, BGP is the Internet protocol used to characterize each network to the others, for example between ISPs. BGP does so by publishing tables containing information on how packets should be routed between any given network and other networks. However, if these tables are corrupted, traffic can be misdirected.18

One kind of BGP attack deliberately corrupts these tables to misdirect traffic away from its rightful destination and toward a network controlled by the attacker. Once the attacker has captured traffic intended for a given destination, the captured traffic can be discarded (thus isolating the destination network) or copied for later examination and then forwarded to the correct destination (to reduce the likelihood of the attack becoming known). If the captured traffic contains information such as passwords, the attacker may be able to impersonate the sender at a later date.

Another kind of attack hijacks a block of IP addresses in order to send undesirable or malicious traffic such as spam or denial-of-service attacks. Such an attack allows a sender to remain untraceable. The attacker uses the routing infrastructure to evade IP-based filtering aimed at blacklisting malicious hosts.

18 See Xin Hu and Z. Morley Mao, "Accurate Real-Time Identification of IP Prefix Hijacking," Proceedings of IEEE Symposium on Security and Privacy, May 2007, pp. 3-17; Anirudh Ramachandran and Nick Feamster, "Understanding the Network-Level Behavior of Spammers," Proceedings of the Association of Computing Machinery SIGCOMM 2006, pp. 291-302, available at http://www.cc.gatech.edu/~avr/publications/p396-ramachandran-sigcomm06.pdf.
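A minimal sketch of how the table corruption described above captures traffic, and of the kind of monitoring cited in footnote 18. The prefixes and autonomous system numbers are invented for illustration: routers prefer the most specific matching prefix, so a bogus, more specific announcement from an unexpected origin wins the traffic, and a monitor can flag announcements that fall inside a prefix the victim knows it originates.

    import ipaddress

    # Hypothetical announcements: (prefix, origin autonomous system number).
    LEGITIMATE = {"203.0.113.0/24": 64500}      # the prefix the victim originates
    announcements = [
        ("203.0.113.0/24", 64500),              # legitimate route
        ("203.0.113.0/25", 64666),              # bogus, more specific hijack
    ]

    def best_route(destination: str):
        """Longest-prefix match, as routers do: the most specific prefix wins."""
        addr = ipaddress.ip_address(destination)
        matches = [(ipaddress.ip_network(p), asn)
                   for p, asn in announcements if addr in ipaddress.ip_network(p)]
        return max(matches, key=lambda m: m[0].prefixlen)

    prefix, asn = best_route("203.0.113.10")
    print("traffic for 203.0.113.10 is delivered to AS", asn)   # the hijacker, AS 64666

    # Simple monitor: flag any announcement inside a prefix we originate that
    # claims a different origin AS.
    for p, origin in announcements:
        for ours, our_asn in LEGITIMATE.items():
            if ipaddress.ip_network(p).subnet_of(ipaddress.ip_network(ours)) and origin != our_asn:
                print("possible hijack:", p, "announced by AS", origin)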

Protocol Compromises  A network protocol is a standard that controls or enables communication between two computing devices. In practice, protocols—even widely accepted and used protocols—are sometimes flawed. For example, a given protocol may be designed incorrectly or be incompletely specified.19 A given implementation of a well-designed and well-specified protocol may itself be incomplete and/or contain a bug. (An incomplete implementation may mean that the system can enter some unanticipated state, and thus that consequences that ensue are unpredictable.) An attacker may take advantage of such flaws.

An example of a protocol attack is DNS cache poisoning. The Domain Name System (DNS) is a global system that maps domain names (e.g., www.nas.edu) into specific numeric IP addresses (e.g., 144.171.1.22).20 However, in order to reduce the load on the primary name servers, tables containing the relevant information are stored (cached) on secondary DNS servers operated by Internet service providers. By taking advantage of vulnerabilities in DNS software, it is sometimes possible to alter these tables, so that a request to "www.nas.edu" maps to 144.117.1.22, rather than the correct 144.171.1.22. The incorrect IP address 144.117.1.22 can be a phony host configured to look like the real thing, and can be used to intercept information sent by the user and intended for the real site. Alternatively, corrupted tables could be used simply to misdirect messages being transmitted from point to point.21 (The corruption of a DNS server to redirect traffic is sometimes known as "pharming.")

Cache poisoning is possible because the DNS protocol does not authenticate responses, which is widely regarded as a flaw in the security of that protocol. This flaw means that an attacker can take advantage of the protocol by sending an inquiry to a server that causes it to make an inquiry to another server, and then sending a bogus reply to the first server before the second server has a chance to respond.

Other examples of protocol attacks may involve partially opening many Transmission Control Protocol connections to tie up resources, or sending packets marked as the first and last fragments of a huge datagram in order to tie up buffer space.

19 Incomplete specifications or implementations are dangerous because of the possibility of inputs for which no response is specified or provided. That is, a protocol (or a given implementation of the protocol) may not unambiguously specify an action to be taken for all inputs. If one of these "undefined response" inputs is received, the receiving system will do something unanticipated. If enough is known about the receiving system and its particular implementation of the protocol being used, the subsequent action may be exploitable.
20 Every device connected to the Internet has a unique identifying number known as its IP address. An IP address may take the form 144.171.1.22 (for IP Version 4) or 2001:db8:0:1234:0:567:1:1 (for IP Version 6).
21 National Research Council, Signposts in Cyberspace: The Domain Name System and Internet Navigation, The National Academies Press, Washington, D.C., 2005.
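To make the cache-poisoning example concrete, the sketch below resolves a name and compares the answer against a pinned, previously known address, which is one crude way an end system might notice the kind of redirection ("pharming") described above. The hostname and pinned address are simply the illustrative values from the text; a production remedy would rely on DNS Security Extensions (DNSSEC) rather than hard-coded pins, since legitimate addresses change.

    import socket

    # Treat the address the text gives for www.nas.edu as a pinned "known
    # good" value, for illustration only.
    PINNED = {"www.nas.edu": {"144.171.1.22"}}

    def check_resolution(hostname: str) -> None:
        try:
            answers = {info[4][0] for info in socket.getaddrinfo(hostname, 80)}
        except socket.gaierror as err:
            print(f"{hostname}: lookup failed ({err})")
            return
        expected = PINNED.get(hostname, set())
        if expected and not (answers & expected):
            # e.g., a poisoned cache answering 144.117.1.22 instead of 144.171.1.22
            print(f"{hostname}: resolved to {sorted(answers)}, expected {sorted(expected)};"
                  " possible poisoning or redirection")
        else:
            print(f"{hostname}: {sorted(answers)}")

    check_resolution("www.nas.edu")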

2.2.5.2  Possible Approaches for Close-Access Cyberattacks

To reduce the threat from tools that enable remote attacks, a potential target might choose to disconnect from easily accessible channels of communication. A computer that is "air gapped" from the Internet is not susceptible to an attack that arrives at the computer through Internet connections. Thus, it is sometimes too difficult or impossible for an attacker to obtain remote access to a computer of interest. In these instances, other methods of attack are necessary.

One approach to attacking a putatively isolated and "stand-alone" computer is to consider whether that computer is in fact isolated. For example, a computer without an Internet connection may be accessible through a dial-up modem; if the attacker can discover the phone number associated with that modem, the computer may be vulnerable to a remote attack for the price of a long-distance telephone call. Or the computer of interest may connect to the Internet only occasionally to receive updates—during those moments of connection, the computer may be vulnerable. Or the computer might require the use of external media to provide data—although the data does not arrive through an Internet connection, data is supplied through the insertion of the data-carrying media into an appropriate slot in the computer, and the placement of hostile data on the CD-ROM can occur on a computer that is connected to the Internet. (Hostile data is data that, when processed, might cause the computer to fail or crash.)

If the computer or network of interest is indeed isolated, close-access attacks provide an alternative. Close-access attacks require a human to be physically present, which can increase the chances of being caught and/or identified. Sometimes, a close-access attack is used late in the supply chain, e.g., against a deployed computer in operation or one that is awaiting delivery on a loading dock. In these cases, the attacks are by their nature narrowly and specifically targeted, and they are also not scalable, because the number of computers that can be compromised is proportional to the number of human assets available.

In other cases, a close-access cyberattack may be used early in the supply chain (e.g., introducing a vulnerability during development), and high leverage might result from such an attack. For example, for many years the United States overtly restricted the cryptographic strength of encryption products allowed for export. If it had done so covertly, such an action might well have counted as a close-access cyberattack intended to make encryption products more vulnerable to compromise.

By definition, close-access attacks bypass network perimeter defenses. A successful close-access cyberattack makes an outsider appear, for all intents and purposes, to be an insider, especially if credentials have already been compromised and can be used without raising alarms. Sophisticated anomaly detection systems that operate from login audit logs, network flow logs, and other sources of network and computer usage would be necessary to be able to detect this type of activity. Standard antivirus software and intrusion detection or protection systems are significantly less effective.

Examples of close-access cyberattacks include the following:

• Attacks somewhere in the supply chain. Systems (and their components) can be attacked in design, development, testing, production, distribution, installation, configuration, maintenance, and operation. (See Box 2.3 for a documented example.) Indeed, the supply chain is only as secure as its weakest link.22 In most cases, the supply chain is only loosely managed, which means that access control throughout the entire supply chain is difficult.

22 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 25.

BOX 2.3  Project "Gunman"

On October 25, 1990, Congressman Henry Hyde revealed some of the technical highlights of a Soviet intelligence penetration of the U.S. embassy in Moscow that was still under construction at that time. Within the U.S. intelligence community, the Soviet operation was known by the code name "Gunman." In 1984, the United States discovered that a number of IBM Selectric typewriters within the U.S. Embassy had been secretly modified by the Soviets to transmit to a nearby listening post all keystrokes typed on those machines. Access to these typewriters to implement the modification was achieved during part of the logistics phase prior to their delivery to the embassy at a point when the typewriters were unsecured.

SOURCE: U.S. House of Representatives, Rep. Henry J. Hyde, Introduction to "Embassy Moscow: Attitudes and Errors," Congressional Record, October 25, 1990, E3489.

Examples of hypothetical supply-chain attacks include the following:

—A vendor with an employee loyal to an attacker introduces malicious code as part of a system component for which it is a subcontractor in a critical system.23
—An attacker intercepts a set of CD-ROMs ordered by the victim and substitutes a different doctored set for actual delivery to the victim. The doctored CD-ROMs contain attack software that the victim installs as he uses the CDs.
—An attacker bribes a shipping clerk to look the other way when the computer is on the loading dock for transport to the victim, opens the box, replaces the video card installed by the vendor with one modified by the attacker, and reseals the box.

• Compromises of third-party security software. Security software is intended to protect a computer from outside threats. In many cases, it does so by identifying and blocking specific malicious software or activities based on some kind of "signature" associated with a given malicious action. But a government could induce the vendor of such security software to ignore threats that might be associated with a virus or worm that the government might use to attack an adversary's system. The government could induce such cooperation in many ways. For example, it could persuade the CEO of the vendor's company to take such action, prevent the company from selling its products if it failed to take such action, or bribe some low-level programmer to "forget" to include a particular signature in the virus checker's database.24

23 For a partial compendium of instances in which vendors have shipped to customers products "pre-infected" with malware (e.g., virus or other security vulnerability or problem), see http://www.attrition.org/errata/cpo/.
24 Some possible precedent for such actions can be found in the statement of Eric Chien, then chief researcher at the Symantec antivirus research lab, that Symantec would avoid updating its antivirus tools to detect a keystroke logging tool that was used only by the FBI (see John Leyden, "AV Vendors Split over FBI Trojan Snoops," The Register, November 27, 2001, available at http://www.theregister.co.uk/2001/11/27/av_vendors_split_over_fbi/). More discussion of this possibility can be found in Declan McCullagh and Anne Broache, "Will Security Firms Detect Police Spyware?," CNET News, July 17, 2007, available at http://news.cnet.com/2100-7348-6197020.html. Other corporate cooperation with government authorities was documented in the Church Committee hearings in 1976. For example, RCA Global and ITT World Communications "provided virtually all their international message traffic to NSA" in the period between August 1945 and May 1975 (see Book III of Supplementary Detailed Staff Reports on Intelligence Activities and the Rights of Americans, 94th Congress, Report 94-755, p. 765). As for a government influencing vendors to compromise security in their products, the canonical example is that for many years, the United States had imposed export controls on information technology vendors selling products with encryption capabilities—allowing more relaxed export controls only on those products capable of weak encryption. Most export controls on strong encryption products were lifted in the late 1990s.

• Compromises in the patch process. Patching software defects, especially those that fix known vulnerabilities, is an increasingly routine part of system maintenance. Yet patching introduces another opportunity for introducing new vulnerabilities or for sustaining old ones.25 Both automated (e.g., Windows updates) and manual patch processes present opportunities for close-access cyberattacks, though the tools and resources required may be quite different. The patch issued may be corrupted by the cyberattacker; alternatively, the distribution channel itself may be compromised and hostile software installed.

2.2.5.3  Compromise of Operators, Users, and Service Providers

Human beings who operate and use IT systems of interest constitute an important set of vulnerabilities for cyberattack. They can be compromised through recruitment, bribery, blackmail, deception, or extortion. Spies working for the attacker may be unknowingly hired by the victim, and users can be deceived into actions that compromise security. Misuse of authorized access, whether deliberate or accidental, can help an attacker to take advantage of any of the vulnerabilities previously described—and in particular can facilitate close-access cyberattacks.

For example, the operation of a modern nationwide electric power grid involves many networked information systems and human operators of those systems; these operators work with their information systems to keep the system in dynamic balance as loads and generators and transmission facilities come on and off line.

25 Defense Science Board, "Report of the Defense Science Board Task Force on Mission Impact of Foreign Influence on DoD Software," U.S. Department of Defense, September 2007, p. 25.

A cyberattack might be directed at the systems in a control center that provide situational awareness for the operators—and so operators might not be aware of an emerging problem (perhaps a problem induced by a simultaneous and coordinated physical attack) until it is too late to recover from it.

In many instances involving the compromise of users or operators, the channels for compromise often involve e-mails, instant messages, and files that are sent to the target at the initiative of the attacker, or other sources that are visited at the initiative of the target. Examples of the latter include appealing web pages and certain shareware programs, such as those for sharing music files, or even playing a music CD with rootkit-installation software.

An appealing web page might attract many viewers in a short period of time, and viewers could be compromised simply by viewing the page, while shareware programs might contain viruses or other malware. In an interesting experiment at West Point in 2004, an apparently legitimate e-mail was sent to 500 cadets asking them to click on a link to verify grades. Despite their start-of-semester training (including discussions of viruses, worms, and other malicious code, or malware), over 80 percent of recipients clicked on the link in the message.26

Another example of social engineering in cyberattack involved a red team's use of inexpensive universal serial bus (USB) flash drives to penetrate an organization's security. These drives were scattered in parking lots, smoking areas, and other areas of high traffic. In addition to some innocuous images, each drive was preprogrammed with software that could have collected passwords, logins, and machine-specific information from the user's computer, and then e-mailed the findings to the red team. Because many systems support an "auto-run" feature for insertable media (i.e., when the medium is inserted, the system automatically runs a program named "autorun.exe" on the medium) and the feature is often turned on, the red team was notified as soon as the drive was inserted. The result: 75 percent of the USB drives distributed were inserted into a computer.

A final category of vulnerabilities and access emanates from the IT-based service providers on which many organizations and individuals rely. Both individuals and organizations obtain Internet connectivity from Internet service providers.

26 See Aaron J. Ferguson, "Fostering Email Security Awareness: The West Point Carronade," EDUCAUSE Quarterly 28(1):54-57, 2005, available at http://net.educause.edu/ir/library/pdf/EQM0517.pdf.
27 Steve Stasiukonis, "Social Engineering, the USB Way," Dark Reading, June 7, 2006, available at http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1.

Many organizations also make use of external firms to arrange employee travel or to manage their IT security or repair needs. These service providers are potential additional security vulnerabilities, and thus might well be targeted in a cyberattack directed at the original organization.

Note: Close-access attacks and social engineering are activities in which national intelligence agencies specialize, and over the years these agencies have presumably developed considerable expertise in carrying out such activities. In practice, it is often cheaper and easier to compromise a person than it is to break through firewalls and decrypt passwords. Indeed, in many situations, human subversion and physical action are the two quickest, cheapest, and most effective methods of attacking a computer system or network.

2.2.6  Propagating a Large-Scale Cyber Offensive Action

In order to take control of a large number of computers, an attacker needs to locate vulnerable computers and somehow install malicious software on those computers. This can be done using direct attacks against exposed services (e.g., scan and attack behavior seen in worms like Slammer and Blaster), or indirectly using social engineering techniques (e.g., e-mail with Trojan horse executables as file attachments, instant messages with hypertext links, web pages containing malicious content, Trojan horse executables in modified "free" software download archives, or removable media devices dropped in parking lots).

2.2.6.1  Direct Propagation

Direct attacks are the fastest means of compromising a large number of hosts. The most common method of direct propagation is either by scanning for vulnerable hosts followed by direct remote attack, or by simply choosing random addresses and attempting to use vulnerabilities regardless of whether there is a host listening on that IP address, or even possessing the vulnerability at all. Malware artifacts (e.g., Agobot/Phatbot28) often look for opportunistic avenues for attack, including the back doors left by other malware that may have previously infected the host. This tactic does not require the use of a new zero-day exploit, as there may be plenty of other commonly known ways to take control of computers. The success rate is lowest when scanning entirely at random while attacking, although there is an element of surprise using this method because there is no opportunity for the target to see the reconnaissance scans.

28 LURHQ, "Phatbot Trojan Analysis," June 2004, available at http://www.lurhq.com/phatbot.html.

Either way, a direct attack increases the chances of detection through either signature (e.g., IDS/IPS or AV scanning) or anomalous flow detection, possibly triggering a reaction by security operations personnel. Even if an attacker launches a new attack whose signature is not known to the defender, the defender may still be able to use traffic flow analysis to detect the attack by observing and noting anomalous flows.

As mentioned above, worms to date have been quite noisy and in some cases spread so fast that they disrupt the network infrastructure devices themselves. In order to make direct attacks viable to recruit hosts for the kind of attack described here, a slower and more subtle attack (especially one involving a zero-day attack method whose existence is kept secret) over a much longer period of time would be needed.

The methods just described are active in nature, but there are also opportunities for passive direct propagation. For example, hosts infected with the Blaster worm can still be observed actively attempting to propagate, suggesting that attacking through more recently discovered vulnerabilities is likely to be feasible.29 By simply passively monitoring for signs of Blaster-infected hosts scanning randomly across the Internet, one can identify hosts that have a high probability of possessing one or more remotely usable vulnerabilities. Those hosts can then be compromised and taken over, as was done by the Agobot/Phatbot malware in 2003/2004.30 There is also a very good chance that attacks against these hosts, since they were already compromised and are actively scanning the Internet for more victims, would not be noticed by the owners of the computers or networks in which they reside.

29 Michael Bailey, Evan Cooke, Farnam Jahanian, David Watson, and Jose Nazario, "The Blaster Worm: Then and Now," IEEE Security and Privacy 3(4):26-31, 2005, available at http://ieeexplore.ieee.org/iel5/8013/32072/01492337.pdf?arnumber=1492337.
30 Joe Stewart, "Phatbot Trojan Analysis," March 15, 2004, available at http://www.secureworks.com/research/threats/phatbot/?threat=phatbot.
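The anomalous flow detection mentioned above can be sketched very simply: a host that suddenly contacts far more distinct destinations in a time window than ordinary clients do, which is classic scan-and-attack behavior, stands out in flow records even when no signature for the exploit exists. The flow records and threshold below are invented for illustration.

    from collections import defaultdict

    FANOUT_THRESHOLD = 50       # distinct destinations per window (hypothetical)

    def flag_scanners(flows):
        """flows: iterable of (source, destination) pairs from one time window."""
        fanout = defaultdict(set)
        for src, dst in flows:
            fanout[src].add(dst)
        return [src for src, dests in fanout.items() if len(dests) > FANOUT_THRESHOLD]

    # An ordinary client talks to a handful of servers; a compromised host
    # scanning random addresses touches hundreds in the same window.
    flows = [("10.1.1.5", f"server-{i}") for i in range(4)]
    flows += [("10.1.1.99", f"198.51.100.{i}") for i in range(200)]

    print(flag_scanners(flows))     # -> ['10.1.1.99']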

2.2.6.2  Indirect Propagation

Indirect methods of cyberattack are often slower, but less easy to detect by either network-level IDS/IPS or AV/anti-spam systems. Some indirect methods of cyberattack include:

• Compromising installation archives of freeware or shareware programs, either used generally or known to specifically be used by target organizations. Examples include open source software development efforts, or software linked from sites that aggregate and categorize freeware and shareware programs.31 Unwitting users download these altered installation archives and install them without first verifying hashes or cryptographic signatures that attest to their integrity (see the sketch following this list). (In general, use of cryptography for authentication of software is done poorly, or not at all, allowing these kinds of attacks to succeed in many cases.)
• Drive-by download attacks resulting from redirection of clients by corruption of DNS resolver configurations, or man-in-the-middle attacks on DNS requests, for example in free WiFi networks. The attacker who wishes to redirect a software download request, or web page request, must simply answer an unauthenticated DNS request that is easily seen by the attacker in an open WiFi network. The client is then silently redirected to a malicious site, where malicious software is downloaded and installed onto the system.32
• Cross-site scripting attacks involve redirection of web browsers through embedded content in HTML or Javascript. The redirection is invisible to the user, and can result in portions of a web session being hijacked to install malicious content and/or to capture login credentials that can be used later to compromise the user's account. As an example, some online auctioning sites have a significant problem with their users attacking each other using the site as a platform; the mechanism is that the site permits upload of HTML to its regular vendor's own internal-to-site web pages, and in that HTML are hidden various attack mechanisms.

31 An incident related to such compromises was reported in August 2008. The Red Hat Network, distributor for the Linux operating system, detected an intrusion on some of its computer systems. The intruder was able to sign a small number of OpenSSH packages (OpenBSD's SSH (Secure SHell) protocol implementation), and these packages may have been improperly modified. See https://rhn.redhat.com/errata/RHSA-2008-0855.html. Some users relying on Red Hat's digital signature to ensure that they install only authorized software are thus at some potential risk.
32 This is mostly a risk to users who perform normal tasks like reading news on websites from accounts on their computer that have elevated system administrator privileges. For this reason, it is typically recommended that users employ the concept of least privilege and use accounts with administrator rights only when installing or configuring software, not for general tasks.
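As noted in the first item of the list above, the usual countermeasure to a tampered installation archive is to verify a cryptographic hash or signature that the distributor publishes over a separate, trusted channel. A minimal sketch of the hash half of that check follows; the file name and expected digest are placeholders rather than real release values.

    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB at a time
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder values: in practice the expected digest comes from the
    # project's release announcement, obtained over a channel the attacker
    # is assumed not to control (otherwise the check proves nothing).
    EXPECTED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"
    archive = "installer-1.0.tar.gz"

    if sha256_of(archive) != EXPECTED_DIGEST:
        raise SystemExit("digest mismatch; do not install this archive")
    print("digest verified")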

2.2.7  Economics

The economics of cyberattack are quite different from those of kinetic attack.

• Productivity. Certain kinds of cyberattacks can be undertaken with much more productivity than can kinetic attacks because the latter depend on human actions while the former depend on computer systems. Automation can increase the amount of damage that can be done per attacker, increase the speed at which the damage is done, and decrease the required knowledge and skill level of the operator of the system. (A corollary is that the scale of effects of a cyberattack may be only weakly correlated with effort. As early as 1988, the Morris worm—developed with relatively little effort—demonstrated that a cyberattack could have effects on national and international scales. Indeed, the most difficult part of some cyberattacks may well be to limit their effects to specific computers.) Automation can also simplify operational tasking by providing capabilities such as automated target acquisition, reducing effects to a set of alternatives in a pull-down menu, and turning rules of engagement into operational parameters that tie available actions to targets in the system's menu. The result is a system that is easier to operate than a collection of discrete attack tools and thus requires lower levels of training, knowledge, and specific skills.
• Capital investment. A cyberattack architecture functioning on a service-oriented model can be replicated at very low cost if it relies on stolen services. For example, millions of compromised computers assembled in a botnet can be tasked to any command and control (C2) center, allowing a larger number of individual attackers to operate independently. This distributes the operational and management loads, similar to the way a military battalion is composed of companies, cohesive units using similar weapons and tactics but capable of attacking different objective targets at different locations at the same time. Implementing a service-oriented model for a distributed architecture is simply a matter of programming and separation of duties (i.e., acquisition of newly compromised hosts to be controlled, and command and control of subsets of these hosts by individual operational warfare units). The more loosely coupled the functions of command and control versus effects on and from compromised end hosts, the more resistant the overall architecture is to detection and mitigation. On the other hand, highly specialized cyberweapons (e.g., those designed to attack very specific targets) may well be costly. For example, the development of a particular cyberweapon may require intelligence collection that is difficult and thus expensive to perform. Other cyberweapons may only be useful against adversary targets one or a few times (Section 2.3.10), making their use an expensive proposition.
• Funding. Financial assets of an adversary can be used by an attacker. Rather than paying a defense contractor market rates to develop arms and munitions out of its own public coffers, a nation has the ability to steal money from an adversary for use in developing and advancing its cyberattack capabilities. For example, it could develop Version 1.0 of an attack platform on its own and then use the proceeds from fraud perpetrated using Version 1.0 to fund development of a larger and more effective Version 2.0 platform, and so forth. Such an approach could be particularly appealing to subnational groups, such as terrorist or criminal organizations, or—in the absence of specific legal prohibitions against such actions—to underfunded government agencies.

• Availability. The underlying technology for carrying out cyberattacks is widely available, inexpensive, and easy to obtain. Software packages embedding some of the technology for carrying out cyberattacks are available on the Internet, complete with user manuals and point-and-click interfaces. The corollary is that government has no monopoly on cyberweapons or expertise. Private businesses and private individuals can own or control major cyberweapons with significant capability, but the same tends to be less true of kinetic weapons, citizen-built truck bombs notwithstanding.

2.3  Operational Considerations

The previous section addresses the basic technologies of and approaches to cyberattack. This section considers the operational implications of using cyberattack. Both nation-states and hackers must grapple with these implications, but the scope of these implications is of course much broader for the nation-state than for the hacker.

2.3.1  The Effects of Cyberattack

Although the ultimate objective of using any kind of weapon is to deny the adversary the use of some capability, it is helpful to separate the effects of using a weapon into its direct and its indirect effects (if any). The direct effects of using a weapon are experienced by its immediate target. For example, the user of a kinetic weapon seeks to harm, damage, destroy, or disable a physical entity. The indirect effects of using that weapon are associated with the follow-on consequences of harming, damaging, destroying, or disabling a physical entity, which may include harming, destroying, or disabling other physical entities—a runway may be damaged (the direct effect) so that aircraft cannot land or take off (the indirect effect). This distinction between direct and indirect effects is particularly important in a cyberattack context.

2.3.1.1  Direct Effects33

By definition, cyberattacks are directed against computers or networks. The range of possible direct targets for a cyberattack is quite broad and includes (but is not limited to) the following:

33 Much of the discussion in this section is based on National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 111 • Computer chips embedded in other devices, such as weapons systems, communications devices, generators, medical equipment, automobiles, elevators, and so on. In general, these microprocessors provide some kind of real-time capability (e.g., a supervisory control and data acquisition system will control the operation of a generator or a floodgate, a chip in an automobile will control the flow of fuel, a chip in an ATM will dispense money). • The computing systems controlling elements of the nation’s critical infra- structure, for example, the electric power grid, the air traffic control sys- tem, the transportation infrastructure, the financial system, water purifi- cation and delivery, or telephony. For example, cyberattacks against the systems and networks that control and manage elements of a nation's transportation infrastructure could introduce chaos and disruption on a large scale that could drastically reduce the capability for transporting people and/or freight (including food and fuel). • Dedicated computing devices (e.g., desktop or mainframe com­puters). Such devices might well not be just any desktop computer (e.g., any computer used in offices around the country) but rather the desktop com­ puters in particular sensitive offices, or in critical operational software used in corporate or government computer centers (e.g., a major bank or the unclassified systems of an adversary nation’s ministry of defense). Dedicated computer systems might also include the routers that control and direct traffic on the Internet or on any other network. Cyberattacks generally target one of several attributes of these com- ponents or devices—they seek to cause a loss of integrity, a loss of authen- ticity, or a loss of availability (which includes theft of services): • Integrity. A secure system produces the same results or information whether or not the system has been attacked. An attack on integrity seeks to alter information (a computer program, data, or both) so that under some circumstances of operation, the computer system does not provide the accurate results or information that one would normally expect even though the system may continue to operate. A computer whose integrity has been compromised might be directed to destroy itself, which it could do if it were instructed to turn off its cooling fan. A loss of integrity also includes suborning a computer for use in a botnet, discussed further in Section 2.2.5.1.1. • Authenticity. An authentic message is one that is known to have originated from the party claiming to have originated it. An attack on authenticity is one in which the source of a given piece of information is obscured or forged. A message whose authenticity has been compromised will fool a recipient into thinking it was properly sent by the asserted originator.

112 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES • Availability. A secure system is available for normal use by its right- ful owner even in the face of an attack. An attack on availability may mean that e-mail sent by the targeted user does not go through, or the target user’s computer simply freezes, or the response time for that com- puter becomes intolerably long (possibly leading to catastrophe if a physi- cal process is being controlled by the system). Some analysts also discuss theft of services—an adversary may assume control of a computer to do his bidding. In such situations, the availability of the system has been increased, but for the wrong party (namely, the adversary). These attributes may be targeted separately or together. For example, a given cyberattack may support the compromise of both integrity and availability, though not necessarily at the same time. In addition, the vic- tim may not even be aware of compromises when they happen—a victim may not know that an attacker has altered a crucial database, or that he or she does not have access to a particular seldom-used emergency system. In some situations, integrity is the key target, as it might well be for a tactical network. A commander who doubts the trustworthiness of the network used to transmit and receive information will have many opportunities for second-guessing himself, and the network may become unreliable for tactical purposes. In other situations, authenticity is the key target—a cyberattack may take the form of a forged message purportedly from a unit’s commanders to move from one location to another. And in still other situations, availability is the target—a cyberattack may be intended to turn off the sensors of a key observation asset for the few minutes that it takes for kinetic assets (e.g., airplanes) to fly past it. The direct effects of some cyberattacks may be easily reversible. (Reversibility means that the target of the attack is restored to the operat- ing condition that existed prior to the attack.) For example, turning off a denial-of-service attack provides instant reversibility with no effort on the part of the attacked computer or its operators. If backups are available, an attack on the integrity of the operating system may take just a few minutes of reloading the operating system. Many effects of kinetic attacks are not as easy to reverse.34 A corollary to this point is that achieving enduring effects from a cyberattack may require repeated cyberstrikes, much as repeated bomb- ing of an airstrip might be necessary to keep it inactive. If so, keeping a 34 For example, the time scales involved may be very different. Restoring the capability of an attacked computer that controls a power distribution system is likely to be less costly or time-consuming compared to rebuilding a power plant damaged by kinetic weapons. (A cyberattack on a computer controlling a power distribution system may even be intended to give the attacker physical control over the system but not to damage it, enabling him to control production and distribution as though he were authorized to do so.)

TECHNICAL AND OPERATIONAL CONSIDERATIONS 113 targeted system down is likely to be much more difficult than bringing it down in the first place, not least because the administrators of the vic- timized system will be guided by the nature of the first attack to close off possible attack paths. Thus, the attacker may have to find different ways to attack if the goal is to create continued effects. That is, depending on the nature of his goals, the attacker must have operational plans that antici- pate possible defense actions and identify appropriate responses should those defense actions occur. 2.3.1.2  Indirect (and Unintended) Effects Although the direct effects of a cyberattack relate to computers, net- works, or the information processed or transmitted therein, cyberattacks are often launched in order to obtain some other, indirect effect—and in no sense should this indirect effect be regarded as secondary or unim- portant. The adversary air defense radar controlled by a computer is of greater interest to the U.S. commander in the field than is the computer itself. The adversary’s generator controlled by a computer is of greater interest to the U.S. National Command Authority than is that computer itself.35 In such cases, the indirect effect is more important than the first- order direct effect. Computers are also integral parts of command and control networks. In this case, the indirect effect sought by compromising such a computer is to prevent or delay the transmission of important messages, or to alter the contents of such messages. Indirect effects—which are often the primary goal of a cyberattack— are generally not reversible. A cyberattack may disrupt a computer that controls a generator. The attack on the computer may be reversible (leav- ing the computer as good as new). But the follow-on effect—the generator overheating and destroying itself—is not reversible. Cyberattacks are particularly well suited for attacks on the psychol- ogy of adversary decision makers who rely on the affected computers, and in this case such effects can be regarded as indirect effects. For example, a single database that is found to be deliberately corrupted, even when controls are in place to prevent such corruption, may call into question the integrity of all of the databases in a system. It is true that all produc- tion databases have some errors in them, and indeed savvy users ought to 35 For example, a test staged by researchers at the Department of Energy’s Idaho Na- tional Laboratories used a cyberattack to cause a generator to self-destruct. Known as Au- rora, the cyberattack was used to change the operating cycle of the generator, sending it out of control. See CNN, “Staged Cyber Attack Reveals Vulnerability in Power Grid,” September 26, 2007, available at http://www.cnn.com/2007/US/09/26/power.at.risk/index.html.

114 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES

adjust for the possibility that their data may be incorrect. But in practice, they often do not. Being made conscious of the fact that a database may have been compromised has definite psychological effects on a user. Thus, the victim may face a choice of working with data that may—or may not—have been corrupted and suffering all of the confidence-eroding consequences of working in such an environment, or expending enormous amounts of effort to ensure that other databases have not been corrupted or compromised.36 A second example might be the clandestine alteration of critical data that causes decision makers to make poor or unfavorable decisions.

The unintended consequences of a cyberattack are almost always indirect effects. For example, a cyberattack may be intended to shut down the computer regulating electric power generation for a Zendian air defense facility. The direct effect of the cyberattack could be the disabling of the computer. The intended indirect effect is that the air defense facility loses power and stops operating. However, if—unknown to the attacker—a Zendian hospital is also connected to the same generation facility, the hospital’s loss of power and ensuing patient deaths are also indirect effects, and also an unintended consequence, of that cyberattack.

2.3.2  Possible Objectives of Cyberattack

Whether a cyberattack is conducted remotely or through close access, what might it seek to accomplish? Some possible objectives include the following, in which an attacker might seek to:

• Destroy a network or a system connected to it. Destruction of a network or of connected systems may be difficult if “destruction” means the physical destruction of the relevant hardware, but is much easier if “destruction” simply means destroying the data stored within and/or eliminating the application or operating systems programs that run on that hardware. For example, an attacker might seek to delete and erase permanently all data files or to reformat and wipe clean all hard disks that it can find. Moreover, destruction of a network also has negative consequences for anything connected to it—power-generation facilities

36 For example, in 1982, the United States was allegedly able to “spike” technology that was subsequently stolen by the Soviet Union. Above and beyond the immediate effects of its catastrophic failure in a Soviet pipeline, Thomas Reed writes that “in time the Soviets came to understand that they had been stealing bogus technology, but now what were they to do? By implication, every cell of the Soviet leviathan might be infected. They had no way of knowing which equipment was sound, which was bogus. All was suspect, which was the intended endgame for the entire operation.” See Thomas C. Reed, At the Abyss: An Insider’s History of the Cold War, Ballantine Books, New York, 2004.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 115 controlled by a network are likely to be adversely affected by a disabled network, for example. • Be an active member of a network and generate bogus traffic. For exam- ple, an attacker might wish to masquerade as the adversary’s national command authority or as another senior official (or agency) and issue phony orders or pass faked intelligence information. Such impersonation (even under a made-up identity) might well be successful in a large orga- nization in which people routinely communicate with others that they do not know personally. Alternatively, the attacker might pretend to be a non-existent agency within the adversary’s government and generate traffic to the outside world that looks authentic. An impersonation objec- tive can be achieved by an attacker taking over the operation of a trusted machine belonging to the agency or entity of interest (e.g., the National Command Authority) or by obtaining the relevant keys that underlie their authentication and encryption mechanisms and setting up a new node on the network that appears to be legitimate because it exhibits knowledge of those keys. • Clandestinely alter data in a database stored on the network. For exam- ple, the logistics deployment plan for an adversary’s armed forces may be driven by a set of database entries that describe the appropriate arrival sequence of various items (food, fuel, vehicles, and so on). A planner relying on a corrupted database may well find that deployed forces have too much of certain items and not enough of others. The planner’s confi- dence in the integrity of the database may also be affected, as discussed in ­Section 2.3.1.2. • Degrade or deny service on a network. An attacker might try to degrade the quality of service available to network users by flooding communi- cations channels with large amounts of bogus traffic—spam attacks can render e-mail ineffective as a communications medium, for example. Denial-of-service attacks might be directed at key financial institutions, for example, and greatly degrade their ability to handle consumer finan- cial transactions. A denial-of-service attack on the wireless network (e.g., a jamming attack) used to control a factory’s operations might well shut it down. Taking over a telecommunications exchange might give an attacker the ability to overwhelm an adversary’s ministry of defense with bogus phone calls and make it impossible for its employees to use its telephones to do any work. A denial-of-service attack might be used to prevent an adversary from using a communications system, and thereby force him to use a less secure method for communications against which a cyber­ exploitation could be successful. • Assume control of a network and/or modulate connectivity, privileges, or service. An attacker might assume control of an Internet service provider in an adversary nation, and decide who would get what services and con-

116 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES nectivity. For example, it might intentionally (and clandestinely) degrade bandwidth to key users served by that ISP, so that transmission of large files (e.g., images or video) would take much longer than expected. If the ISP was used by the Zendian Ministry of Defense to pass targeting infor- mation for prompt action, delays in transmission might cause Zendian forces to miss important deadlines. Finally, cyberattacks can be carried out in conjunction with kinetic attacks, and indeed the effect of a cyberattack may be maximized if used in such a manner. For example, a cyberattack alone might be used to cause confusion. But the ability to cause confusion in the midst of a kinetic attack might well have greater operational significance for the attacker. 2.3.3  Target Identification As with any other weapon, a cyberattack must be directed at specific computers and networks. Even if a nation-state has been identified as being subject to cyberattack, how can the specific computers or networks of interest be identified in a useful manner? (Note also that target identi- fication is often related to attribution, but does not necessarily follow—a computer posing a threat may well be regarded as a target, even if the party controlling it is not known.) In some instances, the target identification process is a manual, intel- ligence-based effort. From a high-level description of the targets of inter- est (e.g., the vice president’s laptop, the SCADA systems controlling the electric generation facility that powers the air defense radar complex 10 miles north of the Zendian capital, the transaction processing systems of the Zendian national bank), a route to those specific targets must be found. For example, a target system with an Internet presence may have an IP address. Knowledge of the system’s IP address provides an initial starting point for attempting to gain entry to the appropriate component of the target system. Sometimes a computer is connected to the Internet indirectly. For example, although it is common for SCADA systems to be putatively “air gapped” from the Internet, utility companies often connect SCADA systems to networks intended for administrative use so that their business units can have direct real-time access to the data provided by SCADA sys- tems to improve efficiency.37 Compromising a user of the administrative 37 For example, in January 2003, the Slammer worm downed one utility’s critical SCADA network after moving from a corporate network, through a remote computer to a VPN connection to the control center LAN. See North American Electric Reliability Council, “SQL Slammer Worm Lessons Learned for Consideration by the Electricity Sector,” June 20, 2003, available at http://www.esisac.com/publicdocs/SQL_Slammer_2003.pdf.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 117 network may enable an attacker to gain access to these SCADA systems, and thus intelligence collection efforts might focus on such users. Target identification information can come from a mix of sources, including open source collection, automated probes that yield network topology, and manual exploration of possible targets. Manual target iden- tification is slow, but is arguably more accurate than automated target identification. Automated target selection is based on various methods of mapping and filtering IP addresses and/or DNS names, for example through pro- grammed pattern matching, network mapping, or querying databases (either public ones, or ones accessible through close-access attacks). The scope of automated attack identification can be limited by the use of network address filtering. Say, for example, an attacker wishes to target a specific military base, or to affect only hosts within a specific corporate network. Various public records exist, such as WHOIS network registra- tion information, BGP network routing information, DNS zone files, and so on, that map Internet-accessible domain names to IP network address blocks. Automated target selection within an internal network is more com- plicated. An internal network may have one gateway to the Internet, but within the perimeter of the internal network may be any arrangement of internal addresses. Once an attacker gains access to a host inside the network, the internal DNS zone tables can be accessed and decoded to identify appropriate targets. This will not always be possible, but in many cases even internal network ranges can be determined with minimal effort by the attacker. It is also possible to perform simple tests, such as attempt- ing to access controlled websites to test the ability to make outbound (i.e., through the firewall) connections38 and thus to determine network membership through the resulting internal/external address mappings. If the attacker has sufficient lead time, a “low and slow” network probe can—without arousing suspicion—generally yield connectivity informa- tion that is adequate for many attack purposes. A cyberattacker may also be interested in selecting targets that fall under a number of administrative jurisdictions. As a rule of thumb, orga- nizations under different jurisdictions are less willing to share informa- tion among themselves than if only a single jurisdiction is affected—and thus a coordinated response to a cyberattack may be less effective than it might otherwise be. Furthermore, different administrative jurisdictions are likely to enforce a variety of security precautions, suggesting that some jurisdictions would be less resistant to an attack than others. 38An illustration is the use of a query to the domain name system as a covert channel.
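To make the automated mapping and filtering described above more concrete, the short sketch below resolves a list of candidate host names and keeps only those whose addresses fall within network blocks believed to belong to the organization of interest, the kind of scoping information obtainable from WHOIS registration or BGP routing data. It is a minimal illustration only: the host names and address blocks are hypothetical placeholders, not drawn from this report, and a real target identification process would combine many more sources and checks.

```python
# Minimal sketch of automated target filtering by network address.
# Candidate names and address blocks are illustrative placeholders and
# may not correspond to real, resolvable hosts.
import socket
import ipaddress

CANDIDATE_HOSTS = ["mail.example.org", "gateway.example.org", "www.example.org"]
IN_SCOPE_BLOCKS = [ipaddress.ip_network("192.0.2.0/24"),
                   ipaddress.ip_network("198.51.100.0/24")]

def resolve(host):
    """Return the IPv4 addresses a name resolves to, or an empty list."""
    try:
        return [info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)]
    except socket.gaierror:
        return []

def in_scope(addr):
    """True if the address falls inside one of the registered blocks of interest."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in IN_SCOPE_BLOCKS)

if __name__ == "__main__":
    for host in CANDIDATE_HOSTS:
        for addr in resolve(host):
            status = "in scope" if in_scope(addr) else "out of scope"
            print(f"{host:25s} {addr:15s} {status}")
```

The filtering step is what keeps an automated selection process from sweeping in hosts outside the intended administrative scope, which bears directly on the collateral damage concerns discussed later in this chapter.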

118 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES 2.3.4  Intelligence Requirements and Preparation Attacks on the confidentiality, integrity, authenticity, and availability attributes require taking advantage of some vulnerability in the targeted system. However, an attacker seeking to exploit a given vulnerability must know—in advance of the attack—whether the targeted system is in fact vulnerable to any particular attack method of choice. Indeed, the success of a cyberattack (to include both achieving the intended goal and minimizing collateral damage) often depends on many details about the actual configuration of the targeted computers and networks. As a general rule, a scarcity of intelligence information regarding pos- sible targets means that any cyberattack launched against them can only be a “broad-spectrum” and relatively indiscriminate or blunt attack. (Such an attack might be analogous to the Allied strategic bombing attacks of World War II that targeted national infrastructure on the grounds that the infrastructure supported the war effort of the Axis.) Substantial amounts of intelligence information about targets (and paths to those targets) are required if the attack is intended as a very precise one directed at a par- ticular system and/or if the attack is to be a close-access attack.39 Con- versely, a lack of such information will result in large uncertainties about the direct and indirect effects of a cyberattack, and make it difficult for commanders to make good estimates of likely collateral damage. Information collection for cyberattack planning differs from tradi- tional collection for kinetic operations in that it may require greater lead time and may have expanded collection, production, and dissemination requirements, as specific sources and methods may need to be positioned and employed over time to collect the necessary information and conduct necessary analyses.40 As illustrations (not intended to be exhaustive), intelligence information may be required on: • The target’s platform, such as the specific processor model; • The platform’s operating system, down to the level of the specific version and even the history of security patches applied to the operating system; • The IP addresses of Internet-connected computers; • The specific versions of systems administrator tools used; 39 To some extent, similar considerations apply to the intelligence required to support precise kinetic weaponry. If a kinetic weapon is intended to be used, and capable of being used, in a precise manner, more information about the target and its environment will be necessary than if the weapon is blunt and imprecise in its effects. 40 Joint Chiefs of Staff, Information Operations, Joint Publication 3-13, U.S. Depart- ment of Defense, Washington, D.C., February 2006, available at www.dtic.mil/doctrine/ jel/new_pubs/jp3_13.pdf.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 119 • The security configuration of the operating system, e.g., whether or not certain services are turned on or off, or what antivirus programs are running; • The physical configuration of the hardware involved, e.g., what peripherals or computers are physically attached; • The specific operators of the systems in question, and others who may have physical access to the rooms in which the systems are kept; • The name of the shipping company that delivers computer compo- nents to the facility; • The telephone numbers of the help desk for the system in question; and so on. That is, the list of information items possibly relevant to cyberattacks in general is quite long indeed. Although not all such information will be necessary for any given attack, some of it will surely be needed depending on the precise nature of the cyberattack required. In some cases, such information may be available from public sources, and these sources are often voluminous with a wealth of useful infor- mation. In other cases, the required information may not be classified but may be available only through non-official sources, such as in the non-shredded office trash of the installation in question. In still other cases, the relevant information may be available through the traditional techniques of human agents infiltrating an organization or interviewing people familiar with the organization. Finally, automated means may be used to obtain necessary intelligence information—an example is the use of automated probes that seek to determine if a system has ports that are open, accessible, and available for use. Intelligence preparation for a cyberattack is often a staged process. For example, stolen login credentials can be used to gain access to com- promised accounts, followed by escalation of privileges to (a) locate and exfiltrate files or (b) gain complete control over the host, allowing further keystroke logging or password extraction to compromise not only other accounts on the same system, but also accounts on other hosts. This in turn extends the attacker’s reach (as seen in the Stakkato case described in Appendix C) and enables the attacker to gather more information that might support an attack. Maximizing the ability to take advantage of these stolen credentials becomes a matter of database entry and process- ing to automate the tasks. A cyberattacker will also benefit from knowledge about the adver- sary’s ability to respond in a coordinated manner to a widespread attack. At the low end of this continuum (Table 2.2), the adversary is only mini- mally capable of responding to attacks even on isolated or single systems, and has no capability at all to take coordinated action against attacks on

120 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES

TABLE 2.2  Levels of Intrusion Response

Level  Victim Posture                       Characteristic Actions
0      Unaware                              None: Passive reliance on inherent software capabilities
1      Involved                             Uses and maintains antivirus software and personal firewalls
2      Interactive                          Modifies software and hardware in response to detected threats
3      Cooperative                          Implements joint traceback with other affected parties
4      Non-cooperative (active response)    Implements invasive tracebacks, cease-and-desist measures, and other actions up to retaliatory counterstrikes

SOURCE: David Dittrich, On the Development of Computer Network Attack Capabilities, work performed for the National Research Council under agreement D-235-DEPS-2007-001, February 3, 2008.

multiple systems. At the high end of this continuum, the adversary can integrate information relating to attacks on all of the systems under its jurisdiction, develop a relatively high degree of situational awareness, and respond in an active and coordinated manner.41

Ultimately, the operational commander must make an assessment about whether the information available is adequate to support the execution of a cyberattack. Such assessments are necessarily informed by judgments about risk—which decreases as more information is available. Unfortunately, there is no objective, quantitative way to measure the adequacy of the information available, and also no way to quantitatively ascertain the increase in risk as the result of less information being available (the discussion in Section 2.3.6 elaborates on sources of uncertainty). In practice, the best way to adapt to a lack of detailed information may be to ensure the availability of skilled and adaptive personnel who can modify an attack as it unfolds in response to unanticipated conditions.

Lastly, the fact that considerable intelligence information may be required to conduct a specific targeted attack points to a possible defen-

41 Even when the capacity and resources exist to be able to operate at a high response level, there are many reasons why system owners may not respond in a cooperative manner to a widespread computer attack. They may not be capable of immediately responding, may lack adequate resources, may be unable to physically attend to the compromised host, or may even speak a different language than the person reporting the incident. There may be active complicity within the organization, or a willful disregard of reports that allow the attacker to continue unabated.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 121 sive strategy—if one knows that a cyberattack is imminent, a defender may take steps to invalidate the intelligence information that the attacker may have collected. Such steps may be internally originated (e.g., chang- ing one’s own system configuration and defensive posture) or externally originated (e.g., downloading a security update provided by a vendor). If these steps are successful (and it may well be possible to change defen- sive postures rapidly), such action may force the attacker to postpone or abandon his attack or to conduct an attack that is much less precise and focused and/or much less certain in outcome than it would otherwise have been. (These points are far less relevant if the attacker is interested in a “general” attack against broad swaths of an adversary’s computers and networks—in such an attack, the targets of interest are, by definition, the most weakly defended ones.) 2.3.5  Effects Prediction and Damage Assessment In the kinetic world, weapons (or, more precisely, munitions) are aimed against targets. Predicting the effect of a weapon on a given target is obviously important to operational planners, who must decide the most appropriate weapons-to-target matching. In general, characteristics of the weapon, such as explosive yield, fusing, likely miss distances (more pre- cisely, Circular Error Probable—the distance from the target within which the weapon has a 50 percent chance of striking), and so on are matched against characteristics of the target (such as target hardness, size, and shape), and its surrounding environment (e.g., terrain and weather). Damage assessment for physical targets is conceptually straightforward— one can generally know the results of a strike by visual reconnaissance, although a task that is straightforward in principle may be complicated by on-the-ground details or adversary deception. For example, the weather may make it impossible to obtain visual imagery of the target site, or the adversary may be able to take advantage of the delay between weapons impact and damage assessment to create a false impression of the dam- age caused. There are similar needs for understanding the effect of cyberweapons and assessing damage caused by cyberweapons. But munitions effects and damage assessment are complex and difficult challenges, because the effectiveness of cyberweapons is a strong function of the intelligence available. Munitions effects in the kinetic world can often be calculated on the basis of computational models that are based on physics-based algo- rithms. That is, the fundamental physics of explosives technology and of most targets is well known, and so kinetic effects on a given target can be calculated with acceptable confidence. Thus, many of the uncertainties in

122 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES kinetic targeting can be theoretically calculated and empirically validated (e.g., at weapons effects test ranges), and the remaining uncertainties relate to matters such as target selection and collocation of other entities with the intended target. But there is no comparable formalism for understanding the effects of cyberweapons. The smallest change in the configuration and interconnec- tions of an IT system can result in completely different system behavior, and the direct effects of a cyberattack on a given system may be driven by the behavior and actions of the human system operator and the specific nature of that system as well as the intrinsic characteristics of the cyber- weapon involved. Furthermore, these relatively small and/or obscure and/or hidden characteristics are often important in cyber targeting, and information about these things is difficult to obtain through remote intel- ligence collection methods such as photo reconnaissance, which means that substantial amounts of relevant information may not be available to the attacker. An example of an error causing unexpected behavior in a cyberattack is the Sapphire/Slammer worm of January 2003. Although the Sapphire worm was the fastest computer worm in history (infecting more than 90 percent of vulnerable hosts within 10 minutes), a defective random num- ber generator significantly reduced its rate of spread.42 (The worm tar- geted IP addresses chosen at random, and the random number generator produced numbers that were improperly restricted in range.) In a military attack context, a cyberattack that manifested its effects more slowly than anticipated might be problematic. An additional complication to the prediction problem is the possibil- ity of cascading effects that go further than expected. For example, in ana- lyzing the possible effects of a cyberattack, there may be no good analog to the notion of a lethal radius within which any target will be destroyed. When computer systems are interconnected, damage to a computer at the NATO Defense College in Italy can propagate to a computer at the U.S Air Force Rome Laboratory in New York—and whether or not such a propagation occurs depends on a detail as small as the setting on a single switch, or the precise properties of every device connected at each end of the link, or the software characteristics of the link itself. Engineers often turn to test ranges to better understand weapons effects, especially in those instances in which a good theoretical under- standing is not available. A weapons test range provides a venue for test- ing weapons empirically—sending them against real or simulated targets and observing and measuring their effects. Such information, suitably 42 David Moore, “The Spread of the Sapphire/Slammer Worm,” undated publication, available at http://www.caida.org/publications/papers/2003/sapphire/sapphire.html.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 123 refined, is then made available to users to assist them in the weapons selection process. A certain kind of cyberweapon may need to be tested against different versions of operating systems (and indeed, even against different builds of the same operating system), different configurations of the same oper- ating system, and even against different operators of that system. To test for cascading effects, multiple computers must be interconnected. Thus, realistic test ranges for cyberweapons are inevitably complex. It is also quite difficult to configure a cyber test range so that a simulation will provide high confidence that a given cyberattack will be successful. Some analysts take from these comments that the effects of a cyber- attack are impossible to predict. As a blanket statement, this claim is far overstated. It is true that the launch of a worm or virus may go on to infect millions of susceptible computers, and some of the infected machines might happen to control an electric power grid or a hospital information system. The media often report such events as if they were a surprise—and indeed it may well have been a surprise that these particu- lar machines were infected. Nevertheless, after-the-fact analysis of such cyberattacks sometimes leads to the conclusion that the party launching the attack could have predicted the number of susceptible machines fairly accurately.43 Indeed, more customized cyberattacks are quite possible, depend- ing on the goal of the attacker. A software agent introduced into a target machine could, in principle, search its environment and remain resident only if that search found certain characteristics (e.g., if the machine had more than 10 files containing the phrases “nuclear weapon” and “Wash- ington D.C.” and had an IP address in a particular range, which might translate into the nation in which the machine was located). Nevertheless, high degrees of customization may require large amounts of information on machine-identifiable characteristics of appro- priate targets. Such information may be obtained through traditional intelligence collection methods. In some cases, a scouting agent under- takes the initial penetration, explores the targeted machine or network to obtain the necessary information, and then retrieves the appropriate exploit from its controller to carry out the next step in the attack. To illustrate, the precise geographical location of a computer is often not available to a software agent running on it, and may indeed be impos- 43 These comments presume that the attack software is written correctly as the at- tacker intended—mistakes in the worm or virus may indeed lead to unintended effects. A classic example of a worm written with unintended consequences is the Morris worm. See Brendan P. Kehoe, “Zen and the Art of the Internet,” available at http://www.cs.indiana. edu/docproject/zen/zen-1.0_10.html#SEC91.

124 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES sible for the agent to discover. On the other hand, its topological relation- ship to other systems may be the variable of significance, which may or may not be as good an indicator of function or importance, and topologi- cal relationships are likely to be discoverable by an agent. An issue for uncustomized cyberattacks is “blowback,” which refers to a bad consequence returning to the instigator of a particular action. In the cyberattack context, blowback may refer to direct damage caused to one’s own computers and networks as the result of a cyberattack that one has launched. For example, if the United States launched a cyberattack against an adversary using a rapidly multiplying but uncustomized worm over the Internet, the worm might return to adversely affect U.S. comput- ers and networks. It might also refer to indirect damage—a large-scale U.S. cyberattack against a major trading partner’s economic infrastructure might have effects that could harm the U.S. economy as well. Another class of weapons effects might be termed strategic. Tactical effects of using a weapon manifest themselves immediately and gener- ally involve destruction, disabling, or damage of a target—tactical attacks seek an immediate effect on an adversary and its military forces. By con- trast, strategic effects are less tangible and emerge over longer periods of time—strategic attacks are directed at adversary targets with the intent or purpose of reducing an adversary’s warmaking capacity and/or will to make war against the United States or its allies, and are intended to have a long-range rather than an immediate effect on an adversary. Strategic targets include but are not limited to key manufacturing systems, sources of raw material, critical material, stockpiles, power systems, transporta- tion systems, communication facilities, and other such systems. 44 Most importantly, strategic effects are often less predictable than tacti- cal effects. For instance, recall the history of the German bombing of Lon- don in World War II. Originally believed by the Germans to be (among other things) a method of reducing civilian support for the British govern- ment, it proved to have the opposite effect.45 As a new modality of offen- sive action, the strategic impact of cyberattack on a population would be even harder to predict in the absence of empirical evidence. As for assessing damage caused by a cyberattack, note first that the damage due to a cyberattack is usually invisible to the human eye. To ascertain the effects of a computer network attack over the Internet, an 44 See DOD definitions for “strategic operations” and “strategic air warfare,” in Joint Chiefs of Staff, Dictionary of Military and Associated Terms, Joint Publication 1-02, Department of Defense, Washington, D.C., April 12, 2001 (as amended through October 17, 2008), avail- able at http://www.dtic.mil/doctrine/jel/new_pubs/jp1_02.pdf. 45 See, for example, L. Morgan Banks and Larry James, “Warfare, Terrorism, and Psy- chology,” pp. 216-222 in Psychology of Terrorism, Bruce Bongar et al., eds., Oxford University Press, 2007.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 125

attacker might be able to use Internet tools such as ping and traceroute. These tools are commonly available, and they test the routing from the sending machine to the target machine. Ping is intended to measure the round-trip time for a packet to travel between the sending machine and the target machine—and if the target machine is down, ping will return an error message saying it could not reach the target machine. Traceroute reports on the specific path that a given packet took from the sending machine to the target machine—and if the target machine is down, traceroute returns a similar error message. Thus, the receipt of such an error message by the attacker may indicate that an attack on the target has been successful. But it also may mean that the operators of the target machine have turned off the features that respond to ping and traceroute, so as to lower their vulnerability to criminal hackers or to mislead the damage assessor about the effectiveness of an attack.

More generally, a cyberattack is—by definition—intended to impair the operation of a targeted computer or network. But from a distance, it can be very difficult to distinguish between the successful outcome of a cyberattack and a faked outcome. For example, an attack may be intended to disrupt the operation of a specific computer. But the attacker is faced with distinguishing between two very different scenarios. The first is that the attack was successful and thus that the targeted computer was disabled; the second is that the attack was unsuccessful and also was discovered, and that the adversary has turned off the computer deliberately—and can turn it on again at a moment’s notice.

How might this problem be avoided? Where ping and traceroute as tools for damage assessment depend on the association of a damaged machine with a successful cyberattack, an alternative approach might call for the use of in-place sensors that can report on the effects of a cyberattack. Prior to a cyberattack intended to damage a target machine, the attacker plants sensors of its own on the target machine. These sensors respond to inquiries from the attacker and are programmed to report back to the attacker periodically. These sensors could also be implanted at the same time as attack capabilities are installed on the target machine. Such sensors could report on the outcomes of certain kinds of cyberattacks, and thus in some situations could reduce the uncertainty of damage assessment.

It may also be possible to use non-cyber means for damage assessment of a cyberattack. For example, if a cyberattack is intended to cause a large-scale power outage in a city, its success or failure may be observable by an individual in that city reporting back to the attackers via satellite phone or by an indigenous news network reporting on events within the country. But if the intent of the cyberattack is to turn off the power to a specific radar installation in the nation’s air defense network

126 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES at a specific time, it will be difficult to distinguish between a successful attack and a smart and wily defender who has detected the attack and shut the power down himself but who is prepared to turn it back on at a moment’s notice. The bottom line on damage assessment is that the state of the art in damage assessment techniques for cyberattacks is still primitive in com- parison to similar techniques for kinetic attacks. Cyberattackers must therefore account for larger amounts of uncertainty in their operational planning than their physical-world counterparts—and thus may be inhib- ited from relying solely or primarily on cyberattack for important mis- sions. 2.3.6  Complexity, Information Requirements, and Uncertainty From an analytical perspective, it is helpful to separate the uncer- tainty of effects resulting from cyberattack into several components: • Uncertainties that result from any military operation using any kind of weapon. All military operations are uncertain in outcome to some extent, and all run some risk of incurring collateral damage or having unintended consequences. The availability of intelligence information that is more accurate and more complete reduces the uncertainty inherent in an opera- tion, and it is likely that the necessary intelligence for cyber targets will be less available than for most kinetic targets. • Uncertainties that result from the lack of experience with a new weapon. Additional uncertainties arise when new weapons are used because their operational effects are not well known or well understood. For exam- ple, the actual death tolls associated with the Hiroshima and Nagasaki bombings far exceeded the predicted tolls because only blast effects were taken into consideration. (Scientists understood that nuclear weapons had effects other than blast, but they did not know how to estimate their magnitude.) The same has been true for most of the history of U.S. plan- ning for nuclear strikes.46 • Uncertainties that result from unanticipated interactions between civilian and military computing and communications systems. Because much of the IT infrastructure is shared for military and civilian purposes, disruptions to a military computer system or network may propagate to a civilian system or network. (In some cases, an adversary may deliberately intermingle mili- tary and civilian assets in order to dissuade the attacker from attacking 46 Lynn Eden, Whole World on Fire: Organizations, Knowledge, and Nuclear Weapons Devasta- tion, Cornell University Press, Ithaca, N.Y., 2004.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 127 and thereby causing additional collateral damage.47) But without detailed knowledge of the interconnections between military and civilian systems, cascading effects may well occur. As a rule, planning for cyberattack can involve a much larger range of choices and options than planning for most traditional military opera- tions. For example, cyberattack planners must account for a wide range of time and space dimensions. The relevant time scales can range from tenths of a second (a cyberattack may interfere with the timing of a real- time process control system) to years (a cyberattack may seek to implant “sleeper” capabilities in an adversary network that might be activated many years hence). And the systems targeted may be dispersed around the globe or concentrated in a facility next door. All of these factors increase the complexity of planning a cyberattack. One of the most difficult-to-handle aspects of a cyberattack is that in contrast to a kinetic attack that is almost always intended to destroy a physical target, the desired effects of a cyberattack are almost always indirect, which means that what are normally secondary effects are in fact of central importance. In general, the planner must develop chains of causality—do X, and Y happens, which causes Z to happen, which in turn causes A to happen. Also, many of the intervening events between initial cause and ultimate effect are human reactions (e.g., in response to an attack that does X, the network administrator will likely respond in way Y, which means that Z—which may be preplanned—must take response Y into account). Moreover, the links in the causal chain may not all be of similar character—they may involve computer actions and results, or human perceptions and decisions, all of which combine into some outcome. Understanding secondary and tertiary effects often requires highly 47 The same is true in reverse—a cyberattack on the civilian side may well result in negative effects on military computers. This point was illustrated by the “I Love You” virus (also referred to as the “Love Bug”), released in May 2000. Press releases and articles from the Department of Defense indicate that some unclassified DOD systems, and even a few classified systems, were infected with this virus. See Jim Garamone, “Love Bug Bites DoD, Others,” American Forces Press Service, May 4, 2000, available at http://www.defenselink. mil/news/newsarticle.aspx?id=45220; “Statement by Assistant Secretary of Defense (Public Affairs) Ken Bacon,” U.S. Department of Defense, May, 2000, available at http://findarticles. com/p/articles/mi_pden/is_200005/ai_2892634075. Testimony to a congressional subcom- mittee from the Government Accounting Office shortly after the virus struck noted the impacts to DOD and many other federal agencies, in addition to the impacts on the private sector. See General Accounting Office, “‘ILOVEYOU’ Computer Virus Highlights Need for Improved Alert and Coordination Capabilities—Statement of Jack L. Brock, Jr.,” Testimony Before the Subcommittee on Financial Institutions, Committee on Banking, Housing and Urban Affairs, U.S. Senate, GAO/T-AIMD-00-181, May 18, 2000.

128 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES specialized knowledge. For example, an attack on an electric power grid will require detailed knowledge about the power generation plant of interest—model numbers, engineering diagrams, and so on. Thus, plan- ning a cyberattack may also entail enormous amounts of intellectual coordination among many different individuals (likely scattered through many organizations). The result is often a complex execution plan, and complex execution plans have many ways to go wrong—the longer a causal chain, the more uncertain its ultimate outcomes and conclusions. This is not simply a mat- ter of unintended consequences of a cyberattack, though that is certainly a concern. The point also relates to the implications of incomplete or over- looked intelligence. For example, it may be that a cyberattack is entirely successful in disabling the computer controlling an air defense radar, but also, as it turns out, that there is a backup computer in place that was not mentioned in the intelligence reports used to plan the attack. Or a con- nection between two systems that is usually in place is disconnected on the day of the attack because of a maintenance schedule that was changed last week, and thus was unknown to the attack planners—resulting in the inability of the attacker to destroy the backup computer. One way of coping with uncertainty in this context is to obtain feed- back on intermediate results achieved through monitoring the execution of the initial plan and then applying “mid-course corrections” if and when necessary. The need to apply mid-course corrections means that contingency plans must be developed, often in advance if mid-course corrections need to be applied rapidly. The need to develop contingency plans in advance adds to the complexity of the planning process. In practice, execution monitoring may be difficult to undertake. The attacker needs to know outcomes of various intermediate steps in the causal chain as well as what responses the victim has made at various stages of the attack, so that he may take appropriate compensating action. The difficulties of collecting such information are at least as hard as those of undertaking damage assessment for the ultimate outcome. 2.3.7  Rules of Engagement Rules of engagement define the appropriate use of force by specifying the circumstances under which various offensive actions may be under- taken and whose authority is needed to order such actions to be taken. In the physical world, the rules of engagement may specify, for example, that individuals with guns have the authority to fire them only when they are fired upon first, and that they may never fire when the shooter is running away from them. Alternatively, rules of engagement may allow the target- ing of tracked but not wheeled vehicles, or of vehicles but not personnel.
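As noted earlier in this chapter, automation can turn rules of engagement into operational parameters that constrain what an attack system will do. The sketch below is a hypothetical and greatly simplified encoding, not a description of any real system; it shows only the general idea that a proposed action can be checked mechanically against the target classes, effects, and authority levels that the applicable rules permit.

```python
# Illustrative sketch (not an actual system) of rules of engagement encoded
# as machine-checkable operational parameters. Categories and field names
# are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class RuleOfEngagement:
    allowed_target_classes: frozenset   # e.g., {"air-defense", "military-C2"}
    allowed_effects: frozenset          # e.g., {"degrade", "deny"}
    required_authority: int             # minimum approval level needed

@dataclass(frozen=True)
class ProposedAction:
    target_class: str
    effect: str
    authority_level: int                # approval level actually obtained

def permitted(action: ProposedAction, roe: RuleOfEngagement) -> bool:
    """Release an action only if target, effect, and authority all comply."""
    return (action.target_class in roe.allowed_target_classes
            and action.effect in roe.allowed_effects
            and action.authority_level >= roe.required_authority)

roe = RuleOfEngagement(frozenset({"air-defense"}), frozenset({"degrade", "deny"}), 3)
print(permitted(ProposedAction("air-defense", "deny", 3), roe))    # True
print(permitted(ProposedAction("power-grid", "destroy", 5), roe))  # False
```

In practice such automated checks would supplement, not replace, human judgment in the approval chain.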

TECHNICAL AND OPERATIONAL CONSIDERATIONS 129 Rules of engagement specify what tools may be used to conduct a cyberattack, what targets may be attacked, what effects may be sought, and who may conduct a cyberattack under what circumstances. Rules of engagement are formulated with different intent and for different pur- poses depending on the interests at stake, and are discussed at greater length in Chapters 3-5 (on the military, the intelligence agencies, and law enforcement). 2.3.8  Command and Control 2.3.8.1  Command and Control—Basic Principles According to the DOD,48 command and control (C2) refers to the exercise of authority and direction by a properly designated commander over assigned and attached forces in the accomplishment of the mission. Command and control functions are performed through an arrangement of personnel, equipment, communications, facilities, and procedures employed by a commander in planning, directing, coordinating, and controlling forces and operations in the accomplishment of the mission. In principle, C2 involves passing to weapons operators information such as the targets to be attacked, the time (which may be upon receipt) that an attack is to be launched, the nature of the attack to be launched, and the time (which may be upon receipt) that an attack in progress must be stopped. (Note, however, that not all weapons are designed to implement such C2 capabilities—sometimes, a weapon is designed to be used and then “forgotten”; that is, once launched or activated, it cannot be recalled and will do whatever damage it does without further intervention.) C2 requires situational awareness—information about the location and status of the targets of interest and friendly or neutral entities that should not be attacked (Section 2.3.3) and their characteristics (Section 2.3.4), decision making that results in an appropriate course of action, communication of the desired course of action to the weapons available (Section 2.3.8.2, below), and damage assessment that indicates to the deci- sion maker the results of the actions taken (Section 2.3.5). C2 becomes more complex when more weapons, more targets, or more friendly/neutral entities must be taken into account in decision making. For example, on a complex battlefield, issues of coordination often arise. A second strike on a given target may or may not be necessary, depending on the outcome of the first strike. Target A must be attacked 48DOD JP1-02, Dictionary of Military and Associated Terms, April 12, 2001 (as amended through September 30, 2008), available at http://www.dtic.mil/doctrine/jel/new_ pubs/ jp1_02.pdf.

130 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES and neutralized before Target B is attacked, because Target A provides defenses for Target B. A weapon launched to attack one target may inad- vertently interfere with the proper operation of another weapon launched to attack that same target—avoiding this problem is known as deconflic- tion. (When cyberattack is concerned, planners may well have to contend with cyberattacks launched by multiple agencies, multiple nations (e.g., allies), and even private citizens taking unauthorized actions.) All of these problems must be addressed by those planning an attack. (Section 3.6 speculates on how the DOD might address these problems.) C2 and coordination issues are complex enough in the context of cyber activities alone. But they are multiplied enormously when cyberat- tacks are conducted as part of an integrated campaign (i.e., a campaign that integrates all U.S. military capabilities, not only cyber capabilities) against an adversary. Many analysts also include matters such as damage assessment, attack assessment, and tactical warning under the rubric of command and control; this report addresses these matters in Section 2.3.5 (damage assessment), Section 2.4.1 (tactical warning and attack assessment), and Section 2.4.2 (attribution). 2.3.8.2  Illustrative Command and Control Technology for Cyberattack A cyberattack often depends on a program running on the computer being attacked—what might be called an attack agent. The C2 function is used to convey or transmit orders about what to do, when to do it, and when to stop doing it. C2 methods can include: • Direct (encrypted or clear text) connections to and from controller hosts or peers in peer-to-peer networks; • Covert channels using crafted packets that appear to be innocuous protocols, controlling specific header fields in packets, or using stegano- graphic techniques that embed commands in “normal” traffic on standard protocols (e.g., embedded characters in HTTP requests); and • Embedded commands in files retrieved from web servers, instant messages, or chat messages on Internet Relay Chat (IRC) servers. Direct C2 communication flows can often be detected using standard intrusion detection signature methods. Encryption may obscure the con- tent of the information flowing in C2 channels, but it is sometimes not very hard to identify a C2 channel by looking at flow history to a host that was recently engaged in a DDOS attack or at outbound scanning activity. If a central C2 method is used, and it is easy to identify the C2 server, it can be possible to identify the attacker and to mitigate the attack.
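The claim that direct C2 flows can often be identified from flow history can be illustrated with a simple heuristic. The sketch below flags source/destination pairs whose outbound connections recur at suspiciously regular intervals, the kind of "beaconing" pattern that even encrypted C2 traffic can exhibit. The thresholds, record format, and addresses are illustrative assumptions and are not taken from any particular intrusion detection product.

```python
# Minimal sketch of a flow-history heuristic: outbound connections that
# recur at highly regular intervals (low jitter) to the same destination
# are flagged as possible C2 beaconing. Thresholds are illustrative.
from statistics import mean, pstdev
from collections import defaultdict

def beacon_candidates(flows, min_count=6, max_jitter_ratio=0.1):
    """flows: iterable of (src, dst, timestamp_seconds) tuples.
    Returns (src, dst) pairs whose inter-connection intervals are very regular."""
    by_pair = defaultdict(list)
    for src, dst, ts in flows:
        by_pair[(src, dst)].append(ts)

    suspicious = []
    for pair, times in by_pair.items():
        if len(times) < min_count:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg < max_jitter_ratio:
            suspicious.append(pair)
    return suspicious

# Example: a host phoning home roughly every 300 seconds stands out.
flows = [("10.0.0.5", "203.0.113.9", 300.0 * i + (0.5 if i % 2 else 0.0))
         for i in range(12)]
print(beacon_candidates(flows))   # [('10.0.0.5', '203.0.113.9')]
```

Heuristics of this kind help explain the countermeasures described next, such as peer-to-peer C2 and time-delayed command execution, which are intended precisely to break up these regularities.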

Recently, distributed attack tool authors have sought to employ stronger cryptography, including use of public key exchange algorithms to generate per-session encryption keys, as well as to use peer-to-peer mechanisms for communication to conceal the complete distributed attack network, or to use time-delayed command execution to temporally separate C2 traffic from hostile actions like DDOS attacks that trigger alarms. Stepping stones, or relays, can further obscure the traceback from a targeted computer to the keyboards of attackers. Even as far back as 2001, the Leaves worm used both strong encryption and synchronized infected hosts' clocks to support synchronized, time-delayed execution of commands.49

Even without centralized command and control, different attack agents can coordinate their actions. For example, an agent active on one adversary computer can delay its action until it receives a signal from a second agent on another computer that the second agent has completed its work. Or, for purposes of impeding an adversary's attempts to detect an attack, multiple agents might be implanted on a target computer with a mix of functionalities—coordination among these agents could be as effective as—or more so than—endowing a single agent with all of these functions.

Coordination among different attack agents may be particularly important if and when the same computers have been targeted by different organizations. Without careful planning, it is possible that agents may be working at cross-purposes with each other. For example, one agent may be trying to jam a communications channel that is used clandestinely by another agent.

C2 channels can also be used to update the capabilities of attack software already in place. Indeed, as long as the channel is active, such software can be upgraded or even replaced entirely—which means that an attack plan can be easily refined as more information is gained about the target. Defensive plans to prevent counterattack can also be changed in real time, so that (for example) a controller can itself be moved around (see Box 2.2).

Lastly, an attack agent will often need ways to transmit information to its controller for purposes such as damage assessment, report-back status checking, and specifying its operating environment so that a more customized attack can be put into place. For such purposes, an attacker may use outbound communications channels that are not usually blocked, such as port 80 (associated with the HTTP protocol) or DNS queries.

49 CERT Coordination Center, "CERT Incident Note IN-2001-07: W32/Leaves: Exploitation of Previously Installed SubSeven Trojan Horses," July 2001. See http://www.cert.org/incident_notes/IN-2001-07.html.
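Because outbound DNS is rarely blocked, resolver logs are one place a defender can look for the kind of report-back channel described above. The sketch below is hypothetical: the log format, thresholds, and entropy cutoff are assumptions rather than anything drawn from the report. It flags clients that issue many long, high-entropy query names, a pattern consistent with encoded data leaving a network inside DNS queries.

```python
# Illustrative sketch: surface hosts whose DNS query patterns resemble a covert
# report-back channel. All thresholds and the (client_ip, query_name) log format
# are assumptions for the example.
import math
from collections import defaultdict

def entropy(name):
    counts = defaultdict(int)
    for ch in name:
        counts[ch] += 1
    return -sum((c / len(name)) * math.log2(c / len(name)) for c in counts.values())

def suspicious_dns_clients(queries, min_long_queries=200, name_len=40, min_entropy=3.5):
    """queries: iterable of (client_ip, query_name) pairs taken from resolver logs."""
    per_client = defaultdict(list)
    for client, name in queries:
        per_client[client].append(name)

    flagged = []
    for client, names in per_client.items():
        long_names = [n for n in names if len(n) >= name_len]
        if len(long_names) < min_long_queries:
            continue  # too few long names to suggest a sustained channel
        avg_entropy = sum(entropy(n) for n in long_names) / len(long_names)
        if avg_entropy >= min_entropy:  # encoded or encrypted data tends to look random
            flagged.append((client, len(long_names), round(avg_entropy, 2)))
    return flagged
```

Content distribution networks and some security products also generate long machine-made names, so results from such a heuristic are starting points for analysis, not evidence of exfiltration.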

132 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES 2.3.8.3  The Role of Human Expertise and Skill In large part because the intelligence information on a cyber target is likely to be incomplete or inaccurate in some ways, the initial stages of a cyberattack may well be unsuccessful. Thus, for a cyberattack to suc- ceed, the attack plan may need to be modified in real time as unexpected defenses and unanticipated problems are encountered. Some cyberattacks can be “fire-and-forget”—especially those attacks for which the target set is relatively large and individual successes or failures are not particularly relevant. But if a cyberattack is very specifically targeted, adaptability and flexibility on the part of the attacker may well be needed. 2.3.9  Coordination of Cyberattack Activities with Other Institutional Entities If a cyberattack is launched by the United States, coordination is likely to be necessary in three domains—within the U.S. government, with the private sector, and with allied nations. • Coordination within the U.S. government. As noted in Chapters 3 and 4, a number of U.S. government agencies have interests in cyberattack. It is easy to imagine that a lack of interagency coordination might lead to conflicts between those wanting to exploit an adversary network and those wanting to shut it down. Policy conflicts over such matters are not new with the advent of cyberattack, but technical deconfliction issues arise as well, as different agencies might conduct cyber operations (either cyberattack and/or cyberexploitation) that might interfere with each other. In this connection, the committee has heard informally of potential struggles between the U.S. Air Force and the National Security Agency for institutional primacy in the cyberattack mission. In addition, under some circumstances, it may be necessary to consult with the congressional leadership and/or the relevant congressional committees, as discussed in Section 6.2. • Coordination with the private sector. Because so much IT is designed, built, deployed, and operated by the private sector, some degree of coor- dination with the private sector would not be surprising in the planning and execution of certain kinds of cyberattack. For example, a cyberattack may travel over the Internet to an adversary computer, and spillover effects (such as reductions in available bandwidth) may occur that affect systems in the private sector. Or a U.S. cyberattack may prompt an adversary counterattack against U.S. systems and networks in the private sector. Or a U.S. cyberattack against an adversary transmitted through a commercial Internet service provider might be detected (and perhaps suppressed) by that provider, believing it to be the cyberattack of a crimi-

TECHNICAL AND OPERATIONAL CONSIDERATIONS 133 nal or acting on the protest of the targeted network. Such possibilities might suggest that the defensive posture of U.S. private sector systems and networks be strengthened in anticipation of a U.S. cyberattack (or at least that relevant commercial parties such as ISPs be notified), but this notion raises difficult questions about maintaining operational security for the U.S. attack. • Coordination with allied (or other) nations. Issues of agency coordina- tion and coordination with the private sector arise with allied nations as well, since allied nations may also have government agencies with inter- ests in cyberattack activities and private sector entities whose defensive postures might be strengthened. Another issue is the fact that a cyberat- tack of the United States on Zendia might have to be transmitted over facilities controlled by third countries, and just as some nations would deny the United States military overflight rights, they may also seek to deny the United States the rights to transmit attack traffic through their facilities. Routing traffic to avoid certain countries is sometimes possible, but may require a significant amount of pre-planning and pre-positioning of assets depending on the nature of the attack to be launched. 2.3.10  A Rapidly Changing and Changeable Technology and Operational Environment for Cyberattack The technological and operational environment in which cyberattacks may be conducted is both highly malleable and subject to very rapid change. Consider first the underlying technologies. Historical experi- ence suggests that it takes only a decade for the technological substrate underlying IT applications to change by one, two, or three orders of magnitude—processor power, cost, bandwidth, storage, and so on. Then factor in trends of growing numbers of IT applications in both stand-alone and embedded systems and increasing connectivity among such applica- tions. These points indicate that the overall IT environment changes on a time scale that is short compared to that of the physical world. IT-based applications also evolve, but here the story is more mixed. Because such applications depend on knowledge and insight about how best to exploit technology, the march of progress is not nearly as consis- tent as it has been with the underlying technologies, and many difficult problem domains in IT have been difficult for many years. Of particular relevance to cyberattack is the problem of technical attack attribution (Sec- tion 2.4.2), which has bedeviled the cybersecurity community for many years.50 Many cyberattack capabilities are themselves afforded by various 50 For more discussion of this point, see National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.

134 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES IT-based applications, and may—or may not—change dramatically in the future, especially in relation to the defensive capabilities available to potential victims. That is, offensive capabilities are likely to grow for all of the reasons described in Section 2.2, but defensive capabilities are also likely to grow because IT vendors are placing more emphasis on security to meet the growing criminal threat. A second important point is that the security configuration of any given cyber target is also subject to very rapid change, and the vulner- abilities on which cyberattacks depend are sometimes easily fixed by the defender. A system administrator can close down unused access points with a few keystrokes. A patch can repair a security flaw only a few seconds after it is installed. A new security scan can discover and eliminate a malicious software agent in a few minutes. Responding to a security warning, an administrator may choose to strengthen security by deliberately degrading system functionality (e.g., reducing backward compatibility of applications that may also be associated with greater vulnerability). Even worse from the standpoint of the attacker, all such changes in security configuration can occur without notice. (Such changes are analo- gous to randomly changing the schedule of a guard.) Thus, if a specific computer system is to be targeted in a cyberattack, the attacker must hope that the access paths and vulnerabilities on which the cyberattack depends are still present at the time of the attack. If they are not, the cyberattack is likely to fail. (These considerations are less significant for a cyberattack in which the precise computers or networks attacked or compromised are not important. For example, if the intent of the cyberattack is to disable a substantial number of the desktop computer systems in a large organiza- tion, it is of little consequence that any given system is invulnerable to that attack—what matters is whether most of the systems within that organization have applied the patches, closed down unneeded access points, and so on.) Finally, if a cyberattack weapon exploits a vulnerability that is eas- ily closed, a change in security configuration or posture can render the weapon ineffective for subsequent use. This point is significant because it means that an attacker may be able to use a given cyberattack weapon only once or a few times before it is no longer useful. That is, certain cyberweapons may well be fragile. 2.4  Characterizing an Incoming Cyberattack As noted in Chapter 1, the definition of active defense involves launch- ing a cyberattack as a defensive response to an incoming cyberattack.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 135 However, before any such response occurs, the responding party must characterize the incoming attack accurately enough that its response is both appropriate and effective. Even if the victim of an incoming cyberattack does not plan to launch a cyberattack in response, it is important to char- acterize the incoming attack for forensic and law enforcement purposes. 2.4.1  Tactical Warning and Attack Assessment Tactical warning and attack assessment (TW/AA) refer to the pro- cesses through which the subject of an attack is alerted to the fact that an attack is in fact in progress and made aware of the scale, scope, and nature of an attack. In the strategic nuclear domain, early TW/AA, including, for example, information on the number of launches and their likely targets, would be provided for the United States by a network of satellites that detected adversary missiles just after launch. Moreover, the time scales of launch from Soviet territory to impact on U.S. soil (roughly 30 minutes in the case of ICBMs, 10-15 minutes in the case of submarine-launched bal- listic missiles) were a primary determinant of a U.S. command and control system to assess an attack and determine the appropriate response. For a cyberattack, even knowing that an attack is in progress is highly problematic. • For individual sites, anomalous activity may be associated with the start of a cyberattack, and if a site detects such activity, it may receive early warning of an attack. But characterizing anomalous activity on a computer system or network that reliably indicates an attack is an enor- mously difficult task (legitimate users do all kinds of unusual things), and general solutions to this identification problem have eluded computer scientists for decades. • Attack assessment is even more difficult, because the initial intru- sions may simply be paving the way for hostile payloads that will be delivered later, or the damage done by a cyberattacker may not be visible for a long time after the attack has taken place (e.g., if rarely used but important data has been corrupted). (Clandestine or delayed-discovery attacks have obvious advantages when it is desirable to weaken an adver- sary without its knowledge.) • A “serious” attack—that is, one conducted by a nation-state or a terrorist adversary for seriously hostile purposes—must be somehow distinguished from the background attacks that are constantly ongoing for nearly all systems connected to the Internet. These background attacks include a variety of hacking activities, virus propagation, distributed denial-of-service attacks, and other activities conducted for illicit mon- etary gain, sport, or pure maliciousness that are constantly being con-

136 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES ducted, in addition to the ongoing activities presumably undertaken by various nation-states or other subnational entities for covert intelligence- gathering purposes and/or to “prepare the battlefield” for possible future cyberattacks for offensive purposes. For a dispersed entity (such as the Department of Defense, the U.S. government, or a large corporation), multiple sites may be attacked in a coordinated manner. If attacks were somehow known to be coordinated, such coordination might indicate a serious attack. On the other hand, detecting such coordination against the background noise of ongoing attacks also remains an enormous intellectual challenge, as useful infor- mation from multiple sites must be made available on a timely basis. (And as detection capabilities improve, attackers will take steps to mask such signs of coordinated attacks.) An attack assessment would seek to address many factors, including the scale of the attack (how many entities are under attack), the nature of the targets (which entities are under attack, e.g., the DOD Global Command and Control System, electric power generating facilities, Internet retailers), the success of the attack and the extent and nature of damage caused by the attack, the extent and nature of any foreign involvement derived from technical analysis of the attack and/or any available intelligence informa- tion not specifically derived from the attack itself, and attribution of the source of the attack (discussed at greater length in Section 2.4.2). Information on these factors is likely to be quite scarce when the initial stages of an attack are first noticed. For example, because cyber- weapons can act over many time scales, anonymously, and clandestinely, knowledge about the scope and character of a cyberattack will be hard to obtain quickly. Other non-technical factors may well play into an attack assessment, such as the state of political relations with other nations that are capable of launching such an attack. From an organizational perspective, the response of the United States to a cyberattack by a non-state actor is often characterized as depending strongly on whether the attack—as characterized by factors such as those described above—is one that requires a law enforcement response or a national security response. This characterization is based on the idea that a national security response relaxes many of the constraints that would otherwise be imposed by a law enforcement response.51 But the “law enforcement versus national security” dichotomy is 51 For example, active defense—either by active threat neutralization or by cyber r ­ etaliation—may be more viable under a national security response paradigm, whereas a law enforcement paradigm might call for passive defense to mitigate the immediate threat and other activities to identify and prosecute the perpetrators.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 137 misleading. In practice, the first indications of a cyberattack are likely to be uncertain, and many factors relevant to a decision will be unknown. Once the possibility of a cyberattack is made known to national authori- ties, information must be gathered to determine perpetrator and purpose, and must be gathered using the available legal authorities (described in Section 7.3). Some entity within the federal government integrates the rel- evant information and then it or another higher entity (e.g., the National Security Council) renders a decision about next steps to be taken, and in particular whether a law enforcement or national security response is called for. How might some of the factors described above be taken into account as a greater understanding of the event occurs? Law enforcement equities are likely to predominate in the decision-making calculus if the scale of the attack is small, if the assets targeted are not important military assets or elements of critical infrastructure, or if the attack has not created sub- stantial damage. To the extent that any of these characteristics are not true, pressures may increase to regard the event as one that also includes national security equities. The entity responsible for integrating the available information and recommending next steps to be taken has evolved over time. In the late 1990s, the U.S. government established the National Infrastructure Protec- tion Center (NIPC) as a joint government and private sector partnership that provided assessment, warning, vulnerability, and investigation and response for threats to national critical infrastructure. Consisting of per- sonnel from the law enforcement, defense, and intelligence communities, each with reach-back into their respective agencies for support, along with representatives from the private sector and foreign security agencies, the NIPC was the place where information on the factors described was to be fused and the intelligence, national security, law enforcement, and private sector equities integrated regarding the significance of any given cyberattack. Organizationally, the NIPC was part of the Department of Justice under the Federal Bureau of Investigation. In later years, the analysis and warning functions of the NIPC were dispersed throughout the Depart- ment of Homeland Security (DHS) as the result of that department’s creation, while the principal investigative functions remained at the FBI (with some investigative functions performed by the U.S. Secret Service, an autonomous part of DHS).52 Initially, they were integrated into the Information Analysis and Infrastructure Protection Directorate, primarily the National Infrastructure Coordinating Center (NICC) under the Office 52 See Department of Homeland Security, “History: Who Became Part of the Depart- ment?,” 2007, available at http://www.dhs.gov/xabout/history/editorial_0133.shtm.

138 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES of Operations Coordination and the United States Computer Emergency Readiness Team, the operational arm of the National Cyber Security Divi- sion. The NICC provides operational assessment, monitoring, coordina- tion and response activities, and information sharing with the private sector through information sharing and analysis centers. The United States Computer Emergency Readiness Team (US-CERT), the operational arm of the National Cyber Security Division, coordinates defense against cyberattacks.   Further reorganization at DHS moved the Office of Operations Coor- dination to a freestanding component that runs NICC as part of the National Operations Center. The Office of Infrastructure Protection (OIP) became part of the National Protection and Programs (NPP) Director- ate. A separate Office of Cybersecurity and Communications, also under NPP, includes the National Cyber Security Division, which still man- ages US-CERT operations. Broadly, the NIPC functions that focused on risk reduction, warning, and vulnerability assessment are now part of NPP. Those NIPC functions that focused on operational assessment and coordination are today part of the NICC under the Office of Operations Coordination. As this report is being written, the U.S. government apparatus respon- sible for warning and attack assessment is likely to be reorganized again. The government agencies responsible for threat warning and attack assessment can, in principle, draw on a wide range of information sources, both inside and outside the government. In addition to hearing from private sector entities that come under attack, cognizant government agencies can communicate with security IT vendors, such as Symantec and McAfee, that monitor the Internet for signs of cyberattack activity. Other public interest groups, such as the Open Net Initiative and the Information Warfare Monitor, seek to monitor cyberattacks launched on the Internet.53 2.4.2  Attribution Attribution is the effort to identify the party responsible for a cyber- attack. Technical attribution is the ability to associate an attack with a responsible party through technical means based on information made available by the fact of the cyberattack itself—that is, technical attribution 53 See http://opennet.net/ and http://www.infowar-monitor.net for more infor- mation on these groups. A useful press report on the activities of these groups can be found at Kim Hart, “A New Breed of Hackers Tracks Online Acts of War,” Washington Post, August 27, 2008, available at http://www.washingtonpost.com/wp-dyn/content/ article/2008/08/26/AR2008082603128_pf.html.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 139 is based on the clues available at the scene (or scenes) of the attack. All- source attribution is a process that integrates information from all sources, not just technical sources at the scene of the attack, in order to arrive at a judgment (rather than a definitive and certain proof) concerning the identity of the attacker. Two key issues in technical attribution are precision and accuracy: • Precision. An attribution has associated with it some range of preci- sion for the identity of the attacker. The attack might be associated with a specific nation (Zendia), a specific department within that nation (the ministry of defense), a specific unit (the 409th Information Operations Brigade), a specific set of IP addresses, a specific individual, and so on. • Accuracy. A characteristic related to precision is accuracy, a measure of the quality of attribution, such as the probability that the attribution is correct. Accuracy is a key issue in legal standards for evidence and in the extent to which it is reasonable to develop linkages and inferences based on those attributes. Note that an attacker may seek to reduce the accuracy of attribution if he or she wishes to operate secretly by taking countermeasures to impersonate other parties. The unfortunate reality is that technical attribution of a cyberattack is very difficult to do (it is often said that “electrons don’t wear uniforms”), and can be nearly impossible to do when an unwittingly compromised or duped user is involved. As the existence of botnets illustrates, a cyberat- tacker has many incentives to compromise others into doing his or her dirty work, and untangling the nature of such a compromise is inevitably a time-consuming and laborious (if not futile) process. To illustrate the point, consider a scenario in which computers of the U.S. government are under a computer network attack (e.g., as the result of a botnet attack). The owners/operators of the attacked computers in the United States would likely be able to find the proximate source(s) of any attack. They might discover, for example, that the attack traffic ema- nated from computers located in Zendia. But there may well be no techni- cal way to differentiate among a number of different scenarios consistent with this discovery. These scenarios include the following: • The attack against the United States was launched by agents of the Zendian government with the approval of the Zendian national command authority. • The attack against the United States was launched by low-level agents of the Zendian government without the approval or even the knowledge of the Zendian national command authority. • The attack was launched through the efforts of computer-savvy

140 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES citizens of Zendia who believe that the United States oppresses Zendia in some way. Although the efforts of these citizens are not initiated by the Zendian government, the Zendian government takes no action to stop them. (Such individuals are often known as “patriotic hackers” and are discussed in more detail in Section 7.2.3.3.) • The Zendian computers used to conduct the attack against the United States have been compromised by parties outside Zendia (perhaps even from the United States, as happened in the Solar Sunrise incident in February 199854), and Zendia is merely an innocent bystander on the international stage. • The attack was launched at the behest of the Zendian government, but not carried out by agents of the Zendian government. For example, it may have been carried out by the Zendian section of an international criminal organization. However, the limitations of technical attribution are not dispositive. All-source attribution takes into account whatever information is avail- able from efforts at technical attribution, but also uses information from other sources to arrive at a judgment. Such sources might include: • Intelligence sources. For example, a well-placed informant in the Zendian government might provide information indicating the responsi- bility of that nation in initiating the attack, or routinely monitored mes- sage traffic might indicate a point of responsibility within the Zendian government. • Political sources. The Zendian government might publicly take credit for the attack. (Of course, a claim that “We were responsible for the attack” would itself have to be verified.) • Other technical information. The technical signature of the cyber­ attack might be similar to that of a previous attack, and the United States might have information on the originator of that previous attack. The scale or nature of the attack might be such that only a major nation-state could have mounted it, thus ruling out other parties. Or it might be pos- sible to determine the time zone of the actual attacking machine. 55 • Temporal proximity to other coercive or aggressive actions that can be attributed. For example, Zendia might choose to “bundle” a set of such actions together, such as cyberattack coupled with an embargo on selling 54 More information on the Solar Sunrise incident can be found at http://www.sans. org/resources/idfaq/solar_sunrise.php. 55 For example, it is sometimes possible to learn information about a target computer’s physical environment through the remote monitoring of time stamps. Local time stamps are governed by a computer’s clock, and the rate at which the clock runs is affected by the ambi- ent temperature. Thus, time stamp information provides information on changes of ambient

TECHNICAL AND OPERATIONAL CONSIDERATIONS 141 certain computer chips or strategic raw materials to the United States, a break in diplomatic relations, and refusal of “safe harbor” rights for U.S. naval vessels. Thus, although the process of all-source attribution might well take a long time to arrive at an actionable (if not definitive) judgment, the case for attribution is not as hopeless as it is often portrayed. Attribution of an attack should not be confused with establishing or identifying an access path to the source of the attack. Under a given set of circumstances, the victim may be able to establish both of these pieces of information, one of them, or none of them. For example, it may be impossible to establish an access path to the source of a cyberattack, but at the same time an all-source attribution effort might definitively iden- tify a given nation as being responsible for the attack. Alternatively, an access path to the source of a cyberattack might be established without providing any useful information at all regarding the party responsible (e.g., the launching point for a cyberattack against a corporation might be located inside that corporation and not be further traceable). The differ- ence between attribution and having an access path is significant, because in the absence of an access path, neutralization of a cyberattack is not pos- sible, though retaliation for it might be. The converse is true as well—in the absence of attribution, retaliation or reprisal is not possible, though neutralization of a cyberattack might be. Finally, the problems that anonymity poses for a defensive actor can easily be seen as advantages for an attacker. The discussion above sug- gests that with careful and appropriate planning, a cyberattack can often be conducted with a high degree of plausible deniability. Such a capability may be useful in certain intelligence operations when it is desirable that the role of the government sponsor of a cyberattack is not to be publicly acknowledged (as discussed in Section 4.2). 2.4.3  Intent In the realm of traditional military conflict, it is generally presumed that national governments control the weapons of warfare—frigates, temperature, which may be correlated with time-of-day physical location. Measure­ments of day length and time zone can provide information useful for estimating the physical loca- tion of a computer. Local temperature changes caused by air-conditioning or movements of people can identify whether two machines are in the same location, or even are virtual machines on one server. See Steven J. Murdoch, “Hot or Not: Revealing Hidden Services by Their Clock Skew,” Proceedings of the 13th ACM Conference on Computer and Communi- cations Security, CCS’06, October 30–November 3, 2006, Alexandria, Va., 2006, available at http://www.cl.cam.ac.uk/users/sjm217/.

142 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES fighter jets, tanks, and so on. Thus, if any of these weapons are used, there is a presumption that actions involving them have been sanctioned by the controlling government—and inferences can often be drawn regarding that government’s intent in ordering those actions. But when other weapons are not controlled exclusively by govern- ments, inferring intent from action is much more problematic. This is especially so if communication cannot be established with the control- ling party—as will often be the case with cyberattack. Attribution of a cyberattack (discussed above) helps, but if the party identified as being responsible is not a national government or another party with declared intentions toward the United States, it will be virtually impossible to determine intent with high confidence. Determinations of intent and attribution of the source are often com- plicated—and inappropriately biased—by a lack of information.  Ulti- mately, such determinations are made by human beings, who seek to integrate all available information in order to render a judgment.  (Such integration may be automated, but human beings program the rules for integration.)  When inexperienced human beings with little hard infor- mation are placed into unfamiliar situations in a general environment of tension, they will often make worst-case assessments.  In the words of a former Justice Department official involved with critical infrastruc- ture protection, “I have seen too many situations where government of- ficials claimed a high degree of confidence as to the source, intent, and scope of an attack, and it turned out they were wrong on every aspect of it.  That is, they were often wrong, but never in doubt.” 2.5  Active Defense for Neutralization As A Partially Worked Example To suggest how the elements above might fit together operationally, consider how a specific active defense scenario might unfold. In this scenario, active defense means offensive actions (a cyber counterattack) taken to neutralize an immediate cyberthreat—that is, with an operational goal—rather than retaliation with a strategic goal. The hostile cyberattack serves the offensive purposes of Zendia. The cyber counterattack in ques- tion is for defensive purposes. The scenario begins with a number of important U.S. computer systems coming under cyberattack. For definiteness, assume that these computer systems are SCADA and energy management systems controlling elements of the power grid, and that the attacker is using unauthorized connections between these systems and the Internet-connected business systems of a

TECHNICAL AND OPERATIONAL CONSIDERATIONS 143 power generation facility to explore and manipulate the SCADA and energy management systems. The first step—very difficult in practice—is to recognize the act as an unambiguously hostile one rather than one undertaken by cyber pranksters with too much time on their hands. Further inspection reveals that the unauthorized intruder has planted software agents that would respond to commands in the future by disabling some of the power generation and transmission hardware controlled by these systems, and furthermore that the apparent controllers of these agents are located around the world. However, even the availability of such information cannot determine the motivations of the responsible parties regarding why they are undertaking such a hostile act. A second step is to recognize that these attacks are occurring on many different SCADA and energy management systems around the nation. Such recognition depends on the existence of mechanisms within the U.S. govern- ment for fusing information from different sources into an overall picture indicating that any individual attack fits into a larger adversarial picture, rather than being an isolated event. The third step is to identify the attacker—that is, the party installing the agents. The IP address of the proximate source of this party can be ascer- tained with some degree of confidence, and a corresponding geographic loca- tion may be available—in this case, the geographic location of the proximate source is Zendia. But these facts do not reveal whether the attack was: • Sponsored by Zendia and launched with the approval of the highest levels of the Zendian National Command Authority; • Launched by low-level elements in the Zendian military without high-level authorization or even the knowledge of the Zendian NCA; • Launched by computer-savvy Zendian citizens; • Launched by terrorists from Zendian soil; or • Launched by Ruritania transiting through Zendia, which may be entirely innocent. Suppose further that additional information from non-technical sources is available that sheds additional light on the attacker’s identity. In this case, intelligence sources indicate with a moderate degree of confidence that the attack ultimately emanates from parties in Zendia. The availability of information about the attacker’s identity marks an important decision point about what to do next. One option is to approach the Zendian government and attempt to resolve the problem diplomatically and legally, where “resolution” would call for Zendian government action that results in a cessation of the attack—in this case, refraining from install-

144 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES ing any more agents on U.S. SCADA and energy management systems. (Knowing that the attacks have actually ceased is yet another problem, especially against the background of myriad other hostile or adversarial actions being taken every day against U.S. systems of various sorts.) Such an approach also risks compromising U.S. intelligence sources, and thus U.S. decision makers may be wary of taking this route. Continuing with this scenario, the United States discovers that the hos- tile agent controllers are themselves centrally controlled by an Internet-con- nected system located in Zendia. Cognizant of the uncertainties involved, the United States quietly probes the master controller to understand its vulner- abilities, but decides to refrain from further action at this time. It also works on removing the deployed agents from the SCADA and energy management systems in question, replacing them with harmless look-alike agents that can perform all of the relevant report-back functions to the controller. However, cyber response teams from the United States realize that they are unlikely to find every SCADA and energy management system so infested. A few months later, tensions between the United States and Zendia rise because of a non-lethal incident between the Zendian air force and a U.S. reconnaissance plane. In order to put pressure on the United States, Zendia tries to activate its SCADA/EMS agents. Zendia receives many affirmative reports from the field, some of which are in fact misleading and others valid. In order to stop the remaining agents, the United States launches a denial- of-service attack against the Zendian controller, effectively disconnecting it from the Internet while at the same time issuing a demarche to the Zendian government to cease its hostile actions and to provide information on the SCADA/EMS systems penetrated that is sufficient to effect the removal of all hostile agents. Zendia responds to the U.S. demarche publicly, denouncing the U.S. denial-of-service attack on it as an unprovoked and unwarranted escalation of the situation. This neutralization scenario raises many issues. Neutralization of cyberthreats requires an access path to the particular hardware unit from which the attack emanates (e.g., the attack controller). In addition, an indication of the physical location of that hardware may be necessary. • Knowledge of the controller’s specific hardware unit is important because the attacker may have taken a very circuitous route to reach the target. If the attacker has been clever, neutralization of any intermediate node along the way is unlikely to result in a long-term cessation of the attack, and only disruption of the controller will suffice. If not, there may be a particular intermediate node whose destruction or degradation may be sufficient to stop the attack.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 145 • Physical location is important because of the legal jurisdictional issue—depending on the physical (national) location of the hardware, different laws regarding the putative criminality of its behavior and the legality of damaging it may apply. This point is relevant especially if an attack has a foreign origin; however, knowledge of physical location is not required to neutralize the attack. In practice, none of these conditions are easy to meet. Attackers have strong incentives to conceal their identity, and so are likely to use com- promised computers as intermediate launching points for their attacks. Furthermore, because one compromised computer can be used to com- promise another one, the chain leading to the actual attacker—the only one with malevolent intent—can be quite long, and thus quite difficult (and time-consuming) to unravel. By the time an actual machine identity of the controller has been established, the attacker may no longer have a presence on the originating machine. Yet another complicating factor is that the controller function can be executed on a variety of different systems. Thus, even if a victim is suc- cessful in identifying the controller of the attack, and even if it successfully launches a counterattack that neutralizes that controller (a counterattack that may be electronic, kinetic, or even legal), the controller function may shift automatically to another system—if so, another laborious process may need to be started again. (An analogy could be drawn to the opera- tion of a mobile missile launcher [Transporter-Erector-Launcher, TEL]. A TEL sets down in a specific, usually pre-surveyed, location, launches its missile, and then immediately moves to minimize the effectiveness of a counterattack against it.) On the other hand, the controller of the attack may not shift, especially if the attacker is not well resourced or sophisti- cated. Under such circumstances, a counter-cyberattack may well succeed in shutting down an attack, at least for a while. A long chain of compromised machines is not the only obfuscation technique an attacker may use. An attacker may also plant false evidence implicating other parties on one or more of the intermediate links. Such evidence could lead the forensic investigator to mistakenly identify a par- ticular intermediate node as the true source of an attack, and a neutraliza- tion counterattack launched against that node would target an innocent party. In this case, the fact that the United States has only moderate con- fidence in the fact of Zendian responsibility is problematic. An important aspect of any neutralization counterattack is the time it takes to determine the identity of the attacking party and to establish an access path and its geographic location. Perhaps the most plausible justification for a neutralization counterattack is that a counterattack is

146 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES needed to stop the immediate harm being done by an attack. If it takes so long for the defending party to obtain the necessary information for a counterattack that the attack has ceased by the time the counterattack can be launched, this justification may no longer be plausible.56 Note that the policy requirement to quickly and properly identify the attacking party and the technical reality that attribution is a time-consum- ing task work against each other. Shortening the time for investigation, perhaps by going so far as to automate the identification, assessment, and response process, may well increase the likelihood of errors being made in any response (e.g., responding against the wrong machine, launching a response that has large unintended effects). On the other hand, it is possible that a neutralization cyberattack would be used only after a number of hostile cyberattacks had occurred. Consider the ease of an unknown adversary launching cyberattacks against a particular U.S. defense facility. If forensic investigation is undertaken after each attack, after a while enough information might be obtained to determine the leading indicators of an attack by this adversary. At some point, the United States might well have enough information in hand so that it could respond quickly to the next cyberattack launched by this adversary, and might be willing to take the chance that it was responding erroneously with a neutralization cyberattack. From a policy standpoint, the acceptability of an increase in the likeli- hood of errors almost surely depends on the state of the world at the time. During times of normal political relations with other nations, such an increase may be entirely unacceptable. However, during times of political, diplomatic, or even military tension with other nations, the U.S. leader- ship might well be willing to run the risk of a mistaken response in order to ensure that a response was not crippled by an adversary attack. (In this regard, the situation is almost exactly parallel to the issue of riding out a strategic attack on the United States or employing a strategy of launch- ing a land-based strategic missile on warning or while under attack—the latter being regarded as much more likely during times of tension with a putative adversary.) Under some circumstances, the United States might choose to launch a neutralization cyberattack fully expecting that the adversary would respond with an even larger hostile cyberattack. If it did so, it would be necessary for the United States to prepare for that eventuality. Such prepa- ration might involve taking special measures to strengthen the cybersecu- rity posture of key facilities and/or preparing for kinetic escalation. 56 On the other hand, the cessation of an attack may simply indicate the end of one phase and the start of a lull before the next phase. A clever attacker would launch the next phase in such a way that the defender would have to unravel an entirely new chain.
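The repeated-attack scenario described above amounts to accumulating indicators from earlier forensic investigations so that a later attack by the same adversary can be recognized more quickly. A minimal sketch of that bookkeeping follows; the indicator types, weights, and threshold are hypothetical, and the sketch deliberately stops at recommending escalation to a human decision maker, reflecting the error risks of rapid or automated response discussed above.

```python
# Illustrative sketch: score a new incident against indicators harvested from
# prior attacks attributed to one adversary. Indicator values and weights are
# hypothetical placeholders (203.0.113.0/24 is a documentation address range).
KNOWN_INDICATORS = {
    "src_netblock": {"203.0.113.0/24": 0.4},
    "tool_signature": {"agent-x-beacon": 0.5},
    "target_class": {"defense-facility": 0.2},
}

def score_event(event):
    """event: dict of attributes observed for a new incident, e.g. from triage."""
    score = 0.0
    matched = []
    for indicator_type, observed in event.items():
        weight = KNOWN_INDICATORS.get(indicator_type, {}).get(observed, 0.0)
        if weight:
            score += weight
            matched.append((indicator_type, observed))
    return score, matched

def recommend(event, threshold=0.7):
    """Recommend escalation only; any actual response remains a human decision."""
    score, matched = score_event(event)
    if score >= threshold:
        return "escalate for human review", score, matched
    return "continue routine monitoring", score, matched

# Example: recommend({"src_netblock": "203.0.113.0/24", "tool_signature": "agent-x-beacon"})
# returns ("escalate for human review", 0.9, ...)
```

Matching here is by exact attribute value for simplicity; a real system would normalize indicators (for example, matching addresses against netblocks) and age them out as the adversary's infrastructure changes.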

TECHNICAL AND OPERATIONAL CONSIDERATIONS 147 These concerns do not automatically imply that neutralization coun- terattacks are a bad idea under all circumstances. But they do raise several questions that must be answered before such a response is made. • What defensive measures must be taken, if any, before launching a neutralization counterattack? Should a neutralization counterattack be a last resort, to be used when all other methods for responding to a cyberat- tack have proven (or will prove) ineffective?57 Or should a neutralization counterattack be a measure of first resort, to be triggered automatically without human intervention in the first few seconds of an attack? Or somewhere in between? • A counterattack requires only that an access path to the attacker be available. Under what circumstances must the identity of the attack- ing party be known? If the attacker must be known, what degree of confidence and what evidentiary basis are needed? And how, if at all, should the attacker’s identity affect a decision to launch a counterattack? (For example, how might such a decision be affected by the fact that an attack is emanating from the information network of a U.S. hospital or an important laboratory?) • How likely is it that the attacker will have anticipated a neutraliza- tion counterattack and taken steps to mitigate or negate the effect of the counterattack? What are those steps likely to have been? How likely is it that a neutralization counterattack will indeed curb or halt the incoming attack? • How narrowly should a neutralization counterattack be focused? Should it be limited solely to eliminating or mitigating the threat (and not causing harm outside that effort)? Or is causing additional damage to the attacker a desirable outcome? • At what threshold of actual or expected damage to U.S. systems and networks should a neutralization counterattack be launched? That is, how should the benefit of a counterattack be weighed against the politi- cal risks of launching it? For example, what targets are worth protecting? (U.S. military installations? Installations associated with national critical infrastructure? Defense industrial base firms? Fortune 500 companies?) • How should the threshold of damage be established? Should it be established unilaterally in real time by the original victim (e.g., the corpo- ration or government entity attacked)? Or should it result from an orderly interagency and governmental process that operates well in advance of when policy guidance is needed? 57 For example, one might argue that technical means such as target hardening and adversary deception and legal methods such as appeal to an ISP to disconnect an attacker from the Internet must be exhausted before active defense is considered.

148 Technology, Policy, Law, And Ethics Of U.s. Cyberattack CapabiliTIES BOX 2.4  A Possible Taxonomy of Active Responses There is a broad range of actions possible to respond to a cyberattack. One possible taxonomy of response actions was developed by Sergio Caltagirone.1 This taxonomy identifies eight types of response in increasing order of activity required by the responder, potential impact on the attacker, and potential for col- lateral and unintended consequences. 1. No action—a conscious decision to take no action in response to an identified attack. Not taking any action is active insofar as it involves a thoughtful decision process that considers the benefits and costs of potential options. 2. Internal notification—notifying users, administrators, and management of the system attacked. Some subset of these may be notified depending on the type of attack, but the attack is not reported to anyone outside the organization of the affected system. 3. Internal response—taking specific action to protect the system from the attacker. The response likely depends on the type of attack, but might include blocking a range of IP addresses or specific ports, segmenting or disconnecting parts of the system, and purposely dropping connections. 4. External cooperative response—contacting external groups or agencies with responsibility for classifying, publicizing and analyzing attacks (e.g., CERT, DShield), taking law enforcement action (e.g., FBI, Secret Service), providing protection services (e.g., Symantec, MacAfee), and providing upstream support (e.g., Internet service providers). There is a broad consensus that Actions 1-4 are legitimate actions under almost any set of circumstances. That is, an individual or organization is unambigu- ously allowed to take any of these actions in response to a cyberattack. However, the same is not true for Actions 5-8 described below, which are listed in order of increasing controversy and increasing likelihood of running afoul of today’s legal regime should the target of a cyberattack take any of these actions. Lastly, given the difficulties of knowing if a cyberattack is taking or has taken place; whether a given cyberattack is hostile, criminal, or mis- chievous in intent; the identity of the responsible party; and the extent to which it poses a significant threat, the neutralization option must not be seen as the only way to respond to an attack. Box 2.4 describes a spectrum of possible responses to a cyberattack—note that the neutralization option corresponds to Action 6 or Action 7, and as such is a more aggressive form of response.

TECHNICAL AND OPERATIONAL CONSIDERATIONS 149 5. Non-cooperative intelligence gathering—the use of any tools to gather information about the attack and the attacker. Tools might include honeypots, honeynets, traceroutes, loose source and record routes, pings and fingers. 6. Non-cooperative “cease and desist”—the use of tools to disable harmful services on the attacker’s system without affecting other system services. 7. Counterstrike—response taking two potential forms: direct action (active counterstrike) such as hacking the attacker’s systems (hack-back) and transmitting a worm targeted at the attacker’s system; passive counterstrike that redirects the attack back to the attacker, rather than directly opposing the attack. Examples of passive counterstrike are a footprinting strike-back that sends endless data, bad data, or bad SQL requests, and network reconnaissance strike-back using trace- route packets (ICMP “TTL expired”). 8. Preemptive defense—conducting an attack on a system or network in anticipation of that system or network conducting an attack on your system. Different actions may be taken based on the type of attack and an analysis of the benefits and costs associated with each type of response. Multiple types of responses may be taken for any given attack. Actions 1-4 are generally non-controversial, in the sense that it would not be legally problematic for a private company to take any of these responses. Actions 6-8 are much more aggressive, fall into the general category of active defense (and more), and certainly raise many questions under the statutory pro- hibitions against conducting cyberattack. In addition, system administrators often express concern about the legality of Action 5 in light of the various statutes gov- erning electronic surveillance. 1 S. Caltagirone and D. Frincke, Information Assurance Workshop, 2005, IAW ‘05, Pro- ceedings from the Sixth Annual IEEE SMC, June 15-17, 2005, pp. 258-265. See also David D ­ ittrich and Kenneth Einar Himma, “Active Response to Computer Intrusions,” The Handbook of Information Security, Hossein Bidgoli, editor-in-chief, John Wiley & Sons, Inc., Hoboken, N.J., 2005. 2.6  Technical and Operational Considerations for Cyberexploitation 2.6.1  Technical Similarities in and Differences Between Cyberattack and Cyberexploitation The cyberexploitation mission is different from the cyberattack mis- sion in its objectives (as noted in Chapter 1) and in the legal constructs surrounding it (as discussed in Chapter 7). Nevertheless, much of the technology underlying cyberexploitation is similar to that of cyberattack, and the same is true for some of the operational considerations as well.

As noted in Section 2.2.2, a successful cyberattack requires a vulnerability, access to that vulnerability, and a payload to be executed. A cyberexploitation requires the same three things—and the only technological difference is in the payload to be executed. That is, what distinguishes a cyberexploitation from a cyberattack is the nature of the payload. Whereas the attacker might destroy the papers inside a locked file cabinet once he gains access to it, the exploiter might copy them and take them away with him. In the cyber context, the cyberexploiter will seek to compromise the confidentiality of protected information afforded by a computer system or network.

2.6.2  Possible Objectives of Cyberexploitation

What might cyberexploitations seek to accomplish? Here are some hypothetical examples. The cyberexploiter might seek to:

• Exploit information available on a network. For example, an attacker might monitor passing traffic for keywords such as "nuclear" or "plutonium," and copy and forward to the attacker's intelligence services any messages containing such words for further analysis. A cyberexploitation against a military network might seek to exfiltrate confidential data indicating orders of battle, operational plans, and so on. Alternatively, passwords are often sent in the clear through e-mail, and those passwords can be used to penetrate other systems. This objective is essentially the same as that for all signals intelligence activities—to obtain intelligence information on an adversary's intentions and capabilities.

• Be a passive observer of a network's topology and traffic. As long as the attacker is a passive observer, the targeted adversary will experience little or no direct degradation in service or functionality offered by the network. Networks can be passively monitored to identify active hosts as well as to determine the operating system and/or service versions (through signatures in protocol headers, the way sequence numbers are generated, and so on).58 The attacker can map the network and make inferences about important and less important nodes on it simply by performing traffic analysis. (What is the organizational structure? Who holds positions of authority?) Such information can be used subsequently to disrupt the network's operational functionality. If the attacker is able to read the contents of traffic (which is likely, if the adversary believes the network is secure and thus has not gone to the trouble of encrypting

58 Annie De Montigny-Leboeuf and Frederic Massicotte, "Passive Network Discovery for Real Time Situation Awareness," 2004, available at http://www.snort.org/docs/industry/ADeMontigny_NatoISTToulouse2004.pdf.

traffic), he can gain much more information about matters of significance to the network's operators. As importantly, a map of the network provides useful information for a cyberattacker, who can use this information to perform a more precise targeting of later attacks on hosts on the local network, which are typically behind firewalls and intrusion detection/prevention systems that might trigger alarms.

• Obtain technical information from a company's network in another country in order to benefit a domestic competitor of that company. For example, two former directors of the DGSE (the French intelligence service) have publicly stated that one of the DGSE's top priorities was to collect economic intelligence. During a September 1991 NBC news program, Pierre Marion, former DGSE director, revealed that he had initiated an espionage program against U.S. businesses for the purpose of keeping France internationally competitive. Marion justified these actions on the grounds that the United States and France, although political and military allies, are economic and technological competitors. During an interview in March 1993, then-DGSE director Charles Silberzahn stated that political espionage was no longer a real priority for France but that France was interested in economic intelligence, "a field which is crucial to the world's evolution." Silberzahn advised that the French have had some success in economic intelligence but stated that much work is still needed because of the growing global economy. Silberzahn advised during a subsequent interview that theft of classified information, as well as information about large corporations, was a long-term French government policy.59

The examples above suggest certain technical desiderata for cyberexploitations. For instance, it is highly desirable for a cyberexploitation to have a signature that is difficult for its target to detect, since the cyberexploitation operation may involve many separate actions spread out over a long period of time in which only small things happen with each action. One reason is that if the targeted party does not know that its secret information has been revealed, it is less likely to take countermeasures to negate the compromise. A second reason is that the exploiter would like to use one penetration of an adversary's computer or network to result in multiple exfiltrations of intelligence information over the course of the entire operation. That is, the intelligence collectors need to be able to maintain a clandestine presence on the adversary computer or network despite the fact that information exfiltrations provide the adversary with opportunities to discover that presence.

Also, an individual payload can have multiple functions simultane-

59 See page 33, footnote 1, in National Research Council, Cryptography's Role in Securing the Information Society, National Academy Press, Washington, D.C., 1996.

2.6.3  Approaches for Cyberexploitation

As is true for cyberattack, cyberexploitation can be accomplished through both remote-access and close-access methodologies.

A hypothetical example of cyberexploitation based on remote access might involve “pharming” against an unprotected DNS server, such as the one resident in a wireless router.61 Because wireless routers at home tend to be less well protected than institutional routers, they are easier to compromise. Successful pharming would mean that web traffic originating at the home of the targeted individual (who might be a senior official in an adversary’s political leadership) could be redirected to websites controlled by the exploiter. With access to the target’s home computer thus provided, vulnerabilities in that computer could be used to insert a payload that would exfiltrate the contents of the individual’s hard disk, possibly providing the exploiter with information useful for blackmailing the target.

As a historical precedent, Symantec in January 2008 reported an incident directed against a Mexican bank in which the DNS settings on a customer’s home router were compromised.62 An e-mail was sent to the target, ostensibly from a legitimate card company. However, the e-mail contained a request to the home router to tamper with its DNS settings. Thus, traffic intended for the bank was redirected to the criminal’s website mimicking the bank site.

61 “Pharming” is the term given to an attack that seeks to redirect the traffic intended for a particular website to another, bogus website.

62 Ellen Messmer, “First Case of ‘Drive-by Pharming’ Identified in the Wild,” Network World, January 22, 2008, available at http://www.networkworld.com/news/2008/012208-drive-by-pharming.html.
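The mechanics of pharming come down to a simple question: which server answers a host’s DNS queries? The fragment below is a minimal sketch, assuming the third-party dnspython library; it resolves the same name through two different resolvers and compares the answers. This is the essence of why rewriting a router’s DNS settings silently redirects its users, and also of how a defender might notice such tampering. The hostname is a documentation-reserved stand-in, and the two resolver addresses are public resolvers used here only as placeholders.

```python
# Minimal sketch (assumptions: dnspython installed, hostname and resolver
# addresses are placeholders). The resolver a host is configured to trust
# determines which IP addresses it is handed; pharming works by silently
# changing that resolver to one the attacker controls.
import dns.resolver  # third-party: pip install dnspython

def answers(hostname: str, nameserver: str) -> set:
    res = dns.resolver.Resolver(configure=False)  # ignore system settings
    res.nameservers = [nameserver]
    return {rr.to_text() for rr in res.resolve(hostname, "A", lifetime=5)}

HOSTNAME = "www.example.com"          # stand-in for the site of interest
trusted = answers(HOSTNAME, "9.9.9.9")     # a resolver the user intends to use
configured = answers(HOSTNAME, "1.1.1.1")  # whatever the router currently uses

if configured != trusted:
    print("DNS answers differ -- possible pharming:", configured, "vs", trusted)
else:
    print("Answers agree:", trusted)
```

In the incident described above, the quantity that had been silently changed was precisely the resolver address the customer’s router handed out to the machines behind it.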

A hypothetical example of cyberexploitation based on close access might involve intercepting desktop computers in their original shipping cartons while they are awaiting delivery to the victim, and substituting for the original video card a modified one that performs all of the original functions and also monitors the data being displayed for subsequent transmission to the exploiter. There is historical precedent for such approaches. One episode is the 1984 U.S. discovery of Soviet listening devices in the Moscow embassy’s typewriters—these devices captured all keystrokes and transmitted them to a nearby listening post.63 A second reported episode involves cameras installed inside Xerox copiers in Soviet embassies in the 1960s.64 A third episode, still not fully understood, is the 2004-2005 phone-tapping affair in Greece.65

63 Jay Peterzell, “The Moscow Bug Hunt,” Time, July 10, 1989, available at http://www.time.com/time/magazine/article/0,9171,958127-4,00.html.

64 Ron Laytner, “Xerox Helped Win The Cold War,” Edit International, 2006, available at http://www.editinternational.com/read.php?id=47ddf19823b89.

65 In this incident, a number of mobile phones belonging mostly to members of the Greek government and top-ranking civil servants were found to have been tapped for an extended period of time. These individuals were subscribers to Vodafone Greece, the country’s largest cellular service provider. The taps were implemented through a feature built into the company’s switching infrastructure originally designed to allow law enforcement agencies to tap telephone calls carried on that infrastructure. However, those responsible for the taps assumed control of this feature to serve their own purposes and were also able to conceal their activities for a long time. The sophistication of the programming required to undertake this compromise is considerable, and has led to speculation that the affair was the result of an inside job. See Vassilis Prevelakis and Diomidis Spinellis, “The Athens Affair,” IEEE Spectrum, July 2007, available at http://www.spectrum.ieee.org/print/5280.

2.6.4  Some Operational Considerations for Cyberexploitation

2.6.4.1  The Fundamental Similarity Between Cyberattack and Cyberexploitation

Because the cyber offensive actions needed to carry out a cyberexploitation are so similar to those needed for cyberattack, cyberexploitations and cyberattacks may be difficult to distinguish in an operational context. (The problem of distinguishing between them is compounded by the fact that an agent for exploitation can also contain functionality to be used at another time for attack purposes.) This fundamental ambiguity—absent with kinetic, nuclear, biological, and chemical weapons—has several consequences:

• The targeted party may not be able to distinguish between a cyberexploitation and a cyberattack, especially on short time scales, even if such differences are prominent in the minds of the party undertaking cyber offensive actions.

• Because the legal authorities to conduct cyberexploitations and cyberattacks are quite different, clarity in the minds of the operators about their roles in any given instance is essential.

• From a training and personnel standpoint, developing expertise at cyberattack also develops most of the required skill set for conducting cyberexploitation, and vice versa.66

66 For example, Air Force Doctrine Document 2-5 (issued by the Secretary of the Air Force, January 11, 2005) explicitly notes that “military forces under a combatant commander derive authority to conduct NetA [network attack] from the laws contained in Title 10 of the U.S. Code (U.S.C.). However, the skills and target knowledge for effective NetA are best developed and honed during peacetime intelligence or network warfare support (NS) operations. Intelligence forces in the national intelligence community derive authority to conduct network exploitation and many NS [national security] operations from laws contained in U.S.C. Title 50. For this reason, ‘dual-purpose’ military forces are funded and controlled by organizations that derive authority under laws contained in both Title 10 and Title 50. The greatest benefit of these ‘dual-purpose’ forces is their authority to operate under laws contained in Title 50, and so produce actionable intelligence products while exercising the skills needed for NetA. These forces are the preferred means by which the Air Force can organize, train, and equip mission-ready NetA forces.” See http://www.herbb.hanscom.af.mil/tbbs/R1528/AF_Doctrine_Doc_2_5_Jan_11__2005.pdf.

2.6.4.2  Target Identification and Intelligence Preparation

Although some intelligence operations may be characterized by a “vacuum cleaner” approach that seeks to obtain all available traffic for later analysis, a cyberexploiter may be very concerned about which computers or networks are targeted—an issue of precision. Very precise cyberexploitations would be characterized by small-scale operations against a specific computer or user whose individual compromise would have enormous value (“going after the crown jewels”)—the vice president’s laptop, for example.

To the extent that specific systems must be targeted, substantial intelligence efforts may be required to identify both access paths and vulnerabilities. For example, even if the vice president’s laptop is known to be a Macintosh running OS-X, there may well be special security software running on her laptop; finding out even what software might be running, to say nothing of how to circumvent it, is likely to be very difficult in the absence of close access to it. The same considerations are true of Internet-connected computer systems that provide critical functionality to important companies and organizations—they may well be better protected than is the average system on the Internet. Nevertheless, as press reports in recent years make clear, such measures do not guarantee that their systems are immune to the hostile actions of outsiders.67

As for gathering the intelligence needed to penetrate an adversary computer or network for cyberexploitation, this process is essentially identical to that for cyberattack. The reason is that cyberexploitation and cyberattack make use of the same kinds of access paths to their targets, and take advantage of the same vulnerabilities to deliver their payloads. In the event that an adversary detects these intelligence-gathering attempts, there is no way at all to determine their ultimate intent.

67 For example, the Slammer worm attack reportedly resulted in a severe degradation of the Bank of America’s ATM network in January 2003. See Aaron Davis, “Computer Worm Snarls Web: Electronic Attack Also Affects Phone Service, BOFA’s ATM Network,” San Jose Mercury News, January 26, 2003, available at http://www.bayarea.com/mld/mercurynews/5034748.htm+atm+slammer+virus&hl=en.

2.6.4.3  Rules of Engagement and Command and Control

Rules of engagement for cyberexploitation specify what adversary systems or networks may be probed or penetrated to obtain information. A particularly interesting question arises when a possible target of opportunity becomes known in the course of an ongoing cyberexploitation. For example, in the course of exploring one adversary network (Network A), the exploiter may come across a gateway to another, previously unknown network (Network B). Depending on the nature of Network B, the rules of engagement specified for Network A may be entirely inadequate (as might be the case if Network A were a military command and control network and Network B were a network of the adversary’s national command authority). Rules of engagement for cyberexploitation must thus provide guidance in such situations.

In at least one way, command and control for cyberexploitation is more complex than for cyberattack because of the mandatory requirement of report-back—a cyberexploitation that does not return information to its controller is useless. By contrast, it may be desirable for a cyberattack agent or weapon to report to its controller on the outcome of any given attack event, but its primary mission can still be accomplished even if it is unable to do so. Report-back also introduces another opportunity for the adversary to discover the presence of an exploiting payload, and thus the exploiter must be very careful in how report-back is arranged.

2.6.4.4  Effectiveness Assessment

The cyberexploitation analog to damage assessment for cyberattack might be termed effectiveness assessment. If a cyberexploitation does not report back to its controller, it has failed. But even if it does report back, it may not have succeeded. For cyberexploitation, the danger is that it has been discovered and that somehow the adversary has provided false or misleading information that is then reported back. Alternatively, the adversary may have compromised the report-back channel itself and inserted its own message that is mistaken for an authentic report-back message. (In a worst-case scenario, the adversary may use the report-back channel as a vehicle for conducting its own cyberattack or cyberexploitation against the controller.)

These scenarios for misdirection are not unique to cyberexploitation, of course—they are possible in ordinary espionage attempts as well. But because it is likely to be difficult for an automated agent to distinguish between being present on a “real” target versus being present on a “decoy” target, concerns about misdirection in a cyberexploitation context are all too real.
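One concrete way to see the channel-integrity concern raised above: if report-back messages carry no authentication, anything arriving on the channel may be accepted as genuine. The sketch below is a minimal, hypothetical example using Python’s standard hmac module, with invented message contents and a placeholder key assumed to have been provisioned with the payload. It shows how a controller might reject a forged report-back message, while doing nothing about the first danger described above, namely a discovered payload faithfully reporting whatever false information the adversary chooses to feed it.

```python
# Minimal sketch (illustrative only): authenticating a report-back message so
# that a forged message injected into the channel is rejected. A shared key is
# assumed to have been provisioned with the payload; all contents are invented.
import hmac, hashlib, json

SHARED_KEY = b"provisioned-out-of-band"   # placeholder key for illustration

def tag(message: bytes) -> str:
    # Message authentication code computed by the payload before sending.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # Constant-time comparison on the controller's side.
    return hmac.compare_digest(tag(message), received_tag)

report = json.dumps({"seq": 17, "status": "ok"}).encode()
authentic = (report, tag(report))
forged = (b'{"seq": 18, "status": "ok"}', "0" * 64)  # injected by an adversary

for msg, t in (authentic, forged):
    print(msg, "->", "accepted" if verify(msg, t) else "rejected")
```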

2.6.4.5  Tradeoffs Between Cyberattack and Cyberexploitation

In contemplating what to do about an adversary computer or network, decision makers have essentially two options—render it unavailable for serving adversary purposes or exploit it to gather useful information. In many cases, these two options are mutually exclusive—destroying it makes it impossible to exploit it. In some cases, destroying it may also reveal to the adversary some vulnerability or access path previously unknown to him, and thus compromise friendly sources and methods.

These tradeoffs are no less present in cyberattack and cyberexploitation. But in some ways, the tradeoffs may be easier to manage. For example, because a given instrument for cyberexploitation can be designed with cyberattack capabilities, the transition between exploitation and attack may be operationally simpler. Also, a cyberattack may be designed to corrupt or degrade a system slowly—and exploitation is possible as long as the adversary does not notice the corruption.

2.7  Historical Precedents and Lessons

To provide a sense of what might be possible through cyberattack and cyberexploitation, it is useful to consider some of the ways in which criminals have used them. A number of such cases are described in Appendix C, and some of the lessons derived from considering these cases are provided below.

• Attacks can have multiple phases that last over a relatively long period of time (over a year, in many cases), as illustrated in several of the cases in Appendix C. This is especially true of DDOS attacks, where attackers must first take control of thousands and thousands of computers by installing their malicious software on them, causing them to join into mass command and control (e.g., join a botnet in IRC channels). The same bots that are used for DDOS are also used for recruiting new bots through direct attack, sending copies of the malware to addressees in the victimized computer’s address book. The less visible or “noisy” the activity, the longer the multiphase attack can last before being detected and mitigated.

• Attacks can also have multiple foci. In the Invita case (Appendix C), there was a primary focus on trying to locate credit card data to perpetrate fraud, but the attackers also used extortion to obtain financial gain. In some of the botnet cases, the botnets would be used for extortion or click-fraud. The Stakkato case was multitarget, but this was primarily a by-product of following login trust relationships between systems and sites.

• The same tactics used to compromise one host can be extended to compromise 1,000 hosts, given enough resources to repeat the same steps over and over, assuming the attacked systems are part of the same system monoculture all running the same targeted software (such as the same operating system). Automating these steps makes the job even easier, which can readily be done. (Anything that a user can do by typing at a keyboard can be turned into a scripted action; this is how the Invita attackers managed the creation and use of e-mail and online bidding accounts. A minimal sketch of such scripting appears below, after this list.)

A corollary is the notion that an indirect attack can be as successful as a direct attack, given the resources necessary to work through the entire set of login relationships between systems. For example, one can attempt to get access to another person’s account by attacking that target’s laptop or desktop system. This may fail, because the target may secure its personal computers very well. But the target may depend on someone else for system administration of its mail spool and home directory on a shared server. The attacker can thus go after a colleague’s, a fellow employee’s, or the service provider’s computer and compromise it, and then use that access to go after an administrator’s password on the file server holding the target’s account.

The best case (from an attacker’s standpoint) is when the same vulnerability exists at all levels within large interconnected systems, where “redundant” resources can be compromised, resulting in cascading effects.68 This situation could allow an adversary to very quickly commandeer a large and diverse population of systems, as has been witnessed in various worm outbreaks over the past few years.

• The theft of credentials, either for login authentication or for executing financial transactions, is a popular and successful avenue of attack. All that is necessary is either to direct a user to pass his or her keystrokes through a program under the attacker’s control (e.g., as in “phishing” attacks), or to get administrative control of either clients or servers and install software that logs keystrokes.

• Highly targeted attacks against specific companies are possible, as was seen in the Israeli industrial espionage case, as well as in a variant of the BugBear trojan in 2003 that specifically targeted the domains of more than 1,000 specific banks in several countries.69 Discovering and taking advantage of implicit business trust relationships between sites is also possible, as was seen in the Stakkato case. An attacker need only start with the most basic information that can be obtained about a company through open sources (e.g., press releases, organizational descriptions, phone directories, and other data made public through websites and news stories). She then uses this information to perform social engineering attacks—pretexts designed to trick users into giving out their passwords so that she can gain access to computers inside an organization’s network. Once in control of internal hosts, she effectively has insider access and can leverage that access to do more sensitive intelligence gathering on the target. She can learn business relationships, details about active projects and schedules, and anything necessary to fool anyone in the company into opening e-mail attachments or performing other acts that result in compromise of computer systems. (This is basic intelligence collection and analysis.) Control of internal hosts can also be used to direct attacks—behind the firewall and intrusion detection systems or intrusion prevention systems—against other internal hosts.

68 See, for example, Daniel E. Geer, “Measuring Security,” 2006, pp. 170-178, available at http://geer.tinho.net/measuringsecurity.tutorialv2.pdf.

69 F-Secure, “F-Secure Virus Descriptions: Bugbear.B,” 2003, available at http://www.f-secure.com/v-descs/bugbear_b.shtml.
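To make the automation point concrete, the sketch below shows the kind of scripted repetition described in the third lesson above: a keyboard-driven workflow (filling in a registration form) reduced to a loop. It is a generic illustration using Python’s standard urllib; the URL, form fields, and account names are all invented, and the sketch is not a reconstruction of the Invita attackers’ actual tooling.

```python
# Generic illustration: any repetitive keyboard-driven task can be reduced to a
# script. The endpoint and form fields are hypothetical (example.com is reserved
# for documentation); the point is the loop, not the particular service.
import urllib.parse
import urllib.request
from urllib.error import HTTPError

SIGNUP_URL = "https://www.example.com/signup"   # placeholder endpoint

def register(username: str, password: str) -> int:
    """Submit one registration form: the scripted equivalent of typing it in."""
    form = urllib.parse.urlencode({"user": username, "pass": password}).encode()
    req = urllib.request.Request(SIGNUP_URL, data=form, method="POST")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except HTTPError as err:   # a placeholder endpoint will likely refuse the POST
        return err.code

# What a person would do once by hand, a script can repeat as often as needed.
for i in range(5):
    user = f"user{i:03d}"
    status = register(user, f"placeholder-{i}")
    print(f"{user}: HTTP {status}")
```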
