9
Speculations on the Dynamics of Cyberconflict

9.1 DETERRENCE AND CYBERCONFLICT

To what extent is deterrence of cyberconflict possible? How might a nation’s cyberweapons be useful in deterring an adversary’s cyberattack?

In the language of defense policy, deterrence is an often-used and highly elastic concept, and it is hard to find an authoritative statement of its precise meaning. For purposes of this document, the definition provided by the U.S. Strategic Command is a reasonable starting point:1

Deterrence [seeks to] convince adversaries not to take actions that threaten U.S. vital interests by means of decisive influence over their decision-making. Decisive influence is achieved by credibly threatening to deny benefits and/or impose costs, while encouraging restraint by convincing the actor that restraint will result in an acceptable outcome.

The threat “to impose costs” is the foundation of classical deterrence, more specifically deterrence by threat of retaliation or punishment. This concept was the underpinning of U.S. nuclear policy toward the Soviet Union during the Cold War, and continues to be central to the reality of dealing with other nuclear states today. At the same time, an opponent that can be deterred by the threat of imposing costs is, almost by definition, a rational opponent—that is, one who can calculate that the costs of a certain action outweigh the gains possible from taking that action and thus does not take that action. But it is well known and widely understood that some actors are not rational in this sense of the term. Such “non-rational” actors may indeed be able to make rational cost-benefit calculations, and still take the “non-rational” course of action because of political or religious ideology, a belief in luck, or even insanity.

1 Deterrence Operations: Joint Operating Concept, Version 2.0, December 2006, available at http://www.dtic.mil/futurejointwarfare/concepts/do_joc_v20.doc.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

The threat “to deny benefits” is the rationale for the deployment of defensive capabilities—capabilities that can interfere with the success of an attack. Antiballistic missile defenses, for example, are intended to prevent hostile ballistic missiles from striking friendly targets. Chemical protective suits are intended to reduce the effectiveness of chemical weapons against friendly forces. Offensive counter-air operations are intended to destroy hostile aircraft before those aircraft take off to conduct attacks against friendly targets and territory.

A refinement on the concept of deterrence as described above is the notion of tailored deterrence—deterrence “tailored” to specific adversaries in specific strategic contexts. For example, the U.S. Strategic Command notes that:

Exercising decisive influence over the decision calculations of adversary decision-makers requires an understanding of their unique and distinct identities, values, perceptions, and decision-making processes, and of how these factors are likely to manifest themselves in specific strategic contexts of importance to the US and its allies. Specific state and non-state adversaries thus require deterrence strategies and operations tailored to address their unique decision-making attributes and characteristics under a variety of strategically relevant circumstances.
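The rational-actor calculus described earlier (costs of an action weighed against its gains, discounted by the chance of being identified and punished) can be caricatured in a few lines. Everything below is an illustrative assumption of this sketch, not material from the chapter: the function name, the parameters, and the numbers are invented.

```python
# Toy expected-value model of the "rational opponent" calculus: an actor is
# deterred when the expected cost of retaliation, discounted by the chance
# of being attributed, exceeds the expected gain from attacking.

def is_deterred(gain: float, retaliation_cost: float,
                p_attribution: float, p_success: float = 1.0) -> bool:
    """Return True if a cost-benefit calculator would refrain from attacking."""
    expected_gain = p_success * gain
    expected_cost = p_attribution * retaliation_cost
    return expected_cost > expected_gain

# A heavy threatened cost deters only if attribution is likely enough:
print(is_deterred(gain=10, retaliation_cost=100, p_attribution=0.5))   # True
print(is_deterred(gain=10, retaliation_cost=100, p_attribution=0.05))  # False
```

Note how the sketch makes the chapter's later point concrete: even a severe threatened cost loses its deterrent effect when the probability of attribution is low.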
Such tailored deterrence strategies and operations should be developed, planned, and implemented with reference to specific deterrence objectives that identify who we seek to deter from taking what action(s), under what conditions (i.e., Deter adversary X from taking action Y, under Z circumstances).

Box 9.1 describes the key questions of tailored deterrence.

It remains an open question as to whether the concepts of deterrence are relevant when applied to the domain of cyberconflict per se (that is, cyberconflict without reference to conflict in physical domains). For example, a credible threat to impose costs requires knowledge of the party on which the costs should be imposed—and as discussed in Chapter 2, attribution of a cyberattack is a very difficult and time-consuming—and perhaps insoluble—problem. Moreover, even if the adversary is known, and known to be a specific nation-state, the costs to be imposed must be judged by the adversary as greater than the gain that might result from his aggressive actions. Thus, the United States must be able to identify cyber targets in or of the adversary nation whose loss would be costly to the adversary, and it must be able to attack them with high confidence of success.

BOX 9.1 Tailored Deterrence

Tailoring an approach to deterrence requires answering four questions as described below. Specific answers to all four questions would represent a specific tailoring.

1. Who is being deterred? By definition, deterrence is intended to influence an adversary’s decision-making process in such a way that the adversary chooses to refrain from taking an action that is undesirable to the United States. Further, the mechanisms through which deterrence can operate depend strongly on the party the United States is trying to influence. Possible answers for the question “Who is being deterred?” include:
• The national leadership of an adversary nation
• Leaders of subnational groups
• Private citizens of the adversary nation

2. What is the undesirable action to be deterred? Depending on the undesirable action to be deterred, different threats and different targets (below) might be required. Possible actions to be deterred include:
• Nuclear attack
• Attack with conventional forces
• Attack with biological or chemical weapons
• Cyberattack
• Adversary interventions in other locales

3. What threat is the basis for the deterrent? By definition, deterrence involves a threat of some kind. A common approach to determining the deterrent threat is the threat of “in-kind” action—deterring X action by an adversary calls for threatening to do X to the adversary. But in-kind action is not inherently necessary for deterrence, and much of the U.S. approach to deterrence has explicitly called for threats that are not symmetric. For example, the United States has long reserved the right to use nuclear weapons against an overwhelming conventional attack or against attacks using biological or chemical weapons. Some of the possible threats that might be used to deter an adversary include:
• Nuclear attack
• Conventional attack
• Attack with biological or chemical weapons
• Cyberattack
• Economic or diplomatic pressure

4. What is the target of the U.S. threat? A threat must be directed at a target or targets whose loss would be important enough to the adversary decision maker to make him refrain from taking the undesirable action. Some possible targets might include:
• Nuclear forces
• Conventional forces
• Biological or chemical weapon forces or stockpiles
• Leadership
• Key industries
• Economic infrastructure
• Population

In a nation that is not highly dependent on information technology, such assets would be hard to find. Even if the nation did have valuable information technology assets, specific individual targets (perhaps numbering in the dozens or hundreds—a wild guess!) most valuable to the adversary are likely to be very well protected against cyberattack. The civilian IT infrastructure at large may be less well protected, but large-scale attacks on such infrastructure raise ethical and moral questions about targeting civilians. The military IT infrastructure could be targeted as well, but the degree to which it is well protected may be unknown to the attacker (see discussion in Chapter 2 regarding intelligence requirements for successful focused cyberattacks).

In addition, an attacker that launches a cyberattack should also be expected to take action to change its own defensive posture just prior to doing so. As discussed in Chapter 2, much can be done to invalidate an adversary’s intelligence preparations, which are necessary for discriminating counterattacks. And since the attacker knows when he will launch the attack, he can create a window during which his defensive posture will be stronger. The window would last only as long as it would take for new intelligence efforts to collect the necessary information, but it would likely be long enough to forestall immediate retaliation.

A threat to deny benefits to a cyberattacker also lacks credibility in certain important ways. In principle, defensive technologies to harden targets against cyberattacks can be deployed, raising the difficulty of attacking them. But decades of experience suggest that deploying these technologies and making effective use of them on a society-wide basis to improve the overall cybersecurity posture of a nation is difficult indeed. And there is virtually no prospect of being able to reduce a cyberattacker’s capabilities through offensive action, because of the ease with which cyberattack weapons can be acquired. Thus, counterforce capabilities—which in the nuclear domain have been justified in large part as necessary to reduce the threat posed by an adversary’s nuclear weapons—do not exist in any meaningful way in contemplating cyberconflict.2

How do the considerations above change if, as in the real world, the states involved also have kinetic capabilities, which may include nuclear weapons, and physical vulnerabilities? That is, each side could, in principle, use kinetic weapons to attack physical targets, and these targets might be military or dual purpose in nature as long as they are legitimate targets under LOAC. Because a transition from cyber-only conflict to kinetic conflict would likely constitute an escalation (and would in any case make the conflict more overt), this point is discussed in more detail below.

2 This statement is NOT intended to indicate acceptance or rejection of the counterforce argument in the nuclear domain—it is only to say that regardless of whether the counterforce argument is valid in the nuclear domain, it has little validity in the cyber domain.

9.2 ESCALATORY DYNAMICS OF CYBERCONFLICT BETWEEN NATION-STATES

The escalatory dynamics of conflict model how a conflict, once started, might evolve. Of interest are issues such as what activities or events might set a cyberconflict into motion, what the responses to those activities or events might be, how each side might observe and understand those responses, whether responses would necessarily be “in kind,” how different kinds of state might respond differently, and so on. What follows below are some speculations on some of the factors that might influence the evolution of a cyberconflict.

The actors involved are presumed to be nation-states with significant kinetic and cyber capabilities at their disposal, and the situation in question is one of open tension and high rhetoric between two states that have traditionally been rivals. Important questions to be addressed (summarized in Box 9.2) are discussed in the remainder of this section, but the discussion is intended to raise issues rather than to answer questions.

9.2.1 Crisis Stability

Where kinetic weapons are concerned, crisis stability refers to that condition in which neither side has incentives to attack first. Crisis stability is especially important for nuclear weapons, where the existence of an invulnerable submarine-based nuclear missile force means that an adversary could not escape retaliation no matter how devastating or successful a first strike it could launch against the United States. Where cyberweapons are concerned, there is no conceivable way for a nation to eliminate or even significantly degrade the cyberattack capability of another nation.3 But the question remains whether a second-strike cyberattack capability is the enabling condition for crisis stability in cyberspace.

A related question is that of incentives for preemption. Suppose that preemptive attacks by Ruritania on Zendia are undertaken in order to prevent (or at least to blunt) an impending attack by Zendia on Ruritania.

3 Even in the case of a nuclear electromagnetic pulse attack directed against the electronic equipment in another nation (Zendia), there is no reason to assume that all of Zendia’s cyberattack capabilities are necessarily resident within Zendia’s boundaries. Because cyberattacks can originate from anywhere, some of Zendia’s cyberattack capabilities might have been deployed in other nations—indeed, some Zendian attack agents might already have been clandestinely deployed in U.S. systems.
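The “Deter adversary X from taking action Y, under Z circumstances” template of Box 9.1 lends itself to a simple record type. This is purely an illustrative sketch; the field names and the sample objective are this example’s own assumptions, not anything the chapter specifies.

```python
# Encoding Box 9.1's four tailored-deterrence questions as a record type.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeterrenceObjective:
    deterred_party: str   # 1. Who is being deterred?
    deterred_action: str  # 2. What undesirable action is to be deterred?
    threat: str           # 3. What threat is the basis for the deterrent?
    target: str           # 4. What is the target of the threat?
    circumstances: str    # "under Z circumstances"

    def summary(self) -> str:
        return (f"Deter {self.deterred_party} from {self.deterred_action}, "
                f"under {self.circumstances}")

obj = DeterrenceObjective(
    deterred_party="adversary X's national leadership",
    deterred_action="large-scale cyberattack on critical infrastructure",
    threat="economic and diplomatic pressure",
    target="key industries",
    circumstances="a regional crisis",
)
print(obj.summary())
```

The value of writing the template down this way is only that it forces every field to be filled in: a deterrence posture that cannot name all four answers is, in the chapter’s terms, not yet tailored.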

BOX 9.2 Questions About the Escalatory Dynamics of Cyberconflict Between Nation-States

Crisis Stability
• What is the analog of crisis stability in cyberconflict?
• What are the incentives for preemptive cyberattack?

Escalation Control and Management
• How can intentions be signaled to an adversary in conflict?
• How can cyberconflict between nations be limited to conflict in cyberspace?
• How should cyberattack be scoped and targeted so that it does not lead an adversary to escalate a conflict into kinetic conflict?
• How can a modestly scoped cyberattack conducted by a government be differentiated from the background cyberattacks that are going on all of the time?
• How can the scale and scope of a commensurate response be ascertained?

Complications Introduced by Patriotic Hackers
• How can “free-lance” activities on the part of patriotic hackers be handled?

Incentives for Self-restraint in Escalation
• What are the incentives for self-restraint in escalating cyberconflict?

Termination of Cyberconflict
• What does it mean to terminate a cyberconflict?

If Zendia is planning a cyberattack on Ruritania, a preemptive cyberattack on Zendia cannot do much to destroy Zendia’s attack capability; at best, Ruritania’s preemptive attack on Zendia might tie up Zendia’s personnel skilled in cyber operations. On the other hand, it is hard to imagine circumstances in which Ruritania would realize that Zendia was planning an attack, as preparations for launching a cyberattack are likely to be invisible for the most part.

A second relevant scenario is one in which Zendia is planning a kinetic attack on Ruritania. Intelligence information, such as photographs of troop movements, might well indicate that preparations for such an attack were being made. And under these circumstances, Ruritania might well choose to launch a preemptive cyberattack against Zendia, with the intent of delaying and disrupting Zendia’s preparations for its own (that is, Zendia’s) kinetic attack.

9.2.2 Escalation Control and Management

In a time of tension or crisis, national leaders are often understandably concerned about inadvertent escalation. For example, Nation A does X, expecting Nation B to do Y in response. But in fact, Nation B unexpectedly does Z, where Z is a much more escalatory action than Y. Or Nation A may do X, expecting it to be seen as a minor action intended only to show mild displeasure and thinking that Nation B will do Y in response, where Y is also a relatively mild action. But due to a variety of circumstances, Nation B sees X as a major escalatory action and responds accordingly with Z, an action that is much more significant than Y. Nation A perceives Z as being way out of proportion, and in turn escalates accordingly.

9.2.2.1 Signaling Intentions Through Cyberconflict

Nothing in the alphabet of options above is specific to cyberconflict—such issues have been an important part of crisis management for a long time. But managing such issues may well be more difficult for cyberconflict than for other kinds of conflict. One reason is the constant background of cyberattack activity. Reports arrive hourly and daily of cyberattacks of one kind or another on U.S. computer systems and networks, and the vast majority of these attacks do not have the significance of a serious cyberattack launched by a party determined to do harm to the United States. Indeed, the intent underlying a given cyberattack may not have a military or a strategic character at all. Organized crime may launch a cyberattack for profit-making purposes. A teenage hacking club may launch a cyberattack out of curiosity or for vandalism purposes.

A dearth of historical experience with nations or terrorists using cyberattack against the United States further complicates efforts at understanding what an adversary might hope to gain by launching a cyberattack. And other nations are in a similar position, lacking the experience and facing the same background of cyberattacks. In the absence of contact with cyberattackers (and sometimes even in the presence of such contact), determining intent is likely to be difficult, and may rest heavily on inferences made on the basis of whatever attack attribution is possible. Thus, attempts to send signals to an adversary through limited and constrained military actions—problematic even in kinetic warfare—are likely to be even more problematic when cyberattacks are involved.

9.2.2.2 Preventing Cyberconflict from Transitioning to Physical Space

If national command authorities decide to retaliate in response to a cyberattack, an important question is whether retaliation must be based on a “tit-for-tat” response. Assuming the perpetrator of a cyberattack is known to be a hostile nation, there is no reason in principle that the retaliation to a hostile cyberattack could not be a kinetic attack against the interests of that hostile nation—that is, allowing a kinetic response to a cyberattack expands the range of options available to the victim. An extreme case is that in the event of a cyberattack of sufficient scale and duration to threaten a nation’s ability to function as a modern society, the attacked nation might choose to respond with kinetic force against the nation causing such problems. On the other hand, the attacked nation may have an interest in refraining from a kinetic response—for example, it may believe that a kinetic response would be too provocative and might result in an undesired escalation of the conflict. Decision makers may also see cyberattacks as instruments to be used in the early stages of a conflict (cf. Section 3.2).

National decision makers considering a cyberattack (whether in response or as a first use) appear to have incentives to refrain from conducting cyberattacks that might induce a strong kinetic reaction unless kinetic conflict had already broken out. The obvious approach would be to conduct cyberattacks that are in some sense smaller, modest in result, targeted selectively against less provocative targets, and perhaps more reversible. (The similarity of such an approach to escalation control in other kinds of conflict is not accidental, and it has all of the corresponding complexities and uncertainties.)

There is no reason to suppose that hackers and criminal elements will moderate their activities in times of crisis or conflict (see also Section 9.2.3 regarding patriotic hackers). Thus, if a cyberattack is intended to send a signal from the United States to Zendia, how is Zendia to recognize that signal? Overtly taking credit for such an attack goes only so far, especially given uncertain communications in times of tension or war, and the near certainty of less-than-responsible behavior on the part of one or both sides.

Finally, it seems likely that escalation issues would play out differently depending on whether the other nation(s) involved are near-peer competitors. Escalation to physical conflict is of less concern to the United States if the other nation has weak conventional forces and/or is a non-nuclear state. But a nation with nuclear weapons, or even with strong conventional forces in a position to damage U.S. allies, is another matter entirely, and relationships with such states may well need to be specially managed, paying particular attention to how escalation may be viewed, managed, and controlled, and most importantly, how miscalculation, misperception, or outright error may affect an adversary’s response.
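The difficulty of distinguishing a deliberately modest government cyberattack from the constant background of criminal and hobbyist activity can be illustrated with a toy baseline test. All numbers are invented, and a real detector would be far more sophisticated; the point is only that a small signal disappears into noisy background statistics.

```python
# Flag a day's incident count as anomalous only when it exceeds the
# historical baseline mean by several standard deviations.
from statistics import mean, stdev

def is_distinguishable(baseline_counts: list[int], observed: int,
                       k: float = 3.0) -> bool:
    """True if `observed` stands out against historical background noise."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return observed > mu + k * sigma

background = [950, 1010, 980, 1040, 990, 1005, 975]  # daily incident counts
print(is_distinguishable(background, 1050))  # False: lost in the noise
print(is_distinguishable(background, 5000))  # True: only a massive spike stands out
```

Under these (invented) figures, an attack adding a few dozen incidents to roughly a thousand per day never crosses the threshold, which is exactly why a "signal" sent via a modest cyberattack may simply go unrecognized.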

9.2.2.3 Determining the Impact and Magnitude of Cyber Response

If an adversary conducts a cyberattack against the United States, a first question for U.S. decision makers will be knowledge of the attack’s impact and magnitude. Such knowledge is necessary to inform an appropriate U.S. response. (If, for example, the United States wishes to make a commensurate response, it needs to know what parameters of the incoming attack would characterize a commensurate response.)

But in many kinds of cyberattack, the magnitude of the impact of the first cyberattack will be uncertain at first, and may remain so for a considerable period of time. Decision makers may then be caught between two challenges—a policy need to respond quickly and the technical fact that it may be necessary to wait until more information about impact and damage can be obtained. (As noted in Section 2.5, these tensions are especially challenging in the context of active defense.)

Decision makers often feel intense pressure to “do something” immediately after the onset of a crisis, and sometimes such pressure is warranted by the facts and circumstances of the situation. On the other hand, the lack of immediate information may prompt decision makers to take a worst-case view of the attack and thus to assume that the worst that might have happened was indeed what actually happened. Such a situation has obvious potential for inappropriate and unintended escalation.

9.2.3 Complications Introduced by Patriotic Hackers

Past experience strongly indicates that conflict or increased tension between two nations will result in the “patriotic hackers” of both nations (and perhaps their allies) taking action intended to harass or damage the other side. Such activities are not under the direct control of the national government, and as discussed in Section 7.2.3.3 may well interfere with the efforts of that government to manage the crisis vis-à-vis the other side.4 Indeed, the government of a targeted nation is likely to believe that a cyberattack conducted on it is the result of deliberate adversarial action rather than the actions of “unauthorized” parties. Thus, unauthorized activities of the patriotic hackers of Zendia against the United States may lead the United States to believe that the Zendian government has launched a cyberattack against it. A U.S. cyberattack against Zendia may be seen by the Zendian government as a cyber first strike against it.

Yet another complication involving patriotic hackers is the possibility that they might be directed by, inspired by, or tolerated by their government (or a rogue section within it), but in ways in which the government’s hand is not easily visible. Under such circumstances, hostile acts with damaging consequences could continue to occur (with corresponding benefits to the nation responsible) despite official denials. At the very least, the possibility that patriotic hackers may be operating could act as a plausible cover for government-sponsored cyberattacks, even if there were in fact no patriotic hackers doing anything.

4 Such activities also have some potential for complicating the operational efforts of that government—for example, because cyberattacks against the same target may interfere with each other.

9.2.4 Incentives for Self-restraint in Escalation

One set of incentives is based on concerns about an adversary’s response to escalation. Understanding this set of incentives is necessarily based on a sense of what kinds of offensive cyber actions, whether cyberattack or cyberexploitation that might be mistaken for cyberattack, might lead to what kinds of adversary responses (in cyberspace or in physical space). In this regard, an essential difference between cyberattack and the use of nuclear, chemical, biological, or space weapons is readily apparent—the initial use of any nuclear, chemical, biological, or space weapon, regardless of how it is used, would constitute an escalation of a conflict under almost any circumstances. By contrast, whether a given cyberattack (or conventional kinetic attack, for that matter) would be regarded as an escalation depends on the nature of the operation—the nature of the target(s), their geographic locations, their strategic significance, and so on.

A second set of incentives is based on concerns about “blowback”—the possibility that a cyberattack launched by the United States against Zendian computers might somehow affect U.S. computers at a later time. Understanding the likelihood of blowback will require a complex mix of technical insight and intelligence information.

9.2.5 Termination of Cyberconflict

How could the United States indicate to Zendia that it was no longer engaging in cyberattacks against it? Given that a cyberattack might well involve the placement of hardware and/or software agents within the Zendian IT infrastructure (both civilian and military), would the United States direct such agents to self-destruct? Would it inform Zendia of the IT penetrations and compromises it had made? On what basis would the Zendian government believe a claim by the United States that it had issued such a directive? (And, of course, all of the same questions apply in reverse as well.)

On the other hand, such actions may be more analogous to cleanup and recovery efforts after a kinetic war. Conflict termination in a kinetic war means that both sides stop shooting at each other—and refrain from taking further destructive actions. This point suggests that software and/or hardware agents within an adversary’s IT infrastructure must be designed so that they are under the positive control of the launching nation—and thus that fully autonomous agents are inconsistent with positive control. In addition, an attacker may need to keep careful track of where these agents are implanted, so that subsequent “cyber de-mining” operations are possible when hostilities have terminated.

9.2.6 The Role of Transparency

Where kinetic weapons are concerned, transparency and confidence-building measures such as adherence to mutually agreed “rules of the road” for naval ships at sea, prenotification of large troop movements, and non-interference with national technical means of verification have been used to promote stability and mutual understanding about a potential adversary’s intent. Secrecy surrounding cyberattack policy works against transparency. In addition, military operations on land, sea, and air are easily distinguishable from most non-military movements, whereas it is likely to be difficult to distinguish between military and non-military cyber operations.

9.2.7 Catalytic Cyberconflict

Catalytic conflict refers to the phenomenon in which a third party instigates conflict between two other parties. These parties could be nation-states or subnational groups, such as terrorist groups. The canonical scenario is one in which the instigator attacks either Zendia or Ruritania in such a way that Zendia attributes the attack to Ruritania, or vice versa. To increase confidence in the success of initiating a catalytic war, the instigator might attack both parties, seeking to fool each party into thinking that the other party was responsible.

As also noted in Section 2.4.2, high-confidence attribution of a cyberattack under all circumstances is arguably very problematic, and an instigator would find it comparatively easy to deceive each party about the attacker’s identity. Thus, a catalytic attack could be very plausibly executed. In addition, if a state of tension already exists between the United States and Zendia, both U.S. and Zendian leaders will be predisposed toward thinking the worst about each other—and thus may be less likely to exercise due diligence in carefully attributing a cyberattack. A Ruritanian instigator might thus choose just such a time to conduct a catalytic cyberattack.
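The catalytic scenario reduces to a few lines of toy code: because each victim observes only the apparent origin of an attack, a naive attribution rule blames whichever party the instigator chooses to frame. The function names and the dict "packet" here are assumptions of this illustration, not any real protocol.

```python
# Toy model of catalytic conflict via forged attack origins.

def forge_attack(true_origin: str, framed_party: str) -> dict:
    """The instigator launches an attack whose observable origin is forged."""
    return {"apparent_origin": framed_party, "true_origin": true_origin}

def naive_attribution(attack: dict) -> str:
    """Attribution based only on what the victim can observe."""
    return attack["apparent_origin"]

attack_on_zendia = forge_attack(true_origin="Instigator", framed_party="Ruritania")
attack_on_ruritania = forge_attack(true_origin="Instigator", framed_party="Zendia")

# Each victim blames the other party, and the instigator is never named:
print(naive_attribution(attack_on_zendia))     # Ruritania
print(naive_attribution(attack_on_ruritania))  # Zendia
```

The sketch makes the structural point explicit: so long as attribution rests on observables the instigator controls, both victims converge on each other rather than on the true attacker.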

OCR for page 302
 SPECULATIONS ON THE DYNAMICS OF CYbERCONFLICT 9.3 CYBERCONFLICT BETWEEN THE UNITED STATES AND NON-STATE ACTORS Competition with nation-states is not the only kind of conflict that might involve the United States. For example, the United States might be the target of a cyberattack by a non-state party (such as a terrorist group). A terrorist group, by definition, does not operate as a nation-state, and there would inevitably be difficulties in identifying the relevant terrorist group (many terrorist groups would surely like to be able to conduct a cyberattack against the United States), thus complicating the “impose costs” strategy. In addition, if the terrorist group were operating under the auspices of a failed state, a cyber counterattack would be likely to find few suit- able targets in the failed state and thus would have little impact. (Kinetic counterattack might be feasible, as indicated by the experience of the United States in attacking Afghanistan immediately after the September 11, 2001, terrorist attacks on the World Trade Center and the Pentagon.) If the terrorist group were operating under the unwitting cover of another state, all of the attribution problems described in Section 2.4.2 would apply, and the discussion in Section 7.2.1.2 on the merits of self-defense in neutral territory would be relevant. Criminal groups conducting cyberattacks for illicit monetary gain are also important non-state actors. If they operate across national boundar- ies, law enforcement efforts to shut down the operations of such groups are likely to take a long time, if they are successful at all. Thus, one might plausibly consider, in addition to the usual law enforcement efforts, a dif- ferent response paradigm that could call for cyberattacks to terminate or attenuate their activities. Against non-state parties, deterrence by retaliation may be particu- larly ineffective. First, a non-state group may be particularly difficult to identify. 
A lack of identification means uncertainty about the appropriate focus of any retaliatory action, and also that the decision-making calculus of the non-state group is likely to be poorly understood. Second, a non-state group is likely to have few if any information technology assets that can be targeted. Third, some groups (such as organized hacker groups) regard counterattacks as a challenge to be welcomed rather than something to be feared; a criminal group might respond to a counterattack with a much stronger cyberattack than the one initially launched. Fourth, a non-state group such as a terrorist or insurgent group might seek to provoke cyber retaliation in order to galvanize public support for it or to antagonize the public against the United States.5

A particularly challenging problem is the prospect of an extended cyber-guerilla campaign against the United States by a non-state actor (perhaps with state sponsorship) operating globally over periods of months or years. In many ways, coping with such a campaign is similar to the physical-space kinetic analog of an extended terrorist campaign against the United States. For example:

• The set of possible targets is nearly infinite, suggesting that hardening every possible target against attack is an implausible strategy.
• Knowing the adversary’s value calculus (what the adversary values and how he values it) is fraught with uncertainty, which makes strategies based on deterrence by retaliation less effective as a policy tool. For example, if the United States cannot determine assets that the adversary values, credible retaliation is impossible to threaten. If the adversary-valued assets are in a friendly state, attacking those assets might have negative repercussions.
• Penetration of the entities responsible for hostile actions against the United States (that is, turning an insider) is likely to be very problematic because of the difficulty of identifying an insider with whom to engage.
• A continuing campaign, whether kinetic or cyber, could be very effective in instilling fear, terror, and uncertainty in the population regardless of the actual level of damage being inflicted.6

There may also be factors that differentiate the situation as compared to the kinetic analog. For example, kinetic terrorism usually has dramatically visible effects, whereas the effects of a sustained guerilla campaign of cyberattacks may be far less visible, especially against the background noise of daily cyberattacks from myriad sources.
Uncertainty and an undermining of confidence in information technology may be the most likely result of a cyber campaign. Also, state sponsorship may provide those responsible with access to intelligence that could amplify the potency of guerilla cyberattacks.

How might the United States combat such a campaign? Although it is probably possible to harden genuinely critical targets of cyberattack and thereby make a truly devastating attack more difficult, the number of possible lucrative targets is large. Thus, such a campaign could still be expected to have some non-trivial degree of success. Because the locus of an attack can be shifted arbitrarily and essentially instantaneously, active threat neutralization would provide at best transient relief, if any at all. Moreover, enlisting the assistance of foreign national authorities is problematic because a shifting geographic locus can easily negate the effectiveness of any assistance offered.

An alternative to the methods described above is to use techniques such as deception and infiltration coupled with covert cyberattack. Deception might be used to induce operatives to violate operational cybersecurity by opening themselves to cyberattack (e.g., by visiting putatively useful websites that might be able to infect visitors with a selectively acting Trojan horse). Infiltration may be difficult (as described above) but would have to be a priority effort. But whether this alternative would in fact be more effective against a cyberterrorist group is an open question.

5 The notion of provoking U.S. retaliation as a technique for gaining the sympathies of the Islamic world at large is a basic tenet of Al Qaeda’s strategy against the United States. See, for example, Rohan Gunaratna, Inside Al Qaeda: Global Network of Terror, Columbia University Press, New York, 2002.

6 For example, in the D.C. sniper case of 2002, 10 people were shot over a period of 3 weeks (Jessica Reaves, “People of the Week: John Muhammad and John Malvo,” Time, October 24, 2002, available at http://www.time.com/time/nation/article/0,8599,384284,00.html). Inhabitants of the greater D.C. area were terrorized and fearful because of the sniper, despite the fact that there were 18 “traditional” homicides during that time (Susan Kim, “Fear Lingers in DC Area,” Disaster News Network, November 12, 2002, available at http://www.disasternews.net/news/article.php?articleid=57).

9.4
THE POLITICAL SIDE OF ESCALATION

The discussions in previous sections of this chapter address escalation dynamics primarily from a military standpoint. Yet escalation dynamics inevitably has a political and psychological component that must not be overlooked. For example, Section 2.5 (on active defense) points out that U.S. cyberattacks undertaken under the rubric of active defense may not be perceived by others as innocent acts of self-defense, even if they are intended by the United States as such.
While in most conflicts both sides claim that they are acting in self-defense, cyberconflicts are a particularly messy domain in which to air and judge such claims. Another possible misperception may arise from intelligence collection activities that might involve cyberattack techniques. As noted in Section 2.6.1, the tools needed to conduct a cyberexploitation may not be very different from those needed to conduct a cyberattack. On the other hand, a nation’s tolerance for being the target of a cyberattack may be much lower than its tolerance for being the target of a cyberexploitation.

Thus, consider the political ramifications in the following troublesome scenarios:

• Zendia might believe that it has been attacked deliberately by the United States even when the United States has not done so. Indeed, because of the ongoing nature of various attack-like activities (e.g., hacking and other activities) against the computer systems and networks of most nations, the Zendian conclusion that Zendian computer systems are being attacked is certainly true. Attribution of such an attack is a different matter, and because hard evidence for attribution is difficult to obtain, the Zendian government might make inferences about the likelihood of U.S. involvement by giving more weight to a general understanding of U.S. policy and posture toward it than might be warranted by the specific facts and circumstances of the situation. Evidence that appears to confirm U.S. involvement will be easy to find, whether or not the United States is actually involved, and the lack of U.S.-specific “fingerprints” can easily be attributed to U.S. technological superiority in conducting such attacks.

• An active defense undertaken by the United States of its systems and networks against Zendia could have significant political consequences. For example, even if the United States had technical evidence that was incontrovertible (and it never is) pointing to the Zendian government, the Zendians could still deny that they had launched such an attack—and in the court of world opinion, the Zendian denial could carry some weight when considered against past U.S. assertions regarding similar issues. That is, U.S. cyberattacks (counter-cyberattacks, to be precise) undertaken under the rubric of active defense may not be perceived as innocent acts of self-defense, even if they are. The result could be a flurry of charges and countercharges that would further muddy the waters and escalate the level of political tension and mistrust.
• The United States plants a software agent in a Zendian military system but does not activate it (cf. Section 2.2.4). Zendia (being attacked) may well regard the hostile action as beginning at the moment the U.S. agent is planted, whereas the United States may believe that the hostile action begins only when the agent is activated.

• The United States launches a cyberattack against a Zendian military factory, but the direct damage from this attack is not visible to the naked eye (Section 2.3.1.1). Without CNN images of smoking holes in the ground or troops on the move, an outside observer must weigh competing claims without tangible evidence one way or the other. Under such circumstances, the reputations of the different parties in the eyes of each other are likely to play a much larger political role.

• The United States plants software agents in some of Zendia’s critical networks to collect intelligence information. These agents are designed to be reprogrammable in place—that is, the United States can update these agents with new capabilities. During a time of crisis, Zendian authorities discover some of these agents and learn that they have been present for a while, that they are sending back to the United States very sensitive information, and that their capabilities can be changed on a moment’s notice. Even if no harmful action has yet been taken, it is entirely possible that Zendia would see itself as being the target of a U.S. cyberattack.

• The United States is the target of a cyberattack against its air traffic control system that results in a number of airplane crashes and several hundred deaths. Initially, no definitive technical attribution can be made regarding the perpetrator of the attack, but in a matter of weeks, an all-source attribution—depending on somewhat uncertain human and signals intelligence—suggests that the perpetrator could be Zendia. The United States decides on a mixed kinetic and cyber response against Zendia but must persuade allies and the rest of the world that its attack on Zendia is in fact justified.

• Tensions between the United States and Zendia are high, even though diplomats are trying to defuse them. Over a relatively short period of time, Zendia conducts a number of cyberexploitations against a variety of computer systems and networks important to the U.S. military. Some of these activities are successful in compromising some sensitive but unclassified information, but the systems and networks in question do not experience any apparent functional degradation. However, in keeping with common press usage, U.S. news reports of these activities indicate that 300 Zendian “cyberattacks” have taken place against the U.S. military. In turn, these reports inflame passions in the United States, leading to significant pressures on the U.S. National Command Authority to respond aggressively against Zendia.
Factors such as those described above suggest that considerations other than those dictated by military or legal necessity play important roles in escalation dynamics, if nothing else because they can strongly affect the perceptions of decision makers on either side.