As noted in Chapter 1, progress in public policy to improve the nation’s cybersecurity posture has not been as rapid as might have been expected. One reason—perhaps the most important reason—is that cybersecurity is only one of a number of significant public policy issues—and measures taken to improve cybersecurity potentially have negative effects in these other areas. This chapter elaborates on some of the most significant tensions.
Economics and cybersecurity are intimately intertwined in the public policy debate in two ways—the scale of economic losses due to adversary operations for cyber exploitation and the effects of economics on the scope and nature of vendor and end-user investments in cybersecurity. (To date, the economic losses due to cyberattack are negligible by comparison.)
As implied in Chapter 4, economic approaches to promote cybersecurity should identify actions that lower barriers and eliminate disincentives. They should create incentives to boost the economic benefits that flow from attention to cybersecurity and should penalize inattention to cybersecurity or actions that cause harm in cyberspace. Some of the possible approaches are described briefly below, although there is no clear
national consensus on which of these, if any, should be implemented as policy, and legislation has not been passed on any of these approaches.
• Use of existing market mechanisms but with improved flow of information.
—One type of information is more and better information about threats and vulnerabilities, which could enable individual organizations to take appropriate action to strengthen their cybersecurity postures. For example, an organization may be driven to action if it hears that a large number of other organizations have already fallen victim to a given threat.
—A second type of information is information about an individual organization’s cybersecurity posture. For example, individual organizations in particular sectors of the economy can determine and adopt appropriate best-practice cybersecurity measures for those sectors. Another party, such as a government regulatory agency in the case of already-regulated industries, an insurance company for organizations carrying cybersecurity insurance, or the Securities and Exchange Commission for publicly held companies, would audit the adequacy of the organization’s adoption of best practices and publicize the results of such audits.1 Publicity about such results would in principle incentivize these organizations to improve their cybersecurity postures.
• Insurance. The insurance industry may have a role in incentivizing better cybersecurity. Consumers who buy insurance to compensate for losses incurred because of cybercrime will pay lower premiums if they have stronger cybersecurity postures, and thus market forces will help to drive improvements in cybersecurity. However, a variety of obstacles stand in the way of establishing a viable cyber-insurance market: the unavailability of actuarial data to set premiums appropriately; the highly correlated nature of losses from outbreaks (e.g., from viruses) in a largely homogeneous monoculture environment; the difficulty of substantiating claims; the intangible nature of losses and assets; and unclear legal grounds.
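The correlated-loss problem can be illustrated with a toy simulation (all figures below are hypothetical, chosen for illustration only): two insurers face the same expected annual payout, but in a monoculture where a single outbreak hits every policyholder at once, the worst year exhausts the entire pool.

```python
import random

random.seed(1)  # fixed seed so the illustration is deterministic

N_FIRMS = 1_000      # hypothetical number of policyholders
P_LOSS = 0.05        # annual probability of a damaging breach
LOSS = 1_000_000     # payout per breached firm, in dollars
N_YEARS = 1_000      # simulated underwriting years

def yearly_payouts(correlated: bool) -> list[int]:
    """Simulate the insurer's aggregate payout for each year."""
    payouts = []
    for _ in range(N_YEARS):
        if correlated:
            # Monoculture: a single outbreak hits every firm at once, or none.
            breached = N_FIRMS if random.random() < P_LOSS else 0
        else:
            # Independent risks: each firm is breached (or not) on its own.
            breached = sum(random.random() < P_LOSS for _ in range(N_FIRMS))
        payouts.append(breached * LOSS)
    return payouts

independent = yearly_payouts(correlated=False)
correlated = yearly_payouts(correlated=True)
```

Both scenarios have the same average payout, so premiums priced on expected loss alone look identical; it is the catastrophic worst year of the correlated case that makes the risk difficult to underwrite.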
• Standards setting and certification. This approach is based on three ideas: that good cybersecurity practices can be codified in standards, that such practices actually improve security, and that organizations publicly recognized as conforming to such standards can improve their competitive position in the marketplace. Relevant standards-setting bodies include the National Institute of Standards and Technology for the U.S.
1 President’s Council of Advisors on Science and Technology, Immediate Opportunities for Strengthening the Nation’s Cybersecurity, November 2013, available at http://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_cybersecurity_nov-2013.pdf.
government and the International Organization for Standardization for the private sector.
• Nonregulatory public-sector mechanisms. This approach uses some of the tools below to promote greater attention to and action on cybersecurity.
—Procurement regulations can be used to insist that information technology systems delivered to the government be more secure. With such systems thus available, vendors might be able to offer them to other customers as well.
—The federal government can choose to do business only with firms that provide adequate cybersecurity in their government work.
—The federal government itself could improve its own cybersecurity practices and offer itself as an example for the rest of the nation.
—A variety of tax incentives might be offered to stimulate greater investment in cybersecurity.
—Public recognition of adherence to high cybersecurity standards—a form of certification—might provide “bragging rights” for a firm that would translate into competitive advantages.
—The government can set voluntary cybersecurity standards if private organizations do not do so.
• Liability. This approach presumes that vendors and/or system operators held financially responsible for harms that result from cybersecurity breaches will make greater efforts than they do today to reduce the likelihood of such breaches. Opponents argue that the threat of liability would stifle technological innovation, potentially compromise trade secrets, and reduce the competitiveness of products subject to such forces. Moreover, they argue that vendors and operators should not be held responsible for cybersecurity incidents that can result from factors that are not under their control.
• Direct regulation. Regulation would be based on enforceable mandates for various cybersecurity measures. This is the ultimate form of changing the business cases—comply or face a penalty. Direct regulation might, for example, call for all regulated institutions to adopt certain kinds of standards relating to cybersecurity “best practices” regarding the services they provide to consumers or their own internal practices. Opponents of direct regulation argue that several factors would make it difficult to determine satisfactory regulations for cybersecurity.2 For example, regulations might divert resources that would otherwise be used to address actual threats. Costs of implementation would be highly variable and dependent on a number of factors beyond the control of the
2 Alfredo Garcia and Barry Horowitz, “The Potential for Underinvestment in Internet Security: Implications for Regulatory Policy,” Journal of Regulatory Economics 31(1):37-55, 2007, available at http://ssrn.com/abstract=889071.
regulated party. Risks vary greatly from system to system. There is wide variation in the technical and financial ability of firms to support security measures.
As an example of growing awareness that incentives may be important in cybersecurity, the present administration is promulgating its Cybersecurity Framework. Under development as this report is being written, the framework is a set of core practices to develop capabilities to manage cybersecurity.3 To encourage critical infrastructure companies to adopt this framework, the administration has identified a number of possible incentives that it is currently exploring, including:4
• Special consideration in the awards process for federal critical infrastructure grants;
• Priority in receiving certain government services, such as technical assistance in non-emergency situations;
• Reduced tort liability, limited indemnity, higher burdens of proof to establish liability, or the creation of a federal legal privilege that preempts state disclosure requirements; and
• Public recognition for adopters of the framework.
Regarding the negative economic impact of compromises in cybersecurity, numbers as high as $1 trillion annually have been heard in the public debate, and in 2012, the commander of U.S. Cyber Command asserted that the loss of industrial information and intellectual property through cyber espionage constitutes the “greatest transfer of wealth in history.”5 But in point of fact, the uncertainty in the actual magnitude is quite large, and other analysts speculate that the actual numbers—though significant in their own right—might be much lower than the highest known estimates.
For example, loss of intellectual property is today the poster child for
4 Michael Daniel, “Incentives to Support the Adoption of the Cybersecurity Framework,” August 6, 2013, available at http://www.whitehouse.gov/blog/2013/08/06/incentives-support-adoption-cybersecurity-framework.
5 Josh Rogin, “NSA Chief: Cybercrime Constitutes the Greatest Transfer of Wealth in History,” July 9, 2012, available at http://thecable.foreignpolicy.com/posts/2012/07/09/nsa_chief_cybercrime_constitutes_the_greatest_transfer_of_wealth_in_history#sthash.0k7NmFmQ.dpbs. The methodologies underlying such claims are controversial and are discussed in Section 3.6 on threat assessment.
the negative economic impact of adversarial cyber operations. However, intellectual property differs from physical property in important ways, not least in that “stolen” intellectual property is really copied intellectual property: it remains available to its owner, who can still exercise considerable control over it but no longer has exclusive control. Moreover, valuing intellectual property is a complex process. Is the value of intellectual property what it cost to produce, or what it might generate in revenues over its lifetime? How should a reduction in the period of exclusive control be valued? And there is no assurance that a taker of intellectual property will be able to use it properly or effectively.
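The competing valuation questions posed above can be made concrete with a toy discounted-cash-flow sketch; every figure here (development cost, revenue, discount rate, exclusivity horizon) is hypothetical and chosen purely for illustration.

```python
def npv(annual_revenue: float, years: int, rate: float) -> float:
    """Present value of a constant annual revenue stream over `years`."""
    return sum(annual_revenue / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical figures for a single piece of intellectual property.
DEV_COST = 5_000_000        # cost-based valuation: what it cost to produce
ANNUAL_REVENUE = 2_000_000  # annual revenue attributable to exclusive control
RATE = 0.08                 # discount rate

value_10yr = npv(ANNUAL_REVENUE, years=10, rate=RATE)  # full exclusivity period
value_4yr = npv(ANNUAL_REVENUE, years=4, rate=RATE)    # rival copies the IP after year 4
exclusivity_loss = value_10yr - value_4yr              # one estimate of the theft's cost
```

Even in this simple sketch the cost-based figure (`DEV_COST`) and the revenue-based figure (`value_10yr`) diverge widely, and the "cost" of the theft depends entirely on an assumption about how much the exclusivity period is shortened.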
Uncertainties also apply to valuing the loss of sensitive business information (such as negotiating strategies and company inside information). Company A may want to keep its negotiating strategy confidential, but if Company B, a competitor, knows it, Company B may be able to undercut Company A and unfairly win a contract. Insider information about Company C may lead to stock market manipulation. The loss of a contract is easy to value, but given that many factors usually affect the outcomes of such competitions, how could one tie a competitive loss to the loss of sensitive business information?
Opportunity costs are particularly hard to define. For example, service disruptions often delay service but do not deny it, and a customer who visits a Web site that is inaccessible today may well visit it tomorrow when it is accessible. Should the opportunity cost of a disruption be defined as the business foregone during the disruption or only the business that was lost forever? Damage to the reputation of a victimized company, also a category of opportunity cost, is often temporary—a company suffering a cybersecurity incident that is made public may see its stock price suffer, but a McAfee-CSIS report indicates that such a price drop usually lasts no more than a quarter.6
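The deferred-versus-lost distinction drawn above can be sketched with a small calculation; the revenue rate, outage length, and return fraction are invented for illustration.

```python
def disruption_cost(hourly_revenue: float, outage_hours: float,
                    return_fraction: float) -> dict:
    """Split the revenue affected by an outage into deferred vs. permanently lost."""
    foregone = hourly_revenue * outage_hours   # business not transacted during the outage
    lost = foregone * (1 - return_fraction)    # customers who never come back
    return {"foregone": foregone, "deferred": foregone - lost, "lost": lost}

# Hypothetical: a 6-hour outage at a site earning $10,000/hour,
# where 80 percent of turned-away customers return once service resumes.
costs = disruption_cost(10_000, 6, 0.8)
```

Depending on which definition of opportunity cost one adopts, the same outage is worth either the full `foregone` figure or only the much smaller `lost` figure, which is one reason published loss estimates vary so widely.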
Last, a number of other factors also affect the reliability of various estimates. Companies may not know that they have been victimized by a cyber intrusion. They may know they have been victimized but refrain from reporting it. The surveys taken to determine economic loss are often not representative, and questions about loss can be structured in a way that does not allow erroneously large individual estimates to be offset by errors in the other direction.7
6 Center for Strategic and International Studies, The Economic Impact of Cybercrime and Cyber Espionage, July 2013, available at http://www.mcafee.com/us/resources/reports/rp-economic-impact-cybercrime.pdf.
7 Dinei Florencio and Cormac Herley, “Sex, Lies, and Cyber-crime Surveys,” available at http://research.microsoft.com/apps/pubs/default.aspx?id=149886.
Estimates of losses due to cybercrime are intended to motivate action to deal with the cybercrime problem, and larger estimates presumably make the problem more urgent for policy makers to address. But disputes about methodology can erode the credibility of demands to take immediate action, even when the lower end of such estimates may be large enough from a public policy standpoint to warrant action.
Perhaps more important, even if the economic losses are large, users of information technology may be making a judgment that such losses are simply a cost of doing business. Although they may be loath to acknowledge it publicly, some users argue that they will not invest in security improvements until the losses they are incurring make such an investment economically worthwhile. Although economic calculations of this nature are unlikely to be the only reason that users fail to invest at a level commensurate with some externally assessed need, it may well be that some of these users simply have a different definition of need.
A stated goal of U.S. public policy is to promote innovation in products and services in the private sector. In information technology (as in other fields), vendors have significant financial incentives to gain a first-mover or a first-to-market advantage. For example, the vendor of a useful product or service that is first to market has a virtual monopoly on the offering, at least until a competitor comes along. During this period, the vendor has the chance to establish relationships with customers and to build loyalty, making it more difficult for a competitor to establish itself. Furthermore, customers of the initial product or service may well be reluctant to incur the costs of switching to a competitor.
Policy actions that detract from the ability of the private sector to innovate are inherently suspect from this perspective, and in particular policy actions to promote greater attention to cybersecurity in the private sector often run up against concerns that these actions will reduce innovation. The logic of reducing time to market for information technology products or services runs counter to enhancing security, which adds complexity, time, and cost in design and testing while being hard for customers to value. For example, the real-world software development environment is not conducive to focusing on security from the outset. Software developers often experience false starts, and many “first-try” artifacts are thrown away. In this environment, it makes very little sense to invest up front in secure development practices unless doing so is relatively inexpensive.
Furthermore, to apply secure development principles such as those described in Box 4.2, software designers and architects have to know
very well and in some considerable detail just what the ultimate artifact is supposed to do. But some large software systems emerge from incremental additions to small software systems in ways that have not been anticipated by the designers of the original system, and sometimes users change their minds about the features they want, or even worse, want contradictory features.
Functionality that users demand is sometimes in tension with security as well. Users demand attributes such as ease of use, interoperability, and backward compatibility. Often, information technology purchasers (whether individuals or firms) make product choices based on features, ease of use, performance, and dominance in a market, although in recent years the criteria for product selection have broadened to include security to some extent in some business domains.
As an example, consider the choice that a vendor must make in shipping a product—whether to ship with the security features turned on or off. If the purchaser is a novice, he or she may find that security features often get in the way of using the product, an outcome that may lead to frustration and customer dissatisfaction. Inability to use the product may also result in a phone call to the vendor for customer service, which is expensive for the vendor to provide. By contrast, shipping the product with security features turned off tends to reduce one source of customer complaints and makes it easier for the customer to use the product. The customer is likely to realize only at a later time the consequences of any security breaches that may occur as a result, at which point tying those consequences to the vendor’s decision may be difficult. Under these circumstances, many vendors will choose to ship with security turned off—and many customers will simply accept forever the vendor’s initial default settings.
Restricting users’ access privileges often has serious usability implications and makes it harder for users to get legitimate work done, as for example when someone needs higher access privileges temporarily but on a time-urgent basis. Program features that enable adversary access can be turned off, but doing so may disable functionality needed or desired by users. In some cases, closing down access paths and introducing cybersecurity to a system’s design slows it down or makes it harder to use. Other security measures may make it difficult to get work done or cumbersome to respond quickly in an emergency situation.
At the level of the computer programs needed for an innovative product or service, implementing the checking, monitoring, and recovery needed for secure operation imposes significant computational overhead. In addition, user demands for backward compatibility at the applications level often call for building into new systems some of the same security vulnerabilities present in the old systems.
Policy at the nexus of cybersecurity and civil liberties often generates substantial controversy. Civil liberties have an important informational dimension to them, and cybersecurity is in large part about protecting information, so it is not surprising that measures taken to enhance cybersecurity might raise civil liberties concerns.
Privacy is an ill-defined concept: people use the term to mean many different things, and it resists a clear, concise definition because it is experienced in a variety of social contexts. In the context of information, the term “privacy” usually refers to keeping ostensibly private information about an individual unavailable to parties who should not have that information. Privacy interests attach to the gathering, control, protection, and use of information about individuals.
Privacy and cybersecurity intersect in a number of ways, although the security of information against unauthorized access is different from privacy.8 In one basic sense, cybersecurity measures can protect privacy—an intruder seeking ostensibly private information (e.g., personal e-mails or photographs, financial or medical records, phone calling records) may be stymied by good cybersecurity measures.
But certain measures taken to enhance cybersecurity can also violate privacy. For example, some proposals call for technical measures to block Internet traffic containing malware before it reaches its destination. To identify malware-containing traffic, however, the content of all inbound network traffic must be inspected, and because most traffic is in fact malware-free, such inspection by any party other than the intended recipient is regarded by some as a violation of privacy. Under many circumstances, inspection of traffic in this manner is also a violation of law.
Another measure for enhancing cybersecurity calls for sharing technical information on various kinds of traffic with entities responsible for identifying and responding to intrusions. Technical information is information associated directly with the mechanisms used to effect access, to take advantage of vulnerabilities, or to execute malware payloads. For example:
8 What an individual regards as “private” may not be the same as what the law designates as being worthy of privacy protection—an individual may believe a record of pre-existing medical conditions should be kept away from life insurance providers, but the law may say otherwise. No technical security measure will protect the privacy interests of those who believe that legally authorized information flows constitute a violation of privacy.
• Malware (or intrusion) signatures. Sharing such information could help installations identify malware before it has a chance to affect vulnerable systems or networks.
• Time-correlated information on intrusions. Such information is an essential aspect of attack assessment, because simultaneous intrusions on multiple installations across the nation might signal the onset of a major attack. Important installations thus must be able to report their status to authorities responsible for coordinating such information.
• Frequency, nature, and effect of intrusions. How often are intrusions of a given type occurring? What tools are they using? What is the apparent purpose of these intrusions?
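The signature-matching idea behind the first bullet can be sketched as a naive substring scan; the signature names and byte patterns below are invented for illustration. Note that the filter must examine every payload, benign or not, which is the crux of the privacy objection raised earlier.

```python
# Hypothetical signature names and byte patterns, for illustration only.
SIGNATURES = {
    "worm-alpha": b"\x4d\x5a\x90\xeb\xfe",
    "rat-beta": b"CONNECT backdoor",
}

def scan_payload(payload: bytes) -> list:
    """Return the names of every signature found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

def filter_traffic(packets):
    """Inspect every payload, malicious or not; pass clean ones, flag matches."""
    passed, flagged = [], []
    for payload in packets:
        hits = scan_payload(payload)
        (flagged if hits else passed).append((payload, hits))
    return passed, flagged
```

Sharing the `SIGNATURES` table among installations is exactly the kind of technical information exchange the first bullet describes: recipients can detect the malware without ever seeing another organization's traffic.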
In some cases, real-time or near-real-time information sharing is a prerequisite for a prompt response. In other cases, after-the-fact coordination of information from multiple sources is necessary for forensic purposes or for detecting similar intrusions in the future. Nonetheless, many organizations are hesitant to share such information, citing concerns about possible antitrust or privacy violations and the loss of advantages with respect to their competitors. Private-sector organizations are also sometimes reluctant to share such information with government agencies, for fear of attracting regulatory attention. Similar issues also arise regarding the sharing of threat information among agencies of the U.S. government, especially those within the intelligence community. The result can be that a particular method of intrusion may be known to some (e.g., elements of the intelligence community or the military) and unknown to others (e.g., industry and the research community), thus impeding or delaying the development of effective countermeasures.
In addition, privacy rights can be implicated if the definition of the information to be shared is cast too broadly, if personally identifiable information is not removed from the information to be shared, or if the scope of the allowed purposes for sharing information goes beyond matters related to cybersecurity. The essential privacy point is that systematically obtaining the information described above for hostile traffic requires inspection of all incoming traffic, most of which is not relevant or hostile in any way. If the entities with whom the information is shared are law enforcement or national security authorities, privacy concerns are likely to be even stronger.
Freedom of expression, which includes freedom of religion, freedom of speech, freedom of the press, freedom of assembly, and freedom to petition the government, encompasses civil liberties that are often infringed
when the causes involved are unpopular. In such cases, one way of protecting individuals exercising rights of free expression is to provide a means for them to do so anonymously. Thus, an individual may choose to participate in an unattributable online discussion that is critical of the government or of an employer, to make an unidentified financial contribution to an organization or a political campaign, to attend a meeting organized by unpopular groups, or to write an unattributed article expressing a politically unpopular point of view.
Civil liberties concerns regarding free expression attach primarily to strong authentication at the packet level. Few people object to online banks using strong authentication—but many have strong objections to mandatory strong authentication that is independent of the application in question, and in particular they are concerned that strong authentication will curtail their freedom of expression.
To address concerns about free expression, it is sometimes proposed that mandatory strong authentication should apply only to a second Internet, which would be used by critical infrastructure providers and others who prefer to operate in a strongly authenticated environment. Although a new network with such capabilities would indeed help to identify attackers under some circumstances, attackers would invariably seek other ways to counter the authentication capabilities of this alternative, such as compromising the machines connected to the new network.9
In addition, a new network would face a number of challenges, such as retaining the economies of the present-day Internet and preventing any connection, physical or logical, to the regular Internet through which cyberattacks might be launched. On this last point, experience with large networks indicates that maintaining actual air-gap isolation between two Internets would be all but impossible—not for technical reasons but because of the human tendency to make such connections for the sake of convenience.
An important element of protecting civil liberties is due process—the state cannot deprive individuals of civil liberties in the absence of due process. Some cybersecurity measures can put pressure on due process. For example, due process could be compromised if government authorities surveil Internet traffic for cybersecurity purposes in ways that are illegal under existing law or if they cause collateral damage to innocent civilians in the process of responding to an adversarial cyber operation.
9 Steven M. Bellovin, “Identity and Security,” IEEE Security and Privacy 8(2, March-April):88, 2010.
Also, it is often alleged that responses to cyber intrusions must happen very rapidly—in a matter of milliseconds—because the intrusions occur very rapidly. Leaving aside the question of whether a rapid response is in fact required in all circumstances, even those situations in which a rapid response is necessary raise the question of whether due process can be exercised in such a short time. Some tasks, such as high-confidence attribution of a cyber intrusion to the legally responsible actor, may simply be impossible to accomplish in a short time, and yet accomplishment of these tasks may be necessary elements of due process.
In the international environment of the Internet, “Internet governance” is not a well-defined term. There is broad agreement that Internet governance includes management and coordination of the technical underpinnings of the Internet such as the Domain Name System, and development of the standards and protocols that enable the Internet to function.10 A more expansive definition of Internet governance, for which there is not broad international agreement, would include matters such as controlling spam; dealing with use of the Internet for illegal purposes; resolving the “digital divide” between developed and developing countries; protecting intellectual property other than domain names; protecting privacy and freedom of expression; and facilitating and regulating e-commerce.11
International debates over what should constitute the proper scope of Internet governance are quite contentious, with the United States generally arguing for a very restricted definition and other nations arguing for a more expansive one, and in particular for a definition that includes security from threats in cyberspace.
But different nations have different conceptions of what constitutes a threat from cyberspace. China and Russia, for example, often talk about “information security”—a term that is much more expansive than the U.S. conception of cybersecurity. These nations argue that Internet traffic containing information related to various political developments poses threats to their national security and political stability (e.g., news
10 Lennard G. Kruger, “Internet Governance and the Domain Name System: Issues for Congress,” Congressional Research Service, November 13, 2013, available at http://www.fas.org/sgp/crs/misc/R42351.pdf.
11 National Research Council, Signposts in Cyberspace: The Domain Name System and Internet Navigation, The National Academies Press, Washington, D.C., 2005.
stories about corruption at high levels of government) and thus that Internet governance should recognize their rights to manage—and if necessary, block—such traffic, just as other nations would be allowed to block malware-containing traffic. The United States and many Western nations have opposed such measures in multiple forums, and in particular have opposed attempts to broaden the Internet governance agenda in this manner. In this context, disputes over Internet governance are thus often disputes over content regulation in the name of Internet security.
There is also contention about who defines the protocols and standards for passing information and what these protocols and standards should contain, because these protocols and standards affect how traffic can be monitored or controlled. Of particular significance are parties—both in other nations and in the United States—that would require packet-level authentication in the basic Internet protocols in the name of promoting greater security. Requiring authentication in this manner would implicate all of the civil liberties issues discussed above as well as the performance and feasibility issues discussed in Chapter 2.
As is true for all nations, the United States has multiple policy objectives in cyberspace. For example, the United States is on record as promoting cybersecurity internationally, as illustrated in the 2011 White House International Strategy for Cyberspace, a document stating that “assuring the free flow of information, the security and privacy of data [emphasis added], and the integrity of the interconnected networks themselves are all essential to American and global economic prosperity, security, and the promotion of universal rights.”12
The United States also collects information around the world for intelligence purposes, and much of such collection depends on the penetration of information technology systems and networks to access the information transiting through them. Cybersecurity measures taken by the users, owners, and operators of these systems and networks thus tend to frustrate intelligence collection efforts, and according to public reports, the United States has undertaken a variety of efforts to circumvent or weaken these measures.
12 White House, International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World, May 2011, available at http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
On the face of it, these two policy objectives are inconsistent with each other—one promotes cybersecurity internationally and the other undermines it. Of course, this would not be the first time that policy makers have pursued mutually incompatible objectives. A first response to the existence of incompatible objectives is to acknowledge the tension between them and to recognize the possibility of tradeoffs—more of one may mean less of another, and the likely operational impact of policy tradeoffs made in different ways must be assessed and compared.
An illustration of this tradeoff is the Communications Assistance for Law Enforcement Act (CALEA) of 1994, which directs the telecommunications industry to design, develop, and deploy systems that support law enforcement requirements for electronic surveillance. Intelligence derived from electronic surveillance of adversaries (including criminals, hostile nations, and terrorists) is an important factor in shaping the U.S. response to adversary activities. But measures taken to facilitate CALEA-like access by authorized parties sometimes have the effect of reducing the security of the systems affected by those measures.13
Efforts continue today to introduce means of government access to the infrastructure of electronic communications,14 and some of these efforts are surreptitious. Regardless of the legality or policy wisdom of these efforts, a fundamental tradeoff faces national policy makers—whether reduced security for the communications infrastructure is worth the benefits of gaining or continuing access to adversary communications. Note also that benefits from the surveillance of adversary communications may be most obvious in the short term, whereas the costs of reduced security are likely to be felt in the long term. Advocates for maintaining government access to adversary communications in this manner will argue that the benefits are large and that whatever reductions in security result from “designed-in” government access are not significant. Opponents of this approach will argue the reverse.
13 An example is provided in Vassilis Prevelakis and Diomidis Spinellis, “The Athens Affair,” IEEE Spectrum 44(7):26-33, June 29, 2007, available at http://spectrum.ieee.org/telecom/security/the-athens-affair.
14 See, for example, Susan Landau, “Making Sense from Snowden: What’s Significant in the NSA Surveillance Revelations,” IEEE Security and Privacy 11(4, July/August):54-63, 2013, available at http://www.computer.org/cms/Computer.org/ComputingNow/pdfs/MakingSenseFromSnowden-IEEESecurityAndPrivacy.pdf, and “Making Sense of Snowden, Part II: What’s Significant in the NSA Revelations,” IEEE Security and Privacy 12(1, January/February):62-64, 2014, available at http://doi.ieeecomputersociety.org/10.1109/MSP.2013.161.
International norms of behavior are intended to guide states’ actions, sustain partnerships, and support the rule of law.15 Norms of international behavior are established in many ways, including the customary practice and behavior of nations and explicit agreements (treaties) that codify behavior that is permitted or proscribed.
The U.S. International Strategy for Cyberspace states that in cyberspace, the United States supports the development of a variety of norms for upholding fundamental freedoms; respect for property; valuing privacy; protection from crime; right of self-defense; global interoperability; network stability; reliable access; multi-stakeholder governance; and cybersecurity due diligence. But even a casual inspection of this set of possible norms would suggest that an international consensus for these norms would not be easy to achieve.
One of the most important factors influencing the adoption and enforcement of norms is the ability of all parties to monitor the extent to which other parties are in fact complying with them—parties can flout norms without consequence if they cannot be associated with such behavior. As discussed in Chapter 4 (Box 4.1), attributing actions in cyberspace to an appropriately responsible actor is problematic under many circumstances, especially if prompt attribution is required. Difficulties in attribution are likely to increase the difficulty of establishing norms of behavior in cyberspace.
For illustrative purposes, two domains in which norms may be relevant to cybersecurity relate to conducting cyber operations for different purposes and reaching explicit agreements internationally regarding acceptable and unacceptable behavior.
Distinguishing Between Cyber Operations Conducted for Different Purposes
In the cybersecurity domain, norms of behavior are contentious as well. For example, the United States draws a sharp line between collecting information related to national security and foreign policy and collecting information related to economic and business interests, arguing that the first constitutes espionage (an activity that is not illegal under international law) and that the second constitutes theft of intellectual property and trade secrets for economic advantage.
15 White House, International Strategy for Cyberspace—Prosperity, Security, and Openness in a Networked World, May 2011, available at http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
Most other nations do not draw such a sharp line between these two kinds of information collection. But even were all nations to agree in principle that such a line should be drawn, how might these two types of information (information related to national security and information related to economic and business interests) be distinguished in practice? For instance, consider the plans for a new fighter plane designed for export. Should expropriation of such plans be regarded as intelligence collection or as theft of intellectual property? If the nature of the information is not sufficient to categorize it, what other characteristics might differentiate it? Where it is stored? What it is used for? All of these questions, and others, remain to be answered.
And a further policy debate remains to be settled. Should the United States maintain the distinction between national security information and information related to economic or business interests? What would be the advantages and disadvantages, if any, to the United States of abandoning this distinction?
Today, the United States does not target intelligence assets for the specific purpose of enhancing the competitive position of U.S. industries or specific U.S. companies. The case for this current policy is based largely on the desire of the United States to uphold a robust legal regime for the protection of intellectual property and to preserve a level playing field on which competitors from different countries can make their best business cases on the merits. Revising this policy would mean relaxing the current restraints on intelligence collection for the benefit of private firms, thus allowing such firms to obtain from the U.S. intelligence community competitively useful and proprietary information about future generations of foreign products, such as airplanes or automobiles, or about the business operations and contract negotiating positions of their competitors.
Such a change in policy would require the U.S. government to wrestle with many thorny questions. For example, the U.S. government would have to decide which private firms should benefit from the government’s activities, and even what entities should count as a “U.S. firm.” Governments at the state and local level might well find that the prospect of U.S. intelligence agencies being used to help private firms would not sit well with foreign companies that they were trying to persuade to relocate to the United States. And such use of its intelligence agencies might well undercut the basis on which the United States could object to other nations conducting similar activities for the benefit of their own domestic industries, and it could lead to a “Wild West” environment in which anything goes.
Another problematic issue is the difference between cyber exploitation and cyberattack. As noted in Chapter 3, cyber exploitations and
cyberattacks use the same approaches to penetrating a system or network; this similarity between exploitations and attacks means that even if an intrusion is detected, the underlying intent may not be clear until some time has passed. Given that the distinction between an attack and an exploitation could be highly consequential, how should the United States respond when it is faced with a cyber intrusion of unknown intent?
For example, consider a scenario in which Elbonia plants software agents in some critical military networks of the United States to collect intelligence information. These agents are designed to be reprogrammable in place—that is, Elbonia can update these agents with new capabilities. During a time of crisis, U.S. authorities discover some of these agents and learn that they have been present for a while, that they are sending back to Elbonia very sensitive information, and that their capabilities can be changed on a moment’s notice. Even if no harmful action has yet been taken, it is entirely possible that the United States would see itself as being the target of an impending Elbonian cyberattack.
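The point that exploitation and attack share the same access path can be sketched abstractly. The following is an illustrative sketch only, not a depiction of any real malware, tool, or doctrine; the implant structure and payload names are invented:

```python
# Illustrative sketch: a generic "reprogrammable in place" implant whose
# penetration and tasking machinery is identical whether it is used for
# exploitation (copying data out) or attack (destroying data).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Implant:
    """A remotely retaskable agent: the delivery and update channel is the
    same regardless of what the payload ultimately does."""
    payloads: Dict[str, Callable[[str], str]]
    active: str = "collect"

    def retask(self, new_payload: str) -> None:
        # The operator can swap the payload "on a moment's notice";
        # nothing externally observable changes until the new payload runs.
        self.active = new_payload

    def run(self, target_file: str) -> str:
        return self.payloads[self.active](target_file)

# Two payloads sharing one implant: a defender who detects the implant
# cannot tell from the intrusion mechanism alone which one will run.
implant = Implant(payloads={
    "collect": lambda f: f"exfiltrated contents of {f}",   # exploitation
    "destroy": lambda f: f"overwrote {f} with zeros",      # attack
})

print(implant.run("war_plans.txt"))   # exploitation payload runs
implant.retask("destroy")
print(implant.run("war_plans.txt"))   # now an attack, via the same access path
```

From the defender’s side, detecting the implant reveals the access path but not which payload the operator will select, which is why intent remains ambiguous even after discovery.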
The possibility of confusion also applies if the United States conducts an exploitation against another nation. If the intent of an exploitation is nondestructive, how—if at all—should the United States inform the other nation of its nondestructive intentions? Such considerations are particularly important during periods of crisis or tension. During such periods, military action may be more likely, and it is entirely plausible that both sides would increase the intensity of the security scans each conducts on its critical systems and networks. More intense security scans often reveal offensive software agents that were implanted long before the onset of a crisis and that may have been overlooked in ordinary scans, and the discovery of these agents may well prompt fears that an attack is impending.16
Technical difficulties in distinguishing between exploitations and attacks (or preparations for attack) should not preclude the use of other methods for distinguishing them. For example, some analysts suggest that the nature of a targeted entity can provide useful clues to an adversary’s intention; others suggest that certain confidence-building measures in cyberspace, such as agreements to refrain from attacking certain kinds of facilities, can help as well. Such questions remain open at this time.
16 Herbert Lin, “Escalation Dynamics and Conflict Termination in Cyberspace,” Strategic Studies Quarterly 6(3):46-70, 2012.
Arms Control in Cyberspace17
The intent of an arms control agreement in general is usually to reduce the likelihood that conflict will occur and/or to reduce the destructiveness of any conflict that does occur. Such agreements can be bilateral or multilateral, and they can be cast formally as treaties, informally as memorandums of understanding, or even more informally as coordinated unilateral policies.
In principle, arms control agreements can limit or ban the signatories from conducting some combination of research, development, testing, production, procurement, or deployment on certain kinds of weapons; limit or ban the use of certain weapons and/or the circumstances under which certain weapons may or may not be used; or oblige signatories to take or to refrain from taking certain actions under certain circumstances to reassure other signatories about their benign intent (i.e., to take confidence-building measures).
For cyber weapons (where a cyber weapon is an information technology-based capability for conducting some kind of cyber intrusion), any limit on research, development, testing, production, procurement, or deployment of certain kinds of weapons is unlikely to be feasible. One reason is the verification challenge for such weapons; a second is the fact that such weapons have legitimate uses (e.g., both military and civilian entities use such weapons to test their own defenses). Distinguishing offensive capabilities developed for cyberattack from those used to shore up defenses against cyberattack would seem to be a very difficult if not impossible task.
Restrictions on the use of cyber weapons might entail, as an example, agreement to refrain from launching cyberattacks against national financial systems or power grids, much as nations today have agreed to avoid targeting hospitals in a kinetic attack. Agreements to restrict use are by their nature not verifiable, but the inability to verify such agreements has not prevented the world’s nations (including the United States) from agreeing to the Geneva Conventions, which contain similarly “unverifiable” restrictions.
17 Much of the discussion in this section is based on Herbert Lin, “A Virtual Necessity: Some Modest Steps Toward Greater Cybersecurity,” Bulletin of the Atomic Scientists, September 1, 2012, available at http://www.thebulletin.org/2012/september/virtual-necessitysome-modest-steps-toward-greater-cybersecurity.
Yet recognizing violations of such agreements may be problematic. One issue is that nonstate actors may have access to some of the same cyber capabilities as do national signatories, and nonstate actors are unlikely to adhere to any agreement that restricts their use of such capabilities. Another issue is the difficulty of tracing cyber intrusions to their ultimate origin. If the ultimate origin of a cyberattack can be concealed successfully, holding the violator of an agreement accountable becomes problematic.
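The tracing difficulty can be illustrated with a toy model of a “stepping stone” chain, in which an intruder relays traffic through a series of compromised intermediaries and each hop reveals only the hop immediately before it. The host names below are invented:

```python
# Toy model of attribution through a stepping-stone chain: the victim's
# logs show only the last relay, not the true origin of the intrusion.
chain = ["origin.example", "hop1.example", "hop2.example", "victim.example"]

def visible_source(chain):
    """What the victim's logs show: only the immediately preceding hop."""
    return chain[-2]

def trace(chain):
    """A full traceback requires cooperation (or compromise) at every
    intermediate hop, because each hop knows only its own predecessor."""
    hops = []
    for i in range(len(chain) - 1, 0, -1):
        hops.append(chain[i - 1])   # each hop reveals one step back
    return hops[-1]                 # the true origin, IF every hop cooperates

print(visible_source(chain))  # the victim sees only "hop2.example"
print(trace(chain))           # full cooperation would reveal "origin.example"
```

In practice the intermediate hops often sit in uncooperative jurisdictions, so the traceback stops partway and the “ultimate origin” remains concealed.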
Last, ambiguities between cyber exploitation and cyberattack complicate arms control agreements in cyberspace. A detected act of cyber exploitation may well be assessed by the target as a damaging or destructive act, or at least the prelude to such an act, yet forbidding cyber exploitation would go far beyond the current bounds of international law and fly in the face of what amounts to standard operating procedure today for essentially all nations.
Transparency and confidence-building measures (TCBMs) have been used to promote stability and mutual understanding when kinetic weapons are involved. Some possible TCBMs in cyberspace include (but are not limited to):
• Incident notification. Two or more nations agree to notify each other of serious cyber incidents, and to provide each other with information about these incidents.
• Joint exercises. Two nations engage in joint exercises to respond to a simulated cyber crisis that affects or involves both nations to see what information each side would need from the other.
• Publication of declaratory policies and/or doctrine about how a nation intends to use cyber capabilities, both offensive and defensive, to support its national interests.
• Notification of relevant nations regarding certain activities that might be viewed as hostile or escalatory.
• Direct communication with counterparts during times of tension or crisis.
• Mutual cooperation on matters related to securing cyberspace (e.g., jointly investigating the source of an attack).
• Imposing on nation-states an obligation to assist in the investigation and mitigation of cyber intrusions emanating from their territories.
Perhaps the most important challenge to the development of useful TCBMs in cyberspace is that offensive operations fundamentally depend on stealth and deception. Transparency and confidence-building measures are, as the name suggests, intended to be reassuring to an adversary; the success of most offensive operations depends on an adversary being falsely reassured. Thus, the misuse of these measures may well be an element of an adversary’s hostile use of cyberspace. In addition, many TCBMs are conventions for behavior (e.g., rules of the road) and as such do not speak to intent—but in cyberspace, intent may be the primary difference between a possibly prohibited act, such as certain kinds of
cyberattack, and an allowed one, such as cyber exploitation. Still, examining in a multilateral way various nations’ views about the nature of cyber weapons, cyberspace, offensive operations, and so on could promote greater mutual understanding among the parties involved.
Whether the challenges described above convincingly and definitively refute, even in principle, the possibility of meaningful arms control agreements in cyberspace is open to question today. What is clear is that progress in cyber arms control, if it is feasible at all, is likely to be slow.
The information technology industry is highly globalized. India and China play major roles in the IT industry, and Ireland, Israel, Korea, Taiwan, Japan, and some Scandinavian countries have also developed strong niches within it. Today, a product conceptualized and marketed in the United States might be designed to specifications in Taiwan, and batteries or hard drives obtained from Japan might become parts in a product assembled in China. (Table 5.1 traces possible origins for some components of a laptop computer.) Integrated circuits at the heart of a product might be designed and developed in the United States, fabricated in Taiwan, and incorporated into a product assembled from components supplied from around the world. Similar considerations apply to software—and software is important to any device, component, system, or network.

TABLE 5.1 Supply-Chain Geography—An Illustration

| Component of Laptop Computer | Location of Facilities Potentially Used by Supplier(s) |
|---|---|
| Liquid crystal display | China, Czech Republic, Japan, Poland, Singapore, Slovak Republic, South Korea, Taiwan |
| Memory | China, Israel, Italy, Japan, Malaysia, Philippines, Puerto Rico, Singapore, South Korea, Taiwan, United States |
| Processor | Canada, China, Costa Rica, Ireland, Israel, Malaysia, Singapore, United States, Vietnam |
| Hard disk drive | China, Ireland, Japan, Malaysia, Philippines, Singapore, Thailand, United States |

SOURCE: U.S. Government Accountability Office, National Security-Related Agencies Need to Better Address Risks, GAO-12-361, U.S. Government Printing Office, March 23, 2012, available at http://www.gao.gov/products/GAO-12-361.
The global nature of the IT supply chain raises concerns that foreign suppliers may be subject to pressures from their governments to manipulate the supply of critical components of IT systems or networks or, even worse, introduce substandard, faulty, counterfeit, or deliberately vulnerable components into the supply chain. U.S. users of these components, which include both commercial and government entities, would thus be using components that weakened their cybersecurity posture.
To manage the risks associated with a globalized supply chain, users of the components it provides employ a number of strategies, sometimes in concert with each other:18
• Using trusted suppliers. Such parties must be able to show that they have taken adequate measures to ensure the dependability of the components they supply or ship. Usually, such measures would be regarded as “best practices” that should be taken by suppliers whether they are foreign or domestic.
• Diversifying suppliers. The use of multiple suppliers increases the effort that an adversary must exert to be confident of introducing its ersatz components into a particular system of interest.
• Reducing the time between choosing a supplier and taking possession of the components provided. A shorter interval reduces the window within which an adversary can develop its ersatz components.
• Testing components. Components can be tested to ensure that they live up to the intended performance specifications. However, as a general rule, testing can indicate only the presence of a problem—not its absence. Thus, testing generally cannot demonstrate the absence of unwanted (and hostile) functionality in a component, although it may be able to provide evidence that the component does in fact perform as it is supposed to perform.
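As a hedged illustration of the testing strategy, the following sketch (with hypothetical component and file names) verifies a delivered firmware image against a supplier-published SHA-256 manifest. Such a check can confirm that a component matches what a trusted supplier shipped; it cannot demonstrate the absence of hostile functionality already present in the supplier’s own image:

```python
# Sketch of supply-chain integrity checking: compare the hash of a
# delivered component image against a manifest of known-good hashes
# published by a trusted supplier. Names and data are hypothetical.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_component(image: bytes, manifest: dict, name: str) -> bool:
    """Return True if `image` matches the manifest entry for `name`."""
    expected = manifest.get(name)
    return expected is not None and sha256_of(image) == expected

# The supplier publishes a manifest of known-good hashes...
firmware = b"\x7fELF...hypothetical firmware image"
manifest = {"disk-controller-fw-2.1": sha256_of(firmware)}

# ...and the integrator checks each delivered component against it.
print(verify_component(firmware, manifest, "disk-controller-fw-2.1"))
print(verify_component(firmware + b"\x00", manifest,
                       "disk-controller-fw-2.1"))  # tampered in transit
```

Note what the check covers: tampering between the supplier and the integrator. A hostile function inserted upstream, before the manifest was generated, would pass this test, which is the sense in which testing cannot prove the absence of unwanted functionality.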
The strategies described above address some of the important process and performance aspects of ensuring the integrity of the IT supply chain. But implementing these strategies entails some cost, and many of the most stringent strategies (e.g., self-fabrication of integrated circuit chips) are too expensive or otherwise impractical for widespread use. It is thus fair to say that the risk associated with corruption in the supply chain can be managed and mitigated to a certain degree—but not avoided entirely.

18 National Institute of Standards and Technology, “NIST Special Publication 800-53—Recommended Security Controls for Federal Information Systems and Organizations,” 2010, available at http://csrc.nist.gov/publications/nistpubs/800-53-Rev3/sp800-53-rev3final_updated-errata_05-01-2010.pdf.
Policy regarding the use of offensive operations in cyberspace is generally classified. As a matter of logic, it is clear that offensive operations can be conducted for cyber defensive purposes and also for other purposes.19 Furthermore, according to a variety of public sources, policy regarding offensive operations in cyberspace includes the following points:
• The United States would respond to hostile acts in cyberspace as it would to any other threat to the nation, and reserves the right to use all necessary means—diplomatic, informational, military, and economic—as appropriate and consistent with applicable international law, in order to defend the nation, its allies, its partners, and its interests.20
• The laws of war apply to cyberspace,21 and because the United States has made a commitment to behaving in accordance with these laws, cyber operations conducted by the United States are expected to conform to the laws of war.
• Offensive operations in cyberspace offer “unique and unconventional capabilities to advance U.S. national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging.”22
• Offensive operations likely to have effects in the United States require presidential approval, except in emergency situations.23
• Cyber operations, including offensive operations, that are likely to result in significant consequences (such as loss of life; actions in response against the United States; damage to property; serious adverse foreign policy or economic impacts) require presidential approval.24
19 National Research Council, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities, The National Academies Press, Washington, D.C., 2009.
20 White House, International Strategy for Cyberspace—Prosperity, Security, and Openness in a Networked World, May 2011, available at http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
21 Harold Koh, Speech on International Law on Cyberspace at the USCYBERCOM Inter-Agency Legal Conference, Ft. Meade, Md., September 18, 2012, available at http://opiniojuris.org/2012/09/19/harold-koh-on-international-law-in-cyberspace/.
22 Robert Gellman, “Secret Cyber Directive Calls for Ability to Attack Without Warning,” Washington Post, June 7, 2013.
23 Gellman, “Secret Cyber Directive Calls for Ability to Attack Without Warning,” 2013.
24 Glenn Greenwald and Ewen MacAskill, “Obama Orders U.S. to Draw Up Overseas Target List for Cyberattacks,” The Guardian, June 7, 2013, available at http://www.theguardian.com/world/2013/jun/07/obama-china-targets-cyber-overseas.
However, despite public knowledge of these points, the United States has not articulated publicly a military doctrine for how cyber capabilities might be used operationally. (As a notable point of comparison, U.S. approaches to using nuclear weapons were publicly discussed during the Cold War.)
A particularly important question about the use of offensive cyber operations concerns escalation, that is, the possibility that an initial conflict in cyberspace may grow. But the escalation dynamics of conflict in cyberspace are not well understood. How would escalation unfold? How could escalation be prevented (or deterred)? Theories of escalation dynamics, especially in the nuclear domain, are unlikely to apply in cyberspace because of the profound differences between the nuclear and cyber domains. Significant differences include the much greater uncertainty of attribution, the ability of nonstate actors to interfere in the management of a conflict, and the existence of many states with nontrivial capabilities to conduct cyber operations.
Last, the fact that the Department of Defense is willing to consider undertaking offensive operations in cyberspace as part of defending its own systems and networks raises the question of whether offensive operations might be useful to defend non-DOD systems, and in particular to defend entities in the private sector. Today, a private-sector entity that is the target of hostile actions in cyberspace can respond to such threats by taking measures within its organizational boundaries to strengthen its defensive posture, and it can seek the assistance of law enforcement authorities to investigate and to take action to mitigate the threat.
Although both of these responses (if properly implemented) are helpful, their effectiveness is limited. Tightening security often reduces important functionality in the systems being locked down—they become more difficult, slower, and inconvenient to use. Sustaining a locked-down posture is also costly. Law enforcement authorities can help, but they cannot do so quickly and the resources they can bring to bear are usually overwhelmed by the demands for their assistance.
A number of commentators and reports have suggested that a more aggressive defensive posture—that is, an active defense—is appropriate under some circumstances.25 Such an approach, especially if carried out by the targeted private-sector entities, raises a host of technical, legal, and policy issues.

A U.S. policy that condones aggressive self-help might serve as a deterrent that reduces the cyber threat to private-sector entities. Alternatively, it might encourage a free-for-all environment in which any aggrieved party anywhere in the world would feel justified in conducting offensive operations against the alleged offender. This debate is not likely to be settled soon.

25 See, for example, Ellen Nakashima, “When Is a Cyberattack a Matter of Defense?,” Washington Post, February 27, 2012, available at http://www.washingtonpost.com/blogs/checkpoint-washington/post/active-defense-at-center-of-debate-oncyberattacks/2012/02/27/gIQACFoKeR_blog.html; Ellen Nakashima, “To Thwart Hackers, Firms Salting Their Servers with Fake Data,” Washington Post, January 2, 2013, available at http://www.washingtonpost.com/world/national-security/to-thwarthackers-firms-salting-their-servers-with-fake-data/2013/01/02/3ce00712-4afa-11e2-9a42-d1ce6d0ed278_story.html; David E. Sanger and Thom Shanker, “N.S.A. Devises Radio Pathway into Computers,” New York Times, January 14, 2014, available at http://www.nytimes.com/2014/01/15/us/nsa-effort-pries-open-computers-not-connected-to-internet.html; and Thom Shanker, “U.S. Weighs Its Strategy on Warfare in Cyberspace,” New York Times, October 19, 2011, available at http://www.nytimes.com/2011/10/19/world/africa/united-states-weighs-cyberwarfare-strategy.html.