4

Enhancing Cybersecurity

4.1 APPROACHES TO IMPROVING SECURITY

There are several approaches to minimizing the number and significance of adversarial cyber operations. The approaches described below are not mutually exclusive, and robust cybersecurity generally requires that some combination of them be used.

4.1.1 Reducing Reliance on Information Technology

The most basic way to improve cybersecurity is to reduce the use of information technology (IT) in critical contexts. Thus, the advantages of using IT must be weighed against the security risks that the use of IT might entail. In some cases, security risks cannot be mitigated to a sufficient degree, and the use of IT should be rejected. In other cases, security risks can be mitigated with some degree of effort and expense—these costs should be factored into the decision. But what should not happen is that security risks be ignored entirely—as may sometimes be the case.

An example of reducing reliance on IT is a decision to refrain from connecting a computer system to the Internet, even if not connecting might increase costs or decrease the system’s utility. The theory underlying such a decision is that the absence of an Internet connection to such a computer will prevent intruders from gaining access to it and thus that the computer system will be safe. In fact, this theory is not right—the lack of such a connection reduces but does not prevent access, and thus the safety of the computer system cannot be taken for granted forever after. But disconnection does help under many circumstances.

The broader point can be illustrated by supervisory control and data acquisition (SCADA) systems, some of which are connected to the Internet.1 SCADA systems are used to control many elements of physical infrastructure: electric power, gas and oil pipelines, chemical plants, factories, water and sewage, and so on. Infrastructure operators connect their SCADA systems to the Internet to facilitate communications with them, at least in part because connections and communications hardware that are based on standard Internet protocols are often the least expensive way to provide such communications. But Internet connections also potentially provide access paths to these SCADA systems that intruders can use.

Note that disconnection from the Internet may not be easy to accomplish. Although SCADA systems may be taken off the Internet, connecting these systems to administrative computers that are themselves connected to the Internet (as might be useful for optimizing billing, for example) means that these SCADA systems are in fact connected—indirectly—to the Internet.

1 See http://cyberarms.wordpress.com/2013/03/19/worldwide-map-of-internet-connected-scada-systems/.
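
The indirect-connection problem lends itself to a simple mechanical check: model the network as a graph of hosts and links, and ask whether any path reaches the SCADA system from an Internet-facing host. The sketch below illustrates the idea in Python; the topology and host names are hypothetical.

    from collections import deque

    def reachable(links, start):
        """Return the set of hosts reachable from `start` via breadth-first search."""
        seen = {start}
        queue = deque([start])
        while queue:
            host = queue.popleft()
            for neighbor in links.get(host, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    # Hypothetical topology: the SCADA controller is "off the Internet,"
    # but an administrative host bridges the two networks.
    links = {
        "internet": ["admin-pc"],
        "admin-pc": ["internet", "billing-db", "scada-controller"],
        "billing-db": ["admin-pc"],
        "scada-controller": ["admin-pc"],
    }

    if "scada-controller" in reachable(links, "internet"):
        print("SCADA system is indirectly connected to the Internet")

Here the controller never touches the Internet directly, yet the check reports it as reachable through the administrative machine, which is exactly the indirect connection described above.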

4.1.2 Knowing That Security Has Been Penetrated

Detection

From the standpoint of an individual system or network operator, the only thing worse than being penetrated is being penetrated and not knowing about it. Detecting that one has been the target of a hostile cyber operation is also the first step toward taking any kind of specific remedial action.

Detection involves a decision that something (e.g., some file, some action) is harmful (or potentially harmful) or not harmful. Making such decisions is problematic because what counts as harmful or not harmful is for the most part a human decision—and such judgments may not be made correctly. In addition, the number of nonharmful things happening inside a computer or a network is generally quite large compared with the number of harmful things going on. So the detection problem is nearly always one of finding needles in haystacks.

One often-used technique for detecting malware is to check to see if a suspect program has been previously identified as being “bad.” Such checks depend on “signatures” that might be associated with the program—the name of the program, the size of the program, the date when it was created, a hash of the program,2 and so on. Signatures might also be associated with the path through which a program has arrived at the target—where it came from, for example.

The Einstein program of the Department of Homeland Security (DHS) is an example of a signature-based approach to improving cybersecurity.3 By law and policy, DHS is the primary agency responsible for protecting U.S. government agencies other than the Department of Defense and the intelligence community. Einstein monitors Internet traffic going in and out of government networks and inspects a variety of traffic data (i.e., the header information in each packet but not the content of a packet itself) and compares that data to known patterns of such data that have previously been associated with malware. If the match is sufficiently close, further action can be taken (e.g., a notification of detection made or traffic dropped).

This signature-based technique for detection has two primary weaknesses. First, it is easy to morph the code without affecting what the program can do, so that there are an unlimited number of functionally equivalent versions with different signatures. Second, the technique cannot identify a program as malware if the program has never been seen before.

Another technique for detection monitors the behavior of a program; if the program does “bad things,” it is identified as malware. When there are behavioral signatures that help with anomaly detection, this technique can be useful. (A behavioral signature can be specified in terms of designating as suspicious any one of a specific set of actions, or it can be behavior that is significantly different from a user’s “normal” behavior.) But it is not a general solution because there is usually no reliable way to distinguish between an authorized user who wishes to do something for a legitimate and benign purpose and an intruder who wishes to do that very same thing for some nefarious purpose. In practice, this technique often results in a significant number of false positives—indications that something nefarious is going on when in fact it is not. A high level of false positives annoys legitimate users, and often results in these users being unable to get their work done.

2 One definition of a “hash function” is an algorithm that turns an arbitrary sequence of bits (1’s and 0’s) into a fixed-length value known as the hash of that string. With a well-constructed algorithm, two different bit sequences are very unlikely to have the same hash value.

3 Department of Homeland Security, National Cyber Security Division, Computer Emergency Readiness Team (US-CERT), Privacy Impact Assessment [of the] Einstein Program: Collecting, Analyzing, and Sharing Computer Security Information Across the Federal Civilian Government, September 2004, available at http://www.dhs.gov/xlibrary/assets/privacy/privacy_pia_eisntein.pdf.
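
A minimal sketch of the hash-based variant of signature checking follows; the blocklist contents are hypothetical, and real products combine many signature types (names, sizes, origins) rather than file hashes alone.

    import hashlib

    # Hypothetical blocklist: SHA-256 hashes of previously identified malware.
    KNOWN_BAD_HASHES = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def file_sha256(path):
        """Compute the SHA-256 hash of a file, reading in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_malware(path):
        """Signature check: flag the file only if its hash is on the blocklist.
        Changing even one byte of the program changes the hash, which is why
        this technique misses functionally equivalent variants and anything
        never seen before."""
        return file_sha256(path) in KNOWN_BAD_HASHES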

Assessment

A hostile action taken against an individual system or network may or may not be part of a larger adversary operation that affects many systems simultaneously, and the scale and the nature of the systems and networks affected in an operation are critical information for decision makers. Detecting a coordinated adversary effort against the background noise of ongoing hostile operations also remains an enormous challenge, given that useful information from multiple sites must be made available on a timely basis. (And as detection capabilities improve, adversaries will take steps to mask such signs of coordinated efforts.)

An assessment addresses many factors, including the scale of the hostile cyber operation (how many entities are being targeted), the nature of the targets (which entities are being targeted), the success of the operation and the extent and nature of damage caused by the operation, the extent and nature of any foreign involvement derived from technical analysis of the operation and/or any available intelligence information not specifically derived from the operation itself, and attribution of the operation to a responsible party (discussed further in Box 4.1). Information on such factors is likely to be quite scarce when the first indications are received of “something bad going on in cyberspace.” Assessments are further complicated by the possibility that an initial penetration is simply paving the way for hostile payloads that will be delivered later, or by the possibility that the damage done by an adversarial operation will not be visible for a long time after it has taken place.

The government agencies responsible for threat assessment and warning can, in principle, draw on a wide range of information sources, both inside and outside the government. In addition to hearing from private-sector entities that are being targeted, cognizant government agencies can communicate with security IT vendors, such as Symantec and McAfee, that monitor the Internet for signs of hostile activity. Other public interest groups, such as the OpenNet Initiative and the Information Warfare Monitor, seek to monitor hostile operations launched on the Internet.4

4 See the OpenNet Initiative (http://opennet.net/) and the Information Warfare Monitor (http://www.infowar-monitor.net/) Web sites for more information on these groups. A useful press report on the activities of these groups can be found at Kim Hart, “A New Breed of Hackers Tracks Online Acts of War,” Washington Post, August 27, 2008, available at http://www.washingtonpost.com/wp-dyn/content/article/2008/08/26/AR2008082603128_pf.html.

4.1.3 Defending a System or Network

Defending a system or network means taking actions so that a hostile actor is less successful than he or she would otherwise be in the absence of defensive actions. A desirable side effect of taking such measures is that by reducing the likelihood that a hostile actor will succeed, that actor may also be deterred from taking hostile action because of its possible futility. Some of the most important approaches to defense include:

• Reducing the number of vulnerabilities contained in any deployed IT system or network. There are two methods for doing so.

— Fix vulnerabilities as soon as they become known (a method known as “patching”). Much software has the capability to update itself, and many updates received automatically by a system contain patches that repair vulnerabilities that have become known since the software was released for general use.

— Design and implement software so that it has fewer vulnerabilities from the start. Software designers know many principles about how to design and build IT systems and networks more securely (Box 4.2). Systems or networks not built in accord with such principles will almost certainly exhibit inherent vulnerabilities that are difficult or impossible to address. In some cases, hardware-based security features are feasible—implementing such features in hardware is often more secure than implementing them in software, although hardware implementations may be less flexible than comparable software implementations.

• Eliminating or blocking known but unnecessary access paths. Many IT systems or networks have a variety of ways to access them that are unnecessary for their effective use. Security-conscious system administrators often disconnect unneeded wireless connections and wired jacks; disable USB ports; change system access controls to quickly remove departing employees or to restrict the access privileges available to individual users to only those that are absolutely necessary for their work; and install firewalls that block traffic from certain suspect sources. Disconnecting from the Internet is a particular instance of eliminating an access path.

• “Whitelisting” software. Vendors of major operating systems provide the option of (and sometimes require) restricting the programs that can be run to those whose provenance can be demonstrated. An example of this approach is the “app store” approach to software development by third parties for mobile devices. In principle, whitelisting requires that the code of an application be cryptographically signed by its author using a public digital certification of identity, and thus a responsible party can be identified if the program does damage to the user’s system.5 If the app store does whitelisting consistently and rigorously (and app stores do vary significantly in their rigor), the user is more secure in this arrangement, but cannot run programs that have not been properly signed. Another issue for whitelisting is who establishes any given whitelist—the user (who may not have the expertise to determine safe parties) or someone else (who may not be willing or able to provide the full range of applications desired by the user or may accept software too uncritically for inclusion on the whitelist). A sketch of the basic check appears below.

5 The whitelisting approach can be extended to other scenarios. For example, a mail service can be configured to accept e-mail only from a specified list of parties approved by the recipient as “safe.” A networked computer can be configured to accept connections only from a specified list of computers.
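
The whitelisting check itself is simple; the hard problems are curating the list and protecting it from tampering. A minimal sketch in Python, using a hash-based whitelist as a simplified stand-in for full signature verification; the approved hash is hypothetical.

    import hashlib
    import subprocess

    # Hypothetical whitelist: SHA-256 hashes of approved program binaries.
    APPROVED_HASHES = {
        "5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
    }

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def run_if_whitelisted(path):
        """Fail-safe default: execution is denied unless the binary's hash
        is explicitly approved."""
        if sha256_of(path) not in APPROVED_HASHES:
            raise PermissionError(f"{path} is not on the whitelist")
        subprocess.run([path], check=True)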

BOX 4.1 On Attribution

Attribution is the process through which an adversarial cyber operation is associated with its perpetrator. In this context, the definition of “perpetrator” can have many meanings:

• The computer from which the adversarial cyber operation reached the target. Note that this computer—the one most proximate to the target—may well belong to an innocent third party that has no knowledge of the operation being conducted.
• The computer that launched or initiated the operation.
• The geographic location of the machine that launched or initiated the operation.
• The individual sitting at the keyboard of the initiating machine.
• The nation under whose jurisdiction the named individual falls (e.g., by virtue of his physical location when he typed the initiating commands).
• The entity under whose auspices the individual acted, if any.

One can thus imagine a hostile operation that is launched under the auspices of Elbonia, by a Ruritanian citizen sitting in a Darkistanian computer laboratory, that penetrates computers in Agraria as intermediate nodes in an attack on computers in Latkovia.

In general, “attribution” of a hostile cyber operation could refer to an identification of any of three entities:

• A computer or computers (called C) that may be involved in the operation. The identity of C may be specified as a machine serial number, a MAC address, or an Internet Protocol (IP) address.1
• The human being(s) (H) involved in the operation, especially the human being who initiates the hostile operation (e.g., at the keyboard). The identity of H may be specified as his or her name, pseudonym, or identification card number, for example.
• The party (P) ultimately responsible for the actions of the involved humans. The identity of P may be the name of another individual, the name of an organization, or the name of a country, for example. If H is a “lone wolf,” P and H are probably the same.

Note that knowing the identity of C does not necessarily identify H, and knowing the identity of H does not necessarily identify P. The distinctions between C, H, and P are important because the appropriate meaning of attribution depends on the reason that attribution is necessary.

• If the goal is to mitigate the negative effects of a hostile cyber operation as soon as possible, it is necessary to shut down the computers involved in the operation, a task that depends on affecting the computers more than on affecting their operators or their masters. The identity of C is important.
• If the goal is to prosecute or take the responsible humans into custody, the names of these human beings are important. The identity of H is important.
• If the goal is to deter future hostile acts, and recognizing that deterrence involves imposing a cost on the party that would otherwise choose to launch a future hostile act, the identity of P is important.

When the identities of H or P are desired, judgments of attribution are based on all available sources of information, which could include technical signatures and forensics collected regarding the act in question, communications information (e.g., intercepted phone calls monitoring conversations of individuals or their leaders), prior history (e.g., similarity to previous hostile operations), and knowledge of those with incentives to conduct such operations.

The fact that such a diversity of sources is necessary for identifying humans underscores a fundamental point—assignment of responsibility for an adversarial cyber operation is an act that is influenced although not uniquely determined by the technical information associated with the operation itself. Nontechnical evidence can often play an important role in determining responsibility, and ultimately, human judgment is an essential element of any attempt at attribution.

It is commonly said that attribution of an adversarial cyber operation is impossible. The statement does have an essential kernel of truth: if the perpetrator makes no mistakes, uses techniques that have never been seen before, leaves behind no clues that point to himself, does not discuss the operation in any public or monitored forum, and does not conduct his actions during a period in which his incentives to conduct such operations are known publicly, then identification of the perpetrator may well be impossible. Indeed, sometimes all of these conditions are met, and policy makers rightly despair of their ability to act appropriately under such circumstances.

But in other cases, the problem of attribution is not so dire, because one or more of these conditions are not met, and it may be possible to make some useful (if incomplete) judgments about attribution. For example, a cyber intruder may leave his IP address exposed (perhaps because he forgot to use an anonymizing service to hide it). That IP address may be the key piece of information that is necessary to track the intruder’s location and eventually to arrest the individual involved.2

Perhaps the more important point is that prompt attribution of any given adversarial cyber operation is much more difficult than eventual or delayed attribution. It takes time—days, weeks, perhaps months—to assemble forensic evidence and to compare it to evidence of previous operations, to query nontechnical intelligence sources, and so on. In a national security context, policy makers faced with responding to a hostile cyber operation naturally feel pressure to respond quickly, but sometimes such pressures have more political than operational significance.

Last, because attribution to any actor beyond a machine involves human judgments, actors that are accused of being responsible for bad actions in cyberspace can always assert their innocence and point to the sinister motives of the parties making human judgments, regardless of whether those judgments are well founded. Such denials have some plausibility, especially in an environment in which there are no accepted standards for making judgments related to attribution.

1 A MAC address (MAC is an acronym for media access control) is a unique number associated with a physical network adapter, specified by the manufacturer and hard-coded into the adapter hardware. An IP address (Internet Protocol address) is a number assigned by the operator of a network using the Internet Protocol to a device (e.g., a computer) attached to that network; the operator may, or may not, use a configuration protocol that assigns a new number every time the device appears on the network.

2 See Gerry Smith, “FBI Agent: We’ve Dismantled the Leaders of Anonymous,” The Huffington Post, August 21, 2013, available at http://www.huffingtonpost.com/2013/08/21/anonymous-arrests-fbi_n_3780980.html.

BOX 4.2 The Saltzer-Schroeder Principles of Secure System Design and Development

Saltzer and Schroeder articulate eight design principles that can guide system design and contribute to an implementation without security flaws:

• Economy of mechanism: The design should be kept as simple and small as possible. Design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.

• Fail-safe defaults: Access decisions should be based on permission rather than exclusion. The default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.

• Complete mediation: Every access to every object must be checked for authority. This principle, when systematically applied, is the primary underpinning of the protection system. It forces a system-wide view of access control, which, in addition to normal operation, includes initialization, recovery, shutdown, and maintenance. It implies that a foolproof method of identifying the source of every request must be devised. It also requires that proposals to gain performance by remembering the result of an authority check be examined skeptically. If a change in authority occurs, such remembered results must be systematically updated.

• Open design: The design should not be secret. The protection mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected keys or passwords. This decoupling of protection mechanisms from protection keys permits the mechanisms to be examined by many reviewers without concern that the review may itself compromise the safeguards. In addition, any skeptical users may be allowed to convince themselves that the system they are about to use is adequate for their individual purposes. Finally, it is simply not realistic to attempt to maintain secrecy for any system that receives wide distribution.

• Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. The reason for this greater robustness and flexibility is that, once the mechanism is locked, the two keys can be physically separated, and distinct programs, organizations, or individuals can be made responsible for them. From then on, no single accident, deception, or breach of trust is sufficient to compromise the protected information.

• Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. This principle reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to the possible misuse of a privilege, the number of programs that must be audited is minimized.

• Least common mechanism: The amount of mechanism common to more than one user and depended on by all users should be minimized. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to ensure that it does not unintentionally compromise security. Further, any mechanism serving all users must be certified to the satisfaction of every user, a job presumably harder than satisfying only one or a few users.

• Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. More generally, the use of protection mechanisms should not impose burdens on users that might lead users to avoid or circumvent them—when possible, the use of such mechanisms should confer a benefit that makes users want to use them. Thus, if the protection mechanisms make the system slower or cause the user to do more work—even if that extra work is “easy”—they are arguably flawed.

SOURCE: Adapted from J.H. Saltzer and M.D. Schroeder, “The Protection of Information in Computer Systems,” Proceedings of the IEEE 63(9):1278-1308, September 1975.
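
Two of these principles, fail-safe defaults and complete mediation, map directly onto code: route every request through a single checkpoint that denies anything not explicitly permitted. A minimal sketch in Python, with a hypothetical permission table and users:

    # Hypothetical permission table: actions explicitly granted per (user, object).
    PERMITTED = {
        ("jane", "records.db"): {"read", "delete"},
        ("john", "records.db"): {"read"},
    }

    def check_access(user, obj, action):
        """Complete mediation: every access goes through this one check.
        Fail-safe default: anything not explicitly permitted is denied."""
        return action in PERMITTED.get((user, obj), set())

    def delete_record(user, obj, record_id):
        if not check_access(user, obj, "delete"):
            raise PermissionError(f"{user} may not delete from {obj}")
        print(f"{user} deleted record {record_id} from {obj}")

Note how the empty set as the default return value encodes the fail-safe default: an unknown user or object yields no privileges at all.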

These approaches to defense are well known, and are often implemented to a certain degree in many situations. But in general, these approaches have not been adopted as fully as they could be, leaving systems more vulnerable than they would otherwise be. If the approaches remain valid (and they do), why are they not more widely adopted? Several factors account for this phenomenon:

• Potential conflicts with performance and functionality. In many cases, closing down access paths and introducing cybersecurity to a system’s design slows it down or makes it harder to use. Restricting access privileges to users often has serious usability implications and makes it harder for users to get legitimate work done, as for example when someone needs higher access privileges temporarily but on a time-urgent basis. Implementing the checking, monitoring, and recovery needed for secure operation requires a lot of computation and does not come for free. User demands for backward compatibility at the applications level often call for building into new systems some of the same security vulnerabilities present in the old systems. Program features that enable adversary access can be turned off, but doing so may disable functionality needed or desired by users.

• The mismatch between these approaches to defense and real-world software development environments. For example, software developers often experience false starts, and many “first-try” artifacts are thrown away. In such an environment, it makes very little sense to invest up front in the approaches to defense outlined above unless such adherence is relatively inexpensive.

• The difficulty of upgrading large systems. With large systems in place, it is very difficult, from both a cost and a deployment standpoint, to upgrade all parts of the system at once. This means that for practical purposes, an organization may well be operating with an information technology environment in which the parts that have not been replaced are likely still vulnerable, and their interconnection to the parts that have been replaced may make even the new components vulnerable.

4.1.4 Ensuring Accountability

Accountability is the ability to unambiguously associate a consequence with a past action of an individual or an organization. Authentication refers to a process that ensures that an asserted identity is indeed properly associated with the asserting party. Access control is the technical mechanism by which certain system privileges but not others are granted to specified individuals. Forensics for cybersecurity are the technical means by which the activity of an intruder can be reconstructed; in many cases, the intruder leaves behind evidence that provides clues to his or her identity.

Individual Authentication and Access Control

For purposes of this report, authentication usually refers to the process of establishing that a particular identifier (such as a login name) correctly refers to a specific party, such as a user, a company, or a government agency.

As applied to individuals, authentication serves two purposes:

• Ensuring that only authorized parties can perform certain actions. In many organizations, authorized users are granted a set of privileges—the system is intended to ensure that those users can exercise only those privileges and no others. Because certain users have privileges that others lack, someone who is not authorized to perform a given action may seek to usurp the authentication credentials of someone who is so authorized so that the unauthorized party can impersonate an authorized party. A user may be authorized by virtue of the role(s) he or she plays (e.g., all senior executives have the ability to delete records, but no one else) or by virtue of his or her explicit designation by name (Jane has delete access but John does not).

• Facilitating accountability, which is the ability to associate a consequence with a past improper action of an individual. Thus, the authentication process must unambiguously identify one and only one individual who will be held accountable for improper actions. (This is the reason that credentials should not be shared among individuals.) To avoid accountability, an individual may seek to defeat an authentication process.

In general, the authentication process depends on one or more of three factors: something you know, something you have, or something you are.

• Something you know, such as a password. Passwords have many advantages. For example, the use of passwords requires no specialized hardware or training. Passwords can be distributed, maintained, and updated by telephone, fax, or e-mail. But they are also susceptible to guessing and to theft.6 Passwords are easily shared, either intentionally or inadvertently (when written down near a computer, for example), and a complex, expensive infrastructure is necessary to enable resetting lost (forgotten) passwords. Because people often reuse the same name and password combinations across different systems to ease the burden …

6 For example, in 2010, the most common passwords for Gawker Media Web sites were (in order of frequency) “123456,” “password,” and “12345678.” See Impact Lab, “The Top 50 Gawker Media Passwords,” December 14, 2010, available at http://www.impactlab.net/2010/12/14/the-top-50-gawker-media-passwords/.
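
Theft of password files is one reason a server should store only salted hashes of passwords, never the passwords themselves, so that a stolen credential file does not directly expose every account. A minimal sketch in Python, not a replacement for a vetted password-hashing library:

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Return (salt, digest) using PBKDF2-SHA256 with a per-user random salt."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Recompute the hash and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored_digest)

    # Usage: store (salt, digest) at enrollment; verify at login.
    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("123456", salt, digest)

The per-user salt ensures that two users with the same password produce different stored digests, which blunts precomputed guessing attacks against a stolen file.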

… “electronic surveillance” conducted to obtain information about a foreign power or foreign territory that relates to the national defense, the security, or the conduct of the foreign affairs of the United States, also known as “foreign intelligence information”). As this report is being written, the scope and the nature of precisely how federal agencies have complied with various portions of FISA are under investigation.

A number of other statutes are designed to provide notification in the event that important information is compromised. If such information is personally identifiable, data breach laws generally require notification of the individuals with whom such information is associated. Federal securities law (the Securities Act of 1933 and the Securities Exchange Act of 1934) requires firms to disclose to investors timely, comprehensive, and accurate information about risks and events that is important to an investment decision. Under this authority, the Securities and Exchange Commission’s Division of Corporation Finance in 2011 provided voluntary guidance to firms regarding their obligations to disclose information relating to cybersecurity risks and cyber incidents.18

Several federal statutes assign responsibility within the federal government for various aspects of cybersecurity, including the Computer Security Act of 1987 (National Institute of Standards and Technology [NIST], responsible for developing security standards for non-national-security federal computer systems); the Paperwork Reduction Act of 1995 (Office of Management and Budget [OMB], responsible for developing cybersecurity policies); the Clinger-Cohen Act of 1996 (agency heads responsible for ensuring the adequacy of agency information-security policies and procedures); the Homeland Security Act of 2002 (HSA; Department of Homeland Security [DHS], responsible for cybersecurity for homeland security and critical infrastructure); the Cyber Security Research and Development Act of 2002 (NSF and NIST, research responsibilities in cybersecurity); and the Federal Information Security Management Act of 2002 (FISMA; clarified and strengthened NIST and agency cybersecurity responsibilities, established a central federal incident center, and made OMB, rather than the Secretary of Commerce, responsible for promulgating federal cybersecurity standards).

Finally, national security law may affect how the United States may itself use cyber operations in an offensive capacity for damaging adversary information technology systems or the information therein. For example, the War Powers Act of 1973 restricts presidential authority to use the U.S. armed forces in potential or actual hostilities without congressional authorization.

18 U.S. Securities and Exchange Commission, Division of Corporation Finance, “CF Disclosure Guidance: Topic No. 2—Cybersecurity,” October 13, 2011, available at http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm.

However, the War Powers Act was passed in 1973—that is, at a time when cyber conflict was not a serious possibility—and the War Powers Act is poorly suited to U.S. military forces that might engage in active cyber conflict. Also, the Posse Comitatus Act of 1878 places some constraints on the extent to which, if at all, the Department of Defense—within which is resident a great deal of cybersecurity knowledge—can cooperate with civil agencies on matters related to cybersecurity.

International Law

International law does not explicitly address the conduct of hostile cyber operations that cross international boundaries. However, one international agreement—the Convention on Cybercrime—seeks to harmonize national laws that criminalize certain specifically identified computer-related actions or activities, to improve national capabilities for investigating such crimes, and to increase cooperation on investigations.19 That convention also obliges ratifying states to create laws allowing law enforcement to search and seize computers and “computer data,” engage in wiretapping, and obtain real-time and stored communications data, whether or not the crime under investigation is a cybercrime.

19 Drafted by the Council of Europe in Strasbourg, France, the convention is available on the Web site of the Council of Europe at http://conventions.coe.int/Treaty/en/Treaties/Html/185.htm.

International law does potentially touch on hostile cyber operations that cross international boundaries when a hostile cyber operation is the instrumentality through which some regulated action is achieved. A particularly important example of such a case is the applicability of the laws of war (or, equivalently, the law of armed conflict) to cyberattacks. Today, the law of armed conflict is expressed in two legal instruments—the UN Charter and the Geneva and Hague Conventions.

The UN Charter is the body of treaty law that governs when a nation may engage in armed conflict. Complications and uncertainty regarding how the UN Charter should be interpreted with respect to cyberattacks result from three fundamental facts:

• The UN Charter was written in 1945, long before the notion of cyberattacks was even imagined. Thus, the framers of the charter could not have imagined how it might apply to cyber conflict.

• The UN Charter does not define key terms, such as “use of force,” “threat of force,” or “armed attack.” Definitions and meanings can only be inferred from historical precedent and practice, and there are no such precedents for their meaning in the context of cyber conflict.

• The charter is in some ways internally inconsistent. It bans certain acts (uses of force) that could damage persons or property, but allows other acts (economic sanctions) that could damage persons or property. Offensive cyber operations may well magnify such inconsistencies.

The Geneva and Hague Conventions regulate how a nation engaged in armed conflict must behave. These conventions embody several principles, such as the principle of nonperfidy (military forces cannot pretend to be legally protected entities, such as hospitals); the principle of proportionality (the military advantage gained by a military operation must not be disproportionate to the collateral damage inflicted on civilian targets); and the principle of distinction (military operations may be conducted only against “military objectives” and not against civilian targets). But as with the UN Charter, the Geneva Conventions are silent on cyberattack as a modality of conflict, and how to apply the principles mentioned above in any instance involving cyber conflict may be uncertain in some cases.

A second important example of an implicit relationship between hostile cyber operations and international law is that of cyber exploitation by one nation to acquire intelligence information from another. Espionage is an illegal activity under the domestic laws of virtually all nations, but not under international law. There are no limits in international law on the methods of collecting information, what kinds of information can be collected, how much information can be collected, or the purposes for which collected information may be used.

As noted above, international law is also articulated through customary international law—that is, the general and consistent practices of states followed from a sense of legal obligation. Such law is not codified in the form of treaties but rather is found in international case law. Here too, guidance for what counts as proper behavior in cyberspace is lacking. Universal adherence to norms of behavior in cyberspace could help to provide nations with information about the intentions and capabilities of other adherents, in both strategic and tactical contexts, but there are no such norms today.

Foreign Domestic Law

Foreign nations are governed by their own domestic laws that relate to cybersecurity. When another nation’s laws criminalize similar bad activities in cyberspace, the United States and that other nation are more likely to be able to work together to combat hostile cyber operations that cross their national borders. For example, the United States and China have been able to find common ground in working together to combat the production of child pornography and spam.

But when security- or privacy-related laws of different nations are inconsistent, foreign law often has an impact on the ability of the United States to trace the origin of hostile cyber operations against the United States or to take action against perpetrators under another nation’s jurisdiction. Legal dissimilarities have in the past impeded both investigation and prosecution of hostile cyber operations that have crossed international boundaries.

4.2.4 Organizational Purview

From an organizational perspective, the response of the United States to a hostile operation in cyberspace by a nonstate actor is often characterized as depending strongly on whether that operation is one that requires a law enforcement response or a national security response. This characterization is based on the idea that a national security response relaxes many of the constraints that would otherwise be imposed on a law enforcement response. For example, active defense—either by active threat neutralization or by cyber retaliation—may be more viable under a national security response paradigm, whereas a law enforcement paradigm might call for strengthened passive defense measures to mitigate the immediate threat and other activities to identify and prosecute the perpetrators.

When a cyber incident first occurs, its scope and nature are not likely to be clear, and many factors relevant to a decision will not be known. For example, because cyber weapons can act over many time scales, anonymously, and clandestinely, knowledge about the scope and character of a cyberattack will be hard to obtain quickly. Attributing the incident to a nation-state or to a non-national actor may not be possible for an extended period of time. Other nontechnical factors may also play into the assessment of a cyber incident, such as the state of political relations with other nations that are capable of launching the cyber operations involved in the incident.

Once the possibility of a cyberattack is made known to national authorities, information must be gathered to determine perpetrator and purpose, and must be gathered using the available legal authorities. Some entity within the federal government integrates the relevant information, and then it or another higher entity (e.g., the National Security Council) renders a decision about next steps to be taken, and in particular whether a law enforcement or national security response is called for.

How might some of the factors described above be taken into account as a greater understanding of the event develops? Law enforcement equities are likely to predominate in the decision-making calculus if the scale of the attack is small, if the assets targeted are not important military assets or elements of critical infrastructure, or if the attack has not created substantial damage.

However, an incident with sufficiently serious consequences (e.g., death and/or significant destruction) that it would qualify as a use of force or an armed attack on the United States had it been carried out with kinetic means would almost certainly be regarded as a national security matter. Other factors likely to influence such a determination are the geographic origin of the attack and the nature of the party responsible for the attack (e.g., national government, terrorist group).

U.S. law has traditionally drawn distinctions between authorities granted to law enforcement (Title 18 of the U.S. Code), the Department of Defense (Title 10 of the U.S. Code), and the intelligence community (Title 50 of the U.S. Code), but in an era of international terrorist threats, these distinctions are not as clear in practice as when threats to the United States emanated primarily from other nations. That is, certain threats to the United States implicate both law enforcement and national security equities and call for a coordinated response by all relevant government agencies.

When critical infrastructure is involved, the entity responsible for integrating the available information and recommending next steps to be taken has evolved over time. Today, the National Cybersecurity and Communications Integration Center (NCCIC) is the cognizant entity within the U.S. government that fuses information on the above factors and integrates the intelligence, national security, law enforcement, and private-sector equities regarding the significance of any given cyber incident.20

20 See U.S. Department of Homeland Security, “About the National Cybersecurity and Communications Integration Center,” available at http://www.dhs.gov/about-national-cybersecurity-communications-integration-center.

Whatever the mechanisms for aggregating and integrating information related to a cyber incident, the function served is an essential one—and if the relationships, the communications pathways, the protocols for exchanging data, and the authorities are not established and working well in advance, responses to a large unanticipated cyber incident will be uncoordinated and delayed.

4.2.5 Deterrence

Deterrence relies on the idea that inducing a would-be intruder to refrain from acting in a hostile manner is as good as successfully defending against or recovering from a hostile cyber operation. Deterrence through the threat of retaliation is based on imposing negative consequences on adversaries for attempting a hostile operation.

Imposing a penalty on an intruder serves two functions.

It serves the goal of justice—an intruder should not be able to cause damage with impunity, and the penalty is a form of punishment for the intruder’s misdeeds. In addition, it sets the precedent that misdeeds can and will result in a penalty, and it seeks to instill in future would-be intruders the fear that they will suffer for any misdeeds they might commit, thereby deterring further hostile action.

What the nature of the penalty should be and who should impose the penalty are key questions in this regard. (Note that a penalty need not take the same form as the hostile action itself.) What counts as a sufficient attribution of hostile action to a responsible party is also a threshold issue, because imposing penalties on parties not in fact responsible for a hostile action has many negative ramifications.

For deterrence to be effective, the penalty must be one that affects the adversary’s decision-making process and changes the adversary’s cost-benefit calculus. Possible penalties in principle span a broad range, including jail time, fines, or other judicially sanctioned remedies; damage to or destruction of the information technology assets used by the perpetrator to conduct a hostile cyber operation; loss of or damage to other assets that are valuable to the perpetrator; or other actions that might damage the perpetrator’s interests.

But the appropriate choice of penalty is not separate from the party imposing the penalty. For example, the prospect that the victim of a hostile operation might undertake destructive actions against a perpetrator raises the spectre of vigilantism and easily leads to questions of accountability and/or disproportionate response.

Law enforcement authorities and the judicial system rely on federal and state law to provide penalties, but they presume the existence of a process in which a misdeed is investigated, perpetrators are prosecuted, and if found guilty are subject to penalties imposed by law. As noted in Section 4.2.3, a number of laws impose penalties for the willful conduct of hostile cyber operations. Deterrence in this context is based on the idea that a high likelihood of imposing a significant penalty for violations of such laws will deter such violations.

In a national security context, when the misdeed in question affects national security, the penalty can take the form of diplomacy such as demarches and breaks in diplomatic relations, economic actions such as trade sanctions, international law enforcement such as actions taken in international courts, nonkinetic military operations such as deploying forces as visible signs of commitment and resolve, military operations such as the use of cruise missiles against valuable adversary assets, or cyber operations launched in response.

In a cyber context, the efficacy of deterrence is an open question.
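
The cost-benefit framing can be made concrete with a toy expected-value calculation; the probabilities and values below are hypothetical, chosen only to show how a low likelihood of attribution undercuts even a severe penalty.

    def expected_payoff(gain, p_success, penalty, p_attribution):
        """Toy model of an adversary's calculus: expected gain from the
        operation minus the expected cost of being identified and penalized."""
        return gain * p_success - penalty * p_attribution

    # Hypothetical numbers: a severe penalty deters only if attribution is likely.
    print(expected_payoff(gain=100, p_success=0.8, penalty=1000, p_attribution=0.01))  # 70.0: attack pays
    print(expected_payoff(gain=100, p_success=0.8, penalty=1000, p_attribution=0.20))  # -120.0: deterred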

Deterrence was and is a central construct in contemplating the use of nuclear weapons and in nuclear strategy—because effective defenses against nuclear weapons are difficult to construct, using the threat of retaliation to persuade an adversary to refrain from using nuclear weapons is regarded by many as the most plausible and effective alternative to ineffective or useless defenses. Indeed, deterrence of nuclear threats in the Cold War established the paradigm in which the conditions for successful deterrence are largely met.

It is an entirely open question whether cyber deterrence is a viable strategy. Although nuclear weapons and cyber weapons share one key characteristic (the superiority of offense over defense), they differ in many other key characteristics. For example, it is plausible to assume that a large-scale nuclear attack can be promptly recognized and attributed, but it is not plausible to assume the same for a large-scale cyberattack.

4.3 ASSESSING CYBERSECURITY

How should a system’s security be assessed? Cybersecurity analysts have strong intuitions that some systems are more secure than others, but assessing a system’s cybersecurity posture turns out to be a remarkably thorny problem. From a technical standpoint, assessing the nature and extent of a system’s security is confounded by two factors:

• A system can be secure only to the extent that system designers can precisely specify what it means for the system to operate securely. Indeed, many vulnerabilities in systems can be traced to misunderstandings or a lack of clarity about what a system should do under a particular set of circumstances (such as the use of penetration techniques or attack tools that the defender has never seen before).

• A system that contains functionality that should not be present according to the specifications may be insecure, because that excess functionality may entail doing something harmful. Discovering that a system has “extra” functionality that may be harmful turns out to be an extraordinarily difficult task as a general rule.

Viewing system security from an operational perspective rather than just a technical one shows that security is a holistic, emergent, multidimensional property of a system rather than a fixed attribute. Indeed, many factors other than technology affect the security of a system, including the system’s configuration, the cybersecurity training and awareness of the people using the system, the access control policy in place, the boundaries of the system (e.g., are users allowed to connect their own devices to the system?), the reliability of personnel, and the nature of the threat against the system.

Accordingly, a discussion cast simply in terms of whether a system is or is not secure is almost certainly misleading. Assessing the security of a system must include qualifiers such as, Security against what kind of threat? Under what circumstances? For what purpose? With what configuration? Under what security policy?

What does the discussion above imply for the development of cybersecurity metrics—measurable quantities whose value provides information about a system or network’s resistance to a hostile cyber operation? Metrics are intended to help individuals and companies make rational quantitative decisions about whether or not they have “done enough” with respect to cybersecurity. These parties would be able to quantify cost-benefit tradeoffs in implementing security features, and they would be able to determine if System A is more secure than System B. Good cybersecurity metrics would also support a more robust insurance market in cybersecurity founded on sound actuarial principles and knowledge.

The holy grail for cybersecurity analysts is an overall cybersecurity metric that is applicable to all systems and in all operating environments. The discussion above, not to mention several decades’ worth of research and operational experience, suggests that this holy grail will not be achieved for the foreseeable future. But other metrics may still be useful under some circumstances. It is important to distinguish between input metrics (metrics for what system users or designers do to the system), output metrics (metrics for what the system produces), and outcome metrics (metrics for what users or designers are trying to achieve—the “why” for the output metrics).21

• Input metrics reflect system characteristics, operation, or environment that are believed to be associated with desirable cybersecurity outcomes. An example of an input metric could be the annual cybersecurity budget of an organization. In practice, many input metrics for cybersecurity are not validated, and/or are established intuitively.

• Output metrics reflect system performance with respect to parameters that are believed to be associated with desirable cybersecurity outcomes. An output metric in a cybersecurity context could be the number of cybersecurity incidents in a given year. Output metrics can often be assessed through the use of a red team. Sometimes known as “white-hat” or “ethical” hackers, a red team attempts to penetrate a system’s security under operational conditions with the blessing of senior management, and then reports to senior management on its efforts and what it has learned about the system’s security weaknesses. Red teaming is often the most effective way to assess the cybersecurity posture of an organization, because it provides a high-fidelity simulation of a real adversary’s actions.

• Outcome metrics reflect the extent to which the system’s cybersecurity properties actually produce or reflect desirable cybersecurity outcomes. In a cybersecurity context, an outcome measure might be the annual losses for an organization due to cybersecurity incidents.

21 See Republic of South Africa, “Key Performance Information Concepts,” Chapter 3 in Framework for Managing Programme Performance Information, National Treasury, Pretoria, South Africa, May 2007, available at http://www.thepresidency.gov.za/learning/reference/framework/part3.pdf.
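
The three metric types can be illustrated with a toy record of organizational data; the field names and numbers below are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class SecurityMetrics:
        budget_usd: float         # input metric: what the organization puts in
        incidents: int            # output metric: what the system produces
        annual_losses_usd: float  # outcome metric: what the organization cares about

    # Hypothetical two-year comparison for one organization.
    year1 = SecurityMetrics(budget_usd=200_000, incidents=40, annual_losses_usd=900_000)
    year2 = SecurityMetrics(budget_usd=350_000, incidents=25, annual_losses_usd=500_000)

    # The presumed logic chain: more input, fewer incidents, lower losses.
    # An adaptive adversary (or unwise spending) can break each link.
    print(f"Losses avoided: ${year1.annual_losses_usd - year2.annual_losses_usd:,.0f} "
          f"for ${year2.budget_usd - year1.budget_usd:,.0f} more budget")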

With the particular examples chosen, a possible logic chain is that an organization that increases its cybersecurity expenditures can reduce the number of cybersecurity incidents and thereby reduce the annual losses due to such incidents. Of course, if an organization spends its cybersecurity budget unwisely, the presumed relationship between budget and number of incidents may well not hold.

Also, the correlation between improvement in a cybersecurity input metric and better cybersecurity outcomes may well be disrupted by an adaptive adversary. The benefit of the improvement may endure, however, against adversaries that do not adapt—and thus the resulting cybersecurity posture against the entire universe of threats may in fact be improved.

4.4 ON THE NEED FOR RESEARCH

Within each of the approaches for improving cybersecurity described above, research is needed in two broad categories. First, problem-specific research is needed to find good solutions for pressing cybersecurity problems. A good solution to a cybersecurity problem is one that is effective, is robust against a variety of attack types, is inexpensive and easy to deploy, is easy to use, and does not significantly reduce or cripple other functionality in the system of which it is made a part. Problem-specific research includes developing new knowledge on how to improve the prospects for deployment and use of known solutions to given problems.

Second, even assuming that everything known today about improving cybersecurity was immediately put into practice, the resulting cybersecurity posture—although it would be stronger and more resilient than it is now—would still be inadequate against today’s high-end threat, let alone tomorrow’s. Closing this gap—a gap of knowledge—will require substantial research as well.

Several principles, described in the 2007 NRC report Toward a Safer and More Secure Cyberspace, should shape the cybersecurity research agenda:

• Conduct cybersecurity research as though its application will be important. The scope of cybersecurity research must extend to understanding how cybersecurity technologies and practice can be applied in real-life contexts. Consequently, fundamental research in cybersecurity will embrace organizational, sociological, economic, legal, and psychological factors as well as technological ones.

• Hedge against uncertainty in the nature and severity of the future cybersecurity threat. A balance in the research portfolio between research addressing low-end and high-end threats is necessary. Operationally, it means that the R&D agenda in cybersecurity should be both broader and deeper than might be required if only low-end threats were at issue. (Because of the long lead time for large-scale deployments of any measure, part of the research agenda must include research directed at reducing those long lead times.)

• Ensure programmatic continuity. A sound research program should also support a substantial effort in research areas with a long time horizon for payoff. This is not to say that long-term research cannot have intermediate milestones, although such milestones should be treated as midcourse corrections rather than “go/no-go” decisions that demoralize researchers and make them overly conservative. Long-term research should engage both academic and industry actors, and it can involve collaboration early and often with technology-transition stakeholders, even in the basic science stages.

• Respect the need for breadth in the research agenda. Cybersecurity risks will be on the rise for the foreseeable future, but few specifics about those risks can be known with high confidence. Thus, it is not realistic to imagine that one or even a few promising approaches will prevent or even substantially mitigate cybersecurity risks in the future, and cybersecurity research must be conducted across a broad front. In addition, because qualitatively new attacks can appear with little warning, a broad research agenda is likely to decrease significantly the time needed to develop countermeasures against these new attacks when they appear. Priorities are still important, but they should be determined by those in a position to respond most quickly to the changing environment—namely, the research constituencies that provide peer review and the program managers of the various research-supporting agencies. Notions of breadth and diversity in the cybersecurity research agenda should themselves be interpreted broadly as well, and might well be integrated into other research programs such as software and systems engineering, operating systems, programming languages, networks, Web applications, and so on.

• Disseminate new knowledge and artifacts (e.g., software and hardware prototypes) to the research community. Dissemination of research results beyond one’s own laboratory is necessary if those results are to have a wide impact—a point that argues for cybersecurity research to be conducted on an unclassified basis as much as possible. Other information to be shared as widely as possible includes threat and incident information that can help guide future research.

As for the impact of research on the nation’s cybersecurity posture, it is not reasonable to expect that research alone will make any substantial difference at all. Indeed, many factors must be aligned if research is to have a significant impact. Specifically, IT vendors must be willing to regard security as a product attribute that is coequal with performance and cost; IT researchers must be willing to value cybersecurity research as much as they value research into high-performance or cost-effective computing; and IT purchasers must be willing to incur present-day costs in order to obtain future benefits.