There are several approaches to minimizing the number and significance of adversarial cyber operations. The approaches described below are not mutually exclusive, and robust cybersecurity generally requires that some combination of them be used.
The most basic way to improve cybersecurity is to reduce the use of information technology (IT) in critical contexts. Thus, the advantages of using IT must be weighed against the security risks that the use of IT might entail. In some cases, security risks cannot be mitigated to a sufficient degree, and the use of IT should be rejected. In other cases, security risks can be mitigated with some degree of effort and expense, and these costs should be factored into the decision. What should not happen is for security risks to be ignored entirely, as is sometimes the case.
An example of reducing reliance on IT is a decision to refrain from connecting a computer system to the Internet, even if not connecting increases costs or decreases the system’s utility. The theory underlying such a decision is that without an Internet connection, intruders cannot gain access to the computer, and the system is therefore safe. In fact, this theory is not quite right: the lack of a connection reduces the avenues of access but does not eliminate them (malware can still arrive via removable media, for example), and so the safety of the computer system cannot be taken for granted. But disconnection does help under many circumstances.
The broader point can be illustrated by supervisory control and data acquisition (SCADA) systems, some of which are connected to the Internet.1 SCADA systems are used to control many elements of physical infrastructure: electric power, gas and oil pipelines, chemical plants, factories, water and sewage, and so on. Infrastructure operators connect their SCADA systems to the Internet to facilitate communications with them, at least in part because connections and communications hardware that are based on standard Internet protocols are often the least expensive way to provide such communications. But Internet connections also potentially provide access paths to these SCADA systems that intruders can use.
Note that disconnection from the Internet may not be easy to accomplish. Although SCADA systems may be taken off the Internet, connecting these systems to administrative computers that are themselves connected to the Internet (as might be useful for optimizing billing, for example) means that these SCADA systems are in fact connected—indirectly—to the Internet.
From the standpoint of an individual system or network operator, the only thing worse than being penetrated is being penetrated and not knowing about it. Detecting that one has been the target of a hostile cyber operation is also the first step toward taking any kind of specific remedial action.
Detection involves a decision that something (e.g., some file, some action) is harmful (or potentially harmful) or not harmful. Making such decisions is problematic because what counts as harmful or not harmful is for the most part a human decision—and such judgments may not be made correctly. In addition, the number of nonharmful things happening inside a computer or a network is generally quite large compared with the number of harmful things going on. So the detection problem is nearly always one of finding needles in haystacks.
One often-used technique for detecting malware is to check to see if a suspect program has been previously identified as being “bad.” Such checks depend on “signatures” that might be associated with the program—the name of the program, the size of the program, the date
when it was created, a hash of the program,2 and so on. Signatures might also be associated with the path through which a program has arrived at the target—where it came from, for example.
The Einstein program of the Department of Homeland Security (DHS) is an example of a signature-based approach to improving cybersecurity.3 By law and policy, DHS is the primary agency responsible for protecting U.S. government agencies other than the Department of Defense and the intelligence community. Einstein monitors Internet traffic going in and out of government networks and inspects a variety of traffic data (i.e., the header information in each packet but not the content of a packet itself) and compares that data to known patterns of such data that have previously been associated with malware. If the match is sufficiently close, further action can be taken (e.g., a notification of detection made or traffic dropped).
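The header-matching step can be sketched roughly as follows. All field names and values below are invented for illustration; they are not actual Einstein signatures.

```python
# Rough sketch of signature matching against packet header data (not
# packet content). Field names and values are invented for illustration.

# Each signature lists header fields and the values previously
# associated with malware-related traffic.
SIGNATURES = [
    {"src_ip": "203.0.113.77", "dst_port": 4444},  # hypothetical bad source
    {"dst_port": 6667, "protocol": "tcp"},         # hypothetical botnet pattern
]

def matches(header, signature):
    # A header matches when every field named in the signature agrees.
    return all(header.get(field) == value for field, value in signature.items())

def inspect(header):
    for sig in SIGNATURES:
        if matches(header, sig):
            return "alert"   # e.g., notify analysts or drop the traffic
    return "pass"

print(inspect({"src_ip": "203.0.113.77", "dst_port": 4444, "protocol": "tcp"}))  # alert
print(inspect({"src_ip": "198.51.100.1", "dst_port": 443, "protocol": "tcp"}))   # pass
```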
This signature-based technique for detection has two primary weaknesses. First, it is easy to morph a program’s code without affecting what the program does, so that an effectively unlimited number of functionally equivalent versions exist, each with a different signature. Second, the technique cannot identify a program as malware if that program has never been seen before.
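Both weaknesses fall out of a simple hash-based check. In the sketch below, short byte strings stand in for program binaries; a real signature database would hold hashes of actual malware samples.

```python
import hashlib

# Hash-based signature check. The byte strings stand in for program
# binaries; a real database would hold hashes of actual malware samples.
KNOWN_BAD = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def is_known_malware(program_bytes):
    return hashlib.sha256(program_bytes).hexdigest() in KNOWN_BAD

print(is_known_malware(b"malicious payload v1"))  # True: exact match with the database
# Changing even one byte produces a completely different hash, so a
# functionally equivalent variant (or any never-before-seen program)
# sails past the check.
print(is_known_malware(b"malicious payload v2"))  # False
```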
Another technique for detection monitors the behavior of a program; if the program does “bad things,” it is identified as malware. When there are behavioral signatures that help with anomaly detection, this technique can be useful. (A behavioral signature can be specified in terms of designating as suspicious any one of a specific set of actions, or it can be behavior that is significantly different from a user’s “normal” behavior.) But it is not a general solution because there is usually no reliable way to distinguish between an authorized user who wishes to do something for a legitimate and benign purpose and an intruder who wishes to do that very same thing for some nefarious purpose. In practice, this technique often results in a significant number of false positives—indications that something nefarious is going on when in fact it is not. A high level of false positives annoys legitimate users, and often results in these users being unable to get their work done.
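A toy version of behavioral detection, with invented data and thresholds, illustrates both the idea and the false-positive problem:

```python
from statistics import mean, stdev

# Toy behavioral detector: flag a user's daily file-deletion count when
# it departs sharply from that user's historical baseline. The data and
# the three-sigma threshold are invented for illustration.
baseline = [3, 5, 4, 6, 5, 4, 5, 3, 4, 5]   # deletions per day, historically
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(todays_count, k=3.0):
    return abs(todays_count - mu) > k * sigma

print(is_anomalous(4))    # False: within this user's normal range
print(is_anomalous(250))  # True: perhaps an intruder purging records,
                          # or a legitimate year-end cleanup (a false positive)
```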
2 One definition of a “hash function” is an algorithm that turns an arbitrary sequence of bits (1’s and 0’s) into a fixed-length value known as the hash of that sequence. With a well-constructed algorithm, two different bit sequences are very unlikely to have the same hash value.
3 Department of Homeland Security, National Cyber Security Division, Computer Emergency Readiness Team (US-CERT), Privacy Impact Assessment [of the] Einstein Program: Collecting, Analyzing, and Sharing Computer Security Information Across the Federal Civilian Government, September 2004, available at http://www.dhs.gov/xlibrary/assets/privacy/privacy_pia_eisntein.pdf.
A hostile action taken against an individual system or network may or may not be part of a larger adversary operation that affects many systems simultaneously, and the scale and the nature of the systems and networks affected in an operation are critical information for decision makers.
Detecting a coordinated adversary effort against the background noise of ongoing hostile operations also remains an enormous challenge, given that useful information from multiple sites must be made available on a timely basis. (And as detection capabilities improve, adversaries will take steps to mask such signs of coordinated efforts.)
An assessment addresses many factors, including the scale of the hostile cyber operation (how many entities are being targeted), the nature of the targets (which entities are being targeted), the success of the operation and the extent and nature of damage caused by the operation, the extent and nature of any foreign involvement derived from technical analysis of the operation and/or any available intelligence information not specifically derived from the operation itself, and attribution of the operation to a responsible party (discussed further in Box 4.1). Information on such factors is likely to be quite scarce when the first indications are received of “something bad going on in cyberspace.” Assessments are further complicated by the possibility that an initial penetration is simply paving the way for hostile payloads that will be delivered later, or by the possibility that the damage done by an adversarial operation will not be visible for a long time after it has taken place.
The government agencies responsible for threat assessment and warning can, in principle, draw on a wide range of information sources, both inside and outside the government. In addition to hearing from private-sector entities that are being targeted, cognizant government agencies can communicate with IT security vendors, such as Symantec and McAfee, that monitor the Internet for signs of hostile activity. Public interest groups, such as the OpenNet Initiative and the Information Warfare Monitor, also seek to monitor hostile operations launched on the Internet.4
4 See the OpenNet Initiative (http://opennet.net/) and the Information Warfare Monitor (http://www.infowar-monitor.net/) Web sites for more information on these groups. A useful press report on the activities of these groups can be found at Kim Hart, “A New Breed of Hackers Tracks Online Acts of War,” Washington Post, August 27, 2008, available at http://www.washingtonpost.com/wp-dyn/content/article/2008/08/26/AR2008082603128_pf.html.
Defending a system or network means taking actions so that a hostile actor is less successful than he or she would be in the absence of those actions. A desirable side effect of such measures is deterrence: by reducing the likelihood that a hostile action will succeed, they may dissuade the actor from attempting it at all on grounds of futility.
Some of the most important approaches to defense include:
• Reducing the number of vulnerabilities contained in any deployed IT system or network. There are two methods for doing so.
—Fix vulnerabilities as soon as they become known (a method known as “patching”). Much software has the capability to update itself, and many updates received automatically by a system contain patches that repair vulnerabilities that have become known since the software was released for general use.
—Design and implement software so that it has fewer vulnerabilities from the start. Software designers know many principles about how to design and build IT systems and networks more securely (Box 4.2). Systems or networks not built in accord with such principles will almost certainly exhibit inherent vulnerabilities that are difficult or impossible to address. In some cases, hardware-based security features are feasible—implementing such features in hardware is often more secure than implementing them in software, although hardware implementations may be less flexible than comparable software implementations.
• Eliminating or blocking known but unnecessary access paths. Many IT systems or networks have a variety of ways to access them that are unnecessary for their effective use. Security-conscious system administrators often disconnect unneeded wireless connections and wired jacks; disable USB ports; change system access controls to quickly remove departing employees or to restrict the access privileges available to individual users to only those that are absolutely necessary for their work; and install firewalls that block traffic from certain suspect sources. Disconnecting from the Internet is a particular instance of eliminating an access path.
• “Whitelisting” software. Vendors of major operating systems provide the option of (and sometimes require) restricting the programs that can be run to those whose provenance can be demonstrated. An example of this approach is the “app store” approach to software development by third parties for mobile devices. In principle, whitelisting requires that the code of an application be cryptographically signed by its author using a public digital certification of identity, and thus a responsible party can be identi-
BOX 4.1 On Attribution
Attribution is the process through which an adversarial cyber operation is associated with its perpetrator. In this context, the definition of “perpetrator” can have many meanings:
• The computer from which the adversarial cyber operation reached the target. Note that this computer—the one most proximate to the target—may well belong to an innocent third party that has no knowledge of the operation being conducted.
• The computer that launched or initiated the operation.
• The geographic location of the machine that launched or initiated the operation.
• The individual sitting at the keyboard of the initiating machine.
• The nation under whose jurisdiction the named individual falls (e.g., by virtue of his physical location when he typed the initiating commands).
• The entity under whose auspices the individual acted, if any.
One can thus imagine a hostile operation that is launched under the auspices of Elbonia, by a Ruritanian citizen sitting in a Darkistanian computer laboratory, that penetrates computers in Agraria as intermediate nodes in an attack on computers in Latkovia.
In general, “attribution” of a hostile cyber operation could refer to an identification of any of three entities:
• A computer or computers (called C) that may be involved in the operation. The identity of C may be specified as a machine serial number, a MAC address, or an Internet Protocol (IP) address.1
• The human being(s) (H) involved in the operation, especially the human being who initiates the hostile operation (e.g., at the keyboard). The identity of H may be specified as his or her name, pseudonym, or identification card number, for example.
• The party (P) ultimately responsible for the actions of the involved humans. The identity of P may be the name of another individual, the name of an organization, or the name of a country, for example. If H is a “lone wolf,” P and H are probably the same.
Note that knowing the identity of C does not necessarily identify H, and knowing the identity of H does not necessarily identify P.
The distinctions between C, H, and P are important because the appropriate meaning of attribution depends on the reason that attribution is necessary.
• If the goal is to mitigate the negative effects of a hostile cyber operation as soon as possible, it is necessary to shut down the computers involved in the operation, a task that depends on affecting the computers more than on affecting their operators or their masters. The identity of C is important.
• If the goal is to prosecute or take the responsible humans into custody, the names of these human beings are important. The identity of H is important.
• If the goal is to deter future hostile acts, and recognizing that deterrence involves imposing a cost on the party that would otherwise choose to launch a future hostile act, the identity of P is important.
When the identities of H or P are desired, judgments of attribution are based on all available sources of information, which could include technical signatures and forensics collected regarding the act in question, communications information (e.g., intercepted phone calls of the individuals involved or of their leaders), prior history (e.g., similarity to previous hostile operations), and knowledge of those with incentives to conduct such operations.
The fact that such a diversity of sources is necessary for identifying humans underscores a fundamental point—assignment of responsibility for an adversarial cyber operation is an act that is influenced although not uniquely determined by the technical information associated with the operation itself. Nontechnical evidence can often play an important role in determining responsibility, and ultimately, human judgment is an essential element of any attempt at attribution.
It is commonly said that attribution of an adversarial cyber operation is impossible. The statement does have an essential kernel of truth: if the perpetrator makes no mistakes, uses techniques that have never been seen before, leaves behind no clues that point to himself, does not discuss the operation in any public or monitored forum, and does not conduct his actions during a period in which his incentives to conduct such operations are known publicly, then identification of the perpetrator may well be impossible.
Indeed, sometimes all of these conditions are met, and policy makers rightly despair of their ability to act appropriately under such circumstances. But in other cases, the problem of attribution is not so dire, because one or more of these conditions are not met, and it may be possible to make some useful (if incomplete) judgments about attribution. For example, a cyber intruder may leave his IP address exposed (perhaps because he forgot to use an anonymizing service to hide it). That IP address may be the key piece of information that is necessary to track the intruder’s location and eventually to arrest the individual involved.2
Perhaps the more important point is that prompt attribution of any given adversarial cyber operation is much more difficult than eventual or delayed attribution. It takes time—days, weeks, perhaps months—to assemble forensic evidence and to compare it to evidence of previous operations, to query nontechnical intelligence sources, and so on. In a national security context, policy makers faced with responding to a hostile cyber operation naturally feel pressure to respond quickly, but sometimes such pressures have more political than operational significance.
Last, because attribution to any actor beyond a machine involves human judgments, actors that are accused of being responsible for bad actions in cyberspace can always assert their innocence and point to the sinister motives of the parties making human judgments, regardless of whether those judgments are well founded. Such denials have some plausibility, especially in an environment in which there are no accepted standards for making judgments related to attribution.
1 A MAC address (MAC is an acronym for media access control) is a unique number associated with a physical network adapter, specified by the manufacturer and hard-coded into the adapter hardware. An IP address (Internet Protocol address) is a number assigned by the operator of a network using the Internet Protocol to a device (e.g., a computer) attached to that network; the operator may, or may not, use a configuration protocol that assigns a new number every time the device appears on the network.
2 See Gerry Smith, “FBI Agent: We’ve Dismantled the Leaders of Anonymous,” The Huffington Post, August 21, 2013, available at http://www.huffingtonpost.com/2013/08/21/anonymous-arrests-fbi_n_3780980.html.
BOX 4.2 The Saltzer-Schroeder Principles of
Secure System Design and Development
Saltzer and Schroeder articulate eight design principles that can guide system design and contribute to an implementation without security flaws:
• Economy of mechanism: The design should be kept as simple and small as possible. Design and implementation errors that result in unwanted access paths will not be noticed during normal use (since normal use usually does not include attempts to exercise improper access paths). As a result, techniques such as line-by-line inspection of software and physical examination of hardware that implements protection mechanisms are necessary. For such techniques to be successful, a small and simple design is essential.
• Fail-safe defaults: Access decisions should be based on permission rather than exclusion. The default situation is lack of access, and the protection scheme identifies conditions under which access is permitted. The alternative, in which mechanisms attempt to identify conditions under which access should be refused, presents the wrong psychological base for secure system design. This principle applies both to the outward appearance of the protection mechanism and to its underlying implementation.
• Complete mediation: Every access to every object must be checked for authority. This principle, when systematically applied, is the primary underpinning of the protection system. It forces a system-wide view of access control, which, in addition to normal operation, includes initialization, recovery, shutdown, and maintenance. It implies that a foolproof method of identifying the source of every request must be devised. It also requires that proposals to gain performance by remembering the result of an authority check be examined skeptically. If a change in authority occurs, such remembered results must be systematically updated.
• Open design: The design should not be secret. The protection mechanisms should not depend on the ignorance of potential attackers, but rather on the possession of specific, more easily protected keys or passwords. This decoupling of protection mechanisms from protection keys permits the mechanisms to be examined by many reviewers without concern that the review may itself compromise the safeguards. In addition, any skeptical users may be allowed to convince
fied if the program does damage to the user’s system.5 If the app store does whitelisting consistently and rigorously (and app stores do vary significantly in their rigor), the user is more secure in this arrangement, but cannot run programs that have not been properly signed. Another issue for whitelisting is who establishes any given whitelist—the user (who
5 The whitelisting approach can be extended to other scenarios. For example, a mail service can be configured to accept e-mail only from a specified list of parties approved by the recipient as “safe.” A networked computer can be configured to accept connections only from a specified list of computers.
themselves that the system they are about to use is adequate for their individual purposes. Finally, it is simply not realistic to attempt to maintain secrecy for any system that receives wide distribution.
• Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. The reason for this greater robustness and flexibility is that, once the mechanism is locked, the two keys can be physically separated, and distinct programs, organizations, or individuals can be made responsible for them. From then on, no single accident, deception, or breach of trust is sufficient to compromise the protected information.
• Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. This principle reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to the possible misuse of a privilege, the number of programs that must be audited is minimized.
• Least common mechanism: The amount of mechanism common to more than one user and depended on by all users should be minimized. Every shared mechanism (especially one involving shared variables) represents a potential information path between users and must be designed with great care to ensure that it does not unintentionally compromise security. Further, any mechanism serving all users must be certified to the satisfaction of every user, a job presumably harder than satisfying only one or a few users.
• Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. More generally, the use of protection mechanisms should not impose burdens on users that might lead users to avoid or circumvent them—when possible, the use of such mechanisms should confer a benefit that makes users want to use them. Thus, if the protection mechanisms make the system slower or cause the user to do more work—even if that extra work is “easy”—they are arguably flawed.
SOURCE: Adapted from J.H. Saltzer and M.D. Schroeder, “The Protection of Information in Computer Systems,” Proceedings of the IEEE 63(9):1278-1308, September 1975.
may not have the expertise to determine safe parties) or someone else (who may not be willing or able to provide the full range of applications desired by the user or may accept software too uncritically for inclusion on the whitelist).
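The core of the whitelisting check can be sketched as follows. Because a full public-key signature scheme would require a third-party library, an HMAC over the program bytes stands in here for the author’s cryptographic signature; the key and program contents are hypothetical.

```python
import hashlib, hmac

# Whitelisting sketch: only properly signed code may run. An HMAC stands
# in for the public-key signature a real app store would verify against
# the author's certificate; the key and programs are hypothetical.
AUTHOR_KEY = b"demo-author-signing-key"

def sign(program_bytes):
    return hmac.new(AUTHOR_KEY, program_bytes, hashlib.sha256).hexdigest()

def may_run(program_bytes, signature):
    # Fail-safe default: anything without a valid signature is refused.
    return hmac.compare_digest(sign(program_bytes), signature)

app = b"trusted app v1.0"
sig = sign(app)                       # produced by the author at publication time
print(may_run(app, sig))              # True: signature verifies
print(may_run(b"tampered app", sig))  # False: the code no longer matches
```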
These approaches to defense are well known, and are often implemented to a certain degree in many situations. But in general, these approaches have not been adopted as fully as they could be, leaving systems more vulnerable than they would otherwise be. If the approaches
remain valid (and they do), why are they not more widely adopted? Several factors account for this phenomenon:
• Potential conflicts with performance and functionality. In many cases, closing down access paths and introducing cybersecurity to a system’s design slows it down or makes it harder to use. Restricting access privileges to users often has serious usability implications and makes it harder for users to get legitimate work done, as for example when someone needs higher access privileges temporarily but on a time-urgent basis. Implementing the checking, monitoring, and recovery needed for secure operation requires a lot of computation and does not come for free. User demands for backward compatibility at the applications level often call for building into new systems some of the same security vulnerabilities present in the old systems. Program features that enable adversary access can be turned off, but doing so may disable functionality needed or desired by users.
• The mismatch between these approaches to defense and real-world software development environments. For example, software developers often experience false starts, and many “first-try” artifacts are thrown away. In such an environment, it makes very little sense to invest up front in the approaches to defense outlined above unless doing so is relatively inexpensive.
• The difficulty of upgrading large systems. With large systems in place, it is very difficult, from both a cost and a deployment standpoint, to upgrade all parts of the system at once. This means that for practical purposes, an organization may well be operating with an information technology environment in which the parts that have not been replaced are likely still vulnerable, and their interconnection to the parts that have been replaced may make even the new components vulnerable.
Accountability is the ability to unambiguously associate a consequence with a past action of an individual or an organization. Authentication refers to a process that ensures that an asserted identity is indeed properly associated with the asserting party. Access control is the technical mechanism by which certain system privileges but not others are granted to specified individuals. Forensics for cybersecurity are the technical means by which the activity of an intruder can be reconstructed; in many cases, the intruder leaves behind evidence that provides clues to his or her identity.
Individual Authentication and Access Control
For purposes of this report, authentication usually refers to the process of establishing that a particular identifier (such as a login name) correctly refers to a specific party, such as a user, a company, or a government agency.
As applied to individuals, authentication serves two purposes:
• Ensuring that only authorized parties can perform certain actions. In many organizations, authorized users are granted a set of privileges—the system is intended to ensure that those users can exercise only those privileges and no others. Because certain users have privileges that others lack, someone who is not authorized to perform a given action may seek to usurp the authentication credentials of someone who is so authorized so that the unauthorized party can impersonate an authorized party. A user may be authorized by virtue of the role(s) he or she plays (e.g., all senior executives have the ability to delete records, but no one else) or by virtue of his or her explicit designation by name (Jane has delete access but John does not).
• Facilitating accountability, which is the ability to associate a consequence with a past improper action of an individual. Thus, the authentication process must unambiguously identify one and only one individual who will be held accountable for improper actions. (This is the reason that credentials should not be shared among individuals.) To avoid accountability, an individual may seek to defeat an authentication process.
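The first purpose can be sketched as a permission check built on explicit grants, in the spirit of the fail-safe defaults principle of Box 4.2; the users, objects, and privileges below are hypothetical.

```python
# Default-deny access control sketch: a privilege is exercised only if it
# was explicitly granted. Users, objects, and privileges are hypothetical.
PERMISSIONS = {
    ("jane", "records"): {"read", "delete"},
    ("john", "records"): {"read"},
}

def is_allowed(user, obj, action):
    # A missing entry yields the empty set, so unknown users and
    # unlisted actions are denied by default rather than by special-case logic.
    return action in PERMISSIONS.get((user, obj), set())

print(is_allowed("jane", "records", "delete"))   # True: explicitly granted
print(is_allowed("john", "records", "delete"))   # False: not granted
print(is_allowed("mallory", "records", "read"))  # False: unknown user
```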
In general, the authentication process depends on one or more of three factors: something you know, something you have, or something you are.
• Something you know, such as a password. Passwords have many advantages. For example, the use of passwords requires no specialized hardware or training. Passwords can be distributed, maintained, and updated by telephone, fax, or e-mail. But they are also susceptible to guessing and to theft.6 Passwords are easily shared, either intentionally or inadvertently (when written down near a computer, for example), and a complex, expensive infrastructure is necessary to enable resetting lost (forgotten) passwords. Because people often reuse the same name and password combinations across different systems to ease the burden
6 For example, in 2010, the most common passwords for Gawker Media Web sites were (in order of frequency) “123456,” “password,” and “12345678.” See Impact Lab, “The Top 50 Gawker Media Passwords,” December 14, 2010, available at http://www.impactlab.net/2010/12/14/the-top-50-gawker-media-passwords/.
on their memories, a successful password attack on a user on one site increases the likelihood that accounts of the same user on other sites can be hacked.
• Something you have, such as a cell phone. If a user is locked out of his account because of a forgotten password, a password recovery system can send a text message to the user’s cell phone with a special activation code that can be used to reset the password. Although anyone can request a reset link for my user name, only I have access to the specific cell phone on which the activation code is received. Of course, this approach presumes that I told the system my cell phone number at the time of account setup; that enrollment is what allows the system to associate the phone number with my account.
• Something you are, such as a fingerprint. Biometric authentication (often called biometrics) is the automatic recognition of human individuals on the basis of behavioral and physiological characteristics. Biometrics have the obvious advantage of authenticating the human, not just the presented token or password. Common biometrics in use today verify fingerprints, retinas, irises, and faces, among other things. The most serious disadvantage of biometric credentials is that they can be forged or stolen,7 and revocation of biometric credentials is difficult (i.e., a biometric credential cannot be changed). Other downsides to biometrics include the fact that not all people can use all systems, making a backup authentication method necessary (and consequently increasing vulnerability), and the fact that remote enrollment of a biometric measure (sending one’s fingerprint or iris scan over the Internet, for example) may defeat the purpose and is easily compromised.
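Behind the scenes, the “something you know” factor is normally checked against a stored salted hash rather than against the password itself, so that a stolen credential database does not directly reveal passwords. A minimal sketch using PBKDF2 (the iteration count is illustrative):

```python
import hashlib, hmac, os

# Password verification sketch: store a salted, deliberately slow hash
# (PBKDF2) instead of the password. The iteration count is illustrative.
def enroll(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("123456", salt, digest))                        # False: see Footnote 6
```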
These factors can be combined to provide greater authentication security. For example, biometrics (e.g., a fingerprint) or a personal identification number can be used to authenticate a smart identification card that is read by a computer. This approach would provide two-factor authentication—authentication that requires confirmation of two factors rather than one to enable access.
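A common realization of two-factor authentication pairs a password with a time-based one-time code in the style of RFC 6238 (TOTP): the server and the user’s device share a secret enrolled in advance, and both derive a short code from that secret and the current time. A sketch:

```python
import hashlib, hmac, struct, time

# Time-based one-time password in the style of RFC 6238: a short code
# derived from a shared secret and the current 30-second time window.
# The secret here is hypothetical; real systems provision it at enrollment.
def totp(secret, t, step=30, digits=6):
    counter = struct.pack(">Q", int(t) // step)          # time window index
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

SECRET = b"shared-enrollment-secret"
now = time.time()
# Login succeeds only if the password (first factor) checks out AND the
# code from the user's device matches the server's computation (second factor).
print(totp(SECRET, now) == totp(SECRET, now))   # True within the same time window
```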
All authentication mechanisms are susceptible to compromise to varying degrees in two ways. One is technical—use of a gummy bear to fake a fingerprint (see Footnote 7) or use of a password-guessing program are examples. The other is social (or psychological)—someone with the necessary privileges can be bribed, tricked, coerced, or extorted into taking action on behalf of someone without those privileges.

7 For example, in 2002, a security expert was able to fool a number of fingerprint sensors by lifting latent fingerprints from a water glass using soft gummy bear candy. See John Leyden, “Gummi Bears Defeat Fingerprint Sensors: Sticky Problem for Biometrics Firms,” The Register, May 16, 2002, http://www.theregister.co.uk/2002/05/16/gummi_bears_defeat_fingerprint_sensors/.
At the organizational level, authentication is commonly used to ensure that communications with an organization are in fact being held with the proper organization. For example, if I use the online services of the XYZZY National Bank, I need to be sure that the Web site called www.xyzzynationalbank.com is indeed operated by the XYZZY National Bank. On the Internet today, I would likely rely on the assurances of a trusted third party known as a certificate authority (CA). Certificate authorities (Box 4.3) verify identity based on information that only the proper party would know, and they issue computer-readable certificates to, for example, the XYZZY National Bank. Users (through their Internet browsers) can automatically check these certificates when they communicate with a Web site operated by a party claiming to be the XYZZY Bank.

BOX 4.3 Certificate Authorities

Cryptography refers to a set of techniques that can be used to scramble information so that only certain parties—parties with the decryption key—can recover the original information. To scramble the original information, the sender of the information uses an encryption key.

In symmetric cryptography (or equivalently, secret-key cryptography), the encryption key is the same as the decryption key; thus, message privacy depends on the key being kept secret. In asymmetric (or, equivalently, public-key) cryptographic systems, the encryption key is different from the decryption key. Message privacy depends only on the decryption key being kept secret. The encryption key can even be published and disseminated widely, so that anyone can encrypt messages.

Certificate authorities are used to facilitate public-key cryptography systems that enable secure communications among a large number of parties who do not know each other and who have not made prior arrangements for communicating. For Alice to send a secure message to Bob, Alice needs to know Bob’s encryption key. Alice looks up in a published directory Bob’s encryption key (which happens to be 2375959), and then sends an encrypted message to Bob. Only Bob can decrypt it, because only Bob knows the correct decryption key. But how is Alice to know that the number 2375959 does in fact belong to Bob—in other words, how does Alice know that the published directory is trustworthy?

The answer is that a trusted certificate authority stands behind the association between a given encryption key and the party to which it belongs. Certificate authorities play the role of trusted third parties—trusted by both sender and receiver to associate and publish public keys and names of potential message recipients.

Certificate authorities can exist within a single organization, across multiple related organizations, or across society in general. Any number of certificate authorities can coexist, and they may or may not have agreements for cross-certification, whereby if one authority certifies a given person, then another authority will accept that certification within its own structure.
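The public-key mechanics that certificate authorities vouch for (Box 4.3) can be illustrated with a toy RSA example. The primes below are deliberately tiny and the scheme omits padding, so this is illustrative only, not usable cryptography; real deployments use keys of 2048 bits or more with padding such as OAEP.

```python
# Toy RSA with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                   # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                      # public (encryption) exponent -- can be published
d = pow(e, -1, phi)         # private (decryption) exponent -- kept secret

m = 65                      # the message, encoded as a number < n
c = pow(m, e, n)            # anyone who knows (e, n) can encrypt
assert pow(c, d, n) == m    # only the holder of d can decrypt
```

A certificate, in these terms, is a CA-signed statement binding the published pair (e, n) to a named party, which is what lets a browser trust that an encryption key really belongs to the XYZZY National Bank.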
Given the role of the CA, its compromise is a dangerous event that can undermine transactions based on assurances of identity. Indeed, certificate authorities have in the past been tricked into issuing bad certificates, and some have even gone rogue on their own. The security of the Internet is under stress today in part because the number of trusted but not trustworthy CAs is growing. Thus, CAs must do what they can to ensure that the certificates for which they are responsible are not compromised and, just as important, must be able to revoke a certificate if and when it is compromised.
Of course, certificate revocation is only half the battle when a certificate is compromised—users relying on a certificate should, in principle, check its status to see if it has been revoked. Few users are so diligent—they rely on software to perform such checks. Sometimes the software fails to perform a check, leaving the user with a false sense of security. And sometimes the software informs the user that the certificate has been revoked and asks the user if he or she wants to proceed. Faced with this question, the user often proceeds.
Furthermore, there is an inherent tension between authentication and privacy, because the act of authentication involves some disclosure and confirmation of personal information. Establishing an identifier or attribute for use within an authentication system, creating transactional records, and revealing information used in authentication to others with unrelated interests all have implications for privacy and other civil liberties.
Stronger Authentication for the Internet
As discussed in Chapter 2, digital information is inherently anonymous, which means that specific mechanisms must be in place to associate a given party with any given piece of information. The Internet is a means for transporting information from one computer to another, but today’s Internet protocols do not require a validated identity to be associated with the packets that are sent.
Nevertheless, nearly all users of the Internet obtain service through an Internet service provider, and the ISP usually does have—for billing purposes—information about the party sending or receiving any given packet. In other words, access to the Internet usually requires some kind of authentication of identity, but the architecture of the Internet does not require that identity to be carried with sent packets all the way to the
intended recipient. (As an important aside, an ISP knows only who pays a bill for Internet service, and one bill may well cover Internet access for multiple users. However, the paying entity may itself have accounting systems in place to differentiate among these multiple users.)
In the name of greater security, proposals have been made for a “strongly authenticated” Internet as a solution to the problem of attribution. Recall that attribution refers to the identification of the entity responsible for a cyber incident. If cyber incidents effectuated through the Internet could be associated with an identifiable entity, accountability could be established and penalties meted out for illegal, improper, or unauthorized actions. “Strong” authentication mechanisms are one way to improve attribution capabilities.
Strong authentication could be one of the more centralized security services provided at the packet level of the Internet, as described in Chapter 2. Alternatively, strong authentication could be a service implemented at the applications level, in which case authentication would be the responsibility of individual applications developers that would then design such services as needed for the particular application in question.
Although the availability of a strongly authenticated Internet would certainly improve the security environment over that of today, it is not a panacea. Perhaps most important, users of a strongly authenticated Internet would be high-priority targets themselves. Once they were compromised (likely through social engineering), their credentials could then be used to gain access to the resources on the strongly authenticated Internet. The intruder could then use these resources to launch further hostile operations against the true target, masking his true identity.
In addition, strong authentication, especially if implemented at the packet level, raises a number of civil liberties concerns, as described in Chapter 5.
Cyber Forensics

In a cybersecurity context, the term “cyber forensics” refers to the art and science of obtaining useful evidence from an ostensibly hostile cyber event. Cyber forensics are intended to provide information about what an intruder did and how he or she did it, and to the extent possible to associate a specific party with that event—that is, to attribute the event. Cyber forensics are necessary because, among other things, intruders often seek to cover their tracks.
When digital information is involved, forensics can be difficult. Digital information carries with it no physical signature that can be associated unambiguously with an individual. For example, although a digital signature on a document says something about the computer that signed the
document using a private and secret cryptographic key, it does not necessarily follow that the individual associated with that key signed the document. Because the key is a long string of digits, it is almost certainly stored in machine-readable form, and the association of the individual with the signed document requires a demonstration that no one else could have gained access to that key. Another example is the assumption—often not true in practice—that the owner of a Wi-Fi router has willfully allowed all traffic that is carried through it.
Typical forensic tasks include the examination of computer hardware for information that the perpetrator of a hostile action may have tried to delete or did not know was recorded, audits of system logs for reconstructions of a perpetrator’s system accesses and activities, statistical and historical analyses of message traffic, and interviews with system users. For example, system logs may record the fact that someone has downloaded a very large number of files in a very short time. Such behavior is often suspicious, and an audit of system logs should flag such behavior for further review and investigation. Conducted in real time, an audit could send a warning to system administrators of wrongdoing in progress.
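Such an audit rule can be sketched as a simple sliding-window check over download records. The record format, threshold, and window below are illustrative assumptions, not a standard; a real audit system would read these from system logs and tune the thresholds to the environment.

```python
from collections import defaultdict, deque

def flag_bulk_downloads(events, max_files=100, window_secs=60):
    """Flag users who download more than `max_files` files within any
    `window_secs`-long window. `events` is an iterable of (timestamp, user)
    tuples for download actions, sorted by timestamp."""
    recent = defaultdict(deque)   # per-user timestamps inside the window
    flagged = set()
    for ts, user in events:
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > window_secs:   # drop entries outside the window
            q.popleft()
        if len(q) > max_files:
            flagged.add(user)
    return flagged
```

Run over historical logs, this supports after-the-fact review; fed a live event stream, the same check could alert administrators to wrongdoing in progress.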
Precisely what must be done in any cyber forensic investigation depends on its purpose. One purpose, of course, is to obtain evidence that may be usable in court for a criminal prosecution of the perpetrator. In this event, the forensic investigation also involves maintaining a chain of custody over the evidence at every step, an aspect that is likely to slow the investigation down. But businesses may need to conduct a forensic investigation to detect employee wrongdoing or to protect intellectual property. For this purpose, the evidentiary requirements may be less stringent and the investigation shorter, which might allow statistical likelihood, indirect evidence, and hearsay to fall within the scope of such non-evidentiary forensics; the same may be true in a national security context as well. Business forensic approaches span a broad spectrum, from traffic analysis tools and instrumentation of embedded systems to tools for handling massive data volumes and monitoring networks, and all must contend with increasing complexity and broadening application. These points are also relevant to civil proceedings, both because the standards of proof there are lower and because the use of digital forensics in business activities may itself be the subject of litigation.
Also, the forensic investigator must proceed differently in an after-the-fact investigation than in a real-time investigation. Law enforcement authorities are often oriented toward after-the-fact forensics, which help to assemble evidence needed for prosecution. But to the extent that prevention or mitigation of damage is the goal of law enforcement authorities
or business operators, real-time or near-real-time forensics may be more valuable.
When cyber forensics are performed on IT systems and networks within the victim’s legitimate span of control, the legal and policy issues are few. Such issues become much more complicated if it is necessary to perform cyber forensics on IT systems and networks outside the victim’s legitimate span of control. For example, if an adversary has conducted a hostile operation from the computer belonging to an innocent third party that has no relationship to either adversary or victim, conducting forensics on that computer without the third party’s knowledge or permission raises a number of legal and policy problems.
Containment and Recovery

Acknowledging that defenses are likely to be breached, one can also seek to contain the damage that a breach might cause and/or to recover from the damage that was caused.
Containment refers to the process of limiting the effects of a hostile action once it occurs. An example is sandboxing. A sandbox is a computing environment designed to be disposable—corruption or compromise in this environment does not matter much to the user, and the intruder is unlikely to gain much in the way of additional resources or privileges. The sandbox can thus be seen as a buffer between the outside world and the “real” computing environment in which serious business can be undertaken. The technical challenge in sandboxing is to develop methods for safe interaction between the buffer and the “real” environment, and in an imperfectly designed disposable environment, unsafe actions can have an effect outside the sandbox. Nevertheless, a number of practical sandboxing systems have been deployed for regular use; these systems provide some level of protection against the dangers of running untrusted programs.
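At its simplest, the disposable-environment idea can be sketched by running untrusted code in a separate process with a throwaway working directory and a timeout. This illustrates the buffer concept only and is not a real security boundary; a genuine sandbox would also drop privileges and restrict filesystem and network access, which is OS-specific and omitted here.

```python
import subprocess, sys, tempfile

def run_in_sandbox(code, timeout=5.0):
    """Run untrusted Python code in a separate process whose working
    directory is disposable: anything it writes there is discarded."""
    with tempfile.TemporaryDirectory() as scratch:
        return subprocess.run(
            [sys.executable, "-c", code],
            cwd=scratch,                 # writes land in a throwaway directory
            capture_output=True, text=True,
            timeout=timeout,             # runaway code is killed
        )

result = run_in_sandbox("print(2 + 2)")
```

The hard part, as the text notes, is the interaction between the buffer and the “real” environment: any channel out of the sandbox (here, the captured output) must itself be handled safely.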
A second example of containment is the use of heterogeneous computing environments. In agriculture, monocultures are known to be highly vulnerable to blight. In a computer security context, a population of millions of identically programmed computers is systematically vulnerable to an exploit that targets a specific security defect, especially if all of those computers are attached to the Internet—a hostile operation that is successful against one of these computers will be successful against all of them, and malware can propagate rapidly in such an environment. If these computers are programmed differently (while still providing the
same functionality to the user), the techniques used in a hostile operation against a particular programming base may well be unsuccessful against a different code base, and thus not all of the computers in the population will be vulnerable to those techniques. However, it is generally more expensive and labor-intensive to support a heterogeneous computing environment, and interoperability among the systems in the population may be more difficult to achieve.
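The arithmetic of the monoculture argument is straightforward; in the sketch below, the population sizes are invented for illustration.

```python
def compromised_fraction(variant_counts, exploited_variant):
    """Fraction of a machine population compromised by an exploit that
    works only against one specific code base (variant)."""
    total = sum(variant_counts.values())
    return variant_counts.get(exploited_variant, 0) / total

monoculture = {"A": 1000}                   # every machine runs the same code
diverse = {"A": 400, "B": 350, "C": 250}    # same functionality, three code bases

# A single working exploit takes the entire monoculture, but only the
# targeted variant's share of the diverse population.
print(compromised_fraction(monoculture, "A"))   # 1.0
print(compromised_fraction(diverse, "A"))       # 0.4
```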
In general, recovery-oriented approaches accomplish repair by restoring a system to its state at an earlier point in time. If that point in time is too recent, then the restoration will include the damage to the system caused by the attack. If that point in time is too far back, an unacceptable amount of useful work may be lost. A good example is restoring a backup of a computer’s files; the first question that the user asks when a backup file is needed is, When was my most recent backup?
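The restore-point trade-off can be made concrete with a small sketch: given a list of backup timestamps, the best candidate is the newest backup taken strictly before the compromise. Restoring anything newer re-imports the damage; restoring anything older loses more work than necessary. (Times are illustrative timestamps; a real system would also verify backup integrity.)

```python
def best_restore_point(backup_times, compromise_time):
    """Return the most recent backup taken strictly before the compromise,
    or None if no clean backup exists."""
    clean = [t for t in backup_times if t < compromise_time]
    return max(clean) if clean else None
```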
A recovery-oriented approach is not particularly useful in any environment in which the attack causes effects on physical systems—if an attack causes a generator to explode, no amount of recovery on the computer systems attacked will restore that generator to working order. (But the operator still needs to restore the computer so that the replacement generator won’t be similarly damaged.)
In large systems or services, reverting to a known system state before the security breach may well be infeasible. Under such circumstances, a more practical goal is to restore normal system capacity/functionality with as little loss of operating data as possible.
Resilience

A resilient system is one whose performance degrades gradually rather than catastrophically when its other defensive mechanisms are insufficient to stem an attack. A resilient system will still continue to perform some of its intended functions, although perhaps more slowly or for fewer people or with fewer applications.
Redundancy is one way to provide a measure of resilience. For example, Internet protocols for transmitting information are designed to account for the loss of intermediate nodes—that is, to provide redundant paths in most cases for information to flow between two points.
A second approach to achieving resilience is to design a system or network without a single point of failure—that is, it should be impossible to cause the system or network to cease functioning entirely by crippling or disabling any one component of the system. Unfortunately, discovering single points of failure is sometimes difficult because the system or network in question is so complex. Moreover, the easiest way to achieve redundancy for certain systems is simply to replicate the system and run the different replications together. But if one version has a flaw, simple replication of that version replicates the flaw as well.
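For a network small enough to enumerate, single points of failure can be found by brute force: remove each node in turn and test whether the survivors remain connected. The sketch below does exactly that; the graphs it would be applied to are, of course, far smaller than real networks, which is precisely why discovery is hard in practice.

```python
from collections import deque

def is_connected(nodes, edges, removed=None):
    """True if the graph, with `removed` (and its links) deleted,
    is still one connected piece."""
    survivors = [n for n in nodes if n != removed]
    adj = {n: set() for n in survivors}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    if not survivors:
        return True
    seen, queue = {survivors[0]}, deque([survivors[0]])
    while queue:                       # breadth-first search from one survivor
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(survivors)

def single_points_of_failure(nodes, edges):
    """A node is a single point of failure if deleting it disconnects
    the surviving network."""
    return [n for n in nodes if not is_connected(nodes, edges, removed=n)]
```

In a chain A–B–C, node B is a single point of failure; adding a redundant A–C link eliminates it.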
Active Defense

The limitations of the measures described above for protecting important information technology assets and the information they contain are well known. Many measures (e.g., repair of system vulnerabilities) can be applied only to IT assets within an organization’s span of control—that is, systems and networks that it has the legal right to access, monitor, and modify. Such measures also may reduce important functionality in the systems being protected, which become more difficult, slower, and less convenient to use. They are also reactive—they are invoked or become operational only when a hostile operation has been recognized as having occurred (or as occurring).
Recognizing the limitations of passive defense measures as the only option for responding to the cyber threat, the Department of Defense issued in 2011 its Department of Defense Strategy for Operating in Cyberspace, which states that the United States will employ “an active cyber defense capability to prevent intrusions onto DoD networks and systems,” defining active cyber defense as “DoD’s synchronized, real-time capability to discover, detect, analyze, and mitigate threats and vulnerabilities.”8
The DOD does not describe active cyber defense in any detail, but the formulation above for “active cyber defense” could, if read broadly, include any action outside the DOD’s organizational span of control, any non-cooperative measure affecting or harming an attacker’s IT systems and networks, any proactive measure, or any retaliatory measure, as long as such action was taken for the purpose of defending DOD systems or networks from that attacker.
The sections below describe some of the components that a strategy of active cyber defense might logically entail.
Cyber Deception for Defensive Purposes
Deception is often a useful defensive technique. For example, an intruder bent on cyber exploitation seeks useful information. An intruder that can be fooled into exfiltrating false or misleading information that
looks like the real thing may be misled into taking action harmful to his own interests, and at the very least has been forced to waste time, effort, and resources in obtaining useless information.
The term “honeypot” in computer security jargon refers to a machine, a virtual machine, or other network resource that is intended to act as a decoy or diversion for would-be intruders. Honeypots intentionally contain no real or valuable data and are kept separate from an organization’s production systems. Indeed, in most cases, systems administrators want intruders to succeed in compromising or breaching the security of honeypots to a certain extent so that they can log all the activity and learn from the techniques and methods used by the intruder. This process allows administrators to be better prepared for hostile operations against their real production systems. Honeypots are very useful for gathering information about new types of operation, new techniques, and information on how things like worms or malicious code propagate through systems, and they are used as much by security researchers as by network security administrators.
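A minimal honeypot can be sketched as a TCP listener that serves nothing of value and merely logs what each visitor sends; real honeypots are far more elaborate (mimicking service banners, capturing full sessions), but the logging idea is the same. The probe at the end stands in for an intruder.

```python
import socket, threading, time

attempts = []   # forensic log: (source address, first bytes the visitor sent)

def honeypot(server_sock):
    """Accept connections to the decoy, record what each visitor sends,
    and serve nothing of value."""
    while True:
        try:
            conn, addr = server_sock.accept()
        except OSError:            # listener closed: shut down quietly
            return
        attempts.append((addr[0], conn.recv(1024)))
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen()
threading.Thread(target=honeypot, args=(server,), daemon=True).start()

# Simulate an intruder probing the decoy service.
probe = socket.create_connection(server.getsockname())
probe.sendall(b"USER admin")
probe.close()
time.sleep(0.3)                    # give the logger thread a moment
```

Everything the “intruder” does against the decoy ends up in the log, which is exactly the information administrators study to prepare their real production systems.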
When the effects of a honeypot are limited in scope to the victim’s systems and networks, the legal and policy issues are relatively limited. But if they have effects on the intruder’s systems, both the legal and the policy issues become much more complex. For example, a honeypot belonging to A might contain files of falsified information that themselves carry malware. When the intruder B exfiltrates these files and then views them on B’s own systems, the malware in these files is launched and conducts its own offensive operations on B’s systems in certain ways.
What might A’s malware do on B’s systems? It might activate a “beacon” that sends an e-mail to A to report on the environment in which it finds itself, an e-mail that contains enough information to identify B. It might erase files on B’s systems. It might install a way for A to penetrate B’s systems in the future. All of these actions raise legal and policy issues regarding their propriety.
Disruption

Disruption is intended to reduce the damage being caused by an adversarial cyber operation in progress, usually by affecting the operation of the computer systems being used to conduct the operation.
An example of disrupting an operation in progress would be disabling the computers that control a botnet. Of course, this approach presumes that the controlling computers are known. The first time the botnet is used, such knowledge is unlikely. But over time, patterns of behavior might suggest the identity of those computers and an access path to them. Thus, disruption would be easier to accomplish after repeated attacks.
Under most circumstances, disabling the computers controlling an adversarial operation runs a risk of violating domestic U.S. law such as the Computer Fraud and Abuse Act. However, armed with court orders, information technology vendors and law enforcement authorities have worked together in a number of instances to disrupt the operation of botnets by targeting and seizing servers and controllers associated with those botnets.
An example of such action was a joint Microsoft-Federal Bureau of Investigation effort to take down the Citadel botnet in the May-June 2013 time frame. The effort involved Microsoft filing a civil lawsuit against the Citadel botnet operators. With a court-ordered seizure request and working with U.S. Marshals, employees from Microsoft seized servers from two hosting facilities in New Jersey and Philadelphia.9 In addition, they provided information about the botnets to computer emergency response teams (CERTs) located abroad, requesting that they target related command-and-control infrastructure. At the same time, the FBI provided related information to its overseas law enforcement counterparts.
Preemption

Preemption—sometimes also known as anticipatory self-defense—is the first use of cyber force against an adversary that is itself about to conduct a hostile cyber action against a victim. The idea of preemption as a part of active defense has been discussed mostly in the context of national security.10
Preemption as a defensive strategy is a controversial subject, and the requirements of executing a preemptive strike in cyberspace are substantial.11 Preemption by definition requires information that an adversary is about to launch a hostile operation that is sufficiently serious to warrant preemption. When the number of possible cyber adversaries is almost limitless, how would a country know who was about to launch such an operation? Keeping all such parties under surveillance using cyber means and other intelligence sources would seem to be a quite daunting task and yet necessary in an environment in which threats can originate from nearly anywhere.

9 Matthew J. Schwartz, “Microsoft, FBI Trumpet Citadel Botnet Takedowns,” June 6, 2013, available at http://www.informationweek.com/attacks/microsoft-fbi-trumpetcitadel-botnet-takedowns/d/d-id/1110261?.

10 Mike McConnell, “How to Win the Cyber War We’re Losing,” Washington Post, February 28, 2010, available at http://www.washingtonpost.com/wp-dyn/content/article/2010/02/25/AR2010022502493.html.

11 Herbert Lin, “A Virtual Necessity: Some Modest Steps Toward Greater Cybersecurity,” Bulletin of the Atomic Scientists, September 1, 2012, available at http://www.thebulletin.org/2012/september/virtual-necessity-some-modest-steps-toward-greater-cybersecurity.
Also, an imminent action by an adversary by definition requires that the adversary take nearly all of the measures and make all of the preparations needed to carry out that action. The potential victim considering preemption must thus be able to target the adversary’s cyber assets that would be used to launch a hostile operation. But the assets needed to launch a cyberattack are generally inexpensive and/or easily concealed (or made entirely invisible)—reducing the likelihood that a serious damage-limiting preemption could be conducted.
Nontechnological Factors Affecting Cybersecurity

An important lesson that is often lost amidst discussions of cybersecurity is that cybersecurity is not only about technology to make us more secure in cyberspace. Indeed, technology is only one aspect of such security, and is arguably not even the most important aspect of security. Box 4.4 provides a brief case study that illustrates this point. The present section discusses a number of the most important nontechnological factors that affect cybersecurity.

BOX 4.4 A Brief Case Study—Securing the Internet Against Routing Attacks

The task of securing the routing protocols of the Internet makes a good case study of the nontechnical complexities that can emerge in what might have been thought of as a purely technical problem.

As noted in Chapter 2, the Internet is a network of networks. Each network acts as an autonomous system under a common administration and with common routing policies. The Border Gateway Protocol (BGP) is the Internet protocol used to characterize each network to the others, and in particular to every network operated by an Internet service provider (ISP).

In general, the characterization is provided by the ISP responsible for the network, and in part the characterization specifies how that ISP would route traffic to a given destination. A problem arises if and when a malicious ISP in some part of the Internet falsely asserts that it is the right path to a given destination (i.e., it asserts that it would forward traffic to a destination but in fact would not). Traffic sent to that destination can be discarded, causing that destination to appear to be off the net. Further, the malicious ISP might be able to mimic the expected behavior of the correct destination, fooling unsuspecting users into thinking that their traffic has been delivered properly and thus causing further damage.

The technical proposal to mitigate this problem was to have the owner of each region of Internet addresses digitally sign an assertion to the effect that it is the rightful owner (which would be done using cryptographic mechanisms), and then delegate this assertion to the ISP that actually provides access to the addresses, which in turn would validate it by a further signature, and so on as the assertion crossed the Internet. A suspicious ISP trying to decide if a routing assertion is valid could check this series of signed assertions to validate it.

This scheme has a bit of overhead, which is one objection, but it also has another problem—how can a suspicious ISP know that the signed assertion is valid? It has been signed using some cryptographic key, but the suspicious ISP must know who owns that key. To this end, it is necessary to have a global key distribution and validation scheme, which is called a public-key infrastructure, or PKI. The original proposal was that there would be a “root of trust,” an actor that everyone trusted, who would sign a set of assertions about the identities of lower-level entities, and so on until there was a chain of correctness-confirming assertions that linked the assertions of each owner of an address block back to this root of trust.

This idea proved unacceptable for the reason, perhaps obvious to nontechnical people, that there is no actor that everyone—every nation, every corporation, and so on—is willing to trust. If there were such an actor, and if it were to suddenly refuse to validate the identity of some lower-level actor, that lower-level actor would be essentially removed from the Internet. The alternative approach was to have many roots of trust—perhaps each country would be the root of trust for actors within its borders. But this approach, too, is hard to make work in practice—for example, what if a malicious country signs some assertion that an ISP within its border is the best means to reach some range of addresses? How can someone know that this particular root of trust did not in fact have the authority to make assertions about this part of the address space? Somehow one must cross-link the various roots of trust, and the resulting complexity may be too hard to manage.

Schemes that have been proposed to secure the global routing mechanisms of the Internet differ with respect to the overhead, the range of threats to which they are resistant, and so on. But the major problem that all these schemes come up against is the nontechnical problem of building a scheme that can successfully stabilize a global system built out of regions that simply do not trust each other. And of course routing is only part of making a secure and resilient Internet. An ISP that is malicious can make correct routing assertions and then just drop or otherwise disrupt the packets as they are forwarded. The resolution of these sorts of dilemmas seems to depend on an understanding of how to manage trust, not on technical mechanisms for signing identity assertions.
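The chain-of-signatures idea described in Box 4.4 can be sketched as follows. Here HMAC with shared keys stands in for real digital signatures (deployed schemes, such as the Resource Public Key Infrastructure, use asymmetric key pairs), and the keys and address prefix are invented for illustration; the point is only that a suspicious verifier re-derives each link of the chain.

```python
import hashlib, hmac

def sign(key, message):
    # HMAC stands in for a real digital signature in this sketch.
    return hmac.new(key, message, hashlib.sha256).digest()

# Hypothetical keys the verifying ISP has learned through a PKI.
owner_key = b"address-block-owner-key"
isp_key = b"access-isp-key"

claim = b"198.51.100.0/24 is reachable via ExampleNet"
owner_sig = sign(owner_key, claim)            # address owner asserts the claim
isp_sig = sign(isp_key, claim + owner_sig)    # access ISP countersigns the chain

def chain_is_valid(claim, owner_sig, isp_sig):
    """Re-derive each link: the owner's signature over the claim, then the
    ISP's signature over the claim plus the owner's signature."""
    return (hmac.compare_digest(owner_sig, sign(owner_key, claim)) and
            hmac.compare_digest(isp_sig, sign(isp_key, claim + owner_sig)))
```

Note that the sketch quietly assumes the very thing the box identifies as the hard problem: that the verifier already knows, and trusts, which key belongs to whom.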
Economic Factors

Many problems of cybersecurity can be understood better from an economic perspective: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping, risk dumping, regulatory frameworks, and tragedy of the commons.12 Taken together, economic factors go a long way toward explaining why, beyond any technical solutions, cybersecurity is and will be a hard problem to address.
Many actors make decisions that affect cybersecurity: technology vendors, technology service providers, consumers, firms, law enforcement, the intelligence community, and governments (both as technology users and as guardians of the larger social good). Each of these actors gets plenty of blame for being the “problem”: if technology vendors would just properly engineer their products, if end users would just use the technology available to them and learn and practice safe behavior, if companies would just invest more in cybersecurity or take it more seriously, if law enforcement would just pursue the bad guys more aggressively, if policy makers would just do a better job of regulation or legislation, and so on.
There is some truth to such assertions, and yet it is important to understand the incentives for these actors to behave as they do. For example, technology vendors have significant financial incentives to gain a first-mover or a first-to-market advantage. But the logic of reducing time to market runs counter to enhancing security, which adds complexity, time, and cost in design and testing while being hard to value or even assess by customers.
In the end-user space, organizational decision makers and individuals do sometimes (perhaps even often) take cybersecurity into account. But these parties have strong incentives to take only those cybersecurity measures that are valuable for addressing their own cybersecurity needs, and few incentives to take measures that primarily benefit the nation as a whole. In other words, cybersecurity is to a large extent a public good; much of the payoff from security investments may be captured by society rather than directly by any individual firm that invests.
For example, an attacker A who wishes to attack victim V will compromise intermediary M’s computer facilities in order to attack V. This convoluted routing is done so that V will have a harder time tracing the attack back to A. However, the compromise on M’s computers will usually not damage them very much, and indeed M may not even notice that its computers have been compromised. Investments made by M to protect its computers will not benefit M, but will, rather, protect V. But an Internet-using society would clearly benefit if all of the potential intermediaries in the society made such investments. Many similar examples also have economic roots.

12 For an overview of the economic issues underlying cybersecurity, see Tyler Moore, “Introducing the Economics of Cybersecurity: Principles and Policy Options,” in National Research Council, Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy, pp. 3-24, The National Academies Press, Washington, D.C., 2010. An older but still very useful paper is Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, New Orleans, La., 2001, pp. 358-365.
Is the national cybersecurity posture resulting from the investment decisions of many individual firms acting in their own self-interest adequate from a societal perspective? To date, the government’s assessment of this question yields “no” for an answer—whereas many in the private sector say “yes.” This disagreement is at the heart of many disputes about what the nation can and should do about cybersecurity policy.
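The externality in the intermediary example above can be made concrete with a toy calculation. The structure, not the numbers, is the point: all dollar values and probabilities below are hypothetical.

```python
# Toy illustration of the security-investment externality: intermediary M
# bears little of the harm from its own compromise, so a privately rational
# M underinvests relative to what society would prefer.
# All numbers are hypothetical.

def private_net_benefit(investment, expected_loss_to_self, risk_reduction):
    """Net benefit to M of securing its own machines, counting only
    the harm that falls on M itself."""
    return expected_loss_to_self * risk_reduction - investment

def social_net_benefit(investment, expected_loss_to_self,
                       expected_loss_to_others, risk_reduction):
    """Net benefit counting the harm to downstream victims (V) as well."""
    total_loss = expected_loss_to_self + expected_loss_to_others
    return total_loss * risk_reduction - investment

cost = 5_000       # cost of the security investment
reduction = 0.5    # fraction of expected loss the investment eliminates

# M's own expected loss from compromise is small ($1,000), but compromised
# machines impose a large expected loss ($50,000) on victims like V.
print(private_net_benefit(cost, 1_000, reduction))          # -4500.0: M declines
print(social_net_benefit(cost, 1_000, 50_000, reduction))   # 20500.0: society gains
```

The sign flip between the two results is the public-good problem in miniature: the investment is privately irrational but socially valuable.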
A wide variety of psychological factors and issues are relevant to cybersecurity.
One definition of “social engineering” is “the art of gaining access to buildings, systems or data by exploiting human psychology, rather than by breaking in or using technical hacking techniques.”13 For example, instead of trying to find a technical way to access a computer, a social engineer might try to trick an employee into divulging his password by posing as an IT support person.
Social engineering is possible because the human beings who install, configure, operate, and use IT systems of interest can be compromised through deception and trickery. Spies working for an intruder may be unknowingly hired by the victim, and more importantly and commonly, users can be deceived into actions that compromise security. No malware operates by informing a human user that “running this program or opening this file will cause your hard disk to be erased”—rather, it tricks the human into running a program with that effect.
Many instances involving the compromise of users or operators involve e-mails, instant messages, and files that are sent to the target at the initiative of the intruder (often posing as someone known to the victim), or other sources that are visited at the initiative of the target. Examples of the latter include links to appealing Web pages or downloadable software applications, such as those for sharing pictures or music files.
13 Joan Goodchild, “Social Engineering: The Basics,” December 20, 2012, available at http://www.csoonline.com/article/514063/social-engineering-the-basics.
Another channel for social engineering is the service providers on which many organizations and individuals rely. Both individuals and organizations obtain Internet connectivity from Internet service providers. Many organizations make use of external firms to arrange employee travel or to manage their IT security or repair needs. Many organizations also obtain cybersecurity services from third parties, such as a security software vendor that might be bribed or otherwise persuaded to ignore a particular virus. Service providers are potential security vulnerabilities, and thus might well be intermediate targets in an offensive operation directed at the true (ultimate) target.
Decision Making Under Uncertainty
Decision making under conditions of high uncertainty will almost surely characterize the response of U.S. policy makers to the first reports of a significant cyber incident, as described above in Section 4.1.2. Under conditions of high uncertainty, crisis decision-making processes are often flawed. Stein describes a number of issues that affect decision making in this context.14
For example, under the category of factors affecting a rational decision-making process, Stein points to uncertainty about realities on the ground as an important influence. In this view, decision making yields suboptimal outcomes because the actors involved do not have or understand all of the relevant information about the situation. Uncertainties may relate to the actual balance of power (e.g., difficulties of cyber threat assessment), the intentions of the various actors (e.g., defensive actions by A are seen as provocative by B, inadvertent actions by A are seen as deliberate by B), the bureaucratic interests pushing decision makers in certain directions (e.g., cyber warriors pushing for operational use of cyber tools), and the significance of an actor’s violation of generally accepted norms.
Under the category of psychological factors influencing decision making, Stein points out that because the information-processing capability of people is limited, they are forced in confusing situations to use a variety of cognitive shortcuts and heuristics to “simplify complexity, manage uncertainty, handle information, make inferences, and generate threat perceptions.”15 For example, people often:
14 Janice Gross Stein, “Threat Perception in International Relations,” in The Oxford Handbook of Political Psychology, 2nd Edition, Leonie Huddy, David O. Sears, and Jack S. Levy (eds.), Oxford University Press, 2013.
15 Stein, “Threat Perception in International Relations,” 2013.
• Interpret ambiguous information in terms of what is most easily available in their cognitive repertoire (availability). Thus, if a cyber disaster (real or hypothetical) is easily recalled, ambiguous information about cyber events will seem to point to a cyber disaster.
• Exaggerate similarities between one event and a prior class of events, typically leading to significant errors in probability judgments or estimates of frequency (representativeness). Thus, if the available information on a cyber event seems to point to its being a hostile action taken by a nation-state, it will be interpreted that way even if that nation-state has taken few such actions in the past.
• Estimate magnitude or degree by comparing it with an “available” initial value (often an inaccurate one) as a reference point and making a comparison (anchoring).
• Attribute adversaries’ behavior to their disposition and animus but their own behavior to situational factors (the fundamental attribution error). That is, “they” take certain actions because they want to challenge us, but “we” take the same actions because circumstances demanded that we do so.
Education for Security Awareness and Behavior
Users are a key component of any information technology system in use, and inappropriate or unsafe user behavior on such a system can easily lead to reduced security. Security education has two essential components: security awareness and security-responsible behavior.
• Security awareness refers to user consciousness of the reality and significance of threats and risks to information resources, and it is what motivates users to adopt safeguards that reduce the likelihood of security compromises and/or the effect of such compromises when they do occur.
• Security-responsible behavior refers to what users should and should not do from a security standpoint once they are motivated to take action.
To promote security awareness, various reports have sought to make the public aware of the importance of cybersecurity. In general, these reports point to the sophistication of the cybersecurity threat, the scale of the costs to society as a whole resulting from threats to cybersecurity, and the urgency of “doing something” about the threat. But such reports are likely to be less effective at motivating individual users to take cybersecurity seriously than a specific, demonstrated threat that could entail substantial personal costs to them.
As for security-responsible behavior, most children do receive some education when it comes to physical security. For example, they are taught to use locks on doors, to recognize dangerous situations, to seek help when confronted with suspicious situations, and so on. But a comparable effort to educate children about some of the basic elements of cybersecurity does not appear to exist.
To illustrate some of what might be included in education for security-responsible behavior, a course taught at the University of Washington in 2006, intended to provide a broad education in the fundamentals of information technology for lay people, set forth the following objectives for its unit on cybersecurity:16
• Learn to create strong passwords.
• Set up junk e-mail filtering.
• Use Windows Update to keep your system up to date.
• Update McAfee VirusScan so that you can detect viruses.
• Use Windows Defender to locate and remove spyware.
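The first objective, creating strong passwords, lends itself to a simple demonstration of what "strong" might mean in practice. The rules below are illustrative only, not the course's actual criteria.

```python
import re

# Illustrative password checker. The specific rules (length threshold,
# character classes, tiny common-password list) are hypothetical examples
# of the kind of guidance such a course might teach, not its real criteria.

COMMON_PASSWORDS = {"password", "letmein", "qwerty123456"}

def password_issues(password: str) -> list[str]:
    """Return a list of weaknesses found in `password` (empty if none)."""
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[a-z]", password):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        issues.append("no uppercase letter")
    if not re.search(r"\d", password):
        issues.append("no digit")
    if password.lower() in COMMON_PASSWORDS:
        issues.append("appears on a common-password list")
    return issues

print(password_issues("password"))
# → ['shorter than 12 characters', 'no uppercase letter', 'no digit',
#    'appears on a common-password list']
print(password_issues("correct Horse7 battery!"))   # → []
```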
Convenience and Ease of Use
Security features are often too complex for organizations or individuals to manage effectively or to use conveniently. Security is hard for users, administrators, and developers to understand, making it all too easy to use, configure, or operate systems in ways that are inadvertently insecure. Moreover, security and privacy technologies originally were developed in a context in which system administrators had primary responsibility for security and privacy protections and in which the users tended to be sophisticated. Today, the user base is much wider—including the vast majority of employees in many organizations and a large fraction of households—but the basic models for security and privacy are essentially unchanged.
Security features can be clumsy and awkward to use and can present significant obstacles to getting work done. As a result, cybersecurity measures are all too often disabled or bypassed by the users they are intended to protect. Because the intent of security is to make a system completely unusable to an unauthorized party but completely usable to an authorized one, desires for security and desires for convenience or ease of access are often in tension—and usable security seeks to find a proper balance between the two.
For example, users often want to transfer data electronically between two systems because it is much easier than rekeying the data by hand. But establishing an electronic link between the systems may add an access path that is useful to an intruder. Taking into account the needs of usable security might call for establishing the link but protecting it, or for tearing down the link after the data has been transferred.
16 See University of Washington, “Lab 11—Computer Security Basics,” Winter 2006, available at http://www.cs.washington.edu/education/courses/100/06wi/labs/lab11/lab11.html.
In other cases, security techniques do not transfer well from one technology to another. For example, it is much more difficult to type a long password on a mobile device than on a keyboard, and yet many mobile applications for a Web service require users to use the same password for access as they do for the desktop computer equivalent.
Usable security also has social and organizational dimensions as well as technological and psychological ones. Researchers have found that developing usable security requires deep insight into the human-interaction dimensions of the application for which security is being developed, and into how the technical protocols for security align with the social and organizational protocols that surround them.
U.S. domestic law, international law, and foreign domestic law affect cybersecurity in a number of ways.
The Congressional Research Service has identified more than 50 federal statutes addressing various aspects of cybersecurity either directly or indirectly.17 (The acts discussed below are listed with the date of original passage, and “as amended” should be understood with each act.)
Several statutes protect computers and data by criminalizing certain actions. These statutes include the Computer Fraud and Abuse Act of 1986 (prohibits various intrusions on federal computer systems or on computer systems used by banks or in interstate and foreign commerce); the Electronic Communications Privacy Act of 1986 (ECPA; prohibits unauthorized electronic eavesdropping); the Economic Espionage Act of 1996 (outlaws theft of trade secret information, including electronically stored information, if “reasonable measures” have been taken to keep it secret); the Federal Wiretap Act of 1968 as amended (often known as Title III; prohibits real-time surveillance of electronic communications by unauthorized parties); and the Foreign Intelligence Surveillance Act of 1978 (FISA; establishes a framework for the use of “electronic surveillance” conducted to obtain information about a foreign power or foreign territory that relates to the national defense, the security, or the conduct of the foreign affairs of the United States, also known as “foreign intelligence information”). As this report is being written, the scope and nature of federal agencies’ compliance with various portions of FISA are under investigation.
17 Eric A. Fischer, Federal Laws Relating to Cybersecurity: Overview and Discussion of Proposed Revisions, Congressional Research Service, R42114, June 20, 2013, available at http://www.fas.org/sgp/crs/natsec/R42114.pdf.
A number of other statutes are designed to provide notification in the event that important information is compromised. If such information is personally identifiable, data breach laws generally require notification of the individuals with whom such information is associated. Federal securities law (the Securities Act of 1933 and the Securities Exchange Act of 1934) requires firms to disclose to investors timely, comprehensive, and accurate information about risks and events that is important to an investment decision. Under this authority, the Securities and Exchange Commission’s Division of Corporation Finance in 2011 provided voluntary guidance to firms regarding their obligations to disclose information relating to cybersecurity risks and cyber incidents.18
Several federal statutes assign responsibility within the federal government for various aspects of cybersecurity, including the Computer Security Act of 1987 (National Institute of Standards and Technology [NIST], responsible for developing security standards for non-national-security federal computer systems); the Paperwork Reduction Act of 1995 (Office of Management and Budget [OMB], responsible for developing cybersecurity policies); the Clinger-Cohen Act of 1996 (agency heads responsible for ensuring the adequacy of agency information-security policies and procedures); the Homeland Security Act of 2002 (HSA; Department of Homeland Security [DHS], responsible for cybersecurity for homeland security and critical infrastructure); the Cyber Security Research and Development Act of 2002 (NSF and NIST, research responsibilities in cybersecurity); and the Federal Information Security Management Act of 2002 (FISMA; clarified and strengthened NIST and agency cybersecurity responsibilities, established a central federal incident center, and made OMB, rather than the Secretary of Commerce, responsible for promulgating federal cybersecurity standards).
Finally, national security law may affect how the United States may itself use cyber operations in an offensive capacity to damage adversary information technology systems or the information therein. For example, the War Powers Act of 1973 restricts presidential authority to use the U.S. armed forces in potential or actual hostilities without congressional authorization. However, the War Powers Act was passed in 1973—that is, at a time when cyber conflict was not a serious possibility—and it is poorly suited to governing U.S. military forces that might engage in active cyber conflict. Also, the Posse Comitatus Act of 1878 places some constraints on the extent to which, if at all, the Department of Defense—within which resides a great deal of cybersecurity knowledge—can cooperate with civil agencies on matters related to cybersecurity.
18 U.S. Securities and Exchange Commission, Division of Corporation Finance, “CF Disclosure Guidance: Topic No. 2—Cybersecurity,” October 13, 2011, available at http://www.sec.gov/divisions/corpfin/guidance/cfguidance-topic2.htm.
International law does not explicitly address the conduct of hostile cyber operations that cross international boundaries. However, one international agreement—the Convention on Cybercrime—seeks to harmonize national laws that criminalize certain specifically identified computer-related actions or activities, to improve national capabilities for investigating such crimes, and to increase cooperation on investigations.19 That convention also obliges ratifying states to create laws allowing law enforcement to search and seize computers and “computer data,” engage in wiretapping, and obtain real-time and stored communications data, whether or not the crime under investigation is a cybercrime.
International law does potentially touch on hostile cyber operations that cross international boundaries when a hostile cyber operation is the instrumentality through which some regulated action is achieved. A particularly important example of such a case is the applicability of the laws of war (or, equivalently, the law of armed conflict) to cyberattacks. Today, the law of armed conflict is expressed in two legal instruments—the UN Charter and the Geneva and Hague Conventions.
The UN Charter is the body of treaty law that governs when a nation may engage in armed conflict. Complications and uncertainty regarding how the UN Charter should be interpreted with respect to cyberattacks result from three fundamental facts:
• The UN Charter was written in 1945, long before the notion of cyberattacks was even imagined. Thus, the framers of the charter could not have imagined how it might apply to cyber conflict.
• The UN Charter does not define key terms, such as “use of force,” “threat of force,” or “armed attack.” Definitions and meanings can only be inferred from historical precedent and practice, and there are no such precedents for their meaning in the context of cyber conflict.
19 Drafted by the Council of Europe in Strasbourg, France, the convention is available on the Web site of the Council of Europe at http://conventions.coe.int/Treaty/en/Treaties/Html/185.htm.
• The charter is in some ways internally inconsistent. It bans certain acts (uses of force) that could damage persons or property, but allows other acts (economic sanctions) that could damage persons or property. Offensive cyber operations may well magnify such inconsistencies.
The Geneva and Hague Conventions regulate how a nation engaged in armed conflict must behave. These conventions embody several principles, such as the principle of nonperfidy (military forces cannot pretend to be legally protected entities, such as hospitals); the principle of proportionality (the military advantage gained by a military operation must not be disproportionate to the collateral damage inflicted on civilian targets); and the principle of distinction (military operations may be conducted only against “military objectives” and not against civilian targets). But as with the UN Charter, the Geneva Conventions are silent on cyberattack as a modality of conflict, and how to apply the principles mentioned above in any instance involving cyber conflict may be uncertain in some cases.
A second important example of an implicit relationship between hostile cyber operations and international law is that of cyber exploitation by one nation to acquire intelligence information from another. Espionage is an illegal activity under the domestic laws of virtually all nations, but not under international law. There are no limits in international law on the methods of collecting information, what kinds of information can be collected, how much information can be collected, or the purposes for which collected information may be used.
As noted above, international law is also articulated through customary international law—that is, the general and consistent practices of states followed from a sense of legal obligation. Such law is not codified in the form of treaties but rather is found in international case law. Here too, guidance for what counts as proper behavior in cyberspace is lacking. Universal adherence to norms of behavior in cyberspace could help to provide nations with information about the intentions and capabilities of other adherents, in both strategic and tactical contexts, but there are no such norms today.
Foreign Domestic Law
Foreign nations are governed by their own domestic laws that relate to cybersecurity. When another nation’s laws criminalize similar bad activities in cyberspace, the United States and that other nation are more likely to be able to work together to combat hostile cyber operations that cross their national borders. For example, the United States and China have been able to find common ground in working together to combat the production of child pornography and spam.
But when security- or privacy-related laws of different nations are inconsistent, foreign law often has an impact on the ability of the United States to trace the origin of hostile cyber operations against the United States or to take action against perpetrators under another nation’s jurisdiction. Legal dissimilarities have in the past impeded both investigation and prosecution of hostile cyber operations that have crossed international boundaries.
From an organizational perspective, the response of the United States to a hostile operation in cyberspace by a nonstate actor is often characterized as depending strongly on whether that operation is one that requires a law enforcement response or a national security response. This characterization is based on the idea that a national security response relaxes many of the constraints that would otherwise be imposed on a law enforcement response. For example, active defense—either by active threat neutralization or by cyber retaliation—may be more viable under a national security response paradigm, whereas a law enforcement paradigm might call for strengthened passive defense measures to mitigate the immediate threat and other activities to identify and prosecute the perpetrators.
When a cyber incident first occurs, its scope and nature are not likely to be clear, and many factors relevant to a decision will not be known. For example, because cyber weapons can act over many time scales, anonymously, and clandestinely, knowledge about the scope and character of a cyberattack will be hard to obtain quickly. Attributing the incident to a nation-state or to a non-national actor may not be possible for an extended period of time. Other nontechnical factors may also play into the assessment of a cyber incident, such as the state of political relations with other nations that are capable of launching the cyber operations involved in the incident.
Once the possibility of a cyberattack is made known to national authorities, information must be gathered, using the available legal authorities, to determine the perpetrator and the purpose. Some entity within the federal government integrates the relevant information, and then it or a higher entity (e.g., the National Security Council) renders a decision about next steps, and in particular whether a law enforcement or national security response is called for.
How might some of the factors described above be taken into account as a greater understanding of the event develops? Law enforcement equities are likely to predominate in the decision-making calculus if the scale of the attack is small, if the assets targeted are not important military assets or elements of critical infrastructure, or if the attack has not created substantial damage. However, an incident with sufficiently serious consequences (e.g., death and/or significant destruction) that it would qualify as a use of force or an armed attack on the United States had it been carried out with kinetic means would almost certainly be regarded as a national security matter. Other factors likely to influence such a determination are the geographic origin of the attack and the nature of the party responsible for the attack (e.g., national government, terrorist group).
U.S. law has traditionally drawn distinctions between authorities granted to law enforcement (Title 18 of the U.S. Code), the Department of Defense (Title 10 of the U.S. Code), and the intelligence community (Title 50 of the U.S. Code), but in an era of international terrorist threats, these distinctions are not as clear in practice as when threats to the United States emanated primarily from other nations. That is, certain threats to the United States implicate both law enforcement and national security equities and call for a coordinated response by all relevant government agencies.
When critical infrastructure is involved, the entity responsible for integrating the available information and recommending next steps to be taken has evolved over time. Today, the National Cybersecurity and Communications Integration Center (NCCIC) is the cognizant entity within the U.S. government that fuses information on the above factors and integrates the intelligence, national security, law enforcement, and private-sector equities regarding the significance of any given cyber incident.20
Whatever the mechanisms for aggregating and integrating information related to a cyber incident, the function served is an essential one—and if the relationships, the communications pathways, the protocols for exchanging data, and the authorities are not established and working well in advance, responses to a large unanticipated cyber incident will be uncoordinated and delayed.
Deterrence relies on the idea that inducing a would-be intruder to refrain from acting in a hostile manner is as good as successfully defending against or recovering from a hostile cyber operation. Deterrence through the threat of retaliation is based on imposing negative consequences on adversaries for attempting a hostile operation.
Imposing a penalty on an intruder serves two functions. It serves the goal of justice—an intruder should not be able to cause damage with impunity, and the penalty is a form of punishment for the intruder’s misdeeds. In addition, it sets the precedent that misdeeds can and will result in a penalty, seeking to instill in would-be intruders the fear that they will suffer for any misdeeds they might commit and thereby deterring further misdeeds.
20 See U.S. Department of Homeland Security, “About the National Cybersecurity and Communications Integration Center,” available at http://www.dhs.gov/about-nationalcybersecurity-communications-integration-center.
What the nature of the penalty should be and who should impose the penalty are key questions in this regard. (Note that a penalty need not take the same form as the hostile action itself.) What counts as a sufficient attribution of hostile action to a responsible party is also a threshold issue, because imposing penalties on parties not in fact responsible for a hostile action has many negative ramifications.
For deterrence to be effective, the penalty must be one that affects the adversary’s decision-making process and changes the adversary’s cost-benefit calculus. Possible penalties in principle span a broad range, including jail time, fines, or other judicially sanctioned remedies; damage to or destruction of the information technology assets used by the perpetrator to conduct a hostile cyber operation; loss of or damage to other assets that are valuable to the perpetrator; or other actions that might damage the perpetrator’s interests.
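The adversary's cost-benefit calculus just described can be sketched as a toy expected-value comparison. All numbers are hypothetical; the sketch simply shows why the probability of attribution matters as much as the size of the penalty.

```python
# Toy model of the deterrence calculus: a rational adversary attacks only
# if the expected gain exceeds the expected cost of being attributed and
# penalized. All values are hypothetical.

def attack_is_attractive(gain, p_attributed, penalty):
    """True if the expected gain from attacking exceeds the expected
    penalty (probability of attribution times penalty if attributed)."""
    return gain > p_attributed * penalty

# With weak attribution (5% chance), even a heavy penalty fails to deter:
print(attack_is_attractive(gain=100_000, p_attributed=0.05,
                           penalty=1_000_000))   # → True
# Raising the attribution probability to 20% tips the calculus:
print(attack_is_attractive(gain=100_000, p_attributed=0.20,
                           penalty=1_000_000))   # → False
```

The same penalty deters in one case and not the other, which is why uncertain attribution (discussed below for the cyber context) undermines deterrence even when severe penalties are available.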
But the appropriate choice of penalty is not separable from the party imposing it. For example, the prospect that the victim of a hostile operation might undertake destructive actions against a perpetrator raises the specter of vigilantism and easily leads to questions of accountability and/or disproportionate response.
Law enforcement authorities and the judicial system rely on federal and state law to provide penalties, but they presume the existence of a process in which a misdeed is investigated, perpetrators are prosecuted, and if found guilty are subject to penalties imposed by law. As noted in Section 4.2.3, a number of laws impose penalties for the willful conduct of hostile cyber operations. Deterrence in this context is based on the idea that a high likelihood of imposing a significant penalty for violations of such laws will deter such violations.
In a national security context, when the misdeed in question affects national security, the penalty can take the form of diplomacy such as demarches and breaks in diplomatic relations, economic actions such as trade sanctions, international law enforcement such as actions taken in international courts, nonkinetic military operations such as deploying forces as visible signs of commitment and resolve, military operations such as the use of cruise missiles against valuable adversary assets, or cyber operations launched in response.
In a cyber context, the efficacy of deterrence is an open question.
Deterrence was and is a central construct in contemplating the use of nuclear weapons and in nuclear strategy—because effective defenses against nuclear weapons are difficult to construct, using the threat of retaliation to persuade an adversary to refrain from using nuclear weapons is regarded by many as the most plausible and effective alternative to ineffective or useless defenses. Indeed, deterrence of nuclear threats in the Cold War established the paradigm in which the conditions for successful deterrence are largely met.
It is an entirely open question whether cyber deterrence is a viable strategy. Although nuclear weapons and cyber weapons share one key characteristic (the superiority of offense over defense), they differ in many other key characteristics. For example, it is plausible to assume that a large-scale nuclear attack can be promptly recognized and attributed, but it is not plausible to assume the same for a large-scale cyberattack.
How should a system’s security be assessed? Cybersecurity analysts have strong intuitions that some systems are more secure than others, but assessing a system’s cybersecurity posture turns out to be a remarkably thorny problem. From a technical standpoint, assessing the nature and extent of a system’s security is confounded by two factors:
• A system can be secure only to the extent that system designers can precisely specify what it means for the system to operate securely. Indeed, many vulnerabilities in systems can be traced to misunderstandings or a lack of clarity about what a system should do under a particular set of circumstances (such as the use of penetration techniques or attack tools that the defender has never seen before).
• A system that contains functionality that should not be present according to its specifications may be insecure, because that excess functionality may do something harmful. Discovering that a system has “extra” functionality that may be harmful turns out, as a general rule, to be an extraordinarily difficult task.
Viewing system security from an operational perspective rather than just a technical one shows that security is a holistic, emergent, multidimensional property of a system rather than a fixed attribute. Indeed, many factors other than technology affect the security of a system, including the system’s configuration, the cybersecurity training and awareness of the people using the system, the access control policy in place, the boundaries of the system (e.g., are users allowed to connect their own devices to the system?), the reliability of personnel, and the nature of the threat against the system.
Accordingly, a discussion cast simply in terms of whether a system is or is not secure is almost certainly misleading. Any assessment of a system’s security must carry qualifiers: secure against what kind of threat? Under what circumstances? For what purpose? With what configuration? Under what security policy?
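The qualifiers above can be made concrete by forcing every security claim to carry its context as structured data. The sketch below is purely illustrative; the field names and example values are assumptions, not drawn from any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssessment:
    """A security claim is meaningful only with its qualifiers attached.
    All field names here are illustrative, not taken from any standard."""
    system: str
    threat_model: str      # secure against what kind of threat?
    circumstances: str     # under what circumstances?
    purpose: str           # for what purpose?
    configuration: str     # with what configuration?
    policy: str            # under what security policy?
    finding: str

    def summary(self) -> str:
        # A finding stated without its threat model and policy is misleading,
        # so the summary always includes them.
        return (f"{self.system}: {self.finding} against {self.threat_model} "
                f"under policy '{self.policy}'")

claim = SecurityAssessment(
    system="billing server",
    threat_model="remote, unauthenticated attacker",
    circumstances="normal operations, Internet-facing",
    purpose="protecting customer payment records",
    configuration="hardened baseline, multi-factor authentication enabled",
    policy="least privilege",
    finding="no practical exploit found",
)
print(claim.summary())
```

The point of the record is not the particular fields but the discipline: a bare claim such as “the billing server is secure” cannot even be constructed.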
What does the discussion above imply for the development of cybersecurity metrics—measurable quantities whose value provides information about a system or network’s resistance to a hostile cyber operation? Metrics are intended to help individuals and companies make rational quantitative decisions about whether or not they have “done enough” with respect to cybersecurity. These parties would be able to quantify cost-benefit tradeoffs in implementing security features, and they would be able to determine if System A is more secure than System B. Good cybersecurity metrics would also support a more robust insurance market in cybersecurity founded on sound actuarial principles and knowledge.
The holy grail for cybersecurity analysts is an overall cybersecurity metric that is applicable to all systems and in all operating environments. The discussion above, not to mention several decades’ worth of research and operational experience, suggests that this holy grail will not be achieved for the foreseeable future. But other metrics may still be useful under some circumstances.
It is important to distinguish between input metrics (metrics for what system users or designers do to the system), output metrics (metrics for what the system produces), and outcome metrics (metrics for what users or designers are trying to achieve—the “why” for the output metrics).21
• Input metrics reflect system characteristics, operation, or environment that are believed to be associated with desirable cybersecurity outcomes. An example of an input metric could be the annual cybersecurity budget of an organization. In practice, however, many input metrics for cybersecurity are established intuitively and never validated empirically.
• Output metrics reflect system performance with respect to parameters that are believed to be associated with desirable cybersecurity outcomes. An output metric in a cybersecurity context could be the number of cybersecurity incidents in a given year. Output metrics can often be assessed through the use of a red team. Sometimes known as “white-hat” or “ethical” hackers, a red team attempts to penetrate a system’s security under operational conditions with the blessing of senior management, and then reports to senior management on its efforts and what it has learned about the system’s security weaknesses. Red teaming is often the most effective way to assess the cybersecurity posture of an organization, because it provides a high-fidelity simulation of a real adversary’s actions.
21 See Republic of South Africa, “Key Performance Information Concepts,” Chapter 3 in Framework for Managing Programme Performance Information, National Treasury, Pretoria, South Africa, May 2007, available at http://www.thepresidency.gov.za/learning/reference/framework/part3.pdf.
• Outcome metrics reflect the extent to which the system’s cybersecurity properties actually produce or reflect desirable cybersecurity outcomes. In a cybersecurity context, an outcome measure might be the annual losses for an organization due to cybersecurity incidents.
With the particular examples chosen, a possible logic chain is that an organization that increases its cybersecurity expenditures can reduce the number of cybersecurity incidents and thereby reduce the annual losses due to such incidents. Of course, if an organization spends its cybersecurity budget unwisely, the presumed relationship between budget and number of incidents may well not hold.
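The logic chain above (budget → incidents → losses) can be sketched as a toy model. The functional forms and every number below are invented for illustration only; real relationships between cybersecurity spending and outcomes must be established empirically, and, as noted below, an adaptive adversary can break them entirely.

```python
def expected_incidents(budget_k_usd: float, effectiveness: float) -> float:
    """Output metric: incidents per year, under an assumed (invented)
    diminishing-returns model. `effectiveness` in [0, 1] models how
    wisely the budget is spent."""
    baseline = 100.0  # incidents/year with no security spending (invented)
    return baseline / (1.0 + effectiveness * budget_k_usd / 50.0)

def expected_losses(incidents: float, cost_per_incident_k_usd: float = 20.0) -> float:
    """Outcome metric: annual losses in $k, assuming a flat cost per incident."""
    return incidents * cost_per_incident_k_usd

# Input metric: annual cybersecurity budget ($k). Compare wise vs. unwise spending.
for budget in (0, 100, 500):
    wise = expected_incidents(budget, effectiveness=0.9)
    unwise = expected_incidents(budget, effectiveness=0.1)
    print(f"budget={budget:4}k: wisely spent -> {expected_losses(wise):7.1f}k losses; "
          f"unwisely -> {expected_losses(unwise):7.1f}k losses")
```

In this toy model, the same input metric (budget) produces very different outcome metrics depending on how the budget is spent, which is exactly why an input metric alone cannot establish that an organization has “done enough.”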
Also, the correlation between improvement in a cybersecurity input metric and better cybersecurity outcomes may well be disrupted by an adaptive adversary. The benefit of the improvement may endure, however, against adversaries that do not adapt—and thus the resulting cybersecurity posture against the entire universe of threats may in fact be improved.
Within each of the approaches for improving cybersecurity described above, research is needed in two broad categories. First, problem-specific research is needed to find good solutions for pressing cybersecurity problems. A good solution to a cybersecurity problem is one that is effective, is robust against a variety of attack types, is inexpensive and easy to deploy, is easy to use, and does not significantly reduce or cripple other functionality in the system of which it is made a part. Problem-specific research includes developing new knowledge on how to improve the prospects for deployment and use of known solutions to given problems.
Second, even assuming that everything known today about improving cybersecurity were immediately put into practice, the resulting cybersecurity posture—although it would be stronger and more resilient than it is now—would still be inadequate against today’s high-end threat, let alone tomorrow’s. Closing this gap—a gap of knowledge—will require substantial research as well.
Several principles, described in the 2007 NRC report Toward a Safer and More Secure Cyberspace, should shape the cybersecurity research agenda:
• Conduct cybersecurity research as though its application will be important. The scope of cybersecurity research must extend to understanding how cybersecurity technologies and practice can be applied in real-life contexts. Consequently, fundamental research in cybersecurity will embrace organizational, sociological, economic, legal, and psychological factors as well as technological ones.
• Hedge against uncertainty in the nature and severity of the future cybersecurity threat. A balance in the research portfolio between research addressing low-end and high-end threats is necessary. Operationally, it means that the R&D agenda in cybersecurity should be both broader and deeper than might be required if only low-end threats were at issue. (Because of the long lead time for large-scale deployments of any measure, part of the research agenda must include research directed at reducing those long lead times.)
• Ensure programmatic continuity. A sound research program should also support a substantial effort in research areas with a long time horizon for payoff. This is not to say that long-term research cannot have intermediate milestones, although such milestones should be treated as midcourse corrections rather than “go/no-go” decisions that demoralize researchers and make them overly conservative. Long-term research should engage both academic and industry actors, and it can involve collaboration early and often with technology-transition stakeholders, even in the basic science stages.
• Respect the need for breadth in the research agenda. Cybersecurity risks will be on the rise for the foreseeable future, but few specifics about those risks can be known with high confidence. Thus, it is not realistic to imagine that one or even a few promising approaches will prevent or even substantially mitigate cybersecurity risks in the future, and cybersecurity research must be conducted across a broad front. In addition, because qualitatively new attacks can appear with little warning, a broad research agenda is likely to decrease significantly the time needed to develop countermeasures against these new attacks when they appear. Priorities are still important, but they should be determined by those in a position to respond most quickly to the changing environment—namely, the research constituencies that provide peer review and the program managers of the various research-supporting agencies. Notions of breadth and diversity in the cybersecurity research agenda should themselves be interpreted broadly as well, and might well be integrated into other research programs such as software and systems engineering, operating systems, programming languages, networks, Web applications, and so on.
• Disseminate new knowledge and artifacts (e.g., software and hardware prototypes) to the research community. Dissemination of research results beyond one’s own laboratory is necessary if those results are to have a
wide impact—a point that argues for cybersecurity research to be conducted on an unclassified basis as much as possible. Other information to be shared as widely as possible includes threat and incident information that can help guide future research.
As for the impact of research on the nation’s cybersecurity posture, it is not reasonable to expect that research alone will make any substantial difference at all. Indeed, many factors must be aligned if research is to have a significant impact. Specifically, IT vendors must be willing to regard security as a product attribute that is coequal with performance and cost; IT researchers must be willing to value cybersecurity research as much as they value research into high-performance or cost-effective computing; and IT purchasers must be willing to incur present-day costs in order to obtain future benefits.