Finding 1. Cybersecurity is a never-ending battle. A permanently decisive solution to the problem will not be found in the foreseeable future.
For the most part, cybersecurity problems result from the inherent nature of information technology (IT), the complexity of information technology systems, and human fallibility in making judgments about what actions and information are safe or unsafe from a cybersecurity perspective, especially when such actions and information are highly complex. None of these factors is likely to change in the foreseeable future, and thus there are no silver bullets—or even combinations of silver bullets—that can “solve the problem” permanently.
In addition, threats to cybersecurity evolve. As new defenses emerge to stop older threats, intruders adapt by developing new tools and techniques to compromise security. As information technology becomes more ubiquitously integrated into society, the incentives to compromise the security of deployed IT systems grow. As innovation produces new information technology applications, new venues for criminals, terrorists, and other hostile parties also emerge, along with new vulnerabilities that malevolent actors can exploit. That there are ever-larger numbers of people with access to cyberspace multiplies the number of possible victims and also the number of potential malevolent actors.
Thus, enhancing the cybersecurity posture of a system—and by extension the organization in which it is embedded—must be understood as an ongoing process rather than something that can be done once and then forgotten. Adversaries—especially at the high-end part of the threat spectrum—constantly adapt and evolve their intrusion techniques, and the defender must adapt and evolve as well.
These comments should not be taken to indicate a standstill in the U.S. cybersecurity posture. For example, most major IT vendors have in recent years undertaken significant efforts to improve the security of their products in response to end-user concerns over security. Many of today’s products are by many measures more secure than those that preceded these efforts. Support for research in cybersecurity has expanded significantly. And public awareness is greater than it was only a few years ago. Without these efforts, the gap between cybersecurity posture and threat would undoubtedly be significantly greater than it is today, especially with the concurrent rise in the use of IT throughout society.
Ultimately, the relevant policy question is not how the cybersecurity problem can be solved, but rather how it can be made manageable. Societal problems related to the existence of war, terrorism, crime, hunger, drug abuse, and so on are rarely “solved” or taken off the policy agenda once and for all. The salience of such problems waxes and wanes, depending on circumstances, and no one expects such problems to be solved so decisively that they will never reappear—and the same is true for cybersecurity.
Finding 2. Improvements to the cybersecurity posture of individuals, firms, government agencies, and the nation have considerable value in reducing the loss and damage that may be associated with cybersecurity breaches.
If an adversary has the resources to increase the sophistication of its attack and the motivation to keep trying even after many initial attempts fail, it is natural for users to wonder whether it makes sense to bother to improve security at all. Yet, doing nothing until perfect security can be deployed is surely a recipe for inaction that leaves one vulnerable to many lower-level threats.
The value of defensive measures rests on several points:
• Malevolent actors need some time to adapt to defensive measures. During this time, the victim is usually more secure than if no defensive measures had been taken.
• A target often has multiple adversaries, not just one. Even if it is true that adversary A will adapt to new defenses that are raised against A, adversaries B, C, and D may try the same kinds of techniques and tools
that A originally used—these efforts by B, C, and D are likely to be less successful against the target.
• Adaptation is costly, and it forces the adversary to expend resources. Increased difficulty or expense for the adversary sometimes acts as a deterrent of harmful actions.
• Unsuccessful attempts to compromise system security cost the adversary time—and an adversary who works more slowly poses less of a threat than one who works quickly. For example, imposing delays on the adversary may help to prevent him from being able to access everything on the targeted system.
• A well-defended target is usually less attractive to a malevolent actor without specific objectives than a poorly defended one. Thus, if a malevolent actor’s objectives do not call for compromising that specific target, he may well move on to a less-well-defended target.
• Certain defensive measures may provide opportunities for the victim to gather intelligence on an intruder’s methods and tactics.
• Other defensive measures may enable the victim to know of the adversary’s presence and activities, even if the victim is not entirely successful in thwarting the adversary’s efforts.
For all of these reasons, efforts to improve cybersecurity postures have significant value.
Finding 3. Improvements to cybersecurity call for two distinct kinds of activity: (a) efforts to more effectively and more widely use what is known about improving cybersecurity, and (b) efforts to develop new knowledge about cybersecurity.
The current U.S. national cybersecurity posture—the posture as it actually is—is determined by the knowledge we have and, critically, by the knowledge we actually put to use. The gap between our national cybersecurity posture and the cyber threat has two essential parts.
The first part—Part 1—of the gap reflects what our cybersecurity posture could be if currently known best cybersecurity practices and technologies were widely deployed and used. Reflecting things that we know but ignore or have forgotten, the Part 1 gap is in some sense the difference between the average cybersecurity posture and the best posture achievable with known practices and technologies. The existence of that best posture is proof that postures falling short of it can be improved. The second part—Part 2—is the gap between the strongest posture possible with known practices and technologies and the threat as it exists (and will exist). That is, even if the
Part 1 gap were fully closed, the resulting cybersecurity posture would not be adequate to cope with many of the threats that currently exist, especially the high-end threat.
Improvement to existing technologies and techniques—and indeed the development of entirely new approaches to cybersecurity—is the focus of traditional cybersecurity research. A properly responsive research program is broad and robust, and it addresses both current and possible future threats. Knowledge about new cybersecurity technologies, techniques, tactics, organizational arrangements, and so on will help to strengthen defenses against an ever-evolving threat. Attending to Part 2 of the cybersecurity gap calls for research that targets specific identifiable cybersecurity problems and that also builds a base of technical expertise that increases the ability to respond quickly in the future when threats unknown today emerge.
Note that the Part 1 gap is primarily nontechnical in nature (requiring, e.g., research relating to economic or psychological factors regarding the use of known practices and techniques, enhanced educational efforts to promote security-responsible user behavior, and incentives to build organizational cultures with higher degrees of security awareness). Closing the Part 1 gap does not require new technical knowledge of cybersecurity per se, but rather the application of existing technical knowledge. Research is thus needed to understand how better to promote deployment and use of such knowledge. By contrast, Part 2 of the cybersecurity gap is the domain where new technologies and approaches are primarily relevant and where exploratory technical research is thus important.
Finding 4. Publicly available information and policy actions to date have been insufficient to motivate an adequate sense of urgency and ownership of cybersecurity problems afflicting the United States as a nation.
In 2007, a National Research Council report titled Toward a Safer and More Secure Cyberspace called for policy makers to “create a sense of urgency about the cybersecurity problem commensurate with the risks” (p. 229). The report argued that the necessary sense of urgency might be motivated by making publicly available a greater amount of authoritative information about cybersecurity problems and threats and also by changing a decision-making calculus that excessively focuses vendor and end-user attention on the short-term costs of improving their cybersecurity postures.
In the period since that report was issued, the cybersecurity issue has received increasing public attention, and even more authoritative information regarding cybersecurity threats is indeed available publicly. But all
too many decision makers still focus on the short-term costs of improving their own organizational cybersecurity postures, and many—even most—people and organizations do not believe that cybersecurity is important enough to warrant any significant change in their own behavior. Furthermore, little has been done to harness market forces to address matters related to the cybersecurity posture of the nation as a whole.
How might things be different if a sense of urgency were in place?
A culture of security would pervade the entire life cycle of IT systems operations, from initial architecture through design, development, testing, deployment, maintenance, and use. Such a culture would entail, among other things:

• collaboration among researchers;
• effective coordination and information sharing between the public and private sectors;
• the creation of a sufficient core of research specialists necessary to advance the state of the art;
• the broad-based education of developers, administrators, and users that would make security-conscious practices second nature, just as optimizing for performance or functionality is now, and that would make it easy and intuitive for developers and users to “do the right thing”;
• the employment of business drivers and policy mechanisms to facilitate security technology transfer and the diffusion of R&D into commercial products and services; and
• the promotion of risk-based decision making (and metrics to support this effort).
Consider what such a culture might mean in practice:
• Developers and designers of IT products and services would use design principles that build security into new products and services, and that focus on security and attack resilience as well as performance and functionality.
• Security would be an integral part of the initial designs for future secure and attack-resilient computer architectures, and it would be integrated into every aspect of the hardware and software design life cycles and research agendas.
• Designers and developers would emphasize defensive design and implementation with the expectation that systems will have to deal with user mistakes and malicious adversaries.
• Security features would be much simpler to use than they are today.
• Designers and developers would assume that systems are insecure until evidence suggests their resistance to compromise.
• End users would be aware of security matters and diligent in their efforts to promote security.
• Senior managers would create organizational cultures in which a high degree of security awareness is the norm, would be willing to accept somewhat lower levels of performance with respect to other organizational goals in order to improve their cybersecurity postures, and would be willing to expend time, energy, talent, and money on cybersecurity.
• Policy makers would be willing to make decisions about tradeoffs that they try to avoid today and would also explain their rationale for those decisions to the nation.
As for market forces and cybersecurity, private-sector entities will not deploy a level of security higher than that which can be justified by today’s business cases. In the absence of a market for a higher level of security, vendors will also not provide such security. Accordingly, if the nation’s cybersecurity posture is to be improved to a level that is higher than the level to which today’s market will drive it, the market calculus that motivates organizations to pay attention to cybersecurity must be altered somehow, and the business cases for the security of these organizations must change.
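The business-case calculus described above can be made concrete with a stylized calculation (all figures here are hypothetical, chosen only for illustration): a firm invests in security only up to the point where its own avoided losses justify the cost, even when losses a breach would impose on third parties would justify spending considerably more.

```python
# Stylized illustration (all figures hypothetical): a firm chooses the
# security spend that minimizes its own cost, ignoring losses that a
# breach would impose on third parties (customers, partners, the public).

def breach_probability(spend):
    """Assumed diminishing returns: each $1M of spend halves breach odds."""
    return 0.4 * (0.5 ** spend)  # spend in $M

def total_cost(spend, private_loss, external_loss=0.0):
    """Security spend plus expected loss from a breach, in $M."""
    return spend + breach_probability(spend) * (private_loss + external_loss)

candidates = [i * 0.25 for i in range(0, 41)]  # $0M to $10M in $0.25M steps

# Private calculus: only the firm's own $20M breach loss counts.
private_opt = min(candidates, key=lambda s: total_cost(s, 20.0))

# Social calculus: a breach also imposes $80M of externalized losses.
social_opt = min(candidates, key=lambda s: total_cost(s, 20.0, 80.0))

print(f"privately optimal spend: ${private_opt:.2f}M")
print(f"socially optimal spend:  ${social_opt:.2f}M")
assert social_opt > private_opt  # externalities imply underinvestment
```

Under these assumed numbers the firm rationally stops spending well short of the socially optimal level, which is one way of stating why the market calculus must be altered if the nation's posture is to rise above what today's business cases support.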
Finding 5. Cybersecurity is important to the United States, but the nation has other interests as well, some of which conflict with the imperatives of cybersecurity. Tradeoffs are inevitable and will have to be accepted through the nation’s political and policy-making processes.
Senior policy makers have many issues on their agenda, and only five of them can rank among the top five concerns. Even within the national security context, for example, is it more important to attend to nuclear proliferation and terrorism, or to rebalancing U.S. military forces to focus on Asia, than to address cybersecurity?
Compare, for example, the significance of a nuclear attack on the United States to the significance of a large-scale cyberattack. Despite comparisons that analogize Stuxnet (discussed in Chapter 1) to the use of nuclear weapons at Hiroshima in 1945,1 one critical difference is that the use of a nuclear weapon provides a very important threshold—there is no sense in which the use of even a single nuclear weapon could be regarded as unimportant or trivial. Indeed, an above-ground nuclear explosion anywhere in the world, especially one that does damage, is unambiguously detectable. By contrast, cyberattacks are often conducted by criminals and hackers, not necessarily with government sponsorship or approval (although sometimes with government tolerance). Cyber exploitation also occurs on a large scale, often with no one noticing.

1 See, for example, Michael Joseph Gross, “A Declaration of Cyber-War,” Vanity Fair, April 2011, available at http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104; Alexis C. Madrigal, “Stuxnet Is the Hiroshima of Cyber War,” The Atlantic, March 4, 2011, available at http://www.theatlantic.com/technology/archive/2011/03/stuxnet-is-the-hiroshima-of-cyber-war/72033/; Mark Clayton, “From the Man Who Discovered Stuxnet, Dire Warnings One Year Later,” Christian Science Monitor, September 22, 2011, available at http://www.csmonitor.com/USA/2011/0922/From-the-man-who-discovered-Stuxnet-dire-warnings-one-year-later.
But the likelihood of the detonation of a nuclear weapon on U.S. soil is much lower than that of a cyberattack on the United States. Is the nuclear issue, more consequential but less likely than the cyber issue, worth more attention and effort from policy makers, or less? Both are unquestionably important—but which deserves more action?
Questions of prioritization play heavily in the conduct of foreign relations as well, given that the United States usually has many interests at stake with other nations. For example, the United States has publicly held China and Russia responsible for industrial cyber exploitation on a very large scale. But China is also the largest single holder of U.S. debt and one of the largest trading partners of the United States. China is the single most influential nation with respect to North Korea. The United States and China are arguably the most important nations regarding the mitigation of global climate change. And this list goes on. What is the importance of large-scale cyber exploitation conducted by China for economic advantage relative to other U.S. interests with respect to China? Similar comments hold for Russia as well, although the specifics of U.S. common interests with Russia are different.
The need to manage multiple common interests with China or Russia or any other nation generally requires policy makers to make tradeoffs—pursuing one item on the agenda less vigorously in order to make progress on another item. Moreover, making such tradeoffs almost always results in domestic winners and losers, a fact that makes the losers very unhappy and increases their incentives to make their unhappiness known.
Nor is the competition for policy-maker attention limited to national security and foreign relations. Domestic concerns about unemployment, access to health care, and climate change are also important to the nation, and who is to say whether cybersecurity is a more important problem for the nation to address?
In an environment of many competing priorities, reactive policy making is often the outcome. It is an unfortunate fact of policy and politics that tough decisions are often deferred in the absence of a crisis that forces policy makers to respond. (The same can be true in the private sector as well.) Support for efforts to prevent a disaster that has not yet occurred is typically less than support for efforts to respond to a disaster that has already occurred.
In cybersecurity, this tendency often is reflected in the notion that “no or few attempts have yet been made to compromise the cybersecurity
of application X, and why would anyone want to do so anyway?,” thus justifying why immediate attention and action to improve the cybersecurity posture of application X can be deferred or studied further. Reactive policy making can be explained in part by the economics of excessive discounting of future events but has many other causes as well.
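The discounting point can be illustrated with a hypothetical calculation (the figures are invented for illustration only): under a sufficiently high discount rate, even a large expected future loss can look smaller on paper than a modest prevention cost incurred today.

```python
# Hypothetical illustration of excessive discounting: a decision maker
# weighs a prevention cost paid now against a breach loss expected
# years out, discounted back to present value.

def present_value(future_loss, annual_rate, years):
    """Discount a future loss back to today's dollars."""
    return future_loss / (1.0 + annual_rate) ** years

prevention_cost = 5.0          # $M, paid today
expected_breach_loss = 10.0    # $M, expected 5 years from now
years = 5

# At a moderate 5% discount rate, prevention pays for itself.
pv_moderate = present_value(expected_breach_loss, 0.05, years)

# At an excessive 20% rate, the same future loss shrinks below the
# prevention cost, and "wait for the crisis" looks rational on paper.
pv_excessive = present_value(expected_breach_loss, 0.20, years)

print(f"PV at 5%:  ${pv_moderate:.2f}M")
print(f"PV at 20%: ${pv_excessive:.2f}M")
assert pv_moderate > prevention_cost > pv_excessive
```

The arithmetic is trivial, but it captures how an inflated discount rate converts a preventable future disaster into an apparently unjustifiable present expense.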
Progress in cybersecurity policy has also stalled at least in part because of conflicting equities. As a nation, we want better cybersecurity, yes, but we also want a private sector that innovates rapidly, and the convenience of not having to worry about cybersecurity, and the ability for applications to interoperate easily and quickly with one another, and the right to no diminution of our civil liberties, and so on.
But the tradeoffs between security and these other national interests may not be as stark as they might appear at first glance. That is, it may be that the first proposals to advance cybersecurity interests in a given case entail sharper and starker tradeoffs than are necessary and that the second and third proposals may reduce the significance of those tradeoffs. Indeed, proposals may be developed that may advance both interests rather than just one at the expense of another, especially when longer time scales are involved. For example, a properly structured cybersecurity posture for the nation might also provide better protection for intellectual property, thereby enhancing the nation’s capability for innovation. More usable security technologies or procedures could provide better security and also increase the convenience of using information technology.
Nonetheless, irreconcilable tensions will sometimes be encountered. At that point, policy makers will have to confront rather than sidestep those tensions, and honest acknowledgment and discussion of the tradeoffs (e.g., a better cybersecurity posture may reduce the nation’s innovative capability, may increase the inconvenience of using information technology, may reduce the ability to collect intelligence) will go a long way toward building public support for a given policy position.
Finding 6. The use of offensive operations in cyberspace as an instrument to advance U.S. interests raises many important technical, legal, and policy questions that have yet to be aired publicly by the U.S. government.
As noted in Chapter 5, it is a matter of public record that the United States possesses offensive capabilities in cyberspace, including capabilities for cyber exploitation and for cyberattack. The United States has established U.S. Cyber Command as an entity within the Department of Defense that
plans, coordinates, integrates, synchronizes and conducts activities to: direct the operations and defense of specified Department of Defense
information networks and prepare to, and when directed, conduct full spectrum military cyberspace operations in order to enable actions in all domains, ensure US/Allied freedom of action in cyberspace and deny the same to our adversaries.2
The United States has publicly stated that it does not collect intelligence information for the purpose of enhancing the competitiveness or business prospects of U.S. companies. And it has articulated its view that established principles of international law—including those of the law of armed conflict—do apply in cyberspace.
But beyond these very general statements, the U.S. government has placed little on the public record, and there is little authoritative information about U.S. offensive capabilities in cyberspace, rules of engagement, doctrine for the use of offensive capabilities, organizational responsibilities within the Department of Defense and the intelligence community, and a host of other topics related to offensive operations.
It is likely that behind the veil of classification, these topics have been discussed at length. But a full public discussion of issues in these areas has yet to coalesce, and classification of such topics has left U.S. government thinking on these issues highly opaque. Such opacity has many undesirable consequences, but one of the most important consequences is that the role offensive capabilities could play in defending important information technology assets of the United States cannot be discussed fully.
What is sensitive about offensive U.S. capabilities in cyberspace is generally the fact of U.S. interest in a specific technology for cyberattack (rather than the nature of that technology itself); fragile and sensitive operational details that are not specific to the technologies themselves (e.g., the existence of a covert operative in a specific foreign country, a particular vulnerability, a particular operational program); or U.S. knowledge of the capabilities and intentions of specific adversaries. Such information is legitimately classified but is not particularly relevant for a discussion about what U.S. policy should be. That is, unclassified information provides a generally reasonable basis for understanding what can be done and for policy discussions that focus primarily on what should be done.
Cybersecurity is a complex subject whose understanding requires knowledge and expertise from multiple disciplines, including but not limited to computer science and information technology, psychology, economics, organizational behavior, political science, engineering, sociology, decision sciences, international relations, and law. In practice, cybersecurity is not primarily a technical matter, although technical measures are an important element and it is easy for policy analysts and others to get lost in the technical details. Furthermore, what is known about cybersecurity is often compartmented along disciplinary lines, reducing the insights available from cross-fertilization.

2 Fact sheet on U.S. Cyber Command, available at http://www.stratcom.mil/factsheets/2/Cyber_Command/, accessed March 8, 2014.
This primer seeks to illuminate some of these connections. Most of all, it attempts to leave the reader with two central ideas: the cybersecurity problem will never be solved once and for all, and solutions to the problem, limited in scope and longevity though they may be, are at least as much nontechnical as technical in nature.