Individuals, businesses, governments, and society at large have tied their future to information technologies, and activities carried out in cyberspace have become integral to daily life. Yet these activities—many of them drivers of economic development—are under constant attack from vandals, criminals, terrorists, hostile states, and other malevolent actors. In addition, a variety of legitimate actors, including businesses and governments, have an interest in collecting, analyzing, and storing information from and about individuals and organizations, potentially creating security and privacy risks.
Cybersecurity encompasses all the activities designed to protect work being carried out in cyberspace from the hostile actions of adversaries. Cybersecurity is made extremely difficult by the incredible complexity and scale of cyberspace. The challenges to achieving cybersecurity constantly change as technologies advance, new applications of information technologies emerge, and societal norms evolve.
On December 8 and 9, 2014, the Raymond and Beverly Sackler U.S.-U.K. Scientific Forum “Cybersecurity Dilemmas: Technology, Policy, and Incentives” examined a broad range of issues associated with cybersecurity. Organized by the National Academy of Sciences and the Royal Society, the forum brought together about 60 invited participants in Washington, D.C., for a day and a half of presentations and discussions on such topics as cybersecurity and international relations, privacy, rational cybersecurity, and accelerating progress in cybersecurity.
This summary of the forum is drawn from the comments made by participants at the meeting and does not reflect a consensus of those present or of the sponsoring organizations. Rather, it explores some of the more prominent dilemmas surrounding cybersecurity, identified in italicized boldface text, as well as issues related to those dilemmas.
A dilemma at the heart of cybersecurity is that people want conflicting things from computer and communications technologies. They want a technology to have the most modern and powerful features, be convenient to use, offer anonymity in certain circumstances, and be secure. But these attributes have competing requirements. For example, simpler systems are fairly easily made secure, but over time people demand more functionality, and the greater complexity that results makes systems less secure. Similarly, although a complex system can be better protected by isolating it or by sanitizing all input, doing so makes the system less useful and less valuable to its users.
Users of computing and communications technologies understandably focus on getting the job done. If a security solution gets in the way, they will find ways around it, for example by connecting an unsecured personal laptop remotely so they can work at home, or by insisting that a particular information technology work in almost any circumstance or setting.
Because of these conflicting desires, many abstract cybersecurity goals are not realistic. Security is often a relatively low priority for the individuals using information systems. Indeed, unless users see a clear advantage in the security being provided, they generally are unwilling to tolerate systems that are slower or more expensive or less capable simply because they are more secure.
It is hard to estimate the total cost of cybersecurity breaches. Security experts understandably tend to focus on the worst things that could happen to systems, and users and cybersecurity vendors likewise often cite very large estimates of the damage resulting from breaches. Individual users, on the other hand, tend to think more about what has happened to them, to people they know, or to people they recognize as being similar to themselves. Moreover, the people harmed by security lapses or measures may not be the same people who decide which security approaches and methods to use.
The harms that result from cybersecurity breaches can go well beyond economic costs. They include embarrassment and disruption, such as private pictures being distributed or data about corporate salaries being released. Economic and other harms to particular individuals or companies (or nations) can be significant. The harms from breaches may be small for any one individual but large in the aggregate. An ongoing challenge in cybersecurity is to understand the costs of breaches as compared to the costs (sometimes in the form of inefficiencies) of additional security measures.
Another prominent dilemma involves misaligned incentives. One way to think about the cybersecurity problem is to see it purely in terms of attackers and targets: a target has something that an attacker wants, and the attacker uses information technologies to try to get it. But this view obscures the importance of context, including the value of the asset and the cost of the attack.
But the values of assets differ. Some targets are the equivalent of nuclear launch codes, which need extremely high-assurance protection; others are online newspaper subscriptions, which need much less. Attacks also impose costs on the attacker, both the resources required to mount the attack and the potential penalties if the attack is detected and punished. Unless the expected return from an attack exceeds its expected cost, the attack is uneconomical.
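The expected-return reasoning above can be made concrete. The following is a minimal sketch, not anything presented at the forum; the function name and all figures are hypothetical assumptions chosen purely for illustration.

```python
# Hedged sketch of attack economics: an attack is worth mounting only if the
# expected return exceeds the expected cost. All numbers below are invented
# for illustration; they are not data from the forum.

def attack_is_economical(payoff, success_prob, attack_cost,
                         detection_prob=0.0, penalty=0.0):
    """Return True if the attacker's expected return exceeds expected cost."""
    expected_return = payoff * success_prob
    expected_cost = attack_cost + detection_prob * penalty
    return expected_return > expected_cost

# A low-value target (newspaper subscription) rarely justifies a costly attack,
# while a high-value target can justify it even at low odds of success.
print(attack_is_economical(payoff=50, success_prob=0.9, attack_cost=500))       # False
print(attack_is_economical(payoff=100_000, success_prob=0.1, attack_cost=500))  # True
```

The `detection_prob` and `penalty` terms capture the observation that punishment, when attribution succeeds, also shifts the attacker's calculus.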
These trade-offs require that decisions be made about the effort devoted to protecting assets. For example, what needs to be done to protect high-assurance assets, and what can be neglected in protecting low-assurance assets? Treating low-assurance assets as valuable assets—as is done, for example, when complex password rules are applied to low-assurance assets, or when people exaggerate the costs of cybercrime—leads to the irrational use of resources. In addition, sometimes it may be easier and cheaper to disrupt criminal activity down the line rather than to thwart it in advance by introducing rigorous security measures. For instance, it could be made harder to use stolen data, or the markets where criminal goods and services are exchanged could be disrupted. Further, the complexity that makes security hard also makes it difficult for individuals to be successful cybervillains on their own. Even if one person could, say, steal data, that person would still need a network of other specialists (such as malware designers, fake website designers, and money launderers) to carry out a criminal enterprise that exploits the data. If these networks can be disrupted, then the potential payoff of cybercrime can be limited.
Today, no good answer exists to the question of how rigorously people should protect their Internet accounts, or how much money should be spent on improving computer security. Even a simple question such as whether to mask passwords as people type them in (that is, replacing each typed character with a bullet, asterisk, or some other symbol) is difficult to answer, because the threat of shoulder surfing, in which someone steals a password by watching it being typed, could simply be replaced by other threats when passwords are masked. In fact, many claims about which practices are most effective in computer security are difficult to test, both because little relevant evidence is available and because gathering such evidence would be difficult.
Law enforcement activities frequently engage with information and activities in cyberspace, whether testing an alibi or attempting to uncover terrorist plots. In addition, access to communications data has become an important investigative tool for the police. For example, the large majority of serious crime cases prosecuted in the United Kingdom are said to rely on such access, in part because the data are relatively easy to obtain, whereas in the past such cases were prosecuted by other means.
Nonetheless, online criminal activities can run far ahead of the capabilities of law enforcement. Highly sophisticated gangs use computer and communications technology to steal, smuggle, blackmail, sell drugs, and conduct other criminal activities on a large scale. Software to facilitate criminal acts can be purchased from hacking specialists, so those who benefit from a crime no longer need to be cyberexperts themselves. The most serious criminals can then base themselves in jurisdictions that lack established mechanisms for assisting other countries with law enforcement cases. At the same time, an understanding of criminal motives and structures can aid law enforcement efforts. Criminal coalitions need to build specific trust-promoting structures and systems, which (given that they are to be used by criminals) is a nontrivial problem. The dilemma here is that advances in information technology facilitate cybercrime at the same time as they aid the efforts of law enforcement to prevent and solve such crime.
While technology cannot provide perfect security, new technology could provide greater security than exists today. For example, tamperproof audit trails and logs could cover all uses of data and enhance deterrence. Robust identity systems could be applied to people, programs, and machines. Technology development could change the balance and nature of trade-offs, and careful analysis of problems could yield improvements.
Stronger security technologies and procedures have been developed, but evidence of their efficacy and cost effectiveness is still lacking. High-assurance systems are possible, but they would likely be less functional and agile. Fundamental challenges include deciding which parts of the computing world need which levels of protection, determining how much added security will cost, and agreeing on how those costs should be distributed.
Providing cybersecurity typically means that trade-offs have to be made among the desired attributes of systems. Setting priorities can guide these trade-offs. One option is to limit the aspirations of systems by not trying to make everything secure. For example, the designers or users of a system could decide what really needs to be protected and what is not as important, just as people decide which assets to put in a bank vault and which to protect less securely. In this way, rational security policies would protect people only as much as they need to be protected.
As an example of setting priorities, the computing world could be divided into sectors that are more safe and accountable and sectors that are less safe and accountable. The sectors that call for more security might require centralized management and ways to control the input to systems, since they would still have vulnerabilities. One way to implement such an approach would be to identify a sector that handles only fully authenticated interactions, with accountability achieved by allowing interactions only with parties that are fully identified. Identity could be established through a combination of the person, the machine, and the program, and full audit logs could further enhance accountability.
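The accountable-sector idea above can be sketched in a few lines of code. This is a hypothetical illustration of the forum's discussion, not an implementation anyone proposed: interactions are permitted only when person, machine, and program are all identified, and every decision is appended to an audit log.

```python
# Hedged sketch of a fully accountable sector: allow an interaction only when
# all three identity components (person, machine, program) are established,
# and record every decision in an append-only audit log. All names are
# hypothetical.

audit_log = []

def authorize(person, machine, program):
    """Allow an interaction only if all three identities are established."""
    fully_identified = all([person, machine, program])
    audit_log.append({
        "person": person, "machine": machine, "program": program,
        "allowed": fully_identified,
    })
    return fully_identified

authorize("alice", "laptop-7", "payroll-app")   # allowed: fully identified
authorize(None, "laptop-7", "payroll-app")      # refused: anonymous person
```

The log makes accountability possible after the fact, which is the role the forum participants assigned to full audit trails.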
One challenge with this approach would be moving information from a less secure zone to a more secure zone. Information could be sanitized—for example, by taking out all the scripting language and other executable code. That would reduce functionality, but there would have to be reasonable assurance that the more secure zone had not been compromised.
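Sanitization of the kind described above can be illustrated with a toy example. This sketch strips only HTML script blocks and inline event handlers; a real cross-zone sanitizer would use an allow-list parser rather than regular expressions, so treat this purely as an illustration of the trade-off between safety and functionality.

```python
import re

# Hedged illustration of cross-zone sanitization: remove executable content
# (here, just <script> blocks and inline on*="..." event handlers) before a
# document moves into the more secure zone. A production sanitizer would use
# an allow-list HTML parser; this regex version only sketches the idea.

def sanitize(html):
    html = re.sub(r"(?is)<script\b.*?</script>", "", html)   # script blocks
    html = re.sub(r'(?i)\son\w+\s*=\s*"[^"]*"', "", html)    # inline handlers
    return html

doc = '<p onclick="steal()">hello</p><script>evil()</script>'
print(sanitize(doc))  # <p>hello</p>
```

Note what is lost as well as gained: any legitimate scripting in the document is stripped too, which is exactly the reduction in functionality the paragraph describes.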
Another way to establish priorities would be to make it harder to target important assets. Most accounts contain relatively low-value assets, and attackers cannot target everyone with the most sophisticated possible attacks, since such attacks are expensive. Furthermore, it can be costly for an attacker to figure out which accounts contain more valuable assets. Systems also could be designed to enable their users to remain obscure on the Internet—for example, by dividing their information among multiple unlinked accounts, which would make it harder to identify valuable targets.
Approaches like these would have to overcome difficulties. For example, weakly protected accounts can become more valuable over time as people use them more and for more things. Today, for instance, basic e-mail accounts should be viewed as extremely sensitive, since they are often used to reset passwords for a wide variety of services. Yet the security on the accounts may not be upgraded in line with their increase in value over time, rendering their users more vulnerable.
Given that perfect security is not possible in cyberspace, one possibility is to move toward retroactive security measures rather than try to prevent all the bad things that could happen. For example, in the financial system, the fundamental basis for security is that almost any transaction can be undone. Preventive measures would still exist, but the emphasis would be on reacting to security issues after the fact rather than on trying to anticipate all possible threats. In this way, the focus would be on actual problems rather than hypothetical worst-case problems, as is often the case with physical security systems. Using this approach, actions that cannot be undone would have to be handled much more carefully. A challenge for retroactive security and the setting of priorities as described above is that the question of which things need high security is highly context-dependent. An individual’s “mother’s maiden name” is not a secret or a security concern until it is used to answer a financial institution’s security question. Similarly, whether a transaction can be undone or not is context- and time-dependent (for example, does the institution involved still exist?). It can be a challenge to know in advance whether something is sensitive or whether it can be undone.
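The financial-system model of undoable transactions can be sketched as follows. This is a hypothetical minimal ledger, assumed for illustration only: a bad transfer is reversed by a compensating entry rather than by deleting history, which keeps the audit trail intact.

```python
# Hedged sketch of retroactive security in the style of financial systems:
# record every transfer so that a bad one can later be undone with a
# compensating entry. History is append-only; nothing is ever deleted.

class Ledger:
    def __init__(self):
        self.entries = []                    # append-only history

    def transfer(self, src, dst, amount):
        self.entries.append((src, dst, amount))
        return len(self.entries) - 1         # id used to undo later

    def undo(self, entry_id):
        src, dst, amount = self.entries[entry_id]
        self.entries.append((dst, src, amount))   # reverse by compensation

    def balance(self, account, opening=0):
        for src, dst, amount in self.entries:
            if src == account:
                opening -= amount
            if dst == account:
                opening += amount
        return opening

ledger = Ledger()
tx = ledger.transfer("alice", "mallory", 100)   # fraud detected after the fact
ledger.undo(tx)                                 # retroactive fix
print(ledger.balance("alice", opening=100))     # 100
```

Actions with no compensating entry (publishing a secret, for instance) have no `undo`, which is why the paragraph notes that irreversible actions would need to be handled much more carefully.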
The government could enhance its response to cybersecurity threats in a number of ways. It could increase its oversight and regulation of computer and communications technologies. It could use its convening power to encourage companies and institutions to comport with best practices. It could mandate a “safety culture” approach (similar to that seen in aviation) to cybersecurity and privacy not only in government agencies but also in the private sector. It could insist that companies provide security and privacy mechanisms in their products. Tort law could be interpreted in new ways or amended to provide increased penalties for cybersecurity breaches.
One aspect of regulation is deterrence through the threat of some kind of punishment. However, the people conducting cyberattacks are usually difficult or impossible to find and punish. Denial is easy, proof is hard, and prompt attribution is particularly difficult. Furthermore, cyberattacks and cyberexploitation are usually indistinguishable until an explicit attack is executed. Cybersecurity can be violated, for example, by the placement of a capability that allows access for some future unknown purposes.
Deterrence also runs the risk of sweeping up unintentional as well as deliberate attempts to contravene security. People who face a choice between getting their work done and observing unrealistic security guidelines make rational choices, so they need rational security systems.
Although it would be difficult to require accountability throughout a communications network, the nodes (i.e., servers or end-user devices) of a network could be made accountable for their cybersecurity provisions. For example, they could be locked out of a network or strongly isolated if they were found to be insufficiently secure. Administrators would need to detect which nodes are vulnerable or acting maliciously and be able to punish or isolate them. This authority could be delegated to a professional third party, with the responsibility decentralized rather than concentrated in a single location.
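Node-level accountability of the kind described above might look like the following sketch. The node names and the particular checks (patch status, observed malicious events) are assumptions chosen for illustration, not criteria proposed at the forum.

```python
# Hedged sketch of node accountability: an administrator (or a delegated
# third party) audits nodes and isolates any that fail a security check.
# Node names and the specific checks are hypothetical.

QUARANTINE = set()

def audit_node(name, patched, malicious_events):
    """Isolate a node that is unpatched or has been observed misbehaving."""
    if not patched or malicious_events > 0:
        QUARANTINE.add(name)       # locked out of the network
        return False
    return True

audit_node("web-1", patched=True, malicious_events=0)       # stays connected
audit_node("iot-cam-3", patched=False, malicious_events=0)  # isolated
print(sorted(QUARANTINE))  # ['iot-cam-3']
```

Decentralizing this responsibility, as the paragraph suggests, would mean many such auditors acting over their own portions of the network rather than one central authority.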
A dilemma inherent in regulation is that it tends to be a blunt instrument—slow, behind changing technologies and threats, and prone to unintended consequences, such as inhibiting innovation. Economic incentives can be a more efficient intervention. For example, the U.S. Securities and Exchange Commission and the U.K. Financial Conduct Authority could adopt new rules requiring that data breaches or noncompliance with best security practices be reported to investors in quarterly reports. If companies are seen as acting in ways that harm their customers, they will not keep their customers’ business. As is often said in the technology industry, a competitor is just a click away for consumers.
Additional research could yield substantial progress on many of the questions that still surround cybersecurity. For example, more study of how people apply—or circumvent—security systems would be useful for designing more rational systems. Metrics for levels of security and values of assets could enable good-enough security rather than absolute security. Is it possible to reduce the maximum harm that attackers can do while increasing the level of assurance that can be provided to potential targets given a particular level of available resources? Can an optimal cybersecurity model be envisioned along with pathways to move toward such a model? Can a machine learning system identify patterns of bad behavior in past activities and use those patterns to detect ongoing bad behavior—a goal of many intrusion detection systems today? How quickly can such approaches adapt when adversaries can use similar technologies to understand what patterns of behavior they need to change to remain undetected? Both foundational and more applied research could yield long-term progress.
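The pattern-learning idea raised above can be reduced to a toy example. Real intrusion detection systems model far richer features and adapt over time; this sketch, with an assumed frequency threshold, only shows the basic shape of learning "normal" behavior from history and flagging departures from it.

```python
from collections import Counter

# Hedged sketch of frequency-based anomaly detection: learn which actions
# were common in past activity, then flag rare or unseen ones as suspicious.
# The min_count threshold is an assumption for illustration.

def learn_profile(history):
    """Count how often each action appeared in past activity."""
    return Counter(history)

def is_suspicious(profile, action, min_count=2):
    """Flag actions seen fewer than min_count times in the history."""
    return profile[action] < min_count

history = ["login", "read", "read", "login", "read", "write"]
profile = learn_profile(history)
print(is_suspicious(profile, "read"))        # False: common in the history
print(is_suspicious(profile, "delete_all"))  # True: never seen before
```

The paragraph's closing question applies directly: an adversary who can see this profile knows that blending into the common actions will avoid detection, which is why adaptation speed matters.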