Despite considerable investments of resources and intellect, cybersecurity continues to pose serious challenges to national security, business performance, and public well-being. Modern developments in computation, storage, and connectivity to the Internet have brought into even sharper focus the need for a better understanding of the overall security of the systems we depend on.
The cybersecurity task is daunting, and the world continues to change. We see increasing replacement of physical systems with digital ones, increasing use of digital systems by larger segments of the population, and increasing use of digital systems in ways that the designers and developers never intended. In the early days, the security focus was on protecting networks, servers, and client workstations. Today’s concerns include targeted attacks on electromechanical control systems and mobile devices. Systems of all kinds are becoming larger and more interconnected. Other changes in recent years include the character of the threat, its sophistication, goals and targets; increasingly sophisticated supply chains for software-reliant systems that themselves include components from diverse sources; and wide deployment of Internet of Things (IoT) devices (e.g., infrastructure controlled by SCADA systems,1 home automation, and self-driving and partly automated vehicles and automated highways). Success in protecting one area drives attackers to probe elsewhere. All of these trends result in larger impacts when systems are compromised.
1 SCADA refers to Supervisory Control and Data Acquisition systems typically used to monitor and control industrial processes in the physical world.
This committee was asked to consider future research goals and directions for foundational science in cybersecurity, including economics and behavioral science as well as more “traditional” cybersecurity topics. It brought together researchers from different disciplines and practitioners from different sectors.
There have been many reports on cybersecurity research offering many recommendations. Rather than echo those reports and expand their lists of proposed projects, the committee focused on foundational research strategies for organizing people, technologies, and governance. These strategies seek to ensure the sustained support needed to create an agile and effective research community, with collaborative links across disciplines and between research and practice. The aim of the report is to encompass a broad security science that includes fundamental underpinnings related to scientific laws, attacks, policies, and environments;2 social, behavioral, and decision science considerations; as well as engineering, operational, and life-cycle challenges. This report is aimed primarily at the cybersecurity research community, but it takes a broad view that efforts to improve foundational science in cybersecurity will need to be inclusive of many disciplinary perspectives and ensure that these disciplines work together to achieve common goals.3
Cyberspace is notoriously vulnerable to varied and changing attacks by hackers, criminals, terrorists, and state actors. The nation’s critical infrastructure, including the electric power grid, air traffic control system, financial system, and communication networks, depends on information technology for its operation and thus is susceptible to cyberattack. These concerns are not new, nor is recognition of the importance of research as an essential element in U.S. national cybersecurity strategy. For example, as early as 1991, the National Academies of Sciences, Engineering, and Medicine highlighted the role of research in understanding and addressing vulnerabilities through scientifically sound policies, technologies, and behavioral interventions.4 It focused on end-to-end strategies, linking the previously stovepiped domains of communications security and system security. The present report adopts the same encompassing view.
2 This formulation was first described in F.B. Schneider, Blueprint for a science of cybersecurity, The Next Wave 19(2), 2012.
3 Regarding privacy: Although the committee was tasked to consider cybersecurity, there is overlap in the cybersecurity and privacy research communities (and research problems). And privacy research itself demands input from many disciplines. As an end, protecting privacy is one measure of system performance. As a means, compromised privacy can create openings for other mischief; fear over compromise may motivate behavior that benefits the system overall. Many of the approaches suggested in this report should also apply to privacy research, even if the particular examples do not overlap directly.
Why is the cybersecurity situation so challenging despite years of attention? One major challenge is that we rely on systems and components that were not designed with modern threats in mind. Many of these systems and components are intrinsically weak due to decades-old design choices as well as outdated security goals and assumptions about the nature of the threat. Another challenge is that even well-designed systems have bugs, creating vulnerabilities that attackers will work hard to find—and often succeed in finding. Many systems evolve over time, combining newer components with legacy components; often this evolution occurs with only limited application of systems engineering principles and without an understanding of what the security-critical components are or the dependencies on them.
Despite growing awareness of these threats, many organizations still do not (or cannot) spend the resources needed to understand or fix their vulnerabilities. Even when they see software as safety-critical, other concerns (e.g., costs, schedules) may limit their efforts to improve system security. Moreover, fallible humans design, maintain, use, and repair systems in ways that may unintentionally expose vulnerabilities and facilitate break-ins. This report bears these realities in mind, considering the behavioral and organizational interventions needed to sustain progress toward more securely designed, more nearly bug-free, and more error-tolerant systems at acceptable cost.
Another difficulty is that many actors affect cybersecurity, including boards of directors, shareholders, regulators, standards bodies, citizens, nongovernmental organizations, manufacturers, and researchers. As a result, there are often conflicting views and interests. For instance, password requirements for online banking tend to be much less strict than those used inside the federal government, reflecting different trade-offs. At a societal level, cybersecurity affects and is affected by the sometimes conflicting equities of national security, democratic values, and economic prosperity,5 which widens the aperture for the research enterprise considerably.
Responding to these dynamic challenges requires sustained support for research that can address challenges of today and those still on the horizon.
4 National Research Council, Computers at Risk: Safe Computing in the Information Age, National Academy Press, Washington, D.C., 1991.
5 See “Tensions Between Cybersecurity and Other Public Policy Concerns,” Chapter 7 in National Research Council, At the Nexus of Cybersecurity and Public Policy: Some Basic Concepts and Issues, The National Academies Press, Washington, D.C., 2014.
It requires collaboration across disciplines, because overall system security depends on individual and organizational behavior as well as technology. It requires the ability to reconfigure approaches as threats (and successes) evolve, which means having short cycles for receiving and responding to feedback. Meeting these requirements will not be easy in a world organized around scientific disciplines, corporations and institutions, regulatory and standards bodies, and government bureaucracies—each functioning in ways developed in the past to serve other purposes. For these reasons, the committee has focused primarily on processes for identifying and addressing problems, rather than on problems per se.
Security science6 has the goal of improving the understanding of which aspects of a system (including its environment and users) create vulnerabilities or enable someone or something (inside or outside the system) to exploit them. Ideally, security science provides not just predictions for when attacks are likely to succeed, but also evidence linking cause and effect pointing to solution mechanisms. A science of security would develop over time, for example, a body of scientific laws, testable explanations, predictions about systems, and confirmation or validation of predicted outcomes.
As an example, early models of cryptographic systems were incomplete: adversaries discovered interfaces the models omitted and used side-channel attacks7 to exploit them. A systematic, scientific approach to modeling the cryptographic system that took this into account allowed the model to be improved. Another example involves the common attack mode of “phishing,” which is directed not against a technical system per se but against an individual, whom an adversary tries to deceive into actions that let attackers into the system. A model that does not include people invoking malicious software would be incomplete with respect to this type of attack, however complete it was in other respects. As long as their limits are known, scientific laws derived from incomplete models may still be useful. A set of mathematical laws about cryptography that addresses the strength of algorithms but not side-channel attacks could still help in designing systems that resist some attacks, even if not all kinds of adversaries.
6 Recent years have seen increased discussion of what a scientific basis for cybersecurity might entail, and efforts are under way within the cybersecurity research community to develop a security science. See, for instance, F.B. Schneider, Blueprint for a science of cybersecurity, The Next Wave 19(2), 2012. Continued work in this space is a key component of the foundational approach described in this report. See also C. Herley and P. van Oorschot, “SoK: Science, Security, and the Elusive Science of Security,” Proceedings of 2017 IEEE Symposium on Security and Privacy (forthcoming, available at http://people.scs.carleton.ca/~paulv/papers/oakland2017science.pdf). Building this science will be a long-term endeavor that is both forward-looking, in that its results can be used as a basis for decisions about current and future systems, and retrospective, in that results can be used to explain how and why past efforts failed (or succeeded). As more is understood, scientific analyses can be used to assess both proposed efforts and past practices.
7 Side-channel attacks, such as differential power analysis, use information derived from the physical characteristics of a system (such as power consumption or electromagnetic leaks) to attack cryptographic systems, rather than exploiting algorithmic weaknesses.
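The incomplete-model point can be made concrete with a small sketch (illustrative only, not drawn from the report): a string comparison that is correct in a purely algorithmic model but leaks information through its running time, modeled here as the number of byte comparisons performed before the function returns.

```python
# Illustrative sketch: a functionally "correct" comparison whose early
# exit creates a side channel invisible to a purely algorithmic model.
# "Timing" is modeled as the number of byte comparisons performed.

import hmac

def naive_compare(secret: bytes, guess: bytes):
    """Compare byte-by-byte; return (equal, comparisons_performed)."""
    steps = 0
    if len(secret) != len(guess):
        return False, steps
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:           # early exit: running time depends on the data
            return False, steps
    return True, steps

secret = b"s3cret"

# The position of the first mismatch is visible in the step count, so an
# attacker can recover the secret one byte at a time -- a channel that a
# model of the algorithm alone does not capture.
_, t1 = naive_compare(secret, b"x3cret")   # differs at byte 0
_, t2 = naive_compare(secret, b"s3crex")   # differs at byte 5

# A constant-time comparison (hmac.compare_digest) closes this channel.
safe = hmac.compare_digest(secret, b"s3cret")
```

The fix does not change what the function computes, only the physical behavior of how it computes it, which is exactly the distinction an incomplete model misses.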
A scientific approach to cybersecurity challenges could enrich understanding of the existing landscape of systems, defenses, attacks, and adversaries. Clear and well-substantiated models could help identify potential payoffs and support mission needs while avoiding likely dead ends and poor places to invest effort. There are strong and well-developed bases in the contributing disciplines. In mathematics and computer science, these include work in logic, computational complexity, and game theory. In the human sciences, they include work in judgment, decision making, interface design, and organizational behavior.
Examples of research areas in which this sort of scientific approach has been taken include cryptography, programming languages, and security modeling. The cryptography community (which comes from a mathematics tradition) has taken a mathematical approach to problems related to secrecy, integrity, authentication, and non-repudiation. For example, researchers in this community developed approaches to probabilistic computational secrecy, cryptographic protocol analysis, and logics of authentication with mathematical models that allow the exploration of what is and is not possible with clearly stated assumptions. The cryptography community has developed a set of building blocks and constructive reasoning principles that allow building new approaches (e.g., protocols) whose security attributes can be estimated relative to known building blocks.
The programming language and semantics community has followed suit. One example is the work on type-based information flow, which now allows constructing models of languages or systems, proving relevant properties (e.g., non-interference), and deriving implementations.8 Another example is a proof that type checking and program obfuscation are equivalently effective against certain classes of attacks.9 The security modeling community has formulated models of security since the 1970s,10 as well as methods for evaluating these models.11 This work was an instance where the science of security was advanced by introducing tools from another discipline (logic) to evaluate an accepted model.
8 D.E. Denning and P.J. Denning, Certification of programs for secure information flow, Communications of the ACM 20(7):504-513, 1977; L. Zheng and A.C. Myers, “Dynamic Security Labels and Noninterference,” Cornell University, https://www.cs.cornell.edu/andru/papers/dynl-tr.pdf.
9 R. Pucella and F.B. Schneider, Independence from obfuscation: A semantic framework for diversity, Journal of Computer Security 18:701-749.
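The flavor of the information-flow work mentioned above can be suggested with a small dynamic-label sketch (a hypothetical illustration, not the static type systems of the cited papers): values carry security labels, labels join under computation, and any flow from High data to a Low sink is rejected.

```python
# Illustrative sketch (hypothetical API, not from the cited work):
# labeled values with a join on labels, and a sink check that rejects
# illegal flows -- a dynamic analogue of information-flow typing.

LOW, HIGH = 0, 1

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # The result's label is the join (max) of the operands' labels.
        return Labeled(self.value + other.value,
                       max(self.label, other.label))

def output(sink_label, v):
    """Permit the flow only if the value's label is at most the sink's."""
    if v.label > sink_label:
        raise PermissionError("illegal flow: High data to Low sink")
    return v.value

salary = Labeled(90_000, HIGH)
bonus = Labeled(5_000, LOW)

total = salary + bonus        # join: HIGH + LOW -> HIGH
ok = output(HIGH, total)      # allowed: sink is HIGH
try:
    output(LOW, total)        # rejected: would leak HIGH data
    leaked = True
except PermissionError:
    leaked = False
```

The static systems in the literature establish the same guarantee at compile time, so no run-time check (and no run-time failure) is needed; this sketch only conveys the label-join discipline.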
Another example of research using a scientific approach considers abstract security mechanisms and what can be learned about their properties and applicability to classes of attacks. In the case of reference monitors—components of a system that allow certain things to happen (or not) based on a security policy—one interesting result is that firewalls, operating system kernels, and mechanisms that enforce access control lists are all reference monitors. Viewing these mechanisms through the lens of a security science that seeks general principles, researchers asked, What general security policies can a reference monitor enforce? The result is that reference monitors can enforce only what are known as safety properties, which require that something bad never happens.12 This result both demonstrates the robustness of the scientific approach and offers a practical insight to those implementing security technologies—to wit: understand whether the policy to be enforced is a safety property, and recognize that, if it is not, any security approach that depends on a reference monitor will not be able to enforce it. For instance, firewalls cannot address sophisticated phishing attacks.13
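A minimal sketch may help fix ideas (illustrative names and policy, not from the report): a reference monitor mediates every access and halts execution before a policy-violating step occurs, which is precisely why it can enforce safety properties and nothing more.

```python
# Minimal reference-monitor sketch (illustrative): every access passes
# through the monitor, which truncates the execution the moment a
# policy-violating step is attempted. It can enforce only safety
# properties -- "nothing bad ever happens."

# A toy access-control-list policy (hypothetical principals/resources).
acl = {("alice", "payroll.db"): {"read"},
       ("bob", "payroll.db"): set()}       # bob may do nothing

class PolicyViolation(Exception):
    pass

def reference_monitor(principal, resource, action):
    allowed = acl.get((principal, resource), set())
    if action not in allowed:
        # Halt before the bad event occurs; the trace so far stays safe.
        raise PolicyViolation(f"{principal} may not {action} {resource}")
    return f"{principal} performed {action} on {resource}"

ok = reference_monitor("alice", "payroll.db", "read")
try:
    reference_monitor("bob", "payroll.db", "read")
    blocked = False
except PolicyViolation:
    blocked = True
```

Note what the monitor cannot do: a liveness-style policy such as "every request is eventually answered" cannot be enforced by truncating executions, since no finite prefix ever demonstrates the violation.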
Developing scientific laws and models related to composability would help explore and explain how combinations of mechanisms and approaches interact.14 It could be a key contribution, especially if exploring composability mechanisms and approaches generates new knowledge about independence and its relationship to security. Results from that exploration could contribute significantly to the design and deployment of defensible systems. Reasoning about and understanding the security of systems synthesized from individual components or subsystems remains a challenging problem that is today best tackled by experts in attackers’ techniques. Such experts are in short supply. Providing these experts with well-protected building blocks could help them work more effectively and efficiently toward a network-wide security approach as they focus on the seams between components (which are increasingly a target of adversaries). This would be particularly effective if those blocks can be verified or proved correct, together with a way to understand and model how they work together.
10 D.E. Bell and L.J. LaPadula, “Secure Computer System: Unified Exposition and Multics Interpretation,” MTR-2997, MITRE Corp., Bedford, Mass., March 1976, available as NTIS AD A023 588; J.A. Goguen and J. Meseguer, “Security Policies and Security Models,” pp. 11-20 in 1982 IEEE Symposium on Security and Privacy, 1982, http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6234453.
11 J. McLean, “Reasoning about Security Models,” 1987 IEEE Symposium on Security and Privacy, IEEE, 1987, http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6234872.
12 F.B. Schneider, Enforceable security policies, ACM Transactions on Information and System Security 3(1):30-50, 2000; K.W. Hamlen, G. Morrisett, and F.B. Schneider, Computability classes for enforcement mechanisms, ACM Transactions on Programming Languages and Systems (TOPLAS) 28(1):175-205, 2006.
13 A given firewall may be able to institute a policy that would reject some set of phishing attacks but cannot defend against the entire class of phishing attacks—not least because a precise definition of “phishing” is not available. To correctly identify all phishing attacks would require a reference monitor that could understand natural language as well as being able to predict how a program would execute when it is downloaded. This suggests the need for research on a broader notion of phishing that relates to the structure of decision making in organizations and would draw on social, behavioral, and decision sciences, as discussed in Chapter 3.
14 Some work has already been done in this area as well, going at least as far back as J. McLean, A general theory of composition for a class of “possibilistic” properties, IEEE Transactions on Software Engineering 22(1):53-67, 1996.
Human factors researchers have developed signal detection theory15 to determine whether performance errors reflect an inability to detect problems or misaligned incentives for responding to them (undue or insufficient caution). They have developed vigilance theory16 to predict the effects of work conditions (e.g., shift length) on performance. One series of studies combined the two in investigating susceptibility to phishing attacks, finding wide variability in both detection ability and perceived incentives across individuals, as well as differences within individuals when thinking about a potential threat and deciding how to respond. The resulting performance parameter estimates provide a basis for evaluating the relative vulnerability of alternative system configurations.17
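As a brief illustration of the signal detection framework (with hypothetical numbers, not data from the cited study), the two parameters it separates, sensitivity (d′, detection ability) and response bias (c, the incentive-driven criterion), can be estimated from a user's hit and false-alarm rates.

```python
# Illustrative sketch (hypothetical rates): estimating signal-detection
# parameters for phishing susceptibility. d' captures the ability to
# distinguish phishing from legitimate email; c captures the bias toward
# flagging (undue caution) or not flagging (insufficient caution).

from statistics import NormalDist

z = NormalDist().inv_cdf   # probit: inverse standard-normal CDF

def sdt_params(hit_rate, false_alarm_rate):
    d_prime = z(hit_rate) - z(false_alarm_rate)              # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # bias
    return d_prime, criterion

# A user who flags 84% of phishing emails but also 16% of legitimate ones:
d, c = sdt_params(0.84, 0.16)
```

With these symmetric rates the estimated bias is near zero, so the user's errors reflect limited detection ability rather than a skewed incentive to over- or under-report, exactly the distinction the text attributes to the framework.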
Other questions inviting a scientific approach include the following:
- What are useful or “interesting” classes of attacks, defenses, and policies?
- What does it mean for systems or subsystems to be independent? Replication tolerates failures (because we believe physically separated devices fail independently), but replication does not tolerate attacks (because replicas have the same vulnerabilities). Defense in depth works when the component defenses are “independent.”
- What are good underlying formalisms for execution? The formal methods community uses “sets of sequences,” but this is too inexpressive for even simple security policies like confidentiality.
15 S.K. Lynn and L.F. Barrett, “Utilizing” signal detection theory, Psychological Science 25(9):1663-1673, 2014, doi:10.1177/0956797614541991.
16 N.H. Mackworth, The breakdown of vigilance during prolonged visual search, Quarterly Journal of Experimental Psychology 1(1):6-21, 1948, doi:10.1080/17470214808416738.
17 C. Canfield, B. Fischhoff, and A.L. Davis, Quantifying phishing susceptibility for detection and behavior decisions, Human Factors 58(8):1158-1172, 2016, doi: 10.1177/ 0018720816665025.
- Is there an orthogonal set of building blocks related to security? Is there a natural correspondence between those building blocks and specific classes or mechanisms? The traditional security notions of confidentiality, integrity, and availability are intuitive, but they are not orthogonal, which complicates reasoning and analysis. (Confidentiality can be achieved by corrupting integrity or by denying access.)
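The independence question above can be made concrete with a toy calculation (assumed probabilities, purely for intuition): independent defense layers multiply the attacker's required luck, while identical replicas that share a vulnerability fall together, which is why replication tolerates failures but not attacks.

```python
# Toy calculation (illustrative assumptions): defense in depth under
# independence versus replication. Suppose each layer, on its own, is
# bypassed with probability 0.1.

p_bypass = 0.1

# Independent layers: the attacker must defeat both, and successes
# against the two layers are assumed statistically independent.
p_breach_independent = p_bypass * p_bypass   # 0.01

# Identical replicas: one working exploit defeats every copy, so the
# second "layer" adds nothing.
p_breach_replicated = p_bypass               # 0.1
```

The tenfold gap exists only under the independence assumption, which is exactly what a science of security would need to define and test for real mechanisms.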
Programmatically, there have been several efforts toward a science of security in the cybersecurity community, beginning in earnest a few years ago. As one example, the National Security Agency (NSA) has funded several lablets (groups of researchers tasked with contributing to the development of a systematic body of knowledge)18 and created an annual “Best Scientific Cybersecurity Paper” competition.19 As an adjunct to these lablets and related efforts, the NSA has also established a science of security virtual organization20 to help researchers stay abreast of current news and activities in the field. There are currently four academic research lablets; they were established to focus on developing a science of security and a community to advance it. The lablets have developed lists of hard problems that involve crossing disciplinary boundaries, and the NSA has worked to get researchers to report results in relation to those problems.
The lablet model is designed to promote more direct interactions among researchers (i.e., not just through the literature) with a focus on sharing those diverse research methods that cybersecurity challenges require, including observational empiricism and data analysis, interventional empiricism, mathematical models, and reasoning. A science of security can lead to powerful and explanatory results and predictions. Drawing the connections between traditional cybersecurity research and emerging scientific laws and models, and making clear how such results fit within an overarching (albeit still emerging) science, will serve to both validate the science as it is developed and contextualize specific results.
The committee’s analysis and recommendations in the rest of this report are organized under the following four broad aims:
- Support, develop, and improve security science—in terms of the emerging research efforts in this area, in the practice and reporting of results, and in terms of a long-term, inclusive, multidisciplinary approach to security science.
- Integrate the social, behavioral, and decision sciences into the security science research effort, since all cybersecurity challenges and mitigations involve people and organizations.
- Integrate engineering and operations, incorporating a life-cycle understanding of systems into the research endeavor and security science.
- Sustain long-term support for security science research including material resources and institutional structures that facilitate approaches and opportunities for improvement.
Box 1.1 illustrates these commitments in the context of the study of passwords. Although not every research effort in cybersecurity can or should address all four of these aims at once, articulating where a given effort sits with respect to them is important to the coherence of the research program. Security science can be thought of, broadly, as incorporating these elements (to varying degrees as appropriate), ensuring that each piece of a particular research effort meets the standards of its contributing disciplines, and integrating those efforts in coherent and disciplined ways.
This chapter has described the report’s overall philosophy. Chapters 2 through 5 elaborate it. Chapter 2 examines the potential of social, behavioral, and decision sciences to contribute to improved cybersecurity. Chapter 3 highlights the importance of incorporating engineering and life-cycle considerations into the cybersecurity research endeavor. Chapter 4 outlines a foundational cybersecurity research agenda. Chapter 5 offers insights on the organization and leadership of the research community and describes opportunities to improve research practice and approach, concluding with a discussion of how the research community could reconfigure its efforts to more inclusively address cybersecurity challenges.