Numerous research agendas for cybersecurity have been developed and promulgated. For instance, a decade ago the National Research Council report Toward a Safer and More Secure Cyberspace1 argued that both traditional and unorthodox approaches to research are needed in order to create new knowledge and to make that knowledge usable and transferable. That report also emphasized the importance of breadth and diversity in research agendas, because the risks will be on the rise for the foreseeable future, and a broad, diverse research agenda will increase the likelihood that a useful approach to address some future threat can be found. That approach is still relevant today, as cybersecurity challenges have only increased over time.
More recently, the federal Networking and Information Technology Research and Development Program issued its Federal Cybersecurity Research and Development Strategic Plan.2 That plan rests on four assumptions related to adversaries, defenders, users, and technology; outlines a number of near-, mid-, and long-term goals; spotlights four defensive elements (deter, protect, detect, and adapt); outlines six critical areas, the first three of which are most relevant to this report (scientific foundations, enhancements in risk management, and human aspects); and offers five recommendations for the federal government, the first of which is to prioritize basic and long-term research.
1 National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.
2 National Science and Technology Council, Federal Cybersecurity Research and Development Strategic Plan: Ensuring Prosperity and National Security, Networking and Information Technology Research and Development Program, February 2016.
This report is intended to complement the strategic plan—it emphasizes and elaborates on several specific components of the plan and offers a distinctive framework through which to view research efforts. This section offers a research agenda that can be mapped to components of the strategic plan. However, as per the charge and statement of task for this study, the committee offers a set of research questions to consider that align with its suggested approach for foundational cybersecurity. These are not, however, intended to supersede other agendas. This chapter offers a substantive research agenda to complement and work in tandem with the efforts toward a science of security described in Chapter 1. It includes a set of foundational technical problems, an outline of a set of questions that research in the social and behavioral sciences could address, and notes on cross-cutting topics, including metrics, assessments of criticality, and evaluation. A brief overview of other recent research agendas is available in Appendix C.
The committee outlines a foundationally oriented technical research agenda clustered around three broad themes that correspond to those in the strategic plan: detect (detection and attribution of attacks and vulnerabilities), protect (defensible systems that are prepared for and can resist attacks), and adapt (resilient systems that can recover from or cope with a wide range of adversarial behavior). Many familiar technical topics fall within these clusters. Of course, many challenges span these themes, and understanding how the themes interact (for instance, designing systems that support both protection and adaptation) is important.
The original concept of intrusion detection, as put forth by J.P. Anderson in the 1970s3 and further advanced by Dorothy Denning in the 1980s,4 was to examine activity in the context of a computer system with the intent of detecting deviations from the norm that indicated malice or attack. The government invested significant resources in “intrusion-detection systems” during the late 1980s and 1990s with the aim of building systems that could accomplish the objective set out by Anderson. The task of detecting new or previously unseen attacks is difficult because the wide variety of activities observed in a typical computer system can lead to a high number of false positives.
3 J.P. Anderson, “Computer Security Threat Monitoring and Surveillance,” April 15, 1980, http://seclab.cs.ucdavis.edu/projects/history/papers/ande80.pdf.
4 See D.E. Denning, An intrusion-detection model, IEEE Transactions on Software Engineering SE-13(2):222-232, 1987; and D.E. Denning and P.G. Neumann, Requirements and Model for IDES—A Real-Time Intrusion Detection System, SRI International, 1985, http://www.csl.sri.com/papers/9sri/.
While many current information technology (IT) installations incorporate commercial “intrusion-detection” systems, most of those systems operate by recognizing the signatures of previously observed attacks. New attacks that do not replicate previously seen malicious code, data, or network traffic patterns may not be detected because their activity fails to “look like an intrusion.” Some high-end intrusion-detection systems are capable of characterizing the normal activity on a network and reporting deviations from “normal” with an acceptably low false-positive rate.
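The contrast between signature matching and characterizing “normal” activity can be illustrated with a minimal sketch. The signature strings, event names, and baseline statistics below are hypothetical, not drawn from any real system:

```python
# Minimal sketch contrasting signature-based and anomaly-based detection.
# The signatures and event counts here are illustrative stand-ins, not
# real attack data or a production detection pipeline.

SIGNATURES = {"known_exploit_payload", "malicious_beacon"}

def signature_alerts(events):
    """Flag only events matching previously observed attack signatures;
    novel attacks that do not 'look like an intrusion' pass through."""
    return [e for e in events if e in SIGNATURES]

def anomaly_alerts(event_counts, baseline_mean, baseline_std, k=3.0):
    """Flag activity rates deviating more than k standard deviations
    from a learned 'normal' baseline."""
    return [name for name, count in event_counts.items()
            if abs(count - baseline_mean) > k * baseline_std]

events = ["login", "known_exploit_payload", "file_read"]
print(signature_alerts(events))                         # catches only the known attack
print(anomaly_alerts({"outbound_conn": 950}, 100, 50))  # novel behavior, flagged by rate
```

The threshold `k` makes the false-positive trade-off explicit: lowering it catches more novel attacks at the cost of more spurious alerts, which is the core tension the text describes.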
The challenge is to detect behavior in a system that could lead to a “bad” situation. But defining “badness” is difficult and context-sensitive and can range from reductions in availability (denial of service) to information theft to corruption of data. For one thing, bad situations can result from the inadvertent mistakes of known good actors, not just from the behavior of attackers. And attackers can often understand the workings of systems (including intrusion-detection systems themselves) and craft their attacks so as to appear “normal enough” to evade detection. Effective intrusion detection in the future will need to encompass more than technical signature-based pattern matching and machine learning-based classifiers. It also needs to include early detection of insider misuse, denials of service, and situational anomalies at many layers of abstraction. Organizations can employ red-teaming (running attacks against themselves) to help assess their own response and detection capabilities. An additional difficulty is that in the era of big data, the data collected to protect against intruders may grow very large, and managing that data will become its own challenge.
With the increasing prominence of cyberattacks (especially in the military, diplomatic, and political spheres), attribution will be increasingly important once attacks are detected. Attribution—identifying with an understood degree of confidence who is responsible for a cyberattack—has increasing geopolitical significance. There are both technical questions (e.g., with what confidence can attribution be done, of what sorts of activities, and how quickly?) and questions related to trust and evidence. What constitutes evidence, and how can its context (including system-state parameters) and provenance be effectively conveyed? In addition, there are questions related to how to characterize uncertainty in attribution in ways that decision makers could use.
Attribution demands both technical approaches and knowledge about adversaries and their capabilities and intentions. Moreover, attribution is used for different purposes depending on context—sometimes to serve as a deterrent and sometimes to serve as the basis for holding
attackers accountable. Depending on the need, there may be different requirements for certainty, precision, and accuracy. Improving prospects for attribution and accountability is a research area that should integrate technical and social and behavioral approaches to be most effective. Given the limitations of purely technical attribution, attribution is ultimately an all-source activity—that is, knowledge from multiple domains (not all technical) is brought to bear. Although attribution based on characteristics of attacks is in part a technical problem, devising effective ways to hold attackers accountable in a military, diplomatic, or political context is more of an issue for political or behavioral science. There are challenges, for instance, around the nature of the evidence, determining what kinds of evidence are both meaningful and persuasive, and how to convey that evidence in a convincing way to those who need to know. In cases that involve public confidence in systems, national security and geopolitical factors may come into play regarding how much to disclose, making what once might have seemed an esoteric technical challenge into a vexing political problem.
Finally, transparency and sharing of information related to detected attacks and their attribution would increase the value of attack detection for the infrastructure as a whole. In addition to exploring technical means that might enable organizations to more readily share information, this also relates to social and decision sciences that could help inform how to incentivize such sharing and how to make it effective.
Systems need to be designed to be more defensible. Foundational research opportunities in this space range from clean-slate approaches,5 to high-assurance computing, to secure software development, to innovations in supporting technologies such as cryptography.
Specific areas that can help improve the security of software include understanding what classes of vulnerabilities can be detected automatically; research into languages that are more secure by default; support for end-to-end security policies along with analysis, synthesis, and compiler technology for automatic derivation of implementations of those policies; improvements in secure enclaves; tools for proving the absence of classes of errors; and virtualization, both as part of a security infrastructure and as a mitigation. In addition, ever larger systems will generate ever more data, necessitating research into how to manage increasingly large volumes of data, what data to collect and log, what sorts of analytics to apply, and how to do effective analytics in close to real time. Connecting data and analytics to operational needs would also be fruitful. Architectural concepts meriting continued investigation include resiliency, information hiding, sandboxing, monitoring and logging, internal analytics, and so on.
5 “Clean slate” typically refers to efforts to escape legacy constraints and see what can be accomplished using state-of-the-art modern approaches that will help us understand existing systems and their constraints as well as point the way toward systems of the future. See, for example, Defense Advanced Research Projects Agency, “Clean-slate Design of Resilient, Adaptive, Secure Hosts (CRASH),” 2015, http://opencatalog.darpa.mil/CRASH.html, or the Qubes OS team’s effort to develop a security-oriented operating system, https://www.qubes-os.org/intro/.
Cryptography is key to many aspects of secure systems and their networked connections and interactions. Specific areas that would contribute to improving foundational cybersecurity include the following: cryptographic agility and future-proofing (e.g., quantum-resistant cryptography and other replacement options), compositional cryptographic protocols, integration of the possibility of side-channel attacks in algorithm and protocol design, making assumptions more realistic, improving the prospects and performance of homomorphic encryption, estimating not just upper bounds (how hard systems are to break) but lower bounds (to provide a sense of expected longevity), and connecting proofs of cryptographic security to the actual code in verifiable ways.
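One of these areas, cryptographic agility, can be made concrete with a small sketch: if every stored digest (or ciphertext) carries an identifier for the algorithm that produced it, an algorithm can be retired or replaced without invalidating existing data. The sketch below, with a hypothetical organization-wide default, illustrates the pattern for hashing:

```python
import hashlib

# Sketch of cryptographic agility for hashing: each stored digest is
# tagged with the algorithm that produced it, so old records remain
# verifiable while new data migrates to a stronger algorithm. The
# registry and the chosen default are illustrative assumptions.
ALGORITHMS = {"sha256": hashlib.sha256, "sha3_256": hashlib.sha3_256}
CURRENT = "sha3_256"  # hypothetical current default

def tag_digest(data: bytes) -> str:
    return CURRENT + ":" + ALGORITHMS[CURRENT](data).hexdigest()

def verify(data: bytes, tagged: str) -> bool:
    alg, _, digest = tagged.partition(":")
    if alg not in ALGORITHMS:  # unknown or retired algorithm: fail closed
        return False
    return ALGORITHMS[alg](data).hexdigest() == digest

legacy = "sha256:" + hashlib.sha256(b"record").hexdigest()  # older record
assert verify(b"record", legacy) and verify(b"record", tag_digest(b"record"))
```

The same tagging discipline generalizes to signatures and key-exchange protocols, where it would allow, for example, a staged migration to quantum-resistant algorithms.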
Improvements in hardware security are also important in making systems more defensible. Research opportunities in this space range from techniques for verification of hardware designs to the development of security-enhanced architectures that take full advantage of new hardware capabilities. Integrating hardware security efforts with efforts elsewhere in the stack, toward an end-to-end approach, can lead to improvements, but doing so requires understanding the interactions between hardware and the other components.
Not all attacks can be detected, and even the most defensible systems will have vulnerabilities. Thus, deploying systems that are resilient and adaptable is important for improving cybersecurity and, more importantly, for ensuring that work can continue. Resilience is an attribute of system operations and is critical to the functioning and operation of networks and the Internet itself. Resilience needs to be assessed and viewed as a systems operational capability. It is achieved by combinations of many different system features, not only the security components. Research is needed on how to operate through cyberattacks, on what it means to operate in a degraded mode, and on how to develop a system such that degraded operations are valid and sufficient to meet mission needs.6
6 For one opinion on prioritization and reducing risks of dependence, see R. Danzig, “Surviving on a Diet of Poisoned Fruit: Reducing the National Security Risks of America’s Cyber Dependencies,” July 21, 2014, Center for a New American Security, https://www.
Additional research opportunities include the following: designing for adaptability, recovery, and graceful fail-over; developing the ability to assess the impact of a compromise and roll back to a pre-compromise state; physical protection and/or redundancy of key hardware components against electromagnetic pulse or other physical attack; increasing the utility of logging; understanding the tension between updateable (patchable) components and immutable components; and exploring ways to improve operational security. An additional opportunity is the design of systems that incorporate sufficient redundancy or consistency checks to help withstand or detect “supply chain attacks” (on both the software and hardware supply chains) that are conducted, possibly by insiders, during the development or maintenance process; these problems are far from solved.7
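One illustrative pattern for such redundancy-based consistency checks: compute a result with nominally independent implementations and accept it only on majority agreement, so that a single subverted component cannot silently alter the outcome. The three "implementations" below are trivial stand-ins, and this is only a sketch of the idea rather than a statement of how such systems are built:

```python
from collections import Counter

# Sketch of a consistency check against a single tampered component:
# the same computation is run through nominally independent
# implementations, and a result is accepted only if a majority agree.
# Real systems would use implementations from separate supply chains.

def impl_a(x): return x * x
def impl_b(x): return x ** 2
def impl_c(x): return sum(x for _ in range(x)) if x >= 0 else x * x

def voted(x, impls=(impl_a, impl_b, impl_c)):
    results = Counter(f(x) for f in impls)
    value, votes = results.most_common(1)[0]
    if votes <= len(impls) // 2:  # no strict majority: treat as suspect
        raise RuntimeError("no majority; possible compromise")
    return value

print(voted(7))  # 49, even if any single implementation were subverted
```

The same voting idea underlies classic N-version programming; its protection is only as strong as the actual independence of the redundant components, which a supply-chain attacker may also target.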
A fundamental tool both for detecting attacks and recovering from them is discovering unexpected changes. One key aspect is inspectability. If something can be changed by an attacker but cannot be inspected by a defender, the defender cannot detect the change except by its effects. Even then, the defender may not be able to deduce what component was changed and needs to be fixed. Detecting the change is critical, even if the effect of the change is completely obscure. For instance, some Intel CPUs load a specific chunk of data when starting. An unexpected change in that data would be an alerting signal, as would the discovery that one of a batch of otherwise identical CPUs is loading a different chunk. This is analogous to what is done to assess software artifacts by comparing their hashes with known good hash values. Despite the attractiveness of the concept, it remains an open problem to find all the components in a complex system that need to be inspected, and it is not obvious how to implement a complex system in which all changeable components can be inspected.
7 The Defense Science Board is undertaking a study on the cyber supply chain that will review DoD supply chain risk management activities and consider opportunities for improvement (see Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, “Terms of Reference—Defense Science Board Task Force on Cyber Supply Chain,” November 12, 2014, http://www.acq.osd.mil/dsb/tors/TOR-2014-11-12-Cyber_Supply_Chain.pdf). There is also a Defense Advanced Research Projects Agency program exploring how to eliminate counterfeit integrated circuits from the electronics supply chain (see K. Bernstein, “Integrity and Reliability of Integrated Circuits (IRIS),” Defense Advanced Research Projects Agency, http://www.darpa.mil/program/integrity-and-reliability-of-integrated-circuits, accessed September 2016).
Research efforts that link social and behavioral sciences and cybersecurity should be positioned to encourage advances in cybersecurity practices and outcomes. As an example of the importance of the linkage to social, behavioral, and decision sciences, the Federal Cybersecurity Research and Development Strategic Plan,8 mentioned earlier, also emphasized the concept of deterrence (in addition to detection, protection, and adaptation, discussed above). In the strategic plan, key challenges for deterrence include economic, policy, and legal mechanisms, as well as technical efforts. Below are several focus areas where insights from the behavioral and social sciences could prove useful. Small examples of how these topics could connect with and inform more traditional cybersecurity topics are offered after each. These are just examples, however, not a comprehensive list.
- How individuals interact with and mentally model systems, risk, and vulnerability—implications for defaults, user interfaces, development tools, enterprise security practices.
- Work group activities, knowledge sharing, and norm setting related to cybersecurity policies and practices—implications for how enterprise security tools are designed and deployed.
- Incentives and practices in organizations; how to manage organizations to produce desired outcomes (in this case appropriate cybersecurity outcomes)—implications for how enterprise security tools are designed and deployed and which defaults are chosen and promulgated.
- Adversary assessment, attribution, interruption, deterrence, and managed engagement—implications for cyber attribution research.
- Understanding and mitigating insider threat—detecting and stopping the malicious insider remains a hard problem, and the best thinking of social scientists could help make advances on it. For example, what is known about what leads to security or safety infractions not being reported, and to what extent do results from other domains apply in a cybersecurity context? Similarly, what is understood about the causes of insider betrayal (such as greed, blackmail, or revenge), and are there demonstrated ways organizations can mitigate these risks?
8 National Science and Technology Council, Federal Cybersecurity Research and Development Strategic Plan: Ensuring Prosperity and National Security, Networking and Information Technology Research and Development Program, February 2016.
- Why and how cybersecurity measures are adopted by individuals, groups, organizations, institutions, and adversaries—implications for the design of tools and practices and for dissemination, prioritization, and implementation.
- Assessing and understanding acquisition practices, business norms, and evaluation practices when software and systems are acquired and what impacts those norms and practices have on cybersecurity requirements and outcomes.
- What effect will the emerging market for cybersecurity insurance have on outcomes? What models of insurance are most appropriate? (Options may include approaches similar to insurance for natural disasters, shared risk pools such as for climate change, occupational safety, or some other approach.)
- What makes “honeypots”—systems designed to simulate targets and act as decoy to attackers—effective?
- How can public trust in cybersecurity warnings be enhanced? For example, how has the Centers for Disease Control and Prevention maintained the credibility of its warnings, despite some false alarms, and what can be learned from that experience?9
- Economics of technology adoption and transition in companies and institutions and in developer communities—implications for the design of tools and practices and for prioritization. For people who do not want new or updated technology, under what conditions are new technology or practices likely to be acceptable?
- Understanding the market for zero-day exploits. Does the market rely on exclusivity? If so, what mechanisms are used to enforce it? To what extent are intermediaries at work? How could intermediaries be disrupted, discredited, co-opted, imitated, or punished?
- Cybersecurity skills gap. There is currently a shortage of well-qualified people entering the field of cybersecurity. The gap is large and growing. A long-term approach will be needed to get more students interested in pursuing careers in cybersecurity. Social scientists and educational researchers can help understand and make progress in closing this skills gap.
- Sectoral and intersectoral analyses—How can coordination be improved? What role does protecting the commons play?—implications for deployment and prioritization.
9 A forthcoming report of a Workshop on Building Communication Capacity to Counter Infectious Disease Threats from the Forum on Microbial Threats considers the challenge of public trust and warnings in the public health context. For more information, see http://nationalacademies.org/hmd/Activities/PublicHealth/MicrobialThreats/2016DEC-13.aspx.
- Managing conflicting needs and values at policy, organizational, and technical levels—implications for prioritization.
- Assessing and overcoming barriers to collaboration and effective practice—implications for deployment and adoption.
- Risk analysis—risk analyses that gather relevant data and carefully represent all stakeholders can provide guidance on how to communicate across the enterprise about appropriate expectations of system behavior and performance.
- Criticality analysis—how to decompose and assess the criticality of components or capabilities and dependencies among them in large, complex systems.
Efforts in these areas will be helpful to the cybersecurity challenge if they are connected to systems design, tooling, engineering, and deployment. It will be important to integrate research results from the areas above to ensure that, for example, organizations are more likely to implement effective practices and policies and that developer communities understand how users of tools and systems are likely to behave when confronted with options. One topic that could tie together both organizational science and the cybersecurity research community is the issue of adoption of policies and practices and understanding how major improvements are adopted. The research community has had many significant ideas over time, but only some of these have been widely adopted. How did this happen? How were these chosen and promulgated? Is, for instance, most of the improvement because of adoption by a few high-leverage organizations?10
Interdisciplinary work of the sort described above can also provide foundational principles of cybersecurity to help inform both research and practice. For example, a foundational discovery from social science work is that diversity in membership can sometimes improve the performance of problem-solving groups.11 This principle reinforces the argument for including social scientists in cybersecurity projects. Another example of a foundational principle is that there is a trade-off between sharing information widely in an organization to improve performance and restricting information sharing to reduce damage if one part of the organization is hacked. Research could be done on how best to make this trade-off, as well as on how an organization can be designed to function well in the face of local breaches of security. Another foundational principle is from the 2016 strategic plan: “Users . . . will circumvent cybersecurity practices that they perceive as irrelevant, ineffective, or overly burdensome.” As noted above, interdisciplinary research on how to cope with this and produce more usable security tools and practices is needed.
10 C.E. Landwehr, D. Boneh, J.C. Mitchell, S.M. Bellovin, S. Landau, and M.E. Lesk, Privacy and cybersecurity: The next 100 years, Proceedings of the IEEE 100:1659-1673, 2012; C.E. Landwehr, “History of US Government Investments in Cybersecurity Research: A Personal Perspective,” pp. 14-20 in Proceedings of 2010 IEEE Symposium on Security and Privacy, May 2010; C.E. Landwehr, “Computer Security,” Tutorial paper, International Journal on Information Security 1(1):3-13, 2001.
11 See, for example, S. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, Princeton University Press, Princeton, N.J., 2008.
Finally, there are two overarching challenges that will draw on both social, behavioral, and decision sciences research and the technical research outlined here.
One is the question of how to assess and determine the criticality of a particular capability or application in a given context. Literature regarding mission assurance and requirements engineering is relevant to criticality. The techniques of contextual inquiry, requirements modeling, and even ethnography are also appropriate to this topic. How can we determine or identify mission-critical aspects of a capability so as to better connect research results and outcomes to mission-critical applications? Put another way, what are the essential capabilities for a given mission, and what are those that could be deprecated, if needed? For instance, in some circumstances, maintaining an accurate location at all times might be essential; in others, maintaining the ability to collect and store data is essential. Those sorts of analyses and prioritizations require knowledge about the mission and its goals. Related to these issues is the question of how systems can be architected so that overall security can be improved at minimal cost. This will entail both deep technical understanding and social, behavioral, and decision science understanding, since the mission criticality of any given system is dependent on the context within which it is deployed.
Another overarching challenge is finding better ways to evaluate the results of technical research and prioritize the implementation of potentially high-impact results. The basic question of how to transition from research to practice is difficult. It is also important to support a research community that can do advance work that is long-horizon, high-uncertainty, possibly non-appropriable, and potentially disruptive of current practices. Can we learn from social and organizational theory about how to drive focused change (e.g., leveraging new research results) to improve an organization’s cybersecurity posture and outcomes? The challenge of technology transfer and adoption relates to markets as well; thus, expertise found in business schools and experience found in the venture capital community can also help inform priorities and emphasis. Competition
has both positive and negative aspects. In some cases, breakthroughs in the research community are game-changing ideas that would, in fact, disrupt industry incumbents, forcing them to innovate at a faster pace. These incumbents might prefer to inhibit the unwanted disruption and the consequent market uncertainty. Scholars in business schools and venture capitalists can both offer perspectives on, for instance, how and why particular technologies have succeeded, as well as on how the structures and assumptions of markets and organizations affect technology adoption and penetration. In the federal acquisition context, for instance, government acquisition efforts can have effects on markets. The federal government obviously has some market power with its prime contractors and their immediate supply chain, although it is a challenge for it to be a smart customer in this regard. But it may have relatively less market influence with enterprise system vendors and open-source foundations. Research into how large organizations signal their needs, and thus influence the market, could be helpful.
Two important issues with respect to both criticality analysis and evaluation are cost and personnel. Some laudable approaches to security, such as the U.S. government’s “Rainbow Series” of requirements for government systems, failed to achieve their goals.12 This was in part due to the time and cost of constructing the assurance argument required for such systems, the lack of personnel sufficiently trained in formal methods to construct an adequate assurance argument, the limited usability of the fundamental model underlying the approach, and the lack of trained personnel who could evaluate systems that were intended to be the most secure.
12 These were a series of computer security guidelines and standards published by the U.S. Department of Defense and the National Computer Security Center in the 1980s and 1990s. The most relevant here was the Trusted Computer System Evaluation Criteria, nicknamed “the Orange Book.” See S. Lipner, “The Birth and Death of the Orange Book,” IEEE Annals of the History of Computing, April-June 2015.