Attaining meaningful cybersecurity presents a broad societal challenge. Its complexity and the range of systems and sectors in which it is needed mean that successful approaches are necessarily multifaceted. Moreover, cybersecurity is a dynamic process involving human attackers who continue to adapt. Despite considerable investments of resources and intellect, cybersecurity continues to pose serious challenges to national security, business performance, and public well-being. Modern developments in computation, storage, and connectivity to the Internet have brought into even sharper focus the need for a better understanding of the overall security of the systems we depend on.
The research cultures that have developed in the security community and in affiliated disciplines will increasingly need to incorporate lessons not just from a wider variety of disciplines, but also from practitioners, developers, and system administrators responsible for securing real-world operational systems. This report is aimed primarily at the cybersecurity research community, but takes a broad view that efforts to improve foundational cybersecurity research will need to include many disciplines working together to achieve common goals.
There have been many reports on cybersecurity research offering many recommendations. Rather than echo these reports and expand their lists of proposed projects, the committee has focused on foundational research strategies for organizing people, technologies, and governance. These strategies seek to ensure the sustained support needed to create an agile, effective research community, with collaborative links across disciplines and between research and practice.
Part of the task of the Committee on Future Research Goals and Directions for Foundational Science in Cybersecurity was to consider gaps in the federal research program. In the committee’s view, the security community and funders understand the breadth of the challenge and the importance of emphasizing progress on all fronts—a breadth evident in the diverse approaches taken by the federal agencies supporting cybersecurity research. Instead of focusing on gaps, this report offers a framework that links research efforts. The strategy advocated below requires unusual collaborations between disciplines focused on technologies and those focused on the individuals and organizations that attack and protect them. Achieving those collaborations will require creating incentives that run counter to academic pressures for publications and user pressures for short-term results.
To this end, the committee’s analysis is organized under the following four broad aims for cybersecurity research:
- Support, develop, and improve security science—a long-term, inclusive, multidisciplinary approach.
- Integrate the social, behavioral, and decision sciences into the security science research effort, since all cybersecurity challenges and mitigations involve people and organizations.
- Integrate engineering and operations for a life-cycle understanding of systems.
- Sustain long-term support for security science research, providing institutional and community opportunities that reinforce these approaches.
Not every research effort will or needs to address all four aims. However, articulating where each sits with respect to them is important to the coherence of the research program. These four aims are discussed below.
STRENGTHEN THE SCIENTIFIC UNDERPINNINGS OF CYBERSECURITY
Security science has the goal of improving understanding of which aspects of a system (including its environment and users) create vulnerabilities or enable someone or something (inside or outside the system) to exploit them. Ideally, security science provides not just predictions for when attacks are likely to succeed, but also evidence linking cause and effect and pointing to solution mechanisms. A science of security would develop over time, for example, a body of scientific laws, testable explanations, predictions about systems, and confirmation or validation of predicted outcomes.
A scientific approach to cybersecurity challenges could enrich understanding of the existing landscape of systems, defenses, attacks, and adversaries. Clear and well-substantiated models could help identify potential payoffs and support for mission needs while steering effort away from likely dead ends and poor investments. There are strong and well-developed bases in the contributing disciplines. In mathematics and computer science, these include work in logic, computational complexity, and game theory. In the human sciences, they include work in judgment, decision making, interface design, and organizational behavior.
INCLUDE THE SOCIAL, BEHAVIORAL, AND DECISION SCIENCES IN SECURITY SCIENCE
Technical approaches alone will not suffice for cybersecurity as long as humans play roles in systems as developers, users, operators, or adversaries. Major security breakdowns have occurred because individuals misplaced trust, organizations created perverse incentives, or adversaries discovered and exploited design flaws. Meeting security needs effectively requires understanding that human context. How does cybersecurity affect the real-world business or organization? Does it drain human or other resources, or does it reflect a balance between keeping the business or organization secure and keeping it economically viable? What investments does it deserve and receive? How does the perceived value of possible practices compare with their demonstrated efficacy? What evidence would help to make that assessment? The social, behavioral, and decision sciences provide the reservoir of knowledge for addressing some of these questions and for making other research more useful for those responsible for vulnerable systems. Such expertise can also be vital, especially during design, for revealing disconnects between intention and actual use and for articulating the variety of potential users and their contexts.
Human behavior affects all stages of the life cycle of a cybersecurity system: design, implementation, operation, maintenance, monitoring, revision, and replacement. Each stage offers opportunities to increase or reduce vulnerability: how design teams are constituted and managed, how procedures and interfaces are tested for usability, what incentives and resources are provided for security, how operators are trained and their performance evaluated, how the business case is made and understood. Our adversaries’ systems have their own life cycles, which might be disrupted at each stage. As a result, achieving effective cybersecurity will depend on understanding and addressing human dimensions of systems. Doing so will require overcoming psychological and institutional impediments that make effective collaboration difficult.
On the whole, the traditional cybersecurity community lacks expertise in social science analysis tools and methods. As a result, it not only relies on intuition when it could draw on available science, but also struggles in its attempts to validate its approaches, falling below scientific standards in experimental design. Collaborating with social scientists and understanding their standards for research and publication would bring new tools to bear and yield new insights. A psychological barrier to such collaborations is that system designers and operators have unwarranted confidence in their intuitive theories about others’ behavior. Indeed, the human–computer interaction research community has a motto, “You are not the user,” to remind researchers and practitioners not to assume that other people share their perceptions and motivations.
The primary institutional barrier to utilizing social, behavioral, and decision science expertise is that these disciplines are largely absent from the cybersecurity research community. Indeed, the community often lacks even the absorptive capacity to identify these needs, recruit the expertise needed to address them, and critically evaluate claims about system security elements that can be compromised by human failures in design, deployment, training, or management. Without that expertise, the cybersecurity community must improvise its own versions of the theories and research methods that are central to those sciences: experimental design, identifying confounds, meta-analysis, sensitivity analysis, and so on. Conversely, because those sciences have not been drawn into addressing the unique challenges of cybersecurity, they have limited experience in applying (and explaining) their research to those domains and in identifying the unique foundational research questions that they pose.
To create more effective collaborations, it is essential to foster interactions that address the needs of both these disciplines and the cybersecurity community. One thing cross-disciplinary researchers can do is to evaluate how well a result from one context will hold true in another.
The committee identifies barriers to collaboration both within and among the disciplines and discusses strategic institutional options to overcome them. Those strategies include creating institutional settings with the following:
- support reserved for projects that are jointly defined by members of different disciplines;
- working groups with the sustained interpersonal contact needed to create trusted relationships and absorb one another’s work practices;
- training programs in the essentials of other disciplines, both short term for working professionals and extended at the undergraduate, graduate, and postdoctoral levels; and
- positions that require their holders to demonstrate both practical and academic accomplishments.
Cybersecurity poses grand challenges that require unique collaborations among the best people in the relevant core disciplines, who typically have other options for their time and energy. Sponsors of cybersecurity research need to create the conditions that make it worth their while to work on these issues. If successful, cybersecurity research will benefit not only from the substantive knowledge of the social, behavioral, and decision sciences, but also from absorbing their research culture with respect to theory building, hypothesis testing, method validation, experimentation, and knowledge accumulation—just as they will learn from the complementary expertise of the cybersecurity community. Thus, these collaborations have transformative potential for the participating disciplines, while addressing the urgent practical problems of cybersecurity.
ADDRESS ENGINEERING, OPERATIONAL, AND LIFE-CYCLE CHALLENGES IN SECURITY SCIENCE
Improving cybersecurity requires that security considerations be integrated into the practice of hardware and software development and deployment. Research in many key technical areas can embed assumptions about, for example, agility and expected operations and maintenance; alternatively, research can focus on improving post-deployment activities. That is, research on how maintenance and system administration affect overall system performance is part of a holistic approach to cybersecurity research.
In this spirit, software development organizations developing commodity systems have made substantial efforts to improve their practices and systems. These organizations have, over time, created development practices for reducing the prevalence of exploitable vulnerabilities in released software. A critical component of these approaches is feedback loops tracing discovered vulnerabilities or attacks to root causes. Applied systematically, these feedback loops can lead to new tools or techniques and are fundamental to improving cybersecurity. These efforts are an essential part of security science, integrating what is known about state-of-the-art software engineering practices; social, behavioral, and organizational theory; current understandings of the threat landscape; and models of attacks and defenses. Practical lessons from companies working at the cutting edge of secure system development can inform research approaches that incorporate scientific models.
System administrators and other practitioners are often on the front lines of securing systems. It is important to develop mechanisms whereby researchers can learn about the real problems practitioners experience in the field. Opportunities include work on resilient architectures, composition of components and properties, logging systems, variability, and configuration. Researchers in these areas benefit significantly from industry contacts and trust relationships with practitioner colleagues.
Research is also needed on metrics useful for informing organizational practices. As a starting point, the measures used by the most successful organizations could be published in a form that others can adopt as metrics; this work would need to be kept up to date. Research is also needed on which organizational practices are feasible and the extent to which they enhance security. Adversaries change tactics and approaches frequently, and organizations that hope to defend themselves must adapt continuously. Security science here will involve understanding how science, models, attacks, and defenses interact; how systems are engineered, deployed, and maintained; and how organizations decide to invest in, develop, and promulgate technologies, practices, and policies regarding security.
The traditional decoupling of academic research from engineering, quality control, and operations leaves gaps in a domain like cybersecurity, where solutions are needed for systems deployed at scale in the real world. These gaps highlight the importance of not just technology transfer, but of incorporating a life-cycle perspective (from development to deployment, maintenance, administration, and aftermarket activities) into proposed foundational research approaches.
SUPPORT AND SUSTAIN FOUNDATIONAL RESEARCH FOR SECURITY SCIENCE
This report is intended to complement the federal Networking and Information Technology Research and Development Program’s 2016 Federal Cybersecurity Research and Development Strategic Plan.1 It elaborates on several specific components of the strategic plan and offers a framework for organizing that research. Most elements of the research agenda in this report can be mapped to components of the strategic plan. The committee outlines a foundational technical research agenda clustered around three broad themes that correspond to those in the strategic plan: detect (detection and attribution of attacks and vulnerabilities), protect (defensible systems that are prepared for and can resist attacks), and adapt (resilient systems that can recover from or cope with a wide range of adversarial behavior). Many familiar technical topics fall within these clusters. Many challenges span them—making an understanding of how they interact critical.
Research that links social, behavioral, and decision sciences and cybersecurity should encourage advances in cybersecurity practices and outcomes. Illustrative topics include the following: how individuals interact with and mentally model systems, risk, and vulnerability; incentives and practices in organizations; adversary assessment; why and how cybersecurity measures are adopted; and managing conflicting needs and values in policy, organizations, and technologies. A better understanding of how policies, practices, and improvements are adopted (or neglected) would allow cybersecurity research to leverage organizational science. Two overarching research challenges are how to assess the criticality of a particular capability or application in a given context and how to evaluate the results of research and prioritize implementation. In both cases, there are opportunities to apply foundational science to cybersecurity needs.

1 National Science and Technology Council, Federal Cybersecurity Research and Development Strategic Plan: Ensuring Prosperity and National Security, Networking and Information Technology Research and Development Program, February 2016.
FOSTER INSTITUTIONAL APPROACHES AND OPPORTUNITIES TO IMPROVE SECURITY SCIENCE
The research cultures that have developed in the security community and affiliated disciplines will increasingly need to adjust to embrace and incorporate lessons both from a wider variety of disciplines and from practitioners, including the developers and administrators responsible for securing real-world operational systems. Given the dynamic, rapidly evolving nature of the problem, the cybersecurity research community itself has struggled to develop a sustained science of security. A 2016 Computing Research Association (CRA) memo2 suggests that computing research suffers from counterproductive incentives emphasizing publication quantity and short-term results, inhibiting longer-term efforts and infrastructure development. This report proposes strategies to address these counterproductive incentives.
The committee was asked to consider gaps in the federal research program. In the committee’s view, the security community and funders understand the breadth of the challenge. The gaps the committee identified are not, strictly speaking, topics or problems going unaddressed. Instead, the committee focused on how programs and projects are framed and conducted, with an emphasis on creating an integrative security science capable of seeking and incorporating relevant social, behavioral, and decision science results and operational and life-cycle understandings.
The committee was also asked to consider how foundational efforts in cybersecurity bear on mission-critical applications and challenges, such as those faced by agencies working in classified domains. From the committee’s perspective, the same principles apply, whatever the domain (from protecting systems that contain information on an intelligence agency’s sources and methods, to preventing the servers running the latest bestselling augmented reality mobile game from being compromised, to general deterrence efforts).

2 B. Friedman and F.B. Schneider, “Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion,” Computing Research Association, 2015, http://cra.org/resources/best-practice-memos/incentivizing-quality-and-impact-evaluating-scholarship-inhiring-tenure-and-promotion/.
Thus, foundational efforts in cybersecurity, as described in this report, could yield broadly applicable results. One potential distinction the committee considered was between classified and unclassified efforts: although there may be differences in the nature of the threat and in what is known about it and by whom, private-sector entities are increasingly on the front line, facing and securing themselves against “nation-state”-level attacks. Moreover, even if people and processes differ between public- and private-sector organizations, all depend on human behavior of the sort that social, behavioral, and decision science research, integrated with technical cybersecurity research, can inform.
There are also research efforts in the classified and unclassified domains that leverage similarities in basic technologies, in humans interacting with systems, and in the organizations managing them. Those connections are not always made, however; it falls to funders, researchers, and consumers of research to make them. Problems and assumptions may need to be translated across the classified/unclassified boundary, but foundational results should be applicable on each side. It will be particularly important to develop and find people who are skilled at these translations.
* * *
The challenge of cybersecurity and the urgency of the risks that insecure systems pose to society in a dynamic, fast-changing environment understandably promote an emphasis on moving fast. Paradoxically, however, the field is still so comparatively new, and the challenge so hard, that in-depth scientific research is needed to understand the very nature of the artifacts in use, the nature of software, the complexity and interdependencies of these human-built systems, and, importantly, how the humans and organizations who design, build, use, and attack the systems affect what can be known and understood about them. Encouraging research to address these challenges will require sustained commitments and engagements. Thus, programs that encourage long-horizon projects, where these connections can be worked out, will be important.
The fact that these systems are designed, developed, deployed, and used by humans, and that humans are also the adversaries behind attacks on them, means that the work done in the social, behavioral, and decision sciences will be critical. Deepening understanding of humans and human organizations, and linking that understanding to more traditional research in cybersecurity, is necessary to develop a robust security science and to deploy systems most effectively so that they do what they were designed to do, secured against human adversaries. Cybersecurity can be viewed as a cutting edge of computing that demands a broad, multidisciplinary effort. Addressing the global cybersecurity challenge requires not just computer science, engineering science, and mathematics, but also partnerships with other disciplines to draw on what we know and understand about human nature and how humans interact with and manage systems—and each other.