Attaining meaningful cybersecurity presents a broad societal challenge. Its complexity and the range of systems and sectors in which it is needed mean that any approach will necessarily be multifaceted. Moreover, cybersecurity is a dynamic process involving human attackers who continue to adapt, and security can never be guaranteed. Instead, we might think in terms of mitigations and continually improving resilience in an environment of continuous attack and compromise.
The research cultures that have developed in the security community and in affiliated disciplines will increasingly need to adjust to embrace and incorporate lessons and results not just from a wider variety of disciplines, but also from practitioners, developers, and system administrators who are responsible for securing real-world operational systems. This chapter first explores opportunities to improve research practices, structural approaches that can help in interdisciplinary environments, and ways to address security science in federal research programs. A brief discussion of how to assess and evaluate research and how foundational efforts in cybersecurity research bear on mission criticality follows.
In 2015, the Computing Research Association (CRA) Committee on Best Practices for Hiring, Promotion and Scholarship published a best practice memo, “Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion.”1 In it, they recommended an increased emphasis on scholarship and quality over quantity:
Sheer numbers of publications (or derivative bibliometrics) should not be a primary basis for hiring or promotion, because this does not encourage researchers to optimize for quality or impact. Other proxy measures are similarly problematic. For example, whether program committee service indicates an individual’s stature in the field depends on the conference.
That memo also urged changes in the publication culture in the field:
Systemic changes throughout the publication culture would help to support better scholarship. With new technology and digital delivery, publishers could remove page limits for reference lists and could allow appendices for data, methods, and proofs. Editors, as appropriate, could consider longer submissions with the understanding that, in such cases, a longer review period would be likely. In addition to conferences with published proceedings, other professional gatherings (that do not publish proceedings) might be held where work-in-progress could be presented.
Although this memo was aimed at the computing research community broadly, similar challenges exist in cybersecurity research. Beyond negative impacts on scholarship and quality, incentives that reward frequent publication also end up draining time and energy from the pool of reviewers (who are themselves researchers). More generally, looking for ways to improve institutional incentives broadly is important. The CRA memo also emphasizes scientific and real-world impact, which has implications for cybersecurity research, since a focus on those impacts can help motivate researchers to do the hard work that is often required to support validity and transition to practice. This work can include prototyping, partnering, and persistence in resolving issues.
Given the dynamic and rapidly evolving nature of the cybersecurity problem, the research community itself has struggled to develop a sustained science of security. The CRA memo above suggests that computing research in general suffers from counterproductive incentives related to publication quantity and an emphasis on short-term results.2 Of course,
1 B. Friedman and F.B. Schneider, “Incentivizing Quality and Impact: Evaluating Scholarship in Hiring, Tenure, and Promotion,” Computing Research Association, Washington, D.C., 2015, http://cra.org/resources/best-practice-memos/incentivizing-quality-and-impact-evaluating-scholarship-in-hiring-tenure-and-promotion/.
2 These problems are not unique to the field of cybersecurity research. The Economist recently discussed challenges of replication and reproducibility in the psychological sciences: “Ultimately, therefore, the way to end the proliferation of bad science is not to nag people to behave better, or even to encourage replication, but for universities and funding
such problems are endemic across science. The incentives for program managers at funding agencies, for researchers, for journal reviewers, and for conference organizers unfortunately too often skew against the development and practice of solid science. Funding agencies are under pressure to achieve short-term results; the cybersecurity research landscape is multifaceted, making it a challenge to choose focus areas.
Academic researchers are incentivized to publish as often as possible; a secondary effect of this is the impetus to avoid longer-term efforts and infrastructure work. Another side effect of the emphasis on quantity is that the peer-review process itself becomes overloaded, making it hard to find high-quality reviewers. As the community’s intellectual and oversight resources are stretched, incentives to submit rejected work to another venue without making improvements can also increase. In addition to reducing the quality of the literature, that tends to waste the time and energy of the reviewers who had provided comments.
Another challenge arises due to the nature of academic prototypes. Such tools and systems do not, understandably, focus on all of the details that would need to be addressed in a production-quality artifact, which can result in vulnerabilities. But it is important to take care that the innovation being represented in the prototype does not itself interfere with getting details right later (including details related to social and behavioral assumptions).
Transparency is another challenge in the practice of research. An effective security science demands (among other things) replication of studies in different contexts, not only to verify the results stated in already-published papers, but also to help determine in which other contexts the results hold. But in cybersecurity studies, the real-world participants may be loath to allow publication of research data, for fear of revealing intellectual property (as with testing proposed new security approaches), giving up competitive advantage, or compromising customer or employee privacy. Simply demanding openness is unlikely to succeed. If researchers are to devise ways to evaluate others’ work and to perform replications, they will first have to listen carefully to real-world concerns.
Given the importance that society attaches to making progress in cybersecurity, it seems valuable to try to address these counterproductive incentives and pressures and to help put the community in a better position to make progress. There are opportunities to improve how research is conducted, reported, and evaluated. Situating research within a developed scientific framework suggests that reports of research results would have to clearly articulate how the research builds on previous efforts; what models of attacks, defenses, and/or policies the research is meant to address; and how it is informed by existing knowledge and integrates with other disciplines. This would serve to advance security science by enabling independent evaluation and assessment of past claims and pointing the way to eventual impacts on real-world systems.

agencies to stop rewarding researchers who publish copiously over those who publish fewer, but perhaps higher-quality papers.” In The Economist, “Incentive Malus,” September 24, 2016, http://www.economist.com/news/science-and-technology/21707513-poor-scientific-methods-may-be-hereditary-incentive-malus.
With regard to experimental methods and investigational approaches, there are opportunities for cybersecurity researchers to learn from the ways that other disciplines communicate about methodology. For instance, if a project involves human subjects, then the report should make clear the characteristics of the subject pool from which the subjects were drawn, what the selection mechanism was, and what the pool’s general demographics were.3 Other questions to consider are the following: How were the subjects compensated for their time? What instructions were the subjects given? What was the design of the study? What statistics were used? There are well-understood ways to perform case studies and surveys when experiments are impractical or impossible. There are also observational studies using well-accepted techniques from anthropology, sociology, and psychology. In many cases, it is the documentation of steps toward consistency and collaboration that is difficult to do but essential both to the science of security and to enabling cross-disciplinary investigation. Identifying threats to the validity of the study and its results can also be helpful. In many cases, the act of having to write that kind of section can bring fresh insights to the problem and also help the reviewers and others who need to evaluate the work.
In systems research in computing, there are sometimes empirical components (e.g., the use and analysis of the behavior and performance of specific artifacts) that, although not involving human subjects, could be reported in ways that explicitly draw from the list above to better position the work in the literature and clarify—to both the original researchers and to ultimate consumers of the research—the potential impact and reproducibility of the results.
One specific action that researchers and those who evaluate research results could take would be to shift the standards and expectations for how results are presented, especially for research involving human
3 Relying on collecting data through Mechanical Turk studies (in which very low paid workers click through a questionnaire or carry out tasks) may not yield high-value data. A separate issue is trials carried out on live systems by companies with or without the participation of academic researchers. Concerns about ethical and scientific integrity have been raised about some studies where customers may not know they are participating. (For example, a study Facebook conducted to examine how emotions spread on social media resulted in controversy over the ethics of experimentation on humans.)
subjects.4 Social science disciplines have standard ways of reporting studies—the problems, constraints, data, methods, limitations, and results. Cybersecurity research involving human subjects can readily follow those conventions, and cybersecurity research in other domains can learn from these disciplines and develop comparable conventions. Lack of such structure can impede comprehension and create opportunities for errors of omission (e.g., inadequacies in describing designs, in justifying decisions, in sampling, and so on). For some types of research, structured reporting can also help focus the design and conduct of the research that will eventually be reported. Within the seemingly prosaic function of publication practices, the committee identified the following potentially effective leverage points:
- Encourage structured abstracts5—structured abstracts are easier to read, facilitate rapid comprehension and effective peer review, and lend themselves more readily to meta-analyses.
- Encourage clear statements of the research questions and how results relate to improving the understanding or management of real-world problems.
- Encourage cybersecurity researchers to become trained in experimental research methods and in explaining how they have been used—both to benefit their own work, when appropriate, and to inform their work as reviewers and evaluators of others’ work.
- Emphasize reporting on methodologies and expect that researchers make explicit any experimental and evaluation methodologies used in their work.
- Expect that results be appropriately contextualized, both within the broader scientific literature and with regard to the particular problem domain.
- Develop ways to track studies and outputs from projects that encourage researchers to take advantage of reviewer comments and suggestions even when papers are not accepted for publication.
- Expect that researchers explain what models of attacks, defenses, and/or policies a particular result is meant to address.
- Encourage replication of experiments and results.
- Encourage open publishing of data and software.
4 See R. Maxion, “Structure as an Aid to Good Science,” Proceedings of the IFIP Working Group, Workshop on The Science of Cyber Security, 2015, Bristol, U.K., http://webhost.laas.fr/TSF/IFIPWG/Workshops&Meetings/67/Workshop-regularPapers/Maxion-Bristol_012315.pdf.
5 A structured abstract is an abstract with explicit, labeled sections (e.g., Problem, Method, Results, Data, Discussion) to allow for ease of comprehension and clarity.
- Be explicit about what criteria are used to assess scientific contributions.
There have been some efforts to emphasize more structured approaches in reporting and communications by cybersecurity researchers themselves, including organizing a workshop on this topic,6 but there are opportunities to do more here as well. In particular, emphasizing reporting of this sort can have impact in two ways: It can, as discussed above, help buttress and elaborate the emerging science of security, and it can connect research results to outcomes in the real world.
The considerations described above are relatively standard in sciences used to dealing with human subject experiments. Although much of this might be seen as process-oriented, in the committee’s view, looking for opportunities to encourage these sorts of activities in the research enterprise—on the part of both funders and researchers—can help induce increased care with respect to how problems are framed and more thoughtfulness with regard to potential impact and leverage of the eventual results. Care also needs to be taken to ensure that overly rigid criteria are not used; excessive rigidity runs the risk of excluding categories of relevant and high-quality research. The assessment process itself should be under ongoing scrutiny to prevent this.
6 Proceedings of the IFIP Working Group, Workshop on The Science of Cyber Security, 2015, Bristol, U.K.

7 At the doctoral level, the University of Bochum in Germany is pioneering Tandem Dissertations in the SecHuman Doctoral Training Centre. A technical and a social science doctoral student and their primary advisors are paired in “tandems” and carry out research on the same topic, formulating interrelated research questions and tackling them with knowledge and methods from their backgrounds, reaching conclusions within each discipline first, and then reflecting on what emerges when they put the results together. In this way, each doctoral student produces a thesis that can be examined and understood by members of their own discipline, but that also provides an additional layer of insight.

To achieve effective interdisciplinary outcomes, work will need to be done across disciplinary boundaries—incorporating experts from many disciplines as well as individuals with deep expertise in more than one discipline.7 There are often institutional impediments related to the difficulties of interdisciplinary work—for instance, regarding the respect members of one discipline give members of other disciplines; ensuring that cultural differences across disciplines, reflecting conventions for documenting studies and their results, are respected; and appropriate incorporation of each discipline’s perspective in each step of the research. A recent National Research Council effort looked more broadly at the challenges of “team science” and the particular challenges of collaboration. The resulting report, Enhancing the Effectiveness of Team Science,8 offers policy recommendations for science research agencies and policy makers along with recommendations for individual scientists and universities. A separate effort explored the challenge of interdisciplinary research specifically at the intersection of computing research and sustainability. That report, Computing Research for Sustainability, offered a number of examples9 of opportunities to enhance interdisciplinary approaches that could also be applied to the interdisciplinary challenge in cybersecurity. A revised and extended version of those opportunities that focuses on cybersecurity research follows:
- Scholarships and fellowships both for computer science graduate students and for early-career professors that provide financial support for taking the time to develop expertise in a complementary discipline.
- The development of cross-agency initiatives that encourage interdisciplinary collaboration in relevant fields.
- Support for a regular series of workshops for graduate students and junior faculty on research methods and quality scholarship.
- Support for the development of new cross-discipline structures (perhaps departments or institutes) between cybersecurity and other fields that can create a new generation of students who are agile both in technical aspects of cybersecurity and in one of the social, behavioral, or decision sciences.
- Involvement of academic administrators and others who influence the context in which tenure and promotion decisions are made. For instance, incentives could be devised that provide a pathway to tenure and promotion through publication in social science venues, and projects could be framed so that graduate students can be involved and use the results for doctoral dissertations.
- Community meetings and other opportunities for collaborative and informal intellectual exchange, focused on improving methods and approaches, separate from the publication pipeline and review process. The National Security Agency’s lablets advance this approach, and much activity at conferences is of this sort.

8 National Research Council, Enhancing the Effectiveness of Team Science, The National Academies Press, Washington, D.C., 2015.

9 See “Programmatic and Institutional Opportunities to Enhance Computer Science for Sustainability” in National Research Council, Computing Research for Sustainability, The National Academies Press, Washington, D.C., 2012.
- Institutional structures that support multidisciplinary and interdisciplinary teams focused on a problem or set of problems over an appropriately long period of time.
- The possibility of funding and support for one or more years for individuals to work in small teams on specific topics.
- Coordination between academic research in cybersecurity and nontraditional industrial partners in sectors beyond the large information technology companies—to scope problems, help train students, and cross-fertilize ideas.
- Regular, high-level summits involving cybersecurity and social science experts—practitioners and researchers—to inform shared research design, assess progress, and identify gaps and opportunities.
Another role that funding agencies can play is to fund longer-term projects and to be tolerant of uncertainty, particularly in these multidisciplinary and cross-disciplinary, potentially high-impact research areas.
In addition to applying knowledge from other disciplines to the cybersecurity challenge, foundational cybersecurity efforts would also benefit from a deeper understanding of methods from other disciplines and how they might apply to cybersecurity. Applying methods of the social, behavioral, and decision sciences in cybersecurity research, where appropriate, is a way to enhance foundational approaches and also to open up potentially fruitful areas of insight and inquiry that more traditional technically focused agendas might overlook. In the committee’s view, there are fundamental research directions in the social sciences that could help increase understanding of and help solve some cybersecurity problems. However, these directions are not well explored and are typically not treated as a first-class area of research in either the social sciences or computer science and cybersecurity research. Areas from which cybersecurity research efforts might benefit include the following:
- Predictive models—integrating behavioral science in formal models; elicitation of expert knowledge;
- Failure analysis—predicting the scope and impact of a component compromise in a large-scale system;
- Policy analysis—especially the role of regulation and uncertainty analysis;
- Program evaluation—criteria setting and standards of evidence; and
- Communication—for education of decision makers, researchers, and users and deployers of technologies.
The committee was asked to consider gaps in the federal research program. In the committee’s view, the security community and funders understand the breadth of the challenge. The Networking and Information Technology Research and Development Program’s 2016 Federal Cybersecurity Research and Development Strategic Plan (summarized briefly in Appendix C) lays out a broad approach to addressing it. And, as an earlier National Research Council report10 noted, it is still important to emphasize progress on all fronts along with programmatic experimentation—a diversity evident in the different approaches and strategies among the federal agencies supporting cybersecurity research. The gaps that the committee identifies are not, strictly speaking, topics or problems that are going unaddressed. Instead, the committee recommends shifts in how programs and projects are framed, and an emphasis on seeking evidence of connections with and integration of the science of security; the relevant social, behavioral, and decision sciences; and operational and life-cycle understandings.
Cybersecurity poses grand challenges that require unique collaborations among the best people in the relevant core disciplines, who typically have other options for their research. Sponsors of cybersecurity research need to create the conditions that make it worth their while to work on these issues. If successful, cybersecurity research will benefit not only from the substantive knowledge of the social, behavioral, and decision sciences, but also from absorbing their research culture, with respect to theory building, hypothesis testing, method validation, experimentation, and knowledge accumulation—just as these sciences will learn from the complementary expertise of the cybersecurity community. Thus, these collaborations have transformative potential for the participating disciplines, as well as the potential to address the urgent practical problems of cybersecurity.
The traditional decoupling of academic research from engineering, quality control, and operations leaves gaps in a domain like cybersecurity, where solutions are needed for systems deployed at scale in the real world. These gaps spotlight the importance of not just technology transfer, but of incorporating a life-cycle understanding (from development to deployment, maintenance, administration, and aftermarket activities) into proposed foundational research approaches. This incorporation can lead
10 National Research Council, Toward a Safer and More Secure Cyberspace, The National Academies Press, Washington, D.C., 2007.
to better outcomes as well as improvements in communicating how and why certain things should be done.
Many of the technical research questions this committee highlights above are addressed in various funding portfolios and in the recent strategic plan. The committee urges an emphasis on situating research efforts in security science within the framework outlined in this report, which can help spotlight high-leverage opportunities for impact, and on thinking about how those opportunities can be translated into practice and deployed at scale. This goes beyond a traditional technology transfer challenge—which is hard enough—to connecting research results with anticipated social, behavioral, and organizational implications and with what practitioners understand about managing the full life cycle of deployed technologies.
With regard to all of these efforts, agencies that sponsor research in cybersecurity will continue to face a significant challenge in assessing the effectiveness of their investments. In part, that is the nature of research, where the ultimate payoffs can be quite large but are usually unpredictable and often come long after the research was carried out.11 Even so, successful technology transfer and the implementation of real-world systems or subsystems that apply research results provide some measures of research quality and effectiveness.
There are extrinsic impediments related to industry practices and norms that thwart the transition of promising research ideas into practice. For example, some industry norms and expectations can hinder the adoption of ideas and may even create counter-incentives. These norms, many of which are unique to software-based systems, relate to license
11 The Computer Science and Telecommunications Board’s (CSTB’s) recently revised “tire tracks” diagram links government investments in academic and industry research to the ultimate creation of new information technology industries with multibillion-dollar markets. Used in presentations to Congress and executive branch decision makers and discussed broadly in the research and innovation policy communities, the tire tracks figure dispelled the assumption that the commercially successful IT industry is self-sufficient, underscoring the long incubation periods of years and even decades. The figure was updated in 2002, 2003, and 2009 reports produced by the CSTB and more recently in 2012 (National Research Council, Continuing Innovation in Information Technology, The National Academies Press, Washington, D.C., 2012). In 2016, the CSTB issued a summary of presentations by leading academic and industry researchers and industrial technologists describing key research and development results and their contributions and connections to new IT products and industries, illustrating these developments as overlays to the “tire tracks” graphic (National Academies of Sciences, Engineering, and Medicine, Continuing Innovation in Information Technology: Workshop Report, The National Academies Press, Washington, D.C., 2016).
terms (e.g., as is, no warranty, and limits on acceptance evaluation with respect to security), process compliance, anticipated risk and reward (and associated measurement difficulties), and intellectual property protection. These norms are well established and may not evolve in ways that enhance incentives for improved security, although technical advances can help address these challenges. Research leaders need to be aware of these extrinsic factors as they develop ideas and plan for potential transition into practice. And, of course, advances in the technology may result in adaptation of the norms, enhancing the ability to develop, evaluate, and evolve improvements in security. This space itself offers opportunities for research efforts that may reveal the interplay of these norms with the potential transition and acceptance of new technologies.
Unfortunately, technology transfer can be a process of years for some research results. Moreover, a rigid focus on traditional technology transfer could drive sponsors to support only advanced development or research that leads to incremental improvements in technology, while the current state of cybersecurity clearly calls for more extensive changes.
Thus, in addition to monitoring technology transfer of applied or incremental results, sponsors can consider the following ways of assessing research:
- Publication of research results in high-quality journals or conference proceedings is the canonical indicator of research quality. To the extent that journals or conferences include editors, reviewers, or program committee members from development organizations and from other disciplines, their selections may be an especially useful indication of the long-term value of research (see also the next point).
- Creating opportunities for and then examining results of experimental prototype deployments can be instructive. Experimentation with prototypes offers the opportunity to expose hidden assumptions about reality and to demonstrate (or fail to demonstrate) scalability and feasibility.
- Technology transfer ultimately requires that results be adopted by development organizations that build real systems, by security vendors, and by organizations and institutions that deploy systems. Many development organizations and security vendors monitor research results with the aim of identifying potentially useful ideas or techniques. Feedback on research from such organizations, either in the form of informal reactions at conferences or in response to surveys or questions by sponsors, will give the sponsors a sense of research quality. Similarly, other positive signals to look for are when technical research results explicitly take into account the behavioral and organizational considerations that influence adoption, or when the primary focus is on social and behavioral aspects of cybersecurity practices and policies and those results are recognized within a multidisciplinary research community. It is important to note that adoption is only one signal, however, and relying too much on it may inadvertently neglect disruptive approaches (e.g., so-called clean-slate efforts) that would likely have a longer path to impact and recognition.
- To the extent that research results lead to the formation of start-up companies or the hiring of researchers by commercial enterprises, sponsors should consider that a positive sign for the research effort. However, these are steps along the way, not outcomes of better security. Citations of research results by publications and researchers that are themselves successful at technology transfer (or in descriptions of successful products) are an indication of research quality, though a “lagging indicator” that will likely be available only years after the initial research has been completed.
None of these approaches is likely to surprise research funders, and none is a panacea that guarantees accurate assessment of results. However, all may be worth sponsors’ consideration as they evaluate their research programs, the associated projects they have chosen to sponsor, and the researchers they have chosen to support.
The committee was also asked to consider how foundational efforts in cybersecurity bear on mission-critical applications and challenges, such as those faced by agencies in the Special Cyber Operations Research and Engineering (SCORE) interagency working group (often in classified domains). First, whatever the application in whatever domain (from protecting systems that contain information on an intelligence agency’s sources and methods, to preventing the servers running the latest bestselling augmented reality mobile game from being compromised, to general deterrence efforts), the same fundamental assumptions about the nature of technological systems and about human nature apply. Thus, foundational efforts in cybersecurity, as described in this report, could yield results that are broadly applicable. Second, one significant distinction that may bear on what research problems are tackled, and how well solutions apply across classified and unclassified efforts in cybersecurity, involves the nature of the threat and what is known, and by whom, about that threat. Even so, as private-sector companies and enterprises increasingly seek to secure themselves against “nation-state”-level
attacks, that distinction may be less critical. Third, people and processes need to be taken into account. In classified environments whose systems need to be secured, different kinds of security training might be done and different controls in terms of configuration and processes put in place than are likely in most private-sector organizations. This could have an impact on how effective certain security approaches and tools are—but the general point that social, behavioral, and decision sciences in tandem with technical cybersecurity research can help inform better choices in terms of people, processes, and institutional policies still holds.
There are undoubtedly research efforts in the classified and unclassified domains that leverage similarities in basic technologies, the nature of humans interacting with systems, and the nature of organizations. Making those connections is not always done, however. It falls to funders, the researchers, and the consumers of research to ask for and seek out those connections. In both directions, problems and assumptions may need to be translated across the classified/unclassified boundary, but foundational results should be applicable in each. In some cases, experts in a classified environment can develop study problems and data sets that are accessible to unclassified researchers. If this is accomplished well, then results can be translated into the classified setting. It will be particularly important to find and develop people who are skilled at developing and communicating these translations.
Specific areas of inquiry that could have direct applicability to SCORE missions include attribution and defense against and recovery from attacks by insiders—progress on both would benefit from the four-pronged approach recommended here. How can investments in foundational work of the sort the committee urges here be useful to those defending and supporting our nation’s most critical infrastructures and defense systems? The committee believes that increasing emphasis on a deep understanding of general classes of attacks, policies, and defenses, and carefully mapping where particular research results or technology solutions fit within that understanding, can provide decision makers with a more thorough and grounded understanding of likely effectiveness for their situations. Similarly, improved integration of social, behavioral, and decision science methods, results, and inputs on particular research projects will enable a more nuanced understanding of cybersecurity challenges. SCORE agencies are likely to benefit especially from work that helps them understand where the leverage points for improvement are likely to be—which may not always be solely technical in nature.
* * *
The challenge of cybersecurity and the urgent nature of the risks that insecure systems pose to society in a dynamic and fast-changing environment understandably promote an emphasis on moving fast. Paradoxically, however, the field is still so comparatively new, and the nature of the challenge so hard, that in-depth scientific research is needed to understand the very nature of the artifacts in use, the nature of software, the complexity and interdependencies in these human-built systems, and, importantly, how the humans and organizations who design, build, use, and attack the systems affect what can be known and understood about them. Encouraging research to address these challenges will require sustained commitments and engagements. Thus, programs that encourage long-horizon projects where these connections can be worked out will be important.
The fact that these systems are designed, developed, deployed, and used by humans, and that humans are also the adversaries behind attacks on them, means that the work done in the social, behavioral, and decision sciences will be critical. Deepening our understanding of humans and human organizations, and linking that understanding to more traditional research in cybersecurity, is necessary to develop a robust security science and to deploy systems most effectively so that they do what they were designed to do, to say nothing of securing them against human adversaries. Cybersecurity can be viewed as a cutting edge of computing that demands a broad, multidisciplinary effort. Addressing the global cybersecurity challenge requires not just computer science, engineering science, and mathematics, but also what we know and understand about human nature and how humans interact with and manage systems—and each other.