1 Introduction and Context

In recent years, interest and progress in the area of artificial intelligence (AI)1 and machine learning (ML) have boomed, with new applications vigorously pursued across many sectors. At the same time, the computing and communications technologies on which we have come to rely present serious security concerns: cyberattacks have escalated in number, frequency, and impact, drawing increased attention to the vulnerabilities of cyber systems and the need to increase their security. In the face of this changing landscape, there is significant concern and interest among policymakers, security practitioners, technologists, researchers, and the public about the potential implications of AI and ML for cybersecurity.

The National Academies of Sciences, Engineering, and Medicine held the Workshop on the Implications of Artificial Intelligence and Machine Learning for Cybersecurity on March 12-13, 2019, in Washington, D.C., to explore these issues. The event was organized by a planning committee appointed under the auspices of the National Academies’ Computer Science and Telecommunications Board (CSTB), in coordination with the Intelligence Community Studies Board. The project was sponsored by the Office of the Director of National Intelligence (see Box 1.1 for the statement of task). See Appendix A for the workshop agenda and Appendixes C and D for biographies of the planning committee members and speakers, respectively.

The workshop was unclassified and open to the public. This proceedings was developed based on transcripts and slides from the workshop and is intended to capture the ideas, examples, and questions raised by the speakers and other participants. The introductory remarks at the workshop are summarized in the remainder of this chapter. Chapter 2 explores the overall context of cyber engagements, emphasizing the potential use of AI in cyberattacks. Chapter 3 highlights some currently deployed AI/ML tools for cyber defense. Chapter 4 explores research efforts related to AI and cybersecurity, including work on the robustness of ML-based cybersecurity tools and AI methods for simulating the dynamics of cyber engagements. Chapter 5 provides a broader look at the security and privacy of deployed ML-enabled systems. Chapter 6 addresses the emergence and implications of deep fakes and synthetic media. Chapter 7 summarizes a wrap-up discussion in which workshop participants reflected on key questions posed to the group and identified key takeaways from the workshop.

___________________

1 Rapporteur’s note: Artificial intelligence is the general term for the field aimed at developing computing technologies that exhibit what humans would consider to be intelligent behavior, or for the technologies themselves. The field has evolved over time and includes many interconnected subfields and cross-disciplinary approaches. For more on the evolution of the field, see the summary of Subbarao Kambhampati’s opening remarks on “The State of Artificial Intelligence” below.


OPENING REMARKS

Workshop chair Fred Chang, Southern Methodist University, and Vinh Nguyen, National Security Agency (NSA), representing the sponsoring organization, offered opening remarks to outline the workshop’s impetus and goals.

Fred Chang, Workshop Chair

Chang explained the process by which the workshop was organized, the three guiding themes provided in the statement of task (see Box 1.1), and the general topics to be addressed during each panel. He reiterated the public, unclassified nature of the workshop and encouraged attendees to actively interact with the speakers and engage in the discussion.

To set the stage, Chang offered a brief overview of the current state of cybersecurity. On the positive side, over the past decade, significant awareness, attention, and resources have been dedicated to cybersecurity. This is reflected in a range of developments, including the establishment of October as Cybersecurity Awareness Month, widespread efforts to educate people about phishing, congressional hearings on cybersecurity, the inclusion of cyber specialists on corporate boards, and a proliferation of new degree and certification programs in cybersecurity. A particularly important development, in Chang’s view, is the increased awareness and attention to this issue on the part of corporate leadership, who are dedicating significantly more resources to chief information security officers than in past decades. In addition, the field is increasingly attracting venture capital, with one estimate showing that more than $5 billion in venture capital was invested in cyber startups in 2018—an increase of 20 percent over 2017 and 80 percent over 2016, he noted.

Despite these developments, cybersecurity remains a vexing challenge. Pointing to the common refrain that complexity is the enemy of security, Chang noted the sharp rise in the sheer number of computer applications, lines of code per application, and interconnected devices in recent years. As devices and interdependencies proliferate, complexity grows and security suffers. Developers often face trade-offs among cost, speed, and reliability; there is often an additional trade-off between convenience and security. Many competing interests come into play, including the desires of corporate boards, users, standards bodies, regulators, the national security community, and society more broadly. In general, Chang said, in the face of tough choices, security interests may come out as a lower priority.


The current shortage of cybersecurity skills and expertise poses another important challenge, according to Chang. Despite the increase in educational and training opportunities in the field, including via cyber competitions, the gap continues to grow. As of August 2018, CyberSeek, a jobs board supported by the National Institute of Standards and Technology, listed more than 310,000 cybersecurity job openings, a number that continues to rise.

Chang emphasized the need for a science of cybersecurity to establish a deeper understanding of what creates a system vulnerability and how it can be exploited, identify cause-and-effect relationships, generate and test hypotheses, and improve prediction of outcomes. Such an effort, he suggested, would lay the groundwork for building more resilient systems. He noted that the National Academies have issued a range of studies addressing different dimensions of cybersecurity. For example, a 2017 report presented a framework for foundational cybersecurity research emphasizing not only the technical aspects, but also the importance of contributions from the social, behavioral, and decision sciences for understanding organizations, people, and incentives.2

In closing, Chang reflected on a series of inflection points that have changed the cybersecurity landscape over the years, originally described by cybersecurity analyst Dan Geer.3 The first came in the mid-1990s, when the TCP/IP stack was implemented in the second release of Windows 95, connecting individual owner-operators to a broader network of unknown individuals and precipitating the need for cybersecurity. The second came in the mid-2000s, when the emergence of professional hackers upped the stakes for system security. The third came around 2016, when the Defense Advanced Research Projects Agency (DARPA) Cyber Grand Challenge demonstrated the possibility of automating cybersecurity tasks that previously required human intelligence. This most recent shift defines the landscape at which we are now arriving: the realm of artificial intelligence and its implications.

Vinh Nguyen, National Security Agency

Vinh Nguyen, chief data scientist for operations at the NSA, provided context on the origins of the workshop. He highlighted the importance of the topic to the intelligence community, informed by his recent role as the national intelligence officer for cyber in the Office of the Director of National Intelligence. With the expanding capabilities of Russia, China, North Korea, and Iran, the possibility of cyberattacks against critical infrastructure, the use of social media to influence the American people and the electorate, and an increasing frequency of cyberattacks, the normative environment is changing. Cyberattacks are more frequent, he said, and there are often few punitive measures available for pushing back on attackers.

Recent developments in AI and ML raise the question: Will AI and ML improve or degrade our position? Nguyen noted a particular concern about the security of AI and ML systems themselves. These technologies are being adopted in the absence of best practices for security, especially in the private sector. If this rapid adoption of AI and ML technology is akin to the expansion of the Internet in 1995, how can we avoid making the same mistakes? How much time do we have to get ahead of the potential problems that may emerge from this technology? How might we use AI and ML to advance cyber defenses? Will these developments challenge the offense-defense balance?

While the government has devoted considerable discussion to cybersecurity and to AI individually, Nguyen said the discussion around their intersection has so far been somewhat limited. Speculating that our adversaries’ capabilities are likely to look remarkably different in 5 years, he pointed out the importance of thinking ahead and anticipating possible changes. As an agency, the NSA protects national security systems, the U.S. government, and critical infrastructure, and is in a position to help identify best practices for cybersecurity in the age of AI. He expressed his hope that, by exploring current capabilities, emerging areas, and the broader context of AI and cybersecurity in an open conversation, the workshop would help shed light on the potential challenges ahead and the timescales on which they are likely to come to fruition, thereby helping to inform future discussions among the nation’s leaders and policymakers.

___________________

2 National Academies of Sciences, Engineering, and Medicine, 2017, Foundational Cybersecurity Research: Improving Science, Engineering, and Institutions, The National Academies Press, Washington, DC, https://doi.org/10.17226/24676.

3 D. Geer, Jr., 2018, “A Rubicon,” Aegis Paper Series No. 1801, Hoover Working Group on National Security, Technology, and Law, February, https://lawfareblog.com/rubicon.


THE STATE OF ARTIFICIAL INTELLIGENCE

Subbarao Kambhampati, professor of computer science and engineering at Arizona State University and immediate past president of the Association for the Advancement of Artificial Intelligence (AAAI), provided an overview of historical progress and the current state of the field of AI. Noting the great deal of attention AI has garnered in recent years—with hype both positive and negative—he offered context on both the achievements and the limitations of AI to date.

Significant progress has been made in the ability of AI to perform cognitive and reasoning-based tasks, and more recently advances have been made in perceptual intelligence and tacit learning. However, the field is still far from achieving true general intelligence, and significant thresholds remain to be crossed. Kambhampati concluded with his own thoughts on the current and potential future implications of AI for cybersecurity, setting the stage for the sessions that followed.

The Evolution of Artificial Intelligence

“Artificial intelligence” can be informally defined as getting machines to do things that, when done by humans, would be considered intelligent behavior. Kambhampati introduced different kinds of intelligence: perceptual and manipulation intelligence, emotional intelligence, social and communicative intelligence, and cognitive and reasoning intelligence. He described how they are acquired by humans in stages over the course of their individual development and compared this to the historical progress made in the field of AI.

In humans, perceptual and manipulation intelligence are developed first; for example, a baby learns to recognize her mother’s face and to explore objects that are within reach. These intelligences are largely tacit and subconscious. Emotional intelligence—such as understanding when to cry and when to smile—comes next. Social and communicative intelligence then emerge, characterized by the development of language and a theory of mind—that is, a means for recognizing that other people are also people, with minds of their own. Cognitive and reasoning intelligence—the consciously accessible intelligence that one aims to measure in tests such as the SAT—comes last.

Kambhampati observed that the evolution of AI systems has proceeded in roughly the opposite direction along this spectrum of intelligence, beginning with progress in cognitive, then communicative, and finally perceptual intelligence. In the 1980s, progress was made in expert systems—rule-based systems aimed at capturing knowledge such as a company’s standard operating procedures. In the 1990s, reasoning systems emerged—for example, IBM’s Deep Blue, which defeated world chess champion Garry Kasparov and demonstrated that machines could outplay the best humans at chess. Around 2006, AI systems began making impressive breakthroughs in performing perceptual tasks, such as speech and image recognition. Today, speech recognition has become commonplace, and image recognition (also known as computer vision) has improved significantly. Kambhampati noted that the jury is still out on the ability of AI systems to achieve emotional and social intelligence.

According to Kambhampati, this difference reflects the fact that it is easier to program computers to perform functions for which there exist clear and logical mental models. It is much harder to program for types of intelligence that are less well understood and harder to explain, such as perceptual intelligence. He noted the phenomenon called Polanyi’s paradox: “We know more than we can tell.” That is, while some human knowledge is explicit and can be articulated, some is tacit and unconscious and cannot be readily explained.4 That which can be explained with conscious theories or rules was historically easier to program. However, this has changed in recent years with new opportunities in machine learning—enabling computers to learn from examples and experience in ways that can be applied in new contexts, analogous to how humans learn through observation in the early phases of development.

Current Trends in Artificial Intelligence: Neural Networks and Deep Learning

Interest in AI has exploded in recent years, largely because perceptual AI capabilities such as image and speech recognition have brought AI into tools we use in our everyday lives. We encounter AI on our smart phones, with our voice assistants, with driving assistance systems in our cars, and more. This increased access to and awareness of AI, however, has also led to misinterpretations and misperceptions about the nature of AI systems.

___________________

4 See M. Polanyi, 2009, The Tacit Dimension, University of Chicago Press, Chicago, IL.

In particular, Kambhampati noted, while some may believe that the recent progress in perceptual intelligence reflects a major theoretical advance in AI, this is not the case: much of this progress is due to the recent, wide deployment of neural networks—an ML framework first developed in the 1960s. This approach has come to the fore today because of the separate, non-AI-related development of large-scale computation, communication, and data-capture infrastructure; only recently have data become abundant enough—and computers powerful enough—to deploy neural networks for practical applications.

Neural networks are the workhorses of the field of deep learning (DL), an ML approach that has achieved impressive feats of perceptual intelligence. For example, these methods are being used to write captions for images, to “dream up” realistic faces of people who don’t actually exist,5 and to produce plausible, original textual content without the aid of a human.6

However, some aspects of DL are unintuitive, said Kambhampati. In particular, state-of-the-art neural networks, in which a large number of parameters (i.e., tunable weights) must be optimized, in some cases seem to succeed even when the training data set appears too small to constrain that many parameters. Such a scenario would seem highly susceptible to overfitting, so it remains an open question why the approach can nonetheless yield useful results.

In addition, Kambhampati said, many state-of-the-art perceptual systems can misperceive images when noise is added to the data. AI image recognition systems can even be fooled into misperceiving images that a human would have no trouble perceiving correctly, as was demonstrated in one experiment where perturbations introduced in images caused an algorithm to mislabel a school bus as an ostrich. He pointed out that, while it is often easy to understand why humans might misidentify an image (e.g., one that looks to us like something it is not), it is not obvious what causes a computer to do so, reflecting a lack of transparency in deep learning models.

The image misidentified as an ostrich was in fact an example deliberately manufactured to trick the system, a so-called adversarial example. Kambhampati noted that adversarial examples would be an area of focus for the workshop, as they pose significant security threats. He commented that some researchers have been active in showing how to break deep learning systems for the purpose of improving our understanding of their failure modes; he underscored the need for the U.S. AI community to understand how AI systems break before our adversaries do.
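
To make the idea concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one well-known recipe for constructing adversarial examples. It is offered only as an illustration, not as the specific experiment Kambhampati cited; the pretrained ResNet-18 model and the random stand-in image are assumptions for the example.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases the
# classifier's loss, producing a perturbation that is tiny to a human eye but
# can flip the predicted label. Model choice and input are illustrative only.
import torch
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()   # any pretrained classifier
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_perturb(image, label, eps=0.03):
    """Return an adversarially perturbed copy of `image` (shape [1, 3, H, W])."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), torch.tensor([label]))
    loss.backward()
    # Step in the sign of the input gradient, clamped to valid pixel values.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)                 # stand-in for a preprocessed photo
original_label = model(x).argmax(dim=1).item()
adversarial = fgsm_perturb(x, original_label)
new_label = model(adversarial).argmax(dim=1).item()
print(original_label, "->", new_label)         # the two labels frequently differ
```

The unsettling point, as Kambhampati noted, is that the perturbed and original images look essentially identical to a person, yet the model's output can change completely.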

Intelligent Deployment of Artificial Intelligence

Perceptual intelligence is powerful and can be used for a wide range of applications. Kambhampati suggested that capabilities in this area have led to an irrational exuberance about AI, highlighting a strong drive, especially in industry, to exploit ML or DL technology—even when it is not the tool best suited to the goal. Often, Kambhampati suggested, people use neural networks and DL for no other reason than to put the AI label on their products and keep their bosses happy, or to increase a product’s appeal to funders or customers. In some cases, the label is even applied when there is no AI—for example, it was recently reported that 40 percent of the AI startups in Europe do not make use of actual AI technologies in their products.7

A related issue is the push to convert human and industry problems into purely data-driven ML problems. While the goal of data science is to extract insights and knowledge from data, some people have been taking explicit knowledge and converting it to data just so they can use neural networks to extract knowledge. For example, Kambhampati described an instance where an individual tried to use ML to learn how to play Sudoku. In this case, ML was applied to a large collection of human-solved Sudoku puzzles—an odd exercise, since the rules of the game are already well understood and a computer could solve the puzzles directly from those explicit rules. Today, data-driven AI approaches are ascendant, a swing away from the historical mindset that rule-based models can be useful (even if none is perfect). He referred to the tension between these perspectives as the “data versus doctrine tension.” While one of the hallmarks of human intelligence is the seamless combination of explicit knowledge with tacit knowledge to achieve a goal, we have not yet figured out how to do this well with AI.

___________________

5 See, for example, the website http://www.thispersondoesnotexist.com, which displays faces imagined by StyleGAN, a generative adversarial network (GAN) developed at Nvidia (T. Karras, S. Laine, and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” arXiv:1812.04948v3 [cs.NE]), building on the original GAN framework (I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014, “Generative Adversarial Networks,” arXiv:1406.2661v1 [stat.ML]).

6 For example, GPT-2, an AI system developed by OpenAI; see A. Radford, J. Wu, D. Amodei, D. Amodei, J. Clark, M. Brundage, and I. Sutskever, 2019, “Better Language Models and Their Implications,” OpenAI.com, February 14, https://openai.com/blog/better-language-models/.

7 MMC Ventures, 2019, The State of AI 2019: Divergence, p. 99, https://www.mmcventures.com/wp-content/uploads/2019/02/The-Stateof-AI-2019-Divergence.pdf; A. Ram, 2019, “Europe’s AI Start-Ups Often Do Not Use AI, Study Finds,” Financial Times, https://www.ft.com/content/21b19010-3e9f-11e9-b896-fe36ec32aece?shareType=nongift.

To encapsulate the notion that ML or data-driven AI is not the appropriate tool for every situation, Kambhampati offered the following modified “serenity prayer” for robots:

Human, grant me the serenity to accept the things I cannot learn, the data to learn the things I can, and the wisdom to know the difference.

Kambhampati also noted current concerns over the explainability of AI-based tools, stimulated largely by the current prominence of perceptual AI systems and their success with tacit knowledge—a development he referred to as “Polanyi’s revenge.” He pointed out that AI systems based on explicit knowledge generally are explainable—that is, one could provide some reasoning about how the model arrived at its outcome, and/or about why the output is correct. Those based on tacit knowledge (i.e., perceptual AI) generally are not—for several reasons. The first relates to Polanyi’s paradox, that tacit knowledge is not easily explainable, even for humans. Second, perceptual AI systems must learn their own representations in the absence of clarity about the ones that humans use—and communicating reasoning between different representations is a tall order. Kambhampati provided an illustrative analogy: even if lions could speak, we wouldn’t necessarily understand them, because we might not have a shared vocabulary or frame of reference. The same could be true for a computer.

Kambhampati also emphasized that AI systems need to work with humans; they should be able to recognize our intentions and project their own intentions to us. Teaming humans and AI systems, he said, requires that AI systems have a theory of mind for humans and also a sense of the human mental model of the AI system’s capabilities. The latter component is particularly important for AI systems to exhibit comprehensible and explicable behavior and for developing truly human-aware AI systems. Kambhampati referenced a presentation he gave in 2018 as president of the AAAI that addressed these and related issues.8

After Kambhampati’s talk, workshop chair Fred Chang asked him to speculate on what students of AI will study 10 to 15 years in the future. Kambhampati replied that they will likely study the same things that have been studied by past generations, such as ML techniques and coding languages, plus more. He pointed out that science is a human endeavor and, as such, is filled with human biases toward certain ideas. However, since scientists record their progress, failed or ignored ideas sometimes make a comeback years later, as happened with neural networks. Building on this, Chang asked what “backwater” technique of today may come back as a major area of research and development in 20 years. Kambhampati responded by noting that backwaters by definition are not widely recognized, which makes it hard to know which ones might rise to prominence in future years. He pointed out that 65 to 70 percent of current papers in the AI field address perceptual intelligence using deep learning. That leaves about 30 percent focused on other assorted ideas and approaches. Those ideas and technologies are all candidates for rising to prominence in the future, as are ideas that scientists have yet to consider.

Cybersecurity Implications of Artificial Intelligence

The final portion of Kambhampati’s talk focused on the implications of AI for cybersecurity, in terms of both opportunities and risks. Noting that this topic would be the focus of subsequent workshop sessions, he provided a brief introduction to ways in which AI techniques can help to improve cybersecurity, along with their potential to open up new attack surfaces for adversaries.

___________________

8 S. Kambhampati, 2018, “Challenges of Human-Aware AI Systems,” presidential address at the 32nd AAAI Conference on Artificial Intelligence, http://rakaposhi.eas.asu.edu/haai-aaai/AAAI-Presidential-Address-final.pdf, YouTube link: http://bit.ly/2tHyzAh.


Kambhampati noted that AI, like any tool or technology, could be useful for many applications, including for cybersecurity. He highlighted the following three key areas to consider:

  • Game-theoretic models (e.g., to understand dynamic security paradigms such as a moving target defense);
  • Planning and reasoning methods (e.g., to help uncover previously unidentified attack paths); and
  • Machine learning (e.g., to identify malware or detect intrusions; a minimal sketch follows this list).
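
As a concrete illustration of the third item, the following sketch trains a simple anomaly detector on hypothetical “normal” network-flow features and flags a port-scan-like flow as anomalous. The features, the numbers, and the use of scikit-learn’s IsolationForest are assumptions made for illustration; this is not a tool discussed at the workshop.

```python
# Minimal sketch of ML-based intrusion detection: fit an anomaly detector on
# "normal" network-flow features, then flag flows that deviate from them.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per flow: [bytes sent, packets, distinct ports touched].
normal_traffic = rng.normal(loc=[500, 40, 3], scale=[120, 10, 1], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A port-scan-like flow: few bytes and packets, but many distinct ports.
suspect = np.array([[90, 15, 250]])
print(detector.predict(suspect))              # -1 means "anomalous"
print(detector.predict(normal_traffic[:3]))   # mostly 1 ("normal")
```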

Kambhampati suggested that, while there may be a predisposition to focus entirely on the affordances of ML for cybersecurity, AI is broader than ML alone, and one can envision AI’s contributions in terms of combining multiple approaches. He noted that significant research has been conducted at the intersection of cybersecurity and all of the above areas.

Kambhampati’s research group at Arizona State University has done work at the intersection of AI and cybersecurity. For example, they have worked on the so-called controlled observability planning problem, in which an actor controls what an observer sees in order to effect a desired outcome: actions are made apparent to friends working in a collaborative environment but obfuscated from adversaries in an adversarial context. They have also done work on the “moving target defense” for Web applications, in which game-theoretic approaches inform constant configuration changes to reduce an attacker’s ability to bring down a system. Not surprisingly, this approach outperforms those using random configuration changes.9
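
The sketch below conveys the game-theoretic flavor of such a defense under simplifying assumptions: configuration switching is treated as a small zero-sum matrix game solved by linear programming, whereas the cited work uses richer Bayesian Stackelberg and Markov game formulations. The payoff numbers are hypothetical.

```python
# Minimal sketch: the defender picks a randomized mix over web-server
# configurations; the attacker best-responds with one of several exploits.
import numpy as np
from scipy.optimize import linprog

# Hypothetical defender utilities U[i, j]: payoff when configuration i is
# running and the attacker launches exploit j (more negative = worse).
U = np.array([
    [ 0.0, -5.0, -2.0],   # config A: vulnerable to exploit 2
    [-4.0,  0.0, -1.0],   # config B: vulnerable to exploit 1
    [-3.0, -3.0,  0.0],   # config C: mildly vulnerable to both
])
n_cfg, n_atk = U.shape

# Maximize v subject to: x^T U[:, j] >= v for every attack j, sum(x) = 1.
c = np.zeros(n_cfg + 1)
c[-1] = -1.0                                        # linprog minimizes, so use -v
A_ub = np.hstack([-U.T, np.ones((n_atk, 1))])       # v - x^T U[:, j] <= 0
b_ub = np.zeros(n_atk)
A_eq = np.hstack([np.ones((1, n_cfg)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_cfg + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n_cfg], res.x[-1]
print("configuration mix:", np.round(x, 3), "worst-case value:", round(v, 3))
print("uniform-mix value:", round((np.ones(n_cfg) / n_cfg @ U).min(), 3))
```

Even in this toy setting, the optimized configuration mix yields a better worst-case outcome for the defender than switching configurations uniformly at random, which is the intuition behind the result Kambhampati described.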

While Kambhampati believes that AI systems will in general have many profoundly positive impacts on society, their widespread use will also open up new attack surfaces. One major area of concern is the use of AI to generate fake and potentially deceptive content, Kambhampati said. In particular, perceptual AI can be used to spoof voices, images, and identities, which could be used to deceive people and cause them to question what is real, or to accept a fabricated version of reality. For example, voice spoofing could be used to deceive someone into thinking that a caller is his mother asking for money, or to commandeer a voice-activated device. Image spoofing can be used to make an autonomous vehicle mistake a stop sign for a speed limit sign.

Such capabilities are already apparent in a few specific examples, including an AI-generated news anchor used in Chinese news broadcasts. The website www.whichfaceisreal.com challenges visitors to discern which of two images depicts a real person and which is an AI-generated face; it is often hard for a human to tell the difference (although in many cases computers are still able to identify the “fake,” due to artifacts of the image generation method).

As language algorithms improve, the same challenges arise for written text. How can one discern between real and computer-generated writing, for example, in the context of political discourse, education, or social media? One new tool uses an algorithm to detect the signature statistical patterns of machine-generated language, exploiting the fact that such systems write by sampling from massive language models in which each word is generated from the words that precede it. Kambhampati referred to a 2017 workshop at Arizona State University10 and the recent report on the malicious use of AI11 for further discussion of these and other potential challenges related to AI.
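
The specific tool is not named in the talk summary, so the following is only a sketch of the general idea (in the spirit of detectors such as GLTR): score each word of a passage by how highly it ranks under a language model’s own next-word distribution, since machine-generated text tends to stick to top-ranked words. The use of GPT-2 via the Hugging Face transformers library is an assumption made for illustration.

```python
# Sketch of rank-based detection of machine-generated text: a low median token
# rank under a language model suggests the text was sampled from a similar model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_ranks(text):
    """Rank of each observed token under the model's prediction (1 = most likely)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # shape [1, seq_len, vocab]
    ranks = []
    for pos in range(ids.shape[1] - 1):
        probs = logits[0, pos].softmax(dim=-1)
        observed = ids[0, pos + 1]
        ranks.append(int((probs > probs[observed]).sum().item()) + 1)
    return ranks

sample = "The quick brown fox jumps over the lazy dog."
ranks = token_ranks(sample)
print("median token rank:", sorted(ranks)[len(ranks) // 2])
```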

___________________

9 See, for example, S. Sengupta, A. Chowdhary, D. Huang, and S. Kambhampati, 2018, “Moving Target Defense for the Placement of Intrusion Detection Systems in the Cloud,” paper at Conference on Decision and Game Theory for Security, https://www.researchgate.net/publication/327289093; S. Sengupta, S.G. Vadlamudi, S. Kambhampati, M. Taguinod, A. Doupé, Z. Zhao, and G.-J. Ahn, 2017, “A Game Theoretic Approach in Strategy Generation for Moving Target Defense in Web Applications,” in Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), https://pdfs.semanticscholar.org/9469/c703b24dd25c2b0ef3153c15ac87aaa28048.pdf; K.M. Carter, J.F. Riordan, and H. Okhravi, 2014, “A Game Theoretic Approach to Strategy Determination for Dynamic Platform Defenses,” in MTD ‘14: Proceedings of the First ACM Workshop on Moving Target Defense, http://dx.doi.org/10.1145/2663474.2663478; S.G. Vadlamudi, S. Sengupta, M. Taguinod, Z. Zhao, A. Doupé, G.-J. Ahn, and S. Kambhampati, 2016, “Moving Target Defense for Web Applications Using Bayesian Stackelberg Games,” pp. 1377-1378 in Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, International Foundation for Autonomous Agents and Multiagent Systems; A. Chowdhary, S. Sengupta, D. Huang, and S. Kambhampati, 2019, “Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks,” AAAI Workshop on Artificial Intelligence for Cyber Security (AICS); S. Venkatesan, M. Albanese, G. Cybenko, and S. Jajodia, 2016, “A Moving Target Defense Approach to Disrupting Stealthy Botnets,” pp. 37-46, in Proceedings of the 2016 ACM Workshop on Moving Target Defense, ACM.

10 The Origins Project Workshop “Envisioning and Addressing Adverse AI Outcomes,” E. Horvitz, L. Krauss, and J. Tallinn, chairs, took place on February 24-26, 2017, at Arizona State University. For news coverage, see https://www.bloomberg.com/news/articles/2017-03-02/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions.

11 M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, et al., 2018, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” https://maliciousaireport.com.


In summary, Kambhampati noted that, after a history of making progress mostly in carrying out declarative tasks, AI systems have shown significant success in learning tacit knowledge from data—a development that has had notable ramifications for the field. However, he pointed out that AI systems are far from achieving general intelligence, having yet to cross the thresholds of demonstrating common sense and becoming human-aware.

AI is an important tool, but like any tool it can also be used as a weapon. He cautioned that, to people with a hammer, everything can sometimes look like a nail. Kambhampati also compared progress in AI to the opening of Pandora’s Box, but pointed out that many people don’t recall how that story ends: after all of the demons escaped, what remained inside the box was hope.
