4 Adversarial Artificial Intelligence for Cybersecurity: Research and Development and Emerging Areas
Pages 31-43



From page 31...
... He noted that many cybersecurity researchers shied away from AI in the 1990s, but that some, including himself, did pursue research in this space as early as the late 1980s. Since then, significant progress has been made, and it has become widely accepted that ML could support cybersecurity functions such as anomaly detection, malware clustering, alert correlation, and user authentication.
From page 32...
... EMERGING AREAS AT THE INTERSECTION OF ARTIFICIAL INTELLIGENCE AND CYBERSECURITY David Martinez, MIT Lincoln Laboratory Martinez discussed the implications of AI for cybersecurity, providing an overview of security risks of AI-enabled systems and examples of how AI can be deployed across the cyber kill chain. His comments drew upon a recent MIT Lincoln Laboratory study1 on AI and cybersecurity and a workshop2 hosted jointly with the Association for the Advancement of Artificial Intelligence.
From page 33...
... Each stage represents an attack surface. NOTE: COA, course of action; GPU, graphics processing unit; TPU, tensor processing unit.
From page 34...
... Attack Surfaces of Machine Learning Algorithms Martinez described how an adversary might try to attack an AI-based system across three main phases of development: training a model on large amounts of data, cross-validation of the model, and testing the model on new data. Potential attacks include the following:

• Evasion attacks, in which the characteristics of an attack are carefully adjusted to cause misclassification by the system, allowing the attacker to evade detection; and
• Data poisoning, in which training data is modified so that the model's training leaves it weakened or flawed.
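The data-poisoning bullet can be made concrete with a toy example. Everything below — the nearest-centroid "detector," the two-feature samples, and the injected points — is invented for illustration and is not from Martinez's talk:

```python
# Toy data-poisoning sketch (hypothetical): a nearest-centroid detector is
# weakened by injecting mislabeled points into the benign training set.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, benign_c, malicious_c):
    """Label x by whichever class centroid is closer (squared distance)."""
    d = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return "malicious" if d(malicious_c) < d(benign_c) else "benign"

# Clean training data: benign samples cluster near 0, malicious near 1.
benign = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0]]
malicious = [[0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]

clean_model = (centroid(benign), centroid(malicious))
print(classify([0.55, 0.55], *clean_model))    # borderline attack: "malicious"

# Poisoning: the attacker injects malicious-looking points labeled "benign,"
# dragging the benign centroid toward the malicious region.
poisoned_benign = benign + [[0.9, 0.9]] * 6
poisoned_model = (centroid(poisoned_benign), centroid(malicious))
print(classify([0.55, 0.55], *poisoned_model))  # same sample: now "benign"
```

The same borderline sample is detected by the clean model but slips past the poisoned one, because the poisoned benign centroid has moved toward the malicious cluster.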
From page 35...
... FIGURE 4.2  Schematic of a cyber kill chain, with offensive stages outlined in red at the top and defensive stages outlined in blue at the bottom (the defensive stages shown are Protect, Detect, Respond, and Recover, annotated "Focused defense," "Deflect attacks," "ID new attacks," "Stop attacks," and ""Mission" fight through"). Dialogue bubbles highlight important ways in which artificial intelligence can be applied at different stages of defensive operations.
From page 36...
... Xiao, and Y. Vorobeychik, 2017, "A Framework for Validating Models of Evasion Attacks on Machine Learning, with Application to PDF Malware Detection," arXiv preprint, arXiv:1708.08327.
From page 37...
... Vorobeychik's team set out to determine the following: Can detection models be made robust using a simplified evasion model as a proxy for examples of actual, realizable evasion attacks? More specifically, can ML-based detection models trained on unrealistic feature-space attacks be robust against realizable attacks?
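A "feature-space attack" of the kind used as a proxy can be sketched as follows. The linear detector, its weights, the sample, and the greedy perturbation routine are all hypothetical stand-ins for the much richer attack models in the work described:

```python
# Hypothetical feature-space evasion sketch: perturb a sample's feature
# vector directly (ignoring whether a real file with those features exists)
# until a linear detector no longer flags it.

def score(w, b, x):
    """Linear detector: score > 0 means 'flag as malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def feature_space_evasion(w, b, x, max_steps=10, step=0.1):
    """Greedily nudge each feature against the sign of its weight,
    stopping as soon as the detector score drops to 0 or below."""
    x = list(x)
    for i in range(len(x)):
        for _ in range(max_steps):
            if score(w, b, x) <= 0:
                return x
            x[i] -= step if w[i] > 0 else -step
    return x

w, b = [1.0, 2.0, -0.5], -1.0
x = [1.0, 0.8, 0.2]                # original sample; score(w, b, x) = 1.5
x_adv = feature_space_evasion(w, b, x)
print(score(w, b, x), score(w, b, x_adv))   # positive before, <= 0 after
```

The catch the question above probes is exactly the simplification in this sketch: the perturbed vector may not correspond to any realizable artifact (e.g., a working PDF), so robustness against such proxies need not imply robustness against realizable attacks.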
From page 38...
... ; second, that there is a malicious effect; and third, that the attack avoids detection. Vorobeychik ended his presentation by posing the question: Does RobustML defend against physically realizable attacks on deep neural networks for computer vision?
From page 39...
... Rivals considers the reconnaissance phase of the cyber kill chain, where a network has been invaded, and the attacker has achieved persistence, but not yet started to move laterally across the network. At this point, the attacker is launching scans to probe the topology of the network and characteristics of its nodes in order to find the locations of valuable assets.
From page 40...
... The work yielded new ways of understanding how adversaries coevolve and cause arms races, O'Reilly said, which she ultimately applied to the cybersecurity realm with Dark Horse. While there are still ways to make the
From page 41...
... Vorobeychik responded that his work focused on attacks developed by cybersecurity researchers, rather than real-world attacks, but that within this sphere the EvadeML attack could be viewed as AI, because it uses genetic programming9 to automatically search a large space of possible attacks. O'Reilly added that her coevolutionary algorithm similarly uses genetic programming on both sides.
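As a rough illustration of this kind of automated search, here is a plain genetic algorithm over bitstrings — not EvadeML's actual genetic programming over PDF structures, and with an invented "evasive pattern" standing in for a real fitness signal from a detector:

```python
import random

# Minimal genetic-algorithm skeleton (hypothetical): evolve candidate
# "attack variants" (bitstrings) toward a pattern the fitness rewards.
random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # invented stand-in for an evasive pattern

def fitness(genome):
    """Toy objective: number of positions matching the evasive pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    """Flip each bit independently with the given probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """One-point crossover between two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection + elitism
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In EvadeML the analogue of `mutate`/`crossover` edits actual PDF structure and `fitness` queries the target detector, but the search loop — generate variants, score them, keep the best — has this shape.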
From page 42...
... O'Reilly responded that game theory is indeed likely to be useful, although it will be necessary to go beyond simplistic formulations, like the prisoner's dilemma and Nash equilibria, in order to represent the complexity of the adversarial dynamics seen in cybersecurity. She noted that she has been examining ways to use Stackelberg games and other types of game theory at the tactical level, pushing them down the stack to represent threats and behaviors at a level where learning can occur.
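The Stackelberg solution concept she refers to — a leader commits to a strategy, and the follower observes it and best-responds — can be sketched with a 2x2 game. The payoff numbers below are invented purely to illustrate the concept, not taken from her models:

```python
# Tiny Stackelberg sketch (hypothetical payoffs): the defender (leader)
# commits to a defense; the attacker (follower) observes it and best-responds.

# payoff[d][a]: outcome for defense d in {0, 1} against attack a in {0, 1}
defender_payoff = [[-5, 2], [1, -4]]
attacker_payoff = [[5, -1], [-2, 4]]

def best_response(d):
    """Attacker observes the committed defense d and picks its best attack."""
    return max(range(2), key=lambda a: attacker_payoff[d][a])

def stackelberg_leader():
    """Defender commits to whichever defense fares best against the
    attacker's best response to it."""
    return max(range(2), key=lambda d: defender_payoff[d][best_response(d)])

d = stackelberg_leader()
a = best_response(d)
print(d, a, defender_payoff[d][a])   # → 1 1 -4
```

Here the defender anticipates the attacker's reaction: committing to defense 1 yields -4 rather than the -5 that defense 0 would bring once the attacker responds, which is the kind of anticipatory reasoning that becomes hard to enumerate at the scale of real cyber conflicts.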
From page 43...
... In her work, many variations of attacks and defenses were tried; with the right solution concept in the algorithm, it is possible to drive the system to equilibrium. In her view, the genetic algorithm is just as capable of coming up with the equilibrium as game theory.

