Category 6—Speculative Research
Many of today’s most pressing security problems are the consequence of information technologies designed and built when security concerns were largely nonexistent. However, now that these technologies, which include personal computers (PCs) and the Internet, are so widely deployed, the current state of the world does not seem to offer an obvious and direct path to better security.
For this reason, Category 6, Speculative Research, is reserved for research ideas that are arguably plausible but that might also be regarded as somewhat speculative and “out-of-the-box” by the mainstream research community. Investment in this category of research should account for only a small fraction of the cybersecurity research budget, but some investment is warranted if only to ensure that groupthink does not suppress ideas that might in fact have merit.
Specific examples of Category 6 research are, almost by definition, controversial. That is, some researcher will propose an idea that he or she believes is worth exploring, and others in the community may argue that such a research direction is not original or new, lacks depth, does not provide insights that suggest opportunities for surprise or success, does not appear to be deployable on any meaningful timescale or for a meaningful user base, poses currently insoluble difficulties, or must be approached with great caution if at all. Indeed, unlike the areas described in Categories 1 through 5 of the committee’s illustrative research agenda, the examples of Category 6 research below are controversial in just these ways, even within the committee itself. These examples were selected through a process that required only a few members to support them and should not be taken as ideas that the committee as a whole thinks are worth significant effort or emphasis.
A CYBERATTACK RESEARCH ACTIVITY
In many domains of security studies, theories of defense and theories of attack are inextricably interwoven. That is, insights on how best to defend are grounded in knowledge of how attacks might unfold, and a deep knowledge of attack methodologies should not be limited to potential attackers. For example, arson investigators know very well how to set fires, and agents from the Bureau of Alcohol, Tobacco, Firearms and Explosives know a great deal about how to make bombs. Similarly, a body of cyberattack knowledge that is independent of criminal intent may be very useful to cybersecurity researchers. Although many attacks in today’s cybersecurity environment are simple indeed, such a body of cyberattack knowledge would logically go far beyond the commonplace attacks of today to include at least some of the more sophisticated techniques that high-end attackers might use.
The utility of this approach is suggested by the use of red teams to test operational defenses. Red team testing is an effort undertaken by an organization to test its security posture using teams that simulate what a determined attacker might do. The red team develops expertise relevant to its intended target, conducts reconnaissance to search for security weaknesses, and then launches attacks that exploit those weaknesses. Because red teams have deep knowledge of attack, and in particular know how to look at a system from the outside and how to cross interfaces (such as hardware/software) that may effectively limit the view of insiders, it is possible that greater interaction between red team experts and cybersecurity researchers would prove fruitful.
Many important issues attend the establishment of a research activity intended to develop deep knowledge of cyberattack. For example:
How should deep knowledge of cyberattack be acquired? Cybercriminals and other adversaries develop knowledge by attacking real systems; sometimes their efforts cause real disruptions and loss. It is inconceivable that as a matter of national policy the U.S. government would endorse or support any effort that would result in such harm, and there might well be significant liability issues associated with the conduct of such an activity. The availability of large-scale testbeds for the research community might have some potential for mitigating this particular problem. Moreover, once a plausible attack hypothesis has been developed, it might often be demonstrated on a small subset of the target system that has been temporarily disconnected (or duplicated) for the demonstration.
How should such knowledge be shared? One model is to recruit cybersecurity researchers for “tours of duty” with a “cyberattack institute.” Another model is to teach cyberattack techniques as part of cybersecurity education.1
How should such knowledge be limited? This issue is the most important one to resolve if this approach is to be pursued. If placed at the disposal of an adversary, knowledge of cyberattack might be very dangerous indeed. Yet if the knowledge is excessively limited, it is useless to the cybersecurity research community at large. This issue is particularly thorny in the context of academic research, in which the dissemination of research results is a sine qua non for advancement. Nondisclosure agreements may be a feasible mechanism to protect knowledge acquired in the case of commercial systems, and security clearances or background checks may be necessary for government systems—although it is easy to imagine that some commercial systems are more sensitive than certain government systems are. Note also that the sensitivity of information about cyberattack increases as knowledge of the specific systems involved increases, suggesting that the study of generic attacks may enable greater information dissemination.
In an environment in which vulnerabilities result from routine implementation and coding failures, it may be that deep knowledge of cyberattack is not needed to develop defenses. But against sophisticated attackers who can target systems that have been hardened against “routine” attacks, deep knowledge of cyberattack may provide a context that can help to drive advanced defensive research.
BIOLOGICAL APPROACHES TO SECURITY
Biological systems are capable of healing themselves and defending themselves against outside attack. This basic fact has suggested to some researchers that biologically inspired approaches to cybersecurity may be worth exploring.
What does “biological inspiration” mean? A report of the National Research Council on computing and biology suggests that a biological organism may implement an approach to a problem that could be the basis of a solution to a computing problem.2 But even if an implementation does not carry over well to a computing problem, its underlying principles may still have some relevance.
Researchers exploring biological approaches to cybersecurity argue that the unpredictable pathogens to which an organism’s immune system must respond are analogous to some of the threats that computer systems face, and that the principles underlying the operation of the immune system may provide new approaches to computer security.3 They note, for example, that immune systems exhibit a number of characteristics that could reasonably describe how effective computer security mechanisms might operate in a computer system or network. In particular, the immune system is distributed, diverse, autonomous, tolerant of error, dynamic, adaptable, imperfect, redundant, and homeostatic.4 To go further, it is necessary to ask whether the particular methods by which the immune system achieves these characteristics have potential relevance to computer security.
For example, Forrest and Hofmeyr have described models for network intrusion detection and virus detection based on an immunological distinction between “self” (regarded as nondangerous) and “nonself” (regarded as dangerous),5 and at least one company has introduced cybersecurity products based on these models. The primary advantage of the immunological approach in this context is that attacks need not be identified by matching a potential threat to the known signature of a previously identified virus or worm; rather, a threat can be identified behaviorally as a “nonself” entity.
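The self/nonself distinction can be made concrete with a small sketch. The following Python fragment is illustrative only: the traces, system-call names, and window size are hypothetical assumptions, and real systems of the kind Forrest and Hofmeyr describe use far richer machinery. It treats the set of short system-call n-grams observed during normal operation as “self” and scores a new trace by the fraction of its n-grams that are unknown.

```python
# Illustrative self/nonself anomaly detection in the spirit of
# immunological intrusion detection. "Self" is the set of short
# system-call n-grams seen during normal operation; a trace is
# scored by the fraction of its n-grams that are foreign ("nonself").
# All traces and call names below are hypothetical.

def ngrams(trace, n=3):
    """Return the set of length-n windows over a call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def build_self(normal_traces, n=3):
    """Collect every n-gram observed during normal operation."""
    self_db = set()
    for trace in normal_traces:
        self_db |= ngrams(trace, n)
    return self_db

def anomaly_score(trace, self_db, n=3):
    """Fraction of the trace's n-grams that are not in 'self'."""
    grams = ngrams(trace, n)
    if not grams:
        return 0.0
    return len(grams - self_db) / len(grams)

# Hypothetical normal behavior of some monitored program.
normal = [["open", "read", "read", "close"],
          ["open", "read", "write", "close"]]
self_db = build_self(normal)

benign = ["open", "read", "read", "close"]
suspect = ["open", "exec", "socket", "write"]
print(anomaly_score(benign, self_db))   # 0.0 (entirely "self")
print(anomaly_score(suspect, self_db))  # 1.0 (no known n-grams)
```

A score near 0 marks a trace as consistent with past behavior; a score near 1 marks it as behaviorally foreign. This is the sense in which no signature of a specific, previously identified virus or worm is required.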
Despite some promising results, it remains to be seen how far immunological approaches to cybersecurity can be pushed. Given that the immune system is a very complex entity whose operation is not fully understood, a bottom-up development of a computer security system based on the immune system is not possible today. The human immune system has evolved to its present state owing to many evolutionary accidents as well as the constraints imposed by biology and chemistry, much of which is likely to be artifactual and largely irrelevant both to the underlying principles that the system embodies and to the design of a computer security system. Further, the immune system is oriented toward problems of survival. By contrast, computer security is traditionally concerned with confidentiality, accountability, and trustworthiness, and the relevance of immunological processes to confidentiality and accountability remains entirely unclear today.
USING ATTACK TECHNIQUES FOR DEFENSIVE PURPOSES
Viruses and worms exploit vulnerabilities in a system to take control of it. But the payload of a virus or a worm can, in fact, be programmed either to harm the system or to benefit it. In particular, it is technically possible to propagate system fixes through such a mechanism. That is, a “white hat” virus could be programmed to exploit a system vulnerability in order to enter a system, to close that vulnerability by applying a system patch or changing certain administrative settings, and finally to self-destruct.
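The propagation dynamics of such a mechanism can be illustrated with a toy simulation containing no exploit code. In the following sketch, the network topology, host count, and seed host are purely hypothetical assumptions; each round, every “patched” host patches its still-vulnerable neighbors.

```python
# Toy simulation of "white hat" patch propagation on a random
# network. Each round, every patched host patches its unpatched
# neighbors. The topology and parameters are illustrative only.
import random

def simulate(n_hosts=1000, links_per_host=4, seed=0):
    rng = random.Random(seed)
    # Build a random undirected graph of hosts.
    neighbors = {h: set() for h in range(n_hosts)}
    for h in range(n_hosts):
        for peer in rng.sample(range(n_hosts), links_per_host):
            if peer != h:
                neighbors[h].add(peer)
                neighbors[peer].add(h)
    patched = {0}  # the fix is released on a single host
    rounds = 0
    while len(patched) < n_hosts:
        frontier = set()
        for h in patched:
            frontier |= neighbors[h] - patched
        if not frontier:  # unreachable hosts are never patched
            break
        patched |= frontier
        rounds += 1
    return rounds, len(patched)

rounds, covered = simulate()
print(rounds, covered)  # coverage grows roughly geometrically
```

Because each patched host reaches several neighbors per round, coverage grows roughly geometrically and nearly the whole network is patched within a handful of rounds. The same dynamic illustrates why the originator retains so little control over the spread.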
Known for many years,6 this type of application has advantages and disadvantages. For example, an advantage is that fixes could be propagated very rapidly. But since this approach was first proposed, the disadvantages have been sufficient to prevent its serious consideration. These disadvantages are technical, ethical/legal, and psychological in nature.7 Potential technical disadvantages include the originator’s lack of control over how the “white hat” virus or worm will spread, confusion over the intent or purpose of a virus or worm whose behavior may be superficially similar to a nefarious one, waste of system and network resources, and potential escape from any controlled environment. Potential ethical/legal issues include unauthorized data modification, copyright and ownership issues attending the modification of resident software, and the legitimization of activities that are generally presumed dangerous today.8 Potential psychological issues include the violation that may be felt by users regarding the loss of control over their systems that viruses and worms necessarily entail.

6. An early mention of this idea can be found in Fred Cohen, “Trends in Computer Virus Research,” ASP, 1991, available at http://vx.netlux.org/lib/afc06.html; and Frederick B. Cohen, “A Case for Benevolent Viruses,” 1991, available at http://all.net/books/integ/goodvcase.html.
7. Vesselin Bontchev, “Are ‘Good’ Computer Viruses Still a Bad Idea?,” Virus Test Center, University of Hamburg, Germany; available at http://vx.netlux.org/lib/avb02.html. See also Eugene H. Spafford, “Response to Fred Cohen’s ‘Contest’,” The Sciences, January/February 1992, p. 4.
8. For further discussion, see Eugene H. Spafford, “Are Computer Break-ins Ethical?” Journal of Systems and Software, 17(1):41-48, 1992.
A special case of using attack techniques for defensive purposes arises in the realm of active defense. Traditionally, cybersecurity is based on the notion of passive defense—a defense that imposes no penalty on a would-be attacker apart from the time that the attacker needs to mount its attack. Under such circumstances, the attacker can continue attacking unpunished until success or exhaustion occurs.
The notion of cyber-retaliation as part of an active defense is intended to make cyberattackers pay a price for attacking (whether or not they are successful), thus deterring potential attackers from attacking in the first place. But cyber-retaliation raises both technical and policy issues.
From a technical standpoint, the tools available today to support retaliation are inadequate. Identification of cyberattackers remains problematic, as indicated in Section 5.1 (Attribution). Today, identifying an attacker is an enormously time-consuming task; even when the identification succeeds, it can take weeks. Furthermore, considerable uncertainty often remains about the actual identity of the attacker, who may be an individual using an institution’s computer without the knowledge or permission of that institution. Such uncertainty raises the possibility that one’s retaliatory efforts might result in significant collateral damage to innocents without even necessarily affecting the perpetrator. In addition, the technical mechanisms for striking back are generally oriented toward causing damage to computer systems rather than being directed at individual perpetrators.
From a policy standpoint, cyber-retaliation raises issues such as the dividing line between regarding a cyberattack as a law enforcement matter versus a national security matter, the appropriate definitions of concepts such as “force” or “armed attack” as they apply to cyberattacks, the standards of proof required to establish the origin of a cyberattack, and the nature of the appropriate rules of engagement that might be associated with a cyberattack.
These comments should not be taken as denigrating passive cybersecurity measures, which remain central to the nation’s cybersecurity posture. Nevertheless, passive defenses have strong limitations, and active defense may provide a more robust set of options if the technical and policy issues can be resolved.