Appendix F
Suggested Elements of a Naval Information Assurance Research and Development Program

NETWORK LEVEL

The core fabric of the Internet and the Global Information Grid (GIG) is composed of standard protocols that are vulnerable to exploitation. Sophisticated adversaries, skilled in the art of cyber exploitation and cyberattack, can design their exploits to be difficult to detect. Developing and maintaining survivable networks requires secure network functions (routing, addressing) to prevent attacks and to assure correct and attested routing and addressing, as well as countermeasures to defend against successful attacks. Examples of ongoing research that the Navy can build on in this area include the following:

  • BGP/DNS protocol “hardening.” Border Gateway Protocol (BGP) and Domain Name System (DNS) are core network protocols responsible for routing and naming services for all Internet Protocol traffic. Although these protocols have been established and in use for many years at the core of the Internet, the research community has identified a persistent set of vulnerabilities affecting them, prompting broad and rapid debate about fixes and upgrades. Many experts agree that these core protocols are currently not secure, which means that they can be exploited to reroute traffic to unauthorized destinations in a manner that is not detectable.1 A number of ongoing research projects from the Department of Homeland Security (DHS) and prior research from the Defense Advanced Research Projects Agency (DARPA) have developed secure implementations of BGP and DNS, but these have not been adequately vetted and are not broadly deployed. The Office of Management and Budget recently mandated federal adoption of secure DNS.2 The Navy should be a leader in adopting secure DNS.

1 Joel Hruska. 2008. “Gaping Hole Opened in Internet’s Trust-based BGP Protocol,” Ars Technica, August 27. Available at <http://arstechnica.com/security/news/2008/08/inherent-security-flaw-poses-risk-to-internet-users.ars>. Accessed January 22, 2010.
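The rerouting risk described above stems from BGP's lack of origin authentication combined with longest-prefix-match forwarding. The toy Python sketch below (illustrative prefixes and autonomous-system names, not a real BGP implementation) shows how an unauthenticated, more-specific announcement silently diverts traffic:

```python
import ipaddress

# Toy model of BGP-style route selection: forwarding follows the most
# specific announced prefix, and plain BGP does not authenticate who is
# allowed to announce it. Prefixes and AS numbers are illustrative.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
}

def next_hop(dst: str, table: dict):
    """Return the origin of the most specific route covering dst, if any."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

print(next_hop("203.0.113.10", routes))  # AS64500 (legitimate origin)

# A hijacker announces a more-specific prefix; longest-prefix match
# silently prefers it, rerouting traffic for half the block.
routes[ipaddress.ip_network("203.0.113.0/25")] = "AS64666 (hijacker)"
print(next_hop("203.0.113.10", routes))  # AS64666 (hijacker)
```

Secure BGP proposals counter exactly this by cryptographically attesting which autonomous system may originate a given prefix.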



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

  • Network filtering. Current network filtering strategies tend to be rule-based or signature-specific. A number of research projects at DARPA and the National Science Foundation (NSF) have developed content-based and connection-oriented anomaly detection to detect incoming attacks as well as outgoing exfiltration of sensitive information. Figure F.1 provides a view of one such approach to protecting Web services from cross-site scripting attacks. High-speed networks and encrypted channels complicate matters by exacerbating the problem of content inspection. Consequently, network filtering may have a limited future, forcing the use of technologies that operate closer to the distributed computing nodes at the ends of the network.

  • Network visualization. Current tools for alerting network operators to attack conditions are text-oriented and voluminous, making the job of understanding the state of the network arduous and error-prone. Network visualization tools exploit a person’s capability to process visual cues rapidly for pattern recognition and anomaly detection. Prior and ongoing work at DARPA has developed network visualization tools that can be leveraged to improve the capabilities of network operation centers to detect and respond to attacks.

  • Resilient networks. In the category of protection, resilient networks ensure that networks can continue to provide service even while under severe denial-of-service attacks. Prior work at DARPA and NSF in overlay networks provides intelligent network elements to detect denial-of-service attacks and automatically throttle traffic to critically needed services.

  • Source attribution. One of the fundamental limitations of the Internet is that connections are essentially anonymous. The core design of the Internet established a simple means whereby disparate, geographically and logically separated networks simply announce themselves to one another, and each establishes its own independent routing infrastructure. As a result, it is difficult to ascertain where a connection or an attack is actually coming from, especially when the authority managing a particular network is unfriendly. Source attribution remains an active research area that the Intelligence Advanced Research Projects Activity (IARPA) is funding.

2 Executive Office of the President, Office of Management and Budget memo, Washington, D.C., August 22, 2008, to Federal Chief Information Officers, requires the adoption of Domain Name System security standards as set forth in National Institute of Standards and Technology (NIST) Special Publication 800-53r1, and that these requirements be fully met by December 2009. See Ron Ross, Stu Katzke, Arnold Johnson, Marianne Swanson, Gary Stoneburner, and George Rogers. 2006. Recommended Security Controls for Federal Information Systems, Special Publication 800-53, Revision 1, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, Md., December. Accessed April 30, 2009.
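As a rough illustration of the content-based anomaly detection mentioned under network filtering, the sketch below trains a byte n-gram model on benign requests and flags payloads dominated by previously unseen n-grams. The payloads, n-gram size, and threshold are illustrative assumptions, not a description of any specific funded system:

```python
# Sketch of content-based anomaly detection: learn the n-grams of normal
# traffic, then score new payloads by the fraction of unseen n-grams.
def ngrams(payload: str, n: int) -> set:
    return {payload[i:i + n] for i in range(len(payload) - n + 1)}

class ContentAnomalyFilter:
    def __init__(self, n: int = 3, threshold: float = 0.5):
        self.n = n
        self.threshold = threshold  # fraction of unseen n-grams tolerated
        self.known = set()

    def train(self, payload: str) -> None:
        self.known |= ngrams(payload, self.n)

    def score(self, payload: str) -> float:
        grams = ngrams(payload, self.n)
        return len(grams - self.known) / len(grams) if grams else 0.0

    def is_anomalous(self, payload: str) -> bool:
        return self.score(payload) > self.threshold

f = ContentAnomalyFilter()
for normal in ("GET /index.html HTTP/1.1", "GET /images/logo.png HTTP/1.1"):
    f.train(normal)

print(f.is_anomalous("GET /index.html HTTP/1.1"))                 # False
print(f.is_anomalous("GET /<script>alert(1)</script> HTTP/1.1"))  # True
```

Unlike a signature filter, nothing here enumerates known attacks; the model only knows what normal content looks like, which is why encrypted channels (whose bytes all look anomalous) complicate this approach.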

FIGURE F.1 An example Web-layer content sensor and filter. NOTE: Acronyms are defined in Appendix A.

  • Decoy networking. Sophisticated adversaries will often conduct cyber-based reconnaissance prior to actually attacking. Presenting decoy networks can be an effective strategy for luring an adversary to a fishbowl network isolated from genuine naval forces networks, from which the adversary can be monitored for methods, behavior, and sources. Furthermore, decoy networking may present to an adversary an arbitrarily large network of bogus but realistic elements that confound and confuse the enemy’s attack strategies and targeting. Very little research has been conducted in this area except for work on honeynets and honeypots. Some recent work has been funded partially by DHS and the Army Research Office (ARO). Figure F.2 provides a view of an experimental broadcast decoy injection framework for a wireless fidelity (Wi-Fi) network.

SYSTEM LEVEL

Information technology (IT) systems composed of many distributed components, perhaps each with varying levels of security, pose serious information assurance (IA) problems. Large collections of common components pose a severe threat, in that a single common attack may lead to catastrophic consequences, but they also present an opportunity that may be leveraged to enhance security. Research topics in this area include the following:

  • Secure composition. Today, a single vulnerable software component can compromise the integrity of an entire system. Research in the secure composition of distributed components, funded by NSF, aims to enable the composition of components into systems in which security properties of the whole are guaranteed, or at least bounded. Such means are assumed to have been solved in

the long-term vision of the GIG in the context where deep application knowledge may be required for effective composition. The problem is far more difficult than simply defining a set of interface policies.

FIGURE F.2 A decoy- or bait-injection framework. NOTE: Acronyms are defined in Appendix A.

  • Artificial diversity. Military and federal networks as a whole are currently actively managed to be uniformly homogeneous. This makes them easier to manage on the one hand but, on the other, uniformly susceptible to a single contagion. To break monoculture and increase resiliency, artificial diversity techniques funded by DARPA introduce diversity into the computing fabric; these techniques permit applications to interoperate, but change the structural properties of code to make different instances of the same software diverse in implementation.

  • Collaborative software communities. While monocultures pose a risk as described above, some DARPA-funded work in application communities and related research funded by NSF have turned this vulnerability into a potential IA asset. This is accomplished by making each instance of the common software a

sensor on the network, dynamically sharing attack data with other instances in order to responsively harden other instances of the software against in-progress attacks that they may also experience. Research focused on developing a number of related security-alert-sharing technologies (that maintain privacy across administrative domains) has also been sponsored by NSF and DHS.

  • Privacy-preserving technologies. Security of systems requires confidentiality of data. Encryption logically serves as a fundamental capability, but it is insufficient, especially in the context of applications in which data are shared across domains with various levels of mutual (dis-)trust. This notion extends to query processing, whereby the questions posed by an organization seeking data about some topic may themselves be considered confidential. IARPA at present sponsors work in secure multiparty computation and privacy-preserving technologies permitting enclaves to share data securely and privately without revealing what information is sought by either party. These technologies promise to allow effective sharing while maintaining strict compartmentalization.

HOST LEVEL

The fundamental IA challenge remains at the end points of networks. The core host software platforms and applications present a constant flow of discovered vulnerabilities that can be exploited by a persistent adversary in possession of the necessary skills and resources. A generation ago the technical principles of object-oriented programming were developed, whereby systems can be dynamically composed of objects that permit the reuse of software and the sharing of passive and active data among software components. Embedded in the design capabilities afforded by object-oriented design methods is the ability to dynamically communicate, interpret, and execute software among distributed computing components—that is, modern object-oriented systems provide code injection platforms. Injected code may be benign and useful (such as JavaScript drawing a table of information on a Web page), or malicious and harmful (such as a Trojan embedded in a host by a malicious e-mail attachment). Furthermore, driven by customer demand and time-to-market considerations, commercial application vendors typically introduce products to market that are insufficiently tested, evaluated, and debugged, thereby providing sophisticated adversaries with the opportunity to exploit software design flaws that were not discovered by the vendor prior to product release.

Much of the response by the commercial security marketplace has been to provide signature-based detection and filtering solutions requiring the continual updating of a growing signature base of known software exploitations. The inevitable response by sophisticated adversaries is to generate new attack vectors for which no signatures are yet available. This cat-and-mouse game was once quite manageable, since the time from discovering a vulnerability to generating an attack vector to exploit that vulnerability was measured in time frames of

weeks to days. New attack tools have clearly shifted the balance to the attacker in two ways. First, design patterns for attack tools have been developed that allow the rapid creation of zero-day attack vectors; second, tools have been designed to allow the generation of a very large set of variants that can avoid discovery, thereby forcing a defense to look for an unmanageable number of attack signatures. In summary, signature-based defenses will become technically obsolete, while current IA architecture designs remain dependent on such defenses.

Furthermore, the offshore outsourcing of development, both hardware and software, exacerbates the problem by providing ample opportunity for a sophisticated adversary purposely to embed its attack vectors into commercial off-the-shelf (COTS) products that are regularly procured by the Department of Defense (DOD). To counter this fundamental danger of commercial IT practice, a number of advanced concepts to harden the host and improve the security of its software are being actively pursued. Topics include methods to create new secure and safe software and to automate security policy implementation. Many methods have been proposed to create secure software, but these do not adequately address the huge legacy-software base that runs and operates modern enterprise systems and the Internet in use today. A few representative research topics that deal with improving the security of systems broadly in use are enumerated below:

  • Counter-evasion techniques for obfuscated malware. Given the obsolescence of signature-based technologies, new and effective methods to identify malware embedded in content flows are required to keep pace with the advances made by sophisticated adversaries. Rich content flows, including Web pages, documents, and other media, may legitimately include code for transfer to a recipient computer. Automatically determining the intent of code, distinguishing malice from useful function, remains an open research problem. Furthermore, adversaries have cleverly obfuscated and embedded malicious code in content streams where code is not ordinarily expected. Detecting these stealth-attack vectors is likewise an open problem.

  • Virtualization for security. Virtualization technology has been widely adopted for server consolidation and is beginning to be adopted to support multilevel security needs. However, virtualization can also be used to isolate untrusted applications from the host operating system. For example, an application can be considered untrusted if it communicates with untrusted networks (such as the Non-Classified Internet Protocol Router Network), runs untrusted content (such as media files from an untrusted source), or has unknown provenance. DARPA-funded work has developed application-level virtualization that transparently isolates untrusted applications from trusted systems and networks.

  • Self-healing software. Substantial progress has been made in designing software that monitors and models its own behavior. This line of work on anomaly detection has been extended recently by work funded by DARPA and the Air Force

Office of Scientific Research (AFOSR) to develop techniques that make software aware of its own operation, so that it can detect violations of its integrity and repair itself, leaving it more robust after an attack, much as the human immune system does.

  • Hardware life-cycle tamper resistance. DARPA’s Trust in Integrated Circuits program is developing techniques to detect compromises in chip-level designs and implementations during supply-chain life-cycle attacks. Far more investment is needed in this line of work to develop tamper-resistant hardware designs.

USER LEVEL

Many IA research and development (R&D) practitioners have come to agree that system users constitute a core security threat, primarily owing to errors and mistakes but also to purposeful malfeasance. The insider threat has been known for quite some time but has not been adequately addressed. A growing body of literature now recognizes this vexing security problem. Considerable R&D is needed in this area, including the following:

  • Behavior-based security. One of the most effective techniques for detecting insider threats is to analyze user behavior patterns for inappropriate access of network resources such as file servers, printers, and outbound connections. Ongoing work at the MITRE Corporation employs Bayesian analysis of user behavior to detect certain insider threats with reasonably high reliability. Far more research is needed to understand user intent in order to detect malicious or dangerous actions. Limited work is being sponsored by DHS and ARO in this area.

  • Defense through uncertainty. An emerging area initially funded by IARPA and AFOSR, this topic leverages uncertainty in deployed environments to make it difficult for an adversary to exploit them. Knowledge and information about the target environment are sufficiently “fuzzed” to confuse the attacker and confound the intended end goals. One example is to present purposely erroneous server operating system images for entities connected on the network, which can result in an intended attack being delivered to an incorrect operating system environment. Another example is using decoy documents placed intelligently in a network so that if the documents are exfiltrated, the home organization will be aware of the theft but the adversary will not realize the documents are bogus. Many other opportunities to confound and confuse an enemy are possible by leveraging the principle of uncertainty. Of course, the use of these tactics requires management and control processes to ensure that desired activities are not inadvertently disrupted.

PRIVILEGED USER LEVEL

Perhaps the most vexing and difficult security problem is best captured by the adage “Who checks the checkers?” Security personnel are extremely privileged

users with access to all key functions of the enterprise system. A recent example of malfeasance in this area involved a system administrator who captured San Francisco’s entire administrative IT infrastructure and denied access to all system administrators but himself.3 Critical weapons systems are designed with safety systems and technologies that inhibit a single insider from unauthorized action, but little work has been done in the research community to address the core question of how to secure security systems from the security and operating personnel who are the deepest insiders and who potentially pose the insider threat with the highest risk.

  • Role- and behavior-based access control. A fundamental tenet of IA is that data and applications should be accessed only by authenticated and authorized users who require access to conduct their business. The pervasive use of access controls based on credentials (IDs, passwords, and PINs) is woefully inadequate in complex network environments. Role-based access control considers means of associating the logical roles of a user with the specific data and applications used by the roles defined within an enterprise. Research in this area funded by NSF has been extended by DARPA and some industrial laboratories to also associate “behavior” with a user’s credentials as a means of granting access to network resources.

  • Self-protecting security technologies. In much the same way that networks are threatened by denial-of-service attacks, host-based security technologies are threatened by denial-of-sensor attacks. A user may disable a host security system by accident, or a system administrator may bypass a security subsystem by design. This threat is just beginning to be recognized in the research community, and some proposed work deals with security technologies that are protected from this threat. Work done at the Sandia National Laboratories on safety technologies for nuclear weaponry may be brought to bear on this underfunded area of research related to the insider threat.

3 Ashley Surdin. 2008. “San Francisco Case Shows Vulnerability of Data Networks,” Washington Post, August 11, p. A03. Accessed March 16, 2009.
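The role- and behavior-based access control described above can be sketched as follows. The roles, permission names, and the simple volume-based behavioral check are hypothetical illustrations, not a fielded design:

```python
# Minimal sketch of role- and behavior-based access control: a credential
# check against a role's permitted actions, plus a behavioral baseline that
# flags authorized actions whose volume departs from the role's norm.
# Role names, permissions, and the hourly limit are hypothetical.
ROLE_PERMS = {
    "logistics_clerk": {"read:supply_db"},
    "sysadmin": {"read:supply_db", "write:supply_db", "admin:hosts"},
}

class AccessMonitor:
    def __init__(self, role: str, hourly_limit: int = 100):
        self.perms = ROLE_PERMS[role]
        self.limit = hourly_limit  # behavioral baseline for this role
        self.recent = 0

    def request(self, perm: str) -> str:
        # Role check: is the action within the assigned role at all?
        if perm not in self.perms:
            return "deny: outside role"
        # Behavioral check: even authorized actions are flagged when the
        # access pattern exceeds the role's baseline.
        self.recent += 1
        if self.recent > self.limit:
            return "flag: anomalous volume for role"
        return "allow"

m = AccessMonitor("logistics_clerk", hourly_limit=2)
print(m.request("read:supply_db"))   # allow
print(m.request("write:supply_db"))  # deny: outside role
print(m.request("read:supply_db"))   # allow
print(m.request("read:supply_db"))   # flag: anomalous volume for role
```

The point of the second check is the one made in the text: credentials alone are inadequate, so even a correctly authenticated, correctly authorized user is monitored against the expected behavior of the role.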
