
Information Assurance for Network-Centric Naval Forces (2010)

Suggested Citation:"Appendix F: Suggested Elements of a Naval Information Assurance Research and Development Program." National Research Council. 2010. Information Assurance for Network-Centric Naval Forces. Washington, DC: The National Academies Press. doi: 10.17226/12609.

Appendix F
Suggested Elements of a Naval Information Assurance Research and Development Program

NETWORK LEVEL

The core fabric of the Internet and the Global Information Grid (GIG) is composed of standard protocols that are vulnerable to exploitation. Sophisticated adversaries, skilled in the art of cyber exploitation and cyberattack, can design their exploits to be difficult to detect. Developing and maintaining survivable networks requires securing core network functions (routing, addressing) to prevent attacks and to assure correct, attested operation, as well as countermeasures to defend against attacks that succeed. Examples of ongoing research that the Navy can build on in this area include the following:

  • BGP/DNS protocol “hardening.” Border Gateway Protocol (BGP) and Domain Name System (DNS) are the core network protocols responsible for routing and naming services for all Internet Protocol traffic. Although these protocols have been established and in use for many years at the core of the Internet, the research community has identified a persistent set of vulnerabilities affecting them, prompting broad and rapid debate about fixes and upgrades. Many experts agree that these core protocols are currently not secure, which means that they can be exploited to reroute traffic to unauthorized destinations in a manner that is not detectable.1 A number of ongoing research projects from the Department of Homeland Security (DHS) and prior research from the Defense Advanced Research Projects Agency (DARPA) have developed secure implementations of

1. Joel Hruska. 2008. “Gaping Hole Opened in Internet’s Trust-based BGP Protocol,” Ars Technica, August 27. Available at <http://arstechnica.com/security/news/2008/08/inherent-security-flaw-poses-risk-to-internet-users.ars>. Accessed January 22, 2010.


BGP and DNS, but these have not been adequately vetted and are not broadly deployed. The Office of Management and Budget recently mandated federal adoption of secure DNS.2 The Navy should be a leader in adopting secure DNS.
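The core idea behind DNS hardening can be seen in a toy sketch: each published record carries a verifiable signature, so a resolver can detect a forged answer. Real DNSSEC uses public-key signatures carried in RRSIG/DNSKEY records; the shared-key HMAC, zone key, and record values below are stand-ins chosen only for brevity.

```python
import hmac
import hashlib

# Assumption for this sketch: resolver and zone share a trusted key.
# Real DNSSEC instead distributes public keys through a chain of trust.
ZONE_KEY = b"example-zone-signing-key"

def sign_record(name: str, rtype: str, rdata: str) -> str:
    """Sign a (name, type, data) record so tampering is detectable."""
    msg = f"{name}|{rtype}|{rdata}".encode()
    return hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()

def verify_record(name: str, rtype: str, rdata: str, sig: str) -> bool:
    """A resolver accepts a record only if its signature checks out."""
    return hmac.compare_digest(sign_record(name, rtype, rdata), sig)

record = ("www.navy.example", "A", "203.0.113.7")
sig = sign_record(*record)
print(verify_record(*record, sig))                                  # True
print(verify_record("www.navy.example", "A", "198.51.100.9", sig))  # False: spoofed answer rejected
```

Without the signature check, nothing distinguishes the spoofed answer from the genuine one, which is precisely the weakness the unhardened protocols exhibit.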

  • Network filtering. Current network filtering strategies tend to be rule-based or signature-specific. A number of research projects at DARPA and the National Science Foundation (NSF) have developed content-based and connection-oriented anomaly detection to detect incoming attacks as well as outgoing exfiltration of sensitive information. Figure F.1 provides a view of one such approach to protecting Web services from cross-site scripting attacks. High-speed networks and encrypted channels exacerbate the problem of content inspection. Consequently, network filtering may have a limited future, forcing the use of technologies that operate closer to the distributed computing nodes at the ends of the network.
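A minimal sketch of the content-based approach: model the byte n-grams observed in benign traffic, then flag payloads dominated by never-before-seen n-grams. Production systems model far richer statistics; the detector, threshold, and sample payloads below are purely illustrative.

```python
def ngrams(data: bytes, n: int = 3) -> set:
    """Return the set of byte n-grams appearing in a payload."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

class NgramAnomalyDetector:
    """Content-based filter: score a payload by its fraction of unseen n-grams."""

    def __init__(self, n: int = 3, threshold: float = 0.5):
        self.n, self.threshold = n, threshold
        self.known = set()          # n-grams observed in benign training traffic

    def train(self, benign_payloads):
        for p in benign_payloads:
            self.known |= ngrams(p, self.n)

    def score(self, payload: bytes) -> float:
        grams = ngrams(payload, self.n)
        if not grams:
            return 0.0
        return len(grams - self.known) / len(grams)

    def is_anomalous(self, payload: bytes) -> bool:
        return self.score(payload) > self.threshold

det = NgramAnomalyDetector()
det.train([b"GET /index.html HTTP/1.1", b"GET /status HTTP/1.1"])
print(det.is_anomalous(b"GET /index.html HTTP/1.1"))                      # False: all n-grams known
print(det.is_anomalous(b"\x90\x90\x90\xcc\xcc shellcode-like \xff\xfe"))  # True: mostly unseen content
```

Note that this detector needs the payload in the clear, which illustrates the point above: encrypted channels push such inspection toward the endpoints.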

  • Network visualization. Current tools for alerting network operators to attack conditions are text-oriented and voluminous, making the job of understanding the state of the network arduous and error-prone. Network visualization tools exploit a person’s capability to process visual cues rapidly for pattern recognition and anomaly detection. Prior and ongoing work at DARPA has developed network visualization tools that can be leveraged to improve the capabilities of network operation centers to detect and respond to attacks.

  • Resilient networks. In the category of protection, resilient networking ensures that networks can continue to provide service even while under severe denial-of-service attack. Prior work at DARPA and NSF in overlay networks provides intelligent network elements to detect denial-of-service attacks and automatically throttle traffic to critically needed services.
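One simple throttling mechanism such an overlay element might apply is a token bucket: each source gets a refillable allowance, so a flood is capped at its burst budget while legitimate traffic continues. This is a sketch with an explicit simulated clock; rates and limits are illustrative.

```python
class TokenBucket:
    """Per-source rate limiter of the kind an overlay node can use to throttle
    a suspected denial-of-service flood while preserving critical flows."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = None          # timestamp of the previous packet

    def allow(self, now: float) -> bool:
        if self.last is not None:
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=2.0, burst=5.0)
flood = [bucket.allow(0.0) for _ in range(10)]   # 10 packets arrive at once
print(flood.count(True))    # 5: only the burst allowance gets through
print(bucket.allow(1.0))    # True: tokens refill as time passes
```

In a deployed overlay, one bucket per source (or per service class) lets the network shed flood traffic while guaranteeing a floor of capacity to critical services.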

  • Source attribution. One of the fundamental limitations of the Internet is that connections are essentially anonymous. The core design of the Internet established a simple arrangement whereby disparate, geographically and logically separated networks announce themselves to one another, and each establishes its own independent routing infrastructure. As a result, it is difficult to ascertain where a connection or an attack is actually coming from, especially when the authority managing a particular network is unfriendly. Source attribution remains an active research area that the Intelligence Advanced Research Projects Activity (IARPA) is funding.

2. Executive Office of the President, Office of Management and Budget memo, Washington, D.C., August 22, 2008, to Federal Chief Information Officers, requires the adoption of Domain Name System security standards as set forth in National Institute of Standards and Technology (NIST) Special Publication 800-53r1, and that these requirements be fully met by December 2009. See Ron Ross, Stu Katzke, Arnold Johnson, Marianne Swanson, Gary Stoneburner, and George Rogers. 2006. Recommended Security Controls for Federal Information Systems, Special Publication 800-53, Revision 1, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, Md., December. Available at <http://csrc.nist.gov/publications/nistpubs/800-53-Rev1/800-53-rev1-final-clean-sz.pdf>. Accessed April 30, 2009.

FIGURE F.1 An example Web-layer content sensor and filter. NOTE: Acronyms are defined in Appendix A.


  • Decoy networking. Sophisticated adversaries will often conduct cyber-based reconnaissance prior to actually attacking. Presenting decoy networks can be an effective strategy for luring an adversary into a fishbowl network isolated from genuine naval forces networks, from which the adversary can be monitored for methods, behavior, and sources. Furthermore, decoy networking may present to an adversary a view of an arbitrarily large network of bogus but realistic elements that confound and confuse the enemy’s attack strategies and targeting. Very little research has been conducted here apart from work on honeynets and honeypots; some recent work has been funded partially by DHS and the Army Research Office (ARO). Figure F.2 provides a view of an experimental broadcast decoy injection framework for a wireless fidelity (Wi-Fi) network.
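A closely related mechanism is the honeytoken: bogus but realistic bait (here, a fake service credential) seeded into the environment, whose later use is by construction evidence of an intruder. The registry, credential format, and alert text below are invented for illustration only.

```python
import secrets

class DecoyRegistry:
    """Mint decoy credentials and detect their use.

    Legitimate users have no reason to touch a decoy, so any login attempt
    with one is a high-confidence intrusion signal."""

    def __init__(self):
        self.tokens = set()

    def mint_decoy_credential(self, user_prefix: str = "svc") -> str:
        cred = f"{user_prefix}-{secrets.token_hex(4)}:{secrets.token_hex(8)}"
        self.tokens.add(cred)
        return cred

    def check_login_attempt(self, credential: str) -> str:
        if credential in self.tokens:
            return "ALERT: decoy credential used; likely intrusion"
        return "ok"

registry = DecoyRegistry()
bait = registry.mint_decoy_credential()      # planted on a fishbowl host
print(registry.check_login_attempt("admin:real-password"))  # ok: not a decoy
print(registry.check_login_attempt(bait))    # alert: only an intruder would try this
```

The same pattern scales up to entire decoy hosts and services: because nothing legitimate depends on them, their false-positive rate is essentially zero.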

SYSTEM LEVEL

Information technology (IT) systems composed of many distributed components, perhaps each with varying levels of security, pose serious information assurance (IA) problems. Large collections of common components present a severe threat, in that a single common attack may lead to catastrophic consequences, but also an opportunity that may be leveraged to enhance security. Research topics in this area include the following:

FIGURE F.2 A decoy- or bait-injection framework. NOTE: Acronyms are defined in Appendix A.

  • Secure composition. Today, a single vulnerable software component can compromise the integrity of an entire system. Research in the secure composition of distributed components, funded by NSF, aims to enable the composition of components into systems in which the security properties of the whole are guaranteed, or at least bounded. The long-term vision of the GIG assumes that this problem has been solved, in a context where deep application knowledge may be required for effective composition. The problem is far more difficult than simply defining a set of interface policies.

  • Artificial diversity. Military and federal networks as a whole are currently managed to be homogeneous. This makes them easier to manage on the one hand but uniformly susceptible to a single contagion on the other. To break this monoculture and increase resiliency, artificial diversity techniques funded by DARPA introduce diversity into the computing fabric; these techniques permit applications to interoperate but change the structural properties of code so that different instances of the same software are diverse in implementation.
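The principle can be sketched in a few lines: each deployed instance permutes an internal detail (here, the layout of a dispatch table), so an attack hard-coded against one instance's internals fails on another, while externally visible behavior is identical. Real systems diversify at a far deeper level (code layout, instruction selection); the mechanism below is illustrative only.

```python
from itertools import permutations

def make_instance(seed: int):
    """Build one 'instance' of the same program with a seed-dependent layout."""
    base = [("add", lambda a, b: a + b),
            ("sub", lambda a, b: a - b),
            ("mul", lambda a, b: a * b)]
    order = list(permutations(range(3)))[seed % 6]   # per-instance layout choice
    ops = [base[i] for i in order]
    slot_of = {name: slot for slot, (name, _) in enumerate(ops)}
    table = [fn for _, fn in ops]

    def call(name: str, a: int, b: int) -> int:
        return table[slot_of[name]](a, b)            # same semantics everywhere
    call.layout = tuple(name for name, _ in ops)     # the diversified internals
    return call

instances = [make_instance(seed) for seed in range(6)]
print({inst("add", 2, 3) for inst in instances})     # {5}: behavior is uniform
print(len({inst.layout for inst in instances}))      # 6: six distinct internal layouts
```

An exploit that assumed "slot 0 is always add" would succeed against at most one of these layouts, which is exactly the monoculture-breaking effect the research pursues.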

  • Collaborative software communities. While monocultures pose a risk as described above, some DARPA-funded work in application communities and related research funded by NSF have turned this vulnerability into a potential IA asset. This is accomplished by making each instance of the common software a sensor on the network, dynamically sharing attack data with other instances in order to responsively harden them against in-progress attacks that they may also experience. Research on a number of related security-alert-sharing technologies (that maintain privacy across administrative domains) has also been sponsored by NSF and DHS.
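The mechanism can be sketched as follows: the first instance hit by an exploit shares a fingerprint of the payload with its peers, which then block that payload before being hit themselves. The local "detector" here is a trivial byte-pattern stand-in; real community members would run the kinds of anomaly detectors described above.

```python
import hashlib

class Community:
    """Shared channel through which instances exchange attack fingerprints."""
    def __init__(self):
        self.members = []

    def share(self, fingerprint: str):
        for member in self.members:
            member.blocked.add(fingerprint)

class AppInstance:
    """One instance of the common software, acting as a sensor for its peers."""
    def __init__(self, community: Community):
        self.community = community
        community.members.append(self)
        self.blocked = set()

    def process(self, payload: bytes) -> str:
        fp = hashlib.sha256(payload).hexdigest()
        if fp in self.blocked:
            return "blocked"                 # hardened by a peer's earlier report
        if b"\x90\x90\x90" in payload:       # stand-in for a local detector
            self.community.share(fp)
            return "detected"
        return "processed"

community = Community()
a, b = AppInstance(community), AppInstance(community)
attack = b"\x90\x90\x90 exploit"
print(a.process(attack))             # "detected": instance A is hit first and alerts
print(b.process(attack))             # "blocked": instance B is hardened preemptively
print(b.process(b"normal request"))  # "processed": benign traffic is unaffected
```

The privacy-preserving alert-sharing research mentioned above addresses the harder version of this, where `share` must cross administrative domains without leaking sensitive traffic.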

  • Privacy-preserving technologies. Security of systems requires confidentiality of data. Encryption serves as a fundamental capability, but it is insufficient, especially in applications in which data are shared across domains with varying levels of mutual (dis)trust. The notion extends to query processing, whereby the questions posed by an organization seeking data about some topic may themselves be confidential. IARPA at present sponsors work in secure multiparty computation and privacy-preserving technologies permitting enclaves to share data securely and privately without revealing what information is sought by either party. These technologies promise to allow effective sharing while maintaining strict compartmentalization.
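One flavor of this capability is private set intersection: two enclaves learn which elements they share without revealing the rest. The toy below uses commutative exponentiation (a simplified Diffie-Hellman-style construction): each party blinds its hashed elements with a secret exponent, the other re-blinds, and shared elements collide in the doubly blinded sets. This is a didactic sketch only; the modulus, hashing, and protocol are nowhere near production-grade, and a real deployment would use a vetted library and group.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; chosen for brevity, not security

def h(element: str) -> int:
    """Hash an element into the group (illustrative, not a proper hash-to-group)."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def blind(elements, secret: int) -> set:
    return {pow(h(e), secret, P) for e in elements}

def reblind(blinded, secret: int) -> set:
    return {pow(v, secret, P) for v in blinded}

alice_set = {"ship-192.0.2.1", "ship-192.0.2.2", "ship-192.0.2.3"}
bob_set   = {"ship-192.0.2.2", "ship-192.0.2.9"}
a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1

# Exponents commute: (h^a)^b == (h^b)^a, so shared elements collide in the
# doubly blinded sets while unshared elements stay unintelligible.
a_double = reblind(blind(alice_set, a_key), b_key)
b_double = reblind(blind(bob_set, b_key), a_key)
print(len(a_double & b_double))   # 1: exactly one element in common
```

Neither party ever sees the other's raw elements, which is the "share without revealing what is sought" property the sponsored work aims to provide at scale.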

HOST LEVEL

The fundamental IA challenge remains at the end points of networks. The core host software platforms and applications present a constant flow of discovered vulnerabilities that can be exploited by a persistent adversary in possession of the necessary skills and resources. A generation ago the technical principles of object-oriented programming were developed, whereby systems can be dynamically composed of objects that permit the reuse of software and the sharing of passive and active data among software components. Embedded in the design capabilities afforded by object-oriented design methods is the ability to dynamically communicate, interpret, and execute software among distributed computing components—that is, modern object-oriented systems provide code injection platforms. Injected code may be benign and useful (such as JavaScript drawing a table of information on a Webpage), or malicious and harmful (such as a Trojan embedded in a host by a malicious e-mail attachment). Furthermore, driven by customer demand and time-to-market considerations, commercial application vendors typically introduce products to market that are less than sufficiently tested, evaluated, and debugged, thereby providing sophisticated adversaries with the opportunity to exploit software design flaws that have not been discovered by the vendor prior to product release.

Much of the response by the commercial security marketplace has been to provide signature-based detection and filtering solutions that require continual updating of a growing signature base for known software exploitations. The inevitable response by sophisticated adversaries is to generate new attack vectors for which no signatures are yet available. This cat-and-mouse game was once quite manageable, since the time from the discovery of a vulnerability to the generation of an attack vector exploiting it was measured in weeks to days. New attack tools have clearly shifted the balance to the attacker in two ways. First, design patterns for attack tools have been developed that allow the rapid creation of zero-day attack vectors; second, tools have been designed to generate a very large set of variants that can avoid discovery, forcing a defense to look for an unmanageable number of attack signatures. In summary, signature-based defenses are becoming technically obsolete, even as current IA architecture designs depend on them.

Furthermore, the offshore outsourcing of development, of both hardware and software, exacerbates the problem by providing ample opportunity for a sophisticated adversary to purposely embed its attack vectors into commercial off-the-shelf (COTS) products that are regularly procured by the Department of Defense (DOD). To counter this fundamental danger of commercial IT practice, a number of advanced concepts to harden the host and improve the security of its software are being actively pursued. Topics include methods to create new secure and safe software and to automate security policy implementation. Many methods have been proposed to create secure software, but these do not adequately address the huge legacy-software base that runs modern enterprise systems and the Internet in use today. A few representative research topics that deal with improving the security of systems broadly in use are enumerated below:

  • Counter-evasion techniques for obfuscated malware. Given the obsolescence of signature-based technologies, new and effective methods to identify malware embedded in content flows are required to keep pace with the advances made by sophisticated adversaries. Rich content flows, including Web pages, documents, and other media, may legitimately include code for transfer to a recipient computer. Automatically determining the intent of code, that is, distinguishing malice from useful function, remains an open research problem. Furthermore, adversaries have cleverly obfuscated and embedded malicious code in content streams where code is not ordinarily expected. Detecting these stealth attack vectors likewise remains an open problem.
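One counter-evasion heuristic, sketched below, exploits the fact that packed or encrypted code embedded in ordinary content tends to have much higher byte entropy than natural-language text, so high-entropy spans in fields where code is not expected warrant deeper inspection. The threshold and sample data are illustrative, and a real detector would combine many such signals.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; English text sits well below 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def suspicious(field: bytes, threshold: float = 5.0) -> bool:
    """Flag fields long enough to measure whose entropy exceeds the threshold."""
    return len(field) >= 32 and shannon_entropy(field) > threshold

plain = b"Quarterly readiness report for the fleet, unclassified summary."
packed = bytes(range(256))   # stand-in for packed or encrypted shellcode
print(suspicious(plain))     # False: natural language stays under the threshold
print(suspicious(packed))    # True: uniform bytes reach 8.0 bits/byte
```

Entropy alone is easy for an adversary to game (e.g., by padding with low-entropy filler), which is why detecting stealth vectors remains open rather than solved.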

  • Virtualization for security. Virtualization technology has been widely adopted for server consolidation and is beginning to be adopted to support multilevel security needs. However, virtualization can also be used to isolate untrusted applications from the host operating system. For example, an application can be considered untrusted if it communicates with untrusted networks (such as the Non-Classified Internet Protocol Router Network), runs untrusted content (such as media files from an untrusted source), or has unknown provenance. DARPA-funded work has developed application-level virtualization that, transparently to users, isolates untrusted applications from trusted systems and networks.

  • Self-healing software. Substantial progress has been made in designing software that monitors and models its own behavior. This line of work on anomaly detection has recently been extended by work funded by DARPA and the Air Force Office of Scientific Research (AFOSR) to develop techniques by which software is aware of its own operation, detects violations of its integrity, and repairs itself after an attack, leaving it more robust afterward, much as human immune systems do.
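The immune-system analogy can be made concrete with a small sketch: a wrapper monitors a function, and when an input triggers a fault it both recovers with a safe fallback instead of crashing and remembers a filter so the same bad input is rejected cheaply next time. Real self-healing systems operate at the binary or process level; the decorator, fallback value, and sample inputs here are illustrative.

```python
import functools

def self_healing(fallback):
    """Wrap a function so faults are absorbed and the triggering input is quarantined."""
    def deco(fn):
        quarantined = set()

        @functools.wraps(fn)
        def wrapper(arg):
            if arg in quarantined:
                return fallback           # learned filter: reject known-bad input
            try:
                return fn(arg)
            except Exception:
                quarantined.add(arg)      # "heal": record the fault and survive it
                return fallback
        return wrapper
    return deco

@self_healing(fallback=0)
def parse_length(field: str) -> int:
    return int(field)                     # crashes on malformed attack input

print(parse_length("42"))        # 42: normal operation is unchanged
print(parse_length("AAAA%n%n"))  # 0: fault absorbed, input quarantined
print(parse_length("AAAA%n%n"))  # 0: now filtered without re-triggering the bug
```

The second occurrence of the bad input never reaches the vulnerable code path, which is the sense in which the software ends up "more robust after attack."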

  • Hardware life-cycle tamper resistance. DARPA’s Trust in Integrated Circuits program is developing techniques to detect compromises introduced into chip-level designs and implementations through attacks on the supply chain over the product life cycle. Far more investment is needed in this line of work to develop tamper-resistant hardware designs.

USER LEVEL

Many researchers in IA research and development (R&D) have come to agree that system users constitute a core security threat, primarily owing to errors and mistakes but also to purposeful malfeasance. The insider attack threat has been known for quite some time but has not been adequately addressed. A growing body of literature now recognizes this vexing security problem. Considerable R&D is needed in this area, including the following:

  • Behavior-based security. One of the most effective techniques for detecting insider threats is to analyze user behavior patterns for inappropriate access to network resources such as file servers, printers, and outbound connections. Ongoing work at the MITRE Corporation employs Bayesian analysis of user behavior to detect certain insider threats with reasonably high reliability. Far more research is needed to understand user intent in order to detect malicious or dangerous actions. Limited work in this area is being sponsored by DHS and ARO.
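The underlying statistical idea can be sketched simply: build a per-user baseline of resource accesses, then flag accesses that are improbable under that history. The MITRE work cited above uses far richer Bayesian models over many features; this toy uses only add-one-smoothed access frequencies, and the users, resources, and threshold are invented for illustration.

```python
from collections import Counter

class UserProfile:
    """Per-user access baseline with add-one (Laplace) smoothing."""

    def __init__(self, history):
        self.counts = Counter(history)
        self.total = len(history)

    def probability(self, resource: str) -> float:
        vocab = len(self.counts) + 1            # +1 slot for unseen resources
        return (self.counts[resource] + 1) / (self.total + vocab)

    def is_anomalous(self, resource: str, threshold: float = 0.05) -> bool:
        return self.probability(resource) < threshold

# 100 historical accesses: wiki and mail dominate, shared-drive occasional.
analyst = UserProfile(["wiki", "mail", "wiki", "shared-drive", "mail"] * 20)
print(analyst.is_anomalous("mail"))               # False: routine access
print(analyst.is_anomalous("payroll-db-export"))  # True: never seen before
```

In practice the hard problem is the next step the text calls for: deciding whether an anomalous access reflects malicious intent or a legitimate change in duties.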

  • Defense through uncertainty. An emerging area initially funded by IARPA and AFOSR, this topic leverages uncertainty in deployed environments to make them difficult for an adversary to exploit. Knowledge and information about the target environment are sufficiently “fuzzed” to confuse the attacker and confound its intended end goals. One example is to present purposely erroneous server operating system images for entities connected on the network, which can result in an intended attack being delivered to an incorrect operating system environment. Another example is placing decoy documents intelligently in a network so that if the documents are exfiltrated, the home organization will be aware of the theft but the adversary will not realize the documents are bogus. Many other opportunities to confound and confuse an enemy are possible by leveraging the principle of uncertainty. Of course, the use of these tactics requires management and control processes to ensure that desired activities are not inadvertently disrupted.

PRIVILEGED USER LEVEL

Perhaps the most vexing and difficult security problem is best captured by the adage “Who checks the checkers?” Security personnel are extremely privileged users with access to all key functions of the enterprise system. A recent example of malfeasance in this area involved a system administrator who seized control of San Francisco’s entire administrative IT infrastructure and denied access to all system administrators but himself.3 Critical weapons systems are designed with safety systems and technologies that inhibit a single insider from unauthorized action, but little work has been done in the research community on the core question of how to secure security systems from the security and operating personnel who are the deepest insiders and who pose the highest-risk insider threat.

  • Role- and behavior-based access control. A fundamental tenet of IA is that data and applications are accessed only by authenticated and authorized users who require access to conduct their business. The pervasive use of access controls based on credentials (IDs, passwords, and PINs) is woefully inadequate in complex network environments. Role-based access control considers means of associating the logical roles of a user with the specific data and applications used by the roles defined within an enterprise. NSF-funded research in this area has been extended by DARPA and some industrial laboratories to also associate “behavior” with a user’s credentials as a means of granting access to network resources.
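The core role-based model is small enough to sketch directly: permissions attach to roles, users acquire permissions only through their assigned roles, and an access check walks that indirection. The role names, permission strings, and users below are invented for illustration.

```python
# Permissions attach to roles, never directly to users.
ROLE_PERMISSIONS = {
    "watch-officer": {"read:track-data", "write:watch-log"},
    "sysadmin":      {"read:audit-log", "admin:accounts"},
    "intel-analyst": {"read:track-data", "read:intel-reports"},
}

# Users acquire permissions only through their assigned roles.
USER_ROLES = {
    "ens_rivera": {"watch-officer"},
    "lt_chen":    {"watch-officer", "intel-analyst"},
}

def authorized(user: str, permission: str) -> bool:
    """Grant access only if some role assigned to the user carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(authorized("ens_rivera", "write:watch-log"))     # True: within her role
print(authorized("ens_rivera", "read:intel-reports"))  # False: not her role
print(authorized("lt_chen", "read:intel-reports"))     # True: analyst role
```

The behavior-based extension described above would add a second gate here, e.g., denying or escalating an otherwise role-authorized request when the user's recent behavior is anomalous for that role.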

  • Self-protecting security technologies. In much the same way that networks are threatened by denial-of-service attacks, host-based security technologies are threatened by denial-of-sensor attacks. A user may disable a host security system by accident, or a system administrator may bypass a security subsystem by design. This threat is only beginning to be recognized in the research community, and some proposed work deals with security technologies that are protected against it. Work done at the Sandia National Laboratories on safety technologies for nuclear weaponry may be brought to bear on this underfunded area of research related to the insider threat.
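A common building block for detecting a denial-of-sensor condition is an independent watchdog: the security subsystem emits heartbeats, and a separate monitor raises an alarm when they stop, whether the sensor was disabled by accident or deliberately bypassed. This sketch uses an explicit simulated clock; timings and alarm text are illustrative.

```python
class Watchdog:
    """Independent monitor that alarms when a security sensor falls silent."""

    def __init__(self, max_silence: float):
        self.max_silence = max_silence   # seconds of silence tolerated
        self.last_beat = None

    def heartbeat(self, now: float):
        """Called periodically by the security sensor while it is running."""
        self.last_beat = now

    def check(self, now: float) -> str:
        if self.last_beat is None or now - self.last_beat > self.max_silence:
            return "ALARM: security sensor silent; possible tampering"
        return "ok"

dog = Watchdog(max_silence=10.0)
dog.heartbeat(now=0.0)
print(dog.check(now=5.0))    # ok: sensor reported recently
print(dog.check(now=30.0))   # alarm: heartbeats have stopped
```

Crucially, the watchdog must run in a domain the monitored administrator cannot reach (separate host, hardware, or privilege level); otherwise the deepest insider simply disables the checker of the checkers as well.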

3. Ashley Surdin. 2008. “San Francisco Case Shows Vulnerability of Data Networks,” Washington Post, August 11, p. A03. Available at <http://www.washingtonpost.com/wp-dyn/content/article/2008/08/10/AR2008081001802.html>. Accessed March 16, 2009.
