3 Information Systems Security

3.1 Introduction

DOD's increasing reliance on information technology in military operations increases the value of DOD's information infrastructure and information systems as a military target. Thus, for the United States to realize the benefits of increased use of C4I in the face of a clever and determined opponent, it must secure its C4I systems against attack. As noted in Chapter 2, the maximum benefit of C4I systems is derived from their interoperability and integration. That is, to operate effectively, C4I systems must be interconnected so that they can function as part of a larger "system of systems." These electronic interconnections multiply many-fold the opportunities for an adversary to attack them.

Maintaining the security of C4I systems is a problem with two dimensions. The first dimension is physical: protecting the computers and communications links, as well as command and control facilities, from being physically destroyed or jammed. For this task, the military has a great deal of relevant experience that it applies to systems in the field. Thus, the military knows to place key C4I nodes in well-protected areas, to put guards and other access control mechanisms in place to prevent sabotage, and so on. The military also knows how to design and use wireless communications links so that enemy jamming is less of a threat.

The second dimension, information systems security—the task of protecting the C4I systems connected to the communications network against an adversary's information attack on those systems—is a much more challenging task and a much more poorly understood area than physical security.1 Indeed, DOD systems are regularly attacked and penetrated,2 though most of these attacks fail to do damage. Recent exercises such as Eligible Receiver (Box 3.1) have demonstrated real and significant vulnerabilities in DOD C4I systems, calling into question their ability to perform properly when faced with a serious attack by a determined and skilled adversary. Such observations are unfortunately not new: a series of earlier reports has noted a history of insufficient or ineffective attention to C4I information systems security (Box 3.2).

The problem of protecting DOD C4I systems against attack is enormously complicated by the fact that DOD C4I systems and the networks to which they are connected are not independent of the U.S. national information infrastructure.3 Indeed, the line between the two is quite blurred, because many military systems make use of the civilian information infrastructure4 and because military and civilian systems are often interconnected. DOD is thus faced with the problem of relying on components of an infrastructure over which it does not have control. While the general principles of protecting networks described below apply to all military C4I systems, whether or not they are connected to civilian components, the policy issues related to DOD reliance on the national information infrastructure are not addressed in this report. Lastly, C4I systems are increasingly built upon commercial technologies and thus

1. Within the information technology industry, the term "information security" encompasses technical and procedural measures providing for confidentiality, authentication, data integrity, and non-repudiation, as well as for resistance to denial-of-service attacks. The committee understands that within many parts of DOD, the term "information security" does not have such broad connotations. Nevertheless, it believes that the lack of a broad interpretation of the term creates problems for DOD because it focuses DOD on too narrow a set of issues. Note that information systems security does not address issues related to the quality of data before it is entered into the C4I system. Obviously, such issues are important to the achievement of information superiority, but they are not the focus of this chapter.

2. In 1996, the General Accounting Office reported that DOD may have experienced 250,000 cyber-attacks in 1995 and that the number of cyber-attacks would increase in the future. Furthermore, the Defense Information Systems Agency estimated that "only about 1 in 50 attacks is actually detected and reported." For additional information, see General Accounting Office. 1996. Information Security: Computer Attacks at the Department of Defense Pose Increasing Risks, GAO/AIMD-96-84, General Accounting Office, Washington, D.C.

3. The U.S. national information infrastructure includes those information systems and networks that are used for all purposes, both military and civilian, whereas DOD's C4I systems are by definition used for military purposes.

4. More than 95 percent of U.S. military and intelligence community voice and data communications are carried over facilities owned by public carriers. (See Joint Security Commission. 1994. Redefining Security: A Report to the Secretary of Defense and the Director of Central Intelligence, February 28, Chapter 8.)
BOX 3.1 Eligible Receiver

Conducted in the summer of 1997 and directed by the Chairman of the Joint Chiefs of Staff, Eligible Receiver 97 was the first large-scale no-notice DOD exercise (a real, not tabletop, exercise) designed to test the ability of the United States to respond to an attack on the DOD and U.S. national infrastructure. The exercise involved a simulated attack against components of the national infrastructure (e.g., power and communications systems) and an actual "red team" attack against key defense information systems at the Pentagon, at defense support agencies, and in combatant commands. The attack on the national infrastructure was based on potential vulnerabilities, while the actual attack on defense systems exploited both actual and potential vulnerabilities. (The vulnerabilities exploited were common ones, including bad or easily guessed passwords; operating system deficiencies; improper system configuration control; sensitive site-related information posted on open Web pages; inadequate user awareness of operational security; and poor operator training.) All red team attacks were based on information and techniques derived from open, non-classified research, and no insider information was provided to the red team. Furthermore, the red team conducted extensive "electronic reconnaissance" before it executed its attacks.

The exercise demonstrated a high degree of interdependence between the defense and national information infrastructures. For example, the defense information infrastructure is extremely reliant on commercial computer and communication networks, and the public and private sectors often share common commercial software or systems. As a result, vulnerabilities demonstrated in DOD systems and procedures may be shared by others, and vulnerabilities in one area may allow exploitation in other areas.

The exercise revealed vulnerabilities in DOD information systems and deficiencies in the ability of the United States to respond effectively to a coordinated attack on the national infrastructure and information systems. Poor operations and information security practices provided many red team opportunities. In short, the exercise provided real evidence of network vulnerabilities.
BOX 3.2 Some Related Studies on Information Security

Computers at Risk: Safe Computing in the Information Age1 focused on approaches for "raising the bar" of computer and communications security so that all users—both civilian and military—would benefit, rather than just the users and handlers of classified government information. The report responded to prevailing conditions of limited awareness among the public, system developers, system operators, and policymakers. To help set and raise expectations about system security, the study recommended:

- Development and promulgation of a comprehensive set of generally accepted system security principles;
- Creation of a repository of data about incidents;
- Education in the practice, ethics, and engineering of secure systems; and
- Establishment of a new institution to implement these recommendations.

Computers at Risk also analyzed and suggested remedies for the failure of the marketplace to substantially increase the supply of security technology; export control criteria and procedures were named as one of many contributing factors. Observing that university-based research in computer security was at a "dangerously low level," the report identified broad areas in which research should be pursued.

The 1996 Report of the Defense Science Board Task Force on Information Warfare Defense2 focused on defending against cyber-threats and information warfare. The task force documented an increasing military dependence on networked information infrastructures, analyzed vulnerabilities of the current networked information infrastructure, discussed actual attacks on that infrastructure, and formulated a list of threats that has been discussed broadly within the DOD and elsewhere. The task force concluded that "there is a need for extraordinary action to deal with the present and emerging challenges of defending against possible information warfare attacks on facilities, information, information systems, and networks of the United States which [sic] would seriously affect the ability of the Department of Defense to carry out its assigned missions and functions."

Some of the task force recommendations answered organizational questions, e.g., where within DOD various information warfare defense functions might be placed, how to educate senior government and industry leaders about vulnerabilities and their implications, and how to determine current infrastructure dependencies and vulnerabilities. Other recommendations addressed short- and longer-term technical means for repelling attacks. The task force urged greater use of existing security technology, certain controversial encryption technology, and the construction of a minimum essential information infrastructure. The task force also noted the low levels of activity concerning computer security and survivable systems at universities, and suggested a research program for furthering the development of the following:
- System architectures that degrade gracefully and are resilient to failures or attacks directed at single components;
- Methods for modeling, monitoring, and managing large-scale distributed systems; and
- Tools and techniques for automated detection and analysis of localized or coordinated large-scale attacks, and tools and methods for predicting the anticipated performance of survivable distributed systems.

Trust in Cyberspace3 proposed a research agenda for building networked systems that are more robust, reducing software design problems, and developing mechanisms to protect against new types of attacks from unauthorized users, criminals, or terrorists. The report noted that much of today's security technology for operating systems is based on a model of computing centered on mainframe computers. Today, different security mechanisms are needed to protect against the new classes of attacks that become possible because of computer networks, the distribution of software using the Internet, and the significant use of commercial off-the-shelf (COTS) software. Furthermore, the report recommended a more pragmatic approach to security that incorporates add-on technologies, such as firewalls, and uses the concept of defense in depth, which requires independent mechanisms to isolate failures so that they do not cascade from one area of the system to another.

In the area of network design, the report noted a need for research to better understand how networked information systems operate, how their components work together, and how changes occur over time. Since a typical computer network is large and complex, few engineers are likely to understand the entire system. Better conceptual models of such systems would help operators grasp the structure of these networks and better understand the effects of actions they may take to fix problems. Approaches to designing secure networks built from commercially available software warrant attention. Improvements in testing techniques and other methods for finding errors are also likely to have considerable payoffs for enhancing assurance in networked systems. Finally, research is needed to deal with the major challenges that arise for network software developers because COTS components are used in the creation of most networked information systems. Indeed, today's networked information systems must be developed with limited access to significant pieces of the system and virtually no knowledge of how those pieces were developed.

1. Computer Science and Telecommunications Board, National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age, National Academy Press, Washington, D.C.

2. Defense Science Board. 1996. Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D), Office of the Under Secretary of Defense for Acquisition and Technology, Washington, D.C.

3. Computer Science and Telecommunications Board, National Research Council. 1999. Trust in Cyberspace, National Academy Press, Washington, D.C.
are coming to suffer from the same basic set of vulnerabilities that are observed in the commercial sector.

3.1.1 Vulnerabilities in Information Systems and Networks5

Information systems and networks are subject to four generic vulnerabilities.

The first is unauthorized access to data. By surreptitiously obtaining sensitive data (whether classified or unclassified) or by browsing a sensitive file stored on a C4I computer, an adversary might obtain information that could be used against the national security interests of the United States. Moreover, even more damage could occur if the unauthorized access were to go unnoticed, because it would then be impossible to take remedial action.

The second generic vulnerability is clandestine alteration of data. By altering data clandestinely, an adversary could destroy the confidence of a military planner or disrupt the execution of a plan. For example, alteration of logistics information could significantly disrupt deployments if troops or supplies were rerouted to the wrong destinations or supply requests were deleted.

A third generic vulnerability is identity fraud. By illicitly posing as a legitimate user, an adversary could issue false orders, make unauthorized commitments to military commanders seeking resources, or alter situational awareness databases to his advantage. For example, an adversary who obtained access to military payroll processing systems could have a profound effect on military morale. An enemy who overruns a friendly position and gains access to the information network of friendly forces may see classified information with tactical significance or be able to insert bad information into friendly tactical databases.

A fourth generic vulnerability is denial of service. By denying or delaying access to electronic services, an adversary could compromise operational planning and execution, especially for time-critical tasks. For example, attacks that made weather information systems unavailable could delay planning for military operations. Attacks that deny friendly forces the use of the Global Positioning System (e.g., through jamming) could cripple targeting of hostile forces and prevent friendly forces from knowing where they are. Denial of service is, in the view of many, the most serious vulnerability, because denial-of-service attacks are relatively easy to carry out and often require relatively little technical sophistication.

5. Adapted from Computer Science and Telecommunications Board, National Research Council. 1996. Cryptography's Role in Securing the Information Society, National Academy Press, Washington, D.C., Box 1.3.
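The clandestine-alteration vulnerability is commonly countered by keeping a cryptographic fingerprint of critical data and re-checking it later. The sketch below is a minimal illustration using only the Python standard library; the record contents and field names are invented for the example, and a real system would also have to protect the stored baseline itself.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest; any change to the data changes the digest."""
    return hashlib.sha256(data).hexdigest()

# Baseline recorded while the logistics entry is known to be good.
record = b"resupply: unit=3BDE dest=GRID-1234 qty=500"   # hypothetical record
baseline = fingerprint(record)

# Later, an adversary clandestinely reroutes the supplies.
tampered = b"resupply: unit=3BDE dest=GRID-9999 qty=500"

print(fingerprint(record) == baseline)    # True: unaltered data verifies
print(fingerprint(tampered) == baseline)  # False: the alteration is exposed
```

Note that a plain hash detects alteration only if the attacker cannot also recompute the baseline; keyed techniques such as digital signatures and message authentication codes, discussed later in this chapter, close that gap.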
Also, it is worth noting that many compromises of security result not from a successful direct attack on a particular security feature intended to guard against one of these vulnerabilities, but instead from "legitimate" use of designed-in features in ways that the designers of those features did not anticipate. Thus, defense must be approached at the system level rather than on a piecemeal basis. Lastly, non-technical vulnerabilities—such as the intentional misuse of privileges by authorized users—must be considered. For example, even perfect access controls and unbreakable encryption will not prevent a trusted insider from revealing the contents of a classified memorandum to unauthorized parties.

The types of attack faced by DOD C4I systems are much broader and potentially much more serious and intense than those usually faced by commercial (non-military) networked information systems. The reason is that attacks on DOD C4I systems that are sponsored or instigated by a foreign government can draw on virtually unlimited resources. Furthermore, perpetrators sponsored or supported by a foreign government are largely immune to retaliation or punishment through law enforcement channels, and are thus free to act virtually without constraint.
3.1.2 Security Requirements

Needs for information systems security and trust can be formulated in terms of several major requirements:

- Data confidentiality—controlling who gets to read information in order to keep sensitive information from being disclosed to unauthorized recipients, e.g., by preventing the disclosure of classified information to an adversary;

- Data integrity—assuring that information and programs are changed, altered, or modified only in a specified and authorized manner, e.g., by preventing an adversary from modifying orders given to combat units so as to shape battlefield events to his advantage;

- System availability—assuring that authorized users have continued and timely access to information and resources, e.g., by preventing an adversary from flooding a network with bogus traffic that delays legitimate traffic, such as that containing new orders, from being transmitted; and

- System configuration—assuring that the configuration of a system or a network is changed only in accordance with established security guidelines and only by authorized users, e.g., by detecting and reporting to higher authority the improper installation of a modem that could be used for remote access.
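To make the data integrity requirement above concrete, the sketch below shows how a shared-key message authentication code (a standard technique, chosen here for illustration rather than one mandated by the report) can detect unauthorized modification of an order in transit and confirm that it came from a holder of the key. The key and message contents are hypothetical; only the Python standard library is used.

```python
import hmac
import hashlib

SHARED_KEY = b"key-known-only-to-sender-and-receiver"  # hypothetical key material

def tag(message: bytes) -> str:
    """Keyed MAC over the message; without the key, a valid tag cannot be forged."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time comparison guards against timing attacks on the check itself."""
    return hmac.compare_digest(tag(message), received_tag)

order = b"advance to checkpoint bravo at 0600"
order_tag = tag(order)

print(verify(order, order_tag))                                   # True: intact and authentic
print(verify(b"advance to checkpoint delta at 0600", order_tag))  # False: altered en route
```

A design note: a MAC covers integrity and origin authentication but says nothing about confidentiality or availability; an adversary can still read or block the order, which is why these requirements are listed separately.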
In addition, there is a requirement that cuts across these four: accountability, that is, knowing who has had access to information or resources.

It is apparent from this listing that security means more than protecting information (e.g., classified information) from disclosure. In the DOD context, much of the information on which military operations depend (e.g., data related to personnel, payroll, logistics, and weather) is not classified. While its disclosure might not harm national security, alteration of it or a delay in transmitting it certainly could.6 In other cases, access to unclassified information can itself present a threat (e.g., access to personnel medical records could enable blackmail attempts).

Satisfying these security requirements requires a range of security services, including:

- Authentication—ascertaining that the identity claimed by a party is indeed the identity of that party. Authentication is generally based on what a party knows (e.g., a password), what a party has (e.g., a hardware computer-readable token), or what a party is (e.g., a fingerprint);

- Authorization—granting of permission to a party to perform a given action (or set of actions);

- Auditing—recording each operation that is invoked, along with the identity of the subject performing it and the object acted upon (as well as later examining these records); and

- Non-repudiation—the use of a digital signature procedure affirming both the integrity of a given message and the identity of its creator, to protect against a subsequent attempt to deny authenticity.

3.1.3 Role of Cryptography

It is important to understand what role cryptography plays in information systems security, and what aspects of security are not provided by cryptography. Cryptography provides a number of useful capabilities:

- Confidentiality—the characteristic that information is protected from disclosure, in transit during communications (so-called link encryption) and/or when stored in an information system. The security requirement of confidentiality is the one most directly met by cryptography;

- Authentication—cryptographically based assurance that an asserted identity is valid for a given person or computer system;

- Integrity check—cryptographically based assurance that a message or file has not been tampered with or altered; and

- Digital signature—assurance that a message or file was sent or created by a given person, based on the capabilities provided by mechanisms for authentication and integrity checks.

Cryptographic devices are important, for they can protect information in transit against unauthorized disclosure, but this is only a piece of the information systems security problem. The DOD mission also requires that information be protected while in storage and while being processed, and that it be protected not only against unauthorized disclosure but also against unauthorized modification and against attacks that seek to deny authorized users timely access to it.

Cryptography is a valuable tool for authentication as well as for verifying the integrity of information or programs.7 Cryptography alone does not provide availability (though because its use is fundamental to many information security measures, its widespread application can contribute to greater assurance of availability8). Nor does cryptography directly provide auditing services, though it can serve a useful role in authenticating the users whose actions are logged and in verifying the integrity of audit records. Cryptography does not address vulnerabilities due to faults in a system, including configuration errors and bugs in cryptographic programs. It does not address the many vulnerabilities in operating systems and applications.9 It certainly does not provide a solution to such problems as poor management and operational procedures or dishonest or suborned personnel. In summary, cryptography may well be a necessary component of these latter protections, but cryptography alone is not sufficient.10

6. Statements typically issued by DOD in the aftermath of an identified attack on its systems assure Congress and the public that "no classified information was disclosed." Such statements may be technically correct, but they do not address the important questions of whether military capabilities were compromised or whether, more broadly, a similar incident would have adverse implications in future, purposeful attack situations.

7. Cryptography can be used to generate digital signatures of messages, enabling the recipient of a message to assure himself that the message has not been altered (i.e., an after-the-fact check of message integrity that does not protect against modification itself). However, in the larger view, a properly encrypted communications channel is difficult to compromise in the first place, and in that sense cryptography can also help to prevent (rather than just detect) improper modifications of messages.

8. Widespread use of encryption (as distinct from cryptography more generally) can also result in reduced availability, as it hinders existing fault isolation and monitoring techniques. It is for this reason that today's network managers are often not enthusiastic about the deployment of encryption.

9. Recent analysis of advisories issued by the Computer Emergency Response Team at Carnegie Mellon University indicates that 85 percent of the problems they describe would not have been addressed by encryption. See Computer Science and Telecommunications Board, National Research Council. 1999. Trust in Cyberspace, National Academy Press, Washington, D.C.

10. It is worth noting that cryptography is often the source of failures of C4I systems to interoperate. That is, two C4I systems often fail to exchange data when operating in secure encrypted mode.

3.2 Major Challenges to Information Systems Security

3.2.1 The Asymmetry Between Defense and Offense

Information systems security is fundamentally a defensive function, and as such it suffers from an inherent asymmetry between cyber-attack and cyber-defense. Because cyber-attack can be conducted at the discretion of the attacker, while the defender must always be on guard, attack is often cheaper than defense, a point illustrated by the modest resources hackers have used to break into many unclassified DOD systems. Furthermore, for the defender to be realistically confident that his systems are secure, he must devote an enormous amount of effort to eliminating all security flaws that an attacker might exploit, while the attacker need find only one overlooked flaw. Finally, defensive measures must be developed and deployed, a process that takes time, while attackers generally exploit existing security holes. In short, a successful defender must be successful against all attacks, regardless of where an attack occurs, its modality, or its timing; a successful attacker has only to succeed in one place, at one time, with one technique.

It is this asymmetry that underlies the threat-countermeasure cycle: a countermeasure is developed and deployed against a known threat, which prompts the would-be attacker to develop another threat. As a result, the advantage lies heavily with the attacker until most potential vulnerabilities have been addressed (i.e., after many iterations of the cycle).11

11. This asymmetry is discussed in Computer Science and Telecommunications Board, National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age, National Academy Press, Washington, D.C.

3.2.2 Networked Systems

The utility of an information or C4I system generally increases as the number of other systems to which it is connected increases. On the other
hand, increasing the number of connections of a system to other systems also increases its vulnerability to attacks routed through those connections. The use of the Internet to connect C4I systems poses special vulnerabilities. Using the Internet is desirable because it provides lower information transport costs than the public switched telephone network or dedicated systems, but the Internet provides neither quality-of-service guarantees nor good isolation from potentially hostile parties.

3.2.3 Ease-of-Use Compromises

Compromises arise because information systems security measures ideally make a system impossible to use by someone who is not authorized to use it, whereas considerations of system functionality require that the system be easy for authorized users to use. From the perspective of an authorized user, a system with information systems security features should look like the same system without those features; in other words, security features provide no direct functional benefit to the authorized user. At the same time, measures taken to increase the information security of a system almost always make the system more difficult or cumbersome to use. The result in practice is that, all too often from a security standpoint, security features are simply omitted or not turned on in order to preserve the ease-of-use goal.

3.2.4 Perimeter Defense

Today's commercially available operating systems and networks offer only weak defensive mechanisms, and thus the components that make up a system are both vulnerable and hard to protect. One approach to protecting a network is then to allow systems on the network to communicate freely (i.e., without the benefit of security mechanisms protecting each individual network transaction) while allowing connection to the larger world outside the network only through carefully defined and well-protected gateways. The result is an arrangement that is "hard on the outside" against attack but "soft on the inside." Thus, it is today very common to see "enclaves" hiding from the Internet behind firewalls, but few defensive measures within the enclaves themselves.

A perimeter strategy is less expensive than an approach in which every system on a network is protected (a defense-in-depth strategy) because defensive efforts can be concentrated on just a few nodes (the gateways). But the major risk is that a single success in penetrating the perimeter compromises everything on the inside. Once the perimeter is
[Pages 141 through 167 are not included in this excerpt.]
…spected. These tools are not perfect, but their widespread use would be a significant improvement over current DOD practice.

A second aspect of configuration control is more difficult to achieve. Good configuration control also requires that every piece of executable code on every machine carry a digital signature that is periodically checked as part of configuration monitoring. Furthermore, code that cannot be signed (e.g., macros in a word processor) must be disabled until further development provides a way to sign it. Today, it is quite feasible to require the installation of virus-checking programs on all servers and to limit the ability of users to download or install their own software (though Java and ActiveX applets do complicate matters to some extent). Census software or regular audits can be used to ensure compliance with such policies. However, no tool known to the committee and available today undertakes this task systematically.

Note that it is not practical to secure every system in the current inventory. It is probably unrealistic to develop and maintain tools that thoroughly monitor the security configuration of more than two or three platforms (e.g., Windows NT and Sun UNIX). Over the long run, it may well be necessary to remove other systems from operational use, depending on the trade-offs between the lower costs of maintaining fewer systems and the greater security vulnerabilities arising from less diversity in the operating system base.

Authentication of human users is a second area in which DOD practices do not match the best practices found in the private sector. Passwords—ubiquitously used within the DOD as an authentication device—have many well-known weaknesses. An adversary can guess passwords, reuse a compromised password (e.g., one found in transit on a network by a "sniffer"), and compromise a password without the knowledge of its legitimate user.
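The configuration-control idea described above, keeping a signed manifest of every executable and re-checking it periodically, can be sketched as follows. To stay self-contained, this sketch uses an HMAC as a stand-in for a true public-key code signature, and the file names and contents are invented; a real deployment would sign the manifest with a private key whose public half ships with the monitoring tool.

```python
import hashlib
import hmac

SIGNING_KEY = b"stand-in-key"  # hypothetical; a real scheme uses public-key signatures

def sign_manifest(files: dict) -> tuple:
    """Hash each executable, then sign the manifest as a whole."""
    manifest = {name: hashlib.sha256(body).hexdigest() for name, body in files.items()}
    blob = "".join(f"{n}:{h};" for n, h in sorted(manifest.items())).encode()
    return manifest, hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()

def check(files: dict, manifest: dict, signature: str) -> list:
    """Return the names of files whose contents no longer match the signed manifest."""
    blob = "".join(f"{n}:{h};" for n, h in sorted(manifest.items())).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("manifest itself has been tampered with")
    return [n for n, body in files.items()
            if hashlib.sha256(body).hexdigest() != manifest.get(n)]

installed = {"login.exe": b"v1-binary", "sched.exe": b"v1-binary"}  # invented inventory
manifest, sig = sign_manifest(installed)

installed["login.exe"] = b"trojaned-binary"  # an unauthorized change
print(check(installed, manifest, sig))       # ['login.exe']
```

Signing the manifest as a whole, not just the individual files, matters: otherwise an attacker who replaces a binary could simply update its entry in the manifest.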
A hardware-based authentication mechanism suffers from these weaknesses to a much lesser extent.32 Because the mechanism is based on a physical piece of hardware, it cannot be duplicated freely (whereas passwords are duplicated whenever one person tells another a password). The hardware can be designed to be tamper-resistant, which increases the difficulty of duplicating it. Furthermore, because persistent (i.e., long-lasting) identifying information is never transmitted outside the piece of hardware, attacks to which password authentication is vulnerable (e.g., sniffing and playback attacks) are essentially impossible. Hardware-based authentication is a highly effective method for authenticating communications originating from individuals. It also has particular value in the protection of remote access points (Box 3.5).

Biometric identifiers complement hardware-based authentication devices. Because biometric information is closely tied to the user, biometric identifiers serve a function similar to that of the personal identification number (PIN) used to activate such a device. Biometric identifiers are based on distinctive physical characteristics of an individual (e.g., a fingerprint, a voiceprint, a retinal scan); biometric authentication works by comparing a real-time reading of a biometric signature to a previously stored signature. Biometric authentication is a newer technology than hardware-based authentication; as such, it is less well developed (e.g., slower, less accurate) and more expensive, even as it promises to improve security beyond that afforded by PINs.

32. The device (e.g., a personal computer card) is enabled by a short password, usually called a PIN, entered by the user directly into the device. The device then engages in a secure and unforgeable cryptographic protocol with the system demanding the authentication; this protocol is much stronger than any password could be. The use of the password is strictly local to the device and does not suffer from the well-known problems of passwords on networks, such as "sniffing" and playback attacks. This authentication depends on what you have (the device) together with what you know (the PIN).

BOX 3.5 Protection of Remote Access Points

Remote access points pose particular vulnerabilities. A hostile user attempting to gain access to a computer on the premises of a U.S. command post, for example, must first gain physical entry to the facility. He also runs the risk of being challenged face to face in his use of the system. Thus, it makes far more sense for an adversary to seek access remotely, where the risk of physical challenge is essentially zero. Strong authentication—whether hardware-based or biometric—is thus particularly important for protecting remote access points that might be used by individuals with home or portable computers.
Some organizations (not necessarily within the DOD) protect their remote access points by using dial-back procedures1 or by embedding the remote access telephone number in the software employed by remote users to establish a connection. Neither approach is adequate for protecting remote access points (e.g., dial-back security is significantly weakened in the face of a threat that is capable of penetrating a telephone switch, such as a competent military information warfare group), and their use does not substitute for strong authentication techniques.

1. In a dial-back procedure, a remote user dials a specified telephone number to access the system. The system then hangs up and checks the caller's number against a directory of approved remote access telephone numbers. If the number matches an approved number, the system dials the user back and restores the connection.
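The challenge-response protocol described in note 32 can be illustrated in miniature. This toy model uses a shared-secret HMAC in place of the public-key protocols real tokens employ; the PIN check is local to the device, and the long-term secret never crosses the network, so a sniffed response cannot be replayed against a fresh challenge.

```python
import hashlib
import hmac
import secrets

class Token:
    """Toy model of a hardware authentication device.

    The long-term secret never leaves the object, and the PIN is
    checked locally, mirroring how a real token is unlocked.  Real
    devices add tamper-resistant hardware and public-key protocols.
    """
    def __init__(self, secret: bytes, pin: str):
        self._secret, self._pin = secret, pin
        self._unlocked = False

    def unlock(self, pin: str) -> bool:
        self._unlocked = hmac.compare_digest(pin, self._pin)
        return self._unlocked

    def respond(self, challenge: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("enter PIN first")
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, secret: bytes):
        self._secret = secret

    def issue_challenge(self) -> bytes:
        return secrets.token_bytes(16)  # fresh nonce defeats replay

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(32)
token, server = Token(secret, pin="4821"), Server(secret)
token.unlock("4821")
c = server.issue_challenge()
print(server.verify(c, token.respond(c)))       # -> True
print(server.verify(server.issue_challenge(),   # replayed response fails
                    token.respond(c)))          # -> False
```

Because each challenge is a fresh random value, an eavesdropper who captures one response gains nothing useful for the next session.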
Hardware-based authentication can also be used to authenticate all computer-to-computer communications (e.g., those using security protocols such as Secure Sockets Layer or IPsec). In this way, all communications carried in the network can be authenticated, not just those from outside a security perimeter. "Mutual suspicion" requiring mutual authentication among peers is an important security measure in any network.

The potential value of strong authentication mechanisms is more fully exploited when the authentication is combined with mechanisms such as IPsec or TCP wrappers that protect the host machines against suspicious external connections33 and with fine-grained authorization for resource usage. For example, a given user may be allowed to read and write some databases, but only to read others. Access privileges may be limited in time as well (e.g., a person brought in on a temporary basis to work on a particular issue may have privileges revoked when he or she stops working on that issue). In other words, the network administrator should be able to establish groups of users that are authorized to participate in particular missions, with the network configured to allow only such interactions as are necessary to accomplish those missions. Similarly, the network administrator should be able to place restrictions on the kinds of machine-to-machine interactions allowable on the network. This requires that the administrator have tools for establishing groups of machines allowed to interact in certain ways. Some network management and configuration systems allow configuration control that would support fine-grained access controls, but most do not make it easy for a network administrator to quickly establish and revoke these controls.

Finally, today's trend toward "single login" presents a dangerous vulnerability.34 When a perimeter defense is breached, an adversary can roam the entire network without ever being challenged again to authenticate himself. A more secure arrangement would be for the network to support remote interrogation of the hardware authentication device by every system the user attempts to access, even though the user need only enter the PIN once to activate the device. In this way, every request to a computer, no matter where it is located on the network, is properly supported by strong evidence of the machine and the individual responsible for the request, allowing this evidence to be checked against the rules that determine who is allowed access to what resources.

33. TCP wrappers protect individual server machines, whereas firewalls protect entire networks and groups of machines. Wrappers are programs that intercept communications from a client to a server and perform a function on the service request before passing it on to the service program. Such functions can include security checking. For example, an organization may install a wrapper around the patient record server that physicians use to access patient information from home. The wrapper could be configured to check connecting Internet addresses against a predefined approved list and to record the date and time of the connection for later auditing. Use of wrapper programs in place of firewalls means that all accessible server machines must be configured with wrapper(s) in front of network services, and the wrappers must be properly maintained, monitored, and managed. See Wietse Venema. 1992. "TCP WRAPPER: Network Monitoring, Access Control and Booby Traps," pp. 85-92 in Proceedings of the Third Usenix UNIX Security Symposium, Baltimore, Md., September.

34. "Single login" refers to the need of a user to log in (and authenticate himself) only once per session, regardless of how many systems he accesses during that session.

Implementing this recommendation is not easy, but it is well within the state of the art. A reader for a hardware authentication device in every keyboard and in every laptop (via personal computer-card slots) is very practical today.35 In principle, even smart "dog tags" could be used as the platform for a hardware authentication device. However, the most difficult issue is likely to be the establishment of the public-key infrastructure for DOD upon which these authentication devices will depend. Biometric authentication devices are not practical for universal deployment (e.g., for soldiers in the field), but they may be useful in more office-like environments (e.g., command centers).

Since DOD increasingly relies on commercial technology for the components of C4I systems, engagement of commercial support for authentication is important to making it affordable. It should be possible to enlist strong industry support for a program to make better authentication more affordable if the program is properly conceived and marketed. Many commercial customers have very similar requirements, which are poorly met by existing security products. Thus, from a practical standpoint, the DOD's needs with respect to authentication are very similar to commercial needs.
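As an illustration of the fine-grained, time-limited authorization discussed above, the following sketch models mission-oriented grants with per-resource read/write privileges and automatic expiry. The user and database names are hypothetical, and a real implementation would enforce such rules network-wide rather than in an in-memory table.

```python
import datetime

class AccessPolicy:
    """Mission-oriented access control: (user, resource) -> operations,
    with an optional expiry so temporary privileges lapse automatically."""
    def __init__(self):
        self._grants = {}   # (user, resource) -> (ops, expires or None)

    def grant(self, user, resource, ops, expires=None):
        self._grants[(user, resource)] = (set(ops), expires)

    def revoke(self, user, resource):
        self._grants.pop((user, resource), None)

    def allowed(self, user, resource, op, now=None):
        now = now or datetime.datetime.now()
        ops, expires = self._grants.get((user, resource), (set(), None))
        if expires is not None and now > expires:
            return False        # temporary privilege has lapsed
        return op in ops

policy = AccessPolicy()
policy.grant("analyst", "logistics_db", {"read", "write"})
policy.grant("analyst", "intel_db", {"read"})                 # read-only
policy.grant("contractor", "intel_db", {"read"},
             expires=datetime.datetime(2025, 1, 1))           # time-limited

print(policy.allowed("analyst", "logistics_db", "write"))     # -> True
print(policy.allowed("analyst", "intel_db", "write"))         # -> False
print(policy.allowed("contractor", "intel_db", "read",
                     now=datetime.datetime(2026, 1, 1)))      # -> False
```

The same table-driven approach extends naturally to groups of machines: keys become (machine group, machine group) pairs and operations become allowed protocols.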
Because this recommendation calls for DOD-wide action with respect to C4I systems, the Assistant Secretary of Defense for C3I must promulgate appropriate policy for the department. Information security policy is within the purview of the DOD's Chief Information Officer, who today is also the Assistant Secretary of Defense for C3I. Finally, given its history of involvement with information systems security, the National Security Agency is probably the appropriate body to identify the best available authentication mechanisms and configuration tools.

35. The Fortezza card was an attempt by the DOD in the mid-1990s to promote hardware-based authentication. While the Fortezza program itself has not enjoyed the success that was once hoped for it, the fact remains that one of the capabilities Fortezza provides—widespread use of hardware-based authentication—is likely to prove a valuable security tool.
Recommendation S-5: The Under Secretary of Defense for Acquisition and Technology and the Assistant Secretary of Defense for C3I should direct the appropriate defense agencies to develop new tools for information security.

Aligning DOD information security practice with the best practices found in industry today would be a major step forward in the DOD information security posture, but it will not be sufficient. Given the stakes for national security, DOD should feel an obligation to go further still. Going further will require research and development in many areas.

For example, tools for systematic code verification to be used in configuration monitoring are an area in which DOD-sponsored research and development could have high payoff in both the military and civilian worlds, as organizations in both face the same problem of hostile code.

A second example involves fine-grained authorization for resource usage. Some network management and configuration systems allow configuration control that would support fine-grained access controls, but most do not make it easy for a network administrator to quickly establish and revoke these controls. DOD-sponsored research and development in this area could have high payoff as well.

A third area for research and development is tools that can be used in an adaptive defense of C4I systems. Adaptive defenses change the configuration of the defense in response to particular types of attack. In much the same way that an automatic teller machine eats an ATM card if the wrong PIN is entered more than three times, an "adaptive" defense that detects an attack being undertaken through a given channel can deny access to that channel for the attacker, thus forcing him to expend the time and resources to find a different channel.
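The ATM-style adaptive response just described can be sketched in a few lines: after a threshold of failed attempts on a channel (three here, purely illustrative), the defense denies that channel outright, forcing the attacker to spend time finding another.

```python
from collections import defaultdict

class AdaptiveDefense:
    """Deny a channel after repeated failed attempts, like an ATM
    retaining a card after repeated wrong PINs (threshold illustrative)."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = defaultdict(int)
        self.blocked = set()

    def attempt(self, channel: str, success: bool) -> str:
        if channel in self.blocked:
            return "denied"
        if success:
            self.failures[channel] = 0
            return "granted"
        self.failures[channel] += 1
        if self.failures[channel] >= self.threshold:
            self.blocked.add(channel)     # channel is now shut to the attacker
            return "denied"
        return "rejected"

d = AdaptiveDefense()
print([d.attempt("dialup-1", False) for _ in range(4)])
# -> ['rejected', 'rejected', 'denied', 'denied']
print(d.attempt("vpn-2", True))  # other channels unaffected -> granted
```

A production system would also age out old failures and alert an operator when a channel is blocked, so that legitimate users locked out by an attacker's probing can be restored.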
More sophisticated forms of adaptive defense might call for "luring" the attacker into a safe area of the system and manipulating his cyber-environment to waste his time and to feed him misleading information. For example, certain known security holes can be left unfixed so that an attacker has relatively easy access to the system through those holes; in fact, however, the information and system resources accessible through those holes are structured in such a way that they look authentic while providing nothing useful to the attacker. Deceptive defenses can force the attacker to waste time, giving the defense a greater opportunity to monitor the attacker and/or track the attacker's location and to take appropriate action. On the other hand, the long-term success of this approach presumes that the attacker cannot distinguish the holes left open deliberately from the ones unintentionally left open and that the defenders have the discipline to monitor the former; thus, such "deceptive" techniques cannot be regarded as anything more than one component of effective cyber-defenses.

A fourth area for research and development is biometrics. The basic technology and underlying premises of biometrics have been validated, but biometric authentication mechanisms are still sometimes too slow and too inaccurate for convenient use. For example, they often take longer to operate than typing a password, and they sometimes result in false negatives (i.e., they reject a valid user's fingerprint or retinal scan). Broad user acceptance will depend both on more convenient-to-use mechanisms and on the integration of biometrics into the man-machine interface, such as a fingerprint reader in a mouse or keyboard.

Finally, research and development on active defenses is needed. Active defenses make attackers pay a price for attacking (whether or not they are successful), thus dissuading a potential attacker and offering deterrence against attack in the first place (an idea that raises policy issues as important as those associated with Recommendation S-7, below). Passive information systems security is extremely important, but against a determined opponent with the time and resources to conduct an unlimited number of penetration attempts against a passive, non-responding target, the attacker will inevitably succeed. This area for research and development raises important policy issues that are discussed below. But the fact remains that even if policy allowed the possibility of retaliation, the tools to support such retaliation are wholly inadequate. Instruments to support a policy-authorized retaliation are needed in two areas:

Identification of an attacker. Before any retaliatory action can be undertaken, the attacker must be identified in a reasonable time scale with a degree of confidence commensurate with the severity of that action.
Today, the identification of an attacker is an enormously time-consuming task—even when the identification is ultimately successful, it can take weeks to identify an attacker. And often considerable uncertainty remains about the actual identity of the attacker, who may be an individual using an institution's computer without the knowledge or permission of that institution. Note also that better tools for the accurate and rapid location of cyber-attackers would greatly assist law enforcement authorities in apprehending and prosecuting them.

Striking back against an attacker. Once an attacker is identified, tools are needed to attack him or her. Many of the techniques employed against friendly systems can be used against an attacker as well, but all of these techniques are directed against computer systems rather than individual perpetrators. Furthermore, using these techniques may well be quite cumbersome for friendly forces (just as they are for attackers). However, the most basic problem in striking back is that, from a technical perspective, not enough is known about what retaliation and active defenses might be.

Other possible research and development areas include secure composition of secure systems and components to support ad hoc (e.g., coalition) activities; better ways to configure and manage security features; generation of useful security specifications from programs; more robust and secure architectures for networking (e.g., requiring trackable, certificated authentication on each packet, along with a network fabric that denies transit to unauthenticatable packets); and automatic determination of classification from content.

Many agencies within DOD can conduct research and development for better information security tools, but a high-level mandate for such activity would help increase the priority of work in this area for those agencies. The National Security Agency and the Defense Advanced Research Projects Agency are the most likely agencies to develop better tools for information systems security. As noted above, better tools developed for DOD use are also likely to have considerable application in the commercial sector, a fact that places a high premium on conducting research and development in this area in an unclassified manner. Note that Trust in Cyberspace also outlines a closely related research agenda.36

Recommendation S-6: The Chairman of the Joint Chiefs of Staff and the service Secretaries should direct that a significant portion of all tests and exercises involving DOD C4I systems be conducted under the assumption that they are connected to a compromised network.

Because both threat and technology evolve rapidly, perfect information systems security will never be achieved.
Prudence thus requires C4I developers and operators to assume some non-zero probability that any given system will be successfully attacked, that some DOD systems have been successfully attacked, and that some C4I systems are compromised at any given moment. (A "compromised" system or network is one that an adversary has penetrated or disrupted in some way, so that it is to some extent no longer capable of serving all of the functions that it could serve when it was not compromised.) This pessimistic assumption guards against the hubris of assumed perfection. Even so, most of the C4I systems connected to compromised components should be able to function effectively despite local security failures.

36. Computer Science and Telecommunications Board, National Research Council. 1999. Trust in Cyberspace, National Academy Press, Washington, D.C.
C4I systems should be designed and developed so that their functions and connectivity are easy to reconfigure under different levels of information threat. Critical functions must be identified in advance for different levels of threat (i.e., at different "INFOCONs") so that responses can occur promptly in a planned and orderly fashion. Note also that the nature of a mission-critical function may vary during the course of a battle.

C4I systems should be tested and exercised routinely under the assumption that they are connected to a compromised network. The capability of U.S. forces against an adversary is strongly dependent on the training they receive, and so C4I tiger teams playing in exercises involving C4I (i.e., every exercise) should be able to operate in a largely unconstrained mode (i.e., subject to some but not many limits). The lack of constraint is intended to stress friendly forces in much the same way that very well trained opposition forces, such as those at the Army's National Training Center, the Air Force's Air Warfare Center, and the Navy's Fighter Weapons School, stress units that exercise there. However, because the activities of entirely unconstrained tiger teams may prevent a test or exercise from meeting other training goals, some limits are necessary. (The portion of the test or exercise subject to the assumption of a compromised network should be expected to increase, and the limits on tiger team activities to relax, as friendly forces develop more proficiency in coping with information threats.) With tiger teams operating in this mode, every battlefield C4I user could be made conscious that his information may have been manipulated and that at any instant it might be denied.

Note that assuming a compromised network does not necessarily mean that the network cannot be used—only that it must be used with caution.
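The pre-planned, per-threat-level function sets described above might be modeled as a simple table-driven reconfiguration. The INFOCON names and the services enabled at each level below are illustrative inventions, not actual DOD definitions; the point is that the mapping is decided in advance, so a change of threat level takes effect promptly rather than being improvised.

```python
# Illustrative mapping only: real INFOCON definitions, and the services
# enabled at each level, would be established by doctrine.
INFOCON_PROFILES = {
    "NORMAL": {"email", "logistics", "targeting", "video_conf"},
    "ALPHA":  {"email", "logistics", "targeting"},
    "BRAVO":  {"logistics", "targeting"},
    "DELTA":  {"targeting"},   # only the mission-critical function remains
}

class C4INode:
    def __init__(self):
        self.enabled = set(INFOCON_PROFILES["NORMAL"])

    def set_infocon(self, level: str):
        """Reconfigure promptly to the pre-planned profile for this level."""
        self.enabled = set(INFOCON_PROFILES[level])

    def available(self, service: str) -> bool:
        return service in self.enabled

node = C4INode()
print(node.available("video_conf"))  # -> True
node.set_infocon("BRAVO")
print(node.available("video_conf"))  # -> False
print(node.available("targeting"))   # -> True (critical function preserved)
```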
For example, the network can be continually monitored for indications of anomalous activity, even if the network is nominally regarded as "secure." Network configurations can be periodically altered to invalidate information that the enemy may have been able to collect about the network. These steps would be analogous to periodic changes in the tactical call signs used to identify units, an operational security measure taken to frustrate (or at least complicate the efforts of) enemy eavesdroppers.

Doctrine should account for the possibility that a tactical network has been compromised or penetrated as well. In addition to continually taking preventive measures even when the network is not known to have been compromised, commanders must have a range of useful responses when a compromise or penetration is detected. This premise differs from today's operational choices, which are either to stay connected to everything or to disconnect and have nothing, with added exhortations to "be careful" when intrusions are detected. Finally, units must know how they will function when the only C4I available to them is unsecured voice communications. In short, it is useful for the U.S. military to be trained in how to use its C4I systems and networks even if they have been compromised, as well as for the possibility that they will be largely unavailable for use at all.

Because this recommendation affects all operational deployments and exercises, both service and joint, a number of offices must take action. The Chairman of the Joint Chiefs of Staff should promulgate a directive that calls for such a policy in all joint exercises and operational deployments. And because many C4I systems are owned, operated, and controlled by the services, the services—perhaps through their training and doctrine commands—should establish doctrinal precepts for commanders to follow in implementing this policy.

Recommendation S-7: The Secretary of Defense should take the lead in explaining the severe consequences for U.S. military capabilities that arise from a purely passive defense of its C4I infrastructure and in exploring policy options to respond to these challenges.

Because a purely passive defense will ultimately fail against a determined attacker who does not pay a price for unsuccessful attacks, a defensive posture that allows for the possibility of inflicting pain on the attacker would bolster the security of U.S. C4I systems.37 Today, a cyber-attack on U.S. C4I systems is regarded primarily as a matter for law enforcement, which has the lead responsibility for apprehending and prosecuting the attacker. DOD personnel may provide technical assistance in locating and identifying the attacker, but normally DOD has no role beyond that. If an attack is known with certainty to emanate from a foreign power (a very difficult determination to make, to be sure) and to be undertaken by that foreign power, the act can be regarded as a matter of national security. If so, then a right to self-defense provides legal justification for retaliation.
If the National Command Authorities (i.e., the President and the Secretary of Defense, or their duly authorized deputies or successors) decide that retaliation is appropriate, the remaining questions are those of form (e.g., physical or cyber) and severity (how hard to hit back). Under such circumstances, DOD would obviously play a role. However, DOD is legally prohibited from taking action beyond identification of a cyber-attacker on its own initiative, even though the ability of the United States to defend itself against external threats is compromised by attacks on its C4I infrastructure—a compromise whose severity will only grow as the U.S. military becomes more dependent on the leverage provided by C4I.

37. DOD is not alone in having to deal with the difficulties of a purely passive defense. But given their importance to national security, the inevitable consequences of passive defense have immense significance for DOD.

From a national security perspective, the geographical origin of the attack matters far less than the fact that it is military C4I assets that are being attacked. Thus, the military desirability of cyber-retaliation to protect the nation's ability to defend itself should be clear. But the notion of cyber-retaliation raises many legal and policy issues, including issues related to constitutional law, law enforcement, and civil liberties. The legal issues are most significant in peacetime—if the United States were actively engaged in conflict, the restraints on DOD action might well be relaxed. But the boundary between peacetime and conflict is unclear, especially if overt military hostilities (i.e., force on force) have not yet broken out but an adversary is probing in preparation for an attack. This period poses the most peril: DOD is constrained because it is "officially" peacetime, and yet an adversary may be gaining valuable advantage through its probes.

As a first step, DOD should review the legal limits on its ability to defend itself and its C4I infrastructure against information attack.38 After such a review, DOD should take the lead in advocating changes in national policy (including legislation, if necessary) that amend the current "rules of engagement" specifying the circumstances under which force is an appropriate response to a cyber-attack against its C4I infrastructure.
These rules of engagement would explicitly specify the nature of the force that could be committed to retaliation (e.g., physical force, cyber-attack), the damage that such force should seek to inflict, the authorizations needed for various types of response, the degrees of certainty needed for various levels of attack, the issues that would need to be considered in any response (e.g., whether the benefits of exploiting the source of an attack outweigh the costs of allowing that attack to continue), and the oversight necessary to ensure that any retaliation falls within all the parameters specified in the relevant legal authorizations.

The committee is not advocating a change in national policy with respect to cyber-retaliation. Indeed, it was not constituted to address the larger questions of national policy, i.e., whether other national goals do or do not outweigh the narrower national security interest in protecting the military information infrastructure, and the committee is explicitly silent on the question of whether DOD should be given the authority (even if constrained and limited to specific types and circumstances) to retaliate against attackers of its C4I infrastructure. But the committee does believe that DOD should take the lead in explaining the severe consequences for its military capabilities that arise from a purely passive defense, that DOD should support changes in policy that might enable it, perhaps in concert with law enforcement agencies, to take a less passive stance, and that a national debate should begin about the pros and cons of passive versus active defense.

38. Press reports indicate that DOD authorities are "struggling to define new rules for deciding when to launch cyber attacks, who should authorize and conduct them and where they fit into an overall defense strategy." See Bradley Graham, "Authorities Struggle with Cyberwar Rules," Washington Post, July 8, 1998, p. A1.

The public policy implications of this recommendation are profound enough that they call for involvement at the highest levels of the DOD—the active involvement of the Secretary of Defense is necessary to credibly describe the implications of passive defense for C4I systems in cyberspace. To whom should DOD explain these matters? Apart from the interested public, the Congress plays a special role, because actual changes in national policy that enable a less passive role for DOD will certainly require legislation. Such legislation would be highly controversial, would have many stakeholders, and would be reasonable to consider (let alone adopt) only after a thorough national debate on the subject.