Computers at Risk: Safe Computing in the Information Age
Copyright © National Academy of Sciences. All rights reserved.

1
Overview and Recommendations

We are at risk. Increasingly, America depends on computers. They control power delivery, communications, aviation, and financial services. They are used to store vital information, from medical records to business plans to criminal records. Although we trust them, they are vulnerable—to the effects of poor design and insufficient quality control, to accident, and perhaps most alarmingly, to deliberate attack. The modern thief can steal more with a computer than with a gun. Tomorrow's terrorist may be able to do more damage with a keyboard than with a bomb.

To date, we have been remarkably lucky. Yes, there has been theft of money and information, although how much has been stolen is impossible to know.1 Yes, lives have been lost because of computer errors. Yes, computer failures have disrupted communication and financial systems. But, as far as we can tell, there has been no successful systematic attempt to subvert any of our critical computing systems. Unfortunately, there is reason to believe that our luck will soon run out. Thus far we have relied on the absence of malicious people who are both capable and motivated. We can no longer do so. We must instead attempt to build computer systems that are secure and trustworthy.

In this report, the committee considers the degree to which a computer system and the information it holds can be protected and preserved. This requirement, which is referred to here as computer security, is a broad concept; security can be compromised by bad system design, imperfect implementation, weak administration of procedures, or through accidents, which can facilitate attacks. Of course, if we are to trust our systems, they must survive accidents as





well as attack. Security supports overall trustworthiness, and vice versa.

COMPUTER SYSTEM SECURITY CONCERNS

Security is a concern of organizations with assets that are controlled by computer systems. By accessing or altering data, an attacker can steal tangible assets or lead an organization to take actions it would not otherwise take. By merely examining data, an attacker can gain a competitive advantage, without the owner of the data being any the wiser.

Computer security is also a concern of individuals, including many who neither use nor possess computer systems (Box 1.1). If data can be accessed improperly, or if systems lack adequate safeguards, harm may come not only to the owner of the data, but also to those to whom the data refers. The volume and nature of computerized databases mean that most of us run the risk of having our privacy violated in serious ways. This is particularly worrisome, since those in a position to protect our privacy may have little incentive to do so (Turn, 1990).

The threats to U.S. computer systems are international, and sometimes also political. The international nature of military and intelligence threats has always been recognized and addressed by the U.S. government. But a broader international threat to U.S. information resources is emerging with the proliferation of international computer networking—involving systems for researchers, companies, and other organizations and individuals—and a shift from conventional military conflict to economic competition.2 The concentration of information and economic activity in computer systems makes those systems an attractive target to hostile entities. This prospect raises questions about the intersection of economic and national security interests and the design of appropriate security strategies for the public and private sectors. Finally, politically motivated attacks may also target a new class of system that is neither commercial nor military: computerized voting systems.3

Outside of the government, attention to computer and communications security has been episodic and fragmented. It has grown by spurts in response to highly publicized events, such as the politically motivated attacks on computer centers in the 1960s and 1970s and the more recent rash of computer viruses and penetrations of networked computer systems.4 Commercial organizations have typically concentrated on abuses by individuals authorized to use their systems, which generally have a security level that prevents only the most straightforward of attacks.

BOX 1.1 SAMPLER OF COMPUTER SYSTEM PROBLEMS: EVIDENCE OF INADEQUATE TRUSTWORTHINESS

Failures of system reliability, safety, or security are increasingly serious—and apparently increasing in number. Notable are the following:

- A $259 million Volkswagen currency exchange scam involving phony transactions;
- The nearly successful attempt to use thousands of phony Bank of America automatic teller machine cards fabricated with personal identification numbers pirated from an on-line database;
- An almost-successful $15.2 million Pennsylvania Lottery fraud attempt in which the database of unclaimed ticket numbers was used in the fabrication of a ticket about to expire; and
- Thousands of reported virus attacks and hundreds of different viruses identified (e.g., Stoned, Devil's Dance, 1260, Jerusalem, Yankee Doodle, Pakistani Brain, Icelandic-2, Ping Pong, December 24, to cite just a few).

Penetrations and disruptions of communication systems appear to be increasing:

- A software design error freezing much of AT&T's long-distance network;
- The German Chaos Computer Club break-ins to the National Aeronautics and Space Administration's Space Physics Analysis Network;
- The West German Wily Hacker attacks (involving international espionage) on Lawrence Berkeley Laboratory;
- The Internet worm incident in which several thousand computers were penetrated; and
- Several takeovers of TV satellite up-links.

Individual privacy has been compromised. For example, deficient security measures at major credit agencies have allowed browsing and surreptitious assignment of thousands of individuals' credit histories to others.

Health care has been jeopardized by inadequate system quality as well as by breaches of security:

- An error in the computer software controlling a radiation therapy machine, a Therac 25 linear accelerator, resulted in at least three separate patient deaths when doses were administered that were more than 100 times the typical treatment dose.
- A Michigan hospital reported that its patient information had been scrambled or altered by a virus that came with a vendor's image display system.
- A Cleveland man allegedly mailed over 26,000 virus-infected diskettes with AIDS prevention information to hospitals, businesses, and government agencies worldwide.

NOTE: None of the cases cited above involved any classified data. References to all of them can be found in Neumann (1989).

While weak computer security obviously affects direct and indirect users of computer systems, it may have less obvious but still important impacts on vendors of computer systems. The role of security and trust in product development and marketing should grow, and not only because it is in the public interest. In particular, failure to supply appropriate security may put vendors at a serious competitive disadvantage. Even though U.S. firms lead overall in the computer and communications market, several European governments are now promoting product evaluation schemes and standards that integrate other elements of trust, notably safety, with security. These developments may make it difficult for American industry to sell products in the European market.5

Although the committee focuses on technical, commercial, and related social concerns, it recognizes that there are a number of related legal issues, notably those associated with the investigation and prosecution of computer crimes, that are outside of its scope. It is important to balance technical and nontechnical approaches to enhancing system security and trust. Accordingly, the committee is concerned that the development of legislation and case law is being outpaced by the growth of technology and changes in our society. In particular, although law can be used to encourage good practice, it is difficult to match law to the circumstances of computer system use. Nevertheless, attacks on computer and communication systems are coming to be seen as punishable and often criminal acts (Hollinger and Lanza-Kaduce, 1988) within countries, and there is a movement toward international coordination of investigation and prosecution. However, there is by no means a consensus about what uses of computers are legitimate and socially acceptable. Free speech questions have been raised in connection with recent criminal investigations into dissemination of certain computer-related information.6 There are also controversies surrounding the privacy impacts of new and proposed computer systems, including some proposed security safeguards. Disagreement on these fundamental questions exists not only within society at large but also within the community of computer specialists.7

TRENDS—THE GROWING POTENTIAL FOR SYSTEM ABUSE

Overall, emerging trends, combined with the spread of relevant expertise and access within the country and throughout the world, point to growth in both the level and the sophistication of threats to major U.S. computer and communications systems. There is reason to believe that we are at a discontinuity: with respect to computer

security, the past is not a good predictor of the future. Several trends underlie this assessment:

- Networking and embedded systems are proliferating, radically changing the installed base of computer systems and system applications.8
- Computers have become such an integral part of American business that computer-related risks cannot be separated from general business risks.
- The widespread use of databases containing information of a highly personal nature, for example, medical and credit records, leaves the privacy of individuals at risk.
- The increased trust placed in computers used in safety-critical applications (e.g., medical instruments) increases the likelihood that accidents or attacks on computer systems can cost people their lives.
- The ability to use and abuse computer systems is becoming widespread. In many instances (e.g., design of computer viruses, penetration of communications systems, credit card system fraud) attacks are becoming more sophisticated.
- The international political environment is unstable, raising questions about the potential for transnational attacks at a time when international corporate, research, and other computer networks are growing.

THE NEED TO RESPOND

Use of computer systems in circumstances in which we must trust them is widespread and growing. But the trends identified above suggest that whatever trust was justified in the past will not be justified in the future unless action is taken now. (Box 1.2 illustrates how changing circumstances can profoundly alter the effective trustworthiness of a system designed with a given set of expectations about the world.) Computer system security and trustworthiness must become higher priorities for system developers and vendors, system administrators, general management, system users, educators, government, and the public at large.

This observation that we are at a discontinuity is key to understanding the focus and tone of this report. In a time of slow change, prudent practice may suggest that it is reasonable to wait for explicit evidence of a threat before developing a response. Such thinking is widespread in the commercial community, where it is hard to justify expenditures based on speculation. However, in this period of rapid change, significant damage can occur if one waits to develop a countermeasure until after an attack is manifest. On the one hand, it may

BOX 1.2 PERSONAL COMPUTERS: SECURITY DETERIORATES WITH CIRCUMSTANCES

Personal computers (PCs), such as the popular IBM PC running the MS-DOS operating system, or those compatible with it, illustrate that what was once secure may no longer be. Security was not a major consideration for developers and users of early PCs. Data was stored on floppy disks that could be locked up if necessary, and information stored in volatile memory disappeared once the machine was turned off. Thus the operating system contained no features to ensure the protection of data stored in the computer.

However, the introduction of hard disks, which can store large amounts of potentially sensitive information in the computer, introduced new vulnerabilities. Since the hard disk, unlike the floppy disk, cannot be removed from the computer to protect it, whoever turns on the PC can have access to the data and programs stored on the hard disk. This increased risk can still be countered by locking up the entire machine. However, while the machine is running, all the programs and data are subject to corruption from a malfunctioning program, whereas a dismounted floppy is physically isolated.

The most damaging change in the operating assumptions underlying the PC was the advent of network attachment. External connection via networks has created the potential for broader access to a machine and the data it stores. So long as the machine is turned on, the network connection can be exercised by a remote attacker to penetrate the machine. Unfortunately, MS-DOS does not contain security features that, for example, can protect against unwanted access to or modification of data stored on PCs.

A particularly dangerous example of compromised PC security arises from the use of telecommunication packages that support connecting from the PC to other systems. As a convenience to users, some of these packages offer to record and remember the user's password for other systems. This means that any user penetrating the PC gains access not only to the PC itself but also to all the systems for which the user has stored his password. The problem is compounded by the common practice of attaching a modem to the PC and leaving it turned on at night to permit the user to dial up to the PC from home: since the PC has no access control (unless the software supporting the modem provides the service), any attacker guessing the telephone number can attach to the system and steal all the passwords. Storing passwords to secure machines on a machine with no security might seem the height of folly. However, major software packages for PCs invite the user to do just that, a clear example of how vendors and users ignore security in their search for ease of use.
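The password-storage hazard described in Box 1.2 has a standard mitigation that the report predates: encrypt the stored credentials under a master passphrase the user supplies at connection time, so that a stolen file alone reveals nothing. The sketch below is a toy illustration of the idea using only the Python standard library; the key-derivation choice (PBKDF2) and the XOR construction are this editor's assumptions for demonstration, not something a real credential store should hand-roll (real stores use vetted authenticated ciphers):

```python
import hashlib
import os

def keystream(master: str, salt: bytes, n: int) -> bytes:
    # Stretch the master passphrase into n pseudorandom bytes.
    # PBKDF2 is in the stdlib; the iteration count slows down guessing.
    return hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 200_000, dklen=n)

def seal(master: str, password: str) -> tuple[bytes, bytes]:
    """Encrypt one stored password (ASCII demo) under the master passphrase."""
    salt = os.urandom(16)                        # fresh salt per entry
    pad = keystream(master, salt, len(password))
    ciphertext = bytes(a ^ b for a, b in zip(password.encode(), pad))
    return salt, ciphertext

def unseal(master: str, salt: bytes, ciphertext: bytes) -> bytes:
    pad = keystream(master, salt, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, pad))

salt, ct = seal("correct horse", "s3cret-dialup-pw")
assert unseal("correct horse", salt, ct) == b"s3cret-dialup-pw"
# An intruder who copies the file sees only salt and ciphertext;
# without the master passphrase the stored bytes are useless.
```

The point is only that the on-disk bytes no longer equal the secret; the packages criticized in Box 1.2 stored the passwords directly, so any penetration of the PC harvested them all at once.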

take years to deploy a countermeasure that requires a major change to a basic system. Thus, for example, the current concern about virus attacks derives not from the intrinsic difficulty of resisting the attacks, but from the total lack of a countermeasure in such popular systems as MS-DOS and the Apple Macintosh operating system. It will take years to upgrade these environments to provide a technical means to resist virus attacks. Had such attacks been anticipated, the means to resist them could have been intrinsic to the systems. On the other hand, the threats are changing qualitatively; they are more likely to be catastrophic in impact than the more ordinary threat familiar to security officers and managers. This report focuses on the newer breed of threat to system trustworthiness.

The committee concludes, for the various reasons outlined above and developed in this report, that we cannot wait to see what attackers may devise, or what accident may happen, before we start our defense. We must develop a long-term plan, based on our predictions of the future, and start now to develop systems that will provide adequate security and trustworthiness over the next decade.

TOWARD A PLANNED APPROACH

Taking a coherent approach to the problem of achieving improved system security requires understanding the complexity of the problem and a number of interrelated considerations, balancing the sometimes conflicting needs for security and secrecy, building on groundwork already laid, and formulating and implementing a new plan for action.

Achieving Understanding

The Nature of Security: Vulnerability, Threat, and Countermeasure

The field of security has its own language and mode of thought, which focus on the processes of attack and on preventing, detecting, and recovering from attacks. In practice, similar thinking is accorded to the possibility of accidents that, like attacks, could result in disclosure, modification, or destruction of information or systems or a delay in system use.

Security is traditionally discussed in terms of vulnerabilities, threats, and countermeasures. A vulnerability is an aspect of some system that leaves it open to attack. A threat is a hostile party with the potential to exploit that vulnerability and cause damage. A countermeasure or safeguard is an added step or improved design that eliminates the vulnerability and renders the threat impotent.
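As a concrete, present-day illustration of a countermeasure in the sense just defined (this sketch is this editor's, not the committee's): the virus countermeasure missing from the personal-computer systems discussed earlier can be as simple as recording a cryptographic digest of each program file and re-checking it later, so that unauthorized modification of program text, the vulnerability a virus exploits, becomes detectable. The file names and the SHA-256 algorithm here are modern assumptions:

```python
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    # Fingerprint a file's exact contents.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(paths) -> dict[str, str]:
    """Record a trusted snapshot of each program file's digest."""
    return {str(p): digest(p) for p in paths}

def altered(snapshot: dict[str, str]) -> list[str]:
    """Return the files whose contents no longer match the snapshot."""
    return [p for p, d in snapshot.items() if digest(Path(p)) != d]

# Demo: a temporary file stands in for a program binary.
program = Path(tempfile.mkdtemp()) / "editor.exe"
program.write_bytes(b"original program text")
snapshot = baseline([program])

program.write_bytes(b"original program text; viral payload")  # simulated infection
assert altered(snapshot) == [str(program)]   # the modification is detected
```

Such checking detects infection after the fact rather than preventing it, which is exactly the prevent/detect/recover distinction the text draws.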

A safe containing valuables, for example, may have a noisy combination lock—a vulnerability—whose clicking can be recorded and analyzed to recover the combination. It is surmised that safecrackers can make contact with experts in illegal eavesdropping—a threat. A policy is therefore instituted that recordings of random clicking must be played at loud volume when the safe is opened—a countermeasure.

Threats and countermeasures interact in intricate and often counterintuitive ways: a threat leads to a countermeasure, and the countermeasure spawns a new threat. Few countermeasures are so effective that they actually eliminate a threat. New means of attack are devised (e.g., computerized signal processing to separate "live" clicks from recorded ones), and the result is a more sophisticated threat.

The interaction of threat and countermeasure poses distinctive problems for security specialists: the attacker must find but one of possibly multiple vulnerabilities in order to succeed; the security specialist must develop countermeasures for all. The advantage is therefore heavily to the attacker until very late in the mutual evolution of threat and countermeasure.9 If one waits until a threat is manifest through a successful attack, then significant damage can be done before an effective countermeasure can be developed and deployed. Therefore countermeasure engineering must be based on speculation. Effort may be expended in countering attacks that are never attempted.10

The need to speculate and to budget resources for countermeasures also implies a need to understand what it is that should be protected, and why; such understanding should drive the choice of a protection strategy and countermeasures. This thinking should be captured in security policies generated by management; poor security often reflects both weak policy and inadequate forethought.11

Security specialists almost uniformly try to keep the details of countermeasures secret, thus increasing the effort an attacker must expend and the chances that an attack will be detected before it can succeed. Discussion of countermeasures is further inhibited because a detailed explanation of sophisticated features can be used to infer attacks against lesser systems.12 As long as secrecy is considered important, the dissemination, without motivation, of guidelines developed by security experts will be a key instrument for enhancing secure system design, implementation, and operation. The need for secrecy regarding countermeasures and threats also implies that society must trust a group of people, security experts, for advice on how to maintain security.
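The attacker's one-vulnerability advantage described above can be made quantitative with a toy calculation (this editor's illustration, not the committee's): if a system has n independent weaknesses and the defender manages to counter each one with probability p, the chance that at least one door is left open is 1 - p^n, which grows quickly with n:

```python
def p_compromise(n_vulns: int, p_countered: float) -> float:
    """Probability that at least one of n independent vulnerabilities
    remains open when each is countered with probability p_countered."""
    return 1 - p_countered ** n_vulns

# Even with 95% per-vulnerability coverage, twenty independent
# weaknesses leave the system more likely than not to have an open door.
assert round(p_compromise(20, 0.95), 2) == 0.64
assert p_compromise(1, 0.95) < 0.06
```

The independence assumption is generous to the defender; correlated weaknesses (a single flawed component reused everywhere) can be worse. Either way the arithmetic supports the text's conclusion that countermeasure engineering must run ahead of observed attacks.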

Confidence in countermeasures is generally achieved by submitting them for evaluation by an independent team; this process increases the lead times and costs of producing secure systems. The existence of a successful attack can be demonstrated by an experiment, but the adequacy of a set of countermeasures cannot. Security specialists must resort to analysis, yet mathematical proofs in the face of constantly changing systems are impossible. In practice, the effectiveness of a countermeasure often depends on how it is used; the best safe in the world is worthless if no one remembers to close the door. The possibility of legitimate users being hoodwinked into doing what an attacker cannot do for himself cautions against placing too much faith in purely technological countermeasures.

The evolution of countermeasures is a dynamic process. Security requires ongoing attention and planning, because yesterday's safeguards may not be effective tomorrow, or even today.

Special Security Concerns Associated with Computers

Computerization presents several special security challenges that stem from the nature of the technology, including the programmability of computers, interconnection of systems, and the use of computers as parts of complex systems. A computing system may be under attack (e.g., for theft of data) for an indefinite length of time without any noticeable effects, attacks may be disguised or may be executed without clear traces being left, or attacks may be related to seemingly benign events. Thus "no danger signals" does not mean that everything is in order.13

A further complication is the need to balance security against other interests, such as impacts on individual privacy. For example, automated detection of intrusion into a system, and other safeguards, can make available to system administrators significant information about the behavior of individual system users.

To some extent, those attributes of computing that introduce vulnerabilities can also be used to implement countermeasures. A computer system (unlike a file cabinet) can take active measures in its defense, by monitoring its activity and determining which user and program actions should be permitted (Anderson, 1980). Unfortunately, as discussed later in this report, this potential is far from realized.

Programmability

The power of a general-purpose computer lies in its ability to become an infinity of different machines through programming.14 This is also a source of great vulnerability, because if a system can be programmed, it can be programmed to do bad things.

Thus by altering program text a computer virus can transform a familiar and friendly machine into something else entirely (Cohen, 1984). The vulnerability introduced by programmability is compounded by the degree to which the operation of a computer is hidden from its user. Whereas an individual concerned about security can inspect a mechanical typewriter and safely conclude that the effects of pressing a key are the appearance of a letter on the paper and the imprint of a letter on the ribbon, he can gain no such confidence about the operation of a word processor. It is clear that the pressing of a word processor's key causes the appearance of a letter on the screen. It is in no sense clear what else is happening—whether, for instance, the letters are being saved for subsequent transmission or the internal clock is being monitored for a "trigger date" for the alteration or destruction of files.

Embeddedness and Interconnection

The potential for taking improper irreversible actions increases with the degree to which computers are embedded in processes.15 The absence of human participation removes checks for the reasonableness of an action. And the time scale of automatic decisions may be too short to allow intervention before damage is done.

Interconnection enables attacks to be mounted remotely, anonymously, and against multiple vulnerabilities concurrently, creating the possibility of overwhelming impacts if the attacks are successful. This risk may not be understood by managers and system users. If a particular node on a massive, heterogeneous network does not contain any sensitive information, its owners may not be motivated to install any countermeasures. Yet such "wide-open" nodes can be used to launch attacks on the network as a whole, and little can be done in response, aside from disconnecting. The "Wily Hacker," for example, laundered his calls to defense-related installations through various university computers, none of which suffered any perceptible loss from his activities. The Internet worm of November 1988 also showed how networking externalizes risk. Many of the more than 2,000 affected nodes were entered easily once a "neighbor" node had been entered, usually through the electronic equivalent of an unlocked door.

In many cases, communication and interconnection have passed well beyond the simple exchange of messages to the creation of controlled opportunities for outsiders to access an organization's systems to facilitate either organization's business. On-line access by major telephone customers to telephone system management data and by large businesses to bank systems for treasury management

functions are two examples of this phenomenon. A related development is electronic data interchange (EDI), in which companies have computer-communications links with suppliers and customers to automate ordering, queries about the status of orders, inventory management, market research, and even electronic funds transfer (EFT). EDI and EFT may add an additional system layer or interconnection where systems are mediated by third-party suppliers that collect, store, and forward messages between various parties in various organizations. This situation illustrates the need for trustworthiness in common carriage.

In short, a wide range of organizations are connected to each other through computer systems, sometimes without knowing they are interconnected. Interconnection gives an almost ecological flavor to security; it creates dependencies that can harm as well as benefit the community of those who are interconnected. An analogy can be made to pollution: the pollution generated as a byproduct of legitimate activity causes damage external to the polluter. A recognized public interest in eliminating the damage may compel the installation of pollution control equipment for the benefit of the community, although the installation may not be justified by the narrow self-interest of the polluter. Just as average citizens have only a limited technical understanding of their vulnerability to pollution, so also individuals and organizations today have little understanding of the extent to which their computer systems are put at risk by those systems to which they are connected, or vice versa. The public interest in the safety of networks may require some assurances about the quality of security as a prerequisite for some kinds of network connection.

Security Must Be Holistic—Technology, Management, and Social Elements

Computer security does not stop or start at the computer. It is not a single feature, like memory size, nor can it be guaranteed by a single feature or even a set of features. It comprises at a minimum computer hardware, software, networks, and other equipment to which the computers are connected, facilities in which the computer is housed, and persons who use or otherwise come into contact with the computer. Serious security exposures may result from any weak technical or human link in the entire complex. For this reason, security is only partly a technical problem: it has significant procedural, administrative, physical facility, and personnel components as well. The General Accounting Office's recent criticisms of financial computer systems, for example, highlighted the risks associated with poor physical

them from Orange Book ratings. Industry has complained for some time about current export controls on trusted systems. The requirement for case-by-case review of export licenses for trusted systems with Orange Book ratings of B3 and above adds to the cost of such systems, because sales may be restricted and extra time is needed to apply for and receive export approval. These prospects discourage industry from developing more secure systems; vendors do not want to jeopardize the exportability of their mainline commercial offerings.27

The committee recommends that Orange Book ratings not be used as export control criteria. It also recommends that the Department of Commerce, in conjunction with the Departments of Defense and State, clarify for industry the content of the regulations and the process by which they are implemented. Removal of Orange Book ratings as control parameters would also help to alleviate potential problems associated with multiple, national rating schemes (see Chapter 5).

The crux of the problem appears to be confusion among Orange Book ratings, dual-use (military and civilian) technology, and military-critical technology. Security technology intended to counter an intelligence-grade threat is considered military critical and not dual use—it is not aimed at commercial as well as military uses. Security technology intended to counter a lower, criminal-grade threat is of use to both defense and commercial entities, but it is not military critical. Since an Orange Book rating per se is not proof against an intelligence-grade threat, it does not alone signal military-critical technology that should be tightly controlled. Industry needs to know which features of a product might trigger export restrictions.

4b. Review export controls on implementations of the Data Encryption Standard.

The growth of networked and distributed systems has created needs for encryption in the private sector. Some of that pressure has been seen in the push for greater exportability of products using the Data Encryption Standard (DES) and its deployment in foreign offices of U.S. companies.28 In principle, any widely available internationally usable encryption algorithm should be adequate. NIST, working with NSA, is currently trying to develop such algorithms. However, the committee notes that this effort may not solve industry's problems, for several reasons. The growing installed base of DES products cannot be easily retrofitted with the new products. The foreign supply of DES products may increase the appeal of foreign products. Finally, NSA-influenced alternatives may be unacceptable to foreign or even U.S. buyers, as evidenced by the American Banking Association's opposition

to the NSA's proposals to effectively restrict banks to encryption algorithms designed and developed by NSA when the DES was last recertified, in 1988.

The committee has been apprised that NSA, because of classified national security concerns, does not support the removal of remaining restrictions on export of DES. However, there is a growing lack of sympathy in the commercial community with the NSA position on this matter. The committee recommends that the Administration appoint an arbitration group, consisting of appropriately cleared individuals from industry and the Department of Commerce as well as the Department of Defense, to impartially evaluate whether there are indeed valid reasons at this time for limiting the export of DES.29

Recommendation 5
Fund and Pursue Needed Research

The dramatic changes in the technology of computing make it necessary for the computer science and engineering communities to rethink some of the current technical approaches to achieving security. The most dramatic example of the problem is the confusion about how best to achieve security in networked environments and embedded systems. At present, there is no vigorous program to meet this need. Particularly worrisome is the lack of academic research in computer security, notably research relevant to distributed systems and networks.30 Only in theoretical areas, such as number theory, zero-knowledge proofs, and cryptology, which are conducive to individual research efforts, has there been significant academic effort. Although it must be understood that many research topics could be pursued in industrial as well as academic research laboratories, the committee has focused on strengthening the comparatively weaker research effort in universities, since universities both generate technical talent and are traditionally the base for addressing relatively fundamental questions.

The committee recommends that government sponsors of computer science and technology research (in particular, DARPA and NSF) undertake well-defined and adequately funded programs of research and technology development in computer security. A key role for NSF (and perhaps DARPA), beyond specific funding of relevant projects, is to facilitate increased cross-coupling between security experts and researchers in related fields. The committee also recommends that NIST, in keeping with its interest in computer security and its charter to enhance security for sensitive unclassified data and systems, provide funding for research in areas of key concern to it, either internally or in collaboration with other agencies that support research.

BOX 1.8 SECURITY RESEARCH AGENDA

Security modularity—How can a set of system components with known security properties be combined or composed to form a larger system with known security properties? How can a system be decomposed into building blocks, units that can be used independently in other systems?

Security policy models—Security requirements other than disclosure control, such as integrity, availability, and distributed authentication and authorization, are not easily modeled. There is also a need for better models that address protocols and other aspects of distributed systems.

Cost/benefit models for security—How much does security (including also privacy protection) really cost, and what are its real benefits?

New security mechanisms—As new requirements are proposed, as new threats are considered, and as new technologies become prevalent, new mechanisms are required to maintain effective security. Some current topics for research include mechanisms to support critical aspects of integrity (separation of duty, for example), distributed key management on low-security systems, multiway and transitive authentication, availability (especially in distributed systems and networks), privacy assurance, and access controllers in networks to permit interconnection of mutually suspicious organizations.

Increasing effectiveness of assurance techniques—More needs to be known about the spectrum of analysis techniques, both formal and informal, and to what aspects of security they best apply. Also, tools are needed to support the generation of assurance evidence.

Alternative representations and presentations—New representations of security properties may yield new analysis techniques. For example,
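One integrity mechanism named in Box 1.8, separation of duty, lends itself to a small illustration: no single user should be able to both initiate and approve the same sensitive transaction. The sketch below is purely illustrative of the idea, not a mechanism described in this report; all class and field names are invented for the example.

```python
# Minimal separation-of-duty check: a payment must be initiated by one
# user and approved by a *different* user before it can be executed.
# Illustrative sketch only; names and structure are invented.

class SeparationOfDutyError(Exception):
    pass

class Payment:
    def __init__(self, amount, initiator):
        self.amount = amount
        self.initiator = initiator
        self.approver = None

    def approve(self, approver):
        # The core separation-of-duty rule: the initiator may not self-approve.
        if approver == self.initiator:
            raise SeparationOfDutyError(
                f"{approver} initiated this payment and cannot approve it")
        self.approver = approver

    @property
    def executable(self):
        return self.approver is not None

p = Payment(10_000, initiator="alice")
try:
    p.approve("alice")   # rejected: same person who initiated
except SeparationOfDutyError:
    pass
p.approve("bob")         # accepted: a second person
assert p.executable
```

Even this toy version shows why the research agenda flags the area: the check is trivial on one machine, but enforcing "different user" reliably across a distributed system requires the authentication and authorization machinery discussed elsewhere in the box.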
The committee has identified several specific technical issues that justify research (see Box 1.8). Chapter 8 provides a fuller discussion; Chapters 3 and 4 address some underlying issues. The list, although by no means complete, shows the scope and importance of a possible research agenda.

The committee believes that greater university involvement in large-scale research-oriented system development projects (comparable to the old Arpanet and Multics programs) would be highly beneficial for security research. It is important that contemporary projects, both inside and outside universities, be encouraged to use state-of-the-art software development tools and security techniques, in order to evaluate these tools and to assess the expected gain in system security. Also, while academic computer security research traditionally has been

graphics tools that allow system operators to set, explore, and analyze proposed policies (who should get access to what) and system configurations (who has access to what) may help identify weaknesses or unwanted restrictions as policies are instituted and deployed systems used.

Automated security procedures—Research is needed in automating critical aspects of system operation, to assist the system manager in avoiding security faults in this area. Examples include tools to check the security state of a system, models of operational requirements and desired controls, and threat assessment aids.

Nonrepudiation—To protect proprietary rights it may be necessary to record user actions so as to bar the user from later repudiating these actions. Doing this in a way that respects the privacy of users is difficult.

Resource control—Resource control is associated with the prevention of unauthorized use of proprietary software or databases legitimately installed in a computing system. It has attracted little research and implementation effort, but it poses some difficult technical problems and possibly problems related to privacy as well.

Systems with security perimeters—Network protocol design efforts have tended to assume that networks will provide general interconnection. However, as observed in Chapter 3, a common practical approach to achieving security in distributed systems is to partition the system into regions that are separated by a security perimeter. This may cause a loss of network functionality. If, for example, a network permits mail but not directory services (because of security concerns about directory searches), less mail may be sent because no capability exists to look up the address of a recipient.
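The nonrepudiation item above asks how user actions can be recorded so that a user cannot later disown them. One common building block for this is a hash-chained log: each entry commits to a hash of the entry before it, so altering or deleting a recorded action breaks every subsequent link. The sketch below, with invented names, shows only this tamper-evidence part; real nonrepudiation would also bind each entry to its user with a digital signature.

```python
import hashlib

# Hash-chained audit log: each entry's hash covers the previous entry's
# hash, so a recorded action cannot be silently altered or removed.
# Simplified sketch; per-user digital signatures are omitted.

def _entry_hash(prev_hash, user, action):
    data = f"{prev_hash}|{user}|{action}".encode()
    return hashlib.sha256(data).hexdigest()

class AuditLog:
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []  # list of (user, action, entry_hash)

    def record(self, user, action):
        prev = self.entries[-1][2] if self.entries else self.GENESIS
        self.entries.append((user, action, _entry_hash(prev, user, action)))

    def verify(self):
        prev = self.GENESIS
        for user, action, h in self.entries:
            if h != _entry_hash(prev, user, action):
                return False  # chain broken: log was tampered with
            prev = h
        return True

log = AuditLog()
log.record("alice", "approve payment 42")
log.record("bob", "delete record 7")
assert log.verify()

# Rewriting an earlier action (without recomputing hashes) breaks the chain.
user, _, h = log.entries[0]
log.entries[0] = (user, "approve payment 1", h)
assert not log.verify()
```

The privacy tension noted in the box is visible even here: the log is only useful if it names users and actions, which is exactly the information a privacy-respecting design would want to minimize.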
performed in computer science departments, several study areas are clearly appropriate for researchers based in business schools, including assessing the actual value to an organization of information technology and of protecting privacy.

DARPA has a tradition of funding significant system development projects of the kind that can be highly beneficial for security research. Examples of valuable projects include:

Use of state-of-the-art software development techniques and tools to produce a secure system. The explicit goal of such an effort should be to evaluate the development process and to assess the expected gain in system quality. The difficulty of uncovering vulnerabilities through testing suggests that a marriage of traditional software engineering techniques with formal methods is needed.

Development of distributed systems with a variety of security properties. A project now under way, with DARPA funding, is the

development of encryption-based private electronic mail. Another such project could focus on decentralized, peer-connected name servers.

Development of a system supporting some approach to data integrity. There are now some proposed models for integrity, but without worked examples it will be impossible to validate them. This represents an opportunity for DARPA-NIST cooperation.

In addition to funding specific relevant projects, both DARPA and NSF should encourage collaboration across research fields. Cross-disciplinary research in the following areas would strengthen system trustworthiness:

Safety: There is growing concern about and interest in the safety-related aspects of computer processing both in the United States and internationally.

Fault-tolerant computing: Much research has been directed at the problem of fault-tolerant computing, and an attempt should be made to extend this work to other aspects of security.

Code analysis: People working on optimizing and parallelizing compilers have extensive experience in analyzing both source and object code for a variety of properties. An attempt should be made to see if similar techniques can be used to analyze code for properties related to security.

Security interfaces: People working in the area of formal specification should be encouraged to specify standardized interfaces to security services and to apply their techniques to the specification and analysis of high-level security properties.

Theoretical research: Theoretical work needs to be properly integrated in actual systems. Often both theoreticians and system practitioners misunderstand the system aspects of security or the theoretical limitations of secure algorithms.
Programming language research: New paradigms require new security models, new design and analysis techniques, perhaps additional constructs, and persuasion of both researchers and users that security is important before too many tools proliferate.

Software development environments: Myriad tools (e.g., theorem provers, test coverage monitors, object managers, and interface packages) continue to be developed by researchers, sometimes in collaborative efforts such as Arcadia. Some strategy for integrating such tools is needed to drive the research toward more system-oriented solutions.31

Again, much of this research is appropriate for both commercial and academic entities, and it might require or benefit from industry-

university collaboration. Certainly, joint industry-university efforts may facilitate the process of technology transfer. NSF and DARPA have a tradition of working with the broad science community and could obviously take on programs to facilitate needed collaboration. Some possible specific actions are suggested in Chapter 8.

Recommendation 6
Establish an Information Security Foundation

The public needs an institution that will accelerate the commercialization and adoption of safer and more secure computer and communications systems. To meet that need, the committee recommends the establishment of a new private organization—a consortium of computer users, vendors, and other interested parties (e.g., property and casualty insurers). This organization must not be, or even be perceived to be, a captive of government, system vendors, or individual segments of the user community. The committee recommends a new institution because it concludes that pressing needs in the following areas are not likely to be met adequately by existing entities:

Establishment of Generally Accepted System Security Principles, or GSSP;
Research on computer system security, including evaluation techniques;
System evaluation;
Development and maintenance of an incident, threat, and vulnerability tracking system;
Education and training;
Brokering and enhancing communications between commercial and national security interests; and
Focused participation in international standardization and harmonization efforts for commercial security practice.

Why should these functions be combined in a single organization? Although the proposed organization would not have a monopoly on all of these functions, the committee believes that the functions are synergistic. For example, involvement in research would help the organization recruit technically talented staff; involvement in research and the development of GSSP would inform the evaluation effort; and involvement in GSSP development and evaluation would inform education, training, and contributions to international criteria-setting and evaluation schemes. Further, a new organization would have

more flexibility than those currently focused on security to build strong bridges to other aspects of trust, notably safety.

In the short run, this organization, called the Information Security Foundation (ISF) in this report, would act to increase awareness and expectations regarding system security and safety. The pressure provided by organized tracking and reporting of faults would encourage vendors and users to pay greater attention to system quality; the development and promulgation of GSSP should cause users and vendors to focus on an accepted base of prudent practice.

In the longer term, a major activity of the ISF would be product evaluation. The complex and critical nature of security products makes independent evaluation essential. The only current official source of evaluations, the NCSC, has been criticized as poorly suited to meeting industry's needs, and changes in its charter and direction are reducing its role in this area. The process of evaluation described in Chapters 5 and 7 is intended to address directly industry's concerns with the current process and to define a program that can be a success in the commercial marketplace. The committee concludes that some form of system evaluation is a critical aspect of achieving any real improvement in computer security.

Also in the longer term, the ISF would work to bridge the security and safety arenas, using as vehicles GSSP and evaluation as well as the other activities. The ISF could play a critical role in improving the overall quality and trustworthiness of computer systems, using the need for better security as an initial target to motivate its activities.

The organization envisioned must be designed to interact closely with government, specifically the NCSC and NIST, so that its results can contribute to satisfying government needs. Similarly, it would coordinate with operational organizations such as DARPA's CERT, especially if the CERT proceeds with its plans to develop an emergency-incident tracking capability. The government may be the best vehicle to launch the ISF, but it should be an independent, private organization once functional.

As discussed in detail in Chapter 7, the committee concludes that the ISF would need the highest level of governmental support; the strongest expression of such support would be a special congressional charter. Such a charter would define ISF's role and its relation to the government. At the same time, the organization should be outside of the government to keep it separate from the focus on intragovernmental security needs, internecine political squabbles, and the hiring and resource limitations that constrain NCSC and NIST. Its major source of funds should be member subscriptions and fees

for services such as evaluation. It must not depend on government funding for its viability.

Note that the mission outlined above is much more challenging than defining standards or providing evaluation of consumer durables (e.g., as done by Underwriters Laboratories, Inc.). The committee does not know of any existing private organization that could take on these tasks. Although it recognizes that any proposal for establishing a new institution faces an uphill battle, the committee sees this proposal as a test of commitment for industry, which has complained loudly about the existing institutional infrastructure. Commitment to an organization like that proposed can facilitate self-regulation and greatly diminish the likelihood of explicit government regulation.

If a new organization is not established, or if the functions proposed for it are not pursued in an aggressive and well-funded manner, the most immediate consequences will be the further discouragement of efforts by vendors to develop evaluated products, even though evaluation is vital to assuring that products are indeed trustworthy; the continuation of a slow rate of progress in the market, leaving many system users unprotected and unaware of the risks they face; and the prospect that U.S. vendors will become less competitive in the international systems market. Without aggressive action to increase system trustworthiness, the national exposure to safety and security catastrophes will increase rapidly.

CONCLUSION

Getting widely deployed and more effective computer and communications security is essential if the United States is to fully achieve the promise of the Information Age. The technology base is changing, and the proliferation of networks and distributed systems has increased the risks of threats to security and safety. The computer and communications security problem is growing. Progress is needed on many fronts—including management, development, research, legal enforcement, and institutional support—to integrate security into the development and use of computer and communications technology and to make it a constructive and routine component of information systems.

NOTES

1. Losses from credit card and communications fraud alone investigated by the Secret Service range into the millions. See Box 1.1 for other examples.

2. This growth may be aided by recent political changes in Eastern Europe and the Soviet Union, which are believed to be freeing up intelligence resources that analysts suggest may be redirected toward economic and technological targets (Safire, 1990).

3. Voting systems present special challenges: First, the data is public property. Second, voting systems are information systems deployed to strange locations, handled by volunteers, abused by the media ("got to know the results by 8 p.m."), and offered by specialty vendors. Third, the openness issue can be evaded by vendors promoting proprietary approaches, in the absence of any organized screening or regulatory activity. Fourth, the security overhead in the system cannot get in the way of the operations of the system under what are always difficult conditions. Voting system technology makes an interesting case study because it is inherently system-oriented: ballot preparation, input sensing, data recording and transmission, pre-election testing, intrusion prevention, result preservation, and reporting. The variety of product responses is therefore immense, and each product must fit as wide a range of voting situations as possible, and be attractive and cost-effective. Anecdotal evidence suggests a range of security problems for this comparatively new application. (Hoffman, 1988; ECRI, 1988b; Saltman, 1988; miscellaneous issues of RISKS.)

4. Viruses can spread by means of or independently of networks (e.g., via contaminated diskettes).

5. The committee did not find evidence of significant Japanese activity in computer security, although viruses have begun to raise concern in Japan, as evidenced by Japanese newspaper articles, and Japanese system development interests provide a foundation for possible eventual action. For competitive reasons, both Japanese and European developments should be closely monitored.

6. A new organization, the Electronic Frontier Foundation, has recently been launched to defend these free speech aspects.

7. For example, professional journals and meetings have held numerous debates over the interpretation of the Internet worm and the behavior of its perpetrator; the Internet worm also prompted the issuance or reissuance of codes of ethics by a variety of computer specialist organizations.

8. Two recent studies have pointed to the increased concern with security in networks: the congressional Office of Technology Assessment's Critical Connections: Communication for the Future (OTA, 1990) and the National Research Council's Growing Vulnerability of the Public Switched Networks (NRC, 1989b).

9. This evolution took roughly two centuries in the case of safecracking, a technology whose systems consist of a box, a door, and a lock.

10. This does not mean that the effort was wasted. In fact, some would argue that this is the height of success (Tzu, 1988).

11. For example, a California prosecutor recently observed that "We probably turn down more cases [involving computer break-ins] than we charge, because computer-system proprietors haven't made clear what is allowed and what isn't" (Stipp, 1990).

12. For example, a description of a magnetic door sensor that is highly selective about the magnetic field it will recognize as indicating "door closed" can indicate to attackers that less sophisticated sensors can be misled by placing a strong magnet near them before opening the door.

13. For example, the GAO recently noted in connection with the numerous penetrations of the Space Physics Analysis Network in the 1980s that, "Skillful, unauthorized users could enter and exit a computer without being detected. In such cases and even in those instances where NASA has detected illegal entry, data could have been copied, altered, or destroyed without NASA or anyone else knowing" (GAO, 1989e, p. 1).

14. "Programming" is to be understood in a general sense—anything that modifies or extends the capabilities of a system is programming. Modification of controls on access to a system, for example, is a type of programming with significant security implications. Even special-purpose systems with no access to programming languages, not even to a "shell" or command language, are usually programmable in this sense.

15. "Embeddedness" refers to the extent to which a computer system is embedded in a process, and it correlates with the degree to which the process is controlled by the computer. Computer-controlled X-ray machines and manufacturing systems, avionics systems, and missiles are examples of embedded systems. Higher degrees of embeddedness, generated by competitive pressures that drive the push for automation, shorten the link between information and action and increase the potential for irreversible actions taken without human intervention. By automating much of a process, embeddedness increases the leverage of an attacker.

16. However, sometimes there will be trade-offs between security or safety and other characteristics, like performance. Such trade-offs are not unique to computing, although they may be comparatively more recent.

17. It is worth noting, however, that "safety factors" play a role in security. Measures such as audit trails are included in security systems as a safety factor; they provide a backup mechanism for detection when something else breaks.

18. Even NSA is confronting budget cuts in the context of overall cuts in defense spending.

19. For example, the American Institute of Certified Public Accountants promulgates Statements on Auditing Standards (SAS), and the Financial Accounting Standards Board (FASB) promulgates what have been called Generally Accepted Accounting Principles (GAAP). Managers accept the importance of both the standards and their enforcement as a risk management tool. Adherence to these standards is also encouraged by laws and regulations that seek to protect investors and the public. (See Appendix D.)

20. B1 is also the highest level to which systems can effectively be retrofitted with security features.

21. An effort by several large commercial users to list desired computer and communications system security features demonstrates the importance of greater integrity protection and the emphasis on discretionary access control in that community. This effort appears to place relatively limited emphasis on assurance and evaluation, both of which the committee deems important to GSSP and to an ideal set of criteria. The seed for that effort was a project within American Express Travel Related Services to define a corporate security standard called C2-Plus and based, as the name suggests, on the Orange Book's C2 criteria (Cutler and Jones, 1990).

22. In the past decade, a number of organizations (e.g., Corporation for Open Systems and the formerly independent Manufacturing Automation Protocol/Technical Office Protocol Users Group) have emerged with the goal of influencing the development of industry standards for computing and communications technology and promoting the use of official standards, in part by facilitating conformance testing (Frenkel, 1990).

23. The Computer Security Act of 1987, for example, set in motion a process aimed at improving security planning in federal agencies. The experience showed that it was easier to achieve compliance on paper than to truly strengthen planning and management controls (GAO, 1990c).

24. Examples include ISO 7498-2 (ISO, 1989), CCITT X.509 (CCITT, 1989b), and the NSA-launched Secure Data Network System (SDNS) standardization program.

25. The very availability of such tools puts an extra responsibility on management to eliminate the kinds of vulnerabilities the tools reveal.

26. For example, discussions of different project management structures would

deal with their impact not only on productivity but also on security. Discussions of quality assurance would emphasize safety engineering more than might be expected in a traditional software engineering program.

27. It is expensive for vendors to maintain two versions of products—secure and regular. Thus, all else being equal, regular versions can be expected to be displaced by secure versions. But if sales are restricted, then only the regular version will be marketed, to the detriment of security.

28. As this report goes to press, a case is under consideration at the Department of State that could result in liberalized export of DES chips, although such an outcome is considered unlikely.

29. As of this writing, similar actions may also be necessary in connection with the RSA public-key encryption system, which is already available overseas (without patent protection) because its principles were first published in an academic journal (Rivest et al., 1978).

30. The paucity of academic effort is reflected by the fact that only 5 to 10 percent of the attendees at recent IEEE Symposia on Security and Privacy have been from universities.

31. For vendors, related topics would be trusted distribution and trusted configuration control over the product life cycle.