
6
Why the Security Market Has Not Worked Well

Currently available are a wide variety of goods and services intended to enhance computer and communications security. These range from accessory devices for physical security, identification, authentication, and encryption to insurance and disaster recovery services, which provide computer and communications centers as a backup to an organization's or individual's own equipment and facilities. This chapter focuses on the market for secure or trusted systems and related products, primarily software. It provides an overview of the market and its problems, outlines the influences of the federal government on this market, discusses the lack of consumer awareness and options for alleviating it, and assesses actual and potential government regulation of the secure system market. Additional details on the export control process and insurance are provided in two chapter appendixes.

THE MARKET FOR TRUSTWORTHY SYSTEMS

Secure or trusted information systems are supplied by vendors of general- and special-purpose hardware and software. Overall, the market for these systems has developed slowly, although the pace is picking up somewhat now. Whereas the market in 1980 was dominated by commercial computer and communications systems with no security features, the market in 1990 includes a significant number of systems that offer discretionary access control and a growing number from both major and niche vendors with both discretionary and mandatory access control, which provides significant protections against breaches of confidentiality. Notable is the trend to produce systems rated at the Orange Book's B1 level (see Appendix A of this report), often by adapting products that had had fewer security features and less assurance.

According to vendors, consumers most frequently demand security in connection with networked systems, which serve multiple users. One market research firm (International Resource Development) has estimated that the market for local area network (LAN) security devices may grow up to sixfold by the mid-1990s; it also foresees significant growth in data and voice encryption devices, in part because their costs are declining (Brown, 1989a). Other factors cited for growth in the encryption market are requirements for control of fraud in financial services and elsewhere (Datapro Research, 1989a).

Prominent in the market has been host access control software for IBM mainframes, especially IBM's RACF and Computer Associates' ACF2 and Top Secret. This type of add-on software provides (but does not enforce) services, such as user identification, authentication, authorization, and audit trails, that the underlying operating systems lack. It was originally developed in the 1970s and early 1980s, driven by the spread of multiaccess applications (mainframe-based systems were not originally developed with security as a significant consideration). Both IBM and Computer Associates plan to make these products conform to Orange Book B1 criteria. Although IBM intends now to bring its major operating systems up to the B1 level, it is reluctant to undertake development to achieve higher levels of assurance (committee briefing by IBM). Moreover, the market for host access control systems is growing slowly because those who need them generally have them already.1 One market analyst, Datapro, notes that sales come mostly from organizations required by federal or state regulations to implement security controls (Datapro Research, 1990a).
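
To make the mediation role of such packages concrete, the sketch below shows, in Python and with wholly hypothetical names and rules, the basic pattern they implement: every request is checked against an authorization table, and the decision is written to an audit trail. This is a conceptual illustration, not the actual interface of RACF, ACF2, or Top Secret.

    from datetime import datetime, timezone

    # Hypothetical rule table: (user, resource) -> permitted operations.
    # Real products keep such rules in protected system datasets.
    ACCESS_RULES = {
        ("alice", "PAYROLL.MASTER"): {"read"},
        ("bob", "PAYROLL.MASTER"): {"read", "update"},
    }

    AUDIT_TRAIL = []  # in practice, an append-only protected log

    def check_access(user, resource, operation):
        """Mediate one request and record the decision in the audit trail."""
        granted = operation in ACCESS_RULES.get((user, resource), set())
        AUDIT_TRAIL.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "resource": resource,
            "operation": operation,
            "granted": granted,
        })
        return granted

    print(check_access("alice", "PAYROLL.MASTER", "update"))  # False
    print(check_access("bob", "PAYROLL.MASTER", "update"))    # True

Note that nothing in the sketch prevents a program from bypassing check_access altogether; that is the sense in which add-on packages provide, but cannot by themselves enforce, these services. Enforcement depends on the underlying operating system routing every reference through the monitor.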

The most powerful alternatives to add-on software, of course, are systems with security and trust built in. In contrast to the mainframe environment, some vendors have been building more security features directly into midrange and open systems, possibly benefiting from the more rapid growth of this part of the market. Even in the personal computer market, newer operating systems (e.g., OS/2) offer more security than older ones (e.g., MS-DOS).

Multics, the first commercial operating system that was developed (by the Massachusetts Institute of Technology, General Electric, and AT&T Bell Laboratories) with security as a design goal, achieved a B2 rating in 1985. While Multics has a loyal following and is frequently cited as a prime exemplar of system security, its commercial history has not been encouraging. Its pending discontinuation by its vendor (now Bull, previously Honeywell, originally General Electric) apparently reflects a strategic commitment to other operating systems (Datapro Research, 1990b).

The history of Unix illustrates the variability of market forces during the lifetime of a single product. Originally Unix had security facilities superior to those in most commercial systems then in widespread use.2 Unix was enthusiastically adopted by the academic computer science community because of its effectiveness for software development. This community, where security consciousness was not widespread, created new capabilities, especially to interface to DARPA-sponsored networking (e.g., remote log-in and remote command execution).3 As Unix spread into the commercial marketplace, the new capabilities were demanded despite the fact that they undermined the ability to run a tight ship from the security standpoint. Subsequently, and largely spurred by the Orange Book, various efforts to strengthen the Unix system have been undertaken (including TMACH, funded by DARPA; LOCK, funded by the National Security Agency; the IEEE POSIX 1003.6 standards proposal; and various manufacturers' projects). But the corrections will not be total: many customers still choose freedom over safety.

The slow growth of the market for secure software and systems feeds vendor perceptions that its profitability is limited. Both high development costs and a perceived small market have made secure software and system development appear as a significant risk to vendors. Moreover, a vendor that introduces a secure product before its competitors has only a year or two to charge a premium. After that, consumers come to expect that the new attributes will be part of the standard product offering. Thus the pace of change and competition in the overall market for computer technology may be inimical to security, subordinating security-relevant quality to creativity, functionality, and timely releases or upgrades. These other attributes are rewarded in the marketplace and more easily understood by consumers and even software developers.

While the overall market for computer technology is growing and broadening, the tremendous growth in retail distribution, as opposed to custom or low-volume/high-price sales, has helped to distance vendors from consumers and to diminish the voice of the growing body of computer users in vendor decision making. Although vendors have relatively direct communications with large-system customers—customers whom they know by name and with whom they have individualized contracts—they are relatively removed from buyers of personal computer products, who may be customers of a retail outlet rather than of the manufacturer itself. Retail distribution itself may constrain the marketing of security products. Vendors of encryption and access control products have indicated that some retailers may avoid offering security products because "the issue of security dampens enthusiasm," while some of these relatively small vendors avoid retail distribution because it requires more customer support than they can manage (Datapro Research, 1989a).

Many in the security field attribute the increased availability of more secure systems to government policies stimulating demand for secure systems (see "Federal Government Influence on the Market" below). Those policies have led to a two-tiered market: government agencies, especially those that process classified information, and their vendors, are likely to demand Orange Book-rated trusted systems; other agencies, commercial organizations, and individuals that process sensitive but unclassified information are more likely to use less sophisticated safeguards. This second market tier constitutes the bulk of the market for computer-based systems. The committee believes that, more often than not, consumers do not have enough or good enough safeguards, both because options on the market often appear to be ineffective or too expensive, and because the value of running a safe operation is often not fully appreciated. Since data describing the marketplace are limited and of questionable quality, the committee bases its judgment on members' experiences in major system user and vendor companies and consultancies. This judgment also reflects the committee's recognition that even systems conforming to relatively high Orange Book ratings have limitations, and do not adequately address consumer needs for integrity and availability safeguards.

A SOFT MARKET: CONCERNS OF VENDORS

Vendors argue that a lack of broad-based consumer understanding of security risks and safeguard options results in relatively low levels of demand for computer and communications security. For example, one survey of network users found that only 17 percent of Fortune 1000 sites and 10 percent of other sites used network security systems (Network World, 1990). Thus, although market research may signal high growth rates in certain security markets, the absolute market volume is small. To gain insight into the current market climate for secure products, the committee interviewed several hardware and software vendors.

Vendors find security hard to sell, in part because consumers and vendors have very different perceptions of the security problem.4 This situation calls for creative marketing: one vendor stresses functionality in marketing operating system software for single-user systems and security in marketing essentially the same software for multiuser local area networked systems. A commonly reported problem is limited willingness of management to pay for security, although the rise in expectations following publicity over major computer crimes suggests that, at least at the technical level, consumers are ready for more security. From the consumer's perspective, it is easy to buy something that is cheap; buying something expensive requires risk assessment and an investment in persuading management of the need. Vendors observed that they hear about what consumers would like, but they do not hear consumers say that they will not buy products that lack certain security features.

Vendors differ in their attitudes toward the Orange Book as a stimulus to commercial product security. Some indicated that they saw the government as leading the market; others characterized the government as a force that motivates their customers but not them directly. Vendors familiar with the Orange Book find it offers little comfort in marketing. For example, one customer told a sales representative that he did not need the capabilities required by the Orange Book and then proceeded to list, in his own words, requirements for mandatory access control and complete auditing safeguards, which are covered extensively in the Orange Book. Overall, vendors maintained that the Orange Book has had limited appeal outside the government contracting market, in part because it is associated with the military and in part because it adds yet more jargon to an already technically complex subject. This sentiment echoes the findings of another study that gathered inputs from vendors (AFCEA, 1989). Vendors also indicated that marketing a product developed in the Orange Book environment to commercial clients required special tactics, extra work that most have been reluctant to undertake.

Vendors also complained that it is risky to develop products intended for government evaluation (associated with the Orange Book) because the evaluation process itself is expensive for vendors—it takes time and money to supply necessary information—and because of uncertainty that the desired rating will be awarded. Time is a key concern in the relatively fast-paced computer system market, and vendors complain about both the time to complete an evaluation and the timing of the evaluation relative to the product cycle. The vendor's product cycle is driven by many factors—competition, market demands for functionality, development costs, and compatibility and synchrony with other products—of which security is just one, and one that is sometimes perceived as having a negative impact on some of the others. While vendors may have a product development-to-release cycle that takes about three to six years, evaluations have tended to come late in the product cycle, often resulting in the issuing of ratings after a product has been superseded by newer technology.

The time to complete an evaluation has been a function of National Computer Security Center (NCSC) resources and practice. NCSC's schedule has been driven by its emphasis on security, the perceived needs of its principal clients in the national security community, and the (limited) availability of evaluation staff. By 1990, NCSC was completing evaluations at a rate of about five per year, although the shift from evaluating primarily C-level systems to primarily B-level systems was expected to extend the time required per evaluation (Anthes, 1989d; committee briefing by NSA). The time involved reflects the quality of the evaluation resources: individuals assigned to do evaluations have often had limited, if any, experience in developing or analyzing complex systems, a situation that extends the time needed to complete an evaluation; both vendors and NCSC management have recognized this. Further, as a member of the NCSC staff observed to the committee, "We don't speed things up." As of late October 1990, 1 system had obtained an A1 rating, none had been rated B3, 2 had been rated B2, 3 had been rated B1, 13 had been rated C2, and 1 had been rated C1 (personal communication, NSA, October 26, 1990). Prospects for future evaluations are uncertain, in view of the recent reorganization of the NCSC (see Chapter 7).

Vendors have little incentive to produce ratable systems when the absence of rated products has not detectably impaired sales. Customers, even government agencies that nominally require rated products, tend to buy whatever is available, functionally desirable, and/or compatible with previously purchased technology. Customer willingness to buy unrated products that come only with vendor claims about their security properties suggests possibilities for false advertising and other risks to consumers.

Consider the multilevel secure database management system released by Sybase in February 1990 (Danca, 1990a). The Secure Server, as it is called, was designed and developed to meet B1-level requirements for mandatory access control as defined in the Orange Book. Development of that product began in 1985, with the initial operational (Beta) release in the spring of 1989. The Air Force adopted the Secure Server in its next version of the Global Decision Support System (GDSS), which is used by the Military Airlift Command to monitor and control worldwide airlift capabilities. However, at the time of its release, the Secure Server had not been evaluated against the Orange Book criteria because the relevant criteria, contained in the Trusted Database Interpretation (TDI), were still being reviewed. Although the TDI is expected to be released in late 1990 or early 1991, it will be at least six months (and probably nine) before any official opinion is rendered by NCSC. In short, Sybase will be marketing a secure product that took five years to develop, and the Air Force will be using that product for a full year before any evaluation information is released. Both the vendor and its customers have proceeded with some degree of risk.

FEDERAL GOVERNMENT INFLUENCE ON THE MARKET

The federal government has tried to influence commercial-grade computer security through direct procurement, research support, and regulatory requirements placed on the handling of data in the private sector. That influence has been realized both directly through government actions (e.g., procurement and investment in research) and indirectly through regulations and policies that provide incentives or disincentives in the marketplace.5 The influence of the Orange Book is discussed in Chapters 2 to 5 and in Appendix A. Procurement and strategic research programs are discussed briefly below.

Procurement

The U.S. government has tried to suggest that a strong government and commercial market would exist for security products were such products available (EIA, 1987). Industry is skeptical of such promises, arguing that the government does not follow through in its procurement (AFCEA, 1989), even after sponsoring the development of special projects for military-critical technology. However, one step the government has taken that has apparently stimulated the market is known as "C2 by '92." A directive (NTISSP No. 200, issued on July 15, 1987) of the National Telecommunications and Information Systems Security Committee (NTISSC), the body that develops and issues national system security operating policies, required federal agencies and their contractors to install by 1992 discretionary access control and auditing at the Orange Book C2 level in multiuser computer systems containing classified or unclassified but sensitive information. This directive is widely believed to have stimulated the production of C2-level systems. However, its impact in the future is in question, given the divergence in programs for protecting classified and sensitive but unclassified information that has been reinforced by the Computer Security Act of 1987 and the revision of National Security Decision Directive 145 (see Chapter 7). The Computer Security Act itself has the potential for increasing the demand for trusted systems, but the security assessment and planning process it triggered fell short of expectations (GAO, 1990c).

Concern for security is not a consistent factor in government procurements. A small sample, compiled by the committee, of 30 recent (1989) requests for proposal (RFPs), 10 of which were issued by DOD organizations and 20 of which were issued by civil agencies, presents a picture of uneven concern for security: five RFPs had no stated security requirements. Five DOD and eight civil agency RFPs specified adherence to standards defined by the NCSC and the National Institute of Standards and Technology (NIST), although three of the DOD RFPs did not specify an Orange Book level. Two DOD and three civil agency RFPs indicated that unclassified but protectable data would be handled. None of the DOD RFPs specified encryption requirements; three civil agency RFPs required Data Encryption Standard (DES) encryption, and one required NSA-approved encryption technology. Access control features were required by 13 RFPs. Auditing features were required by six.

The procurement process itself provides vehicles for weakening the demand for security. Vendors occasionally challenge (through mechanisms for comment within the procurement process) strong security requirements in RFPs, on the grounds that such requirements limit competition. For example, a C2 requirement for personal computers was dropped from an RFP from the Air Force Computer Acquisition Command (AFCAC) because conforming systems were not available (Poos, 1990). Budgetary pressures may also contribute to weakening security requirements. Such pressures may, for example, result in the inclusion of security technology as a non-evaluated option, rather than as a requirement, leading to a vendor perception that the organization is only paying lip service to the need for security.

Interestingly, DOD itself is exploring novel ways to use the procurement process to stimulate the market beyond the Orange Book and military standards. In 1989 it launched the Protection of Logistics Unclassified/Sensitive Systems (PLUS) program to promote standards for secure data processing and data exchange among DOD and its suppliers. PLUS complements other DOD efforts to automate procurement procedures, such as the electronic data interchange and Computer-aided Acquisition and Logistics Support (CALS) programs (Kass, 1990). A subsidiary goal of PLUS is cheaper commercial security products (personal communication with PLUS staff).

Strategic Federal Investments in Research and Development

The government, especially through DARPA funding, has contributed to computer technology through large-scale strategic research and development programs that supported the creation or enhancement of facilities such as the (recently decommissioned) Arpanet network serving researchers, Multics and ADEPT 50 (operating systems with security features), MACH (an extension of the Unix operating system that fully integrates network capabilities and that has been championed by the industry consortium Open Software Foundation), and the Connection Machine (an advanced parallel processor). Each of these projects—all sponsored by DARPA—has moved the market into areas that are beneficial to both government and commercial computer users. The Arpanet and Multics experiences illustrate how very large scale, multifaceted, systems-oriented projects can catalyze substantial technological advances, expand the level of expertise in the research community, and spin off developments in a number of areas. Scale, complexity, and systems orientation are particularly important for progress in the computer and communications security arena, and the government is the largest supporter of these projects. Historically, security has been a secondary concern in such projects, although it is gaining more attention now. The widespread impact of these projects suggests that similar initiatives emphasizing security could pay off handsomely.

In the security field specifically, projects such as Multics and ADEPT 50 (which provided strong access control mechanisms), LOCK (hardware-based integrity and assurance), SeaView (a secure database management system), TMACH (a trusted or secure version of MACH), and the CCEP (Commercial COMSEC Endorsement Program for commercially produced encryption products) are intended to stimulate the market to develop enhanced security capabilities by reducing some of the development risks. The LOCK program, for example, was designed to make full documentation and background material available to major vendors so that they might profit from the LOCK experience; similar benefits are expected from the TMACH development program.

Another example is NSA's STU-III telephone project, which involved vendors in the design process. Five prospective vendors competed to develop designs; three went on to develop products. The interval from contract award to commercial product was less than three years, although years of research and development were necessary beforehand. The STU-III has decreased the price of secure voice and data communications from over $10,000 per unit to about $2,000 per unit, pleasing both government consumers and the commercial vendors. Moreover, in 1990 the DOD purchased several thousand STU-III terminals for use not only in DOD facilities but also for loan to qualified defense contractors; these firms will receive the majority of the purchased units. This program will help to overcome one obvious disincentive for commercial acquisition: to be of use, not only the party originating a call but also the receiver must have a STU-III.

For national security reasons, programs that are sponsored by NSA confine direct technology transfer to companies with U.S. majority ownership, thereby excluding companies with foreign ownership, control, or influence (FOCI). While the United States has legitimate national interests in maintaining technological advantage, the increasingly international nature of the computer business makes it difficult to even identify what is a U.S. company, much less target incentives (NRC, 1990). Another factor to consider in the realm of strategic research and development is the fact that, consistent with its primary mission, NSA's projects are relatively closed, whereas an agency like DARPA can more aggressively reach out to the computer science and technology community.

The proposed federal high-performance computing program (OSTP, 1989) could provide a vehicle for strategic research investment in system security technology; indeed, security is cited as a consideration in developing the component National Research and Education Network—and security would clearly be important to the success of the network. Agencies involved in generating technology through this program include DOD (with responsibility concentrated in DARPA), the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the Department of Energy (DOE), and NIST. However, funding uncertainty and delays associated with the high-performance computing program suggest both that security aspects could be compromised and that additional, more modest technology development projects that promote secure system development may be more feasible. Certainly, such projects would have substantial benefits in terms of advancing and commercializing trust technology. Other government-backed research programs that focus on physical, natural, or biomedical sciences (e.g., the anticipated database for the mapping and sequencing of the human genome, or remote-access earth sciences facilities) also have security considerations that could provide useful testbeds for innovative approaches or demonstrations of known technology.

Export Controls as a Market Inhibitor

Vendors maintain that controls on exports inhibit the development of improved commercial computer and communications security products. Controls on the export of commercial computer security technology raise questions about the kind of technology transfer that should be controlled (and why), whether security technologies aimed at the civilian market should be considered to have military relevance (dual use), whether control should continue under the provisions aimed at munitions, and other considerations that affect how commercial and military perspectives should be weighed and balanced for these technologies. An overview of the export control process is provided in Chapter Appendix 6.1. The challenge for policymakers is to balance national security and economic security interests in drawing the line between technology that should be controlled because it compromises national security (in this case by hampering intelligence gathering by government entities) and technology that need not be, and in allowing that line to move over time.6

The committee considered controls on the export of trusted systems and on the export of commercial-grade cryptographic products. The current rules constraining the export of trusted (and cryptographic) systems were developed at a time when the U.S. position in this area of technology was predominant. As in other areas of technology, that position has changed, and it is time to review the nature of the controls and their application, to assure that whatever controls are in place balance all U.S. interests and thereby support national security in the fullest sense over the long term. The emergence of foreign criteria and evaluation schemes (see "Comparing National Criteria Sets" in Chapter 5) makes reconsideration of export controls on trusted systems especially timely.

Balancing the possible temporary military benefit against the long-run interests of both national security applications and commercial viability, the committee concludes that Orange Book ratings, per se, do not signify military-critical technology, even at the B3 and A1 levels. Of course, specific implementations of B3 and A1 systems may involve technology (e.g., certain forms of encryption) that does raise national security concerns, but such technology is not necessary for achieving those ratings. NSA officials who briefed the committee offered support for that conclusion, which is also supported by the fact that the criteria for achieving Orange Book ratings are published information. The committee urges clarifying just what aspects of a trusted system are to be controlled, independent of Orange Book levels, and targeting more precisely the technology that it is essential to control. It also urges reexamination of controls on implementations of the Data Encryption Standard (DES), which also derive from published information (the standard; NBS, 1977). Issues in both of these areas are discussed below.

Technology Transfer: Rationale for Controlling Security Exports

Currently, the military and intelligence communities provide the largest concentration of effort, expertise, and resources allocated to ensuring information security. Devoted to countering threats not likely to be experienced by industry, much of this effort and expertise gives rise to special, often classified, products that are not and should not be commercially available. However, a strong commercial security effort would make it possible for the defense sector to concentrate its development resources on military-critical technology. Then the flow of technology for dual-use systems could be substantially reversed, thus lessening concerns about the export of vital military technology.

Exports of dual-use computer technologies are controlled largely for defensive reasons, since those technologies can be used against U.S. national security—to design, build, or implement weaponry or military operations, for example. Computer security presents offensive and defensive concerns. Adversaries' uses of computer security technologies can hamper U.S. intelligence gathering for national security purposes (OTA, 1987b). As a result, DOD seeks to review sophisticated new technologies and products, to prevent potential adversaries of the United States from acquiring new capabilities, whether or not the DOD itself intends to use them. Another concern is that international availability exposes the technology to broader scrutiny, especially by potential adversaries, and thus increases the possibility of compromise of safeguards.

The need to minimize exposure of critical technology implies that certain military-critical computer security needs will continue to be met through separate rather than dual-use technology (see Appendix E, "High-grade Threats"). As noted in this report's "Overview" (Chapter 1), national security dictates that key insights not be shared openly, even though such secrecy may handicap the development process (see "Programming Methodology," Chapter 4). To maintain superiority, the government will always restrict the export of such technology. Thus the discussion in this chapter focuses on dual-use technology.

Export Control of Cryptographic Systems and Components

Historically, because of the importance of encryption to intelligence operations and the importance of secrecy to maintaining the effectiveness of a given encryption scheme, cryptographic algorithms and their implementations could not be exported at all, even to other countries that participate in the Coordinating Committee on Multilateral Export Controls (CoCom).

Restrictions on exports of DES have been contested by industry because of the growing use of DES. The restrictions were recently relaxed somewhat, allowing for export of confidentiality applications under the International Traffic in Arms Regulations (ITAR; Office of the Federal Register, 1990) to financial institutions or U.S.-company subsidiaries overseas. DES may also be exported for data integrity applications (NIST, 1990b). That is, DES may be used to compute integrity checks for information but may not be used to encrypt the information itself. Private (vendor-specific) algorithms are generally approved for export following review by NSA (although that review may result in changes in the algorithm to permit export). The Department of Commerce reviews export licenses for DES and other cryptographic products intended for authentication, access control, protection of proprietary software, and automatic teller devices.
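
The distinction can be made concrete. An integrity application sends the message in the clear, accompanied by a keyed check value; tampering invalidates the check, but anyone can read the message. A confidentiality application would encrypt the message itself. The sketch below illustrates the integrity case using HMAC-SHA-256 from the Python standard library as a stand-in for a DES-based message authentication code (the construction differs, but the role is the same).

    import hashlib
    import hmac

    key = b"shared secret key"         # in practice, a managed symmetric key
    message = b"PAY $100 TO ACCT 42"   # transmitted in the clear

    # Integrity only: compute a keyed check value over the plaintext message.
    check_value = hmac.new(key, message, hashlib.sha256).hexdigest()

    def verify(key, message, received_check):
        """Recompute the check value and compare in constant time."""
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_check)

    print(verify(key, message, check_value))                 # True
    print(verify(key, b"PAY $999 TO ACCT 13", check_value))  # False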

Because of current controls, computer-based products aimed at the commercial market that incorporate encryption capabilities for confidentiality can be exported only for limited specific uses. (Ironically, encryption may even be unavailable as a method to assure safe delivery of other controlled products, including security products.) Affected products include dBASE IV and other systems (including PC-oriented systems) with message and file security features. However, anecdotal evidence suggests that the regulations may not be applied consistently, making it difficult to assess their impact.

In some cases, the missing or disabled encryption function can be replaced with a local product; indigenous DES implementations are available overseas. The local product may involve a different, locally developed algorithm. It is not clear, however, that modular replacement of encryption units will always be possible. The movement from auxiliary black-box units to integral systems suggests that it will become less feasible, and there is some question about whether modular replacement violates the spirit if not the letter of existing controls, which may discourage some vendors from even attempting this option. Vendors are most troubled by the prospect that the growing integration of encryption into general-purpose computing technology threatens the large export market for computer technology at a time when some 50 percent or more of vendors' revenues may come from overseas.
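
A rough illustration of why black-box designs permit such substitution: when the host system reaches its encryption unit only through a narrow interface, a locally supplied implementation can be dropped in without touching the rest of the code, whereas encryption woven throughout an integrated system cannot be excised so cleanly. The interface below is hypothetical, and the toy cipher is deliberately trivial.

    class Cipher:
        """The narrow interface a host system might assume of its encryption unit."""
        def encrypt(self, plaintext: bytes) -> bytes:
            raise NotImplementedError
        def decrypt(self, ciphertext: bytes) -> bytes:
            raise NotImplementedError

    class ToyXORCipher(Cipher):
        """Stand-in for a locally supplied module (illustrative only, not secure)."""
        def __init__(self, key: bytes):
            self.key = key
        def _xor(self, data: bytes) -> bytes:
            return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))
        def encrypt(self, plaintext: bytes) -> bytes:
            return self._xor(plaintext)
        def decrypt(self, ciphertext: bytes) -> bytes:
            return self._xor(ciphertext)

    def transmit(cipher: Cipher, message: bytes) -> bytes:
        # The host depends only on the Cipher interface, so the unit
        # can be replaced by a local implementation without changes here.
        return cipher.encrypt(message)

    unit = ToyXORCipher(b"key")
    print(unit.decrypt(transmit(unit, b"quarterly figures")))  # b'quarterly figures'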

Much of the debate that led to the relaxation of export restrictions for DES centered on the fact that the design of DES is widely known, having been published for many years. Similarly, the RSA public-key algorithm (see "Selected Topics in Computer Security Technology," Appendix B) is well known and is, in fact, not patented outside the United States—because the basic principles were first published in an academic journal (Rivest et al., 1978). Consequently, there are implementations of DES and RSA that have been developed outside the United States and, as such, are not bound by U.S. restrictions.7 However, they may be subject to foreign export control regimes. With U.S. vendors enjoined from selling DES abroad, foreign consumers and, more importantly, large multinational consumers will simply purchase equivalent systems from foreign manufacturers.

Recognizing the demand for a freely exportable confidentiality algorithm, NIST, in consultation with NSA, has announced plans to develop and certify a new algorithm for protecting sensitive but unclassified information, possibly drawing on a published public-key system. A joint NIST-NSA committee is working to develop a set of four cryptographic algorithms for use in the commercial environment. One algorithm would provide confidentiality and thus is a DES substitute. A public-key distribution algorithm would be used to distribute the keys used by the first algorithm. The last two algorithms would be used to provide digital signatures for messages: one would compute a one-way hash on a message and the other would digitally sign the hash. All of the algorithms would, by design, be exportable, thus addressing a major complaint about DES. However, this process has been delayed, apparently because of NSA's discomfort with NIST's reported preference for using RSA, which it perceives as almost a de facto standard (Zachary, 1990).
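
The division of labor between the last two algorithms, hash then sign, is the standard digital signature pattern: a one-way hash reduces an arbitrarily long message to a short digest, and only the digest is signed with the expensive public-key operation. The sketch below illustrates the pattern with SHA-256 and textbook RSA using tiny, insecure parameters; it follows the published RSA scheme (Rivest et al., 1978), not the undisclosed algorithms under NIST-NSA development.

    import hashlib

    # Textbook RSA with toy parameters (p=61, q=53); real moduli are far larger.
    n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent

    def sign(message: bytes) -> int:
        # Step 1: one-way hash of the message (reduced mod n only because
        # the toy modulus is much smaller than the digest).
        digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        # Step 2: sign the digest with the private exponent.
        return pow(digest, d, n)

    def verify(message: bytes, signature: int) -> bool:
        digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(signature, e, n) == digest

    sig = sign(b"release the funds")
    print(verify(b"release the funds", sig))   # True
    print(verify(b"release more funds", sig))  # False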

The announced development of one or more exportable algorithms has not satisfied vendors, who note that overseas competitors can offer local implementations of DES, which has become widely recognized as a standard. By contrast, the new algorithm, while promised to be at least as good as DES, may be difficult to sell as it will be incompatible with DES implementations in use and may be tainted as U.S.-government-developed. Under the circumstances, if national security objections to free DES export continue, they should at the least be explained to industry. Also, independent expert review of the new algorithm is desirable to elevate confidence to the level that DES has attained. Note that there are other (non-DES) commercially developed encryption algorithms that are licensed for export by the Department of State. The United States is typically involved in their development, and some 98 percent of the products implementing these algorithms are approved for export (committee briefing by NSA).

Export Control of Trusted Systems

Trusted systems that have been evaluated at the Orange Book's levels B3 and above are subject to a case-by-case review, whether or not they incorporate cryptography or other technologies deemed military-critical.8 That is, the government must approve the export of a given system to a given customer for a given application if it is, or could be, rated as B3 or above; products with lower ratings are not regarded as military-critical technology. The same rules extend to documentation and analysis (e.g., for a technical conference or journal) of affected products. An average of 15 such license applications per year (covering five to seven items) have been reviewed over the past three years, and all have been granted.9 About half have involved U.S. vendors providing technical data to their subsidiaries. In the case of software verification tools, which are used to develop trusted systems, there is the added requirement that informal intergovernmental agreements exist to monitor the tools' installation and operation. This is somewhat less restrictive than the treatment for supercomputers.

Note that in some respects trusted systems technology is very difficult to control because it depends heavily on software, which is relatively easy to copy and transport (NRC, 1988a). As a result, such technology can never be the only line of defense for protection of sensitive information and systems.

The Commercial Imperative

Because of the national security interests that dominate the ITAR, the current export control regime for high-level trusted systems and for most encryption products does not contain mechanisms for addressing vendor concerns about competitiveness. By contrast, commercial competitiveness concerns affect both the evolution of the Control List (CL) and the Commodity Control List (CCL) associated with the Export Administration Regulations (see Chapter Appendix 6.1) and the periodic reviews of dual-use technologies by the United States and other participants in CoCom. Under the terms of the Export Administration Act (50 U.S.C. APP. §§ 2401–2420, as amended), foreign availability may also justify the relaxation of controls for particular products, as it did for AT-class PCs in July 1989. Foreign availability is not, however, a factor in administering controls on military-critical technologies under the ITAR.

The discussions of controls on dual-use technology exports in general draw on a broader range of perspectives than do the discussions of technologies controlled under the ITAR, in part because there is generally no argument over whether a product is a munition or of fundamentally military value. As a result there is at least the potential for a greater balancing of policy interests in the making of control decisions affecting non-ITAR technologies. The complaints from industry surrounding controls on the export of DES and RSA, algorithms for encryption that fall in part under ITAR rules, signal a larger problem developing for exports of security technology. In today's global market for computer technology, commercial product line development, production economics, and competitive strategy lead producers to want to market products worldwide. Major vendors generally derive a large share of their business (often 50 percent or higher) from outside the United States.

Industry has four key concerns: First, every sale is important for profitability in a small market, such as the current market for security-rated systems. This means that both actual disapproval of a given sale and the delay and uncertainty associated with the approval process are costly to vendors. (Supercomputers are an extreme case of this problem.) Second, the principal commercial customers today for trusted systems (and commercial-grade encryption) are multinational corporations. This means that if they cannot use a product in all of their locations around the world, they may not buy from a U.S. vendor even for their U.S. sites. Third, U.S. vendors have seen the beginnings of foreign competition in trust technology, competition that is being nurtured by foreign governments that have launched their own criteria and evaluation schemes to stimulate local industry (see "Comparing National Criteria Sets" in Chapter 5). These efforts may alter the terms of competition for U.S. vendors, stimulate new directions in international standards, and affect vendor decisions on where as well as in what to invest. Fourth, as security (and safety) technology becomes increasingly embedded in complex systems, system technology and users will come to depend on trust technology, and it will become more difficult to excise or modify in systems that are exportable. This last problem has been cited by vendors as a source of special concern; a related concern is providing interoperability if different standards are used in different countries or regions.

The real difficulty arises if a vendor considers building security into a "mainstream" commercial product. In that event, the system's level of security, rather than its processing power, becomes its dominant attribute for determining exportability. A computer system that would export [sic] under a Commerce Department license with no delay or advance processing would become subject to the full State Department munitions licensing process. No vendor will consider subjecting a mainstream commercial product to such restrictions.10

The push by industry for expanded export flexibility for security-rated systems and low-grade encryption units highlights the tension between government encouragement of the supply of computer security technology, notably through the Orange Book evaluation of commercial products, and potential government restriction of the market for security products through export controls. The presence of an export control review threshold at B3, affecting B3 and A1 systems intended for other CoCom countries, has discouraged the enhancement of systems to these levels, for fear of making products more difficult, if not impossible, to export.

Since other factors, such as high development costs and softness of perceived demand, discourage development of highly rated systems, it is difficult to quantify the disincentive arising from export controls. However, the very real pressure to export DES and RSA does provide evidence of a developing international market for security technology beyond what may currently be exported. Those and similar or successor technologies are not the technologies that are used for defense purposes, and it may be time to endorse a national policy that separates but mutually respects both national security and commercial interests. Those interests may overlap in the long run: as long as policy encourages use of commercial off-the-shelf technology, a strong commercial technology base is essential for feeding military needs. Even specifically military systems profit from commercial experience. And the strength of the commercial technology base today depends on the breadth of the market, which has become thoroughly international.

CONSUMER AWARENESS

Even the best product will not be sold if the consumer does not see a need for it. Consumer awareness and willingness to pay are limited because people simply do not know enough about the likelihood or the consequences of attacks on computer systems or about more benign factors that can result in system failure or compromise.11 Consumer appreciation of system quality focuses on features that affect normal operations—speed, ease of use, functionality, and so on. This situation feeds a market for inappropriate or incomplete security solutions, such as antiviral software that is effective only against certain viruses but may be believed to provide broader protection, or password identification systems that are easily subverted in ordinary use.12

Further militating against consumer interest in newer, technical vulnerabilities and threats is the experience of most organizations with relatively unsophisticated abuses by individuals authorized to access a given system (often insiders), abuses that happen to have involved computers but that need not have. The bread-and-butter work of the corporate computer security investigator is mostly devoted to worrying about such incidents as the following:

  1. Two members of management extract valuable proprietary data from a company's computer and attempt to sell the data to a competitor;

  2. An employee of company A, working on a contract for company B, uses a computer of company B to send a bomb threat to company C;

  3. An employee copies a backup tape containing confidential personnel information, which he then reveals to his friends;

  4. An employee uses his access to company billing information on a computer to reduce the bills of certain customers, for which service he collects a fee; and

  5. An employee uses company computer facilities to help him arrange illegal narcotics transactions.

All five of the above incidents are typical in a particular sense. In none of them did any single computer action of the perpetrator, as a computer action, extend beyond the person's legitimate authority to access, modify, transmit, and print data. There was no problem of password integrity, for example, or unauthorized access to data, or Trojan horses. Rather, it was the pattern of actions, their intent, and their cumulative effect that constituted the abuse.

The kinds of incidents listed above consume most of the security officer's time and shape his priorities for effective countermeasures. What the corporate computer and communications security specialist is most likely to want, beyond what he typically has, are better tools for monitoring and auditing the effects of collections of actions by authorized users: detailed logs, good monitoring tools, well-designed audit trails, and the easy ability to select and summarize from these in various ways depending on the circumstances he is facing.13 This history in large measure accounts for the relatively low interest in the commercial sector in many of the security measures discussed in this report. Nevertheless, even attention to administrative and management controls, discussed in Chapter 2, is less than it could or should be.
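
A hint of what selecting and summarizing from an audit trail means in practice: the investigator's question is rarely about a single event, each of which may be individually authorized, but about a pattern across many events. A minimal sketch, with a hypothetical log format:

    from collections import Counter
    from datetime import datetime

    # Hypothetical audit records: (ISO timestamp, user, resource, operation).
    audit_log = [
        ("1990-06-01T02:14:00", "carol", "BILLING.ACCOUNTS", "update"),
        ("1990-06-01T09:30:00", "dave",  "BILLING.ACCOUNTS", "read"),
        ("1990-06-02T03:02:00", "carol", "BILLING.ACCOUNTS", "update"),
        ("1990-06-03T01:47:00", "carol", "BILLING.ACCOUNTS", "update"),
    ]

    def after_hours_updates(log, resource):
        """Count per-user updates to a resource outside 08:00-18:00."""
        counts = Counter()
        for stamp, user, res, op in log:
            hour = datetime.fromisoformat(stamp).hour
            if res == resource and op == "update" and not 8 <= hour < 18:
                counts[user] += 1
        return counts

    # Every update was authorized on its own; only the pattern is suspicious.
    print(after_hours_updates(audit_log, "BILLING.ACCOUNTS"))  # Counter({'carol': 3})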

Enhancing security requires changes in attitudes and behavior that are difficult to bring about because most people consider computer security to be abstract and concerned with hypothetical rather than likely events. Very few individuals not professionally concerned with security, from top management through the lowest-level employee, have ever been directly involved in or affected by a computer security incident. Such incidents are reported infrequently, and then often in specialized media, and they are comprehensible only in the broadest outline. Further, most people have difficulty relating to the intricacies of malicious computer actions. Yet it is understood that installing computer security safeguards has negative aspects such as added cost, diminished performance (e.g., slower response times), inconvenience in use, and the awkwardness of monitoring and enforcement, not to mention objections from the work force to any of the above. The Internet worm experience showed that even individuals and organizations that understand the threats may not act to protect against them.

The sensational treatment of computer crimes in the press and in movies about computer hijacks may obscure the growing role of computer technology in accomplishing more traditional and familiar crimes (e.g., fraud and embezzlement). In the public's eye, computer crimes are perpetrated by overzealous whiz-kids or spies, not disgruntled employees or professional criminals; prosecutors also complain that the media portray perpetrators as smarter than investigators and prosecutors (comments of federal prosecutor William Cook at the 1989 National Computer Security Conference). Public skepticism may be reinforced when, as in the case of recent investigations of the Legion of Doom and other alleged system abusers (Shatz, 1990), questions are raised about violation of First Amendment rights and the propriety of search and seizure techniques—issues of longstanding popular concern.14

Inevitably, resources are invested in safeguards only when there is a net payoff as measured against goals of the organization—whether such goals are chosen or imposed. It is notable that the banking industry's protection of computer and communications systems was stimulated by law and regulation. In the communications industry, lost revenues (e.g., through piracy of services) have been a major spur to tightening security.

Insurance as a Market Lever

Insurance can offset the financial costs of a computer-related mishap. The development of the commercial market for computer insurance (described in Chapter Appendix 6.2) provides a window into the problems of achieving greater awareness and market response.15

The market for insurance against computer problems has grown slowly. Insurance industry representatives attribute the slow growth to low levels of awareness and concern on the part of organizations and individuals, plus uneven appreciation of the issues within the insurance industry, where underwriters and investigators may not fully understand the nature of the technology and its implications as used.16 Insurance industry representatives also point to the reluctance of victims of computer mishaps to make their experiences public, even at the expense of not collecting on insurance.

The process of determining whether coverage will be offered involves assessing the controls provided by a prospect. Somewhat like auditors, underwriters and carriers evaluate security-related safeguards in place by focusing on physical and operational elements. There is a concern for the whole control environment, including directly relevant controls and controls for other risks, which may indicate how well new risks may be controlled.

To the extent that premiums reflect preventive measures by an organization (e.g., off-site periodic backup copies of data, high-quality door locks, 24-hour guard coverage, and sprinkler or other fire control systems), insurance is a financial lever to encourage sound security, just as the Foreign Corrupt Practices Act (P.L. 95-213) and a variety of accounting principles and standards have encouraged stronger management controls in general (and, in some instances, stronger information security in particular (Snyders, 1983)).

Education and Incident Tracking for Security Awareness

If some of the problems in the secure system marketplace are due to lack of awareness among consumers, options for raising consumer awareness of threats, vulnerabilities, and safeguards are obviously attractive. Two options are raised here as concepts—education and incident reporting and tracking. The committee's recommendation that incident tracking be undertaken by a new organization is discussed in Chapter 7.

Education

Society has often regulated itself by promoting certain behaviors, for example, taking care of library books. Societal care-taking norms must now be extended to information in electronic form and associated systems. The committee believes that elements of responsible use should be taught along with the basics of how to use computer and communication systems, much as people learn how to be responsible users of libraries. Building concern about security and responsible use into computing and general curricula (where computers are used) may be more constructive in the long run than focusing efforts on separate and isolated ethics units. This is not to discourage the many recent efforts among computer-related professional societies, schools, and companies to strengthen and discuss codes of ethics.17 However, today much of the security training is funded by commercial companies and their employee students; that training, in turn, is focused on security officers and not end users. The committee underscores that the process must be one of persuading, leading, and educating and, when possible, of making the unacceptability of leaving computer systems unprotected outweigh the cost of taking appropriate action.

Incident Reporting and Tracking

More extensive and systematic reporting and tracking of security and other system problems could help to persuade decisionmakers of the value of security measures and policymakers of the related risks. For example, investigation and prosecution of computer crimes have proceeded slowly because of the uneven understanding within the legal community of the criminal potential as well as the relatively high costs involved in computer crimes (Conly, 1989; U.S. DOJ, 1989). At this time there is little statistical or organized knowledge about vulnerabilities, threats, risks, and failures. (Neumann and Parker (1989) represent one attempt to characterize vulnerabilities.) What is known about security breaches is largely anecdotal, as many security events happen off the record; one source of such information within the computer science and engineering community is the electronic forum or digest known as RISKS.18 Estimates of aggregate losses vary widely, ranging from millions to billions of dollars, and estimates cited frequently in news reports are challenged by prosecutors (comments of federal prosecutor William Cook at the 1989 National Computer Security Conference). The European Community has begun to develop computer incident tracking capabilities; the British and the French both have new programs (Prefontaine, 1990). A reliable body of information could be used to make the public and the government more aware of the risks.

A means is needed for gathering information about incidents, vulnerabilities, and so forth in a controlled manner, whereby information would actually be available to those who need it—vendors, users, investigators, prosecutors, and researchers. There are a number of implementation issues that would have to be addressed, such as provision for a need-to-know compartment for unclassified information that is considered sensitive because of the potential implications of its widespread dissemination. It would also be necessary to couple reports with the caveat that yesterday's mode of attack may not necessarily be tomorrow's. The incident-reporting system associated with the National Transportation Safety Board illustrates one approach to data collection (although the handling, storage, and retrieval of the data are likely to be different—computer incident data are much more likely than transportation data to be exploited for copy-cat or derivative attacks).
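A minimal sketch of how such a controlled repository might tag and filter incident reports appears below. The sensitivity tiers, field names, and need-to-know groups are hypothetical, invented for illustration rather than drawn from any existing system.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical record layout for a controlled incident repository;
       the sensitivity tiers and group names are illustrative assumptions. */
    enum sensitivity { PUBLIC, SENSITIVE, COMPARTMENTED };

    struct incident {
        const char *summary;
        enum sensitivity level;
        const char *compartment;   /* need-to-know group, if any */
    };

    /* Release a report only to requesters cleared for its tier and, for
       compartmented data, enrolled in the matching need-to-know group. */
    int releasable(const struct incident *inc,
                   enum sensitivity clearance, const char *group)
    {
        if (inc->level > clearance)
            return 0;
        if (inc->level == COMPARTMENTED &&
            (group == NULL || strcmp(group, inc->compartment) != 0))
            return 0;
        return 1;
    }

    int main(void)
    {
        struct incident rpt = { "password-guessing attack on host X",
                                COMPARTMENTED, "vendor-response" };
        printf("%d\n", releasable(&rpt, COMPARTMENTED, "vendor-response")); /* 1 */
        printf("%d\n", releasable(&rpt, SENSITIVE, NULL));                  /* 0 */
        return 0;
    }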

Given the volume of transactions and activity that has occurred in the information systems of the private sector and occurs there each day, and given the decade or so during which numerous computer mishaps, intentional and accidental, have been documented and recorded, the validated evidence that has been accumulated remains minuscule by comparison to that of criminal incidents or accidents in other areas of business risk, for example, fire, embezzlement, and theft. This situation may reflect a relatively low incidence of problems to date, but there is strong evidence that incidents are significantly underreported.19 The effort begun by the DARPA Computer Emergency Response Team to develop a mechanism to track the emergency incidents to which it responds, and related plans at NIST, are a step in the right direction that could provide the impetus for a more comprehensive effort.20 Such an effort is discussed in Chapter 7.

Technical Tools to Compensate for Limited Consumer Awareness

Limited awareness of security needs or hazards can be offset in part by technical tools. Properly designed technical solutions may serve to reinforce safe behavior in a nonthreatening way, with little or no infringement of personal privacy or convenience. Impersonal, even-handed technical solutions may well be better received than nontechnical administrative enforcement. The key is to build in protections that preserve an organization's assets with the minimum possible infringement on personal privacy, convenience, and ease of use. As an explicit example, consider the ubiquitous password as a personal-identification safeguard. In response to complaints about forgetting passwords and about requirements to change them periodically, automated on-line prompting procedures can be introduced: a question-and-response process can be automatically triggered by elapsed calendar time since the last password change, and automated screening can be provided to deter a user from selecting an easily guessed password. Concerted vendor action, perhaps aided by trade associations, and consumer demand may be needed to get such tools offered and supported routinely by vendors.
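The sketch below illustrates the two password tools just described, aging-triggered prompting and screening of weak choices. It is a minimal example under assumed policy parameters (the 90-day limit, the eight-character minimum, and the function names are ours, not a vendor's); production screening would also consult dictionaries of common passwords.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define MAX_PASSWORD_AGE (90L * 24 * 60 * 60)  /* 90 days, an assumed policy */

    /* Screen out ill-conceived choices: too short, or matching the user name. */
    int acceptable(const char *candidate, const char *username)
    {
        if (strlen(candidate) < 8)
            return 0;
        if (strcmp(candidate, username) == 0)
            return 0;
        return 1;
    }

    /* Trigger the change dialogue once the password has aged past policy. */
    int change_due(time_t last_changed)
    {
        return difftime(time(NULL), last_changed) > MAX_PASSWORD_AGE;
    }

    int main(void)
    {
        time_t last = time(NULL) - 100L * 24 * 60 * 60;  /* changed 100 days ago */
        if (change_due(last))
            printf("Password has expired; please choose a new one.\n");
        printf("'secret' acceptable for user jones? %d\n",
               acceptable("secret", "jones"));   /* 0: too short */
        return 0;
    }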

Some issues pertaining to the proper use of such automated tools call for sensitivity and informed decision making by management. One concern is the potential for loss of community responsibility. Individual users no longer have the motivation, nor in many cases even the capability, to monitor the state of their system. Just as depersonalized "renewed" cities of high-rises and doormen sacrifice the safety provided by observant neighbors in earlier, apparently chaotic, gossip-ridden, ethnic neighborhoods (Jacobs, 1972), so a system that relies on carefully administered access controls and firewalls sacrifices the social pressure and community alertness that prevented severe malfeasance in older nonsecure systems. A perpetrator in a tightly controlled system knows better whom to look out for than one in an open system. Furthermore, a tightly controlled system discourages, even punishes, the simple curiosity of ordinary users that can spot unusual acts. Wise management will avoid partitioning the community too finely lest the human component, on which all security ultimately rests, be lost. Simply put, technological tools are necessary but should not be overused.

REGULATION AS A MARKET INFLUENCE: PRODUCT QUALITY AND LIABILITY

Regulation is a policy tool that can compensate for consumer inability to understand a complex product on which much may depend. Relatively little about computer systems is now regulated, aside from physical aspects of hardware.21 Although software is a principal determinant of the trustworthiness of computer systems, software has generally not been subject to regulation. However, regulations such as those governing export of technology, the development of safety-critical systems (recently introduced in the United Kingdom), or the privacy of records about persons (as implemented in Scandinavia) do have an immediate bearing on computer security and assurance. The issue of privacy protection through regulation is discussed in Chapter 2, Appendix 2.1.

Like other industries, the computer industry is uncomfortable with regulation. Industry argues that regulations can discourage production, in part by making it more costly and financially risky. This is one of the criticisms directed against export controls. However, regulation can also open up markets, when market forces do not produce socially desirable outcomes, by requiring all manufacturers to provide capabilities that would otherwise be too risky for individual vendors to introduce. Vendors have often been put on an equal footing via regulation when public safety has been an issue (e.g., in the environmental, food, drug, and transportation arenas). In the market for trusted systems, the Orange Book and associated evaluations, playing the role of standards and certification, have helped to do the same—unfortunately, that market remains both small and uncertain.22 As suggested above in "A Soft Market," individual vendors find adding trust technology into their systems financially risky because consumers are unable to evaluate security and trust and are therefore unwilling to pay for these qualities.23

Although in the United States regulation is currently a policy option of last resort, growing recognition of the security and safety ramifications of computer systems will focus attention on the question of whether regulation of computer and communications software and system developers is needed or appropriate, at least in specific situations (for example, where lives are at risk). The issue has already been broached in a recent congressional committee report (Paul, 1989). Although full treatment of that question is outside the scope of this report, the committee felt it necessary to lay out some of the relevant issues as a reminder that sometimes last resorts are used, and to provide reinforcement for its belief that some incentives for making GSSP truly generally accepted would be of value.

Product Quality Regulations

System manufacturers generally have much greater technical expertise than system owners, who in acquiring and using a system must rely on the superior technical skill of the system vendor. The same observation, of course, applies to many regulated products on which the public depends, such as automobiles, pharmaceuticals, and transportation carriers. Similar motivations lie behind a variety of standards and certification programs, which may be either mandatory (effectively regulations) or voluntary (FTC, 1983). Whereas failure of an automobile can have severe, but localized, consequences, failure of an information system can adversely affect many users simultaneously—plus other individuals who may, for example, be connected to a given system or about whom information may be stored on a given system—and can even prevent efficient functioning of major societal institutions. This problem of interdependence was a concern in recent GAO inquiries into the security of government and financial systems (GAO, 1989e, 1990a,b). The widespread havoc that various computer viruses have wreaked amply demonstrates the damage that can occur when a weak spot in a single type of system is exploited. The accidental failure of an AT&T switching system, which blocked an estimated 40 million telephone calls over a nine-hour period on January 15, 1990, also illustrates the kind of disruption that is possible even under conditions of rigorous software and system testing. The public exposure and mutual interdependence of networked computer systems make trustworthiness as important for such systems as it is for systems where lives or large amounts of money are at stake, as in transportation or banking. Indeed, in settings as diverse as the testing of pharmaceuticals, the design of automobiles, or the creation of spreadsheet programs, results from programs and computers that are not directly involved in critical applications ultimately wind up in just such applications.

Goods and services that impinge on public health and safety have historically been regulated. Moreover, the direct risk to human life is a stronger and historically more successful motivation for regulation than the risk to economic well-being, except in the case of a few key industries (e.g., banks and insurance carriers). This situation suggests that regulation of safety aspects of computers, a process that has begun in the United Kingdom (U.K. Ministry of Defence, 1989a,b), has the best chance for success, especially with safety-critical industries such as medical devices and health care, or even transportation. It also suggests that the case for security-related regulation will be strongest where there are the greatest tie-ins to safety or other critical impacts. Thus computer systems used in applications for which some form of regulation may be warranted may themselves be subject to regulation, because of the nature of the application. This is the thinking behind, for example, the Food and Drug Administration's efforts to look at computer systems embedded in medical instruments and processes (Peterson, 1988). Note, however, that it is not always possible to tell when a general-purpose system may be used in a safety-critical application. Thus standardized ratings have been used in other settings.24

Product Liability as a Market Influence

In addition to being directly regulated, the quality of software and systems, and in particular their security and safety aspects, may be regulated implicitly if courts find vendors legally liable for safety- or security-relevant flaws. Those flaws could be a result of negligence or of misrepresentation; the relevant law might involve contracts, torts, or consumer protection (e.g., warranties). At present, there is some indication from case law that vendors are more likely now than previously to be found liable for software or system flaws, and some legal analysts expect that trend to grow stronger (Agranoff, 1989; Nycum, 1989; Boss and Woodward, 1988). The committee applauds that trend, because it believes that security and trust have been overlooked or ignored in system development more often than not. Further, the committee believes that a recognized standard for system design and development, which could consist of GSSP, can provide a yardstick against which liability can be assessed.25 Depending exclusively on legal liability as a mechanism to stimulate improvements in quality could backfire: it could inhibit innovation because of fears linking legal risks and the development of new products. GSSP could help allay such fears and curb capricious litigation by clarifying general expectations about what constitutes responsible design and development.

Software plays a critical role in assuring the trustworthiness of computer and communications systems. However, the risk that software may not function properly is borne largely by the consumer, especially for off-the-shelf software, which is typically obtained under licenses laden with disclaimers. Off-the-shelf applications programs and even operating systems are typically acquired by license with limited rights, under the terms specified by the manufacturer, as opposed to direct sale (which would imply that the vendor forfeits control over the terms and conditions of its use) (Davis, 1985). The purchaser typically has no bargaining power with respect to the terms and conditions of the license.26 PC-based software licenses present the extreme case, since they are often sealed under shrink-wrap packaging whose opening signifies acceptance of the license. Typically, such licenses limit liability for damages to replacement of defective media or documentation, repair of substantial program errors, or refund of the license fee. From the vendor's perspective, this is not surprising: the revenue from an individual "sale" of PC software is very small, in the tens or hundreds of dollars; from the consumer's perspective, the absence of additional protections contributes to relatively low prices for packaged software. By contrast, customized applications systems, which may well be purchased rather than licensed, are developed in response to the specifically stated requirements of the client. The terms and conditions are those negotiated between the parties, the buyer has some real bargaining power, and the contract will reflect the intent and objectives of both parties.

Some consumer protection may come from the Uniform Commercial Code (UCC). Consumer protection may also come from the Magnuson-Moss Warranty Act (15 USC § 2301 et seq. (1982)), which provides standards for full warranties, permits limited warranties, and requires that warranties be expressed in understandable language and be available at the point of sale.

The UCC is a uniform law, drafted by the National Conference of Commissioners on Uniform State Laws and adopted as law by 49 states, that governs commercial transactions, including the sale of goods. While there is no law requiring express warranties in software licenses, the UCC addresses what constitutes an express warranty where provided, how it is to be enforced, and how to disclaim implied warranties.27 The acquisition of a good by license is a "transaction" in goods and is generally covered by Article 2 of the UCC, although some provisions of the code refer specifically to "sale" and may not be applicable to licensed goods. The National Conference of Commissioners is expected to clarify the issue of whether software is a "good" (and therefore covered by the UCC) by including software within the definition of a "good." In any case, the state courts are quite familiar with the UCC and tend to apply its principles to software license transactions. Note that a proposed extension to the UCC, Section 4A, would impose liability on banks for errors in electronic funds transfers under certain conditions. This provision is already seen as motivating greater wire transfer network security among banks (Datapro Research, 1989b).

The UCC provides a number of protections for the buyer of goods. In every sale of a product by a seller that deals in goods of the kind sold, there is an implied warranty that the product is merchantable. The usual test for merchantability is whether the product is fit for the ordinary purposes for which such products are used. The buyer can recover damages whether or not the seller knew of a defect, or whether or not the seller could have discovered such a defect. The UCC also provides an implied warranty of fitness for a particular purpose. This warranty provides damages where any seller, whether a dealer in goods of the kind sold or not, has any reason to know the specific use to which the product will be put, and knows that the buyer is relying on the seller's superior expertise to select a suitable product. These warranties may be, and almost always are, disclaimed as part of PC software shrink-wrap licenses, often by conspicuously including such words as "as is" or "with all faults."

The UCC does permit the vendor to limit or exclude consequential and incidental damages, unless such limitation is unconscionable (e.g., because it is overly one-sided). Consequential damages are compensation for an injury that does not flow immediately and directly from the action, but only from the consequences or results of the action. For example, damages from a computer break-in that exploited a flawed password mechanism would be deemed consequential to the extent that the supplier of the password mechanism was held responsible. Recovery from suppliers can take other less far-reaching (and more plausible) forms, such as incidental damages. Incidental damages include commercially reasonable charges incurred incident to a breach, such as costs incurred to mitigate the damage.

While disclaimers and standard-form contracts or licenses are legal and help to keep prices down, as applied to software they raise questions about whether consumers understand what is happening and what popular licensing practices may mean. These questions were noted in a recent review of computer contract cases:

Since purchasers generally base their selection of equipment and software on the sellers' representations as to the technical performance capabilities and reliability of equipment, the buyers often ignore the generally broad disclaimers of express and implied warranties in standard vendor contracts. When they become disappointed and discover that disclaimers foreclose their contract remedies, they turn to the law of misrepresentation for relief.


Misrepresentation cases will continue to proliferate until the industry more closely aligns its express warranties with the reasonable expectations of its customers, who assume that the hardware and software they buy will perform as described by the sellers' representatives who sold them the product. (Boss and Woodward, 1988, p. 1533)

The vulnerability of consumers and the mismatch of expectations even where individualized contracts are involved have been underscored by a few recent incidents involving vendor disabling of installed software in the course of disputes with customers.28

Software and Systems Present Special Problems

It is clear from the foregoing discussion that a buyer of off-the-shelf software has extremely limited recourse should the licensed software not perform as expected. The major motivation for the vendor to produce trustworthy software is the desire to remain competitive. In the process, however, features for which customer demand is not high may receive inadequate attention. For example, restraints to protect passengers and emission controls to protect the public at large are now universally installed in automobiles because they have been mandated by government action. Although public interest groups helped spur government action, few individual consumers demanded these features, perhaps because of the increased cost or the perception of reduced performance or the inability of an individual to bargain for them effectively. Yet few would argue that these impositions are not in the public interest; what does stimulate argument is the stringency of the safeguard required.

Unsafe or nonsecure software poses analogous risks to users and to others exposed to it (see Chapter 2's "Risks and Vulnerabilities"). More trustworthy software may, like safer and cleaner automobiles, carry a higher product price tag and may also suffer from a perception of reduced performance. In the absence of general consumer demand for more trustworthy software, should manufacturers of off-the-shelf software be subjected to governmental action? In particular, should the government act to reduce a software vendor's ability to disclaim warranties and to limit damages?

The software industry and software itself exhibit some characteristics that limit the scope for governmental action. On the one hand, complex software will inevitably contain errors; no human being can guarantee that it will be free of errors. Imposition of strict liability (without a finding of malice or negligence) for any error would clearly not be equitable, since the exercise of even an exceptionally high degree of care in software production would not guarantee an error-free product. On the other hand, tools and testing methods to reduce the probability of errors are available. Systematic use of such tools and methods prior to software release reduces the frequency and severity of errors in the fielded product. The committee believes that these tools and methods are not now in wide use either because they are not well known (e.g., the forefront technology of automated protocol analysis, which can dramatically shorten the development cycle) or because, given the evolution of products and practices in the industry, they appear to have been ignored by vendors (e.g., as has been the case for strongly type-checked link editors).
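To illustrate the kind of fault a strongly type-checked link editor would catch, consider the contrived two-file C program below (our example, not one from the report's sources). A conventional Unix link editor resolves symbols by name alone, so the mismatch links silently and misbehaves at run time; a type-checking link editor would reject it at build time.

    /* file: table.c -- defines the symbol as an array of 100 ints */
    int table[100];

    /* file: user.c -- declares the same symbol with the wrong type.
       A conventional link editor matches the two by name alone and
       links them without complaint; a strongly type-checked link
       editor would flag the mismatch before the program ever runs. */
    extern double table;   /* wrong type: double, not int[100] */

    int main(void)
    {
        /* Reads the first bytes of the int array as a double:
           silent nonsense that type-checked linking would have caught. */
        return table > 0.0;
    }

Built with, say, cc -c table.c user.c followed by cc table.o user.o, the program compiles and links cleanly under traditional toolchains despite the type clash.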

Of course, licensees must accept many risks in using software. Users must train themselves sufficiently in the proper operation of a computer system and software before relying on them. A software vendor should not be held liable for damage caused by users' gross ignorance.29 At the same time, the software vendor must bear a degree of responsibility in helping to properly train the user through adequate and clear documentation describing proper use of the product, and its limitations, including their bearing on security and safety. The superior knowledge and skill of the software vendor itself should impose a duty of care on that vendor toward the unskilled licensee, who in purchasing the product must rely on the vendor's representations, skill, and knowledge.30 At the same time, any imposition of liability on the vendor must imply a concomitant imposition of responsibility on the user to make a reasonable effort to learn how to use the software properly.

Perhaps the most compelling argument against increasing product liability for software and systems vendors is the potential for adverse impacts on the dynamic software industry, where products come quickly to the market and advances are continually made—both of which are major consumer benefits. Innovation is frequently supported by venture capital, and imposition of heavy warranty liability can chill the flow of capital and restrict the introduction of new products or the proliferation of new ventures. Even when raising capital is not an issue, risk aversion itself can discourage innovation. In either case, the increased business risk to the vendor is reflected in higher product prices to the consumer, which in turn may mean that fewer consumers benefit from a given piece of software.

Toward Equitable Allocation of Liability

The possible adverse consequences of holding software and system vendors to a higher standard of care must be carefully weighed against the potential benefits. As more powerful and more highly interconnected systems become more widespread, there will be increasing concern that the current allocation of the risk of software failure is too one-sided for an information society, at least for off-the-shelf software. The industry is sufficiently mature and verification tools and methodologies are sufficiently well understood today that total insulation of the industry from the consequences of software failure can no longer be justified. Operating system software and the major off-the-shelf applications software packages are produced by companies with a business base substantial enough to support quality assurance programs that would yield safer and more secure software; such programs could also reduce any liability risk to manageable proportions. As it is, vendors have already begun programs to make sure that their own development and production efforts are free of contamination from viruses. IBM, for example, set up its High-Integrity Computing Laboratory for this purpose (Smith, 1989; committee briefing by IBM), and ADAPSO, a trade association, has been promoting such efforts for its constituent software and services companies (Landry, 1990). Similarly, vendors do, to varying degrees, notify users of security-related flaws. For example, Sun Microsystems recently announced the Customer Warning System for handling security incidents31 (Ulbrich and Collins, 1990).

Shifting more (not all) risk to the vendors would result in greater care being taken in the production and testing of software. The British move to require greater testing of safety-relevant software illustrates that these concerns are not just local, but are in fact relevant to a worldwide marketplace. The resulting increased use of verification techniques would not only improve the level of software trustworthiness in the most general sense, but would also necessarily improve the level of trust in the specific information security context. (See Chapter 4's "Relating Specifications to Programs" and "Formal Specification and Verification.")

The national interest in the trustworthiness of software is sufficiently strong that Congress should review this question to determine (1) whether federal law is required (or whether state efforts are adequate) and (2) to what extent risks that can be averted through safer software should be shifted from user to vendor. Equitable risk allocation, which reasonably balances vendor and user interests, is achievable and will advance the national interest.

The development of GSSP, as recommended in Chapters 1 and 2, would provide a positive force to balance and complement the negative force of product liability. GSSP would provide a clear foundation of expectation that customers may count on as standards of performance and vendors may regard as standards of adequacy, against which legal claims could be judged. Interestingly, a similar notion was expressed by insurance industry representatives interviewed for this study, who suggested that some form of standard that could be harmonized with accounting standards would be a potent mechanism to improve security controls in the business community. Their rationale was that such standards would raise the profile of the issue with corporate directors and officers, who are liable to owners (stockholders, partners, and so on).32

The committee recognizes that security is not the only property involved in the issue of product liability; safety is obviously another such property. However, as security is a subliminal property of software, it is here that the gap between unspoken customer expectations and unarticulated vendor intentions looms largest. Advances in articulating GSSP would go far toward clarifying the entire field. Both customers and vendors stand to gain.

APPENDIX 6.1—EXPORT CONTROL PROCESS

National security export controls (hereafter, "export controls") limit access in other countries to technologies and products that could be valuable for military purposes. The control process, which varies by type of product, involves a list of controlled items and an administrative structure for enforcing controls on the export of listed items. Controlled exports do not mean no exports. Rather, these exports are controlled in terms of destination and, in some cases, volume or end use, with restrictions specified as part of the export license. It should be noted that even the tightest export controls do not totally block access to protected technology.

Four organizations have been the principal influences on the export control policy and process of the United States, namely the Coordinating Committee for Multilateral Export Control (CoCom), in which the United States participates, and the U.S. Departments of State, Commerce, and Defense. Each of these organizations has its own policies and jurisdictions for export control, but all the organizations interact heavily with regard to common pursuits (NAS, 1987).

CoCom, a multilateral effort to curb the flow of technology from the West to the Soviet Union and what have been its allies in the East Bloc, has included representatives from Japan, Australia, and all NATO countries except Iceland. Products controlled by CoCom are listed on the Industrial List (IL). The Department of State administers the International Traffic in Arms Regulations (ITAR; 22 CFR, Parts 120–130) through its Center for Defense Trade (formerly the Office of Munitions Control) in consultation with the Department of Defense.

Suggested Citation:"Why the Security Market Has Not Worked Well." National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age. Washington, DC: The National Academies Press. doi: 10.17226/1581.
×

That office maintains the U.S. Munitions Control List, which includes technologies and products representing an obvious military threat, such as weaponry. Finally, the Department of Commerce administers the Export Administration Regulations (EAR; 15 CFR, Parts 368–399), in consultation with the Department of Defense. Commerce maintains the Control List (CL), which has classified elements, and the Commodity Control List (CCL), which is not classified. Both of these lists contain dual-use technologies and products, which have both military and civilian/commercial value, and military-critical technologies that may be treated specially.

Recent developments in Eastern Europe have placed pressure on CoCom as an institution and on the United States, which is generally more conservative than other CoCom nations about controlling exports of dual-use technology. Even the topic of trade with other CoCom countries has stirred substantial debate within the U.S. government, some centering on how products are labeled (the most publicized controversy pertains to defining what is a supercomputer) and where they are listed, and much on whether a product should be listed at all.

Exports of general- and special-purpose computer systems are controlled if the systems offer one or more of three qualities: high performance (potentially useful in such strategic applications as nuclear bomb development or war gaming), specific military-critical functionality (e.g., radiation hardening and ruggedness or applications like on-board fire control), or the capability to produce high-performance or military-critical computer systems (e.g., sophisticated computer-aided design and manufacturing systems). Exports of supercomputers to countries other than Canada and Japan are subject to case-by-case review, which can take months, and require special conditions associated with the sale, installation, and operation of the supercomputer, so-called supercomputer safeguard plans.
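The three triggers can be read as a simple decision rule, sketched below. The profile fields and the performance threshold are hypothetical stand-ins of ours; the actual control lists are far more detailed.

    #include <stdio.h>

    /* Illustrative sketch of the three control triggers described above;
       the fields and threshold are hypothetical, not drawn from the lists. */
    struct system_profile {
        double performance;          /* e.g., millions of operations/second */
        int military_functionality;  /* radiation hardening, fire control, ... */
        int production_capability;   /* can produce controlled systems (CAD/CAM) */
    };

    /* A system is export-controlled if it exhibits any of the three qualities. */
    int export_controlled(const struct system_profile *p, double perf_threshold)
    {
        return p->performance > perf_threshold
            || p->military_functionality
            || p->production_capability;
    }

    int main(void)
    {
        struct system_profile sys = { 500.0, 0, 0 };
        printf("controlled: %d\n", export_controlled(&sys, 100.0)); /* 1 */
        return 0;
    }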

APPENDIX 6.2—INSURANCE

Insurance is a means for sharing a risk. The insured pays the insurer (up front, through a premium, and/or when receiving reimbursement, through a deductible or other copayment) to share his risks; if an adverse event takes place, the insurance policy provides for payment to compensate for the damage or loss incurred. The business community already buys insurance for risks ranging from fire to theft as well as for protection against employee dishonesty (bonding).

To be insurable, a risk must satisfy the following requirements:

Suggested Citation:"Why the Security Market Has Not Worked Well." National Research Council. 1991. Computers at Risk: Safe Computing in the Information Age. Washington, DC: The National Academies Press. doi: 10.17226/1581.
×
  • A volume base for risk spreading (insurance on communication satellites has a very small volume, something that contributes to its cost);

  • An establishable proof of loss;

  • A quantifiable loss (e.g., the value of mailing lists and research data cannot be consistently and objectively quantified, according to insurance representatives);

  • An ability to tie a loss to a time frame of occurrence;

  • An ability to credit responsibility for the loss; and

  • A knowable loss base.

With these elements, a purchaser of insurance can effectively transfer risk to a carrier and prove a loss. Risks that do not satisfy these elements include inherent business risks.

Another factor to consider is the nature of the consequences, which influences the liability base: a computer-aided manufacturing program controlling a robot may put lives at risk, whereas a number-crunching general ledger program will not.

The earliest insurance offerings covering computer environments were directed at third-party providers of computer services (e.g., service bureaus) concerned about direct and contingent liability associated with losses to their customers. Also leading the computer insurance market were banks—driven by state and federal auditors' concerns—and electronic funds transfer (EFT) systems, ranging from those established by the Federal Reserve (e.g., Fedwire) to the automated clearinghouses, for which there was legislative impetus behind the establishment and use of insurance coverage. This governmental urging of provisions for insurance against computer system risks was initially resisted by the insurance industry, which claimed not to understand the risks.

Insurance for banks and other financial services institutions is relatively well developed, reflecting the size of the potential loss, the ease with which the risk can be underwritten, and regulations requiring such protection. Much computer-related insurance for the banking industry, for example, builds on a historic base in bonds that protect against employee dishonesty, since most crimes against banks are perpetrated on the inside or with insider participation.

Outside of financial services, the insurance picture is mixed and less mature. There is some coverage against computer system mishaps available through employee bonding and property and casualty coverage. It is easiest to insure the tangible elements of a computer system. By contrast, coverage may be available for restoring a database, but not for reconstructing it from scratch. Another basis for insurance is found in business interruption coverage. Thus recovery of costs for system downtime is available. A new development in the 1980s was the introduction of limited coverage against external intrusions and associated offenses, including tampering, extortion, and others. Although the insurance described above protects the system-using organization, insurance representatives suggest there is a growing potential for coverage of errors and omissions on the part of the vendor, arising from the development of hardware, firmware, and software, to protect the vendor against liability claims. Such coverage appears targeted to developers of such complex products as engineering design software.

NOTES

1. Note that add-on controls are futile unless the user has full control over all the software on a machine.

2. A glaring example of a facility that can compromise security is "object reuse," which never was an issue in Unix, because it could not happen. Today's non-Unix systems from Digital Equipment Corporation and IBM still allow object reuse.

3. As noted by one analyst, Unix was originally designed by programmers for use by other programmers in an environment fostering open cooperation rather than privacy (Curry, 1990).

4. The fact that consumers are preoccupied with threats posed by insiders and have problems today that could benefit from better procedures and physical security measures, let alone technical measures, is discussed in the section titled "Consumer Awareness."

5. For example, the most recent of a series of intra-governmental advisories is the Office of Management and Budget's (OMB's) Guidance for Preparation of Security Plans for Federal Computer Systems that Contain Sensitive Information (OMB, 1990). This bulletin addresses the security planning process required by the Computer Security Act of 1987 (P.L. 100-235). It is expected to be superseded by a revision to OMB Circular Number A-130 and incorporated into future standards or guidelines from the National Institute of Standards and Technology.

6. An examination of this challenge for computing technologies generally can be found in a previous Computer Science and Technology Board report, Global Trends in Computer Technology and Their Impact on Export Control (NRC, 1988a).

7. There may also have been instances in which software implementations of DES or RSA were sent abroad by oversight or because the transmitter of the implementation was unaware of the law. The physical portability of software makes such slips almost inevitable.

8. Note that the United Kingdom and Australia set the threshold at B2 or the equivalent.

9. Note that in this time period only one A1 product has been on the evaluated product list. The information on approval rates came from NSA briefings for the committee.

10. This point was made by Digital Equipment Corporation in July 1990 testimony before the House Subcommittee on Transportation, Aviation, and Materials.

11. For example, observers of the market for disaster recovery services have noted that until a 1986 fire in Montreal, a principal marketing tool was a 1978 study assessing how long businesses could survive without their data processing operations; more recent fires (affecting the Hinsdale, Ill., central office for telephone service and lower Manhattan's business district) have also provided dramatic evidence of the consequences of system mishaps (Datamation, 1987).

12. This situation and a variant, in which bad products effectively drive out good ones, are not unique (see Akerlof, 1970).

13. A security officer may even occasionally need to decrypt an encrypted file that was encrypted by a suspect using a key known only to the suspect; the security officer may have very mixed feelings about the optimum strength of an encryption method that is available for routine use in protecting the company's data.

14. These issues have been actively discussed on electronic bulletin boards and forums (e.g., RISKS, CuD, the Well) and in the general and business press with the publicized launch of the Electronic Frontier Foundation in response to recent investigations and prosecutions.

15. "Insurance as a Market Lever" and Chapter Appendix 6.2 draw on discussions with insurance industry representatives, including carrier and agent personnel.

16. Insurance industry representatives voice concern about technology outpacing underwriting: if a policy is written at one point in time, will the language and exclusions prove appropriate when a claim is filed later, after new technology has been developed and introduced?

17. Indeed, there is some evidence that universities should do even more. For example, based on a recent survey, John Higgins observed the following:

    It seems evident that a substantial majority of current university graduates in computer science have no formal introduction to the issues of information security as a result of their university training.… While it is unlikely that every institution would develop a variety of courses in security, it is important that some institutions do. It establishes and helps to maintain the credibility of the subject and provides a nucleus of students interested in security topics. The most favorable interpretation of the survey seems to suggest that at present there are at best only two or three such universities in the nation. (Higgins, 1989, p. 556)

18. RISKS, formally known as the Forum on Risks to the Public in the Use of Computers and Related Systems, was established in August 1985 by Peter G. Neumann as chair of the Association for Computing Machinery's (ACM) Committee on Computers and Public Policy. It is an electronic forum for discussing issues relating to the use and misuse of computers in applications affecting our lives. Involving many thousands of people around the world, RISKS has become a repository for anecdotes, news items, and assorted comments thereon. The most interesting cases discussed are included in the regular issues of ACM's Software Engineering Notes (see Neumann, 1989). An updated index to about a thousand cases is under development.

19. The relative reluctance of victims to report computer crimes was noted to the committee by prosecutors and insurance representatives.

20. Experience shows that many users do not repair flaws or install patches (software to correct a flaw) even given notification. Since penetrators have demonstrated the ability to "reverse engineer" patches (and other remedies) and go looking for systems that lack the necessary corrections, the proper strategy for handling discovered flaws is not easy to devise.

21. Computer hardware, for example, must meet the Federal Communications Commission's regulations for electronic emanations, and European regulations on ergonomic and safety qualities of computer screens and keyboards have affected the appearance and operation of systems worldwide.

22. This point was made by Digital Equipment Corporation in July 1990 testimony before the House Subcommittee on Transportation, Aviation, and Materials.

23. Vendors also argue that some consumers may prefer products with little security, but the prevalent lack of consumer understanding of the choices casts doubt on this explanation for the weak market.

24. For example, rope manufacturers use a system of standardized strength ratings, since one cannot tell at the point of manufacture whether a rope will be used to tie packages or to suspend objects, for example. Of course, some highly specialized rope, such as climbing lines, carries extra assurance, which comes with added cost.

25. Michael Agranoff observes, "Such standards would not eliminate computer abuse, especially by 'insiders'; they would not eliminate computer-related negligence. They would, however, provide a 'curb on technology,' a baseline from which to judge both compensation for victims of computer abuse and the efficacy of measures to combat computer crime" (Agranoff, 1989, p. 275).

26. The terms and conditions governing the acquisition of operating-system and off-the-shelf software have many of the attributes of an adhesion contract (although whether there is a contract at all is open to debate). An adhesion contract is a standardized contract form offered on a "take-it-or-leave-it" basis, with no opportunity to bargain. The prospective buyer can acquire the item only under the stated terms and conditions. Of course, the "buyer" has the option of not acquiring the software, or of acquiring a competing program that is most likely subject to the same or a similar set of terms and conditions, but often the entire industry offers the item only under a similar set of terms and conditions.

27. The UCC upholds express warranties in Section 2-313. An express warranty is created when the seller affirms a "fact or promise, describes the product, and provides a sample or model, and the buyer relies on the affirmation, description, sample, or model as part of the basis of the bargain." By their very nature, express warranties cannot be disclaimed. The UCC will not allow a vendor to make an express promise that is then disclaimed. Language that cannot be reasonably reconciled is resolved in favor of the buyer.

28. Most recently, Logisticon, Inc., apparently gained telephone access to Revlon, Inc.'s computers and disabled software it supplied. Revlon, claiming dissatisfaction with the software, had suspended payments. While Logisticon argued it was repossessing its property, Revlon suffered a significant interruption in business operations and filed suit (Pollack, 1990).

29. Although it would be inequitable to impose liability for clearly unintended uses in unintended operating environments, a vendor should not escape all liability for breach of warranty simply because a product can be used across a wide spectrum of applications or operating environments.

30. That superior knowledge is an argument for promoting the technical steps discussed in the section titled "Consumer Awareness," such as shipping systems with security features turned on.

31. The Customer Warning System involves a point of contact for reporting security problems; proactive alerts to customers of worms, viruses, or other security holes; and distribution of fixes.

32. The Foreign Corrupt Practices Act is one step toward linking accounting and information security practices; it requires accounting and other management controls that security experts interpret as including computer security controls (Snyders, 1983). Also, note that an effort is under way on the part of a group of security practitioners to address the affirmative obligations of corporate officers and directors to safeguard information assets (personal communication from Sandra Lambert, July 1990).
