Computers at Risk: Safe Computing in the Information Age
Copyright © National Academy of Sciences. All rights reserved.

6
Why the Security Market Has Not Worked Well

Currently available are a wide variety of goods and services intended to enhance computer and communications security. These range from accessory devices for physical security, identification, authentication, and encryption to insurance and disaster recovery services, which provide computer and communications centers as a backup to an organization's or individual's own equipment and facilities. This chapter focuses on the market for secure or trusted systems and related products, primarily software. It provides an overview of the market and its problems, outlines the influences of the federal government on this market, discusses the lack of consumer awareness and options for alleviating it, and assesses actual and potential government regulation of the secure system market. Additional details on the export control process and insurance are provided in two chapter appendixes.

THE MARKET FOR TRUSTWORTHY SYSTEMS

Secure or trusted information systems are supplied by vendors of general- and special-purpose hardware and software. Overall, the market for these systems has developed slowly, although the pace is picking up somewhat now. Whereas the market in 1980 was dominated by commercial computer and communications systems with no security features, the market in 1990 includes a significant number of systems that offer discretionary access control and a growing number from both major and niche vendors with both discretionary and mandatory access control, which provides significant protections against breaches of confidentiality. Notable is the trend to produce systems rated at the Orange Book's B1 level (see Appendix A of this report), often by adapting products that had had fewer security features and less assurance.

According to vendors, consumers most frequently demand security in connection with networked systems, which serve multiple users. One market research firm (International Resource Development) has estimated that the market for local area network (LAN) security devices may grow up to sixfold by the mid-1990s; it also foresees significant growth in data and voice encryption devices, in part because their costs are declining (Brown, 1989a). Other factors cited for growth in the encryption market are requirements for control of fraud in financial services and elsewhere (Datapro Research, 1989a).

Prominent in the market has been host access control software for IBM mainframes, especially IBM's RACF and Computer Associates' ACF2 and Top Secret. This type of add-on software provides (but does not enforce) services, such as user identification, authentication, authorization, and audit trails, that the underlying operating systems lack. It was originally developed in the 1970s and early 1980s, driven by the spread of multiaccess applications (mainframe-based systems were not originally developed with security as a significant consideration). Both IBM and Computer Associates plan to make these products conform to Orange Book B1 criteria. Although IBM now intends to bring its major operating systems up to the B1 level, it is reluctant to undertake development to achieve higher levels of assurance (committee briefing by IBM). Moreover, the market for host access control systems is growing slowly because those who need them generally have them already.1 One market analyst, Datapro, notes that sales come mostly from organizations required by federal or state regulations to implement security controls (Datapro Research, 1990a). The most powerful alternatives to add-on software, of course, are systems with security and trust built in.
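The division of labor described above—an add-on layer that supplies identification, authorization, and audit services the underlying operating system lacks—can be sketched roughly as follows. This is a hypothetical illustration (the class and method names are invented, not RACF's or ACF2's actual interfaces): discretionary access control, in which the resource owner decides who may do what, coupled with an append-only audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AccessControlList:
    """Discretionary access control: the resource owner grants rights."""
    owner: str
    grants: dict = field(default_factory=dict)  # user -> set of permitted actions


class AddOnMonitor:
    """Hypothetical sketch of an add-on access-control layer in the spirit of
    RACF/ACF2-style products: it answers authorization queries and records an
    audit trail, but (as the text notes) the underlying system must still be
    trusted to route every access through it."""

    def __init__(self):
        self.acls = {}          # resource name -> AccessControlList
        self.audit_trail = []   # append-only log of every decision

    def register(self, resource, owner):
        self.acls[resource] = AccessControlList(owner=owner)

    def grant(self, granter, resource, user, action):
        acl = self.acls[resource]
        if granter != acl.owner:
            raise PermissionError("only the owner may grant access")
        acl.grants.setdefault(user, set()).add(action)

    def check(self, user, resource, action):
        acl = self.acls.get(resource)
        allowed = acl is not None and (
            user == acl.owner or action in acl.grants.get(user, set()))
        # Every decision, permit or deny, is appended to the audit trail.
        self.audit_trail.append(
            (datetime.now(timezone.utc).isoformat(), user, resource, action,
             "PERMIT" if allowed else "DENY"))
        return allowed
```

The "provides (but does not enforce)" caveat is visible in the design: the monitor only answers questions and logs them; nothing in this layer prevents a program from bypassing `check` entirely.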
In contrast to the mainframe environment, some vendors have been building more security features directly into midrange and open systems, possibly benefiting from the more rapid growth of this part of the market. Even in the personal computer market, newer operating systems (e.g., OS/2) offer more security than older ones (e.g., MS-DOS). Multics, the first commercial operating system that was developed (by the Massachusetts Institute of Technology, General Electric, and AT&T Bell Laboratories) with security as a design goal, achieved a B2 rating in 1985. While Multics has a loyal following and is frequently cited as a prime exemplar of system security, its commercial history has not been encouraging. Its pending discontinuation by its vendor (now Bull, previously Honeywell, originally General Electric) apparently reflects a strategic commitment to other operating systems (Datapro Research, 1990b).

The history of Unix illustrates the variability of market forces during the lifetime of a single product. Originally Unix had security facilities superior to those in most commercial systems then in widespread use.2 Unix was enthusiastically adopted by the academic computer science community because of its effectiveness for software development. This community, where security consciousness was not widespread, created new capabilities, especially to interface to DARPA-sponsored networking (e.g., remote log-in and remote command execution).3 As Unix spread into the commercial marketplace, the new capabilities were demanded despite the fact that they undermined the ability to run a tight ship from the security standpoint. Subsequently, and largely spurred by the Orange Book, various efforts to strengthen the Unix system have been undertaken (including T-MACH, funded by DARPA; LOCK, funded by the National Security Agency; the IEEE POSIX 1003.6 standards proposal; and various manufacturers' projects). But the corrections will not be total: many customers still choose freedom over safety.

The slow growth of the market for secure software and systems feeds vendor perceptions that its profitability is limited. Both high development costs and a perceived small market have made secure software and system development appear as a significant risk to vendors. Moreover, a vendor that introduces a secure product before its competitors has only a year or two to charge a premium. After that, consumers come to expect that the new attributes will be part of the standard product offering. Thus the pace of change and competition in the overall market for computer technology may be inimical to security, subordinating security-relevant quality to creativity, functionality, and timely releases or upgrades. These other attributes are rewarded in the marketplace and more easily understood by consumers and even software developers.
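The "security facilities" Unix originally offered rest largely on its discretionary mode bits: read, write, and execute permissions for the file's owner, its group, and everyone else. A simplified sketch of that check follows; the function name and argument layout are invented for illustration, and special cases the real kernel handles (the superuser, sticky and setuid bits, ACL extensions) are deliberately omitted.

```python
import stat


def unix_may_access(mode, file_uid, file_gid, uid, gids, want):
    """Simplified Unix permission check.

    Exactly one permission triple applies, chosen in order:
    owner if uid matches, else group if the file's group is among the
    caller's groups, else 'other'.  `want` is 'r', 'w', or 'x'.
    """
    if uid == file_uid:
        bits = {"r": stat.S_IRUSR, "w": stat.S_IWUSR, "x": stat.S_IXUSR}
    elif file_gid in gids:
        bits = {"r": stat.S_IRGRP, "w": stat.S_IWGRP, "x": stat.S_IXGRP}
    else:
        bits = {"r": stat.S_IROTH, "w": stat.S_IWOTH, "x": stat.S_IXOTH}
    return bool(mode & bits[want])
```

Note that the owner triple applies even when it is more restrictive than the group or other triples—the triples are selected, not combined—which is one of the subtleties that made Unix permissions easy to misconfigure once systems were networked.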
While the overall market for computer technology is growing and broadening, the tremendous growth in retail distribution, as opposed to custom or low-volume/high-price sales, has helped to distance vendors from consumers and to diminish the voice of the growing body of computer users in vendor decision making. Although vendors have relatively direct communications with large-system customers—customers whom they know by name and with whom they have individualized contracts—they are relatively removed from buyers of personal computer products, who may be customers of a retail outlet rather than of the manufacturer itself. Retail distribution itself may constrain the marketing of security products. Vendors of encryption and access control products have indicated that some retailers may avoid offering security products because "the issue of security dampens enthusiasm," while some of these relatively small vendors avoid retail distribution because it requires more customer support than they can manage (Datapro Research, 1989a).

Many in the security field attribute the increased availability of more secure systems to government policies stimulating demand for secure systems (see "Federal Government Influence on the Market" below). Those policies have led to a two-tiered market: government agencies, especially those that process classified information, and their vendors, are likely to demand Orange Book-rated trusted systems; other agencies, commercial organizations, and individuals that process sensitive but unclassified information are more likely to use less sophisticated safeguards. This second market tier constitutes the bulk of the market for computer-based systems.

The committee believes that, more often than not, consumers do not have enough or good enough safeguards, both because options on the market often appear to be ineffective or too expensive, and because the value of running a safe operation is often not fully appreciated. Since data describing the marketplace are limited and of questionable quality, the committee bases its judgment on members' experiences in major system user and vendor companies and consultancies. This judgment also reflects the committee's recognition that even systems conforming to relatively high Orange Book ratings have limitations, and do not adequately address consumer needs for integrity and availability safeguards.

A SOFT MARKET: CONCERNS OF VENDORS

Vendors argue that a lack of broad-based consumer understanding of security risks and safeguard options results in relatively low levels of demand for computer and communications security. For example, one survey of network users found that only 17 percent of Fortune 1000 sites and 10 percent of other sites used network security systems (Network World, 1990).
Thus, although market research may signal high growth rates in certain security markets, the absolute market volume is small. To gain insight into the current market climate for secure products, the committee interviewed several hardware and software vendors. Vendors find security hard to sell, in part because consumers and vendors have very different perceptions of the security problem.4 This situation calls for creative marketing: one vendor stresses functionality in marketing operating system software for single-user systems and security in marketing essentially the same software for multiuser local area networked systems. A commonly reported problem is limited willingness of management to pay for security, although the rise in expectations following publicity over major computer crimes suggests that at least at the technical level, consumers are ready for more security. From the consumer's perspective, it is easy to buy something that is cheap; buying something expensive requires risk assessment and an investment in persuading management of the need. Vendors observed that they hear about what consumers would like, but they do not hear consumers say that they will not buy products that lack certain security features.

Vendors differ in their attitudes toward the Orange Book as a stimulus to commercial product security. Some indicated that they saw the government as leading the market; others characterized the government as a force that motivates their customers but not them directly. Vendors familiar with the Orange Book find it offers little comfort in marketing. For example, one customer told a sales representative that he did not need the capabilities required by the Orange Book and then proceeded to list, in his own words, requirements for mandatory access control and complete auditing safeguards, which are covered extensively in the Orange Book. Overall, vendors maintained that the Orange Book has had limited appeal outside the government contracting market, in part because it is associated with the military and in part because it adds yet more jargon to an already technically complex subject. This sentiment echoes the findings of another study that gathered inputs from vendors (AFCEA, 1989). Vendors also indicated that marketing a product developed in the Orange Book environment to commercial clients required special tactics, extra work that most have been reluctant to undertake.
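Mandatory access control—what the customer in the anecdote above described in his own words—differs from the discretionary kind in that the system, not the resource owner, enforces sensitivity labels, so no user can simply grant access away. A minimal sketch in the style of the Bell-LaPadula confidentiality rules follows; it is a simplification (real B1 systems also handle categories and compartments, not just a single hierarchy of levels).

```python
# Clearance/classification levels ordered from least to most sensitive.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}


def may_read(subject_level, object_level):
    """Simple security property ('no read up'): a subject may read only
    objects at or below its own clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]


def may_write(subject_level, object_level):
    """*-property ('no write down'): a subject may write only to objects
    at or above its own level, so sensitive data cannot leak into a
    less sensitive container."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Unlike the discretionary checks discussed earlier, these rules are evaluated against labels attached by the system, which is why mandatory access control pairs naturally with the "complete auditing safeguards" the customer also asked for.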
Vendors also complained that it is risky to develop products intended for government evaluation (associated with the Orange Book) because the evaluation process itself is expensive for vendors—it takes time and money to supply necessary information—and because of uncertainty that the desired rating will be awarded. Time is a key concern in the relatively fast-paced computer system market, and vendors complain about both the time to complete an evaluation and the timing of the evaluation relative to the product cycle. The vendor's product cycle is driven by many factors—competition, market demands for functionality, development costs, and compatibility and synchrony with other products—of which security is just one more, and one that is sometimes perceived as having a negative impact on the others. While vendors may have a product development-to-release cycle that takes about three to six years, the evaluations have tended to come late in the product cycle, often resulting in the issuing of ratings after a product has been superseded by newer technology.

The time to complete an evaluation has been a function of National Computer Security Center (NCSC) resources and practice. NCSC's schedule has been driven by its emphasis on security, the perceived needs of its principal clients in the national security community, and the (limited) availability of evaluation staff. By 1990, NCSC was completing evaluations at a rate of about five per year, although the shift from evaluating primarily C-level systems to primarily B-level systems was expected to extend the time required per evaluation (Anthes, 1989d; committee briefing by NSA). The time involved reflects the quality of the evaluation resources: individuals assigned to do evaluations have often had limited, if any, experience in developing or analyzing complex systems, a situation that extends the time needed to complete an evaluation; both vendors and NCSC management have recognized this. Further, as a member of the NCSC staff observed to the committee, "We don't speed things up." As of late October 1990, 1 system had obtained an A1 rating, none had been rated B3, 2 had been rated B2, 3 had been rated B1, 13 had been rated C2, and 1 had been rated C1 (personal communication, NSA, October 26, 1990). Prospects for future evaluations are uncertain, in view of the recent reorganization of the NCSC (see Chapter 7).

Vendors have little incentive to produce ratable systems when the absence of rated products has not detectably impaired sales. Customers, even government agencies that nominally require rated products, tend to buy whatever is available, functionally desirable, and/or compatible with previously purchased technology. Customer willingness to buy unrated products that come only with vendor claims about their security properties suggests possibilities for false advertising and other risks to consumers. Consider the multilevel secure database management system released by Sybase in February 1990 (Danca, 1990a).
The Secure Server, as it is called, was designed and developed to meet B1-level requirements for mandatory access control as defined in the Orange Book. Development of the product began in 1985, with the initial operational (beta) release in the spring of 1989. The Air Force adopted the Secure Server in the next version of its Global Decision Support System (GDSS), which is used by the Military Airlift Command to monitor and control worldwide airlift capabilities. However, at the time of its release, the Secure Server had not been evaluated against the Orange Book criteria because the relevant criteria, contained in the Trusted Database Interpretation (TDI), were still being reviewed. Although the TDI is expected to be released in late 1990 or early 1991, it will be at least six months (and probably nine) before any official opinion is rendered by NCSC. In short, Sybase will be marketing a secure product that took five years to develop, and the Air Force will be using that product for a full year before any evaluation information is released. Both the vendors and consumers have proceeded with some degree of risk.

FEDERAL GOVERNMENT INFLUENCE ON THE MARKET

The federal government has tried to influence commercial-grade computer security through direct procurement, research support, and regulatory requirements placed on the handling of data in the private sector. That influence has been realized both directly through government actions (e.g., procurement and investment in research) and indirectly through regulations and policies that provide incentives or disincentives in the marketplace.5 The influence of the Orange Book is discussed in Chapters 2 to 5 and in Appendix A. Procurement and strategic research programs are discussed briefly below.

Procurement

The U.S. government has tried to suggest that a strong government and commercial market would exist for security products were such products available (EIA, 1987). Industry is skeptical of such promises, arguing that the government does not follow through in its procurement (AFCEA, 1989), even after sponsoring the development of special projects for military-critical technology. However, one step the government has taken that has apparently stimulated the market is known as "C2 by '92." A directive (NTISSP No. 200, issued on July 15, 1987) of the National Telecommunications and Information Systems Security Committee (NTISSC), the body that develops and issues national system security operating policies, required federal agencies and their contractors to install, by 1992, discretionary access control and auditing at the Orange Book C2 level in multiuser computer systems containing classified or unclassified but sensitive information. This directive is widely believed to have stimulated the production of C2-level systems.
However, its impact in the future is in question, given the divergence in programs for protecting classified and sensitive but unclassified information that has been reinforced by the Computer Security Act of 1987 and the revision of National Security Decision Directive 145 (see Chapter 7). The Computer Security Act itself has the potential for increasing the demand for trusted systems, but the security assessment and planning process it triggered fell short of expectations (GAO, 1990c).

Concern for security is not a consistent factor in government procurements. A small sample, compiled by the committee, of 30 recent (1989) requests for proposal (RFPs)—10 issued by DOD organizations and 20 by civil agencies—presents a picture of uneven concern for security: five RFPs had no stated security requirements. Five DOD and eight civil agency RFPs specified adherence to standards defined by the NCSC and the National Institute of Standards and Technology (NIST), although three of the DOD RFPs did not specify an Orange Book level. Two DOD and three civil agency RFPs indicated that unclassified but protectable data would be handled. None of the DOD RFPs specified encryption requirements; three civil agency RFPs required Data Encryption Standard (DES) encryption, and one required NSA-approved encryption technology. Access control features were required by 13 RFPs; auditing features, by six.

The procurement process itself provides vehicles for weakening the demand for security. Vendors occasionally challenge (through mechanisms for comment within the procurement process) strong security requirements in RFPs, on the grounds that such requirements limit competition. For example, a C2 requirement for personal computers was dropped from an RFP from the Air Force Computer Acquisition Command (AFCAC) because conforming systems were not available (Poos, 1990). Budgetary pressures may also contribute to weakening security requirements. Such pressures may, for example, result in the inclusion of security technology as a non-evaluated option, rather than as a requirement, leading to a vendor perception that the organization is only paying lip service to the need for security. Interestingly, DOD itself is exploring novel ways to use the procurement process to stimulate the market beyond the Orange Book and military standards.
In 1989 it launched the Protection of Logistics Unclassified/Sensitive Systems (PLUS) program to promote standards for secure data processing and data exchange among DOD and its suppliers. PLUS complements other DOD efforts to automate procurement, such as the electronic data interchange and Computer-aided Acquisition and Logistics Support (CALS) programs (Kass, 1990). A subsidiary goal of PLUS is cheaper commercial security products (personal communication with PLUS staff).

Strategic Federal Investments in Research and Development

The government, especially through DARPA funding, has contributed to computer technology through large-scale strategic research and development programs that supported the creation or enhancement of facilities such as the (recently decommissioned) Arpanet network serving researchers, Multics and ADEPT 50 (operating systems with security features), MACH (an extension of the Unix operating system that fully integrates network capabilities and that has been championed by the industry consortium Open Software Foundation), and the Connection Machine (an advanced parallel processor). Each of these DARPA-sponsored projects has moved the market into areas that are beneficial to both government and commercial computer users. The Arpanet and Multics experiences illustrate how very large scale, multifaceted, systems-oriented projects can catalyze substantial technological advances, expand the level of expertise in the research community, and spin off developments in a number of areas. Scale, complexity, and systems orientation are particularly important for progress in the computer and communications security arena, and the government is the largest supporter of such projects. Historically, security has been a secondary concern in these projects, although it is gaining more attention now. Their widespread impact suggests that similar initiatives emphasizing security could pay off handsomely.

In the security field specifically, projects such as Multics and ADEPT 50 (which provided strong access control mechanisms), LOCK (hardware-based integrity and assurance), SeaView (a secure database management system), TMACH (a trusted or secure version of MACH), and the CCEP (Commercial COMSEC Endorsement Program for commercially produced encryption products) are intended to stimulate the market to develop enhanced security capabilities by reducing some of the development risks. The LOCK program, for example, was designed to make full documentation and background material available to major vendors so that they might profit from the LOCK experience; similar benefits are expected from the TMACH development program.
Another example is NSA's STU-III telephone project, which involved vendors in the design process. Five prospective vendors competed to develop designs; three went on to develop products. The interval from contract award to commercial product was less than three years, although years of research and development were necessary beforehand. The STU-III has decreased the price of secure voice and data communications from over $10,000 per unit to about $2,000 per unit, pleasing both government consumers and commercial vendors. Moreover, in 1990 the DOD purchased several thousand STU-III terminals, not only for use in DOD facilities but also for loan to qualified defense contractors; these firms will receive the majority of the purchased units. This program will help to overcome one obvious disincentive for commercial acquisition: to be of use, a STU-III is needed not only by the party originating a call but also by the receiver.

For national security reasons, programs that are sponsored by NSA confine direct technology transfer to companies with U.S. majority ownership, thereby excluding companies with foreign ownership, control, or influence (FOCI). While the United States has legitimate national interests in maintaining technological advantage, the increasingly international nature of the computer business makes it difficult even to identify what is a U.S. company, much less target incentives (NRC, 1990). Another factor to consider in the realm of strategic research and development is that, consistent with its primary mission, NSA's projects are relatively closed, whereas an agency like DARPA can more aggressively reach out to the computer science and technology community.

The proposed federal high-performance computing program (OSTP, 1989) could provide a vehicle for strategic research investment in system security technology; indeed, security is cited as a consideration in developing the component National Research and Education Network—and security would clearly be important to the success of the network. Agencies involved in generating technology through this program include DOD (with responsibility concentrated in DARPA), the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the Department of Energy (DOE), and NIST. However, funding uncertainty and delays associated with the high-performance computing program suggest both that security aspects could be compromised and that additional but more modest large-scale technology development projects that promote secure system development may be more feasible. Certainly, they would have substantial benefits in terms of advancing and commercializing trust technology.
Other government-backed research programs that focus on physical, natural, or biomedical sciences (e.g., the anticipated database for the mapping and sequencing of the human genome, or remote-access earth sciences facilities) also have security considerations that could provide useful testbeds for innovative approaches or demonstrations of known technology.

Export Controls as a Market Inhibitor

Vendors maintain that controls on exports inhibit the development of improved commercial computer and communications security products. Controls on the export of commercial computer security technology raise questions about the kind of technology transfer that should be controlled (and why), whether security technologies aimed at the civilian market should be considered to have military relevance (dual use), whether control should continue under the provisions aimed at munitions, and other considerations that affect how commercial and military perspectives should be weighed and balanced for these technologies. An overview of the export control process is provided in Chapter Appendix 6.1.

The challenge for policymakers is to balance national security and economic security interests in drawing the line between technology that should be controlled because it compromises national security (in this case by hampering intelligence gathering by government entities) and technology that need not be, and in allowing that line to move over time.6 The committee considered controls on the export of trusted systems and on the export of commercial-grade cryptographic products. The current rules constraining the export of trusted (and cryptographic) systems were developed at a time when the U.S. position in this area of technology was predominant. As in other areas of technology, that position has changed, and it is time to review the nature of the controls and their application, to ensure that whatever controls are in place balance all U.S. interests and thereby support national security in the fullest sense over the long term. The emergence of foreign criteria and evaluation schemes (see "Comparing National Criteria Sets" in Chapter 5) makes reconsideration of export controls on trusted systems especially timely.

Balancing the possible temporary military benefit against the long-run interests of both national security applications and commercial viability, the committee concludes that Orange Book ratings, per se, do not signify military-critical technology, even at the B3 and A1 levels. Of course, specific implementations of B3 and A1 systems may involve technology (e.g., certain forms of encryption) that does raise national security concerns, but such technology is not necessary for achieving those ratings.
NSA officials who briefed the committee offered support for that conclusion, which is also supported by the fact that the criteria for achieving Orange Book ratings are published information. The committee urges clarifying just what aspects of a trusted system are to be controlled, independent of Orange Book levels, and targeting more precisely the technology that it is essential to control. It also urges reexamination of controls on implementations of the Data Encryption Standard (DES), which also derive from published information (the standard; NBS, 1977). Issues in both of these areas are discussed below.

Technology Transfer: Rationale for Controlling Security Exports

Currently, the military and intelligence communities provide the largest concentration of effort, expertise, and resources allocated to
for off-the-shelf software, which is typically obtained under licenses laden with disclaimers. Off-the-shelf applications programs and even operating systems are typically acquired by license with limited rights, under the terms specified by the manufacturer, as opposed to direct sale (which would imply that the vendor forfeits control over the terms and conditions of its use) (Davis, 1985). The purchaser typically has no bargaining power with respect to the terms and conditions of the license.26 PC-based software licenses present the extreme case, since they are often sealed under shrink-wrap packaging whose opening signifies acceptance of the license. Typically, such licenses limit liability for damages to replacement of defective media or documentation, repair of substantial program errors, or refund of the license fee. From the vendor's perspective, this is not surprising: the revenue from an individual "sale" of PC software is very small, in the tens or hundreds of dollars; from the consumer's perspective, the absence of additional protections contributes to relatively low prices for packaged software.

By contrast, customized applications systems, which may well be purchased rather than licensed, are developed in response to the specifically stated requirements of the client. The terms and conditions are those negotiated between the parties, the buyer has some real bargaining power, and the contract will reflect the intent and objectives of both parties.

Some consumer protection may come from the Uniform Commercial Code (UCC). Consumer protection may also come from the Magnuson-Moss Warranty Act (15 USC § 2301 et seq. (1982)), which provides standards for full warranties, permits limited warranties, and requires that warranties be expressed in understandable language and be available at the point of sale.
The UCC is a uniform law, drafted by the National Conference of Commissioners on Uniform State Laws and adopted as law by 49 states, that governs commercial transactions, including the sale of goods. While there is no law requiring express warranties in software licenses, the UCC addresses what constitutes an express warranty where provided, how it is to be enforced, and how to disclaim implied warranties.27 The acquisition of a good by license is a "transaction" in goods and is generally covered by Article 2 of the UCC, although some provisions of the code refer specifically to "sale" and may not be applicable to licensed goods. The National Conference of Commissioners is expected to clarify the issue of whether software is a "good" (and therefore covered by the UCC) by including software within the definition of a "good." In any case, the state courts are quite familiar with the UCC and tend to apply its principles to software
license transactions. Note that a proposed extension to the UCC, Section 4A, would impose liability on banks for errors in electronic funds transfers under certain conditions. This provision is already seen as motivating greater wire transfer network security among banks (Datapro Research, 1989b).

The UCC provides a number of protections for the buyer of goods. In every sale of a product by a seller that deals in goods of the kind sold, there is an implied warranty that the product is merchantable. The usual test for merchantability is whether the product is fit for the ordinary purposes for which such products are used. The buyer can recover damages whether or not the seller knew of a defect, or whether or not the seller could have discovered such a defect. The UCC also provides an implied warranty of fitness for a particular purpose. This warranty provides damages where any seller, whether a dealer in goods of the kind sold or not, has any reason to know the specific use to which the product will be put, and knows that the buyer is relying on the seller's superior expertise to select a suitable product.

These warranties may be, and almost always are, disclaimed as part of PC software shrink-wrap licenses, often by conspicuously including such words as "as is" or "with all faults." The UCC does permit the vendor to limit or exclude consequential and incidental damages, unless such limitation is unconscionable (e.g., because it is overly one-sided). Consequential damages are compensation for an injury that does not flow immediately and directly from the action, but only from the consequences or results of the action. For example, damages from a computer break-in that exploited a flawed password mechanism would be deemed consequential to the extent that the supplier of the password mechanism was held responsible.
Recovery from suppliers can take other less far-reaching (and more plausible) forms, such as incidental damages. Incidental damages include commercially reasonable charges incurred incident to a breach, such as costs incurred to mitigate the damage.

While disclaimers and standard-form contracts or licenses are legal and help to keep prices down, as applied to software they raise questions about whether consumers understand what is happening and what popular licensing practices may mean. These questions were noted in a recent review of computer contract cases:

    Since purchasers generally base their selection of equipment and software on the sellers' representations as to the technical performance capabilities and reliability of equipment, the buyers often ignore the generally broad disclaimers of express and implied warranties in standard vendor contracts. When they become disappointed and discover that disclaimers foreclose their contract remedies, they turn to the law of misrepresentation for relief.
    Misrepresentation cases will continue to proliferate until the industry more closely aligns its express warranties with the reasonable expectations of its customers, who assume that the hardware and software they buy will perform as described by the sellers' representatives who sold them the product. (Boss and Woodward, 1988, p. 1533)

The vulnerability of consumers and the mismatch of expectations even where individualized contracts are involved have been underscored by a few recent incidents involving vendor disabling of installed software in the course of disputes with customers.28

Software and Systems Present Special Problems

It is clear from the foregoing discussion that a buyer of off-the-shelf software has extremely limited recourse should the licensed software not perform as expected. The major motivation for the vendor to produce trustworthy software is the desire to remain competitive. In the process, however, features for which customer demand is not high may receive inadequate attention. For example, restraints to protect passengers and emission controls to protect the public at large are now universally installed in automobiles because they have been mandated by government action. Although public interest groups helped spur government action, few individual consumers demanded these features, perhaps because of the increased cost, the perception of reduced performance, or the inability of an individual to bargain for them effectively. Yet few would argue that these impositions are not in the public interest; what does stimulate argument is the stringency of the safeguard required. Unsafe or nonsecure software poses analogous risks to users and to others exposed to it (see Chapter 2's "Risks and Vulnerabilities"). More trustworthy software may, like safer and cleaner automobiles, carry a higher product price tag and may also suffer from a perception of reduced performance.
In the absence of general consumer demand for more trustworthy software, should manufacturers of off-the-shelf software be subjected to governmental action? In particular, should the government act to reduce a software vendor's ability to disclaim warranties and to limit damages? The software industry and software itself exhibit some characteristics that limit the scope for governmental action. On the one hand, complex software will inevitably contain errors; no human being can guarantee that it will be free of errors. Imposition of strict liability (without a finding of malice or negligence) for any error would clearly not be equitable, since the exercise of even an exceptionally high degree of care in software production would not guarantee an error-free product. On the other hand, tools and testing methods to reduce the probability of errors are available. Systematic use of such tools and methods prior to software release reduces the frequency and severity of errors in the fielded product. The committee believes that these tools and methods are not now in wide use either because they are not well known (e.g., the forefront technology of automated protocol analysis, which can dramatically shorten the development cycle) or because, given the evolution of products and practices in the industry, they appear to have been ignored by vendors (e.g., as has been the case for strongly type-checked link editors).

Of course, licensees must accept many risks in using software. Users must train themselves sufficiently in the proper operation of a computer system and software before relying on them. A software vendor should not be held liable for damage caused by users' gross ignorance.29 At the same time, the software vendor must bear a degree of responsibility in helping to properly train the user through adequate and clear documentation describing proper use of the product, and its limitations, including their bearing on security and safety. The superior knowledge and skill of the software vendor itself should impose a duty of care on that vendor toward the unskilled licensee, who in purchasing the product must rely on the vendor's representations, skill, and knowledge.30 At the same time, any imposition of liability on the vendor must imply a concomitant imposition of responsibility on the user to make a reasonable effort to learn how to use the software properly.

Perhaps the most compelling argument against increasing product liability for software and systems vendors is the potential for adverse impacts on the dynamic software industry, where products come quickly to the market and advances are continually made—both of which are major consumer benefits.
Innovation is frequently supported by venture capital, and imposition of heavy warranty liability can chill the flow of capital and restrict the introduction of new products or the proliferation of new ventures. Even when raising capital is not an issue, risk aversion itself can discourage innovation. In either case, the increased business risk to the vendor is reflected in higher product prices to the consumer, which in turn may mean that fewer consumers benefit from a given piece of software.

Toward Equitable Allocation of Liability

The possible adverse consequences of holding software and system vendors to a higher standard of care must be carefully weighed against the potential benefits. As more powerful and more highly
interconnected systems become more widespread, there will be increasing concern that the current allocation of the risk of software failure is too one-sided for an information society, at least for off-the-shelf software. The industry is sufficiently mature and verification tools and methodologies are sufficiently well understood today that total insulation of the industry from the consequences of software failure can no longer be justified. Operating system software and the major off-the-shelf applications software packages are produced by companies with a business base substantial enough to support quality assurance programs that would yield safer and more secure software; such programs could also reduce any liability risk to manageable proportions.

As it is, vendors have already begun programs to make sure that their own development and production efforts are free of contamination from viruses. IBM, for example, set up its High-Integrity Computing Laboratory for this purpose (Smith, 1989; committee briefing by IBM), and ADAPSO, a trade association, has been promoting such efforts for its constituent software and services companies (Landry, 1990). Similarly, vendors do, to varying degrees, notify users of security-related flaws. For example, Sun Microsystems recently announced the Customer Warning System for handling security incidents31 (Ulbrich and Collins, 1990).

Shifting more (not all) risk to the vendors would result in greater care being taken in the production and testing of software. The British move to require greater testing of safety-relevant software illustrates that these concerns are not just local, but are in fact relevant to a worldwide marketplace. The resulting increased use of verification techniques would not only improve the level of software trustworthiness in the most general sense, but would also necessarily improve the level of trust in the specific information security context.
(See Chapter 4's "Relating Specifications to Programs" and "Formal Specification and Verification.") The national interest in the trustworthiness of software is sufficiently strong that Congress should review this question to determine (1) whether federal law is required (or whether state efforts are adequate) and (2) to what extent risks that can be averted through safer software should be shifted from user to vendor. Equitable risk allocation, which reasonably balances vendor and user interests, is achievable and will advance the national interest. The development of GSSP, as recommended in Chapters 1 and 2, would provide a positive force to balance and complement the negative force of product liability. GSSP would provide a clear foundation of expectation that customers may count on as standards of performance and vendors may regard as standards of adequacy, against which
legal claims could be judged. Interestingly, a similar notion was expressed by insurance industry representatives interviewed for this study, who suggested that some form of standard that could be harmonized with accounting standards would be a potent mechanism to improve security controls in the business community. Their rationale was that such standards would raise the profile of the issue with corporate directors and officers, who are liable to owners (stockholders, partners, and so on).32

The committee recognizes that security is not the only property involved in the issue of product liability; safety is obviously another such property. However, as security is a subliminal property of software, it is here that the gap between unspoken customer expectations and unarticulated vendor intentions looms largest. Advances in articulating GSSP would go far toward clarifying the entire field. Both customers and vendors stand to gain.

APPENDIX 6.1— EXPORT CONTROL PROCESS

National security export controls (hereafter, "export controls") limit access in other countries to technologies and products that could be valuable for military purposes. The control process, which varies by type of product, involves a list of controlled items and an administrative structure for enforcing controls on the export of listed items. Controlled exports do not mean no exports. Rather, these exports are controlled in terms of destination and, in some cases, volume or end use, with restrictions specified as part of the export license. It should be noted that even the tightest export controls do not totally block access to protected technology.

Four organizations have been the principal influences on the export control policy and process of the United States, namely the Coordinating Committee for Multilateral Export Control (CoCom), in which the United States participates, and the U.S. Departments of State, Commerce, and Defense.
Each of these organizations has its own policies and jurisdictions for export control, but all the organizations interact heavily with regard to common pursuits (NAS, 1987). CoCom, a multilateral effort to curb the flow of technology from the West to the Soviet Union and what have been its allies in the East Bloc, has included representatives from Japan, Australia, and all NATO countries except Iceland. Products controlled by CoCom are listed on the Industrial List (IL). The Department of State administers the International Traffic in Arms Regulations (ITAR; 22 CFR, Parts 120–130) through its Center for Defense Trade (formerly the Office of Munitions Control) in consultation with the Department of Defense.
That office maintains the U.S. Munitions Control List, which includes technologies and products representing an obvious military threat, such as weaponry. Finally, the Department of Commerce administers the Export Administration Regulations (EAR; 15 CFR, Parts 368–399), in consultation with the Department of Defense. Commerce maintains the Control List (CL), which has classified elements, and the Commodity Control List (CCL), which is not classified. Both of these lists contain dual-use technologies and products, which have both military and civilian/commercial value, and military-critical technologies that may be treated specially.

Recent developments in Eastern Europe have placed pressure on CoCom as an institution and on the United States, which is generally more conservative than other CoCom nations about controlling exports of dual-use technology. Even the topic of trade with other CoCom countries has stirred substantial debate within the U.S. government, some centering on how products are labeled (the most publicized controversy pertains to defining what is a supercomputer) and where they are listed, and much on whether a product should be listed at all.

Exports of general- and special-purpose computer systems are controlled if the systems offer one or more of three qualities: high performance (potentially useful in such strategic applications as nuclear bomb development or war gaming), specific military-critical functionality (e.g., radiation hardening and ruggedness or applications like on-board fire control), or the capability to produce high-performance or military-critical computer systems (e.g., sophisticated computer-aided design and manufacturing systems).
Exports of supercomputers to countries other than Canada and Japan are subject to case-by-case review, which can take months, and require special conditions associated with the sale, installation, and operation of the supercomputer, so-called supercomputer safeguard plans.

APPENDIX 6.2— INSURANCE

Insurance is a means for sharing a risk. The insured pays the insurer (up front, through a premium, and/or when receiving reimbursement, through a deductible or other copayment) to share his risks; if an adverse event takes place, the insurance policy provides for payment to compensate for the damage or loss incurred. The business community already buys insurance for risks ranging from fire to theft as well as for protection against employee dishonesty (bonding). To be insurable requires the following:
- A volume base for risk spreading (insurance on communication satellites has a very small volume, something that contributes to its cost);
- An establishable proof of loss;
- A quantifiable loss (e.g., the value of mailing lists and research data cannot be consistently and objectively quantified, according to insurance representatives);
- An ability to tie a loss to a time frame of occurrence;
- An ability to credit responsibility for the loss; and
- A knowable loss base.

With these elements, a purchaser of insurance can effectively transfer risk to a carrier and prove a loss. Risks that do not satisfy these elements include inherent business risks. Another factor to consider is the nature of the consequences, which influences the liability base: a computer-aided manufacturing program controlling a robot may put lives at risk, whereas a number-crunching general ledger program will not.

The earliest insurance offerings covering computer environments were directed at third-party providers of computer services (e.g., service bureaus) concerned about direct and contingent liability associated with losses to their customers. Also leading the computer insurance market were banks—driven by state and federal auditors' concerns—and electronic funds transfer (EFT) systems, ranging from those established by the Federal Reserve (e.g., Fedwire) to the automated clearinghouses, for which there was legislative impetus behind the establishment and use of insurance coverage. This governmental urging of provisions for insurance against computer system risks was initially resisted by the insurance industry, which claimed not to understand the risks. Insurance for banks and other financial services institutions is relatively well developed, reflecting the size of the potential loss, the ease with which the risk can be underwritten, and regulations requiring such protection.
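The risk-transfer arithmetic described above (premium paid up front, deductible and policy limit bounding what the carrier pays) can be made concrete with a minimal sketch. All figures and function names below are hypothetical illustrations, not actual underwriting practice, which rests on actuarial data and expense loadings.

```python
# Illustrative sketch of insurance risk sharing: premium, deductible, limit.
# Figures are hypothetical; real underwriting adds expense and risk loadings.

def insurer_payout(loss: float, deductible: float, limit: float) -> float:
    """Carrier pays the loss above the deductible, capped at the policy limit."""
    return min(max(loss - deductible, 0.0), limit)

def expected_annual_loss(probability: float, payout: float) -> float:
    """Expected yearly cost to the carrier of a single adverse event."""
    return probability * payout

# Hypothetical example: a 2% annual chance of a $500,000 system loss,
# insured with a $50,000 deductible and a $1,000,000 policy limit.
p, loss = 0.02, 500_000.0
deductible, limit = 50_000.0, 1_000_000.0

payout = insurer_payout(loss, deductible, limit)   # carrier's share of one loss
retained = loss - payout                           # insured's retained share
fair_premium = expected_annual_loss(p, payout)     # break-even premium, before loading

print(payout, retained, fair_premium)
```

The insured retains the deductible and any loss above the limit; the premium the carrier must charge to break even is the event probability times its expected payout, which is why the insurability criteria above (quantifiable loss, knowable loss base, volume for risk spreading) matter: without them, neither the probability nor the payout can be estimated.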
Much computer-related insurance for the banking industry, for example, builds on a historic base in bonds that protect against employee dishonesty, since most crimes against banks are perpetrated on the inside or with insider participation. Outside of financial services, the insurance picture is mixed and less mature. There is some coverage against computer system mishaps available through employee bonding and property and casualty coverage. It is easiest to insure the tangible elements of a computer system. By contrast, coverage may be available for restoring a database, but not for reconstructing it from scratch. Another basis for insurance is found in business interruption coverage. Thus recovery of costs for system downtime is available. A new development in the
1980s was the introduction of limited coverage against external intrusions and associated offenses, including tampering, extortion, and others. Although the insurance described above protects the system-using organization, insurance representatives suggest there is a growing potential for coverage of errors and omissions on the part of the vendor, arising from the development of hardware, firmware, and software, to protect the vendor against liability claims. Such coverage appears targeted to developers of such complex products as engineering design software.

NOTES

1. Note that add-on controls are futile unless the user has full control over all the software on a machine.

2. A glaring example of a facility that can compromise security is "object reuse," which never was an issue in Unix, because it could not happen. Today's non-Unix systems from Digital Equipment Corporation and IBM still allow object reuse.

3. As noted by one analyst, Unix was originally designed by programmers for use by other programmers in an environment fostering open cooperation rather than privacy (Curry, 1990).

4. The fact that consumers are preoccupied with threats posed by insiders and have problems today that could benefit from better procedures and physical security measures, let alone technical measures, is discussed in the section titled "Consumer Awareness."

5. For example, the most recent of a series of intra-governmental advisories is the Office of Management and Budget's (OMB's) Guidance for Preparation of Security Plans for Federal Computer Systems that Contain Sensitive Information (OMB, 1990). This bulletin addresses the security planning process required by the Computer Security Act of 1987 (P.L. 100-235). It is expected to be superseded by a revision to OMB Circular Number A-130 and incorporated into future standards or guidelines from the National Institute of Standards and Technology.

6.
An examination of this challenge for computing technologies generally can be found in a previous Computer Science and Technology Board report, Global Trends in Computer Technology and Their Impact on Export Control (NRC, 1988a).

7. There may also have been instances in which software implementations of DES or RSA were sent abroad by oversight or because the transmitter of the implementation was unaware of the law. The physical portability of software makes such slips almost inevitable.

8. Note that the United Kingdom and Australia set the threshold at B2 or the equivalent.

9. Note that in this time period only one A1 product has been on the evaluated product list. The information on approval rates came from NSA briefings for the committee.

10. This point was made by Digital Equipment Corporation in July 1990 testimony before the House Subcommittee on Transportation, Aviation, and Materials.

11. For example, observers of the market for disaster recovery services have noted that until a 1986 fire in Montreal, a principal marketing tool was a 1978 study assessing how long businesses could survive without their data processing operations; more recent fires (affecting the Hinsdale, Ill., central office for telephone service and lower
Manhattan's business district) have also provided dramatic evidence of the consequences of system mishaps (Datamation, 1987).

12. This situation and a variant, in which bad products effectively drive out good ones, is not unique (see Akerlof, 1970).

13. A security officer may even occasionally need to decrypt an encrypted file that was encrypted by a suspect using a key known only to the suspect; the security officer may have very mixed feelings about the optimum strength of an encryption method that is available for routine use in protecting the company's data.

14. These issues have been actively discussed on electronic bulletin boards and forums (e.g., RISKS, CuD, the Well) and in the general and business press with the publicized launch of the Electronic Frontier Foundation in response to recent investigations and prosecutions.

15. "Insurance as a Market Lever" and Chapter Appendix 6.2 draw on discussions with insurance industry representatives, including carrier and agent personnel.

16. Insurance industry representatives voice concern about technology outpacing underwriting: if a policy is written at one point in time, will the language and exclusions prove appropriate when a claim is filed later, after new technology has been developed and introduced?

17. Indeed, there is some evidence that universities should do even more. For example, based on a recent survey, John Higgins observed the following:

    It seems evident that a substantial majority of current university graduates in computer science have no formal introduction to the issues of information security as a result of their university training.… While it is unlikely that every institution would develop a variety of courses in security, it is important that some institutions do. It establishes and helps to maintain the credibility of the subject and provides a nucleus of students interested in security topics.
    The most favorable interpretation of the survey seems to suggest that at present there are at best only two or three such universities in the nation. (Higgins, 1989, p. 556)

18. RISKS, formally known as the Forum on Risks to the Public in the Use of Computers and Related Systems, was established in August 1985 by Peter G. Neumann as chair of the Association for Computing Machinery's (ACM) Committee on Computers and Public Policy. It is an electronic forum for discussing issues relating to the use and misuse of computers in applications affecting our lives. Involving many thousands of people around the world, RISKS has become a repository for anecdotes, news items, and assorted comments thereon. The most interesting cases discussed are included in the regular issues of ACM's Software Engineering Notes (see Neumann, 1989). An updated index to about a thousand cases is under development.

19. The relative reluctance of victims to report computer crimes was noted to the committee by prosecutors and insurance representatives.

20. Experience shows that many users do not repair flaws or install patches (software to correct a flaw) even given notification. Since penetrators have demonstrated the ability to "reverse engineer" patches (and other remedies) and go looking for systems that lack the necessary corrections, the proper strategy for handling discovered flaws is not easy to devise.

21. Computer hardware, for example, must meet the Federal Communications Commission's regulations for electronic emanations, and European regulations on ergonomic and safety qualities of computer screens and keyboards have affected the appearance and operation of systems worldwide.

22. This point was made by Digital Equipment Corporation in July 1990 testimony before the House Subcommittee on Transportation, Aviation, and Materials.
23. Vendors also argue that some consumers may prefer products with little security, but the prevalent lack of consumer understanding of the choices casts doubt on this explanation for the weak market.

24. For example, rope manufacturers use a system of standardized strength ratings, since one cannot tell at the point of manufacture whether a rope will be used to tie packages or to suspend objects, for example. Of course, some highly specialized rope, such as climbing lines, carries extra assurance, which comes with added cost.

25. Michael Agranoff observes, "Such standards would not eliminate computer abuse, especially by 'insiders'; they would not eliminate computer-related negligence. They would, however, provide a 'curb on technology,' a baseline from which to judge both compensation for victims of computer abuse and the efficacy of measures to combat computer crime" (Agranoff, 1989, p. 275).

26. The terms and conditions governing the acquisition of operating-system and off-the-shelf software have many of the attributes of an adhesion contract (although whether there is a contract at all is open to debate). An adhesion contract is a standardized contract form offered on a "take-it-or-leave-it" basis, with no opportunity to bargain. The prospective buyer can acquire the item only under the stated terms and conditions. Of course, the "buyer" has the option of not acquiring the software, or of acquiring a competing program that is most likely subject to the same or a similar set of terms and conditions, but often the entire industry offers the item only under a similar set of terms and conditions.

27. The UCC upholds express warranties in Section 2-313. An express warranty is created when the seller affirms a fact or promise, describes the product, or provides a sample or model, and the buyer relies on the affirmation, description, sample, or model as part of the basis of the bargain.
By their very nature, express warranties cannot be disclaimed. The UCC will not allow a vendor to make an express promise that is then disclaimed. Language that cannot be reasonably reconciled is resolved in favor of the buyer.

28. Most recently, Logisticon, Inc., apparently gained telephone access to Revlon, Inc.'s computers and disabled software it supplied. Revlon, claiming dissatisfaction with the software, had suspended payments. While Logisticon argued it was repossessing its property, Revlon suffered a significant interruption in business operations and filed suit (Pollack, 1990).

29. Although it would be inequitable to impose liability for clearly unintended uses in unintended operating environments, a vendor should not escape all liability for breach of warranty simply because a product can be used across a wide spectrum of applications or operating environments.

30. That superior knowledge is an argument for promoting the technical steps discussed in the section titled "Consumer Awareness," such as shipping systems with security features turned on.

31. The Customer Warning System involves a point of contact for reporting security problems; proactive alerts to customers of worms, viruses, or other security holes; and distribution of fixes.

32. The Foreign Corrupt Practices Act is one step toward linking accounting and information security practices; it requires accounting and other management controls that security experts interpret as including computer security controls (Snyders, 1983). Also, note that an effort is under way on the part of a group of security practitioners to address the affirmative obligations of corporate officers and directors to safeguard information assets (personal communication from Sandra Lambert, July 1990).