4
Programming Methodology

This chapter discusses issues pertinent to producing all high-quality software and, in particular, issues pertinent primarily to producing software designed to resist attack. Both application and system-level software are considered. Although there are differences between how the two are produced, the similarities dominate the differences.

Of the several factors that govern the difficulty of producing software, one of the most important is the level of quality to be attained, as indicated by the extent to which the software performs according to expectations. High-quality software does what it is supposed to do almost all the time, even when its users make mistakes. For the purposes of this study, software is classified according to four levels of quality: exploratory, production quality, critical, and secure. These levels differ according to what the software is expected to do (its functionality) and the complexity of the conditions under which the software is expected to be used (environmental complexity).

Exploratory software does not have to work; the chief issue is speed of development. Although it has uses, exploratory software is not discussed in this report.

Production-quality software needs to work reasonably well most of the time, and its failures should have limited effects. For example, we expect our spreadsheets to work most of the time but are willing to put up with occasional crashes, and even with occasional loss of data. We are not willing to put up with incorrect results.

Critical software needs to work very well almost all of the time, and certain kinds of failures must be avoided. Critical software is used in trusted and safety-critical applications, for example, medical instruments, where failure of the software can have catastrophic results.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
From Computers at Risk: Safe Computing in the Information Age. Copyright © National Academy of Sciences. All rights reserved.





In producing critical software the primary worries are minimizing bugs in the software and ensuring reasonable behavior when nonmalicious users do unexpected things or when unexpected combinations of external events occur. Producing critical software presents the same problems as producing production-quality software, but because the cost of failure is higher, the standards must be higher. In producing critical software the goal is to decrease risk, not to decrease cost.

Secure software is critical software that needs to be resistant to attack. Producing it presents the same problems as does producing critical software, plus some others. One of the key problems is analyzing the kinds of attacks that the software must be designed to resist. The level and kind of threat have a significant impact on how difficult the software is to produce. Issues to consider include the following:

- To what do potential attackers have access? The spectrum ranges from the keyboard of an automated teller machine to the object code of an operational system.
- Who are the attackers and what resources do they have? The spectrum ranges from a bored graduate student, to a malicious insider, to a knowledgeable, well-funded, highly motivated organization (e.g., a private or national intelligence-gathering organization).
- How much and what has to be protected?

In addition, the developers of secure software cannot adopt the various probabilistic measures of quality that developers of other software often can. For many applications, it is quite reasonable to tolerate a flaw that is rarely exposed and to assume that its having occurred once does not increase the likelihood that it will occur again (Gray, 1987; Adams, 1984). It is also reasonable to assume that logically independent failures will be statistically independent and not happen in concert. In contrast, a security vulnerability, once discovered, will be rapidly disseminated among a community of attackers and can be expected to be exploited on a regular basis until it is fixed.

In principle, software can be secure without being production quality; in practice, low quality undermines security. The most obvious problem is that software that fails frequently will result in denial of service. Such software also opens the door to less obvious security breaches. A perpetrator of an intelligence-grade attack (see Appendix E, "High-grade Threats") wants to avoid alerting the administrators of the target system while conducting an attack; a system with numerous low-level vulnerabilities provides a rich source of false alarms and diversions that can be used to cover up the actual attack or to provide windows of opportunity (e.g., when the system is recovering from a crash) for the subversion of hardware or software.

Low-quality software also invites attack by insiders, by requiring that administrative personnel be granted excessive privileges of access to manually repair data after software or system failures.

Another important factor contributing to the difficulty of producing software is the set of performance constraints the software is intended to meet, that is, constraints on the resources (usually memory or time) the software is permitted to consume during use. At one extreme, there may be no limit on the size of the software, and denial of service is considered acceptable. At the other extreme is software that must fit into limited memory and meet "hard" real-time constraints. It has been said that writing extremely efficient programs is an exercise in logical brinkmanship. Working on the brink increases the probability of faults and vulnerabilities. If one must work on the brink, the goals of the software should be scaled back to compensate.

Perhaps the most important factor influencing the difficulty of producing software is size. Producing big systems, for example, a global communication system, is qualitatively different from producing small ones. The reasons for this are well documented (NRC, 1989a).

In summary, simultaneous growth in level of quality, performance constraints, functionality, and environmental complexity results in a corresponding dramatic increase in the cost and risk of producing, and the risk of using, the software. There is no technology available to avoid this, nor is research likely to provide us with such a technology in the foreseeable future. If the highest possible quality is demanded for secure software, something else must give. Because security cannot be attained without quality and the environment in which a system is to run is usually hard to control, typically one must either remove performance constraints (perhaps by allocating extra resources) or reduce the intended functionality.

SOFTWARE IS MORE THAN CODE

Good software is more than good code. It must be accompanied by high-quality documentation, including a requirements document, a design document, carefully written specifications for key modules, test plans, a maintenance plan, and so on. Of particular importance for secure software is a guide to operations. More comprehensive than a user's manual, such a guide often calls for operational procedures that must be undertaken by people other than users of the software, for example, by system administrators. In evaluating software one must consider what it will do if the instructions in the guide to operations are followed, and what it will do if
they are not. One must also evaluate how likely it is that capable people with good intentions will succeed in following the procedures laid down in the guide to operations.

For critical and secure software, a guide to operations is particularly important. In combination with the software it must provide for the following:

- Auditing: What information is to be collected, how it is to be collected, and what is to be done with it must be described. Those who have penetrated secure software cannot be expected to file a bug report, and so mechanisms for detecting such penetrations are needed. Reduction of raw audit data to intelligible form remains a complex and expensive process; a plan for secure software must include resources for the development of systems to reduce and display audit data.
- Recovery: Producing fault-free software of significant size is nearly impossible. Therefore one must plan for dealing with faults, for example, by using carefully designed recovery procedures that are exercised on a regular basis. When they are needed, it is important that such procedures function properly and that those who will be using them are familiar with their operation. If at all possible, manual procedures should be in place to maintain operations in the absence of computing. This requires weighing the risk of hardware or software crashes against the benefits when everything works.
- Operation in an emergency mode: There may be provisions for bypassing some security features in times of extreme emergency. For example, procedures may exist that permit "breaking in" to protected data in critical circumstances such as incapacitation or dismissal of employees with special authorizations. However, the system design should treat such emergencies explicitly, as part of the set of events that must be managed by security controls.

Software should be delivered with some evidence that it meets its specifications (assurance). For noncritical software the good reputation of the vendor may be enough. Critical software should be accompanied by documentation describing the analysis the software has been subjected to. For critical software there must be no doubt about what configurations the conclusions of testing and validation apply to and no doubt that what is delivered is what was validated. Secure software should be accompanied by instructions and tools that make it possible to do continuing quality assurance in the field. Software delivered without assurance evidence may provide only illusory security. A system that is manifestly nonsecure will generally inspire caution on the part of its users; a system that provides illusory security will inspire trust and then betray that trust when attacked.

Arrangements should be made to have the assurance evidence reviewed by a team of experts who are individually and organizationally independent from the development team.

Software should be delivered with a plan for its maintenance and enhancement. This plan should outline how various expected changes might be accomplished and should also make clear what kinds of changes might seriously compromise the software.

Secure software must be developed under a security plan. The plan should address what elements of the software are to be kept confidential, how to manage trusted distribution of software changes, and how authorized users can be notified of newly discovered vulnerabilities without having that knowledge fall into the wrong hands.

SIMPLER IS BETTER

The best software is simple in two respects. It has a relatively simple internal structure, and it presents a relatively simple interface to the environment in which it is embedded. Before deciding to incorporate a feature into a software system, one should attempt to understand all the costs of adding that feature and do a careful cost-benefit analysis. The cost of adding a feature to software is usually underestimated. The dominant cost is not that of the feature per se, but that of sorting out and controlling the interactions of that feature with all the others. In particular, underestimating cost results from a failure to appreciate the effects of scale. The other side of the coin is that the value of a new feature is usually overestimated. When features are added, a program becomes more complex for its users as well as for its developers. Furthermore, the interactions of features may introduce unexpected security risks. It is axiomatic among attackers that one does not break components but rather systems, by exploiting unanticipated combinations of features. It cannot be emphasized enough that truly secure systems are modest, straightforward, and understandable.

The best designs are straightforward. The more intricate the design and the greater the number of special-case features to accomplish a given functionality, the greater the scope for errors. Sometimes simple designs may be (or may appear to be) unacceptably inefficient. This can lead developers to compromise the structure or integrity of code or to employ intricate fast algorithms, responses that almost always make the software harder to produce and less reliable, and often make it more dependent on the precise characteristics of the input. Better hardware and less ambitious specifications deserve strong consideration before one ventures into such an exercise in software
virtuosity. Such trade-offs deserve special attention by designers of secure systems, who too often accept the almost impossible requirements to preserve the full performance, function, and hardware of predecessor systems.

THE ROLE OF PROGRAMMING LANGUAGES

An important threat to all software is bugs that have been accidentally introduced by programmers. It has been clearly demonstrated that higher-level programming languages tend to reduce the number of such bugs, for the following reasons:

- Higher-level languages reduce the total amount of code that must be written.
- Higher-level languages provide abstraction mechanisms that make programs easier to read. All higher-level languages provide procedures. The better languages provide mechanisms for data abstraction (e.g., packages) and for control abstraction (e.g., iterators).
- Higher-level languages provide checkable redundancy, such as type checking that can turn programs with unintended semantics into illegal programs that are rejected by the compiler. This helps turn errors that would otherwise occur while the program is running into errors that must be fixed before the program can run.
- Higher-level languages can eliminate the possibility of making certain kinds of errors. Languages with automatic storage management, for example, greatly reduce the likelihood of a program trying to use memory that no longer belongs to it.

Much useful analysis can be done by the compiler, but there is usually ample opportunity to use other tools as well. Sometimes these tools—for example, various C preprocessors—make up for deficiencies in the programming language. Sometimes they enforce coding standards peculiar to an organization or project, for example, the standard that all types be defined in a separate repository. Sometimes they are primitive program verification systems that look for anomalies in the code, for example, code that cannot be reached.
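The value of checkable redundancy can be shown with a small sketch. The `Dollars` and `Seconds` types below are invented for illustration: wrapping raw numbers in distinct types turns a unit-mixing mistake, which would otherwise run with unintended semantics, into an error reported immediately (a static checker such as mypy would flag the same mistake before the program runs at all).

```python
# Illustrative sketch: distinct types make a unit-mixing bug illegal
# instead of letting it run with unintended semantics.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dollars:
    amount: float
    def __add__(self, other: "Dollars") -> "Dollars":
        if not isinstance(other, Dollars):
            raise TypeError("can only add Dollars to Dollars")
        return Dollars(self.amount + other.amount)

@dataclass(frozen=True)
class Seconds:
    value: float

balance = Dollars(100.0) + Dollars(25.0)   # intended use works
print(balance.amount)                      # 125.0

try:
    Dollars(100.0) + Seconds(30.0)         # unintended semantics...
except TypeError as err:
    print("rejected:", err)                # ...rejected before it can corrupt data
```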
A potential drawback to using higher-level programming languages in producing secure software is that they open up the possibility of certain kinds of "tunneling attacks." In a tunneling attack, the attacker attempts to exploit vulnerabilities at a level of abstraction beneath that at which the system developers were working. To avoid such attacks one must be able to analyze the software beneath the level of the source language. Higher-level languages often have large run-time packages (e.g., the Ada Run-Time Support Library). These run-time
packages are often provided as black boxes by compiler vendors and are not subject to the requirements for independent examination and development of assurance evidence that the rest of the software must satisfy. They are, therefore, often a weak link in the security chain.

THE ROLE OF SPECIFICATIONS

Specifications describe software components. They are written primarily to provide precise, easy-to-read, module-level documentation of interfaces. This documentation facilitates system design, integration, and maintenance, and it encourages reuse of modules. The most vexing problems in building systems involve overall system organization and the integration of components. Modularity is the key to effective integration, and specifications are essential for achieving program modularity. Abstraction boundaries allow one to understand programs one module at a time. However, an abstraction is intangible. Without a specification, there is no way to know what the abstraction is or to distinguish it from one of its implementations (i.e., executable code). The process of writing a specification clarifies and deepens understanding of the object being specified by encouraging prompt attention to inconsistencies, incompletenesses, and ambiguities.

Once written, specifications are helpful to auditors, implementors, and maintainers. A specification describes an agreement between clients and providers of a service. The provider agrees to write a module that meets the specification. The client agrees not to rely on any properties of the module that are not guaranteed by the specification. Thus specifications provide logical firewalls between providers and clients of abstractions. During system auditing, specifications provide information that can be used to generate test data, build stubs, and analyze information flow. During system integration they reduce the number and severity of interfacing problems by reducing the number of implicit assumptions. Specifications are usually much easier to understand than are implementations—thus combining specifications is less work than combining implementations. By relying only on those properties guaranteed by a specification, one makes the software easier to maintain because it is clear what properties must be maintained when an abstraction or its implementation is changed. By distinguishing abstractions from implementations, one increases the probability of building reusable components.

One of the most important uses of specifications is design verification. Getting a design "right" is often much more difficult than implementing the design.1 Therefore, the ease and precision with which conjectures about a design can be stated and checked are of primary importance. The kinds of questions one might ask about a design specification fall along a spectrum between two extremes: general questions relevant to any specification and problem-specific questions dealing with a particular application. The general questions usually deal with inconsistency (e.g., Does the specification contradict itself?) or incompleteness (e.g., Have important issues not been addressed?). Between the two extremes are questions related to a class of designs, for example, generic security questions. Design verification has enjoyed considerable success both inside and outside the security area. The key to this success has been that the conjectures to be checked and the specifications from which they are supposed to follow can both be written at the same relatively high level of abstraction.

RELATING SPECIFICATIONS TO PROGRAMS

The preceding discussions of the roles of programming languages and specifications have emphasized the importance of separately analyzing both specifications and programs. Showing that programs meet their specifications is approached mainly by the use of testing and verification (or proving). Testing is a form of analysis in which a relatively small number of cases are examined. Verification deals with a potentially unbounded number of cases and almost always involves some form of inductive reasoning, either over the number of steps of a program (e.g., one shows that if some property holds after the program has executed n steps, it will also hold after n + 1 steps) or over the structure of a data type (e.g., one shows that if some property holds for the first n elements of an array, it will also hold for the first n + 1 elements). The purpose of both kinds of analysis is to discover errors in programs and specifications, not to certify that either is error-free. Proponents of testing have always understood this. Testing cannot provide assurance that a property holds—there are simply too many cases to be examined in any realistic system. In principle, verification can be used to certify that a program satisfies its specification. In practice, this is not the case. As the history of mathematics makes clear, even the most closely scrutinized proofs may be flawed.

Testing techniques can be grouped roughly into three classes: (1) random testing involves selection of data across the environment, often with some frequency distribution; (2) structural testing involves
generating test cases from a program itself, forcing known behavior onto the program; and (3) functional testing uses the specified functions of a program as the basis for defining test cases (Howden, 1987; Miller and Howden, 1981). These techniques are complementary and should be used in concert.

It is important that verification not be equated with formal proofs. Informal but rigorous reasoning about the relationships between implementations and specifications has proved to be an effective approach to finding errors (Solomon, 1982). People building concurrent programs frequently state key invariants and make informal arguments about their validity (Lamport, 1989; Wing, 1990).

Common sense and much empirical evidence make it clear that neither testing nor verification by itself is adequate to provide assurance for critical and secure software. In addition to being necessarily incomplete, testing is not a cheap process, often requiring that months be spent in grinding out test cases, running the system on them, and examining the results. These tests must be repeated whenever the code or operating environment is changed (a process called regression testing). Testing software under actual operating conditions is particularly expensive.2 Verification relies on induction to address multiple cases at once. However, discovering the appropriate induction hypotheses can be a difficult task. Furthermore, unless the proofs are machine checked they are likely to contain errors, and, as discussed in the following section, large machine-checked proofs are typically beyond the current state of the art.

Many views exist on how testing and proving can be combined. The IBM "cleanroom" approach (Linger and Mills, 1988; Selby et al., 1987) uses a form of design that facilitates informal proofs during an inspection process combined with testing to yield statistical evidence.
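A toy example, constructed here rather than taken from the report, shows the inductive style in miniature: an invariant holds before the loop (base case), each iteration preserves it (inductive step), so it holds at exit for any n. A prover would establish this once and for all; the run-time assertions below merely sample it, which is exactly the testing/proving distinction drawn above.

```python
# Toy illustration of verification by induction: a loop invariant that is
# established before the loop and preserved by every iteration.
def sum_first(a, n):
    """Return a[0] + ... + a[n-1]."""
    total = 0
    i = 0
    # Invariant: total == sum of the first i elements of a.
    assert total == sum(a[:i])      # base case (holds for i == 0)
    while i < n:
        total += a[i]
        i += 1
        assert total == sum(a[:i])  # inductive step: preserved by the body
    # At exit i == n, so the invariant gives total == sum(a[:n]).
    return total

print(sum_first([2, 3, 5], 3))  # 10
```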
Some parts of a system may be tested and others proved. The basic technique of proving—working a symbolic expression down a path of the program—may be used in either a testing or proving mode. This is especially applicable to secure systems when the symbolic expression represents an interesting security infraction, such as penetrating a communication system or faking an encryption key. Inductive arguments may be used to show that certain paths cannot be taken, thereby reducing the number of cases to be analyzed.

Real-time systems pose special problems. The current practice is to use information gathered from semiformal but often ad hoc analysis (e.g., design reviews, summation of estimated times for events along specific program paths, and simulation) to determine whether an implementation will meet its specified time deadlines with an acceptable degree of probability. More systematic methods for analyzing functional and performance properties of real-time software systems are needed.

FORMAL SPECIFICATION AND VERIFICATION

In the computer science literature, the phrase "formal method" is often used to refer to any application of a mathematical technique to the development or analysis of hardware or software (IEEE, 1990b,c). In this report, "formal" is used in the narrower sense of "subject to symbolic reasoning." Thus, for example, a formal proof is a proof that can, at least in principle, be checked by machine. The process of formally verifying that a program is correct with respect to its specification involves both generating and proving verification conditions. A verification-condition generator accepts as input a piece of code and formal specifications for that code, and then outputs a set of verification conditions, also called conjectures or proof obligations. These verification conditions are input to a theorem prover in an attempt to prove their validity using the underlying logic. If the conditions are all proved, then the program is said to satisfy its specification.

The security community has been interested for some time in the use of formal verification to increase confidence in the security of software (Craigen and Summerskill, 1990). While some success has been reported (Haigh et al., 1987), on the whole formal program verification has not proved to be a generally cost-effective technique. The major obstacles have been the following (Kemmerer, 1986):

- The difficulty of crossing the barrier between the level of abstraction represented by code and the level of abstraction at which specifications should be written.
- Limits on theorem-proving technology. Given the current state of theorem-proving technology, program verification entails extensive user interaction to prove relatively simple theorems.
- The lack of well-engineered tools.

The last obstacle is certainly surmountable, but whether the first two can be overcome is subject to debate. There are fundamental limits to how good theorem provers can become. The basic problem is undecidable, but that is not relevant for most of the proof obligations that arise in program verification. A more worrisome fact is that reasoning about many relatively simple theories is inherently expensive,3 and many of the formulas that arise in practice take a long time to simplify. Despite these difficulties, there has been enough progress in mechanical theorem proving in the last decade (Lindsay, 1988) to give some cause for optimism.

Whether or not the abstraction barrier can be gracefully crossed is the most critical question. The problem is that the properties people care about, for example, authentication of users, are most easily stated at a level of abstraction far removed from that at which the code is written. Those doing formal program verification spend most of their time mired in code-level details, for example, proving that two variables do not refer to the same piece of storage, and in trying to map those details onto the properties they really care about.

A formal specification is a prerequisite to formal program verification. However, as outlined above in the section titled "The Role of Specifications," specifications have an important role that is independent of program verification. The potential advantages of formal over informal specifications are clear: formal specifications have an unambiguous meaning and are subject to manipulation by programs. To fully realize these advantages, one must have access to tools that support constructing and reasoning about formal specifications.

An important aspect of modern programming languages is that they are carefully engineered so that some kinds of programming errors are detected by either the compiler or the run-time system. Some languages use "specs" or "defs" modules (Mitchell et al., 1979), which can be viewed as a first step in integrating formal specifications into the programming process. However, experience with such languages shows that while programmers are careful with those parts (e.g., the types of arguments) that are checked by their programming environment, they are much less careful about those parts (e.g., constraints on the values of arguments) that are not checked. If the latter parts were checked as well, programmers would be careful about them, too.

In formal design verification, designs are expressed in a formal notation that can be analyzed, and formal statements about them can be proved. This process can be used to increase one's confidence that the specifications say "the right thing," for example, that they imply some security property. Organizations building secure systems have made serious attempts to apply formal specification, formal design verification, and formal program verification. This committee interviewed members of several such organizations4 and observed a consistent pattern:

- Writing formal specifications and doing design verification significantly increased people's confidence in the quality of their designs.
- Important flaws were found both during the writing of specifications and during the actual design verification. Although the majority of
OCR for page 102
Computers at Risk: Safe Computing in the Information Age the flaws were found as the specifications were written, the "threat" of design verification was an important factor in getting people to take the specification process seriously. Design-level verification is far more cost-effective than is program-level verification. Writing code-level entry/exit assertions is useful even if they are not verified. Although usable tools exist for writing and proving properties about specifications, better specification languages and tools are needed. More attention needs to be devoted to formalizing a variety of generally applicable security properties that can be verified at the design level. Little is understood about the formal specification and verification of performance constraints. HAZARD ANALYSIS For critical and secure systems, hazard analysis is important. This involves the identification of environmental and system factors that can go wrong and the levels of concern that should be attached to the results. Environmental events include such actions as an operator mistyping a command or an earthquake toppling a disk drive. Systematic hazard analysis starts with a list of such events generated by experts in such domains as the application, the physics of the underlying technology, and the history of failures of similar systems. Each hazard is then traced into the system by asking pertinent questions: Is system behavior defined for this hazard? How will the system actually behave under these conditions? What can be done to minimize the effects of this hazard? Thus hazard analysis is a form of validation in assuring that the environment is well understood and that the product is being built to respond properly to expected events. Many forms of security breaches can be treated as hazards (U.K. Ministry of Defence, 1989b). Physical system safety engineers have long used techniques such as failure-mode effects analysis and fault trees to trace the effects of hazards. 
Software is also amenable to analysis by such techniques, but additional problems arise (Leveson, 1986). First, the sheer complexity of most software limits the depth of analysis. Second, the failure modes of computer-controlled systems are not as intuitive as those for physical systems. By analogy, as radios with analog tuners age, the ability to separate stations slowly decreases. In contrast, radios with digital tuners tend to work well, or not at all.
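The fault-tree technique mentioned above can be mechanized in a few lines. The following is a minimal sketch, not a production tool; the tree structure and the specific hazard names are invented for illustration. It shows how a top-level failure is traced back through AND/OR combinations of basic hazard events:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    """One node of a fault tree: a basic hazard event (LEAF)
    or an AND/OR gate combining subordinate failures."""
    name: str
    gate: str = "LEAF"                 # "LEAF", "AND", or "OR"
    children: List["Node"] = field(default_factory=list)

def occurs(node: Node, active: Set[str]) -> bool:
    """True if this failure can occur, given the basic events
    assumed to be active in the current scenario."""
    if node.gate == "LEAF":
        return node.name in active
    results = [occurs(c, active) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Hypothetical tree: data loss requires both a disk failure and an
# unreadable backup; either hazard below can take the disk down.
disk = Node("disk down", "OR",
            [Node("earthquake topples drive"), Node("head crash")])
top = Node("data loss", "AND", [disk, Node("backup unreadable")])

print(occurs(top, {"head crash"}))                       # False
print(occurs(top, {"head crash", "backup unreadable"}))  # True
```

Walking such a tree answers the questions posed above in a systematic way: every combination of active basic events that makes the top event occur identifies a scenario for which system behavior must be defined and mitigations considered.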

STRUCTURING THE DEVELOPMENT PROCESS

Some of the more popular approaches to software development have aspects that this committee believes are counterproductive. Some approaches encourage organizations to ignore what they already have when starting a new software project. There seems to be an almost irresistible urge to start with a clean slate. While this offers the advantage of not having to live with past mistakes, it offers the opportunity to make a host of new ones. Most of the time, using existing software reduces both cost and risk. If software has been around for some time, those working with it already have a considerable investment in understanding it. This investment should not be discarded lightly. Finally, when the hazards of a system are well understood, it often becomes possible to devise operational procedures to limit their scope.

For similar reasons it is usually prudent to stick to established tools when building software that must be secure. Not only should programmers use programming languages they already understand, but they should also look for compilers that have been used extensively in similar projects. Although this is a conservative approach that over the long haul is likely to impede progress in the state of the art, it is clear that using new tools significantly increases risk.

The development process should not place unnecessary barriers between the design, implementation, and validation stages of an effort to produce software. Particularly dangerous in producing critical or secure software are approaches that rely primarily on ex post facto validation. Software should be evaluated as it is being built, so that the process as well as the product can be examined. The most reliable evaluations involve knowing what goes on while the system is being designed. Evaluation by outsiders is necessary but should not be the primary method of assurance.
Both software and the software development process should be structured so as to include incremental development based on alternation between relatively short design and implementation phases. This style of development has several advantages, among them the following:

It helps to keep designers in touch with the real world by providing feedback.

It tends to lead to a more modular design because designers are encouraged to invent coherent subsystems that can be implemented independently of other subsystems. (That is not to say that the various subsystems do not share code.)

It leads to designs in which piecewise validation (usually by some combination of reasoning and testing) of the implementation is

possible. At the same time it encourages designers to think of planning for validation as part of the design process.

By encouraging designers to think of the design as something that changes rather than as a static entity that is done "correctly" once, it tends to lead to designs that can be more easily changed if the software needs to be modified.

MANAGING SOFTWARE PROCUREMENT

Current trends in software procurement (particularly under government contracts) are rather disturbing:

It has become increasingly common for those buying software to develop an adversarial relationship with those producing it. Recent legislation (the Procurement Integrity Act of 1989, P.L. 100-679, Section 27) could be interpreted as virtually mandating such a relationship. If implemented, this act, which would stop the flow of "inside" information to potential vendors, might have the effect of stopping the flow of all information to potential vendors, thus significantly increasing the number of government software procurements that would overrun costs or fail to meet the customer's expectations.5

Purchasers of software have begun to take an increasingly narrow view of the cost of software. Procurement standards that require buying software from the lowest bidder tend to work against efforts to improve software quality. Likewise, the procurement of software by organizations that are separate from the end users typically leads to an emphasis on reduction of initial cost, with a corresponding increase in life-cycle expense.

Contractors often use their most talented engineers to procure contracts rather than to build systems.

The best software is produced when the customer and vendor have a cooperative relationship. In the beginning, this makes it possible for the customer to be frank about his needs and the vendor to be frank about the difficulty of meeting those needs.
A negotiation can then follow as together the customer and vendor attempt to balance the customer's desires against implementation difficulties. As the project progresses, particularly if it is done in the incremental way suggested above, the vendor and customer must both feel free to revisit the definition of what the software is to do. Such a relationship, while still possible in the private sector, could become difficult in government procurements, owing to the difficulty of determining what is or is not illegal under the Procurement Integrity Act of 1989 (if it is actually implemented). Adaptation to changed circumstances and

redirection of contracts to incorporate lessons learned could be difficult, because the law makes even preliminary discussion of such issues between customer and vendor a criminal offense. Thus increasingly the emphasis in the customer-vendor relationship could be on satisfaction of the letter of the contract. The sense of team ownership of a problem, so essential to success in an intangible field such as software development, would be lost completely.

Procurement standards that require software to be purchased from the lowest bidder often miss the point that the real cost of software is not the initial purchase price. The costs of porting, supporting, maintaining, and modifying the software usually dominate initial production costs. Furthermore, the cost of using software that does not perform as well as it might can often outweigh any savings achieved at the time it is purchased. Finally, buying software from the lowest bidder encourages vendors to take a short-term approach to software development.

In a well-run software organization, every significant software project should have as a secondary goal producing components that will be useful in other projects. This will not happen by accident, since it is more work and therefore more costly to produce components that are likely to be reusable.

SCHEDULING SOFTWARE DEVELOPMENT

One of the reasons that software projects are chronically behind schedule and over budget is that they start with unrealistic requirements, schedules, and budgets. A customer's requirements are often vague wish lists, which are frequently interpreted as less onerous than they in fact prove to be when they are later clarified. The scheduled delivery date for software is often based on marketing considerations (e.g., winning a contract), rather than on a careful analysis of how much work is actually involved.
An unrealistically optimistic schedule has many disadvantages:

Decisions about what the software will do are made under crisis conditions and at the wrong time (near the end of a project) and for the wrong reasons (how hard something will be to implement given the current state of the software, rather than how important it is or how hard it would have been to implement from the starting point).

Programmers who have worked hard trying to meet an impossible schedule will be demoralized when it becomes apparent that the schedule cannot be met. They will eventually begin to believe that missing deadlines is the norm.

The whole development process is distorted. People may spend inordinate amounts of care on relatively unimportant pieces of the

software that happen to be built early in the project and then race through important pieces near the end. Activities like quality assurance that typically occur near the end of the process get compressed and slighted.

Scheduling the development of critical or secure software is somewhat different from the scheduling for other kinds of software. Extra time and money must be allocated for extensive review and analysis. If an outside review is required, this must be taken into account from the beginning, since extra time and money must be allocated throughout the life of the project. One consequence of an extremely careful review process is the increased likelihood of uncovering problems. Time and money must be reserved for dealing with such problems prior to system delivery.

EDUCATION AND TRAINING

There is a shortage of well-qualified people to work on production-quality software. There is a more serious shortage of those qualified to build critical software, and a dramatic shortage of people qualified to build secure software. A discussion of the general shortage of qualified technical people in this country is beyond the scope of this report. However, a few comments are in order about the narrower problems associated with the education and training of those working on critical and secure software.

Setting requirements for, specifying, and building critical software require specialized knowledge not possessed by typical software engineers. Over the years other engineering disciplines have developed specialized techniques (for example, hazard analysis) for analyzing critical artifacts. Such techniques are not covered in most software engineering curricula, nor are they covered by most on-the-job training. Furthermore, working on critical software requires specialized knowledge of what can go wrong in the application domain. Working on secure software requires yet more skills.
Most notably, one must be trained to understand the potential for attack, for software in general and for the specific application domain in particular.

This committee advocates a two-pronged approach to addressing the shortage of people qualified to work on software: a new university-based program in combination with provisions for more on-the-job education as a part of current and future software projects. The university-based program would be aimed at returning graduate-level students who are already somewhat familiar with at least one application area. While the program would cover conventional software engineering, special emphasis would be given to topics related

to critical and secure software. For example, different project management structures would be discussed in terms of their impact on both productivity and security. Discussions of quality assurance might emphasize safety engineering more than would be expected in a traditional software engineering program. Although careful consideration should be given to the specific content of such a curriculum, it seems clear that at least a one-year or perhaps even a two-year program is needed. Such a program could best be developed at universities with strong graduate engineering and business programs. The committee envisions as an initial step approximately three such programs, each turning out perhaps 20 people a year. Over time, it would be necessary (and probably possible) to increase the number of graduates. Developing such a program would not be inexpensive: the committee estimates that the cost would be on the order of $1 million.

Given the current shortage and the time it will take to establish university programs that can increase the supply of qualified software engineers, managers of large security-related development efforts should deal explicitly with the need to educate project members. Both time and money for this should appear in project budgets.

MANAGEMENT CONCERNS IN PRODUCING SECURE SOFTWARE

Managing a project to produce secure software requires all the basic skills and discipline required to manage any substantial project. However, production of secure software typically differs from production of general high-quality software in one area: the heavy emphasis placed on assurance, and in particular on the evaluation of assurance conducted by an independent team. Perhaps the most difficult, and certainly the most distinctive, management problem faced in the production of secure software is integrating the development and the assurance evaluation efforts.
The two efforts are typically conducted by different teams that have different outlooks and use different notations. In general, the assurance team has an analytical outlook that is reflected in the notations it uses to describe a system; the development team focuses on the timely production of software, and accordingly emphasizes synthesis and creativity. As a consequence it is very easy for an antagonistic relationship to develop between the two teams. One result is that what is analyzed (typically a description of a system) may bear little resemblance to the software that is actually produced. Geographic and organizational

separation of the assurance and development teams compounds this problem. Ideally, the teams work side by side with the same material; as a practical matter, a jointly satisfactory "translation notation" may have to be devised so that the assurance team does not have to work with actual source code (which is typically not processable by their tools) and the development team does not have to program in an inappropriate language.

Scheduling of the various assurance and implementation milestones is typically a difficult process. Assurance technology is considerably less mature than implementation technology, and the tools it uses are often laboratory prototypes rather than production-quality software. Estimates of time and effort on the part of the assurance team are therefore difficult to make, and the various assurance milestones often become the "gating factor" in maintaining a project's schedule.

Managers must make it clear from the outset, and maintain the posture, that assurance is an important aspect of the project and not just something that causes schedule slips and prevents programmers from doing things in otherwise reasonable ways. They must also recognize the fact that assurance will be a continuing cost. When a software system is modified, the assurance evidence must be updated. This means more than merely running regression tests. If, for example, assurance involves covert channel analyses, then those too must be redone.

The project plan must include a long, slow start-up in the beginning, with a higher percentage of time devoted to specification and analysis than is devoted to design. This lead time is required because the typical design team can devise mechanisms at a rate that greatly exceeds the ability of the assurance team to capture the mechanisms in their notations and to analyze them.
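The bookkeeping implied here, knowing which piece of assurance evidence was produced against which version of the software, can itself be automated. The following is a minimal sketch; the module names, revision numbers, and analysis labels are invented for illustration:

```python
def stale_evidence(module_revs, evidence):
    """Identify assurance evidence that no longer matches the code.

    module_revs: {module: current revision number}
    evidence:    {(module, analysis): revision the analysis covered}
    Returns the (module, analysis) pairs that must be redone.
    """
    return sorted(
        pair for pair, covered_rev in evidence.items()
        if module_revs.get(pair[0]) != covered_rev
    )

# Hypothetical project state: the kernel was modified (rev 6 -> 7)
# after its covert channel analysis was last performed.
module_revs = {"kernel": 7, "audit": 3}
evidence = {
    ("kernel", "regression tests"): 7,
    ("kernel", "covert channel analysis"): 6,
    ("audit", "regression tests"): 3,
}
print(stale_evidence(module_revs, evidence))
# [('kernel', 'covert channel analysis')]
```

A check of this kind makes the continuing cost of assurance visible: after every modification it reports not just which regression tests are out of date but which deeper analyses, such as a covert channel analysis, must be repeated.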
Managers should also cultivate a project culture in which assurance is viewed as everybody's problem and not just some mysterious process that takes place after the software is done. It is particularly necessary that the developers appreciate an attacker's mind-set, so that they themselves look at everything they do from the point of view of the threat. Information security (INFOSEC) attacks generally succeed because the attacker has embarked on an adventure, whereas the defenders are just working at a job. Management must instill the probing, skeptical, confident view of the attacker in each developer if the software is to be secure in fact as well as on paper.

WHAT MAKES SECURE SOFTWARE DIFFERENT

From the perspective of programming methodology, the hardest part of producing secure software is producing good software. If one

includes denial of service under the security rubric, producing secure software involves all the difficulties associated with building critical software, plus the additional difficulties associated with assuring integrity and confidentiality under the presumption of outside attack.

Some of the techniques generally considered useful in producing software have additional benefits in the security realm. People in the programming methodology field have long stressed the importance of modularity. In addition to making software easier to build, modularity helps to limit the scope of bugs and penetrations. Modularity may even be useful in reducing the impact of subverted developers.

There are also some apparent trade-offs between security concerns and other facets of good practice ("apparent" because most of the time one should opt for good software practice; without it one will not have anything useful). Attempts to provide protection from high-grade threats by strictly limiting the number of people with access to various parts of the software may be self-defeating. The social process of the interaction of professionals on a project, conducted formally or casually, is a powerful tool for achieving correctness in fields like mathematics or software that deal with intangibles. Secrecy stops the social process in its tracks, and strict application of the "need-to-know" principle makes it very likely that system elements are subject to scrutiny only by insiders with a vested interest in the success of the project. Secrecy may also hinder the technical evolution of countermeasures; individuals assigned to the development of a given device or subsystem may not be aware of even the existence of predecessor devices, much less their specific strengths and weaknesses and mix of success and failure.

The inherent mutability of software conflicts with the requirements for achieving security.
Consequently secure software is often deliberately made difficult to modify, for example, by burning code into read-only memory. Not only does this make it hard for attackers to subvert the software, but it also, unfortunately, makes it hard to make legitimate changes, for example, fixing a known vulnerability.

In resource-limited projects, any resources devoted to protecting those parts of a system deemed most vulnerable will detract from protecting other parts of the system. One must be careful to ensure that other parts of the system are not unduly impoverished.

RECOMMENDED APPROACHES TO SOUND DEVELOPMENT METHODOLOGY

The recommendations that follow are broad directives intended to reflect general principles. Some are included in the fourth subset of

the committee's recommendation 2, which calls for short-term actions that build on existing capabilities (see Chapter 1).

Finding: What correlates most strongly with lack of vulnerabilities in software is simplicity. Furthermore, as complexity and size increase, the probability of serious vulnerabilities increases more than linearly.
Recommendation: To produce software systems that are secure, structure systems so that security-critical components are simple and small.

Finding: Software of significant size must be assumed to have residual errors that can compromise security.
Recommendation: Reduce vulnerability arising from failure of security. Keep validated copies of vital data off-line. Establish contingency plans for extended computer outages.

Finding: Extensive and extended use of software tends to reduce the number of residual errors, and hence the vulnerabilities.
Recommendation: Encourage the development of generally available components with well-documented program-level interfaces that can be incorporated into secure software. Among these should be standardized interfaces to security services.

Finding: Design-level verification using formal specifications has proved to be effective in the security area.
Recommendation: Do more research on the development of tools to support formal design-level verification. Emphasize as a particularly important aspect of this research the identification of design-level properties to be verified.

Finding: The most important bottleneck in reasoning about programs is the difficulty of dealing with multiple levels of abstraction.
Recommendation: Conduct research on program verification so as to put greater emphasis on this problem.

Finding: Software that taxes the resources of the computing environment in which it is run is likely to be complex and thus vulnerable.
Recommendation: When building secure software, provide excess memory and computing capacity relative to the intended functionality.

Finding: The use of higher-level programming languages reduces the probability of residual errors, which in turn reduces the probability of residual vulnerabilities.
Recommendation: When tunneling attacks are not a major concern, use higher-level languages in building secure software.

Finding: Using established software tends to reduce risk.
Recommendation: In general, build secure software by extending existing software with which experience has been gained. Furthermore, use mature technology, for example, compilers that have been in use for some time.

Finding: Ex post facto evaluation of software is not as reliable

as evaluation that takes place during the construction of the software.
Recommendation: Couple development of secure software with regular evaluation. If evaluation is to be done by an outside organization, involve that organization in the project from the start.

Finding: There is a severe shortage of people qualified to build secure software.
Recommendation: Establish educational programs that emphasize the construction of trusted and secure software in the context of software engineering.

Finding: Adopting new software production practices involves a substantial risk that cannot usually be undertaken without convincing evidence that significant benefits are likely to result. This greatly inhibits the adoption of new and improved practice.
Recommendation: Establish an organization for the purpose of conducting showcase projects to demonstrate the effectiveness of applying well-understood techniques to the development of secure software.

Finding: Assurance is often the gating factor in maintaining a project schedule for producing secure software. This is particularly true during the design phase of a project.
Recommendation: Build into schedules more time and resources for assurance than are currently typical.

Finding: There is a trade-off between the traditional security technique of limiting access to information to those with a need to know and the traditional software engineering technique of extensively reviewing designs and code. Although there are circumstances in which it is appropriate to keep mechanisms secret, for most parts of most applications the benefits of secrecy are outweighed by the costs. When a project attempts to maintain secrecy, it must take extraordinary measures, for example, providing for cleared "inspectors general," to ensure that the need to maintain secrecy is not abused for other purposes, such as avoiding accountability on the part of developers.
Recommendation: Design software so as to limit the need for secrecy.

NOTES

1. For example, Jay Crawford of the Naval Weapons Center at China Lake, California, reports that the majority of errors in the production versions of the flight software managed there were classified as specification and design errors rather than coding errors.

2. The Navy estimates that testing software in an operating aircraft costs $10,000 per hour.

3. Checking the satisfiability of simple boolean formulas, for example, is an NP-complete problem; that is, the worst-case time required (probably) grows exponentially in the size of the formula.

4. Morrie Gasser and Ray Modeen, Secure Systems Group, Digital Equipment Corporation; Timothy E. Levin, Gemini Computers, Inc.; J. Thomas Haigh, Secure Computing

Technology Corporation (formerly Honeywell Secure Computing Technology Center); and George Dinolt, Ford Aerospace Corporation.

5. Implementation of the Procurement Integrity Act of 1989 was suspended through November 30, 1990, and may be further suspended until May 31, 1991, to consider proposed changes by the Administration (see Congressional Record of June 21, 1990, and August 2, 1990).