The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Perspective The software research community has not kept up with the development of complex software systems in industry and government, while in the field, managers concerned with procuring software systems often evince idealized and unrealistic views of how software is developed. A critical reality in the field that is insufficiently appreciated by academic researchers and systems purchasers is the extent to which existing systems tie up resources. So-called system maintenance, discussed below, may constitute up to 75 percent of a system's cost over its lifetime. The high cost of maintenance diminishes the amount of money available to replace existing systems, and it lengthens the payback period for investment in new system development. It makes designing for a system's entire life cycle imperative, but such design is rarely if ever achieved. CSTB workshop participants agreed that to progress in system development, it is time to portray systems realistically by (1) viewing software systems as systems, which has implications for optimal system design, (2) recognizing that change is intrinsic in large systems, and (3) striving for a better and more timely blending of software engineering and applications expertise (see discussions headed "Build a Unifying Model," p. 10, and "Nurture Collaboration," p. 17). They also agreed that a more rigorous use of mathematical techniques can help researchers to manage and diminish complexity. SHORT-TERM ACTIONS Portray Systems Realistically View Systems as Systems, not as Collections of Parts While the computer field has helped to popularize the word "systems" and the concept of systems, it is ironic that information systems developers have not developed formal mechanisms to understand systems and the interrelationships among system components. 
Software engineering researchers have been unable to provide effective guidance to practitioners regarding the process of system definition and the concomitant implementation of functional elements. Progress in developing software systems requires a fundamental appreciation that those systems are more than just a collection of parts and that software is embedded in larger systems with a variety of physical components; design of such systems must deal with both of these issues. Design of software systems must also take into account the fact that the whole system includes people as well as hardware, software, and a wide variety of material elements. Recognize Change as Intrinsic in Large Systems Software projects are increasingly likely to be built on top of an existing, installed base of code rather than built anew. As that installed base of software grows over time, software systems that might or might not have been designed to endure have been patched, modified, and "maintained," transforming them greatly from their original designs and functions (Belady and Lehman, 1985). Two factors are at work here: The first is that systems are often not well-enough designed to begin with, and the second is that user needs change over time: new requirements arise, and existing systems must be adapted to accommodate them. But commonly used conceptualizations, such as the "waterfall model" or even the "spiral model," assume a more sure-footed progression from requirement specification to design to coding to testing and to delivery of software than is realistic (Royce, 1970; Boehm, 1988). Given that 40 to 60 percent or more of the effort in the development of complex software systems goes into maintaining (i.e., changing) such systems (Boehm, 1981), the design and development processes could be made more efficient if the reality of change were accepted explicitly. Sustaining the usefulness of software systems differs from the care of other assets because it entails two distinct activities: (1) corrective and preventive maintenance, which includes the repair of latent defects and technological wear and tear, and (2) enhancement, which normally introduces major transformations not only in the form but also in the functions and objectives of the software. 
Enhancement activities have been observed to constitute perhaps 75 percent of the total maintenance effort.2 The degree and impact of change is analogous to the evolution of an urban neighborhood: Over time, old and obsolete buildings are torn down, the supply of utilities changes in both quality and delivery aspects, and transportation routes and media change. As new needs, wants, and capabilities emerge, the structure and function of the neighborhood evolve; the neighborhood is not thrown out wholesale and replaced because doing so would be far too costly. As with changes in neighborhoods, changes in software are not always improvements; software systems suffer from the tension between providing for functional flexibility and assuring structural integrity of the system. Software developers in industry and government are increasingly aware that change occurs from the earliest design stages as initial expressions of customer requirements are refined. Managing this change involves managing a mix of old code (typically with inadequate documentation of original specifications as well as modifications made over time), new programmers, and new technology. The process is ad hoc, and the problem grows over time; the larger the installed base of code, the more formidable the problem. The problem is aggravated where management decisions, including contracting decisions, keep developers and maintainers separate. Ideally, system designers leave hooks for the changes they can anticipate, but problems arise from major changes that result from changed circumstances or goals. Also, schedule and process pressures often militate against providing for functional flexibility and future changes. Further, the current generation of computer-aided tools for software engineers concentrates on development activities and generally neglects maintenance. As a result, supporting information for new code and the tools to exploit it are not carried forward from development to maintenance. 
More seriously, these tools do not accommodate large bodies of code developed without using the tools, although some progress is being made in the necessary restructuring of programs to accommodate computer-aided tools. Just as change, per se, should be accepted as a basic factor in most large, complex systems, designing for change should become a fundamental body of knowledge and skill. The very notion of maintenance as an activity separate from the creation process seems to legitimize high costs, poor support, and poorly managed redesign. Eliminating this notion via a move toward designing and building systems in anticipation of change would help to increase the engineering control over post-release modification. Since software reflects both system specifications and design decisions, changing either element will indirectly produce changes in the code. One possibility is to strive for designing systems that are more modular or easier to replace as needs change. Note that the issue of determining what the software shall do (the "requirements definition") is much broader than software engineering practices today would suggest; this perceptual difference contributes to the maintenance problem. What is needed is a thorough investigation, analysis, and synthesis of what the combined functions will, or should, be of the automated and non-automated (human, business, or physical) elements of the system, including all "think flows," work flows, information flows, and other functionalities. A total systems approach, as discussed above, would be involved, with a heavy emphasis on the conceptualization of the functional role of both the automated parts and the fully combined systems, allowing for reengineering to accommodate or exploit the changes that are made possible by introduction of the automated system. Understanding the reasons for change and the costs, impacts, and methods of change could lead to more control of a major part of software development costs. 
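The cost figures cited in this chapter imply a striking bit of arithmetic: if maintenance consumes roughly 60 percent of a system's lifetime cost and enhancement is about 75 percent of that, then enhancement alone exceeds the original development effort. A minimal sketch of that calculation follows; the $10 million total and the `lifecycle_split` helper are illustrative inventions, not figures or tools from the report.

```python
# Back-of-the-envelope split of a system's lifetime cost, using the
# report's percentages: ~60% of lifetime cost goes to maintenance, and
# ~75% of that maintenance effort is enhancement rather than repair.

def lifecycle_split(total_cost, maintenance_share=0.60, enhancement_share=0.75):
    """Split lifetime cost into development, enhancement, and corrective work."""
    maintenance = total_cost * maintenance_share
    enhancement = maintenance * enhancement_share   # new function, new goals
    corrective = maintenance - enhancement          # repair and wear and tear
    development = total_cost - maintenance          # the initial build
    return {"development": development,
            "enhancement": enhancement,
            "corrective": corrective}

split = lifecycle_split(10_000_000)  # $10M lifetime cost, arbitrary example
# Under these assumptions, enhancement ($4.5M) costs more than the
# original development ($4.0M).
```

Under these assumed shares, "maintenance" is not a residual activity at all: the single largest line item is change to the delivered system, which is the report's point.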
Creating mechanisms that allow for change and that make systems robust while undergoing change will help to reduce opportunity costs in system development and deployment. Part of what is needed is a change in attitude. But for the long term, a theory of software systems is needed that will build on empirical study of software system applications. Study and Preserve Software Artifacts: Learn From Real Systems Past and Present Although systems developers work with an evolving set of goals and technologies, they can still learn valuable lessons from existing systems, lessons about what led to success or failure and what triggered incremental or major advances. The history of computing is replete with instances in which identifying the intellectual origins of key developments is difficult or impossible because most advances, in their time, were not thought of as intellectual issues but instead were treated as particular solutions to the problems of the day. Most software specialists would be hard put to name "seven wonders of the software systems world" or to state why those wonders are noteworthy.3 Meanwhile, the artifacts of such systems are disappearing every day as older equipment and systems are replaced with newer ones, as projects end, and as new applications emerge. Because almost all large software systems have been built in corporate or government settings where obsolete systems are eventually replaced, and because those systems have received little academic attention, useful information may be vanishing. A concerted effort is needed to study (and in some cases preserve) systems and to develop a process for the systematic examination of contemporary and new systems as they arise. Immediate archival of major software artifacts, together with the software tools needed to examine them, or even to experiment with them, would enable both contemporary and future study. 
Systematic study of those systems would facilitate understanding of the ontology of architecture and system components, provide a basis

for measuring what goes on in software development, and support the construction of better program generators. Studies of contemporary systems would provide an understanding of the characteristics of software developed under present techniques. Such an effort would examine software entities such as requirements documentation, design representation, and testing and support tools, in addition to the actual source code itself, which has traditionally been the focus of measurement. Better mechanisms that provide quantifiable measures of requirements, design, and testing aspects must be developed in order to understand the quality baseline that exists today. Existing mechanisms for measuring source code must be put to more widespread use to better assess their utility and to refine them (Bowen et al., 1985; McCabe, 1976). In addition, variations in quality need to be traced to their sources to understand how to control and improve the process. Thus this effort should encompass less successful as well as exemplary artifacts, if only to show how poor design affects maintainability. The examination of artifacts should be combined with directed interviews of the practitioners (and system users) and observation of the process to correlate development practices with resulting product quality. Having quantifiable measurements would enable new, innovative development methods and practices to be evaluated for their impact on product quality. However, as the software industry evolves, so too must the measurement techniques. For example, if new means of representing requirements and design are put into practice, the measurement techniques must be updated to accommodate these new representations. In addition, efforts to automate measurement can be improved if researchers consider measurability as an objective when developing new development methods and design representations. 
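One of the cited source-code measures is McCabe's cyclomatic complexity. A minimal sketch of the idea for Python code follows, using the common simplification of counting branch points plus one; McCabe's original graph-theoretic formulation, and production tools, count additional constructs, so treat this as an approximation.

```python
import ast

# Approximate cyclomatic complexity for a piece of Python source:
# one plus the number of decision points found in the syntax tree.
# The set of counted node types is a simplifying assumption.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return branches + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # if + elif -> 3
```

A measure this cheap to compute is exactly the kind of mechanism the report says should see "more widespread use": it can be run over an entire installed base to establish the quality baseline discussed above.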
Such measurement and research cannot take place in the laboratory due to the size of the actual systems being developed (the costs of experiments at this scale are prohibitive), and it is unlikely that small experiments can be extrapolated to apply to large-scale projects. A cooperative effort among government, industry, and academia could provide necessary funding, access to real-world artifacts and practitioners, and the academic research talent required for such an effort. Such a combined effort also would provide an excellent platform for greater collaboration in software engineering research between members of these communities. Designation and funding of one or more responsible entities are needed, and candidates include federal agencies (e.g., the National Science Foundation and the National Institute of Standards and Technology), federally funded research and development centers, or private institutions. Finally, active discussion of artifacts should be encouraged. A vehicle like the on-line RISKS forum sponsored by the Association for Computing Machinery, which provides a periodic digest and exchange of views among researchers and practitioners on risks associated with computer-based technology, should be established. Also, completed case studies would provide excellent teaching materials. LONG-TERM ACTIONS Build a Unifying Model for Software System Development Shortcomings in software systems often reflect an imperfect fit to the needs of particular users, and in this situation lie the seeds for useful research. The imperfect fit results from the nature of the design and development process: Developers of complex software systems seek to translate the needs of end-users, conveyed in everyday language, into instructions for computer systems. They accomplish this translation by designing systems that can be described at different conceptual levels, ranging from language comprehensible to the intended user (e.g., "plain English" or formal models of the

application domain) to machine language, which actually drives the computer. [FIGURE 2.1 Illustration of a unifying model for software system design: levels (user model, requirement, architecture, code, machine language) crossed with application domains, with a model at each level.] Different spheres of activity are referred to by the profession as end-user domains; these include the following: scientific computation, engineering design, modeling and visualization, transaction processing, and embedded command and control systems. These domains tend to have different types of abstraction and different language requirements arising from differences in the representations of application information and associated computations. As a result, software developers work with a variety of domain-specific models. During the design process in any domain, key pieces of information or insights tend to be lost or misinterpreted. How can the process of moving from a domain-specific model to a working piece of software be improved? One approach would be to develop a unifying view of the software design process and the process of abstraction, a view that would define a framework for the task of the complex software system developer. CSTB workshop participants did not reach a consensus on this complicated issue, but to illustrate the point, they began to sketch out the parameters for such a framework (Figure 2.1). For example, a system design can be thought of as a sequence of models, one (Mi) at each level. Different sorts of details and design decisions are dealt with at each level. The model at each level is expressed in a language (Li). Languages are not necessarily textual or symbolic; they may use graphics or even gestures. Also, languages are not always formally defined.4 Just as the domains of discourse at each level are different, so are the languages. 
Finally, the unifying view would distinguish domain-specific models, a multilevel set of appropriate languages (although it is possible that languages may be shared, or largely shared, across domains), abstractions, and interlevel conversion mechanisms.
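The framework sketched above (a model Mi at each level, expressed in a language Li, with explicit interlevel conversion mechanisms) can be made concrete in a few lines. The level names, the toy "languages" represented as plain strings, and the `refine` helper are all illustrative assumptions for this sketch, not part of the workshop's framework.

```python
from dataclasses import dataclass
from typing import Callable

# A system design as a sequence of models: each Model carries its
# conceptual level, the language L_i it is expressed in, and the model
# content M_i itself. Content is a plain string here purely for illustration.
@dataclass
class Model:
    level: str      # e.g. "requirement", "architecture", "code"
    language: str   # the level-specific language L_i
    content: str    # the model M_i

Conversion = Callable[[Model], Model]

def refine(model: Model, conversions: dict, target: str) -> Model:
    """Move a design down one level via a registered conversion mechanism."""
    return conversions[(model.level, target)](model)

# One hypothetical interlevel conversion: requirement -> architecture.
# Real conversions are the hard research problem; this one just rewraps text.
conversions = {
    ("requirement", "architecture"): lambda m: Model(
        "architecture", "module diagram",
        f"modules derived from: {m.content}"),
}

req = Model("requirement", "structured English", "log every transaction")
arch = refine(req, conversions, "architecture")
```

The point of the data structure is that the conversion table makes the interlevel mechanisms first-class objects: one can ask which transitions exist, which are mechanical, and which are missing, which is the analytical use the report envisions for a unifying model.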

Useful Outcomes The concept of a unifying model and the associated issues are emblematic of the larger problem of achieving more complementarity between software engineering research and practice. A unifying model would not necessarily be of immediate use to a system builder. But it would be a tool for academic analysis that could, in turn, yield structures and tools useful to a practitioner. In particular, it could help a researcher to analyze the software development process and forge improvements that might make that process more efficient in practice. For example, a unifying view could help the software engineering researcher to see the relation among existing mechanisms, to see what mechanisms are missing, and to devise ways to facilitate transitions from one major conceptual level to another (since it is necessary to be able to convert a system description at one level to a system description at an adjacent level).5 By showing how parts are related, unification may facilitate the collapse of domain-specific models to include fewer levels than at present.6 Eventually, it may be possible to move automatically from a description of requirements to a working product, bypassing intermediate levels. The modeling process can facilitate this progress much as the modeling of production processes in manufacturing has facilitated the reduction of the number of tasks in manufacturing processes and the application of manufacturing automation. Also useful, as noted above, is better coordination technology which would support both the modeling and the development processes. 
Research Implications While the theory and nature of program transformation functions, drawing on a body of knowledge about language that crosses levels (sometimes called wide-spectrum language), have already been developed (Balzer, 1985; Partsch and Steinbruggen, 1983; and Smith et al., 1985), the proposed kind of unifying view would also motivate new styles of research independent from those noted above. Relevant current research addresses traditional programming language (although some of this research is in eclipse), computer-supported cooperative work (beyond the mere mechanical aspects; see discussion headed "Nurture Collaboration," p. 17), and efforts to raise the level at which automation can be applied. Also needed are the following: Research that would support the development of domain-specific models and corresponding program generators; it is critical to recognize the legitimacy of specialization to the domain at the expense of expressive generality. Research to identify domains, levels, and commonalities across domains, since languages are needed for each level and domain. Research into the architectural level, which cuts across individual domain models. This level deals with the gross function of modules and the ways they are put together (for procedure call, data flow, messages, data sharing, and code mingling). The aggregates defined at this level include "state machine," "object-oriented system," and "pipe/filter system." Contrast this with the programming level, where the issues are algorithms and data structures and the defined entities are procedures and types. Research into whether it is possible to implement a concept found in the mechanical engineering environment, the quarter-scale model, and if so, how. 
A quarter-scale model, which would provide a more precise and detailed correspondence to the desired system than does a conventional prototype, would help to convey the complexity and various design attributes of a software system. It would allow practitioners to better comprehend how well a design works, and it would allow managers to control risk by helping them to understand where problems exist and to better estimate the resources required to solve those problems. In essence, it would make a seemingly intangible product, software, more real. Investigation of the mechanisms for making the transition between and among the various levels of abstraction. This research would involve exploration of automation aspects (e.g., compilers and generators) and computer-aided and manually directed steps. It would also involve exploration of the order of development of the models of a system: Whereas the conventional waterfall life cycle calls for completing each model before translating to the next, other approaches such as rapid prototyping or the spiral model allow for simultaneous development of several models. Reformulation of expressions of rigor and technical precision (sometimes referred to as "correctness"), performance given resources, traceability, cost, reliability, and integrity. Strengthen the Mathematical and Scientific Foundations of Software Engineering In the absence of a stronger scientific and engineering foundation, complex software systems are often produced by brute force, with managers assigning more and more people to the development effort and taking more and more time. As software engineers begin to envision systems that require many thousands of person-years, current pragmatic or heuristic approaches begin to appear less adequate to meet application needs. In this environment, software engineering leaders are beginning to call for more systematic approaches: More mathematics, science, and engineering are needed (Mills, 1989). Workshop participants focused on application of such approaches to software analysis; they also affirmed the value of mathematical foundations for better modeling and translation of real-world problems to the abstractions of software systems. Software analysis, which seeks to assure that software works as specified and as designed, is both a significant and a critical part of the implementation of large software systems. 
Unfortunately, analysis activities have received too little focused attention, and what attention they have received has been largely limited to today's main analytical approach: testing. Testing techniques, moreover, are constantly being discovered and rediscovered. A more rigorous and comprehensive approach to analysis is needed, one that renders techniques explicit, teaches about them, and develops its own literature and authority. In addition to testing, such techniques as proving, modeling, and simulation should be further developed and targeted to more properties (e.g., safety and functional correctness). Work is needed in performing measurements, establishing metrics, and finding a way to validate them. The understanding of what constitutes a defect and how to verify that designs or code are defect-free is today very limited. Note that the ability to find defects earlier in the life cycle of a product or to prevent them from being introduced reduces test cost and reduces the number of defects in products delivered to end-users. This ability involves quality assessment and quality assurance. Research questions center on how to specify and measure the attributes (functional, behavioral, and performance) a system must possess in a manner that permits correct generation or proof. What aspects of a product can be assured satisfactorily only by testing as opposed to experimentation? What are the economic trade-offs between developing mathematical proofs and conducting testing? How to design for testability and verifiability is also an issue here. Promising directions include the application of formal methods (which involve mathematical proofs), exploration of the mechanical and civil engineering concept of a quarter-scale model for previewing a design, application of the "cleanroom concept" (featuring walk-throughs of software with proofs of claims about features rather than

checklists of flaws; Mills, 1989), and statistical quality control analogous to that used in manufacturing. A handbook of testing and/or quality assessment is desirable and will be possible with further development of the field of analysis. NOTES 1. These conclusions are consonant with those of the Defense Science Board Task Force (1987), which focused on management aspects because attitudes, policies, and practices were a major factor in defense software system acquisition. 2. The National Bureau of Standards (now the National Institute of Standards and Technology) drew on several studies to decompose maintenance into corrective maintenance (20 percent), including diagnosis and fixing design, logic, or coding errors; adaptive maintenance (25 percent), which provides for responses to changes in the external environment; perfective maintenance (50 percent or more), which incorporates enhancements; and preventive maintenance (5 percent), which improves future maintainability and reliability (Martin and Osborne, 1983; Swanson and Lientz, 1980). Similarly, experience with U.S. Air Force weapons systems suggests that while 15 to 35 percent of software maintenance corrects design errors, 25 to 50 percent adds new capability, 20 to 40 percent responds to changes in the threat, 10 to 25 percent provides new system interfaces, 10 to 20 percent improves efficiency, and 5 to 15 percent improves human factors (Mosemann, 1989). 3. An informal query addressed to a community of several hundred software engineering specialists suggested the following candidates: the SAGE missile defense system, the Sabre interactive system for airline reservations, the Yacc compiler tool for UNIX, ARPANET communications software, and the VisiCalc spreadsheet program, among others. 4. 
To serve this model-definition role, a language must provide five essential capabilities: (1) components, suitable as module-level elements, not necessarily compilation units, with function shared by many applications; (2) operators for combining design elements; (3) abstraction, the ability to give names to elements for further use; (4) closure, so that named elements can be used like primitives; and (5) specification of more properties than computational functionality, with specifications of composites derivable from specifications of elements. 5. This process of transition is sometimes accomplished manually and sometimes mechanically. Mechanical transitions (e.g., using program generators or compilers) can enhance productivity, but they depend on more precision and understanding than are often available. 6. Overall, the number of levels has grown and shrunk with technology over time. For example, today few people actually code in machine language, and relatively few program in assembly code.
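The five capabilities listed in note 4 can be illustrated with ordinary function composition, treating small functions as the design elements. This is an invented toy, not an example from the report; composition here stands in for a real module-combination operator in a design language.

```python
# (2) An operator for combining design elements: sequential composition.
def compose(f, g):
    return lambda x: g(f(x))

# (1) Components: small reusable elements with function shared by many uses.
strip = str.strip
lower = str.lower

# (3) Abstraction: giving a name to a combination for further use.
normalize = compose(strip, lower)

# (4) Closure: a named composite can itself be used like a primitive.
canonical = compose(normalize, strip)

# (5) Specification beyond computational functionality: e.g., idempotence,
# a property of the composite derivable from properties of its elements.
assert normalize(normalize("  MiXeD ")) == normalize("  MiXeD ")
```

The asserted property is the interesting part: because both elements are idempotent and order-compatible, the composite's specification follows from the elements' specifications, which is exactly the derivability the note asks a model-definition language to support.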