Appendix B

Position Statements

Prior to the workshop, participants were asked to submit position statements that responded to the following two questions:

1. What do you consider to be the worst problem you have with current software production, and what suggestions do you have for alleviating it?

2. What do you see as the most critical problem that industry and the nation have with current software production, and what solutions do you suggest?

Some participants revised their statements as a result of workshop deliberations. These statements are presented as submitted by the authors, with some standardization of format.
FRANCES E. ALLEN

The worst problem I have with current software production is transferring prototyped ideas developed in a computer science environment to product software useful to customers. Technology transfer between different groups is frequently difficult, but transferring technologies and requirements between two very different cultures is doubly difficult. Users of software have reliability and economic (speed, space, cost) constraints that are of little interest to the computer scientist; the computer scientist has solutions which, when properly engineered, could greatly enhance products.

I believe there are three ways of alleviating the problem. One way is to develop a technology for measuring and evaluating the effectiveness of an approach when applied to a given problem. We have ways of evaluating the complexity and correctness of an algorithm; we need ways of evaluating and predicting the appropriateness of specific software solutions to specific problems. In other words, software engineering must become a science with accepted and validated predictive metrics. The second way of alleviating the problems of moving ideas and prototypes to market is to build usable prototypes. (I will discuss this below.) The third way of alleviating the problem is education. Computer scientists must become more relevant and must understand the realities of the marketplace.

The most critical problem that industry and the nation have with current software production is the number of lines of code needed to accomplish a function. Many production and maintenance costs can be correlated to the number of KLOCs (thousands of lines of code) needed. Programmers produce X KLOCs a year and Y errors occur per Z KLOC. But each KLOC isn't providing much function. If programs were written in much higher level languages, then many fewer KLOCs would be required to do the same job. So though the X, Y, Z numbers might stay the same, fewer programmers would be needed and fewer errors would occur.

Moving to very high level languages requires compiler technology which effectively maps the program to the underlying system without loss of efficiency. Much of that technology exists today. I recommend a concentrated effort on expanding that technology and exploiting it in the context of very high level languages. This proposal runs counter to what is happening today, with C emerging as a major systems language. C is regressive in that it was designed to allow, and generally requires, that the user optimize his program. Hence, users of the language are spending time and effort doing what compilers can do as well or better in many cases. What has been gained? More KLOCs, but not more productivity, more function, or fewer errors.
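Allen's KLOC argument can be illustrated with a small, hedged sketch (the functions below are invented for this illustration, not from the original text): the same computation written in a low-level, C-like style and as a single very-high-level expression. The function delivered is identical; the line count is not, and the low-level version hand-manages details a language implementation can handle.

```python
# Illustrative only: two renderings of the same function, per Allen's
# argument that very high level languages cut KLOCs for equal function.

def mean_low_level(values):
    # C-like style: the programmer manages the loop, index, and accumulator.
    total = 0.0
    i = 0
    while i < len(values):
        total = total + values[i]
        i = i + 1
    return total / len(values)

def mean_high_level(values):
    # Very-high-level style: one line; the implementation does the rest.
    return sum(values) / len(values)

print(mean_low_level([2.0, 4.0, 6.0]))   # 4.0
print(mean_high_level([2.0, 4.0, 6.0]))  # 4.0
```

The two are interchangeable in behavior; by Allen's measure, only the second keeps the KLOC count (and the opportunities for error) low.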
DAVID R. BARSTOW

Currently, Schlumberger's most significant software problem is duplication of effort: we often write several times what appears to be essentially the same software. One solution to the problem is to maintain an extensive software library, but this approach is complicated by a diversity of target machines and environments. A second solution would be to develop sophisticated programming environments that present to the user a higher level computational model, coupled with translators that automatically produce code for different targets.

The most critical software problem faced by industry and the nation is the cost of maintenance and evolution: most studies of software costs indicate that over two-thirds of the cost of a large system is incurred after the system is delivered. These costs cannot be reduced completely, of course, since uses and expectations about a software system will naturally change during the system's lifetime. But much of the cost is due to the fact that a considerable amount of information, such as the rationale for design and implementation decisions, is lost during development and must be reconstructed by the maintainers and evolvers of the system. One way to address this problem would be to develop knowledge-based techniques for explicitly representing such information so that it could be stored during development and referenced during evolution. One good way to develop such techniques would be through case studies of existing large systems, perhaps through collaborative efforts between industry and academia.
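Barstow's second solution, a higher-level computational model coupled with translators for different targets, can be sketched as follows. Everything here is a hypothetical illustration: a single declarative record description drives emitters for two target languages, so the description is written once rather than duplicated per target.

```python
# Hedged sketch: one declarative description, two generated targets.
# The description format and both emitters are illustrative assumptions.

RECORD = {"name": "Reading", "fields": [("depth", "float"), ("count", "int")]}

def emit_c(record):
    # Map the abstract field types onto C declarations.
    ctype = {"float": "double", "int": "int"}
    body = "\n".join(f"    {ctype[t]} {n};" for n, t in record["fields"])
    return f"struct {record['name']} {{\n{body}\n}};"

def emit_pascal(record):
    # The same abstract model, rendered for a second target.
    ptype = {"float": "Real", "int": "Integer"}
    body = "\n".join(f"    {n}: {ptype[t]};" for n, t in record["fields"])
    return f"type {record['name']} = record\n{body}\nend;"

print(emit_c(RECORD))
print(emit_pascal(RECORD))
```

A real environment would of course translate behavior as well as data layout; the point of the sketch is only that the duplication moves out of hand-written code and into a mechanical translation step.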
LASZLO A. BELADY

Worst Problem, Possible Solution

Since large scale software development is a labor intensive activity, look for the problem where people spend the most time. Through our field studies of industry, MCC found that the predominant activity in complex system development is the participants' teaching and instructing each other. Users must teach the software engineers about the application domain, and vice versa; designers of subsystems must describe the intricacies of their work to other designers, and later to implementors; and since the process is rather iterative, this mutual teaching happens several times and in several participant groupings. Indeed, most of the time all project participants must be ready to transmit the knowledge they have acquired about the emerging product and to analyze together the consequences on the total system of local (design) decisions.

Even more importantly, experience gathered in the computer aided system project setting could spawn much needed efforts in computer aiding the training and retraining process needed everywhere to keep the nation's workforce attuned to changing circumstances, and thus competitive. Perhaps the experience accumulated over decades in Computer Aided Instruction (CAI) must be tuned, applied and refined for the complex system development process. Results from AI could also be applied to help eliminate the "teaching" overload for all involved.

Industry/National Problem

Software is the glue that holds together the islands of computer applications in distributed systems. For the next decades this gradual integration into networks will take place in each industry, between enterprises, at the national level and beyond. The resulting systems will be built out of off-the-shelf software and hardware components, where each integrated subsystem is unique and must be designed individually by a team of experts: users, managers, application specialists, programmers.
The design of these "hetero-systems" needs fundamentally new approaches, in particular:

· efficient, cooperative, project teamwork augmented by computer technology (which will be applicable everywhere people must work tightly together, not only in the computer industry), and

· convergence of hardware-software design; in fact, a deliberate shift in basic education is also needed to create interdisciplinary "system designers" instead of separate hardware and software professionals.
LARRY BERNSTEIN

Worst Problem Facing Me in Software Production

Software architecture problems are the most difficult for me. The solution requires problem solving skills with a heavy dose of the ability to do engineering trade-offs. People trained in computer science do not often bring these skills to the work place. We cannot teach them in two- to four-week short courses, so we often suffer while rookies learn on the job. Studies of computer science curricula in the ACM pointed out the lack of problem-solving skills in the typical computer science curriculum ("Computing as a Discipline," Peter J. Denning, Douglas E. Comer, David Gries, Michael C. Mulder, Allen Tucker, A. Joe Turner, and Paul R. Young, Report of the ACM Task Force on the Core of Computer Science, January 1989, Vol. 32, No. 1). Those with a bachelor's degree are often mechanics who know how to program, but do not know how to decide what problem needs solving, or what alternatives there are for its solution. Making four semesters of engineering science a requirement for computer scientists is a minimal solution. Apprenticeships and identifying software architectures are quite useful. Prototypes are helpful to make design decisions quantitative rather than qualitative.

Worst Problem Facing the Country in Software Productivity

Too often funders, customers, and managers are willing to be "low balled" on effort estimation. The lack of appreciation for up front capitalization in the software industry, with consequential failures, points to a serious problem confronting us. It leads to the scattered and slow application of proven techniques to enhance productivity and fosters a climate for hucksters to sell their latest all purpose course to those ailing projects. A technology platform incorporating proven approaches would facilitate technology transfer from universities to industry and between companies. Changes are needed to permit and foster such cooperation between competitors. Ties of U.S.
companies to Japanese companies will speed the growth of the Japanese as viable software competitors, yet we discourage similar ties in the United States. We need to have joint research with Japan and Canada so as to foster a market where each benefits and contributes to the extension of software technology. Various Harvard Business Review articles have dealt with capitalization and the introduction of technology.

Recommendation

A specific research recommendation is to regularize design by creating a handbook which would:

· organize software knowledge,

· provide canonical architectures,

· provide algorithms in a more usable way than Knuth did,

· facilitate understanding of constraints, domains of application, and tradeoff analysis, and

· foster codification of routine designs that can then be taught and used by journeyman architects.

A second issue is to focus on software (not just code!) reuse. Specific items to tackle include:

· Determine when structured interfaces between subsystems with different models of the problem domain are sufficient and when integration by designing to a single model of the problem domain is necessary.

· Develop benefit models for justifying investment in making software reusable.

· Classify architectures which will encourage reuse.

· Determine how much reuse is possible with current prices.
· Develop indexing and cataloging techniques to find reusable elements.

A new theory of testing is needed to design software that is testable, so as to certify quality.

On Design for Testability

How do you specify the attributes (functional and non-functional) a system must possess in a manner which permits correct generation or proof? What attributes are only verifiable by testing? What are the economic trade-offs between proofs and testing?

On Certifying Quality

Classify quality certification methods and measures effective in real (large) projects for functional and non-functional performance. Examples include the following:

· Proof of correctness.

· Theory of stochastic software usage.

· Functional scenario testing, which is thirty times more effective than coverage testing.

We need to build systems in anticipation of change by understanding the correct granularity of components and forcing localization of change.
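The scenario-testing idea above can be sketched in a few lines. Everything in this sketch is invented for illustration (the account system, the operations, and the scenarios themselves): each functional scenario exercises an end-to-end usage sequence against an expected outcome, rather than targeting individual branches the way coverage-driven tests do.

```python
# A minimal functional-scenario harness; the Account class and the
# scenarios are hypothetical examples, not from the text.

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Each scenario is a user-visible usage sequence plus the expected end state.
SCENARIOS = [
    (["deposit 100", "withdraw 30"], 70),
    (["deposit 50", "deposit 50", "withdraw 100"], 0),
]

def run(steps):
    account = Account()
    for step in steps:
        op, amount = step.split()
        getattr(account, op)(int(amount))
    return account.balance

for steps, expected in SCENARIOS:
    assert run(steps) == expected
```

The scenarios read as statements about what users do with the system, which is what makes them useful both for certification and for communicating intended behavior.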
RICHARD B. BUTLER AND THOMAS A. CORBI

Program Understanding: Challenge for the 1990's

Abstract

There are a variety of motivators1 which are continuing to encourage corporations to invest in software tools and training to increase software productivity, including: increased demand for software; limited supply of software engineers; rising software engineer support expectations (e.g., for CASE tools); and reduced hardware costs. A key motivator for software tools and programmer education in the 1990's will be software evolved over decades from several thousand line, sequential programming systems into multi-million line, multi-tasking complex systems. This paper discusses the nature of maturing complex systems. Next, it examines current software technology and engineering approaches to address continuing development of these systems. Program understanding is identified as a key element which supports many development activities. Lack of training and education in understanding programs is identified as an inhibitor. Directions to encourage development of new software tools and engineering techniques to assist the process of understanding our industry's existing complex systems are suggested.

Maturing Complex Systems

As the programming systems written in the 1960's and 1970's continue to mature, the focus for software tools and programmer education will shift from tools and techniques to help develop new programming projects to analysis tools and training to help us understand and enhance maturing complex programming systems. In the 1970's, the work of Belady and Lehman2-4 strongly suggested that all large programs will undergo significant change during the in-service phase of their lifecycle, regardless of the a priori intentions of the organization. Clearly, they were right. As an industry, we have continued to grow and change our large software systems to:

· remove defects,

· address new requirements,

· improve design and/or performance,

· interface to new programs,

· adjust to changes in data structures,

· exploit new hardware and software features, and

· scale up to new architectures and processing power.

As we extended the lifetimes of our systems by continuing to modify and enhance them, we also increased our already significant data processing investments in them and continued to increase our reliance on them. Complex software systems have grown to be significant assets in many companies. However, as we introduce changes and enhancements into our maturing systems, the structure of the systems begins to deteriorate. Modifications alter originally "clean" designs. Fix is made upon fix. Data structures are altered. Members of the "original" programming teams disperse. Once "current" documentation gradually becomes outdated. System erosion takes its toll, and key systems steadily become less and less maintainable and increasingly difficult, error prone, and expensive to modify.

Flaherty's5 study indicates the effect on productivity of modifying product code compared to producing new code. His data for the S/370 communications, control, and language software studied show that the ratio of changed source code to total code accounted for greater productivity differences than did the product class: productivity was lowest when less than 20% of the total code in each of the products studied was changed. The kind of software seemed to be a less important factor related to lower
productivity than did the attribute of changing a small percentage of the total source code of the product. Does this predict ever decreasing programmer productivity for our industry as we change small percentages of maturing complex systems? Clearly, as systems grow older, larger, and more complex, the challenges which will face tomorrow's programming community will be even more difficult than today's. Even the Wall Street Journal stereotypes today's "beeper carrying" programmer who answers the call when catastrophe strikes:

He is so vital because the computer software he maintains keeps blowing up, threatening to keep paychecks from being issued or invoices from being mailed. He must repeatedly ride to the rescue night and day because the software, altered repeatedly over the years, has become brittle. Programming problems have simply gotten out of hand. Corporate computer programmers, in fact, now spend 80% of their time just repairing the software and updating it to keep it running. Developing new applications in this patchwork quilt has become so muddled that many companies can't figure out where all the money is going.6

The skills needed to do today's programming job have become much more diverse. To successfully modify some aging programs, programmers have become part historian, part detective, and part clairvoyant. Why? "Software renewal" or "enhancement" programming is quite different from the kind of idealized software engineering programming taught in university courses:

The major difference between new development and enhancement work is the enormous impact that the base system has on key activities.
For example, while a new system might start with exploring users' requirements and then move into design, an enhancements project will often force the users' requirements to fit into existing data and structural constraints, and much of the design effort will be devoted to exploring the current programs to find out how and where new features can be added and what their impact will be on existing functions. The task of making functional enhancements to existing systems can be likened to the architectural work of adding a new room to an existing building. The design will be severely constrained by the existing structure, and both the architect and the builders must take care not to weaken the existing structure when the additions are made. Although the costs of the new room usually will be lower than the costs of constructing an entirely new building, the costs per square foot may be much higher because of the need to remove existing walls, reroute plumbing and electrical circuits and take special care to avoid disrupting the current site.7

The industry is becoming increasingly mired in these kinds of application software "renovation" and maintenance problems. Parikh reports the magnitude of the problem:

· Results of a survey of 149 managers of MVS installations, with programming staffs ranging from 25-800 programmers, indicating that maintenance tasks (program fixes/modifications) represent from 55 to 95% of their work load.

· Estimates that $30B is spent each year on maintenance ($10B in the US), with 50% of most companies' DP budgets going to maintenance, and that 50-80% of the time of an estimated 1M programmers or programming managers is spent on maintenance.

· An MIT study which indicates that for every $1 allocated for a new development project, $9 will be spent on maintenance for the life cycle of the project.
Whereas improved design techniques, application generators, and wider usage of reusable software parts may help alleviate some aspects of the "old code" problem,9 until these approaches take widespread hold in our critical complex systems, programmers will need tools and training to assist in reconstructing and analyzing information in previously developed and modified systems. Even when more "modern" software development techniques and technologies are widespread, new and unanticipated requirements for "ities" (e.g., usability, installability, reliability, integrity, security, recoverability, reconfigurability, serviceability, etc.) which are not yet taught in software engineering, are not yet part of the methodology being used, and are not yet "parameters" to the code generator will necessitate rediscovery and rework of our complex systems.
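The cost figures quoted above can be made concrete with a little arithmetic. The function below simply restates the MIT ratio (the 9:1 figure is that study's, not a general law, and the function name is invented here):

```python
# Back-of-envelope restatement of the quoted MIT figure: $1 of new
# development implies $9 of life-cycle maintenance.

def maintenance_fraction(dev_cost, maintenance_ratio=9.0):
    # Fraction of total life-cycle cost that goes to maintenance.
    maintenance = dev_cost * maintenance_ratio
    return maintenance / (dev_cost + maintenance)

# 9 / (1 + 9) = 0.9: ninety percent of life-cycle cost is maintenance,
# an even stronger claim than the "over two-thirds" figure for large systems.
print(maintenance_fraction(1.0))  # 0.9
```

Note that the fraction is independent of the project's absolute size; under this model, only the ratio matters.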
Approaches to Maturing Complex Systems

The notion of providing tools for program understanding is not new. Work in the 1970's10-14 which grew out of the program proving, automatic programming and debugging, and artificial intelligence efforts first broached the subject. Researchers stressed how rich program descriptions (assertions, invariants, etc.) could automate error detection and debugging. The difficulty in modelling interesting problem domains and representing programming knowledge, coupled with the problems of symbolic execution, has inhibited progress. While there has been some limited success,15 the lack of fully implemented, robust systems capable of "understanding" and/or debugging a wide range of programs underscores the difficulty of the problem and the shortcomings of these AI-based approaches.

Recognizing the growing "old program" problem in the applications area, entrepreneurs have transformed this problem into a business opportunity and are marketing "code restructuring" tools. A variety of restructuring tools have emerged (see reference 16 for an examination of restructuring). The restructuring approach to address "old" programs has had mixed success. While helpful in some cases to clean up some modules, in other cases restructuring does not appear to help.

One government study17 has shown that positive effects which can result from restructuring include some reduced maintenance and testing time, more consistency of style, reduced violations of local coding and structure standards, better learning, and additional structural documentation output from restructuring tools.
However, on the negative side: the initial source may not be able to be successfully processed by some restructurers, requiring modification before restructuring; compile times, load module size, and execution time for the restructured program can increase; and human intervention may be required to provide meaningful names for structures introduced by the tool. Movement and replacement of block commentary is problematic for some restructurers. And, as has been observed, overall system control and data structures which have eroded over time are not addressed:

If you pass an unstructured, unmodular mess through one of these restructuring systems, you end up with, at best, a structured, unmodular mess. I personally feel modularity is more important than structured code; I have an easier time dealing with programs with a bunch of GOTO's than one with its control logic spread out over the entire program.

In general, automatically re-capturing a design from source code, at the present state of the art, is not considered feasible. But some work is underway and some success has been reported. Sneed et al.19,20 have been working with a unique set of COBOL tools which can be used to assist in rediscovering information about old code via static analysis, to interactively assist in re-modularizing and then restructuring, and finally to generate a new source code representation of the original software. Also, research carried out jointly by CRIAI (Consorzio Campano di Ricerca per l'Informatica e l'Automazione Industriale) and DIS (Dipartimento di Informatica e Sistemistica at the University of Naples) reports21 the automatic generation of low level Jackson or Warnier/Orr documents which are totally consistent with COBOL source code. Both Sneed and CRIAI/DIS agree, however, that determining higher level design abstractions will require additional knowledge outside that which can be analyzed directly from the source code.
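The distinction drawn in that quotation, structured code versus modular code, can be sketched with a hedged example (both versions below are invented). A restructurer can mechanically produce something like the first form; only redesign yields the second, where each concern is separately nameable and testable.

```python
# "Structured but unmodular": one routine mixing parsing, validation,
# and rendering. Structured control flow, but one undivided lump.

def report_unmodular(lines):
    out = []
    for line in lines:
        parts = line.split(",")
        if len(parts) != 2:
            continue
        name, value = parts[0].strip(), parts[1].strip()
        if not value.lstrip("-").isdigit():
            continue
        out.append(f"{name}: {int(value)}")
    return out

# Modular version of the same logic: each concern is its own unit.
def parse(line):
    parts = [p.strip() for p in line.split(",")]
    return parts if len(parts) == 2 else None

def valid(parts):
    return parts is not None and parts[1].lstrip("-").isdigit()

def render(parts):
    return f"{parts[0]}: {int(parts[1])}"

def report_modular(lines):
    return [render(p) for p in map(parse, lines) if valid(p)]

assert report_unmodular(["a, 1", "bad"]) == report_modular(["a, 1", "bad"])
```

The two behave identically; no restructuring tool working on control flow alone would discover the decomposition on the right-hand side, which is the quoted commentator's point.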
The experience of IBM's Federal Systems Division with the aging Federal Aviation Administration's National Airspace System (NAS)22 seems to indicate that the best way out is to relearn the old software relying primarily on the source code, to rediscover the module and data structure design, and to use a structured approach23-25 of formally recording the design in a design language which supports the data typing, abstract types, control structures, and data abstraction models. This often proved to be an iterative process (from very detailed design levels to more abstract), but it resulted in a uniform means of understanding and communicating about the
original design. The function and state machine models then provided the designer a specification from which, subsequently, to make changes to the source code.

The need to expand "traditional" software engineering techniques to encompass reverse engineering design and to address "software redevelopment" has been recognized elsewhere:

The principal technical activity of software engineering is moving toward something akin to "software redevelopment." Software redevelopment means taking an existing software description (e.g., as expressed in a programming or very high level language) and transforming it into an efficient, easier-to-maintain realization portable across local computing environments. This redevelopment technology would ideally be applicable to both 1) rapidly assembled system prototypes into production quality systems, and 2) old procrustean software developed 3 to 20 years ago still in use and embedded in ongoing organization routines but increasingly difficult to maintain.26

Understanding Programs: a Key Activity

With our aging software systems, studies indicate that "more than half of the programmer's task is understanding the system."27 The Fjeldstad-Hamlen study28 found that, in making an enhancement, maintenance programmers studied the original program

· about three-and-a-half times as long as they studied the documentation, and

· just as long as they spent implementing the enhancement.

In order to work with "old" code, today's programmers are forced to spend most of their time studying the only really accurate representation of the system. To understand a program, there are three things you can do: read about it (e.g., documentation); read it (e.g., source code); or run it (e.g., watch execution, get trace data, examine dynamic storage, etc.). Static analysis (control flow, data flow, cross reference) can augment reading the source. Documentation can be excellent or it can be misleading.
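One of the static-analysis aids just mentioned, a cross-reference of where names are defined and where they are used, can be sketched in a few lines. This is a toy over Python source for illustration only; the cross-reference tools of the period worked over languages like COBOL, PL/I, and assembly.

```python
# Hedged sketch: build a definition/use cross-reference for functions
# in a piece of source code, using Python's own parser.
import ast
from collections import defaultdict

SOURCE = """
def helper(x):
    return x + 1

def main(values):
    return [helper(v) for v in values]
"""

def cross_reference(source):
    tree = ast.parse(source)
    xref = defaultdict(lambda: {"defined": [], "called": []})
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            xref[node.name]["defined"].append(node.lineno)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            xref[node.func.id]["called"].append(node.lineno)
    return dict(xref)

# 'helper' is defined on line 2 and called on line 6 of SOURCE.
print(cross_reference(SOURCE))
```

Even this toy shows why such listings help a maintainer: they answer "who touches this?" without a line-by-line read of the program.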
Studying the dynamic behavior of an executing program can be very useful and can dramatically improve understanding by revealing program characteristics which cannot be assimilated from reading the source code alone. But the source code is usually the primary source of information.

While we all recognize that "understanding" a program is important, most often it goes unmentioned as an explicit task in most programmer job or task descriptions. Why? The process of understanding a piece of code is not an explicit deliverable in a programming project. Sometimes a junior programmer will have an assignment to "learn this piece of code"-oddly, as if it were a one time activity. Experienced programmers who do enhancement programming realize, just as architects and builders doing a major renovation, that they must repeatedly examine the actual existing structure. Old architectural designs and blueprints may be of some use, but to be certain that a modification will be successful, they must discover or rediscover and assemble detailed pieces of information by going to the "site." In programming, regardless of the "waterfall" or "iterative" process, this kind of investigation happens at various points along the way:

· While requirements are being examined, lead designers or developers are typically navigating through the existing code base to get a rough idea of the size of the job, the areas of the system which will be impacted, and the knowledge and skills which will be needed by the programming team which does the work.

· As design proceeds from the high level to low level, each of the team members repeatedly examines the existing code base to discover how the new function can be grafted onto the existing data structures and into the general control flow and data flow of the existing system.

· Wise designers may tour the existing code to get an idea of performance implications which the enhancement may have on various critical paths through the existing system.
· Just before the coding begins, programmers are looking over the "neighborhood" of modules which will be involved in the enhancement. They are doing the planning of the detailed packaging, separating the low level design into pieces which must be implemented by new
modules or which can be fit into existing modules. Often they are building the lists of new and changed modules and macros for the configuration management or library control team, who need this information in order to reintegrate the new and changed source code when putting the pieces of the system back together again.

· During the coding phase, programmers are immersed in the "old code". Programmers are constantly making very detailed decisions to re-write or restructure existing code vs. decisions to change the existing code by deleting, moving, and adding a few lines here and a few lines there. Understanding the existing programs is also key to adding new modules: how to interface to existing functions in the old code? how to use the existing data structures properly? how not to cause unwanted side effects?

· A new requirement or two and a few design changes usually come into focus after the programmers have left the starting blocks. "New code" has just become "old code". Unanticipated changes must be evaluated as to their potential impact on the system and whether or not these proposed changes can be contained in the current schedules and resources. The "old base" and the "new evolving" code under development must be scrutinized to supplement the intuitions of the lead programmers before notifying management of the risks.

· Testers may delve into the code if they are using "white box" techniques. Sometimes even a technical writer will venture into the source code to clarify something for a publication under revision.

· Debugging, dump reading, and trace analysis constantly require long terminal sessions of "program understanding" where symptoms are used to postulate causes. Each hypothesis causes the programmer to go exploring the existing system to find the source of the bug.
And when the problem is found, a more "bounded" exploration is usually required to gather the key information needed to actually build the fix and insert yet another modification into the system.

Therefore, the program understanding process is a crucial sub-element in achieving many of the project deliverables: sizings, high level design, low level design, build plan, actual code, debugged code, fixes, etc. The programmer attempts to understand a programming system so he can make informed decisions about the changes he is making. The literature refers to this "understanding process" as "program comprehension":

The program comprehension task is a critical one because it is a subtask of debugging, modification, and learning. The programmer is given a program and is asked to study it. We conjecture that the programmer, with the aid of his or her syntactic knowledge of the language, constructs a multileveled internal semantic structure to represent the program. At the highest level the programmer should develop an understanding of what the program does: for example, this program sorts an input tape containing fixed-length records, prints a word frequency dictionary, or parses an arithmetic expression. This high-level comprehension may be accomplished even if low-level details are not fully understood. At low semantic levels the programmer may recognize familiar sequences of statements or algorithms. Similarly, the programmer may comprehend low-level details without recognizing the overall pattern of operation. The central contention is that programmers develop an internal semantic structure to represent the syntax of the program, but they do not memorize or comprehend the program in a line-by-line form based on syntax.29

Learning to Understand Programs

While software engineering (e.g.
applied computer science) appears as a course offering in many university and college Computer Science departments, "software renewal," "program comprehension," or "enhancement programming" is absent. When you think in terms of the skills which are needed as our software assets grow and age, the lack of academic training in "how to go about understanding programs" will be a major inhibitor to programmer productivity in the 1990's. Unfortunately, a review by the author of more than 50 books on programming methodologies revealed almost no citations dealing with the productivity of functional enhancements, except a few minor observations in the context of maintenance. The work of functional enhancements to existing software systems is underreported in the software
82 Acknowledgments The ideas presented here have been brewing for a long time. I appreciate the support of Mobay Corporation and CMU's Computer Science Department and Software Engineering Institute. The specific stimulus to write this argument down was provided by a workshop on programming language research organized for the Office of Naval Research and advance planning for a workshop on complex system software problems organized by the Computer Science and Technology Board of the National Research Council. Norm Gibbs, Al Newell, Paul Gleichauf, Ralph London, Tom Lane, and Jim Perry provided helpful comments on various drafts.
References
[1] Mary Shaw. Abstraction techniques in modern programming languages. IEEE Software, 1, 4, October 1984, pp. 10-26.
[2] Jeff Shear. Knight-Ridder's data base blitz. Insight, 4, 44, October 31, 1988, pp. 44-45.
[3] Gina Kolata. Computing in the language of science. Science, 224, April 13, 1984, pp. 140-141.
[4] R. H. Rand. Computer Algebra in Applied Mathematics: An Introduction to MACSYMA. Pitman, 1984.
[5] Stephen Wolfram. Mathematica: A System for Doing Mathematics by Computer. Addison-Wesley, 1988.
[6] Jon Bentley. Little Languages. Communications of the ACM, August 1986.
[7] Barry W. Boehm. Software Engineering Economics. Prentice-Hall, 1981.
[8] James E. Tomayko. Computers in Space: The NASA Experience. Volume 18 of Allen Kent and James G. Williams, eds., The Encyclopedia of Computer Science and Technology, 1987.
[9] Jeffrey Rothfeder. It's late, costly, incompetent, but try firing a computer system. Business Week, 3078, November 7, 1988.
[10] David Marca and Clement L. McGowan. SADT: Structured Analysis and Design Technique. McGraw-Hill, 1988.
[11] Frank DeRemer and Hans H. Kron. Programming-in-the-large versus programming-in-the-small. IEEE Transactions on Software Engineering, SE-2, 2, June 1976, pp. 80-86.
[12] Butler W. Lampson and Eric E. Schmidt. Organizing software in a distributed environment. Proc. SIGPLAN '83 Symposium on Programming Language Issues in Software Systems, pp. 1-13.
[13] L. S. Marks et al. Marks' Standard Handbook for Mechanical Engineers. McGraw-Hill, 1987.
[14] R. H. Perry et al. Perry's Chemical Engineers' Handbook, Sixth Edition. McGraw-Hill, 1984.
[15] James Kip Finch. Engineering and Western Civilization. McGraw-Hill, 1951.
[16] Mary Shaw. Software and Some Lessons from Engineering. Manuscript in preparation.
83 CHARLES SIMONYI My Worst Problem with Current Software Production I am a development manager at Microsoft Corporation, developing language products for the microcomputer marketplace. The worst problem affecting my activity is insufficient programming productivity. In the microcomputer software business resources are ample, while the premiums on short time-to-market, on the "tightness" and on the reliability of the produced code are all high. Under these circumstances, adding extra people to projects is even less helpful than was indicated in Fred Brooks' classic "Mythical Man-Month". There is also a shortage of "star" quality programmers. As to the solutions, on the personnel front we are expending a lot of effort to search out talent in the United States and internationally. We also organized an internal training course for new hires. However, these measures will just let us continue to grow, but no improvement in the rate of growth can be expected. I think that the remedy to the productivity question with the greatest potential will come from a long-lasting, steady, and inexorable effort of making small incremental improvements to every facet of the programming process which (1) is easily mechanizable, and (2) recurs at a reasonable frequency, which can be lower and lower as progress is made. I think of the Japanese approach to optimizing production lines as the pattern to imitate. In Hitachi's automated factory for assembling VCR chassis, general purpose and special purpose robots alternate with a few manual assembly workers. The interesting point is that there was no all-encompassing uniform vision: Hitachi engineers thought that general purpose robots were too slow and too expensive for simpler tasks such as dressing shafts with lubricant or slipping a washer into its place. 
At the opposite extreme, a drive band which had to run in several spatial planes was best handled by human workers, even though, in principle, a robot could have been built to do the job, but such a robot would have cost too much in terms of capital, development time, and maintenance. In the middle of the task complexity spectrum general purpose robots were used. The product design was altered by the engineers to make the robots' work easier. For example, they made sure that the robots can firmly grasp the part, that the move of the robot arm through a simple arc is unimpeded, and so on. The actual product design could not be simplified because it was constrained by competitive requirements. If anything, the increase in productivity makes it possible to elaborate the design by adding difficult-to-implement but desirable features. For instance, front loading in VCRs is much more complex mechanically than top loading, yet consumers prefer front loading because of its convenience and because front loading units can be stacked with other hi-fi equipment. The point here is that simplicity is always relative to the requirements. The requirements always tend to increase in complexity, and there is nothing wrong with that. These notions have direct parallels in software production. Software complexity should be measured relative to the requirements and can be expected to increase in spite of the designer's best efforts. Frequent simple tasks should be solved by specialized means (e.g., most languages have a special symbol "+" for addition) and manual methods can be appropriate for the most complex production steps (for example, hand compiling the innermost loop). This is nothing new: the issue is really the proper mix. 
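The "specialized means for frequent tasks" point can be sketched in miniature. The following is a hypothetical illustration (in Python, used here only for brevity; the Money class and names are invented for this sketch, not anything from the workshop): the frequent operation gets the dedicated "+" symbol, while the rarer, fussier operation keeps an explicit, general-purpose call.

```python
# Frequent, simple task (adding amounts) gets the specialized symbol "+";
# the infrequent, complex task (conversion) stays an explicit function call.
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        # Specialized notation for the common case.
        return Money(self.cents + other.cents)

def convert(money, rate_num, rate_den):
    # The rarer operation keeps a spelled-out, general-purpose form.
    return Money(money.cents * rate_num // rate_den)

total = Money(150) + Money(250)    # terse: the frequent task
usd = convert(Money(400), 3, 4)    # explicit: the infrequent task
```

The "proper mix" is visible even at this scale: the notation is specialized exactly where the frequency justifies it.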
In manufacturing, as in programming production, old shops have obvious problems with reliance on manual labor and on inflexible special purpose machines, while the most modern shops overutilize general purpose equipment and fail to integrate the other approaches properly. Hitachi, in a very pragmatic way, created a new and potent cocktail from the different methods, each with particular strengths and weaknesses. I do not deny that more radical treks in the problem space and in the solution space could result in major breakthroughs, and I also wonder sometimes if America has the temperament to pile up the small quantitative changes in search of the qualitative changes. Yet I believe that the payoffs from the cumulative improvements will be very large. I also believe that the approach is only medium in cost and low in risk. While the usual rule of thumb is that this would indicate low payoff, I am comfortable with the paradox and blame special cultural, historical, and business circumstances for the disparity. I give you a few examples to illustrate the minuscule scopes of the individual improvements to software tools which can be worthwhile to make. During debugging it is often useful to find all references to a variable foo in a procedure. Every editor typically has some "find" command which, after the user types in "foo", will scan for and highlight the instances one by one. An improvement, for this purpose, is the ability to point to
84 one instance and see the highlight appear on all instances at once. This is hardly automatic programming, not even CASE, just one small step. In the programming language we use (C), just as in practically all languages, a procedure can return only a single data item as its result. Many procedures could benefit from the ability of returning more than one data item. A trivial example is integer division, which develops a remainder in addition to a quotient. Of course myriad possibilities exist for implementing the desired effect: pass a pointer to the location where the second result should go ("call by reference"), return the second result in a global variable, aggregate the results in a data structure and return the single aggregate value, and so on. An improvement can be made to the language by adding Mesa's constructors and extractors, which removes the asymmetry between the first and the other results returned, resolves the programmer's quandary by providing a single optimal solution to the problem, and enables focus on optimizing the preferred solution. We will need on the order of a thousand such improvements to make an appreciable difference. It will be easy technically. It has not been done before probably because of the following non-technical problems: 1. Talented people do not get excited about incremental projects. 2. One has to have source access to all the software which is to be modified: editor, compiler, debugger, run-time, etc. Such access is rare. 3. The incremental improvements are so small that when uncertain side-effects are also considered, the net result may well be negative instead of positive. Many argue, for example, that the mere existence of something new can be so costly, in terms of training, potential errors, conceptual load on the user, or maintenance, that only profound benefits could justify it. I consider the last argument very dangerous in that it encourages complacency and guarantees stagnation. 
Of course it has more than a grain of truth in it, and that is where its potency comes from. My point is that if you look around in the solution space from the current state of the art and you see increasing costs in every direction, you can only conclude that you are in some local minimum. Cost hills in the solution space are obstacles to be overcome, not fenceposts for an optimal solution. One has to climb some of the cost hills to get to better solutions. I would also like to list a few areas which are not problems, at least where I work: marketing, product design, implementation methodology, testing methodology, product reliability, working environment, and management. This is not to say that these activities are either easy or well understood. For instance, we still cannot schedule software development with any accuracy, but we learned to manage the uncertainty and we can expect much smaller absolute errors (even at the same relative error) if productivity can be improved. Similarly, testing is still very unscientific and somewhat uncertain. Again, with sufficient safety factors we can achieve the required reliability at the cost of being very inefficient relative to some ideal. With greater productivity more built-in tests can be created and testing efficiency can improve. So I see productivity as the "cash" of software production which can be spent in many ways to purchase benefits where they are the most needed: faster time to market, more reliability of the product, or even greater product performance. We can get this last result by iterating and tuning the design and implementation when the higher productivity makes that affordable. The Nation's Most Critical Problem with Current Software Production Before coming to the workshop, I wrote that I did not know what the most critical problem is for the industry. 
Now, after the workshop, I still do not know which problem is the most critical, but I can list a number of serious problems with promising approaches for their solutions and comment on them. This is a small subset of the list created by the workshop and I am in general agreement with all other items on the longer list as well except that the sheer length of the list indicates to me that some of the problems may not be as critical as some of the others. 1. Make routine tasks routine. This is a very powerful slogan which covers a lot of territory. The implication is that many tasks which "ought" to be routine are far from routine in practice.
85 The workshop expressed a desire to promote the development of an Engineering Handbook to cover routine practices. I share this desire. I wonder, however, if this can be a research topic. "Routine" is a synonym for "dull," while the Handbook would have to be written by first rate talents. The whole thing boils down to this larger issue: how does society get its best minds to focus on dull problems which nonetheless have great value to society? Historically, it has always been a package deal: solve the problem and win the war, as in the Manhattan project; go to the moon, as in Apollo; and get rich, as in publishing, startups, and leveraged buyouts. I am enthusiastic about the handbook, but I would feel more sanguine about a "Ziff-Davis" or a "Chemical Abstracts" financing and organizing the work than NRC feeding the oxymoron "routine research". We should also keep in mind that in some instances a routine task could be automated or eliminated altogether, the ultimate in "routinization". This is an area where my organization will be doing some work. 2. Clean room method. I join my colleagues in believing that software reliability and debugging costs are great problems, and that Harlan Mills' "clean room method" is an exciting new way of addressing the issues. Being high risk and high return, it is a proper research topic. 3. Reuse. Here we have an obvious method with a huge potential economic return, yet very much underutilized. I agree that it is an important area and it is a proper subject for research. My only caveat is this: many of the research proposals implicitly or explicitly assume that the lack of reuse must be due either to insufficient information flow, or to irrational decision making ("NIH syndrome"). My experience has been that software reuse was difficult even when the information was available and the decision making rationally considered long-term benefits. 
To avoid disappointment, the research will have to extend beyond the "module library" concept to include:
· Module generators, recommended at the workshop as alternatives for large collections of discrete modules. For example, instead of multiple sine routines, publish one program which inputs the requirements ("compile time parameters") and outputs the optimal sine routine for the particular purpose. I agree with this, and would only add that we need better languages for module generation. A giant list of "prints" or FORMAT statements will not do either for the writers or the users. The module creator will obviously benefit from a better paradigm. In principle, the users should not be concerned with what is inside the generator, but in practice the program may have to be at least a part of its own documentation.
· A study of the economics of reuse. For example, Larry Bernstein pointed out that the simpler productivity measurements can create disincentives for reuse. Or, consider that the creation of reusable software is more difficult than the one-shot custom approach. The beneficiary of this extra effort is typically a different organizational or economic entity. Some sort of pricing or accounting mechanism must be used to keep the books straight and allocate credit where credit is due. This brings us to:
· Developing metrics especially for reuse. Assume a module of cost X with 2 potential applications. Without reuse the cost is 2X. With reuse, it is sometimes claimed, the cost will be X + 0, that is, write once and use it again free, a clear win-win proposition. This is not a realistic scenario. We need to research the cost picture: how much more expensive is it to make the code reusable? How expensive is it to use reusable code? What characterizes code for which the reuse costs are low, such as numerical routines? When are reuse costs higher than the custom development costs? I would also like to make a second-order point. 
Let us call the solution to the nation's software problems the "next generation tools." It is safe to guess that these will be very complex software systems and that they will be developed by using more current tools, which connects to my response to Question #1. Scientists had to measure a lot of atomic weights before the periodic table was discovered. They had to measure an awful lot of spectral frequencies before quantum theory could be developed. We, too, should try to make the great leaps but meanwhile assiduously do our homework.
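The module-generator idea raised above can be made concrete with a toy sketch. This is a hypothetical illustration (in Python rather than a language of the period): instead of publishing several sine routines, publish one generator whose "compile time parameter" is the number of Taylor-series terms, and have it emit a specialized routine for each client.

```python
# Toy module generator: emit the source of a specialized sine routine
# from a compile-time parameter (the number of Taylor-series terms).
def generate_sine(terms):
    body = ["def sine(x):", "    acc, term = x, x"]
    for n in range(1, terms):
        k = 2 * n
        # Each term is the previous one times -x*x / (k*(k+1)),
        # so sin x = x - x^3/3! + x^5/5! - ... unfolds with no loop.
        body.append(f"    term = -term * x * x / ({k} * {k + 1})")
        body.append("    acc += term")
    body.append("    return acc")
    return "\n".join(body)

source = generate_sine(6)      # a six-term routine for this client
namespace = {}
exec(source, namespace)        # "compile" the generated module
sine = namespace["sine"]
```

Note that the generated source doubles as documentation of what the client actually received, echoing the observation that the generator's output "may have to be at least a part of its own documentation."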
86 I think in the near term, the appearance of a ubiquitous programmer's individual cross development platform would greatly enhance our ability to create more, cheaper, and better quality software. By "ubiquitous" I mean many hundreds of thousands, that is, 386 OS/2 based, rather than tens of thousands, that is, workstation based systems. By "cross development" I mean that the target system would be separate from, and in general different from, the highly standardized and optimized development machine. It would be also nice if the ubiquitous platforms used a ubiquitous language as well. Insofar as the user interface to the editor, compiler, debugger, etc., is also a language, possibly larger than a programming language, this commonality is certainly within reach with the advent of the graphical user interfaces, and in particular of the SAA standard. The computer language is a greater problem. Wide acceptance can be ensured, in general, by two means: by the fiat of a powerful organization, as was the case for Ada, or by overwhelming success in the marketplace of business and academe. Here Lotus 1-2-3 comes to mind as an example for the degree of success which might be necessary; in languages, C came the closest to this. However, C and even C++ did not make a major effort to address a wider user base, and in fact take justified pride in the substantial accomplishments in software technology which have sprung from the focus on narrow goals. I am a believer in a major heresy. I believe that PL/1 had the right idea, if at the wrong time. 
A leading programming language should try to satisfy a very wide range of users, for example those using COBOL (forms, pictures, decimal arithmetic, mass storage access), C++ (object oriented programming, C efficiency, bit level access), SNOBOL (string processing, pattern matching), Ada (type checking, interface checking, exception handling), Fortran (math libraries), and even Assembly language (super high efficiency, complete access to facilities). Let me just respond superficially to the most obvious objections: 1. PL/1 is terrible; PL/1 was designed to have everything; ergo wanting everything is terrible. Answer: Problems with PL/1 are real, but they can be expressed in terms of "lacks": PL/1 lacks efficient procedure calls, PL/1 lacks many structuring or object-oriented constructs, the PL/1 compiler lacks speed, or I lack knowledge of how to write a simple loop in PL/1. All of these problems can be solved by having "more". 2. Not having features is the essence of the benefit I am seeking. How can you satisfy that? Answer: I cannot. Most people seek the benefits which are easily (and usually) derived from not having features. Among these benefits are ease of learning, efficiency, availability, low cost. However, the same benefits can be obtained in other ways as well; typically this requires a large one-time ("capital") investment which then will possibly enable new benefits. Again, the point is that simplicity should be just one means to some end, not the end in itself. 3. You cannot learn a monster language. Answer: A large piece of software (or even the portion of a large system known to a single person) is much more complex in terms of number of identifiers, number of operators, or the number of lines of documentation than a complex programming language. If some elaboration of the smaller part (the language) benefits the bigger part (the software being written in the language), we have a good tradeoff. 
My feeling is that a small trend of enhancing the most promising language C has already started with the growing popularity of C++. I believe that this trend will continue with even larger supersets of C appearing and winning in the marketplace. I think we should encourage this trend, promote a rapprochement between the C people and the data processing professionals, and point out the dangers to those who remain locked into Ada.
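The earlier integer-division example, where a procedure should return both quotient and remainder, shows how one small language elaboration dissolves a recurring workaround. A sketch in Python (chosen only because its tuple returns resemble the symmetric constructors and extractors described above; the function names are invented for illustration):

```python
# Without multiple return values, a C-era workaround: smuggle the
# second result out through a mutable container ("call by reference").
def divide_out_param(a, b, out):
    out["remainder"] = a % b       # second result goes out the side door
    return a // b                  # only the first result is first-class

# With symmetric multiple returns, both results are first-class.
def divide(a, b):
    return a // b, a % b           # constructor: build the result pair

q, r = divide(17, 5)               # extractor: unpack it at the call site
```

The asymmetry between the first result and the others simply disappears, which is exactly the kind of minuscule, mechanizable improvement the statement argues should be accumulated by the thousand.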
87 WILLIAM A. WULF The Worst Problem The worst problem is not cost; it's not unreliability; it's not schedule slips. Rather, it's the fact that in the near future we won't be able to produce the requisite software at all! Programmer productivity, measured in lines-of-code per unit time, has increased about 6% per year since the middle 60's. The amount of code required to support an application, measured over the same period, has been increasing at more than 20% per year. This disparity cannot continue. Software is already the limiting factor in many areas; the situation will only get worse unless by some magic programmer productivity increases dramatically. The Basic Problem The problems of software development are so well articulated in Fred Brooks' article "No Silver Bullet" and the Defense Science Board report that preceded it, that there is little that I can add. I can, however, cast them in another way that I find useful. Software production is a craft industry. For programming, just as for carpentry or basket-weaving, every property of the final product is the result of the craftsmanship of people: its cost, its timeliness, its performance, its reliability, its understandability, its usability, and its suitability for the task. Viewing programming as a craft explains why I am intellectually unsatisfied with "software engineering". It also explains why the software crisis, proclaimed twenty years ago, is still with us. Better management, better environments, and better tools obviously help either the carpenter or the programmer, but they do not change the fundamental nature of the activity. Not that current software engineering is wrong-headed; it's very, very important; after all, it's all we have. But we must look elsewhere for a fundamental solution. Viewing programming as a craft also suggests the shape of that solution. We must find a way to make the bulk of programming a capital-intensive, automated activity. 
Here the analogy with other crafts breaks down, because programming is a creative, design activity rather than a production one. However, we have numerous examples where we have achieved automation. No one writes parsers anymore; parser-generators do that. Few people write code generators any more; code generator generators do that. No one should write structured editors anymore; systems such as Teitelbaum's Synthesizer Generator can do that. Tools such as parser generators are fundamentally different from tools such as debuggers, editors, or version management systems. They embody a model of a class of application programs and capture knowledge about how to build programs for those applications; in a loose sense they are "expert systems" for building applications in a particular domain. The better ones are capable of producing better applications than the vast majority of programmers because they embody the most advanced expertise in the field. Thus they provide both enormous productivity leverage and better quality as well. What Should We Do? I have three suggestions: one general, long-term one, and two more specific, immediate ones. 1. Recognize that the general solution is hard! There are two implications of this: · We need basic, long-term research, recognizing that it won't lead to a "quick fix" (I fear that too many prior programs, such as STARS, have been sold on the promise of quick, massive returns).
88 · We must look for specific, special case solutions; that is, common situations (like parsers and code-generators) where automation is possible now. I'll return to the second point below. 2. Change government procurement policy, especially DoD's! The current policy, which requires delivery of the tools used to build a product along with the product, is a strong disincentive for a company to invest in tooling. This is backwards; we should be doing everything possible to make it attractive for the private sector to invest in greater automation. 3. Measure! Ten years or so ago, IBM did a study of its largest customers' largest applications. They discovered that 60% of the code in these applications was devoted to screen management. I don't know if that's still true, and it doesn't matter. What does matter is that we (the field) don't know if it's true. Suppose it were true, or that something else equally mundane accounts for a significant fraction of the code. We might be able to literally double productivity overnight by automating that type of code production. I doubt that anything so dramatic would emerge, but it's criminal that we don't know. Of these suggestions, committing to a program of basic research is both the most important and the most difficult to achieve. I am not sure that the software research community itself is willing to admit how hard the problem really is, and we have lost a great deal of credibility by advocating a long series of panaceas that weren't (flowcharting, documentation, time-sharing, high-level languages, structured programming, verification, . . .). It is not enough to say that we must do basic research. What research, and why? My sincere hope is that this workshop will at least begin the process of laying out a credible basic research agenda for software. 
To that end, let me mention just a few of my pet candidates for such an agenda:
· Mathematics of computation: I am skeptical that classical mathematics is an appropriate tool for our purposes; witness the fact that most formal specifications are as large as, as buggy as, and usually more difficult to understand than the programs they purport to specify. I don't think the problem is to make programming "more like mathematics"; it's quite the other way around.
· Languages: We have all but abandoned language research, which I think is a serious mistake; historically, new languages have been the vehicle for carrying the latest/best programming concepts. I am especially disappointed that current mechanisms for abstraction are so weak. I am also disappointed in the "NewSpeak mentality" of current language orthodoxy (NewSpeak was the language Big Brother imposed in 1984; it omitted the concepts that would allow its speakers to utter seditious thoughts [bad programs]).
· Parallelism: The emphasis of the last two decades has been on synchronization, on mechanisms for reducing the problems of parallelism to better understood sequential ones. Such a mindset will never lead to massively parallel algorithms or systems. The ideal is massively parallel algorithms and systems with no synchronization at all.
· Testing and testability: Both of these have been stepchildren, and even denigrated in academia ("... make it correct in the first place ..."). My experience in industry suggests that there is a great deal to be gained by (1) making testability a first order concern during design, and (2) making testing an integral part of the implementation process. This is by no means a complete list; hopefully it will provide some fodder for thought.
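The ideal of parallelism with no synchronization at all is easiest to see in data-parallel maps over independent elements. A minimal sketch (Python stands in for whatever language such systems would actually use; the pixel example is invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# A synchronization-free computation: each element is processed
# independently, so no locks, barriers, or shared mutable state are
# needed. Any element may run on any worker in any order.
def brighten(pixel):
    return min(pixel + 40, 255)

pixels = [0, 100, 200, 250]
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(brighten, pixels))  # output order is preserved
```

Because the workers share nothing, correctness does not depend on scheduling, which is precisely why such formulations scale where lock-based ones do not.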
89 ANDRES G. ZELLWEGER Introduction The following two position statements provide, from my perspective, some of the motivation for the recommendations that came out of the Complex Software Systems Workshop. In general, the recommendations from the workshop provide a long term attack on the problems I have outlined below. I have suggested some additional avenues that may eventually offer us solutions to the problems most critical to me. I also believe that it is important that the software community, together with the military/industrial complex, take three actions that can provide much needed near term relief to the "software problem". These are (1) education of users and builders of large software systems to teach them how to approach the joint understanding of what a system can and should do; (2) institutionalization of the current state of the art in software development; and (3) initiation of a joint effort by the DoD, affected civilian government agencies, and industry to solve the problem of the incompatibility between prescribed DoD software development paradigms and what, in practice, we have learned works best. My Most Serious Problem with Software Development As the Corporate Chief Engineer, I am intimately concerned with CTA's ability to produce software. Our goal in the software area is precisely the goal stated in the Workshop Statement: the efficient production of quality software. To that end, we have been concentrating on improving the support infrastructure (QA, CM, software development standards, etc.), standardizing and strengthening our engineering environment and process for developing software, and developing a base of reusable software components. For the past year, I have been conducting periodic internal reviews of all projects in the company with the primary objective of improving the quality of the products we deliver to our customers. 
Interestingly enough, based on that experience, our most serious problem does not appear to be in the production process per se; rather it is in the cost and schedule increases due to changing user requirements. Despite the fact that CTA uses rigorous methods to analyze and define the computer human interface and refines the interface with rapid prototyping (both with extensive customer involvement), we still find that our customers "change their mind" about this interface, and thus also about the requirements for the system, as we go through preliminary and detailed design. While there are many contributing factors to this phenomenon, I suspect that the primary reason for this is that neither we nor our customers have a very good idea of what they want a system to do at the outset of a development project. There is clearly a lack of understanding, but, unfortunately, in most cases, the nature of the government acquisition process and the pressures of schedules force us to begin a system design anyway. As the design process progresses, our customers become smarter about how their application could be solved (and about the capabilities of modern computer technology) and we see the classical "requirements creep". Another contributing factor, particularly on large projects that take several years to complete, is a legitimate change in user needs that impacts software requirements. I see two complementary solutions to this problem. First, we must recognize that, in nearly all cases, a software system must change with the world around it from its conception to its demise. Software must therefore, by design, be inherently capable of evolution. We are beginning to learn how to build such software (see, for example, the FAA's "Advanced Automation System: Strategies for Future Air Traffic Control Systems" in the February 1987 issue of Computer), but I think we are just scratching the surface of a good solution. 
Second, just as the American public had to become accustomed to fast foods in order to achieve the significant breakthrough in the cost of food delivery, users of computer systems must learn that certain sacrifices need to be made to get their systems faster and for less money. Standard user interfaces, incremental delivery of capabilities, and the use of commercial packages that may not have all the bells and whistles a user would like are just a few examples of such sacrifices. Education, not only of the developers of software but also of users, is absolutely essential if we are to make progress in this area. Research into how special-purpose systems can be built from standard building blocks, or how tailored building blocks might be generated automatically, is beginning, but a great deal more is needed to make significant breakthroughs.

The Industry and Nation's Most Critical Problem with Software Production Today

The list of symptoms we hear every day, particularly in the aerospace/defense industry, is long: software doesn't meet user needs; software doesn't work as advertised; software fails; software is late; software costs more than the original estimate. At the same time, as technology advances, our appetites are growing. Systems are getting bigger and more complex. The life cycle is getting longer. Software safety is becoming more critical as we increase our dependence on computer systems.

The most critical near-term problem that must be addressed if we are to alleviate some of these symptoms is the replacement (and institutionalization) of the unwieldy waterfall model with a new software development paradigm. The waterfall model and all the DoD standards (especially 1521B, 2167, and 483) served as a vehicle to let industry build and the government specify and manage large software projects. Unfortunately, the waterfall model no longer works and has not been replaced with a new model and a set of compatible standards, development methods, and management practices. The most recent version of 2167 has taken away many of the waterfall constraints, but offers nothing to replace the paradigm. Software developers are essentially told to "tailor".
This freedom is, in some ways, an advantage for the more sophisticated developers, but it places a severe burden on the government and the majority of the developers of large aerospace/defense systems. Typical questions are: What documentation should be developed? How should we deal with project reviews, test, documentation, cost, and schedule when reusable parts or prototyping are involved? How do we implement "build a little, learn a little" and still get good documentation and coherent designs? What should be my project milestones and when should they occur? How do I evaluate the cost of the proposed approach and quantify the impact of the approach on risk? And so on.

Over the past decade we (the software community) have learned a great deal about what works and what doesn't work in software development. The acceptance of Ada, and with it a renewed emphasis on good software engineering practice, also needs to be factored into the software development process (e.g., more design time, different testing strategies). Several paradigms that incorporate what we have learned (the Barry Boehm software spiral is perhaps the best known) have been proposed, but as a community we must now take the next step and adopt and institutionalize one or more paradigms so that the aerospace/defense industry can once again set up standardized "factories" to build software in a manner that is compatible with government specifications, standards, deliverables (documentation), and well-defined schedules and review points. The inherent need for this solution stems from the way in which the government procures its software. Perhaps a longer-term (and better?) solution is possible if we change the way government goes about the software procurement and management process.
I believe that a fruitful avenue of research would be the exploration of new ways of specifying and buying software and the impact of this on the way that aerospace and defense contractors could approach the development of large software systems.
ARTHUR I. ZYGIELBAUM

My Worst Problems with Software

Software development has challenged me just as it has other developers and managers. We have an inability to correctly and accurately predict, monitor, and control cost and schedule during the development process and the process of sustaining engineering for significant and complex software projects. Further, most software products are tied to particular groups of people and usually to one or two "gurus." The "attacks" we have made on the problem are classic and include, in the past, creation and rigorous enforcement of software standards, the use of "standard" languages, and tools to aid in monitoring the development process. But we've discovered that the impact of any of these elements is difficult to ascertain.

The Jet Propulsion Laboratory (JPL) took strong steps four years ago to create a software resource center (SORCE) to help change the practice of software engineering at the Laboratory. Charged with the task of training managers and engineers, evaluating tools, consulting, and maintaining software standards, SORCE is beginning to improve the process. JPL supports this effort at about $2M per year of internal funding. SORCE has also begun to collect software metrics and to use them to develop a "corporate" memory of success and failure. As strong as the SORCE effort is, we still suffer from significant overruns in cost and time in trying to meet our commitments.

Fred Brooks, in his paper "No Silver Bullet," predicted this outcome. Brooks identified two types of software development difficulties. The first was the set of problems created in trying to improve the process. For example, a new language may reduce overall complexity over an earlier language, but will introduce new difficulties through syntax ambiguity or lack of rigor in type checking, etc. These accidental difficulties are solvable through careful design or procedure. The second class of errors is inherent in the process.
Software is hard to do correctly. There are few human endeavors as difficult to grasp as a complex program or set of programs. The relations, processes, and purposes of the elements of a program are difficult to describe and thus difficult to use as construction elements. Creating tools, methods, or magic to solve these difficulties is extremely hard. Another symptom of the problem is an inability to discover the initial set of requirements that leads to a specification for software development. It has been said that the programmer becomes the system engineer of last resort. Being unable to completely specify the design in a closed, measurable form, we tend to leave design decisions to the last possible stage in the development process. It is difficult to manage a process when the end goal cannot be adequately described!

Underlying the difficulties I identify is the lack of an extensible scientific basis for the process called software engineering. Dr. Mary Shaw of the Software Engineering Institute and Carnegie Mellon University very articulately described this through analogy with other professional engineering disciplines. She describes three stages of evolution in a practice before it is really engineering. The first is "craft." Bridge building was a craft when first practiced. There was little regard for the cost of natural resources, and the practice of building was left to "gurus" who could pass the knowledge to a few others. Extension or improvement tended to come through accident rather than through development. The second stage is "commercial." Here there is refinement of practice to a point where economic use of resources is possible. The process knowledge is more widely known, but extensions and improvements still tend to come from exercise and accident. During this stage a scientific basis for the discipline begins to evolve.
For bridge building, this was the application of physics to understanding the building materials and the structures made from those materials. Eventually practice and the evolving science become joined into an engineering discipline. The scientific basis allows prediction of the process and improvement through systematic methods. Further, the scientific theories themselves are extensible, which results in further process and product improvements.
In my opinion, the key to the future of software is in the development of this underlying theory. It took many hundreds of years for civil engineering. It will take decades for software. But we need to make the commitment now. Developing this theory requires experiments with programmers, engineering and management teams, and the gathering of metrics. These studies must be made both at the individual practitioner level and in "programming-in-the-large," covering groups of engineers and managers. The effort requires the use of rigorous scientific practice in the development, testing, and refinement of hypotheses leading to theory. The Complex Software Systems Workshop would be an excellent forum to provide a national focus on the need to develop a science to underlie software engineering.

Industry and National Problems with Software

Rather than the usual views on the difficulty of producing software, I would like to look at the difficulty of using software and our increasing dependence on software. During a recent trip to Washington, I was amused and dismayed to overhear a confrontation between an irritated customer and a clerk of a large national rent-a-car chain. It seems that the customer had signed a contract specifying a particular rental rate for a car. When he returned the car, he paid for it with a credit card. It was not the credit card identified in his "preferred customer" identification. Since the agency's computer could not link the card to the customer, the rate printed on the invoice was higher than that in the original contract. The customer, correctly, I think, argued that the method of payment should not affect the rate. The rate was tied to his frequent use of the agency and to his corporate standing. The clerk said that she had no choice but to charge the rate quoted by the computer.
I was similarly amused by a recent television show where the intrepid detective told his boss that the evidence gathered was accurate and correct because they "checked it in THE computer." Computers are becoming an increasingly pervasive part of our lives. There is little that we do that is not affected or enabled by them. But as demonstrated by my examples, the computer can do more than automate a process. It can replace that process with an inaccurate representation. Naturally, the representation is defined in software.

Capturing requirements and understanding the implications of a particular software architecture are difficult. Hence the rent-a-car rules and regulations created by software difficulties. And hence the artificial increase in the cost of a product. A recent study of students was reported wherein computer software was set to make predictable errors in simple calculations. Most of the students accepted the answers without relying on common sense. We readily assume the correctness of the computer. Computer accuracy is largely dependent on the software implementation. Errors can lead to amusement or to disaster. I am quite sure that the reader can conjure or remember similar examples. We also need to consider errors which are induced maliciously through worms, viruses, and direct manipulation. Let me not carry this further except to note that while we are concerned with the difficulty and cost in developing software, the larger problem may be in the cost of using the software. And this cost may be in terms of money, time, property, and human life.

There are several elements required to solve this problem. The first is education. That is, we must provide mechanisms to assure that operators and users of information systems understand the basic processes being automated through the system. The second is in providing adequate means and methods for identifying the requirements for the system and in deriving a correct specification for development.
The third is in providing adequate testing and prototyping. The fourth, and not final, element is in developing systems which maintain the integrity of both software and information.