
Critical Code: Software Producibility for Defense (2010)

Suggested Citation:"5 Reinvigorate DoD Software Engineering Research." National Research Council. 2010. Critical Code: Software Producibility for Defense. Washington, DC: The National Academies Press. doi: 10.17226/12979.

5
Reinvigorate DoD Software Engineering Research

In this chapter, the committee summarizes and recommends seven technology research areas as critical to the advancement of defense software producibility. These seven areas were identified on the basis of the following considerations:

  • Priorities identified from the analysis reported in the foregoing chapters: architecture, incremental process, measurement, and assurance. These priorities build on extensive interviews with leaders from the DoD, industry, and the research community regarding both challenges and potential opportunities for the DoD.

  • Areas of potential technology and practice that might not otherwise develop sufficiently rapidly without direct investment from the DoD. Although other agencies are investing in areas related to software producibility, the focus and approach to investment do not sufficiently address the priorities as identified above.

  • Potential for a fleshed-out program proposal to satisfy research management “feasibility” criteria such as the Heilmeier questions (see Box 5.1), which identify a set of “tests” for research program proposals1—that is, areas where investment is most likely to lead to a return that benefits the DoD.

  • Areas not sufficiently addressed by other major federal research sponsors, including the Networking and Information Technology Research and Development (NITRD) agencies.

Prefacing this summary of areas recommended for future research investment is an exploration of the role of academic research in software producibility and a discussion of the impacts of past investments. The chapter also includes a brief discussion regarding effective practice for research program management to maximize impact while managing overall programmatic risk.2

1

There are many versions of the questions; one such version can be found in Box 5.1.

2

Indeed, there is a parallel between programmatic risk in the development of innovative software and programmatic risk in research program management. More important, perhaps, is the analogy between engineering risk in innovative software development and management risk in research program management. Several kinds of research management risk and various approaches to management risk mitigation are identified in Chapter 4 of National Research Council (NRC), 2002, Information Technology Research, Innovation, and E-Government, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=10355. Last accessed August 20, 2010.


THE ROLE OF ACADEMIC RESEARCH IN SOFTWARE PRODUCIBILITY

The academic research community—along with a small number of industry research groups—has traditionally addressed many of the core technical problems related to software producibility. The academic value proposition has several direct components: The first is workforce. University graduates are the core of the engineering workforce. The most talented and highly trained graduates—those who contribute to innovation in a primary way—tend to come from PhD programs. More generally, the research community generates a steady supply of people—graduates at all levels—educated at the frontiers of current knowledge in important areas of specialization. The economics of these programs depend on externally funded research projects. That is, unlike bachelor’s and master’s enrollments, the production of PhD graduates by universities is in direct proportion to sponsored research. It is perhaps too obvious to point this out, but cleared individuals with top technical qualifications are most likely to be graduates of U.S. universities.

The second component is new knowledge. The style of computer science and software research, historically, has focused on the creation of scientific understanding that is both fundamental and applicable. This is in keeping with the “boundlessness” of software as described in Chapter 1.3 Although industry plays a limited role in performing research relevant to fundamental open problems, there is no institution in the United States other than the research community, located primarily at universities, that focuses on broad and often non-appropriable advancements to knowledge that are directly relevant to practice. Indeed, major corporate labs that have historically supported non-appropriable and open-publication research as a significant part of their overall portfolios (such as Bell Labs and Xerox PARC) have been restructured or scaled back in recent years. This scaling back of private-sector research is due to numerous factors, including a loss by many players of safe monopoly status, analogous to that which enabled Bell Labs to thrive. This creates greater internal pressure on laboratory managers to create measurable return on investment (ROI) cases for research projects. This is particularly challenging for software producibility research, which is often focused on creating new measures of “return” rather than on incremental advances according to readily measurable criteria. This increases the significance of the role of academic research, government laboratories, and federally funded research and development centers (FFRDCs). This is not to say that major research effort in software producibility is not underway in industry. At Microsoft and IBM, particularly, there is aggressive and forward-looking work in this area that is having significant influence across the industry.

Academic research and development (R&D) is also a major generator of entrepreneurial activity in information technology (IT).4 The small companies in that sector have an important role in developing and market testing new ideas. The infrastructure to support these ventures is an important differentiator of the U.S. innovation system. This infrastructure includes university intellectual property and people supported by university R&D projects. These companies may sometimes disrupt the comfortable market structures of incumbent firms, but arguably not in the same way as do competition or foreign innovation. Regardless, weak incumbents tend to fall by the wayside when there is any disruption. Strong incumbents become stronger. This constant disruption is a characteristic of the more than half-century of IT innovation. It is essential that the DoD itself be effective as a strong incumbent that is capable of gaining strength through disruptive innovations, rather than being a victim (see below). The intelligence community’s Disruptive Technology Office (DTO, now part of the Intelligence Advanced Research Projects Activity5) can be presumed to have been founded upon this model.

A third area of value provided by university-based R&D (and industrial lab R&D as well) is surprise reduction. Computing technology is continuing to experience very rapid change, at a rate that has been

3

This is the fundamental yet eventually useful knowledge in what Donald Stokes has called Pasteur’s Quadrant. See Donald E. Stokes, 1997, Pasteur’s Quadrant—Basic Science and Technological Innovation, Washington, DC: Brookings Institution Press.

4

The committee uses “information technology” or “IT” to refer to the full range of computing and information technology areas in the scope of the NITRD multi-agency coordination activity (see http://www.nitrd.gov/. Last accessed August 20, 2010).

5

See http://www.iarpa.gov/. Last accessed August 20, 2010.


BOX 5.1

Heilmeier Criteria

When George Heilmeier was DARPA director in the mid-1970s, he developed a set of pithy questions to ask research program managers when they proposed new program ideas. That set of questions has persisted, and it continues to be applied in various forms by research managers everywhere. Here is a composite rendering of the questions, along with some commentary regarding research program management.

  1. What are you trying to do? Explain objectives using no jargon. The scope of the project must be defined, as well as the key stakeholders in the outcome. The purpose of “no jargon,” in part, is to ensure that the scope and value can be described in ways that let individuals outside the immediate field appreciate the context and value of what is proposed.

  2. How is it done today? What are the limits of current practice? This is an accounting of the baseline state, the value it delivers, the limits on what can be done in the present configuration, and, to some extent, the pain experienced as a consequence of those limits.

  3. What’s new in your approach? Why do you think it will be successful? Often the novelty lies less in a dramatically “new idea” than in the convergence of existing ideas and new developments elsewhere in the field. A cynical view of “cloud computing,” for example, is that it is a delivery on the dream of “utility computing” articulated in the early 1960s at the dawn of the era of timesharing. Cloud computing, of course, takes this idea many steps forward in scalability, capability, and other ways. In other words, it is less important that the idea be “novel” than that it be timely, potentially game changing, and feasible. Feasibility, in this context, does not mean free of risk, but rather that the dependencies on infrastructure and other elements of the package are realistic. Feasibility also means that there are potential research performers who have the means and motive to engage on the topic. For academic research, this means the ability to build a capable team of PhD students, engineering staff as required, potential transition partners, collaborators at other institutions, etc.

  4. If you’re successful, what difference will it make? To whom? This is an identification of stakeholders, and in addition an indication of potential pathways from research results to impact. For many research projects related to computing and software, those pathways can be complex. These complexities are discussed in the

undiminished for several decades and perhaps is accelerating because of a now-global involvement in advancing IT. Given the rapid change intrinsic to IT, the research community (in academia and in industry, especially start-up companies) serves not only as a source of solutions to the hardest problems, a source of new concepts and ideas, and a source of trained people with high levels of expertise, but also as a bellwether, in the sense that it anticipates and provides early warning of important technological changes. For software, the potential for surprise is heightened by a combination of the rapid growth of globalization, the concurrent movement up the value chain of places to which R&D has been outsourced, and the explicit investments from national governments and the European Union in advancing national technological capability. Given the role of externalities in IT economics, it is not unreasonable to expect the innovation center of gravity to change rapidly in many key areas, which could shift control in critical areas of the technology ecosystems described above. This is already happening in several areas of IT infrastructure, such as chip manufacturing. In this sense, the research community has a critical role in defense-critical areas that are experiencing rapid change. A consequence of this role is the availability of top talent to address critical software-related defense problems as they arise.

The fourth component of the academic R&D value proposition is non-appropriable invention, as described in Chapter 1. This is one of the several forms of innovation carried out by the university

    NRC “tire tracks” reports.1 For software, the path often connects the research results to the DoD through the development of commercial capabilities, where private investment takes a promising research idea and matures it to the point that it can be adopted by development teams. This adoption could be by software development teams in defense contractors or it could be by development teams creating commercial products or services. For example, the reliability of DoD desktop computers undeniably was improved, quite dramatically, as a result of the improvements made by Microsoft to the process of development and evaluation for device driver code enabled by the SLAM tool (described elsewhere in this chapter), which in turn were enabled by research sponsorship from DARPA and NSF. In addition to defining the impact, there is value in understanding not only those stakeholders who will benefit, but also those who may be disrupted in other ways.

  5. What are the risks and the payoffs? This is not only an accounting of the familiar “risk/reward” model, but also an indication of the principal uncertainties, how (and when) they might be mitigated, and the rewards for success in resolving those uncertainties.2

  6. How much will it cost? How long will it take? An important question is whether there are specific cost thresholds. For certain physics experiments, for example, either the apparatus can be built, or not. But for other kinds of research there may be more of a “gentle slope” of payoff as a function of level of effort. The answer to the questions of cost and schedule, therefore, should not only be specific numbers, but also, in many cases, should provide a description of a function that maps resources to results.

  7. What are the midterm and final “exams” to assess progress? It is essential that there be ways to assess progress, not only at the end of a project, but also at milestones along the way. (This is analogous to the idea of “early validation” of requirements, architecture, design, etc., as a way to reduce engineering risk in software.) In many research areas, quantitative measures of progress are lacking or, indeed, their formulation is itself the subject of research. For this reason, in some challenging research areas the identification

  

1 See National Research Council (NRC), 1995, Evolving the High Performance Computing and Communications Initiative, Washington, DC: National Academy Press; and NRC, 2003, Innovation in Information Technology, Washington, DC: National Academies Press.

  

2 An inventory of “engineering” risks related to research program management is in the NRC report on E-Government: National Research Council, 2002, Information Technology Research, Innovation, and E-Government, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=10355. Last accessed August 3, 2010.

research community. In a market economy, with internal ROI cases prerequisite for R&D investment inside firms, this is a role most appropriate to universities and similar institutions—of course firms often carry out or sponsor such innovation for a variety of reasons, but it is not their core purpose. For IT in particular, such R&D is essential to national competitiveness and to increases in market-wide value. Although the openness of university research is sometimes considered a negative factor with respect to the advancement of technology for national security, it is also the case that universities have unique incentives, unlike industry, to advance the discipline even when the hard-won results are non-appropriable or difficult to fully appropriate. As noted above, it is evident from the history of the field that the advancement of IT and software producibility disproportionately depends on this kind of technology advance. Of course, universities also create an enormous body of appropriable intellectual property that has the potential to be transitioned into practice.


Finding 5-1: Academic research and development continues to be the principal means for developing the most highly skilled members of the software workforce, including those who will train the next generation of leaders, and for stimulating the entrepreneurial activity that leads to disruptive innovation in the information technology industry. Both academic and industry labs are creating


the fundamental advances in knowledge that are needed to drive innovation leadership in new technologies and to advance software technologies that are broadly applicable across industry and the DoD supply chain.

DoD Influence on Academic R&D

The overall directions and priorities for sponsored research that leads to university-originated invention are greatly influenced by funding levels and agency priorities. For example, the Defense Advanced Research Projects Agency’s (DARPA’s) deliberately strong relationship with the IT research community, which began in the 1960s and endured for nearly 40 years, has had profound influence on IT research priorities, the overall culture of computer science research, and the massive economic and national outcomes. This is documented in multiple NRC reports relating to the innovation pipeline for IT, which trace the origins of a broad set of specific IT innovations, each of which has led to a multibillion dollar market.6

Data available from NITRD and other sources indicate that there has been a significant reduction in federally sponsored research related to software producibility as well as to high-confidence software and systems (see Box 1.5). Furthermore, it is the committee’s impression that in recent years many of the researchers in these areas have moved into other fields or scaled down their research efforts as a result of, among other things, the DoD’s having shifted funding away from software-related R&D, apparently on the assumption that industry can address the problems without government intervention. As stated previously, industry generally has less incentive to produce the fundamental advances in knowledge that enable disruptive advances in practice; it builds on fundamental advances but less often creates them. The impact of R&D cutbacks generally (excluding health-related R&D) has been noted by the top officers of major IT firms that depend on a flow of innovation and talent.

Academic R&D, Looking Forward

There are some challenges to proceeding with a new program for academic R&D related to software-intensive systems producibility. These challenges relate generally to saliency, realism, and risk. University researchers and faculty tend to be aware of broadly needed advances, but they do not always have adequate visibility into the full range of issues created by leading demands for large-scale, complex industrial and military systems. This awareness is hindered by many things, including national security classification, restricted research constraints, professional connectivity, and cost, in the sense of time and effort required to move up the learning curve. In a different domain, DARPA took a positive step in this regard by initiating the DARPA Computer Science Study Group, wherein junior faculty are given clearances and so are able to gain direct exposure to military challenge problems. Several specific DoD programs have undertaken similar efforts to give faculty a domain exposure, often with great success. One example from the 1990s is the Command and Control University (C2U) created by the Command Post of the Future (CPOF) program, which not only gave researchers access to military challenges, but also led to collaborations yielding new innovation in system concepts.7

With respect to ensuring that researchers have access to problems at scale, companies such as Google and Yahoo!, and national laboratories such as Los Alamos, have developed collaborative programs to expose faculty and graduate students to high-performance computing systems, large datasets, and the software approaches being taken with those systems. These companies, like the DoD, have worked out

6

See NRC, 2003, Innovation in Information Technology, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=10795. Last accessed August 20, 2010. Also see the predecessor report NRC, 1995, Evolving the High Performance Computing and Communications Initiative, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=4948. Last accessed August 20, 2010.

7

The committee understands that prototype systems from this program are now deployed in Iraq.


a level of exposure that enables researchers to engage productively without compromising core intellectual property. The DoD has a track record of success in this regard as well.

For software producibility research, a different kind of access is needed. Certainly, the success of large-scale production-quality open source has afforded researchers great opportunity not only to experiment with large code bases, but also to undertake longitudinal and organizational analyses of larger projects. This has been enabled by the sophistication of the tools—code management systems, defect databases, designs and models, test cases. These projects are comparable in scale and functionality to commercial software and have greatly assisted the software engineering community in its research. Additionally, commercial firms are affording researchers greater access to proprietary code bases for experimentation and analysis. An early and significant example is work by Gail Murphy in which she assessed consistency of an as-built code base with architectural intent.8 She studied both an open source project and a proprietary project. If security and commercial ownership issues could be resolved (perhaps by clearing selected researchers), members of the research community would benefit greatly from access to DoD-related artifacts, including surrogates and “sanitized” artifacts that omit critical algorithms and/or data. Regardless of access, the committee recommends improved data collection to support analysis (see Recommendation 2-2).
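Murphy’s reflexion-model technique can be sketched in a few lines (the component names, module names, and mapping below are hypothetical, invented purely for illustration): an architect states the intended dependencies among high-level components, maps source modules onto those components, and the analysis classifies each as-built dependency as convergent (predicted and present), divergent (present but not predicted), or absent (predicted but missing).

```python
# Sketch of a software reflexion model (all names hypothetical).
# Intended architecture: which components may depend on which.
intended = {("UI", "Logic"), ("Logic", "Storage")}

# Mapping from source modules to architectural components.
module_map = {"menu.c": "UI", "rules.c": "Logic", "db.c": "Storage"}

# Dependencies extracted from the as-built code (e.g., via call-graph analysis).
extracted = {("menu.c", "rules.c"), ("menu.c", "db.c")}

def reflexion_model(intended, module_map, extracted):
    """Lift source-level dependencies to the architectural level and classify them."""
    lifted = {(module_map[src], module_map[dst]) for src, dst in extracted}
    return {
        "convergent": sorted(lifted & intended),  # predicted and present
        "divergent": sorted(lifted - intended),   # present but not predicted
        "absent": sorted(intended - lifted),      # predicted but missing
    }

model = reflexion_model(intended, module_map, extracted)
# Here the UI -> Storage dependency is divergent: menu.c calls db.c
# directly, bypassing the Logic layer the architect intended.
```

Real reflexion tools operate over extracted call graphs and refine the module map iteratively as discrepancies are investigated; this sketch shows only the classification step at the core of the technique.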

INVESTING IN RESEARCH IN SOFTWARE PRODUCIBILITY

The Impact of Past Investments

Software development has changed and, for the most part, improved considerably during the past several decades. Software systems have grown in size and complexity and are now an integrated component of every aspect of our society, including finance, transportation, communication, and health care. Since the 1960s, Moore’s Law has correctly predicted the tremendous growth in the number of transistors on chips and, generally speaking, the extent of hardware-delivered computing power. An analogous growth has occurred in the size and power of software systems if machine-level instructions, rather than transistors, are the measure of growth.9,10,11 Today’s systems are built using high-level languages and numerous software library components, developed using sophisticated tools and frameworks, and executed with powerful runtime support capabilities.

Research in software engineering, programming technologies, and other areas of computer science has been a catalyst for many of these advances. Nearly all of this research was undertaken at research universities as part of federal programs led by DARPA, the National Science Foundation (NSF), and the Service basic (category 6.1) research programs of the Office of Naval Research, the Air Force Office of Scientific Research, and the Army Research Office.

Three illustrations of the impact of federal sponsorship (in academia and industry) that is specifically related to software engineering are presented in Box 5.2. These illustrations, drawn from a study undertaken by Osterweil et al.,12 complement the analyses of the NRC reports cited above relating to research impacts on practice and on the IT economy.

8

Gail Murphy, 1995, “Software Reflexion Models: Bridging the Gap Between Source and High-level Models,” Proceedings of the Third ACM SIGSOFT Symposium on Foundations of Software Engineering, Washington, DC, October 10-13, pp. 18-28.

9

Barry Boehm, 1999, “Managing Software Productivity and Reuse,” IEEE Computer September, 32(9):111-113.

10

Mary Shaw, 2002, “The Tyranny of Transistors: What Counts about Software?” Proceedings of the Fourth Workshop on Economics-Driven Software Engineering Research, IEEE Computer Society, Orlando, FL, May 19-25, pp. 49-51.

11

Barry Boehm, 2006, “A View of 20th and 21st Century Software Engineering,” Proceedings of the 28th International Conference on Software Engineering, ACM, Shanghai, China, May 20-28, pp. 12-29.

12

Leon J. Osterweil, Carlo Ghezzi, Jeff Kramer, Alexander L. Wolf, 2008, “Determining the Impact of Software Engineering Research on Practice,” IEEE Computer 41(3):39-49.


Challenges and Opportunities for Investment

Notwithstanding the enormous payoffs from past investments in software research, making the case for future research investments in software producibility faces a number of challenges, rooted largely in the nature of software development as a field of study. All scientific research fields face challenges in justifying future investments, but the unique characteristics of software and the dynamics of knowledge creation in software producibility create particular challenges for this field. There are, however, opportunities based on developments in the technology, in the overall environment of practice, and in the improvement of scientific practice. These challenges and opportunities influence the application of the criteria summarized at the outset of this chapter. Below are a few examples of influences, both positive and negative:

  • Maturation of the discipline. Many researchers will agree that, as a discipline, software engineering research has matured considerably in the past decade. This is a consequence of both improved research methods and improved circumstances. The circumstances include a vast improvement in access to large bodies of code, both through large-scale open-source projects and through improvement in researcher access to proprietary code bases. An additional circumstance is the emergence of highly capable tools, including source-code management systems, development environments, analysis frameworks, etc., that afford researchers opportunity to conduct experiments at meaningful levels of scale. The effect is that it is more often possible for software engineering researchers to give satisfactory responses to the Heilmeier questions (see Box 5.1). At the same time, software engineering practice remains behind the state of the art in research. As discussed in Chapter 1, software development remains more akin to a craft than to an engineering discipline, in which the productivity and trustworthiness of system development rest on fundamental and well-validated principles, practices, and technologies. And it is still the case that even sanitized representative software artifacts are not available for academic analysis in many defense areas.

  • Diffusion pathways and timescale. Many of the results of software research are broadly applicable and provide for enabling technologies and methods useful in a range of specific application domains. Breadth of applicability is valued in research, but it is also double-edged from the standpoint of sponsors. First, there is a greater chance that results may diffuse to adversaries as well as to collaborators. Second, there is a commons problem: because the benefits are broad, no particular stakeholder can justify the investments needed to produce them. Thus, for example, DoD Service R&D programs tend to focus much more on Service-specific technologies than on common-benefit software technology. Twenty years ago, the Service Laboratories played a significant part in maturing and transitioning software producibility technology, but the “tragedy of the commons” has virtually dried up this key channel. Moreover, advances in software producibility very often are enabling advances rather than being advances of immediate use in particular products. Better techniques for identifying, diagnosing, and repairing software faults, for example, enable production of better systems but are not directly used in particular software products. The value of such advances is thus often hard to quantify precisely for any single advance, or from the perspective of any single program. Yet when integrated over longer periods of time and in terms of impacts on many engineering products, the benefits of the stream of advances emerging from software research are very clear (as summarized above). In the case of defense software producibility, there are clear drivers of defense software “leading demand,” and there are ways that the DoD can invest in and realize benefits earlier and more effectively than can potential adversaries. Moreover the DoD remains a major beneficiary of the longer-term production of software producibility knowledge.

  • Novelty of ideas. It is noted earlier in this chapter that cloud computing, taken broadly, is really a manifestation of a half-century-old idea of “utility computing” that has just now become feasible due to the positions of the various exponential curves that model processor, storage, and communications capabilities and costs—as well as enabling engineering, management, and business innovation. This account is a bit simplistic, obviously, but it makes an essential point: The specific novelty of an idea


BOX 5.2

Three Examples of the Impact of Past Investments

  1. From bug detection to lightweight formal methods. Bugs have plagued software since before there were computers,1 and researchers have been actively working on developing tools to help detect and prevent errors in software systems for at least half a century. Early compilers focused on syntactic errors and simple debugging support, but soon tools were developed to detect more complex semantic errors. Simple definition-reference bug detection techniques2,3 were followed by more sophisticated approaches.4,5,6 Programming languages such as Ada, Java, and C# incorporated some of these concepts directly into the language, and thus, for example, checked for array indexes being out of bounds during compilation and added runtime checking only when necessary. This work laid the foundation for a range of model checking and program analysis tools that are now emerging at companies like Microsoft and Google as these companies increase their concern for secure, high-quality systems. Systems such as Microsoft’s SLAM and PreFAST are based upon the research advances funded by the federal government. For example, a report on SLAM states, “The project used and extended ideas from symbolic model checking, program analysis and theorem proving.”7 Those ideas emerged from academic research performed years earlier related to model checking and binary decision diagrams, and indeed Edmund Clarke won the 2007 Turing Award for his work on model checking. The authors of this tool, which has been credited with significantly reducing the incidence of “blue screen” system crashes, were awarded the 2009 Microsoft Engineering Excellence Award, a success that represents the culmination of federally funded research from the 1970s through the 1990s.
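The definition-reference analyses pioneered by tools such as DAVE can be illustrated with a minimal sketch (the program representation and variable names below are invented for illustration, not drawn from any real tool): each statement records which variables it defines and which it references, and the analysis flags references to never-defined variables and definitions that are never used.

```python
# Minimal sketch of definition-reference anomaly detection on a
# straight-line program (representation invented for illustration).
def def_ref_anomalies(statements):
    """statements: ordered list of (defined_vars, referenced_vars) pairs.
    Returns variables used before definition and definitions never used."""
    defined, referenced = set(), set()
    use_before_def = []
    for defs, refs in statements:
        for var in refs:
            if var not in defined:
                use_before_def.append(var)  # referenced with no prior definition
        referenced |= set(refs)
        defined |= set(defs)
    unused_defs = sorted(defined - referenced)  # defined but never referenced
    return use_before_def, unused_defs

# Models the program: x = 1; y = x + z; print(y); w = 2
program = [
    ({"x"}, set()),       # x = 1
    ({"y"}, {"x", "z"}),  # y = x + z  -- z was never defined
    (set(), {"y"}),       # print(y)
    ({"w"}, set()),       # w = 2      -- w is never referenced
]
anomalies = def_ref_anomalies(program)
# anomalies == (["z"], ["w"])
```

Production analyzers perform this classification over all paths of a control-flow graph rather than a single straight-line sequence, which is what makes the problem, and the later model-checking work it led to, substantially harder.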

    Early research on software testing advocated for coverage measures, such as statement and branch coverage, and tools were developed for symbolically executing paths in programs and automatically generating test cases to satisfy such measures.8,9,10 The storage and speed of the machines at that time made this approach impractical, but advances in hardware combined with continued research advances in lightweight reasoning engines and higher-level languages have now made coverage monitoring a

  

1 Letters from Ada Lovelace to Charles Babbage discussing programming errors are mentioned in Grady Booch and Doug Bryan, 1993, Software Engineering with Ada, 3rd Ed., Boston: Addison-Wesley Professional. Also see Grace Murray Hopper’s note in the log for the Aiken Mark II in 1947, in Grace Murray Hopper, 1981, “The First Bug,” Annals of the History of Computing 3(3):285-286.

  

2 Leon J. Osterweil and Lloyd D. Fosdick, 1976, “Some Experience with DAVE: A Fortran Program Analyzer,” in Proceedings of the National Computer Conference and Exposition, ACM, New York, NY, June 7-10, pp. 909-915.

  

3 Barbara G. Ryder, 1974, “The PFORT Verifier,” Software: Practice and Experience 4(4):359-377.

  

4 Kurt M. Olender and Leon J. Osterweil, 1990, “Cecil: A Sequencing Constraint Language for Automatic Static Analysis Generation,” IEEE Transactions on Software Engineering 16(3):268-280.

  

5 Edmund M. Clarke and E. Allen Emerson, 1981, “Synthesis of Synchronization Skeletons for Branching Time Temporal Logic,” pp. 52-71 in Logic of Programs: Workshop, Lecture Notes in Computer Science 131, Berlin: Springer.

  

6 Gerard J. Holzmann, 1997, “The Model Checker SPIN,” IEEE Transactions on Software Engineering 23(5): 279-295.

  

7 Thomas Ball, Byron Cook, Vladimir Levin, and Sriram K. Rajamani, 2004, “SLAM and Static Driver Verifier: Technology Transfer of Formal Methods Inside Microsoft,” Lecture Notes in Computer Science (LNCS) 2999:1-20; Eerke Boiten, John Derrick, and Graeme Smith, eds., 2004, Fourth International Conference on Integrated Formal Methods (IFM 2004), Canterbury, Kent, England, April 4-7. Researchers at Microsoft have stated that the majority of the “blue screen of death” errors evident in the 1990s were attributed to problems that could have been prevented with this analysis tool.

  

8 Lori A. Clarke, 1976, “A Program Testing System,” in Proceedings of the 1976 ACM Annual Conference, ACM, Houston, TX, October 20-22, pp. 488-491.

  

9 James C. King, 1975, “A New Approach to Program Testing,” in Proceedings of the International Conference on Reliable Software, ACM, Los Angeles, CA, April 21-23, pp. 228-233.

  

10 Robert S. Boyer, Bernard Elspas, and Karl N. Levitt, 1975, “SELECT—A Formal System for Testing and Debugging Programs by Symbolic Execution,” in Proceedings of the International Conference on Reliable Software, ACM, Los Angeles, CA, April 21-23, pp. 234-245.

    common industrial practice, along with sophisticated support for test case generation.11 Similarly, current trends in testing, such as the “Test First” approach widely adopted by agile software development teams (and tools such as the JUnit unit-testing framework), owe their foundation to early work on test case descriptions and automated execution,12,13 again based on U.S. government-funded research.
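The “Test First” discipline can be sketched in a few lines. The `parse_version` helper and its tests below are invented for illustration, but the ordering (the test, written first, acts as the specification) is the essence of the practice:

```python
import unittest

# The test is written first and serves as the executable specification...
class TestParseVersion(unittest.TestCase):
    def test_splits_dotted_version(self):
        self.assertEqual(parse_version("3.10.2"), (3, 10, 2))

    def test_rejects_empty_string(self):
        with self.assertRaises(ValueError):
            parse_version("")

# ...and only then is the minimal implementation written to make it pass
# (run the tests with: python -m unittest <module>).
def parse_version(text):
    if not text:
        raise ValueError("empty version string")
    return tuple(int(part) for part in text.split("."))
```

Python’s `unittest` module follows the same xUnit design as JUnit, so the idiom transfers directly across languages.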

    Programming language development has also been strongly influenced by work in analysis of software systems, as noted above with Ada and its support for type safety and automated bounds checking. Although Ada was not a broad commercial success for various political, programmatic, and socioeconomic reasons, it is recognized as the direct ancestor of Java, which is widely adopted partly because of its embodiment of lessons from type theory, program analysis, and programming environment research. These lessons enabled Java to support richly capable libraries and software frameworks (as noted in Chapter 1). The C# language from Microsoft builds on similar foundations, and all three languages provide a stronger foundation for writing secure and high-quality code.

  2. From development environments to software architectures to domain-specific frameworks. The success of Java is also partly due to the recognition of the importance of an integrated development environment (IDE). Early development environments were language centric, such as Interlisp14 from 1981, but continued government-supported research, such as Field15 and Arcadia,16 advocated for looser interaction models and broad support for interoperability. This led to work on common data interchange models,17 the forerunners of XML and all its variants; multi-language virtual machine models, such as the Java Virtual Machine; and common interoperability protocols, such as Java’s remote method invocation (RMI) and certain features of Microsoft’s .NET framework. These advances, combined with the language principles, enabled the development of modern IDEs such as Microsoft’s Visual Studio and Eclipse, originally developed by IBM but later released to open source.

    As software systems continued to grow in size and complexity, software engineering research broadened from algorithm and data structure design to include software architecture issues18,19 and the recog

  

11 Dorota Huizinga and Adam Kolawa, 2007, Automated Defect Prevention: Best Practices in Software Management, Hoboken, NJ: Wiley-IEEE Computer Society Press.

  

12 Phyllis G. Frankl, Richard G. Hamlet, Bev Littlewood, and Lorenzo Strigini, 1998, “Evaluating Testing Methods by Delivered Reliability,” IEEE Transactions on Software Engineering 24(8):586-601.

  

13 Phyllis G. Frankl, Richard G. Hamlet, Bev Littlewood, and Lorenzo Strigini, 1997, “Choosing a Testing Method to Deliver Reliability,” in Proceedings of the 19th International Conference on Software Engineering, ACM, Boston, MA, May 17-23, 1997, pp. 68-78.

  

14 Warren Teitelman and Larry Masinter, 1981, “The Interlisp Programming Environment,” Computer 14(4):25-33.

  

15 Steven P. Reiss, 1990, “Connecting Tools Using Message Passing in the Field Environment,” IEEE Software 7(4):57-66.

  

16 Richard N. Taylor, Frank C. Belz, Lori A. Clarke, Leon Osterweil, Richard W. Selby, Jack C. Wileden, Alexander L. Wolf, and Michael Young, 1989, “Foundations for the Arcadia Environment Architecture,” ACM SIGSOFT Software Engineering Notes 24(2):1-13.

  

17 David Alex Lamb, 1987, “IDL: Sharing Intermediate Representations,” ACM Transactions on Programming Languages and Systems 9(3):297-318.

  

18 Mary Shaw and David Garlan, 1996, Software Architecture: Perspectives on an Emerging Discipline, Upper Saddle River, NJ: Prentice Hall.

  

19 Dewayne E. Perry and Alexander L. Wolf, 1992, “Foundations for the Study of Software Architecture,” ACM SIGSOFT Software Engineering Notes 17(4):40-52.

    nition of common styles and design patterns.20 Work in software architecture was also an enabler of the development of high-level frameworks, such as Service-Oriented Architectures21 for electronic enterprise systems, the backbone of current e-business. Commercial architecture standards such as REST22 derive from government-supported software architecture research.

  3. From the waterfall to an agile compromise. Early software developers often viewed themselves as independent artisans because they worked individually or in very small groups. The complexity of the systems being developed, their long duration, the vast resources required, and the large percentage of unsuccessful projects led to the realization that large software system development needed to be supported by a carefully managed process. Early process models, particularly the waterfall model,23 were developed as frameworks to help organize the considerable pre-implementation and modeling work, and within them the major software development phases were identified. The actual flow from phase to phase was sometimes interpreted overly simplistically, leading to process models (e.g., the DoD 2167A standard) that are now considered cumbersome and overly rigid. Software leaders in academia and industry, such as Belady, Lehman, Mills, Boehm, and others, argued for more reasoned development models that incorporated risk assessment and incremental, evolutionary development.24,25,26 These models contained the seeds of the iterative ideas that are now nearly ubiquitously adopted by small development teams throughout industry, documented in the case of Microsoft by Cusumano and Selby27 and in the now-extensive literature of small-team iterative methods under rubrics such as extreme programming, agile, Scrum, TSP, and others. These methods are aggressively driving the development of tools to better support team activity, including coordination across teams to support larger projects. Concepts such as code refactoring, short development sprints, and continuous integration are now accepted practices. However, most agile practices have serious assurance and scalability problems28 and need to be used selectively in large mission-critical systems or systems of systems.

  

20 Martin Fowler, 2002, Patterns of Enterprise Application Architecture, Boston: Addison-Wesley Longman Publishing.

  

21 Michael Bell, 2008, “Introduction to Service-Oriented Modeling,” Service-Oriented Modeling: Service Analysis, Design, and Architecture, Hoboken, NJ: Wiley & Sons.

  

22 Roy T. Fielding and Richard N. Taylor, 2002, “Principled Design of the Modern Web Architecture,” ACM Transactions on Internet Technology 2(2):115-150.

  

23 Winston W. Royce, 1970, “Managing the Development of Large Software Systems: Concepts and Techniques,” Technical Papers of Western Electronic Show and Convention (WesCon), August 25-28, Los Angeles, CA.

  

24 Laszlo Belady and Meir M. Lehman, 1985, Program Evolution: Processes of Software Change, London, UK: Academic Press.

  

25 Barry Boehm, 1986, “A Spiral Model of Software Development and Enhancement,” ACM SIGSOFT Software Engineering Notes 11(4):14-24.

  

26 Harlan Mills, 1991, “Cleanroom Engineering,” American Programmer, May, pp. 31-37.

  

27 Michael A. Cusumano and Richard W. Selby, 1995, Microsoft Secrets: How the World’s Most Powerful Software Company Creates Technology, Shapes Markets, and Manages People, New York: Harper Collins Business.

  

28 Barry Boehm and Richard Turner, 2004, Balancing Agility and Discipline, Boston: Addison-Wesley.


may matter much less than the timeliness of the idea and the readiness of the environment to address it in a successful way. Many old ideas, once dismissed as failures, resurfaced years later at the “right time” and made a significant difference. In other words, the key question is not so much, What are the new ideas? but rather, What are the ideas whose time has come?

  • Measurement of effectiveness and performance. The challenges of software measurement as discussed in the previous chapters—with respect to process measures, architecture evaluation, evidence to support assurance, and overall extent of system capability—apply also to software engineering research. We lack, for example, good ways to measure the impact of any specific research result on software quality, which stems in part from the lack of good measures of software quality. Without reliable, validated measures it is hard to quantify the impact of innovations in software producibility, even those that are widely credited with improving quality, such as the introduction of strong typing into programming languages or traceability in software-development databases. This is analogous to the productivity paradox, recently resolved.13 Because software is an enabling technology—a building material rather than a built structure—it may not fit with research program management models that focus on production of artifacts with immediately, clearly, and decisively measurable value.

  • Timescale for impact. Frequently, it is only after a significant research investment has been made and proof of concept demonstrated that industry has stepped in to transition a new concept into a commercial or in-house product. Also, there are many novel products/services that result from multiple, independent research results, none of which is decisive in isolation, but which when creatively combined lead to breakthroughs. Although it may appear that a new development emerged overnight, further inspection usually reveals decades of breakthroughs and incremental advances and insights, primarily funded from federal grants, before a new approach becomes commonly accepted and widely available. CSTB’s 2003 report Innovation in Information Technology reinforces this point. It states, “One of the most important messages … is the long, unpredictable incubation period—requiring steady work and funding—between initial exploration and commercial deployment. Starting a project that requires considerable time often seems risky, but the payoff from successes justifies backing researchers who have vision.”

AREAS FOR FUTURE RESEARCH INVESTMENT

In this section, the committee identifies seven areas for potential future research investment and, for each area, a set of specific topics that the committee identifies as both promising and especially relevant to defense software producibility. These selections are made on the basis of the criteria outlined at the beginning of this chapter. The descriptions summarize scope, challenges, ideas, and pathways to impact. But, obviously, these descriptions are not (even summary) program plans—the development of program plans from technical descriptions requires consideration of the various program management risk issues,14 development of management processes and plans on the basis of the risk identification, identification of collaborating stakeholders, and other program management functions. In the development of program plans, choices must be made regarding scale of the research endeavor and the extent of prototype engineering, field validation, and other activities that are required to assess the value of emerging research results. In some areas, a larger number of smaller projects may be most effective, while in other areas more experimental engineering is required and the research goals may be best addressed

13

This is analogous to the so-called “productivity paradox,” according to which economists struggled to account for the productivity benefits that accrued from investments made by firms in IT. The productivity improvements due to IT are now identified, but for a long time there was speculation regarding whether the issue was productivity or the ability to measure particular influences on productivity. (This issue is also taken up in Chapter 1.)

14

An inventory of risk issues for research program management appears in Chapter 4 of NRC, 2002, Information Technology Research, Innovation, and E-Government, Washington, DC: National Academy Press. Available online at http://www.nap.edu/catalog.php?record_id=10355. Last accessed August 20, 2010.


through a small set of larger and more integrated projects.15 Also in the development of program plans, choices must be made regarding the degree to which an agency program focuses on a particular solution strategy—rather than posing a problem and soliciting a diversity of potential solution approaches, many of which may not have been anticipated when the problem was posed and the program formulated. Particularly in software research, where the development of new metaphors and models is essential to progress, this latter approach can be very valuable.16

The descriptions, rather, serve primarily as a summary of points made in earlier chapters relating to technology advances that would “make a difference” in software producibility (and that meet the criteria). The committee offers them as recommended focal points for renewed investment in defense software producibility.

Area 1.
Architecture Modeling and Architectural Analysis

As noted throughout this report, improvements in the ability of the DoD to manage system design, evaluation, development, and evolution at the architectural level are a key to improved software producibility. For precedented systems, such advances would mean having and using documented, validated architectures and making good ecosystem choices. Improvements here can increase the value and flexibility of libraries and frameworks, and can facilitate their use through modeling and validation, for example. For innovative systems (this report’s principal focus), good architecture choices are often the keys to successful development and are significant to the scaling up and interlinking of systems, process management, enabling incremental practices, assurance, and reduction of diverse kinds of engineering risks related to design, interoperation, and supply-chain choices. Because the DoD benefits greatly from interlinked systems (net-centric, ultra-scale, systems of systems), advances in architecture-related capabilities make a greater difference both in potential to achieve systems capability and in ability to effectively manage architecture-related engineering risks. Yet, despite major advances in knowledge of software and system architecture, the state of knowledge and certainly the state of technology and practice today are inadequate to support DoD needs in this area, even for precedented systems. DoD success in software-intensive systems producibility depends on future research results in this area, and the transitioning of such results into useful notations, technologies, practices, and rules. The committee identifies three principal goals for architecture research.

Goal 1.1:
Facilitate Mission-Oriented Modular Architectures

A good example of mission-oriented modular architecture is the decoupling of sensors, battle command, and weapons release. These functions, co-located in a tank, battleship, or fighter aircraft for example, not only can be separated geographically, but also can be shared across multiple battlefield functions. This has a near-irresistible value, analogous to Metcalfe’s Law for network-structured systems.17 This is part of the compelling rationale for goals associated with the Army Future Combat Systems (FCS) and Theater Ballistic Missile Defense (BMD) models with net-centric approaches, intelligence linking, and the like. In this model, a shooter can be guided by a multitude of geographically dispersed sensors, and unmanned sensors and shooters can be positioned at dangerous locations

15

DARPA, for example, has used both approaches to advantage over the years.

16

This is “solution risk” as described in Chapter 4 of NRC, 2002, Information Technology Research, Innovation, and E-Government, Washington, DC: National Academy Press. Available online at http://www.nap.edu/catalog.php?record_id=10355. Last accessed August 20, 2010.

17

Metcalfe’s Law asserts that the aggregate value of a network to its members grows with the square of the number n of members—proportional to the number of edges in a complete graph of size n. This is a folkloric explanation of why the pressure to combine networks (as in the original internet, but also for instant message interoperation, convergence of fax standards, etc.) is so difficult for operators to resist, even when it creates business risks through loss of lock-in.


through the use of autonomous and teleoperated vehicles. The model thus affords tremendous power and agility to theater commanders.
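One minimal way to picture this decoupling, purely as an illustration (the bus class, topic names, and message fields below are invented for this sketch, not drawn from any DoD system), is a publish-subscribe bus in which sensors, battle command, and shooters interact only through messages:

```python
from collections import defaultdict

class Bus:
    """A minimal publish-subscribe message bus: producers and consumers
    know topic names, never each other."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in list(self._subs[topic]):
            handler(message)

bus = Bus()
engagements = []
# Battle command fuses sensor tracks into tasking; a shooter records tasking.
bus.subscribe("track", lambda track: bus.publish("tasking", {"target": track["id"]}))
bus.subscribe("tasking", engagements.append)
# A sensor publishes a track without knowing who will consume it.
bus.publish("track", {"id": "T-17", "pos": (31.2, 65.9)})
print(engagements)  # → [{'target': 'T-17'}]
```

Because no node holds a reference to another, sensors and shooters can be added, relocated, or replaced without changing the rest, which is precisely the modularity property the architecture must guarantee at scale.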

With respect to this research issue: Because the architecture is fundamentally driven by interoperability and integration requirements, effective management of architecture can be a great enabler for joint (multiple military services, including air, land, sea, space, and cyber) and combined (international and coalition) warfare. But from the standpoint of systems engineering, the power comes at a significant price: the high level of complexity and engineering risk that comes from the extent of coupling and operational flexibility required among the multitude of sensors, weapons, and battle command centers. For example, how can architectures be developed and validated to support the kind of local autonomy necessary for a vehicle to navigate effectively over mixed terrain? How can “unanticipated requirements” be anticipated, such as command and control for rapidly assembled coalitions responding, for example, to a natural disaster? How can software and systems architectures be evolved, for example, as algorithms and machine-learning capabilities improve? Moreover, by specifying interfaces where testing or measurement is possible, by defining reusable components, and by separating critical from noncritical parts of the system, architecture plays an essential role in assurance. What happens when a vehicle or platform is compromised? How is resiliency built into the architecture to avoid a deliberately stimulated cascading failure?

Architecture is more than a “top down” laying out of system structure or theoretical contemplation of design possibilities. The skill of a software architect in trading off diverse considerations to fix on essential design commitments is described by the Roman architect Vitruvius (ca. 15 BC):

The architect should be equipped with knowledge of many branches of study and varied kinds of learning, for it is by his judgment that all work done by the other arts is put to the test. This service of his is the child of theory and practice. Practice is the continuous and regular exercise of employment where manual work is done with any necessary material according to the design of the drawing. Theory, on the other hand, is the ability to demonstrate and explain things wrought in accordance with technical skills and method. It follows, therefore, that architects who have aimed at acquiring manual skills without theory have not been able to reach a position of authority to correspond with their pains, while those who relied only on theories and learning were obviously hunting the shadow, not the substance. But those who have mastered both, like men equipped in full armor, soon acquire influence and attain their purpose.

There are several specific challenges associated with this goal:

  • Architectural decisions. Architectural decision making is driven by the combined consideration of multiple interacting factors. Some factors derive from stakeholder needs—these are functional scope and quality attributes such as degree of assurance needed, operational safety and security, design evolvability, online adaptability, performance, cost, etc. Other factors are internal factors, reflecting the interdependency of the various dimensions of architectural decision making. For today’s major applications, for example, a diversity of architectural styles is induced by sets of interrelated decisions concerning the combination of frameworks, platforms, and middleware to be adopted. Advancing architecture into a more scientific activity requires improvement in our understanding of architectures as sets of critical and dynamic (and internal and external) parameter values subject to complex constraints and dependences.

  • Architecture scalability and evolvability. Current architecture capabilities do not scale up to representing and evolving architecture models across multiple systems, multiple subcontractor levels, and multiple increments. They do not do well at such needed functions as change impact analysis or multiversion change propagation for large-scale systems or systems of systems.

  • Architectural measures. In return for investments in architecture, one expects to gain predictable, quantifiable advantages in both system development and operation. This is particularly significant, as cost statistics show that architecture decisions account for greater proportions of overall cost as systems scale up in size and complexity. As architecture scales up, modularity becomes an increasingly crucial issue, for example. Decisions in this area are among the most consequential yet least well understood in


any major project. There are well-understood consequences for system trustworthiness (e.g., through the isolation of critical elements), producibility, flexibility, and adaptability. Less modularity makes assurance more elusive, and it makes changes more costly and risky. The research challenge is to develop new techniques for architectural modeling and analysis that focus on various measures of modularity and interlinking among system elements.

  • Architectural modeling and evaluation. How can architectural models be expressed to support such diverse architecture-level analyses prior to the full development of code? What kinds of analytics can be developed, including simulation, static analyses of various kinds, model checking, and other analyses. What kind of traceability support can be created to connect architectural representations to representations of requirements and other stakeholder concerns, on one hand, and to the more detailed concerns of system design, construction, and governance of development and change, on the other?

  • Architecture compliance. How can tools be developed (for increasingly complex architecture models and styles) to assist designers, developers, and requirements engineers in assessing, on an ongoing basis, the consistency of their models with architectural models? This is complicated by issues related to framework design, concurrency, and other issues. For example, a framework or application programming interface (API) may expect to receive an object not only of a particular type (e.g., “file handle”) but also in a particular state (“open”). This is not well addressed in current programming languages or architectural models.
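The file-handle case can be made concrete with a small sketch (the class and its error message are hypothetical). Here the state contract is enforced only at runtime; the research challenge noted above is to express and check such contracts statically, in architectural models and programming languages:

```python
class FileHandle:
    """A toy handle whose read() contract depends on object *state*, not
    just type: reading is legal only while the handle is open."""
    def __init__(self):
        self.state = "closed"

    def open(self):
        self.state = "open"
        return self

    def read(self):
        # Runtime enforcement of the "must be open" protocol; a
        # typestate-aware checker would flag violations before execution.
        if self.state != "open":
            raise RuntimeError("protocol violation: read() on closed handle")
        return b"data"

    def close(self):
        self.state = "closed"

handle = FileHandle()
assert handle.open().read() == b"data"
handle.close()  # a subsequent read() would now raise RuntimeError
```

A compliance tool would need to verify, across a whole architecture, that every client of such an API respects this open-before-read sequencing, which is exactly what current languages and architectural models do not express.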

Goal 1.2:
Facilitate Architecture-Aware Systems Management

Management of architecture aligns with management of sourcing of components and infrastructure, with system development and evolution, and also with definition of mission processes (or business processes). Such alignment, or congruence (which refers specifically to the relationship of architecture structure with organization structure),18 is essential to managing the coordinated scaling up and evolution of systems, organizations, and the mission processes supported. It is the IT-business convergence that is a consideration for many corporate chief information officer (CIO) organizations and that is also a key to success for many IT-enabled firms.19

Challenges associated with this goal are as follows:

  • Models of congruence. As architecture models are enriched, techniques for modeling and managing congruence become more complex and technically involved.

  • Enriched software supply chains. Supply-chain structure is only increasing in richness and complexity, and it is further complicated by the greater extent of intertwining of iterative processes across producer/consumer boundaries. What architecture-level interventions could facilitate assessment, across a supply chain, of consistency of an evolving system with its defined architectural intent?

  • Ecosystems and infrastructure. The DoD is unavoidably a participant in diverse commercial ecosystems. What architectural practices can assist in lessening the engineering risks associated with this involvement? For example, how can notions of technical software and system architecture be extended, adapted, or improved to enable better design and performance of the socio-technical ecosystems that surround, develop, and use complex systems?

  • Incompatible hardware and software architectural relationships. As discussed in Chapter 2, many systems architectures are organized into functional-hierarchy hardware relationships (also reinforced by

18

Marcelo Cataldo, James D. Herbsleb, and Kathleen M. Carley, 2008, “Socio-Technical Congruence: A Framework for Assessing the Impact of Technical and Work Dependencies on Software Development Productivity,” in Proceedings of the Second ACM-IEEE Symposium on Empirical Software Engineering and Measurement, ACM, Kaiserslautern, Germany, October 9-10, pp. 2-11.

19

For examples at Amazon and Boeing, see NRC, 2007, Summary of a Workshop on Software-Intensive Systems and Uncertainty at Scale, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=11936. Last accessed August 20, 2010.


the current revision of MIL-STD-881 on Work Breakdown Structures) that are incompatible with layered service-oriented software architectures. Research is needed on how better to reconcile these.

Goal 1.3:
Facilitate Architecture-Driven Development

The core practice of architecture depends on our ability to take conceptual structures and manifest them concretely as architectural designs before systems are actually constructed. This is the essential feedback loop to reduce the most fundamental of engineering risks in innovative software engineering. As noted in Chapter 1, there is no physical limit regarding what can be accomplished at the architecture level to facilitate component-based development—in a way that addresses concerns over modularity, assurance, measurement, and other considerations.

Challenges associated with this goal are as follows:

  • Architecture designs for particular domains. It is sometimes asserted that there are relatively few fundamental “phyla” of software, such as web services stacks, control systems of various kinds, distributed data-intensive systems, graphical user-interaction systems, etc. Within each of these phyla are various established ecosystems and also more advanced custom designs. The DoD can derive great benefits when it leads the advancement of ecosystem development for areas critical to its mission—it can directly assure attention to issues related to defense needs, rather than having to find ways to work around deficiencies in ecosystems established by others.

  • Emerging architectural concepts. Software architecture capability continues to be enriched beyond the old model of static structural connections. Recent developments include frameworks and plug-ins, dynamic and adaptive models, service-oriented models, application frameworks, cloud and utility computing, virtualization, data-intensive models, and others. There continue to be emerging concepts that can be of benefit to complex DoD quality attribute requirements.

Goal 1.4:
Facilitate Architecture Recovery

Many DoD systems do not have the benefit (and risk) of developing completely new architectures, but must find ways to provide continuity of service from legacy systems whose software is not well structured or documented (a different kind of risk). Some initial approaches for recovering service-oriented architectures for such legacy systems are emerging.20 Further research and experience on such approaches would strengthen software producibility for the increasing number of DoD brownfield software development situations.
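A crude illustration of one step in such recovery, reconstructing a module dependency graph from legacy source text, follows. Real recovery tools work from parsed code and binaries; the scanner and module names here are invented.

```python
# Crude architecture-recovery step: rebuild a module dependency graph
# by scanning import lines in legacy sources. Production tools analyze
# ASTs and binaries; this sketch only illustrates the shape of the output.
import re
from typing import Dict, Set

def recover_dependencies(sources: Dict[str, str]) -> Dict[str, Set[str]]:
    graph: Dict[str, Set[str]] = {name: set() for name in sources}
    for name, text in sources.items():
        for line in text.splitlines():
            m = re.match(r"\s*import\s+(\w+)", line)
            if m and m.group(1) in sources:
                graph[name].add(m.group(1))
    return graph

# Invented legacy "sources" keyed by module name.
legacy = {
    "fire_control": "import tracker\nimport comms\n",
    "tracker": "import comms\n",
    "comms": "",
}
print(recover_dependencies(legacy))
```

Even this toy output (who depends on whom) is the raw material for identifying candidate service boundaries in a brownfield system.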

Area 2.
Assurance: Validation, Verification, Analysis of Design and Code

Chapter 4 elaborates the significance, role, and practice of software assurance. It also identifies a number of capabilities that, if better applied and/or augmented, could greatly enhance the ability of the DoD to develop systems that are both highly capable and highly assured—and to do so with acceptable costs and programmatic risk. As noted in Chapter 1, the broadening role of systems and the consequent increase in hazards associated with very large systems combine to enhance the significance of assurance, while the challenge of assurance is increased due to the complexity of modern architectures and supply chains. On the other hand, the capacity to achieve assurance is enhanced by the recent important progress in modern programming languages, tools, modeling, and analysis capability.

20 Two examples are the IBM VITA approach (Hopkins and Jenkins, 2008, Eating the IT Elephant: Moving from Greenfield Development to Brownfield, Upper Saddle River, NJ: IBM Press) and the CMU-SEI SMART approach (Edwin J. Morris, Dennis B. Smith, and Soumya Simanta, 2008, SMART: Analyzing the Reuse Potential of Legacy Components in a Service-Oriented Architecture Environment, CMU/SEI-TR-2008-TN-008, Pittsburgh, PA: Carnegie Mellon University).


For new assurance technologies and practices, critical acceptance criteria must include scalability (which for many attributes, such as security and safety, usually also means composability) and usability by developers with minimal training. This greatly facilitates preventive use on a routine basis and thus enhances the ability of the DoD to structure incentives back into the supply chain for developers to create evidence along with code products. As in the case of security, many interventions in technology and practice that relate to assurance are not in the form of separate tools, but rather in the form of enhancements to tools and practices already in place for other purposes. For example, assurance considerations affect architecture modeling (e.g., to detect information paths that are not supposed to be present), requirements-related models, traceability and team information management tooling, programming language design, runtime infrastructure design, and many other areas.

Goal 2.1:
Effective Evaluation for Critical Quality Attributes

This includes a wide range of technologies related to modeling, reverse engineering (“program understanding”), analysis, testing, inspection support, verification, and model checking, as well as support for managing the associated collected information and proof structures. This goal is addressed not only through the development of new techniques, but, as noted, also through the enhancement of practices and tools related to a diverse set of software engineering activities.

In general, a mature software development shop will employ multiple techniques to support assurance and evaluation. This is based on the fact that there are many different quality attributes and kinds of defects.21 At a mature industry development organization, many different kinds of techniques and tools are used, including test frameworks, analyses with respect to different kinds of quality attributes, binary and source analysis, inspection support, metric tracking, and many others. This means that improvements in particular capabilities, when structured appropriately, can gracefully be inserted into practice.

Considerable further research is needed, however, to ensure scalability of such tradeoff analysis capabilities—that optimizing on one assurance aspect does not overly penalize other quality attributes. For example, optimizing on security has been seen to adversely affect performance (via system overhead), reliability (via single points of failure), adaptability (via recertification delays), or usability (via authentication constraints and delays), particularly for complex net-centric systems of systems.
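The tradeoff phenomenon described above can be made concrete with a toy model; the design alternatives, attribute scores, and weights below are invented for illustration.

```python
# Toy multi-attribute tradeoff: each design alternative is scored 0-1
# on several quality attributes, and a weighted sum picks the "winner."
# Raising the weight on security flips the choice, illustrating how
# optimizing one assurance aspect can penalize other attributes.
designs = {
    "hardened": {"security": 0.9, "performance": 0.4, "usability": 0.5},
    "fast":     {"security": 0.5, "performance": 0.9, "usability": 0.8},
}

def best_design(weights):
    def score(attrs):
        return sum(weights[a] * attrs[a] for a in weights)
    return max(designs, key=lambda d: score(designs[d]))

print(best_design({"security": 1, "performance": 1, "usability": 1}))  # fast
print(best_design({"security": 5, "performance": 1, "usability": 1}))  # hardened
```

Real tradeoff analyses involve far richer models, but the structural point survives scaling: the "best" architecture is a function of the attribute weights, so those weights must be made explicit and negotiable.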

Goal 2.2:
Assurance for Components in Large Heterogeneous Systems

The goal of composable assurance for larger-scale systems is broad and complex. On the one hand, a small number of composable analyses are already in use (typing being a principal example). On the other hand, composable analyses have not yet emerged for critical security, performance, and other attributes. The pathway to such capability can include model design, theoretical and semantic research, programming language improvement (as is routinely done with major languages such as Java, Fortran, C++, C, and others), tool development, and so on. A research program that focuses on the goal would thus benefit by encompassing research approaches that primarily address a quality objective and feasibility criterion for potential adoption, but are not overly constrained with respect to specific manifestation in the process.
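The contrast between composable and non-composable analyses can be sketched as follows: each component carries an interface summary produced by a one-time analysis, and composition checks only the summaries, never the internals. The summary format and capability names below are invented.

```python
# Sketch of a composable analysis. Each component is analyzed once and
# carries an interface summary; composing two components checks only
# the summaries. Typing works this way; the text's point is that
# security and performance attributes mostly do not yet.
from dataclasses import dataclass

@dataclass(frozen=True)
class Summary:
    provides: frozenset   # capabilities this component exports
    requires: frozenset   # capabilities it assumes from below

def composable(base: Summary, top: Summary) -> bool:
    """top may be stacked on base iff base provides all top requires."""
    return top.requires <= base.provides

runtime = Summary(provides=frozenset({"alloc", "net"}), requires=frozenset())
app = Summary(provides=frozenset({"ui"}), requires=frozenset({"alloc", "net"}))
bad = Summary(provides=frozenset(), requires=frozenset({"gpu"}))

print(composable(runtime, app))  # True
print(composable(runtime, bad))  # False
```

The engineering payoff is that re-verification after a component swap costs one summary check, not a whole-system re-analysis.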

One of the challenges is to improve assurance for data containment in component-oriented systems. This derives from the observation (in Chapter 4) that in many large and heterogeneous software systems containing diversely sourced components (with corresponding diversity in levels of trust), the attack surface is “at the API.” A particular concern is assuring that flows of data are as intended in the architectural models. This is a deeply technical challenge, as are many of the other challenges related to assurance.

21 See comments in Chapter 4 regarding Mitre’s Common Weakness Enumeration (CWE) inventory for security and code-safety attributes. There are also diverse attributes related to adaptability and flexibility, for example, modularity measures, coupling, pattern compliance, interface attributes, etc.

Goal 2.3:
Enhance the Portfolio of Preventive Methods to Achieve Assurance

In addition to the primarily evaluative techniques, interventions in development activities can greatly enhance the potential for accreting, on an ongoing basis in development, a body of evidence in support of assurance cases. For example, assurance considerations affect architecture modeling, requirements-related models, traceability and team information management tooling, programming language design, the design of runtime infrastructure, and many other areas.

If research in this area is successful, the difference it will make will be evident in two ways. First, less work will produce higher assurance, in the form of stronger claims with respect to critical quality attributes, and, second, the balance of effort in evidence production will shift from acceptance evaluation toward development, thus reducing engineering risk with respect to assurance. A wide range of technical ideas have emerged over the years in support of this concept, and this has also influenced language design. A crude way to think about this is that an existing language, together with additional specification information (e.g., types) and analysis capability (e.g., a type checker), can lead naturally to the next generation of programming language, in which the specification information becomes intrinsic and the analysis capability is integrated with the compiler, loader, and runtime system.
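The trajectory described in this paragraph, in which specification information plus an external checker later hardens into an intrinsic language feature, can be miniaturized as follows; the checker and the annotated function are invented for illustration.

```python
# Miniature of "language + specification + analysis": the specification
# rides along as ordinary annotations, and a small external checker
# enforces it at call time. In a next-generation language the same
# check would be intrinsic to the compiler and runtime.
import inspect

def checked(fn):
    hints = {k: v for k, v in fn.__annotations__.items() if k != "return"}
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@checked
def set_heading(degrees: int) -> int:
    return degrees % 360

print(set_heading(370))          # 10
try:
    set_heading("north")
except TypeError as err:
    print("rejected:", err)
```

The annotations are the "additional specification information" and the decorator is the "analysis capability"; history suggests that when such pairs prove their worth, the next language generation absorbs them.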

Additional specific challenges include the following:

  • Preventive methods also include ideas building on the concept of “proof-carrying code” or more generally “evidence-carrying code.”22

  • A significant enabler of preventive techniques in development activity is the adoption of processes and practices that enhance assurance. Examples include the Lipner/Howard Security Development Lifecycle and Gary McGraw’s process.23 These processes can continue to be enhanced and refined as new practices, tools, and languages emerge.

  • Architectural building blocks can be enhanced to facilitate instrumentation and logging in systems to support real-time, near-real-time, and forensic checking of consistency with models. It is important to note that not all significant attributes can be checked in this way, although sometimes modifications to architecture can expand the scope of what can be checked dynamically.

  • Develop architectures for containment such as sandboxing, process separation, virtual machines, and abstract machines. There is great opportunity to rethink basic concepts in systems software support, with a focus on achieving the simplifications that can lead to greater assurances regarding regulation of control and data flows among major components. The success of restricted ecosystems such as those evident on iPhones and other restricted platforms suggests the possibility of progress in this area.

  • Employ development techniques including co-development of software, selective specifications (for functional and quality attributes), and evidence of verification (consistency) of the software code with the specifications and associated models. Different techniques apply to different properties—what may be workable for particular quality attributes may not be useful for functional, performance, or deadline properties. Most of these techniques rely on some use of explicit specifications. A goal is to reduce the extent of specification required, ultimately to fragmentary specifications that enable designers and developers to distinguish what is an intended truth from what may be an accidental truth. The intended truth may be a design commitment that can be relied upon. The accidental truth may be a consequence of a particular algorithm or infrastructure choice that needs to be subject to revision as technology evolves. This co-development approach is intended to facilitate incremental and iterative development practices because it simultaneously creates software and assurance-related evidence.

22 George C. Necula and Peter Lee, 1998, “Safe, Untrusted Agents Using Proof-Carrying Code,” Lecture Notes in Computer Science—Mobile Agents and Security, London, UK: Springer-Verlag, pp. 61-91.

23 See Michael Howard and Steve Lipner, 2006, The Security Development Lifecycle, Redmond, WA: Microsoft Press; also Gary McGraw, Software Security: Building Security In, Boston: Addison-Wesley.

  • The reality of enriched and diversified supply chains for software systems suggests that pervasive acceptance of preventive methods may not be fully achievable. For this reason, it is important to also address the challenge of improving a posteriori methods, including not only evaluative techniques, but also other approaches based on obfuscation and dynamic techniques.

  • Develop and use programming languages that enhance assurance. The experience of software developers is that language shifts occur at unpredictable times and for unpredictable reasons. Nonetheless, these shifts are generally extensively influenced by research. For example, Ada95, Java, and C# were all influenced by the same set of ideas regarding types, safe storage management, concurrency, name space management, access management, and many other language elements. The emerging generation of domain-specific languages and dynamic languages is now well established, providing developers with greater flexibility in development practice but also less safety than the established languages. Research work could be accelerated to augment these languages with features that preserve the usability and flexibility while enhancing the potential for assurance.
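The “evidence-carrying” idea in the first bullet above can be sketched minimally: an artifact ships with claims bound to its exact bytes by a hash, so a consumer can cheaply check the binding even though producing the evidence was expensive. The manifest format and claims below are invented.

```python
# Sketch of evidence-carrying code: a manifest binds assurance claims
# to the exact artifact bytes via a hash. The consumer re-hashes and
# accepts the claims only if the binding holds -- checking is cheap
# even when producing the evidence was costly.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artifact = b"def clamp(x): return max(0, min(100, x))"
manifest = {
    "artifact_sha256": sha256(artifact),
    "claims": ["no-unbounded-loops", "output-in-[0,100]"],
}

def accept(data: bytes, manifest: dict) -> bool:
    """Trust the claims only if they are bound to these exact bytes."""
    return sha256(data) == manifest["artifact_sha256"]

print(accept(artifact, manifest))                  # True
print(accept(artifact + b" # patched", manifest))  # False
```

Proof-carrying code goes further by letting the consumer mechanically check the evidence itself; this sketch shows only the cheaper first step, tamper-evident binding of claims to code.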

Area 3.
Process Support and Economic Models for Assurance and Adaptability

Chapters 2 and 4 both address issues related to process and assurance and suggest the following as research goals.

Goal 3.1:
Enhance Process Support for Both Agile and Assured Software Development

This includes both product and process architectures based on identifying the parts of the product and process most needing agility or assurance, and organizing the architectures around them. For products, one way to do this is by encapsulating the major sources of change into modules to be handled by agile methods.24 Examples of such sources of change are user interfaces, interfaces to independently evolving systems, or device drivers. For projects, one way to do this is to partition evolutionary development around stabilized high-assurance development increments, while a parallel team is handling the change traffic and developing the specifications and plans for the next increment.

It also includes further improvements in information management for teams and larger development organizations. Areas of focus could beneficially include improved traceability particularly for formal and “semi-formal” information items, integration of models and analyses and simulation, and measurement support to facilitate iteration and evaluation (e.g., to dynamically identify and adapt to new sources of rapid change).
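The Parnas-style encapsulation of change sources described above can be sketched as follows; the interface and implementations are invented stand-ins for a volatile module such as a device driver.

```python
# Parnas-style encapsulation of an anticipated source of change: the
# volatile part (a device protocol) hides behind a stable interface,
# so churn from the agile side stays inside one module. Names are
# illustrative, not from the report.
from abc import ABC, abstractmethod

class Radio(ABC):
    """Stable interface the rest of the system is written against."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class LegacyRadio(Radio):
    def send(self, message: str) -> str:
        return f"AM:{message}"

class SdrRadio(Radio):  # swapped in later without touching callers
    def send(self, message: str) -> str:
        return f"SDR[{message}]"

def report_position(radio: Radio) -> str:
    # High-assurance caller code: unchanged when the radio module churns.
    return radio.send("grid 12S")

print(report_position(LegacyRadio()))  # AM:grid 12S
print(report_position(SdrRadio()))     # SDR[grid 12S]
```

The high-assurance portion of the system depends only on the stable interface, while agile methods can iterate freely on the implementations behind it.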

Goal 3.2:
Address Supply-Chain Challenges and Opportunities

As supply chains are enriched and diversified, there is an increasing potential benefit from tools that can manage a joint corpus of information whose content and sharing are regulated according to a contractual relationship. Enhancements of this kind can better support evidence production by producers to accelerate client acceptance evaluation. The enhancements can also better support intertwined iterations. Such tools need to be reinforced by contractual provisions enabling visibility and measurability of development and risk management plans and progress vs. plans, both along a supply chain and up and down the subcontractor chains.

24 David Parnas, 1978, “Designing Software for Ease of Extension and Contraction,” Proceedings of the 3rd International Conference on Software Engineering, IEEE, Atlanta, GA, May 10-12, pp. 264-277.

Goal 3.3:
Facilitate Application of Economic Principles to Decision Making

An additional area of potential significance is the development of applicable economic models to guide decision making, for example, related to the interplay of architecture choices, component and ecosystems choices, supply-chain choices, and attributes of cost, risk, schedule, and quality. As discussed in Chapter 2, prioritizing features to enable time-certain development provides a strong proxy for economic value-based management and decision making.
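One very simple instance of such an economic model, prioritizing features by value per unit of effort under a fixed schedule budget, is sketched below; all figures are invented.

```python
# Toy value-based prioritization for time-certain development: rank
# features by value per unit of estimated effort and admit them until
# the fixed schedule budget is exhausted. All figures are invented.
features = [
    ("track-fusion", 8, 4),   # (name, value, effort)
    ("new-skin",     2, 3),
    ("alerting",     6, 2),
    ("report-gen",   3, 3),
]

def plan(features, budget):
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    chosen, spent = [], 0
    for name, value, effort in ranked:
        if spent + effort <= budget:
            chosen.append(name)
            spent += effort
    return chosen

print(plan(features, budget=6))  # ['alerting', 'track-fusion']
```

The greedy value-density rule is only a proxy (real models must account for feature dependencies and risk), but it shows how an explicit economic criterion turns scope negotiation into a computation that stakeholders can inspect.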

Goal 3.4:
Develop and Apply Policy Guidance and Infrastructure for Conducting Evidence-Based DoD Milestone Reviews

As also discussed in Chapter 2, this task includes establishing the evidence of solution feasibility as a first-class deliverable, reviewing evidence-development plans, and tracking evidence development progress vs. plans via earned value management. It also requires research into which classes of process-focused evidence development (models, simulations, prototypes, benchmarks, exercises, instrumentation, etc.) are best suited for which classes of system elements.
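Tracking evidence development via earned value management can be sketched in a few lines; the evidence items, budgeted values, and costs below are invented.

```python
# Earned value applied to evidence development, as the goal suggests:
# each planned evidence item carries a budgeted value; "earned" counts
# only completed items. SPI < 1 flags evidence work behind schedule,
# CPI < 1 flags it over cost. All numbers are invented.
planned = {"prototype": 40, "simulation": 30, "benchmark": 30}
completed = {"prototype"}                 # items finished so far
scheduled = {"prototype", "simulation"}   # items due by this review
actual_cost = 55                          # spent so far

ev = sum(planned[i] for i in completed)   # earned value
pv = sum(planned[i] for i in scheduled)   # planned value
spi = ev / pv                             # schedule performance index
cpi = ev / actual_cost                    # cost performance index
print(f"EV={ev} PV={pv} SPI={spi:.2f} CPI={cpi:.2f}")
```

Treating feasibility evidence as a first-class deliverable means indices like these can surface assurance shortfalls at milestone reviews, not at acceptance.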

Goal 3.5:
Enhance Process Support for Integrated Definition and Development of System Hardware, Software, and Human Factors Requirements and Architectural Solutions

Too often, system architectures are driven by hardware relationships that overly constrain software and human factors solutions. Examples of approaches are “soft systems engineering,” systems architecting, co-evolution, incremental iterative development (IID) models based on spiral development, and Brooks’s design processes and patterns.25

Area 4.
Requirements

The challenges for requirements are, in many respects, similar to those of architecture. How to achieve early validation? How to express the information that is gathered from stakeholders concerning both functional requirements and quality attributes? How to achieve traceability and model consistency that effectively links requirements with architecture and assurance?

As noted in the previous chapters, requirements are only occasionally fully established at the outset of the development of an innovative software system. More often, there are early constraints on quality attributes, definitions of the overall scope of function and interlinking, and a few other “shall” or “must-have” constraints. Many of the other elements that eventually become manifest as features or quality attributes are in fact the result of early iterations with stakeholders, and many of these are informed by the improved understanding of both the technological and operational environments as they evolve. In other words, requirements engineering is an ongoing activity throughout development. For long-lived systems, as noted in the 2006 Software Engineering Institute (SEI) report Ultra-Large-Scale Systems, requirements engineering is ongoing throughout the lifetime of the system.

25 Soft systems engineering (see Peter Checkland, 1981, Systems Thinking, Systems Practice, Hoboken, NJ: Wiley); systems architecting (see Eberhardt Rechtin, 1991, Systems Architecting: Creating & Building Complex Systems, Englewood Cliffs, NJ: Prentice Hall), co-evolution (see Mary Lou Maher, Josiah Poon, and Sylvie Boulanger, 1996, “Formalizing Design Exploration as Co-evolution: A Combined Gene Approach,” pp. 3-30 in Advances in Formal Design Methods for CAD, John S. Gero and Fay Sudweeks, eds., London, UK: Chapman and Hall), the incremental commitment model upgrade of spiral development (NRC, Richard W. Pew and Anne S. Mavor, eds., 2007, Human-System Integration in the System Development Process: A New Look, Washington, DC: National Academies Press, available online at http://books.nap.edu/catalog.php?record_id=11893), and Brooks’s design processes and patterns (see Fred Brooks, 2010, The Design of Design: Essays from a Computer Scientist, New York, NY: Addison-Wesley).

Goal 4.1:
Expressive Models and Supporting Tools

A feature of modern requirements methodology is the capture of scenarios and use cases and the expression of these using effective but mostly informal notations. For agile or feature-driven developments, attention is paid to the granularity of featuring, so to speak, because that becomes both the basis of priority setting (“above the line”) and the metric of progress (“velocity” or “burn down”). For innovative large systems, there is more focus on capturing the model in a sufficiently precise form to support progress measurement and acceptance evaluation. Regardless of the approach, however, there are common core technical challenges, which are to improve our ability to express requirements-related models (in the sense of Unified Modeling Language (UML) scenarios and use cases), to reason about those models (in the sense of the Massachusetts Institute of Technology’s Alloy), and to facilitate traceability with respect to architecture and implementation (correspondence measures). Requirements engineering is fundamentally about the transition from informal human expression to explicit structured representations. Any incremental improvement in formality that does not compromise overall expressiveness or efficiency of operation has the potential to make a big difference with respect to these goals.

Related to this goal is the development of improved domain-specific models and methods that pertain to critical defense domains such as control systems, command and control, large-scale information management, and many others.
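A minimal form of the traceability called for here links requirements to architecture elements and on to tests, and queries for requirements with no downstream coverage; all identifiers below are invented.

```python
# Minimal traceability check: explicit links from requirements to
# architecture elements and from elements to tests, plus a query for
# requirements with no downstream verification. Identifiers invented.
req_to_arch = {
    "REQ-1": ["comms-layer"],
    "REQ-2": ["fusion-engine"],
    "REQ-3": [],                 # no architectural home yet
}
arch_to_test = {
    "comms-layer": ["TEST-7"],
    "fusion-engine": [],         # element exists but is unverified
}

def untraced_requirements():
    gaps = []
    for req, elements in req_to_arch.items():
        tests = [t for e in elements for t in arch_to_test.get(e, [])]
        if not tests:
            gaps.append(req)
    return sorted(gaps)

print(untraced_requirements())  # ['REQ-2', 'REQ-3']
```

Even this simple query distinguishes two failure modes (a requirement with no architectural home versus an element with no verification), which is the kind of gap analysis traceability tooling must make routine.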

Goal 4.2:
Support Traceability and Early Validation

Traceability is more readily achieved when the feature-driven model is adopted, but this is not always readily applicable to defense systems. Research on requirements expression will result in improvements to models, tooling, and early validation practices (e.g., prototyping and simulation). As part of this effort, it is essential to also address traceability issues, because these have a profound influence on assurance and validation generally.

Goal 4.3:
Process Support for Stakeholder Engagement and Model Development

Stakeholders in large projects may come from diverse perspectives and may have diverse interests. Requirements definition can often appear to be a negotiation among stakeholders regarding the significance of various functional features and quality attributes. This creates the challenge of avoiding both over-commitment (e.g., through negotiation) to particular characteristics and under-commitment. What modeling mechanisms, processes, and tools can be developed to assist stakeholders in identifying goals and models, and in managing not just what is committed to, but also how much commitment is made? This is particularly critical in incremental and iterative development projects.

Area 5.
Language, Modeling, Coding, and Tools

As noted in the previous chapters, programming languages and associated capabilities have a considerable influence on the major factors identified in this report—architecture, assurance, process, and measurement. For example, programming language improvements have influence on the ability of architects to achieve goals related to system structure and modularity. More generally, programming languages are the medium by which human developers convey their intent both to the computer and to other developers. As such, a programming language both constrains what a developer can say and at the same time encourages particular styles of expression. Modularity is much more than a matter of control and data flow. There are abstractions related to objects, types, and


actions that are increasingly supported in modern languages, and these not only enable developers to express domain concepts more directly in program text26 but also enable architects to render their abstract models for system modular structures more directly into the explicit structure of programs.

Goal 5.1:
Enhance the Expressiveness of Programming Languages to Address Current and Emerging Challenges

Despite many declarations of success, the evolution of programming languages continues, and it is driven by strong demand for improvements from developers seeking greater levels of expressiveness (e.g., through direct expression of concepts such as higher-order functions,27 deterministic parallelism, atomicity, data permissions, and so on), improved ability to support particular domains (either through direct expression as intrinsics or through the ability to provision appropriate libraries and frameworks), improved flexibility for developers (e.g., dynamic languages that give up some static checking in exchange for a more rapid iterative development model, but with more risk of unwanted runtime errors), improved a priori assurance (e.g., through the simultaneous development of code, specifications, and associated proofs for assurance), and improved access to scalable performance (e.g., through intrinsics such as Generate/MapReduce that support data-intensive computing in microprocessor clusters).

Goal 5.2:
Enhance Ability to Exploit Modern Concurrency, Including Shared Memory Multicore and Scalable Distributed Memory

For the past 30 years, as a consequence of the steady improvements in processor design, software developers have been given a doubling in performance every year and a half, adding up to a million-fold improvement in three decades. Over that period, the same code ran faster on the new chips. In the past few years, processor clock speeds have topped out; there are now multiple processors on a chip, and chip designers continue to provide the expected performance improvement, but only in a potential way and accessible only to those software developers who can harness the power of multiple processors. Suddenly, everything has to be “done by committee”—by multiple threads of control coordinating together to get the work done. It is said that Moore’s Law has given way to Amdahl’s Law. To make matters more difficult, the ability of multiple threads to access shared state in memory does not scale up—it must eventually be supplanted by distributed models, with information shared using message passing. This hybrid approach, combined with a distributed approach to scalable storage, is the reality of many modern high-performance data centers.28

26 In the early days of Fortran, for example, the only data types in the language were numbers and arrays of various dimensionalities. Any program that manipulated textual data, for example, needed to encode the text characters, textual strings, and any overarching paragraph and document structure very explicitly into numbers and arrays. A person reading program text would see only numerical and array operations, because that was the limit of what could be expressed in the notation. This meant that programmers needed to keep track, in their heads or in documentation, of the nature of this representational encoding. It also meant that testers and evaluators needed to assess programs through this (hopefully) same layer of interpretation. With modern languages (including more modern Fortran versions), these structures can be much more directly expressed—characters and strings are intrinsic in nearly all modern languages. This is a simple illustrative example, but the point remains: There are concepts and structures in domains significant to defense that, in current languages, must still be addressed through similar representational machinations. This is a part of the “endless value spiral” argument of Chapter 1, and it explains why we should not expect any plateau in the evolution of programming languages, models, and other problem-relevant expressive notations. Indeed, it is why language names such as “Fortran” and “Ada” have the staying power of strong brands, even when the specific languages to which they refer are evolving quite rapidly (for example, Ada83 to Ada95 and thence to Ada 2005).

27 An example is Microsoft’s F#, which builds on two decades of work on advanced functional languages such as Standard ML and Haskell. Another example is Sun’s Fortress language, which builds on a combined heritage of functional programming, deterministic parallelism, and numerical computation.

Software developers, language designers, and tool developers are still struggling to figure out how to harness the concurrency in a way that works well for software development. What are the correct abstractions? What are suitable concepts for data structures? How can assurance be achieved when programs operate in non-deterministic fashion? This provisioning of modern computing power is a major challenge for language designers and tool designers.
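The message-passing style that the text identifies as the scalable alternative to shared state can be sketched as follows: workers share no mutable data and coordinate only through queues. The task (squaring integers) is a placeholder.

```python
# Message-passing sketch of the coordination problem described above:
# workers never touch shared mutable state; they communicate only
# through queues, the style that scales past shared memory. A None
# sentinel per worker signals shutdown.
import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results: "queue.Queue" = queue.Queue()

def worker() -> None:
    while True:
        n = tasks.get()
        if n is None:          # sentinel: no more work
            break
        results.put(n * n)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()

total = sum(results.get() for _ in range(10))
print(total)  # 285 = 0^2 + 1^2 + ... + 9^2
```

Note that the result is deterministic even though the scheduling is not, because no two workers ever write the same state; preserving that property at scale is exactly the abstraction problem the paragraph poses.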

Goal 5.3:
Enhance Developer Productivity for New Development and Evolution

As noted above, languages enhanced with models and tools often merge into new languages that incorporate the model concepts directly in the language design. But there is a growing suite of tool capabilities that are conceptually separate from language, and the delivery of these capabilities is a significant influence on developer and team productivity and on software producibility generally. Modern tools such as the open source Eclipse (created by IBM29) and Microsoft’s Visual Studio environment for “managed code” provide rich features to support application development generally. They also have tailored support for development within certain ecosystems, such as the Visual Studio support for web applications developed within the Microsoft ASP.NET framework. Individual developer tools are often linked with team capabilities, which include configuration management of code and related artifacts, defect and issue tracking and linking, build and test support, and management of team measures and processes. This linkage greatly empowers small teams and, increasingly, larger development organizations.

Area 6.
Cyber-Physical Systems

DoD systems are increasingly operating in large-scale, network-centric configurations that take input from many remote sensors and provide geographically dispersed operators with the ability to interact with the collected information and to control remote effectors. In circumstances where the presence of humans in the loop is too expensive or their responses are too slow, these so-called cyber-physical systems must respond autonomously and flexibly to both anticipated and unanticipated combinations of events during execution. Moreover, cyber-physical systems are increasingly being networked to form long-lived systems of systems—and even ultra-large-scale systems30—that must run unobtrusively and largely autonomously, shielding operators from unnecessary details (but keeping them apprised so they can react during emergencies), while simultaneously communicating and responding to mission-critical information at heretofore infeasible rates.

Cyber-physical systems are increasingly critical in defense applications of all kinds and at all levels of scale, including distributed resource management in shipboard defense systems, coordinating groups of unmanned air vehicles, and controlling low-power sensors in tactical urban environments. These are systems with very close linkage of hardware sensors and effectors with software control. They are often structured as control systems, but also can involve multiple complex interacting control systems, such as in deconflicting multiple call-for-fire requests in a crowded battlespace consisting of joint services and coalition partners.
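A skeletal example of the close sensor/effector coupling described above is a periodic proportional control step; the plant model and gain below are simulated stand-ins, not drawn from the report.

```python
# Skeleton of the periodic control structure these systems are built
# from: each cycle reads a measurement, computes an actuation, and
# applies it. The "plant" here is a trivial simulation; the gain and
# setpoint are invented for illustration.
def control_step(setpoint: float, measured: float, gain: float = 0.5) -> float:
    """Proportional controller: actuation proportional to error."""
    return gain * (setpoint - measured)

# Simulate a few periods: the measured value converges on the setpoint.
measured = 0.0
for _ in range(10):
    measured += control_step(setpoint=10.0, measured=measured)
print(round(measured, 3))  # 9.99
```

In a real cyber-physical system each step must also meet a hard deadline and tolerate sensor noise; the assurance challenge is connecting control-theoretic models like this one to the deployed code.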

One critical area of concern is the creation and validation of the cyber-physical stack: for example, how to evolve the development of distributed real-time and embedded systems from a cottage craft that does not generally yield scalable or readily assurable solutions to a more robust approach guided by model-integrated computing, domain-specific languages and analysis tools, and control-theoretic adaptation techniques. This is a significant challenge for language and platform design, ecosystem development, tool design, and practices.31

28. These issues are the focus of a forthcoming report from the National Research Council, The Future of Computing Performance: Game Over or Next Level?, Samuel Fuller and Lynette Millett, eds., Washington, DC: National Academies Press, forthcoming.

29. Siobhan O’Mahony, Fernando Cela Diaz, and Evan Mamas, 2005, “IBM and Eclipse (A),” Harvard Business School Case 906007, Cambridge, MA: Harvard University Press.

30. Software Engineering Institute, 2006, Ultra-Large-Scale Systems: The Software Challenge of the Future, Pittsburgh, PA: Carnegie Mellon University. Available online at http://www.sei.cmu.edu/library/assets/ULS_Book20062.pdf. Last accessed August 20, 2010.

Another challenge facing the DoD is how to routinize and automate more of the development of embedded cyber-physical control systems. There are particular challenges related to the scalability, predictability, and evolvability of these systems. Assurance is also a major issue, exacerbated by the inability to reliably connect models with code and with the various components in the current generation of embedded software stacks, which are typically optimized for time/space utilization and predictability rather than for ease of understanding, analysis, composition, scalability, and validation.

Yet another challenge facing DoD cyber-physical systems, particularly net-centric cyber-physical systems, is how to handle variability and control adaptively and robustly. Cyber-physical systems today often work well as long as they receive, in a timely fashion, all the resources for which they were designed, but they fail completely under the slightest anomaly. There is little flexibility in their behavior; most adaptation is instead pushed to end users or administrators. Instead of hard failure or indefinite waiting, net-centric cyber-physical systems require either automatic reconfiguration to reacquire needed resources or graceful degradation when those resources are unavailable.
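The reconfigure-or-degrade behavior described above can be illustrated with a deliberately minimal Python sketch. All names, link types, and data rates here are hypothetical and chosen only to make the pattern concrete; this is not any particular DoD system or middleware API.

```python
import time

def acquire(resource_pool, needed_kbps):
    """Return a link with enough spare capacity for the feed, or None."""
    for link, capacity_kbps in resource_pool.items():
        if capacity_kbps >= needed_kbps:
            return link
    return None

def deliver(resource_pool, needed_kbps=512, retries=3, backoff_s=0.0):
    """Reconfigure to reacquire resources; degrade gracefully if unavailable."""
    for _ in range(retries):
        link = acquire(resource_pool, needed_kbps)
        if link is not None:
            return ("full", link)              # nominal case: full-rate feed
        time.sleep(backoff_s)                  # back off, then retry acquisition
    # Graceful degradation: fall back to a quarter-rate feed instead of failing hard
    link = acquire(resource_pool, needed_kbps // 4)
    if link is not None:
        return ("degraded", link)
    return ("buffered", None)                  # last resort: queue data locally

# Example: primary link saturated, but a backup can carry a reduced-rate feed
pool = {"satcom": 100, "uhf": 200}
print(deliver(pool))                           # → ('degraded', 'uhf')
```

The key design point is that the fallback policy (retry, then degrade, then buffer) lives in the infrastructure rather than being pushed to end users or administrators.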

Goal 6.1:
Accelerate Ecosystem Development for Cyber-Physical Systems

Today, it is too often the case that substantial effort expended to develop cyber-physical systems focuses on either (1) building ad hoc solutions based on tedious and error-prone low-level platforms and tools or (2) cobbling together functionality missing in off-the-shelf real-time and embedded operating systems and middleware. As a result, subsequent composition and validation of these ad hoc capabilities are either infeasible or prohibitively expensive. One reason redevelopment persists is that it is still often relatively easy to pull together a minimalist ad hoc solution, which remains largely invisible to all except the developers and testers. Unfortunately, this approach yields brittle, error-prone systems and substantial recurring downstream ownership costs, particularly for complex and long-lived network-centric DoD systems and larger-scale systems of systems.

One of the most immediate goals is therefore to accelerate ecosystem development for cyber-physical systems. There has been considerable exploration of this area in a multi-agency setting under the auspices of the NITRD coordination activity (see Box 1.5), and there are benefits to linking it with other efforts related to software producibility. There are opportunities to exploit and advance modern language concepts, innovative operating system and middleware ideas, scheduling and resource management techniques, and code generation capabilities.

Achieving this goal will require new cyber-physical system software architectures whose component functional and quality-of-service (QoS) properties can be expressed with sufficient precision (e.g., via model-integrated computing techniques and domain-specific languages and tools) that they can be predictably assembled with one another, leaving less low-level complexity for application developers to address and thereby reducing system development and ownership costs. In particular, cyber-physical system ecosystems must not simply build better device drivers, operating system schedulers, or middleware brokers in isolation, but rather must integrate these capabilities and deliver them to applications in ways that enable fine-grained tradeoffs among key QoS properties, such as throughput, latency, jitter, scalability, security, and reliability.

31. This challenge was discussed in the committee’s 2007 workshop report; see NRC, 2007, Summary of a Workshop on Software-Intensive Systems and Uncertainty at Scale, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=11936. Last accessed August 20, 2010. It has also been explored in the NRC report Software for Dependable Systems: Sufficient Evidence?; see NRC, Daniel Jackson, Martyn Thomas, and Lynette I. Millett, eds., 2007, Software for Dependable Systems: Sufficient Evidence?, Washington, DC: National Academies Press. Available online at http://www.nap.edu/catalog.php?record_id=11923. Last accessed August 20, 2010. It has further been the subject of a series of workshops under the auspices of the NITRD HCSS area sponsored by NSF and other agencies. For more information see http://www.nitrd.gov/subcommittee/hcss.aspx. Last accessed August 20, 2010.

Key R&D breakthroughs needed to meet the goal of accelerating ecosystem development for cyber-physical systems involve devising new languages and platforms that enable users and operators to understand clearly the QoS requirements and usage patterns of software components, so that it becomes possible to analyze whether these requirements are being (or even can be) met, and to aggregate these requirements into decisions, policies, and mechanisms that support effective global management in net-centric environments. Meeting these needs will require flexibility on the part of both the application components and the cyber-physical system infrastructure ecosystem used throughout DoD systems.
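The kind of analysis and aggregation envisioned here can be illustrated with a deliberately simplified Python sketch: components declare worst-case latency budgets (all names and numbers hypothetical), which are aggregated along a processing path and checked against an end-to-end deadline before deployment. Real QoS analysis must of course account for scheduling, contention, and jitter, which this toy model ignores.

```python
# Each component declares its worst-case latency contribution; an end-to-end
# requirement can then be checked analytically rather than discovered in test.
components = {
    "sensor":       {"wcet_ms": 5},
    "tracker":      {"wcet_ms": 20},
    "effector_ctl": {"wcet_ms": 8},
}

def end_to_end_latency_ms(path, specs):
    """Aggregate per-component latency budgets along a processing path."""
    return sum(specs[name]["wcet_ms"] for name in path)

def meets_deadline(path, specs, deadline_ms):
    """Feasibility check: can this path meet its end-to-end requirement?"""
    return end_to_end_latency_ms(path, specs) <= deadline_ms

path = ["sensor", "tracker", "effector_ctl"]
print(end_to_end_latency_ms(path, components))   # → 33
print(meets_deadline(path, components, 50))      # → True
print(meets_deadline(path, components, 25))      # → False
```

The point of such machine-checkable declarations is that infeasible configurations are rejected analytically, before integration, rather than failing in the field.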

Goal 6.2:
Develop Architectures and Software Frameworks to Support Embedded Applications

Embedded cyber-physical systems can operate robustly in harsh environments through careful coordination of a complex network of sensors and effectors. Given the increasing complexity of emerging DoD embedded cyber-physical systems, such fine-tuned coordination is ordinarily a nearly impossible task, both conceptually and as a software engineering undertaking. Model-based software development uses models of a system to capture and track system requirements, automatically generate code, and semi-automatically provide tests or proofs of correctness. Models can also be used to build validation proofs or test suites for the generated code.

Model-based software development removes much of the need for fine-tuned coordination by allowing programmers to read and set the evolution of abstract state variables hidden within the physical system. For example, a program might state, “produce 10.3 seconds of 35% thrust,” rather than specify the details of actuating and sensing the hardware (e.g., “signal controller 1 to open valve 12,” and “check pressure and acceleration to confirm that valve 12 is open”). Hence a model-based program constitutes a high-level specification of intended state evolutions. To execute a model-based program, an interpreter could use a model of the controlled plant to continuously deduce the plant’s state from observations and to generate control actions that move the plant to specified states.
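The thrust example above can be sketched as a toy interpreter step in Python. This is an illustrative sketch only, not any particular model-based execution engine; the plant model, the pressure-to-thrust mapping, and every identifier are hypothetical.

```python
class PlantModel:
    """Hypothetical plant model: maps raw observations to a deduced abstract
    state, and a state/goal gap to a low-level command."""
    def deduce_state(self, obs):
        # Toy inference: estimate current thrust from a pressure reading
        return {"thrust_pct": obs["pressure_psi"] * 0.5}

    def command_for(self, state, goal):
        if state["thrust_pct"] < goal["thrust_pct"]:
            return "open_valve_12"
        if state["thrust_pct"] > goal["thrust_pct"]:
            return "close_valve_12"
        return "hold"

def step(model, goal, obs):
    """One interpreter step: deduce the plant's state from observations,
    then choose an action that moves the plant toward the specified state."""
    return model.command_for(model.deduce_state(obs), goal)

# The model-based "program" is just the intended state evolution:
goal = {"thrust_pct": 35, "duration_s": 10.3}
print(step(PlantModel(), goal, {"pressure_psi": 40}))   # → open_valve_12
```

The programmer writes only the goal; the valve- and sensor-level details live in the model and the interpreter, which is what removes the need for hand-coded fine-grained coordination.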

Achieving the goal of model-based embedded software development requires new expressive languages for specifying intended state evolutions and plant behavior, automated execution methods for performing all aspects of fine-grained coordination, and software architectures and frameworks for pervasive/immersive sensor networks. Key R&D breakthroughs needed to meet the goal of developing architectures and software frameworks to support embedded applications include closing the consistency gap between model and code, preserving structural design features in code, translating informal requirements into formal requirements, tracing requirements into implementation, integrating disparately modeled submodels, and enriching formalisms that support QoS properties, as well as techniques that support rapid reconfiguration and reliability with unreliable components.

Goal 6.3:
Develop and Validate Technologies That Support Both Variability and Control in Net-Centric Cyber-Physical Systems

As DoD cyber-physical systems become increasingly interconnected to form net-centric systems of systems, it is becoming clear that (1) different levels of service are possible and desirable under different conditions and costs and (2) the level of service in one property must be coordinated with and/or traded off against the level of service in others to achieve the intended mission results. To date, little work has focused on techniques for controlling and trading off the overall behavior of these integrated net-centric cyber-physical systems. Another key goal is therefore to develop and validate new technologies that support both variability and control in net-centric cyber-physical systems.

Achieving this goal will require devising new adaptive and reflective software technologies, recognizing that not all requirements can be met all of the time, yet still ensuring predictable and controllable end-to-end behavior. In adaptive software technologies, the functional and QoS-related properties of cyber-physical software can be modified either statically (e.g., to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and minimize hardware/software infrastructure dependencies) or dynamically (e.g., to optimize system responses to changing environments or requirements, such as changing component interconnections, power levels, CPU/network bandwidth, latency/jitter, and dependability needs).

Reflective software technologies go further, permitting automated examination of the capabilities a system offers and automated adjustment to optimize those capabilities. Reflective techniques make the internal organization of systems, as well as the mechanisms used in their construction, both visible and manipulable for application and infrastructure programs to inspect and modify at runtime. Reflective technologies thus support more advanced adaptive behavior and more dynamic strategies keyed to current circumstances; that is, necessary software adaptations can be performed autonomously based on conditions within the system, in the system’s environment, or in system QoS policies defined by operators.
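As a rough illustration of the visible-and-manipulable idea, the Python sketch below shows a component that exposes its internal mechanisms for inspection and accepts runtime adjustments from the infrastructure. The component, its knobs, and the adaptation policy are all hypothetical.

```python
class ReflectiveCodec:
    """Hypothetical component whose internal mechanisms are both visible
    (via capabilities) and manipulable (via adjust) at runtime."""
    def __init__(self):
        self._knobs = {"bitrate_kbps": 512, "redundancy": 1}

    def capabilities(self):
        # Reflection: expose the tunable mechanisms for external inspection
        return dict(self._knobs)

    def adjust(self, knob, value):
        # Reflection: allow infrastructure to modify internal mechanisms
        if knob not in self._knobs:
            raise KeyError(knob)
        self._knobs[knob] = value

def adapt_to_bandwidth(component, available_kbps):
    """Autonomous adaptation keyed to observed conditions: inspect the
    component's mechanisms, then retune them to fit available resources."""
    if component.capabilities()["bitrate_kbps"] > available_kbps:
        component.adjust("bitrate_kbps", available_kbps)

codec = ReflectiveCodec()
adapt_to_bandwidth(codec, 128)   # bandwidth drops; the codec is retuned in place
print(codec.capabilities()["bitrate_kbps"])   # → 128
```

Because the adaptation logic works only through the reflective interface, the same policy can retune any component that exposes the same knobs, which is what enables infrastructure-level, operator-defined QoS policies.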

Key R&D breakthroughs needed to meet the goal of developing and validating adaptive and reflective software for net-centric cyber-physical systems involve investigating ways to make such modifications dependably (e.g., while meeting stringent—often conflicting—end-to-end QoS requirements) while simultaneously ensuring that the system functional requirements are met.

Area 7.
Human-Systems Integration

It is significant that most large-scale complex enterprise systems include fallible humans as constituent elements, yet design practices, including architecture concepts and development processes, have not adequately accounted for the ways in which humans integrate into systems as participants. Human-systems integration (HSI) is about much more than the colors of pixels and the design of graphical user interface frameworks. The presence of humans in a system, such as pilots in an airplane, fundamentally affects the design and architecture of that system.

This issue was the subject of a separate NRC report32 and is not elaborated upon here except to emphasize some of its software-related recommendations:

  • Conduct a research program with the goal of revolutionizing the role of end users in designing the system they will use.

  • Conduct research to understand the factors that contribute to system resilience, the role of people in resilient systems, and how to design more resilient systems.

  • Refine and coordinate the definition of a systems development process that concurrently engineers the system’s hardware, software, and human factors aspects, and accommodates the emergence of HSI requirements, such as the incremental commitment model.

  • Research and develop shared representations to facilitate communication across different disciplines and lifecycle phases.

  • Research and develop improved methods and testbeds for systems-of-systems HSI.

  • Research and develop improved methods and tools for integrating incompatible legacy and external-system user interfaces.

32. See NRC, 2007, Human-System Integration in the System Development Process: A New Look, Washington, DC: National Academies Press. Available online at http://www.nap.edu/openbook.php?record_id=11893. Last accessed August 20, 2010.


Summary Findings and Recommendations

Finding 5-2: Technology has a significant role in enabling modern incremental and iterative software development practices at levels of scale ranging from small teams to large distributed development organizations.


Recommendation 5-1: The DoD should take immediate action to reinvigorate its investment in software producibility research. This investment should be undertaken through a diversity of research programs across the DoD and should include academia, industry labs, and collaborations.


Recommendation 5-2: The DoD should undertake DoD-sponsored research programs in the following areas, identified as critical to the advancement of defense software producibility: (1) architecture modeling and architectural analysis; (2) assurance: validation, verification, and analysis of design and code; (3) process support and economic models for assurance and adaptability; (4) requirements; (5) language, modeling, coding, and tools; (6) cyber-physical systems; and (7) human-systems integration.
